ERDAS Field Guide

Fourth Edition, Revised and Expanded

ERDAS®, Inc. Atlanta, Georgia

Copyright © 1982 - 1997 by ERDAS, Inc. All rights reserved. First Edition published 1990. Second Edition published 1991. Third Edition reprinted 1995. Fourth Edition printed 1997. Printed in the United States of America.

ERDAS proprietary - copying and disclosure prohibited without express permission from ERDAS, Inc.

ERDAS, Inc. 2801 Buford Highway, NE Atlanta, Georgia 30329-2137 USA Phone: 404/248-9000 Fax: 404/248-9400 User Support: 404/248-9777

ERDAS International Telford House, Fulbourn, Cambridge CB1 5HB England Phone: 011 44 1223 881 774 Fax: 011 44 1223 880 160

The information in this document is subject to change without notice.

Acknowledgments
The ERDAS Field Guide was originally researched, written, edited, and designed by Chris Smith and Nicki Brown of ERDAS, Inc. The Second Edition was produced by Chris Smith, Nicki Brown, Nancy Pyden, and Dana Wormer of ERDAS, Inc., with assistance from Diana Margaret and Susanne Strater. The Third Edition was written and edited by Chris Smith, Nancy Pyden, and Pam Cole of ERDAS, Inc. The Fourth Edition was written and edited by Stacey Schrader and Russ Pouncey of ERDAS, Inc. Many, many thanks go to David Sawyer, ERDAS Engineering Director, and the ERDAS Software Engineers for their significant contributions to this and previous editions. Without them this manual would not have been possible. Thanks also to Derrold Holcomb for lending his expertise on the Enhancement chapter. Many others at ERDAS provided valuable comments and suggestions in an extensive review process.

A special thanks to those industry experts who took time out of their hectic schedules to review previous editions of the ERDAS Field Guide. Of these “external” reviewers, Russell G. Congalton, D. Cunningham, Thomas Hack, Michael E. Hodgson, David McKinsey, and D. Way deserve recognition for their contributions to previous editions.

Cover image: The image on the front cover of the ERDAS IMAGINE Ver. 8.3 manuals is Global Relief Data from the National Geophysical Data Center (National Oceanic and Atmospheric Administration, U.S. Department of Commerce).

Trademarks
ERDAS and ERDAS IMAGINE are registered trademarks of ERDAS, Inc. IMAGINE Essentials, IMAGINE Advantage, IMAGINE Professional, IMAGINE Vista, IMAGINE Production, Model Maker, CellArray, ERDAS Field Guide, and ERDAS IMAGINE Tour Guides are trademarks of ERDAS, Inc. OrthoMAX is a trademark of Autometric, Inc. Restoration is a trademark of Environmental Research Institute of Michigan. Other brands and product names are trademarks of their respective owners. ERDAS IMAGINE Ver. 8.3. January, 1997. Part No. SWE-MFG4-8.3.0ALLP.

Table of Contents
Preface
    Introduction
    Conventions Used in this Book

CHAPTER 1 Raster Data
Introduction
Image Data
    Bands
    Coordinate Systems
Remote Sensing
    Absorption/Reflection Spectra
Resolution
    Spectral
    Spatial
    Radiometric
    Temporal
Data Correction
    Line Dropout
    Striping
Data Storage
    Storage Formats
    Storage Media
    Calculating Disk Space
    ERDAS IMAGINE Format (.img)
Image File Organization
    Consistent Naming Convention
    Keeping Track of Image Files
    Geocoded Data
Using Image Data in GIS
    Subsetting and Mosaicking
    Enhancement
    Multispectral Classification
Editing Raster Data
    Editing Continuous (Athematic) Data
    Interpolation Techniques

CHAPTER 2 Vector Layers
Introduction
    Coordinates
    Vector Layers
    Topology
    Vector Files
Attribute Information
Displaying Vector Data
    Symbolization
Vector Data Sources
Digitizing
    Tablet Digitizing
    Screen Digitizing
Imported Vector Data
Raster to Vector Conversion

CHAPTER 3 Raster and Vector Data Sources
Introduction
    Importing and Exporting Raster Data
    Importing and Exporting Vector Data
Satellite Data
    Satellite System
    Satellite Characteristics
    Landsat
    SPOT
    NOAA Polar Orbiter Data
Radar Data
    Advantages of Using Radar Data
    Radar Sensors
    Speckle Noise
    Applications for Radar Data
    Current Radar Sensors
    Future Radar Sensors
Image Data from Aircraft
    AIRSAR
    AVIRIS
Image Data from Scanning
ADRG Data
    ARC System
    ADRG File Format
    .OVR (overview)
    .IMG (scanned image data)
    .Lxx (legend data)
    ADRG File Naming Convention
ADRI Data
    .OVR (overview)
    .IMG (scanned image data)
    ADRI File Naming Convention
Topographic Data
    DEM
    DTED
    Using Topographic Data
Ordering Raster Data
    Addresses to Contact
Raster Data from Other Software Vendors
    ERDAS Ver. 7.X
    GRID
    Sun Raster
    TIFF
Vector Data from Other Software Vendors
    ARCGEN
    AutoCAD (DXF)
    DLG
    ETAK
    IGES
    TIGER

CHAPTER 4 Image Display
Introduction
    Display Memory Size
    Pixel
    Colors
    Colormap and Colorcells
    Display Types
    8-bit PseudoColor
    24-bit DirectColor
    24-bit TrueColor
    PC Displays
Displaying Raster Layers
    Continuous Raster Layers
    Thematic Raster Layers
Using the IMAGINE Viewer
    Pyramid Layers
    Dithering
    Viewing Layers
    Viewing Multiple Layers
    Linking Viewers
    Zoom and Roam
    Geographic Information
    Enhancing Continuous Raster Layers
    Creating New Image Files

CHAPTER 5 Enhancement
Introduction
    Display vs. File Enhancement
    Spatial Modeling Enhancements
Correcting Data
    Radiometric Correction - Visible/Infrared Imagery
    Atmospheric Effects
    Geometric Correction
Radiometric Enhancement
    Contrast Stretching
    Histogram Equalization
    Histogram Matching
    Brightness Inversion
Spatial Enhancement
    Convolution Filtering
    Crisp
    Resolution Merge
    Adaptive Filter
Spectral Enhancement
    Principal Components Analysis
    Decorrelation Stretch
    Tasseled Cap
    RGB to IHS
    IHS to RGB
    Indices
Hyperspectral Image Processing
    Normalize
    IAR Reflectance
    Log Residuals
    Rescale
    Processing Sequence
    Spectrum Average
    Signal to Noise
    Mean per Pixel
    Profile Tools
    Wavelength Axis
    Spectral Library
    Classification
    System Requirements
Fourier Analysis
    Fast Fourier Transform (FFT)
    Fourier Magnitude
    Inverse Fast Fourier Transform (IFFT)
    Filtering
    Windows
    Fourier Noise Removal
    Homomorphic Filtering
Radar Imagery Enhancement
    Speckle Noise
    Edge Detection
    Texture
    Radiometric Correction - Radar Imagery
    Slant-to-Ground Range Correction
    Merging Radar with VIS/IR Imagery

CHAPTER 6 Classification
Introduction
The Classification Process
    Training
    Signatures
    Decision Rule
Classification Tips
    Classification Scheme
    Iterative Classification
    Supervised vs. Unsupervised Training
    Classifying Enhanced Data
    Dimensionality
Supervised Training
    Training Samples and Feature Space Objects
Selecting Training Samples
    Evaluating Training Samples
Selecting Feature Space Objects
Unsupervised Training
    ISODATA Clustering
    RGB Clustering
Signature Files
Evaluating Signatures
    Alarm
    Ellipse
    Contingency Matrix
    Separability
    Signature Manipulation
Classification Decision Rules
    Non-parametric Rules
    Parametric Rules
    Parallelepiped
    Feature Space
    Minimum Distance
    Mahalanobis Distance
    Maximum Likelihood/Bayesian
Evaluating Classification
    Thresholding
    Accuracy Assessment
Output File

CHAPTER 7 Photogrammetric Concepts
Introduction
Definitions
Coordinate Systems
    Pixel Coordinates
    Image Coordinates
    Ground Coordinates
    Geocentric and Topocentric Coordinates
Work Flow
    Image Acquisition
Aerial Camera Film
    Exposure Station
    Image Scale
    Strip of Photographs
    Block of Photographs
Digital Imagery from Satellites
    Correction Levels for SPOT Imagery
Image Preprocessing
    Scanning Aerial Film
Photogrammetric Processing
Triangulation
    Aerial Triangulation
    SPOT Triangulation
    Triangulation Accuracy Measures
Stereo Imagery
    Aerial Stereopairs
    SPOT Stereopairs
    Epipolar Stereopairs
Generate Elevation Models
    Traditional Methods
    Digital Methods
    Elevation Model Definitions
    DEM Interpolation
    Image Matching
Image Matching Techniques
    Area Based Matching
    Feature Based Matching
    Relation Based Matching
Orthorectification
    Geometric Distortions
    Aerial and SPOT Orthorectification
    Landsat Orthorectification
Map Feature Collection
    Stereoscopic Collection
    Monoscopic Collection
Product Output
    Orthoimages
    Orthomaps
    Topographic Database
    Topographic Maps

CHAPTER 8 Rectification
Introduction
    Orthorectification
When to Rectify
    When to Georeference Only
    Disadvantages of Rectification
    Rectification Steps
Ground Control Points
    GCPs in ERDAS IMAGINE
    Entering GCPs
Orders of Transformation
    Linear Transformations
    Nonlinear Transformations
    Effects of Order
    Minimum Number of GCPs
    GCP Prediction and Matching
RMS Error
    Residuals and RMS Error Per GCP
    Total RMS Error
    Error Contribution by Point
    Tolerance of RMS Error
    Evaluating RMS Error
Resampling Methods
    “Rectifying” to Lat/Lon
    Nearest Neighbor
    Bilinear Interpolation
    Cubic Convolution
Map to Map Coordinate Conversions

CHAPTER 9 Terrain Analysis
Introduction
Topographic Data
Slope Images
Aspect Images
Shaded Relief
Topographic Normalization
    Lambertian Reflectance Model
    Non-Lambertian Model

CHAPTER 10 Geographic Information Systems
Introduction
Data Input
Continuous Layers
Thematic Layers
    Statistics
Vector Layers
Attributes
    Raster Attributes
    Vector Attributes
Analysis
    ERDAS IMAGINE Analysis Tools
    Analysis Procedures
Proximity Analysis
Contiguity Analysis
Neighborhood Analysis
Recoding
Overlaying
Indexing
Matrix Analysis
Modeling
Graphical Modeling
    Model Maker Functions
    Objects
    Data Types
    Output Parameters
    Using Attributes in Models
Script Modeling
    Statements
    Data Types
    Variables
Vector Analysis
    Editing Vector Coverages
Constructing Topology
    Building and Cleaning Coverages

CHAPTER 11 Cartography
Introduction
Types of Maps
    Thematic Maps
Annotation
Scale
Legends
Neatlines, Tick Marks, and Grid Lines
Symbols
Labels and Descriptive Text
    Typography and Lettering
Map Projections
    Properties of Map Projections
    Projection Types
Geographical and Planar Coordinates
Available Map Projections
Choosing a Map Projection
Spheroids
Map Composition
Map Accuracy

CHAPTER 12 Hardcopy Output
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437 Printing Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437 Scale and Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438 Map Scaling Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439 Mechanics of Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441 Halftone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441 Continuous Tone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 Contrast and Color Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 RGB to CMY Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443

APPENDIX A Math Topics
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446 Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446 Bin Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447 Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450 Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452 Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454 Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454 Covariance Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455 Dimensionality of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456 Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456 Mean Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458 Feature Space Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459 n-Dimensional Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Spectral Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461


Order; Transformation Matrix; Matrix Algebra (Matrix Notation, Matrix Multiplication, Transposition)

APPENDIX B File Formats and Extensions
Introduction; ERDAS IMAGINE File Extensions; ERDAS IMAGINE .img Files (Sensor Information, Raster Layer Information, Attribute Data, Statistics, Map Information, Map Projection Information, Pyramid Layers); Machine Independent Format (MIF Data Elements, MIF Data Dictionary); ERDAS IMAGINE HFA File Format (Hierarchical File Architecture, Basic Objects of an HFA File, Pre-defined HFA File Object Types, HFA Object Directory for .img Files); Vector Layers

APPENDIX C Map Projections
Introduction; USGS Projections (Albers Conical Equal Area, Azimuthal Equidistant, Conic Equidistant, Equirectangular (Plate Carrée), General Vertical Near-side Perspective, Geographic (Lat/Lon), Gnomonic, Lambert Azimuthal Equal Area, Lambert Conformal Conic, Mercator, Miller Cylindrical, Modified Transverse Mercator, Oblique Mercator (Hotine), Orthographic, Polar Stereographic, Polyconic, Sinusoidal, Space Oblique Mercator, State Plane, Stereographic, Transverse Mercator, UTM, Van der Grinten I); External Projections (Bipolar Oblique Conic Conformal, Cassini-Soldner, Laborde Oblique Mercator, Modified Polyconic, Modified Stereographic, Mollweide Equal Area, Rectified Skew Orthomorphic, Robinson Pseudocylindrical, Southern Orientated Gauss Conformal, Winkel's Tripel)

Glossary

Bibliography

Index


List of Figures (Figures 1 through 202; each figure is captioned where it appears in the text)

List of Tables (Tables 1 through 35; each table is titled where it appears in the text)

Preface

Introduction
The purpose of the ERDAS Field Guide is to provide background information on why one might use particular GIS and image processing functions and how the software is manipulating the data, rather than what buttons to push to actually perform those functions. This book is also aimed at a diverse audience: from those who are new to geoprocessing to those savvy users who have been in this industry for years. For the novice, the ERDAS Field Guide provides a brief history of the field, an extensive glossary of terms, and notes about applications for the different processes described. For the experienced user, the ERDAS Field Guide includes the formulas and algorithms that are used in the code, so that he or she can see exactly how each operation works.

Although the ERDAS Field Guide is primarily a reference to basic image processing and GIS concepts, it is geared toward ERDAS IMAGINE users and the functions within ERDAS IMAGINE software, such as GIS analysis, image processing, cartography and map projections, graphics display hardware, statistics, and remote sensing. However, in some cases, processes and functions are described that may not be in the current version of the software, but planned for a future release. There may also be functions described that are not available on your system, due to the actual package that you are using.

The enthusiasm with which the first three editions of the ERDAS Field Guide were received has been extremely gratifying, both to the authors and to ERDAS as a whole. First conceived as a helpful manual for ERDAS users, the ERDAS Field Guide is now being used as a textbook, lab manual, and training guide throughout the world.

The ERDAS Field Guide will continue to expand and improve to keep pace with the profession. Suggestions and ideas for future editions are always welcome, and should be addressed to the Technical Writing division of Engineering at ERDAS, Inc., in Atlanta, Georgia.

Conventions Used in this Book
The following paragraphs are used throughout the ERDAS Field Guide and other ERDAS IMAGINE documentation:
• These paragraphs direct you to the ERDAS IMAGINE software function that accomplishes the described task.
• These paragraphs lead you to other chapters in the ERDAS Field Guide or other manuals for additional information.
• These paragraphs contain strong warnings or important tips.

CHAPTER 1 Raster Data

Introduction
The ERDAS IMAGINE system incorporates the functions of both image processing and geographic information systems (GIS). These functions include importing, viewing, altering, and analyzing raster and vector data sets.

This chapter is an introduction to raster data, including:
• remote sensing
• data storage formats
• different types of resolution
• radiometric correction
• geocoded data
• using raster data in GIS

See "CHAPTER 2: Vector Layers" for more information on vector data.

Image Data
In general terms, an image is a digital picture or representation of an object. Remotely sensed image data are digital representations of the earth. Image data are stored in data files, also called image files, on magnetic tapes, computer disks, or other media. The data consist only of numbers. These representations form images when they are displayed on a screen or are output to hardcopy.

Each number in an image file is a data file value. Data file values are sometimes referred to as pixels. The term pixel is abbreviated from picture element. A pixel is the smallest part of a picture (the area being scanned) with a single value. The data file value is the measured brightness value of the pixel at a specific wavelength.

Raster image data are laid out in a grid similar to the squares on a checkerboard. Each cell of the grid is represented by a pixel, also known as a grid cell. In remotely sensed image data, each pixel represents an area of the earth at a specific location. The data file value assigned to that pixel is the record of reflected radiation or emitted heat from the earth's surface at that location.
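For illustration, a raster layer can be pictured as a small two-dimensional array of data file values. The following minimal sketch uses Python and NumPy; the dimensions and values are hypothetical.

    import numpy as np

    # A hypothetical raster layer with 3 rows and 4 columns of data file values.
    # Each element is the brightness recorded for one pixel at one wavelength.
    layer = np.array([[45, 47, 52, 60],
                      [44, 49, 58, 71],
                      [43, 51, 63, 85]], dtype=np.uint8)

    rows, cols = layer.shape      # grid dimensions (y, x)
    value = layer[0, 0]           # data file value of the upper left pixel
    print(rows, cols, value)      # -> 3 4 45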

Data file values may also represent elevation, as in digital elevation models (DEMs).

NOTE: DEMs are not remotely sensed image data, but are currently being produced from stereo points in radar imagery.

The terms "pixel" and "data file value" are not interchangeable in ERDAS IMAGINE. Pixel is used as a broad term with many meanings, one of which is data file value. One pixel in a file may consist of many data file values. When an image is displayed or printed, other types of values are represented by a pixel.

See "CHAPTER 4: Image Display" for more information on how images are displayed.

Bands
Image data may include several bands of information. Each band is a set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, etc.) or some other user-defined information created by combining or enhancing the original bands. ERDAS IMAGINE programs can handle an unlimited number of bands of image data in a single file.

Figure 1: Pixels and Bands in a Raster Image (a raster image shown as 3 bands; 1 pixel has a value in each band)

See "CHAPTER 5: Enhancement" for more information on combining or enhancing bands of data.

Bands vs. Layers
In ERDAS IMAGINE, bands of data are usually referred to as layers. Once a band is imported into a GIS, it becomes a layer of information which can be processed in various ways. Additional layers can be created and added to the image file (.img extension) in ERDAS IMAGINE, such as layers created by combining existing layers or creating new bands from other sources. Read more about .img files in ERDAS IMAGINE Format (.img) on page 27.
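A multiband image can likewise be pictured as a stack of such layers, one per band. The sketch below assumes a band-sequential (band, row, column) array layout and uses random values purely for illustration; it is not the ERDAS IMAGINE .img format or programming interface.

    import numpy as np

    # Hypothetical 3-band image (for example red, green, near-infrared), 100 x 100 pixels.
    # One pixel therefore has three data file values, one per band.
    image = np.random.randint(0, 256, size=(3, 100, 100), dtype=np.uint8)

    red, green, nir = image[0], image[1], image[2]    # each band is a 2-D layer

    # A derived layer (here a simple band average) can be appended as an additional layer.
    average = ((red.astype(np.float32) + green + nir) / 3.0).astype(np.uint8)
    stacked = np.concatenate([image, average[np.newaxis, :, :]], axis=0)
    print(stacked.shape)    # -> (4, 100, 100)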

Layers vs. Viewer Layers
The IMAGINE Viewer permits several images to be layered, in which case each image (including a multi-band image) may be a layer.

Numeral Types
The range and the type of numbers used in a raster layer determine how the layer is displayed and processed. For example, a layer of elevation data with values ranging from -51.257 to 553.401 would be treated differently from a layer using only two values to show land and water. The data file values in raster layers will generally fall into these categories:
• Nominal data file values are simply categorized and named. The actual value used for each category has no inherent meaning; it is simply a class value. An example of a nominal raster layer would be a thematic layer showing tree species.
• Ordinal data are similar to nominal data, except that the file values put the classes in a rank or order. For example, a layer with classes numbered and named "1 - Good," "2 - Moderate," and "3 - Poor" is an ordinal system.
• Interval data file values have an order, but the intervals between the values are also meaningful. Interval data measure some characteristic, such as elevation or degrees Fahrenheit, which does not necessarily have an absolute zero. (The difference between two values in interval data is meaningful.)
• Ratio data measure a condition that has a natural zero, such as electromagnetic radiation (as in most remotely sensed data), rainfall, or slope.

Nominal and ordinal data lend themselves to applications in which categories, or themes, are used. Therefore, these layers are sometimes called categorical or thematic. Likewise, interval and ratio layers are more likely to measure a condition, causing the file values to represent continuous gradations across the layer. Such layers are called continuous.
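One practical consequence of these categories is storage: thematic (nominal or ordinal) layers are normally stored as small integer class values, while continuous (interval or ratio) layers may require signed or floating-point values. A minimal sketch with hypothetical values:

    import numpy as np

    # Thematic (nominal) layer: class values such as 1 = conifer, 2 = hardwood, 3 = water.
    tree_species = np.array([[1, 1, 3],
                             [2, 2, 3]], dtype=np.uint8)

    # Continuous (interval/ratio) layer: elevation values, including negatives,
    # so a signed floating-point type is appropriate.
    elevation = np.array([[-51.257, 12.0, 553.401],
                          [  0.0,   98.5, 120.75 ]], dtype=np.float32)

    print(np.unique(tree_species))            # the classes present: [1 2 3]
    print(elevation.min(), elevation.max())   # the continuous range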

Coordinate Systems
The location of a pixel in a file or on a displayed or printed image is expressed using a coordinate system. In two-dimensional coordinate systems, locations are organized in a grid of columns and rows. Each location on the grid is expressed as a pair of coordinates known as X and Y. The X coordinate specifies the column of the grid, and the Y coordinate specifies the row. Image data organized into such a grid are known as raster data.

There are two basic coordinate systems used in ERDAS IMAGINE:
• file coordinates — indicate the location of a pixel within the image (data file)
• map coordinates — indicate the location of a pixel in a map

File Coordinates
File coordinates refer to the location of the pixels within the image (data) file. File coordinates for the pixel in the upper left corner of the image always begin at 0,0.

Figure 2: Typical File Coordinates (rows, y, increase downward and columns, x, increase to the right; the example pixel is at file coordinates 3,1)

Map Coordinates
Map coordinates may be expressed in one of a number of map coordinate or projection systems. The type of map coordinates used by a data file depends on the method used to create the file (remote sensing, scanning an existing map, etc.). In ERDAS IMAGINE, a data file can be converted from one map coordinate system to another.

For more information on map coordinates and projection systems, see "CHAPTER 11: Cartography" or "APPENDIX C: Map Projections." See "CHAPTER 8: Rectification" for more information on changing the map coordinate system of a data file.
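For a north-up image, the relationship between file coordinates (column, row) and map coordinates can be expressed as a simple affine mapping defined by the map position of the upper left corner and the ground size of one pixel. The sketch below illustrates only that relationship; the numbers are hypothetical and it is not the actual ERDAS IMAGINE conversion routine.

    # Hypothetical georeferencing for a north-up image with square pixels.
    upper_left_x = 300000.0    # map X of the upper left corner of the image
    upper_left_y = 3700000.0   # map Y of the upper left corner of the image
    cell_size = 30.0           # ground distance covered by one pixel

    def file_to_map(col, row):
        """Map coordinates of the center of the pixel at file coordinates (col, row)."""
        x = upper_left_x + (col + 0.5) * cell_size
        y = upper_left_y - (row + 0.5) * cell_size   # rows increase downward, map Y decreases
        return x, y

    def map_to_file(x, y):
        """File coordinates (col, row) of the pixel containing map location (x, y)."""
        col = int((x - upper_left_x) / cell_size)
        row = int((upper_left_y - y) / cell_size)
        return col, row

    print(file_to_map(0, 0))                  # -> (300015.0, 3699985.0)
    print(map_to_file(300015.0, 3699985.0))   # -> (0, 0)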

Remote Sensing


Remote sensing is the acquisition of data about an object or scene by a sensor that is far from the object (Colwell 1983). Aerial photography, satellite imagery, and radar are all forms of remotely sensed data. Usually, remotely sensed data refer to data of the earth collected from sensors on satellites or aircraft. Most of the images used as input to the ERDAS IMAGINE system are remotely sensed. However, the user is not limited to remotely sensed data.

This section is a brief introduction to remote sensing. There are many books available for more detailed information, including Colwell 1983, Swain and Davis 1978, and Slater 1980 (see "Bibliography").

Electromagnetic Radiation Spectrum
The sensors on remote sensing platforms usually record electromagnetic radiation. Electromagnetic radiation (EMR) is energy transmitted through space in the form of electric and magnetic waves (Star and Estes 1990). Remote sensors are made up of detectors that record specific wavelengths of the electromagnetic spectrum. The electromagnetic spectrum is the range of electromagnetic radiation extending from cosmic waves to radio waves (Jensen 1996).

All types of land cover (rock types, water bodies, etc.) absorb a portion of the electromagnetic spectrum, giving a distinguishable "signature" of electromagnetic radiation. Armed with the knowledge of which wavelengths are absorbed by certain features and the intensity of the reflectance, the user can analyze a remotely sensed image and make fairly accurate assumptions about the scene. Figure 3 illustrates the electromagnetic spectrum (Suits 1983; Star and Estes 1990).

Figure 3: Electromagnetic Spectrum (wavelength axis in micrometers, µm, one millionth of a meter: Ultraviolet; Visible 0.4 - 0.7 µm, with Blue 0.4 - 0.5, Green 0.5 - 0.6, and Red 0.6 - 0.7; Near-infrared 0.7 - 2.0 µm; Middle-infrared 2.0 - 5.0 µm; Far-infrared 8.0 - 15.0 µm; the reflected SWIR and thermal LWIR regions; and Radar wavelengths)


SWIR and LWIR
The near-infrared and middle-infrared regions of the electromagnetic spectrum are sometimes referred to as the short wave infrared region (SWIR). This is to distinguish this area from the thermal or far infrared region, which is often referred to as the long wave infrared region (LWIR). The SWIR is characterized by reflected radiation whereas the LWIR is characterized by emitted radiation.

Absorption/Reflection Spectra
When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image data, it is necessary to understand how vegetation, soils, water, and other land covers reflect and absorb radiation. The study of the absorption and reflection of EMR waves is called spectroscopy.

Spectroscopy
Most commercial sensors, with the exception of imaging radar sensors, are passive solar imaging sensors. Passive solar imaging sensors can only receive radiation waves; they cannot transmit radiation. (Imaging radar sensors are active sensors which emit a burst of microwave radiation and receive the backscattered radiation.)

The use of passive solar imaging sensors to characterize or identify a material of interest is based on the principles of spectroscopy. Therefore, to fully utilize a visible/infrared (VIS/IR) multispectral data set and properly apply enhancement algorithms, it is necessary to understand these basic principles. Spectroscopy reveals the:
• absorption spectra — the EMR wavelengths that are absorbed by specific materials of interest
• reflection spectra — the EMR wavelengths that are reflected by specific materials of interest

Absorption Spectra
Absorption is based on the molecular bonds in the (surface) material. Which wavelengths are absorbed depends upon the chemical composition and crystalline structure of the material. For pure compounds, these absorption bands are so specific that the SWIR region is often called "an infrared fingerprint."

Atmospheric Absorption
In remote sensing, the sun is the radiation source for passive sensors. However, the sun does not emit the same amount of radiation at all wavelengths. Figure 4 shows the solar irradiation curve, which is far from linear.


Figure 4: Sun Illumination Spectral Irradiance at the Earth's Surface (spectral irradiance in Wm-2µm-1 plotted against wavelength from 0.0 to 3.0 µm across the UV, VIS, and infrared regions; the solar irradiation curve outside the atmosphere is compared with the curve at sea level, whose peaks show absorption by H2O, CO2, and O3; modified from Chahine et al 1983)

Solar radiation must travel through the earth's atmosphere before it reaches the earth's surface. As it travels through the atmosphere, radiation is affected by four phenomena (Elachi 1987):
• absorption — the amount of radiation absorbed by the atmosphere
• scattering — the amount of radiation scattered by the atmosphere away from the field of view
• scattering source — divergent solar irradiation scattered into the field of view
• emission source — radiation re-emitted after absorption


Figure 5: Factors Affecting Radiation (diagram of the four effects on incoming radiation: absorption, the amount of radiation absorbed by the atmosphere; scattering, the amount of radiation scattered by the atmosphere away from the field of view; scattering source, divergent solar irradiation scattered into the field of view; and emission source, radiation re-emitted after absorption; source: Elachi 1987)

Absorption is not a linear phenomenon; it is logarithmic with concentration (Flaschka 1969). In addition, the concentration of atmospheric gases, especially water vapor, is variable. The other major gases of importance are carbon dioxide (CO2) and ozone (O3), which can vary considerably around urban areas. Thus, the extent of atmospheric absorbance will vary with humidity, elevation, proximity to (or downwind of) urban smog, and other factors.

Scattering is modeled as Rayleigh scattering with a commonly used algorithm that accounts for the scattering of short wavelength energy by the gas molecules in the atmosphere (Pratt 1991), for example, ozone. Scattering is variable with both wavelength and atmospheric aerosols. Aerosols differ regionally (ocean vs. desert) and daily (for example, Los Angeles smog has different concentrations daily).

Scattering source and emission source may account for only 5% of the variance. These factors are minor, but they must be considered for accurate calculation. After interaction with the target material, the reflected radiation must travel back through the atmosphere and be subjected to these phenomena a second time to arrive at the satellite.
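The wavelength dependence of Rayleigh scattering, approximately proportional to 1/wavelength^4, is why short visible wavelengths are scattered much more strongly than near-infrared wavelengths. The sketch below illustrates only that proportionality, using nominal band-center wavelengths chosen for illustration; it is not a full radiative transfer calculation.

    # Relative Rayleigh scattering, proportional to 1 / wavelength**4,
    # normalized to blue light (0.45 um) purely for comparison.
    wavelengths_um = {"blue": 0.45, "green": 0.55, "red": 0.65, "near-infrared": 0.85}

    blue_reference = 1.0 / 0.45 ** 4
    for name, wavelength in wavelengths_um.items():
        relative = (1.0 / wavelength ** 4) / blue_reference
        print(f"{name:14s} {relative:.2f}")
    # Blue is scattered several times more strongly than red or near-infrared.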


The mathematical models that attempt to quantify the total atmospheric effect on the solar illumination are called radiative transfer equations. Some of the most commonly used are Lowtran (Kneizys 1988) and Modtran (Berk 1989).

See "CHAPTER 5: Enhancement" for more information on atmospheric modeling.

Reflectance Spectra

After rigorously defining the incident radiation (solar irradiation at target), it is possible to study the interaction of the radiation with the target material. When an electromagnetic wave (solar illumination in this case) strikes a target surface, three interactions are possible (Elachi 1987):

• reflection
• transmission
• scattering

It is the reflected radiation, generally modeled as bidirectional reflectance (Clark 1984), that is measured by the remote sensor. Remotely sensed data are made up of reflectance values. The resulting reflectance values translate into discrete digital numbers (or values) recorded by the sensing device. These gray scale values will fit within a certain bit range (such as 0-255, which is 8-bit data) depending on the characteristics of the sensor. Each satellite sensor detector is designed to record a specific portion of the electromagnetic spectrum. For example, Landsat TM band 1 records the 0.45 to 0.52 µm portion of the spectrum and is designed for water body penetration, making it useful for coastal water mapping. It is also useful for soil/vegetation discriminations, forest type mapping, and cultural features identification (Lillesand and Kiefer 1987). The characteristics of each sensor provide the first level of constraints on how to approach the task of enhancing specific features, such as vegetation or urban areas. Therefore, when choosing an enhancement technique, one should pay close attention to the characteristics of the land cover types within the constraints imposed by the individual sensors. The use of VIS/IR imagery for target discrimination, whether the target is mineral, vegetation, man-made, or even the atmosphere itself, is based on the reflectance spectrum of the material of interest (see Figure 6). Every material has a characteristic spectrum based on the chemical composition of the material. When sunlight (the illumination source for VIS/IR imagery) strikes a target, certain wavelengths are absorbed by the chemical bonds; the rest are reflected back to the sensor. It is, in fact, the wavelengths that are not returned to the sensor that provide information about the imaged area.


Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2, etc.). If the atmosphere absorbs a large percentage of the radiation, it becomes difficult or impossible to use those particular wavelengths to study the earth. For the present Landsat and SPOT sensors, only the water vapor bands were considered strong enough to exclude the use of their spectral absorption region. Figure 6 shows how Landsat TM bands 5 and 7 were carefully placed to avoid these regions. Absorption by other atmospheric gases was not extensive enough to eliminate the use of the spectral region for present day broad band sensors.

[Figure 6: Reflectance Spectra — the positions of the Landsat MSS bands (4, 5, 6, 7) and Landsat TM bands (1, 2, 3, 4, 5, 7) plotted above laboratory reflectance spectra (reflectance % vs. wavelength, 0.4 to 2.4 µm) of kaolinite, green vegetation, and silt loam, with the atmospheric absorption bands marked. Modified from Fraser 1986, Crist 1986, Sabins 1987.]

NOTE: This chart is for comparison purposes only. It is not meant to show actual values; the spectra are offset vertically for clarity and scale.

An inspection of the spectra reveals the theoretical basis of some of the indices in the ERDAS IMAGINE Image Interpreter. Consider the vegetation index TM4/TM3. It is readily apparent that for vegetation this value could be very large; for soils, much smaller; and for clay minerals, near zero. Conversely, when the clay ratio TM5/TM7 is considered, the opposite applies.
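As a rough illustration of how such ratios behave on raw band data, the following sketch computes the TM4/TM3 and TM5/TM7 ratios with NumPy. The array values and the small epsilon guard against division by zero are illustrative assumptions, not part of the ERDAS IMAGINE Image Interpreter itself.

import numpy as np

def band_ratio(numerator, denominator, eps=1e-6):
    """Pixel-by-pixel ratio of two bands, guarding against division by zero."""
    return numerator.astype(np.float32) / (denominator.astype(np.float32) + eps)

# tm3, tm4, tm5, tm7 stand in for 2-D arrays of digital numbers read from a
# multispectral image; the values below are made up for demonstration only.
tm3 = np.array([[20, 25], [30, 22]], dtype=np.uint8)   # red
tm4 = np.array([[90, 85], [10, 95]], dtype=np.uint8)   # near-infrared
tm5 = np.array([[60, 55], [70, 62]], dtype=np.uint8)   # SWIR
tm7 = np.array([[30, 28], [65, 33]], dtype=np.uint8)   # SWIR (clay absorption)

vegetation_index = band_ratio(tm4, tm3)   # large over vegetation
clay_ratio = band_ratio(tm5, tm7)         # large over clay minerals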


Hyperspectral Data

As remote sensing moves toward the use of more and narrower bands (for example, AVIRIS with 224 bands only 10 nm wide), absorption by specific atmospheric gases must be considered. These multiband sensors are called hyperspectral sensors. As more and more of the incident radiation is absorbed by the atmosphere, the digital number (DN) values of that band get lower, eventually becoming useless—unless one is studying the atmosphere. Someone wanting to measure the atmospheric content of a specific gas could utilize the bands of specific absorption.

NOTE: Hyperspectral bands are generally measured in nanometers (nm).

Figure 6 shows the spectral bandwidths of the channels for the Landsat sensors plotted above the absorption spectra of some common natural materials (kaolin clay, silty loam soil, and green vegetation). Note that while the spectra are continuous, the Landsat channels are segmented or discontinuous. We can still use the spectra in interpreting the Landsat data. For example, an NDVI ratio for the three materials would be very different and, hence, could be used to discriminate between them. Similarly, the ratio TM5/TM7 is commonly used to measure the concentration of clay minerals. Evaluation of the spectra shows why.

Figure 7 shows detail of the absorption spectra of three clay minerals. Because of the wide bandpass (2080-2350 nm) of TM band 7, it is not possible to discern between these three minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor has a large number of approximately 10 nm wide bands. With the proper selection of band ratios, mineral identification becomes possible, and it would be possible to discriminate between these three clay minerals using band ratios. For example, a color composite image prepared from RGB = 2160nm/2190nm, 2220nm/2250nm, 2350nm/2488nm could produce a color coded clay mineral imagemap.

The commercial airborne multispectral scanners are used in a similar fashion. The Airborne Imaging Spectrometer from the Geophysical & Environmental Research Corp. (GER) has 79 bands in the UV, visible, SWIR, and thermal-infrared regions. The Airborne Multispectral Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the visible, SWIR, and thermal-infrared regions. To properly utilize these hyperspectral sensors, the user must understand the phenomenon involved and have some idea of the target materials being sought.


[Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region — reflectance spectra of kaolinite, montmorillonite, and illite from 2000 to 2600 nm, with the Landsat TM band 7 bandpass (2080 to 2350 nm) marked. Modified from Sabins 1987.]

NOTE: Spectra are offset vertically for clarity.

The characteristics of Landsat, AVIRIS, and other data types are discussed in "CHAPTER 3: Raster and Vector Data Sources".

See page 166 of "CHAPTER 5: Enhancement" for more information on the NDVI ratio.


Imaging Radar Data

Radar remote sensors can be broken into two broad categories: passive and active. The passive sensors record the very low intensity, microwave radiation naturally emitted by the Earth. Because of the very low intensity, these images have low spatial resolution (i.e., large pixel size).

It is the active sensors, termed imaging radar, that are introducing a new generation of satellite imagery to remote sensing. To produce an image, these satellites emit a directed beam of microwave energy at the target and then collect the backscattered (reflected) radiation from the target scene. Because they must emit a powerful burst of energy, these satellites require large solar collectors and storage batteries. For this reason, they cannot operate continuously; some satellites are limited to 10 minutes of operation per hour.

The microwave energy emitted by an active radar sensor is coherent and defined by a narrow bandwidth. The following table summarizes the bandwidths used in remote sensing.

Band Designation*       Wavelength (λ), cm      Frequency (ν), GHz (10^9 cycles · sec-1)
Ka (0.86 cm)            0.8 to 1.1              40.0 to 26.5
K                       1.1 to 1.7              26.5 to 18.0
Ku                      1.7 to 2.4              18.0 to 12.5
X (3.0 cm, 3.2 cm)      2.4 to 3.8              12.5 to 8.0
C                       3.8 to 7.5              8.0 to 4.0
S                       7.5 to 15.0             4.0 to 2.0
L (23.5 cm, 25.0 cm)    15.0 to 30.0            2.0 to 1.0
P                       30.0 to 100.0           1.0 to 0.3

*Wavelengths commonly used in imaging radars are shown in parentheses.
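The wavelength and frequency columns are related through the speed of light; as a quick check (not taken from the source), frequency in GHz is roughly 30 divided by wavelength in centimeters:

def wavelength_cm_to_ghz(wavelength_cm):
    """Convert radar wavelength (cm) to frequency (GHz), using c = 3.0e10 cm/s."""
    return 30.0 / wavelength_cm

print(wavelength_cm_to_ghz(0.86))   # ~34.9 GHz, within the Ka band
print(wavelength_cm_to_ghz(23.5))   # ~1.28 GHz, within the L band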

A key element of a radar sensor is the antenna. For a given position in space, the resolution of the resultant image is a function of the antenna size. This is termed a real-aperture radar (RAR). At some point, it becomes impossible to make a large enough antenna to create the desired spatial resolution. To get around this problem, processing techniques have been developed which combine the signals received by the sensor as it travels over the target. Thus the antenna is perceived to be as long as the sensor path during backscatter reception. This is termed a synthetic aperture and the sensor a synthetic aperture radar (SAR).
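To see why antenna length matters, the standard textbook approximations (not stated in the source) put real-aperture azimuth resolution at roughly range × wavelength / antenna length, while a stripmap SAR achieves roughly half the physical antenna length regardless of range. A back-of-the-envelope sketch:

def rar_azimuth_resolution(range_m, wavelength_m, antenna_length_m):
    """Approximate real-aperture azimuth resolution in meters."""
    return range_m * wavelength_m / antenna_length_m

def sar_azimuth_resolution(antenna_length_m):
    """Approximate stripmap SAR azimuth resolution in meters."""
    return antenna_length_m / 2.0

# For a hypothetical C-band (5.6 cm) sensor with a 10 m antenna at 800 km range:
print(rar_azimuth_resolution(800e3, 0.056, 10.0))  # ~4480 m
print(sar_azimuth_resolution(10.0))                # ~5 m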


The received signal is termed a phase history or echo hologram. It contains a time history of the radar signal over all the targets in the scene and is itself a low resolution RAR image. In order to produce a high resolution image, this phase history is processed through a hardware/software system called a SAR processor. The SAR processor software requires operator input parameters, such as information about the sensor flight path and the radar sensor's characteristics, to process the raw signal data into an image. These input parameters depend on the desired result or intended application of the output imagery.

One of the most valuable advantages of imaging radar is that it creates images from its own energy source and therefore is not dependent on sunlight. Thus one can record uniform imagery any time of the day or night. In addition, the microwave frequencies at which imaging radars operate are largely unaffected by the atmosphere. This allows image collection through cloud cover or rain storms. However, the backscattered signal can be affected. Radar images collected during heavy rainfall will often be seriously attenuated, which decreases the signal-to-noise ratio (SNR). In addition, the atmosphere does cause perturbations in the signal phase, which decreases resolution of output products, such as the SAR image or generated DEMs.


Resolution

Resolution is a broad term commonly used to describe:

• the number of pixels the user can display on a display device, or
• the area on the ground that a pixel represents in an image file.

These broad definitions are inadequate when describing remotely sensed data. Four distinct types of resolution must be considered:

• spectral — the specific wavelength intervals that a sensor can record
• spatial — the area on the ground represented by each pixel
• radiometric — the number of possible data file values in each band (indicated by the number of bits into which the recorded energy is divided)
• temporal — how often a sensor obtains imagery of a particular area

These four domains contain separate information that can be extracted from the raw data.

Spectral

Spectral resolution refers to the specific wavelength intervals in the electromagnetic spectrum that a sensor can record (Simonett 1983). For example, band 1 of the Landsat Thematic Mapper sensor records energy between 0.45 and 0.52 µm in the visible part of the spectrum.

Wide intervals in the electromagnetic spectrum are referred to as coarse spectral resolution, and narrow intervals are referred to as fine spectral resolution. For example, the SPOT panchromatic sensor is considered to have coarse spectral resolution because it records EMR between 0.51 and 0.73 µm. On the other hand, band 3 of the Landsat TM sensor has fine spectral resolution because it records EMR between 0.63 and 0.69 µm (Jensen 1996).

NOTE: The spectral resolution does not indicate how many levels into which the signal is broken down.

Spatial

Spatial resolution is a measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel (Simonett 1983). The finer the resolution, the lower the number. For instance, a spatial resolution of 79 meters is coarser than a spatial resolution of 10 meters.

Scale

The terms large-scale imagery and small-scale imagery often refer to spatial resolution. Scale is the ratio of distance on a map as related to the true distance on the ground (Star and Estes 1990).

Large scale in remote sensing refers to imagery in which each pixel represents a small area on the ground, such as SPOT data, with a spatial resolution of 10 m or 20 m. Small scale refers to imagery in which each pixel represents a large area on the ground, such as AVHRR data, with a spatial resolution of 1.1 km.


This terminology is derived from the fraction used to represent the scale of the map, such as 1:50,000. Small-scale imagery is represented by a small fraction (one over a very large number). Large-scale imagery is represented by a larger fraction (one over a smaller number). Generally, anything smaller than 1:250,000 is considered small-scale imagery.

NOTE: Scale and spatial resolution are not always the same thing. An image always has the same spatial resolution, but it can be presented at different scales (Simonett 1983).

IFOV

Spatial resolution is also described as the instantaneous field of view (IFOV) of the sensor, although the IFOV is not always the same as the area represented by each pixel. The IFOV is a measure of the area viewed by a single detector in a given instant in time (Star and Estes 1990). For example, Landsat MSS data have an IFOV of 79 × 79 meters, but there is an overlap of 11.5 meters in each pass of the scanner, so the actual area represented by each pixel is 56.5 × 79 meters (usually rounded to 57 × 79 meters).

Even though the IFOV is not the same as the spatial resolution, it is important to know the number of pixels into which the total field of view for the image is broken. Objects smaller than the stated pixel size may still be detectable in the image if they contrast with the background, such as roads, drainage patterns, etc. On the other hand, objects the same size as the stated pixel size (or larger) may not be detectable if there are brighter or more dominant objects nearby. In Figure 8, a house sits in the middle of four pixels. If the house has a reflectance similar to its surroundings, the data file values for each of these pixels will reflect the area around the house, not the house itself, since the house does not dominate any one of the four pixels. However, if the house has a significantly different reflectance than its surroundings, it may still be detectable.


[Figure 8: IFOV — a house centered on the intersection of four 20 m × 20 m pixels, so that it does not dominate any single pixel.]

Radiometric

Radiometric resolution refers to the dynamic range, or number of possible data file values in each band. This is referred to by the number of bits into which the recorded energy is divided. For instance, in 8-bit data, the data file values range from 0 to 255 for each pixel, but in 7-bit data, the data file values for each pixel range from 0 to 127.

In Figure 9, 8-bit and 7-bit data are illustrated. The sensor measures the EMR in its range. The total intensity of the energy from 0 to the maximum amount the sensor measures is broken down into 256 brightness values for 8-bit data and 128 brightness values for 7-bit data.

[Figure 9: Brightness Values — the same 0-to-maximum intensity range divided into 256 brightness values (0-255) for 8-bit data and 128 brightness values (0-127) for 7-bit data.]
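The number of brightness values follows directly from the bit depth (2 raised to the number of bits). A small sketch, using illustrative values rather than anything from the source:

def brightness_levels(bits):
    """Number of possible data file values for a given radiometric resolution."""
    return 2 ** bits

print(brightness_levels(8))   # 256 values: 0-255
print(brightness_levels(7))   # 128 values: 0-127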


Temporal

Temporal resolution refers to how often a sensor obtains imagery of a particular area. For example, the Landsat satellite can view the same area of the globe once every 16 days. SPOT, on the other hand, can revisit the same area every three days.

NOTE: Temporal resolution is an important factor to consider in change detection studies.

Figure 10 illustrates all four types of resolution:

[Figure 10: Landsat TM—Band 2 (Four Types of Resolution) — spatial resolution: 1 pixel = 79 m × 79 m; radiometric resolution: 8-bit (0-255); spectral resolution: 0.52-0.60 µm; temporal resolution: same area viewed every 16 days (Day 1, Day 17, Day 31). Source: EOSAT.]

Data Correction

Several types of errors can be manifested in remotely sensed data; among these are line dropout and striping. These errors can be corrected to an extent in GIS by radiometric and geometric correction functions.

NOTE: Radiometric errors are usually already corrected in data from EOSAT or SPOT.

See "CHAPTER 5: Enhancement" for more information on radiometric and geometric correction.

Line Dropout

Line dropout occurs when a detector either completely fails to function or becomes temporarily saturated during a scan (like the effect of a camera flash on a human retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector(s) recovers, if it recovers. Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, based on the lines above and below it.

You can correct line dropout using the 5 x 5 Median Filter from the Radar Speckle Suppression function. The Convolution and Focal Analysis functions in Image Interpreter will also correct for line dropout.

Striping

Striping or banding will occur if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.

Use Image Interpreter or Spatial Modeler for implementing algorithms to eliminate striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data.
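A minimal sketch of the estimate-from-neighbors idea — this is not the ERDAS algorithm, only the simplest possible stand-in: replace the dropped line with the average of the lines above and below it.

import numpy as np

def repair_line_dropout(band, bad_row):
    """Replace a dropped-out scan line with the mean of its neighboring lines."""
    fixed = band.astype(np.float32).copy()
    above = fixed[bad_row - 1] if bad_row > 0 else fixed[bad_row + 1]
    below = fixed[bad_row + 1] if bad_row < band.shape[0] - 1 else fixed[bad_row - 1]
    fixed[bad_row] = (above + below) / 2.0
    return fixed.astype(band.dtype)

band = np.random.randint(0, 256, (5, 6), dtype=np.uint8)
band[2] = 255                       # simulate a saturated (dropped-out) line
print(repair_line_dropout(band, 2))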

Data Storage

Image data can be stored on a variety of media—tapes, CD-ROMs, or floppy diskettes, for example—but how the data are stored (e.g., structure) is more important than on what they are stored.

All computer data are in binary format. The basic unit of binary data is a bit. A bit can have two possible values—0 and 1, or “off” and “on” respectively. A set of bits, however, can have many more values, depending on the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used.

A byte is 8 bits of data. Generally, file size and disk space are referred to by number of bytes. For example, a PC may have 640 kilobytes (1,024 bytes = 1 kilobyte) of RAM (random access memory), or a file may need 55,698 bytes of disk space. A megabyte (Mb) is about one million bytes. A gigabyte (Gb) is about one billion bytes.

Storage Formats

Image data can be arranged in several ways on a tape or other media. The most common storage formats are:

• BIL (band interleaved by line)
• BSQ (band sequential)
• BIP (band interleaved by pixel)

For a single band of data, all formats (BIL, BIP, and BSQ) are identical, as long as the data are not blocked. Blocked data are discussed under "Storage Media" on page 23.

BIL

In BIL (band interleaved by line) format, each record in the file contains a scan line (row) of data for one band (Slater 1980). All bands of data for a given line are stored consecutively within the file as shown in Figure 11.

[Figure 11: Band Interleaved by Line (BIL) — layout of a BIL file: an optional header, the image data ordered Line 1 Band 1 through Band x, Line 2 Band 1 through Band x, ..., Line n Band 1 through Band x, and an optional trailer.]

BSQ

In BSQ (band sequential) format, each entire band is stored consecutively in the same file (Slater 1980). This format is advantageous, in that:

• one band can be read and viewed easily
• multiple bands can be easily loaded in any order

[Figure 12: Band Sequential (BSQ) — layout of a BSQ data set: header file(s), an image file for Band 1 (Line 1 through Line n) followed by an end-of-file marker, an image file for Band 2, ..., an image file for Band x, then trailer file(s).]

Landsat Thematic Mapper (TM) data are stored in a type of BSQ format known as fast format. Fast format data have the following characteristics:

• Files are not split between tapes. If a band starts on the first tape, it will end on the first tape.
• An end-of-file (EOF) marker follows each band.
• An end-of-volume marker marks the end of each volume (tape). An end-of-volume marker consists of three end-of-file markers.
• There is one header file per tape.
• There are no header records preceding the image data.
• Regular products (not geocoded) are normally unblocked. Geocoded products are normally blocked (EOSAT).

ERDAS IMAGINE will import all of the header and image file information.

See Geocoded Data on page 32 for more information on geocoded data.

BIP

In BIP (band interleaved by pixel) format, the values for each band are ordered within a given pixel. The pixels are arranged sequentially on the tape (Slater 1980). The sequence for BIP format is:

Pixel 1, Band 1
Pixel 1, Band 2
Pixel 1, Band 3
.
.
.
Pixel 2, Band 1
Pixel 2, Band 2
Pixel 2, Band 3
.
.
.

A brief sketch showing how the three interleaving orders relate appears after the list of storage media below.

Storage Media

Today, most raster data are available on a variety of storage media to meet the needs of users, depending on the system hardware and devices available. When ordering data, it is sometimes possible to select the type of media preferred. The most common forms of storage media are discussed in the following section:

• 9-track tape
• 4 mm tape
• 8 mm tape
• 1/4” cartridge tape
• CD-ROM/optical disk
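The BIL, BSQ, and BIP formats described above differ only in the order of the axes when the same pixel values are written out. The following sketch is an illustration using NumPy, not an ERDAS IMAGINE routine; the tiny 2-line, 3-column, 2-band cube is made up for demonstration.

import numpy as np

lines, columns, bands = 2, 3, 2
# cube[line, column, band] holds the data file values
cube = np.arange(lines * columns * bands).reshape(lines, columns, bands)

bip = cube.reshape(-1)                     # pixel by pixel: all bands of pixel 1, then pixel 2, ...
bil = cube.transpose(0, 2, 1).reshape(-1)  # line by line: all bands of line 1, then line 2, ...
bsq = cube.transpose(2, 0, 1).reshape(-1)  # band by band: all of band 1, then all of band 2

print(bip, bil, bsq, sep="\n")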

Other types of storage media are:

• floppy disk (3.5” or 5.25”)
• film, photograph, or paper
• videotape

Tape

The data on a tape can be divided into logical records and physical records. A record is the basic storage unit on a tape.

• A logical record is a series of bytes that form a unit. For example, all the data for one line of an image may form a logical record.
• A physical record is a consecutive series of bytes on a magnetic tape, followed by a gap, or blank space, on the tape.

Blocked Data

For reasons of efficiency, data can be blocked to fit more on a tape. Blocked data are sequenced so that there are more logical records in each physical record. The number of logical records in each physical record is the blocking factor. For instance, a record may contain 28,000 bytes, but only 4000 columns due to a blocking factor of 7.

Tape Contents

Tapes are available in a variety of sizes and storage capacities. To obtain information about the data on a particular tape, read the tape label or box, or read the header file. Often, there is limited information on the outside of the tape. Therefore, it may be necessary to read the header files on each tape for specific information, such as:

• number of tapes that hold the data set
• number of columns (in pixels)
• number of rows (in pixels)
• data storage format—BIL, BSQ, BIP
• pixel depth—4-bit, 8-bit, 10-bit, 12-bit, or 16-bit
• number of bands
• blocking factor
• number of header files and header records

4 mm Tapes

The 4 mm tape is a relative newcomer in the world of GIS. This petite cassette offers an obvious shipping and storage advantage because of its size. This tape is a mere 2” × 1.75” in size, but it can hold up to 2 Gb of data.

8 mm Tapes

The 8 mm tape offers the advantage of storing vast amounts of data. Tapes are available in 5 and 10 Gb storage capacities (although some tape drives cannot handle the 10 Gb size). The 8 mm tape is a 2.5” × 4” cassette, which makes it easy to ship and handle.

1/4” Cartridge Tapes

This tape format falls between the 8 mm and 9-track in physical size and storage capacity. The tape is approximately 4” × 6” in size and stores up to 150 Mb of data.

9-Track Tapes

A 9-track tape is an older format that was the standard for two decades. It is a large circular tape approximately 10” in diameter. It requires a 9-track tape drive as a peripheral device for retrieving data. The size and storage capability make 9-track less convenient than 8 mm or 1/4” tapes. However, 9-track tapes are still widely used.

The storage format of a 9-track tape in binary format is described by the number of bits per inch, bpi, on the tape. The number of bits per inch on a tape is also referred to as the tape density. The tapes most commonly used have either 1600 or 6250 bpi. Depending on the length of the tape, 9-tracks can store between 120-150 Mb of data. A single 9-track tape may be referred to as a volume. The complete set of tapes that contains one image is referred to as a volume set.

CD-ROM

Data such as ADRG and DLG are most often available on CD-ROM, although many types of data can be requested in CD-ROM format. A CD-ROM is an optical read-only storage device which can be read with a CD player. Up to 644 Mb can be stored on a CD-ROM. CD-ROMs offer the advantage of storing large amounts of data in a small, compact device. Also, since this device is read-only, it protects the data from accidentally being overwritten, erased, or changed from its original integrity. This is the most stable of the current media storage types, and data stored on CD-ROM are expected to last for decades without degradation.

Calculating Disk Space

To calculate the amount of disk space a raster file will require on an ERDAS IMAGINE system, use the following formula:

[ ( (x × y × b) × n ) ] × 1.4 = output file size

where:
y = rows
x = columns
b = number of bytes per pixel
n = number of bands
1.4 adds 30% to the file size for pyramid layers and 10% for miscellaneous adjustments, such as histograms, lookup tables, etc.

NOTE: This output file size is approximate.

The number of bytes per pixel is listed below:
4-bit data: 0.5
8-bit data: 1.0
16-bit data: 2.0

For example, to load a 3 band, 16-bit file with 500 rows and 500 columns:

[ ( (500 × 500) × 2 ) × 3 ] × 1.4 = 2,100,000

or about 2.1 Mb of disk space would be needed.

NOTE: On the PC, disk space is shown in bytes. On the workstation, disk space is shown as kilobytes (1,024 bytes).
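The same calculation, written out as a small function (illustrative only; the 1.4 factor is the pyramid-layer and miscellaneous overhead described above):

def output_file_size(rows, columns, bytes_per_pixel, bands):
    """Approximate ERDAS IMAGINE disk space in bytes, including the 40% overhead."""
    return rows * columns * bytes_per_pixel * bands * 1.4

print(output_file_size(500, 500, 2, 3))   # 2,100,000 bytes, about 2.1 Mb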

ERDAS IMAGINE Format (.img)

In ERDAS IMAGINE, file name extensions identify the file type. When data are imported into IMAGINE, they are converted to the ERDAS IMAGINE file format and stored in .img files. ERDAS IMAGINE image files (.img) can contain two types of raster layers:

• thematic
• continuous

An image file can store a combination of thematic and continuous layers or just one type.

[Figure 13: Image Files Store Raster Layers — an image file (.img) contains raster layer(s), which may be thematic raster layer(s) and/or continuous raster layer(s).]

ERDAS Version 7.5 Users

For Version 7.5 users, when importing a GIS file from Version 7.5, it becomes an image file with one thematic raster layer. When importing a LAN file, each band becomes a continuous raster layer within an image file.

Thematic Raster Layer

Thematic data are raster layers that contain qualitative, categorical information about an area. A thematic layer is contained within an .img file. Thematic layers lend themselves to applications in which categories or themes are used. Thematic raster layers are used to represent data measured on a nominal or ordinal scale, such as:

• soils
• land use
• land cover
• roads
• hydrology

NOTE: Thematic raster layers are displayed as pseudo color layers.

[Figure 14: Example of a Thematic Raster Layer — a soils layer.]

See "CHAPTER 4: Image Display" for information on displaying thematic raster layers.

Continuous Raster Layer

Continuous data are raster layers that contain quantitative (measuring a characteristic on an interval or ratio scale) and related, continuous values. Continuous raster layers can be multiband (e.g., Landsat Thematic Mapper data) or single band (e.g., SPOT panchromatic data). The following types of data are examples of continuous raster layers:

• Landsat
• SPOT
• digitized (scanned) aerial photograph
• digital elevation model (DEM)
• slope
• temperature

NOTE: Continuous raster layers can be displayed as either a gray scale raster layer or a true color raster layer.

[Figure 15: Examples of Continuous Raster Layers — a Landsat TM image and a DEM.]

Tiled Data

Data in the .img format are tiled data. Tiled data are stored in tiles that can be set to any size. The default tile size for .img files is 64 x 64 pixels.

Image File Contents

The .img files contain the following additional information about the data:

• the data file values
• statistics
• lookup tables
• map coordinates
• map projection

This additional information can be viewed in the Image Information function from the ERDAS IMAGINE icon panel.

Statistics

In ERDAS IMAGINE, the file statistics are generated from the data file values in the layer and incorporated into the .img file. This statistical information is used to create many program defaults and helps the user make processing decisions.

Pyramid Layers

Sometimes a large image will take longer than normal to display in the ERDAS IMAGINE Viewer. The pyramid layer option enables the user to display large images faster. Pyramid layers are image layers which are successively reduced by the power of 2 and resampled. The Pyramid Layer option is available in the Image Information function from the ERDAS IMAGINE icon panel and from the Import function.

See "CHAPTER 4: Image Display" for more information on pyramid layers.

See "APPENDIX B: File Formats and Extensions" for detailed information on ERDAS IMAGINE file formats.

Image File Organization

Data is easy to locate if the data files are well organized. Well organized files will also make data more accessible to anyone who uses the system. Using consistent naming conventions and the ERDAS IMAGINE Image Catalog will help keep image files well organized and accessible.

Consistent Naming Convention

Develop a naming convention that is based on the contents of the file. This will help everyone involved know what the file contains. For example, in a project to create a map composition for Lake Lanier, a directory for the files may look similar to the one below:

lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img

From this listing, one can make some educated guesses about the contents of each file based on naming conventions used. For example, “lanierTM.img” is probably a Landsat TM scene of Lake Lanier. “lanier.map” is probably a map composition that has map frames with lanierTM.img and lanierSPOT.img data in them. “lanierUTM.img” was probably created when lanierTM.img was rectified to a UTM map projection.

Many processes create an output file, and every time a file is created, it is necessary to assign a file name. The name which is used can either cause confusion about the process that has taken place, or it can clarify and give direction. For example, if the name of the output file is “junk.img,” it is difficult to determine the contents of the file. On the other hand, if a standard nomenclature is developed in which the file name refers to a process or contents of the file, it is possible to determine the progress of a project and contents of a file by examining the directory.

Keeping Track of Image Files

Using a database to store information about images enables the user to track image files (.img) without having to know the name or location of the file. The database can be queried for specific parameters (e.g., size, type, map projection) and the database will return a list of image files that match the search criteria. This file information will help to quickly determine which image(s) to use, where it is located, and its ancillary data. An image database is especially helpful when there are many image files and even many on-going projects. For example, one could use the database to search for all of the image files of Georgia that have a UTM map projection.

Use the Image Catalog to track and store information for image files (.img) that are imported and created in IMAGINE.

NOTE: All information in the Image Catalog database, except archive information, is extracted from the image file header. Therefore, if this information is modified in the Image Info utility, it will be necessary to re-catalog the image in order to update the information in the Image Catalog database.

ERDAS IMAGINE Image Catalog

The ERDAS IMAGINE Image Catalog database is designed to serve as a library and information management system for image files (.img) that are imported and created in ERDAS IMAGINE. The information for the .img files is displayed in the Image Catalog CellArray. This CellArray enables the user to view all of the ancillary data for the image files in the database. When records are queried based on specific criteria, the .img files that match the criteria will be highlighted in the CellArray. It is also possible to graphically view the coverage of the selected .img files on a map in a canvas window.

When it is necessary to store some data on a tape, the Image Catalog database enables the user to archive .img files to external devices. The Image Catalog CellArray will show which tape the .img file is stored on, and the file can be easily retrieved from the tape device to a designated disk directory. The archived .img files are copies of the files on disk—nothing is removed from the disk. Once the file is archived, it can be removed from the disk, if desired.

Geocoded Data

Geocoding, also known as georeferencing, is the geographical registration or coding of the pixels in an image. Geocoded data are images that have been rectified to a particular map projection and pixel size.

Raw, remotely sensed image data are gathered by a sensor on a platform, such as an aircraft or satellite. In this raw form, the image data are not referenced to a map projection. Rectification is the process of projecting the data onto a plane and making them conform to a map projection system.

It is possible to geocode raw image data with the ERDAS IMAGINE rectification tools. Geocoded data are also available from EOSAT and SPOT.

See "APPENDIX C: Map Projections" for detailed information on the different projections available.

See "CHAPTER 8: Rectification" for information on geocoding raw imagery with ERDAS IMAGINE.

Using Image Data in GIS

ERDAS IMAGINE provides many tools designed to extract the necessary information from the images in a data base. The following chapters in this book describe many of these processes. This section briefly describes some basic image file techniques that may be useful for any application.

Subsetting and Mosaicking

Within ERDAS IMAGINE, there are options available to make additional image files from those acquired from EOSAT, SPOT, etc. These options involve combining files, mosaicking, and subsetting.

It may be useful to combine data from two different dates into one file. This is called multitemporal imagery. For example, a user may want to combine Landsat TM from one date with TM data from a later date, then perform a classification based on the combined data. This is particularly useful for change detection studies. The user can also incorporate elevation data into an existing image file as another band, or create new bands through various enhancement techniques.

To combine two or more image files, each file must be georeferenced to the same coordinate system, or to each other. ERDAS IMAGINE programs allow image data with an unlimited number of bands, but the most common satellite data types—Landsat and SPOT—have seven or fewer bands. Image files can be created with more than seven bands. This can be important when dealing with multiband data.

See "CHAPTER 8: Rectification" for information on georeferencing images.

Subset

Subsetting refers to breaking out a portion of a large file into one or more smaller files. Often, image files contain areas much larger than a particular study area. In these cases, it is helpful to reduce the size of the image file to include only the area of interest. This not only eliminates the extraneous data in the file, but it speeds up processing due to the smaller amount of data to process.

The Import option lets you define a subset area of an image to preview or import. You can also use the Subset option from Image Interpreter to define a subset area.

Mosaic

On the other hand, the study area in which the user is interested may span several image files. In this case, it is necessary to combine the images to create one large file. This is called mosaicking. To create a mosaicked image, use the Mosaic Images option from the Data Prep menu. All of the images to be mosaicked must be georeferenced to the same coordinate system.

Enhancement

Image enhancement is the process of making an image more interpretable for a particular application (Faust 1989). Enhancement can make important features of raw, remotely sensed data and aerial photographs more interpretable to the human eye. Enhancement techniques are often used instead of classification for extracting useful information from images.

There are many enhancement techniques available. They range in complexity from a simple contrast stretch, where the original data file values are stretched to fit the range of the display device, to principal components analysis, where the number of image file bands can be reduced and new bands created to account for the most variance in the data.

See "CHAPTER 5: Enhancement" for more information on enhancement techniques.

Multispectral Classification

Image data are often used to create thematic files through multispectral classification. This entails using spectral pattern recognition to identify groups of pixels that represent a common characteristic of the scene, such as soil type or vegetation.

See "CHAPTER 6: Classification" for a detailed explanation of classification procedures.

Editing Raster Data

ERDAS IMAGINE provides raster editing tools for editing the data values of thematic and continuous raster data. This is primarily a correction mechanism that enables the user to correct bad data values which produce noise, such as spikes and holes in imagery. The raster editing functions can be applied to the entire image or a user-selected area of interest (AOI). The raster editing tools are available in the IMAGINE Viewer.

With raster editing, data values in thematic data can also be recoded according to class. Recoding is a function which reassigns data values to a region or to an entire class of pixels.

See "CHAPTER 10: Geographic Information Systems" for information about recoding data.

See the ERDAS IMAGINE On-Line Help for information about using and selecting AOIs.

The ERDAS IMAGINE raster editing functions allow the use of focal and global spatial modeling functions for computing the values to replace noisy pixels or areas in continuous or thematic data.

Focal operations are filters which calculate the replacement value based on a window (3 × 3, 5 × 5, etc.) and replace the pixel of interest with the replacement value. Therefore this function affects one pixel at a time, and the number of surrounding pixels which influence the value is determined by the size of the moving window.

Global operations calculate the replacement value for an entire area rather than affecting one pixel at a time. These functions, specifically the Majority option, are more applicable to thematic data.

See "CHAPTER 5: Enhancement" for information about reducing data noise using spatial filtering.

Editing Continuous (Athematic) Data

Editing DEMs

DEMs will occasionally contain spurious pixels or bad data. These spikes, holes, and other noises caused by automatic DEM extraction can be corrected by editing the raster data values and replacing them with meaningful values. This discussion of raster editing will focus on DEM editing.

The ERDAS IMAGINE Raster Editing functionality was originally designed to edit DEMs, but it can also be used with images of other continuous data sources, such as radar, SPOT, Landsat, and digitized photographs.

When editing continuous raster data, the user can modify or replace original pixel values with the following:

• a constant value — enter a known constant value for areas such as lakes
• the average of the buffering pixels — replace the original pixel value with the average of the pixels in a specified buffer area around the AOI. This is used where the constant values of the AOI are not known, but the area is flat or homogeneous with little variation (for example, a lake).
• the original data value plus a constant value — add a negative constant value to the original data values to compensate for the height of trees and other vertical features in the DEM. This technique is commonly used in forested areas.
• spatial filtering — filter data values to eliminate noise such as spikes or holes in the data
• interpolation techniques (discussed below)

Interpolation Techniques

While the previously listed raster editing techniques are perfectly suitable for some applications, the following interpolation techniques provide the best methods for raster editing:

• 2-D polynomial — surface approximation
• multi-surface functions — with least squares prediction
• distance weighting

Each pixel’s data value is interpolated from the reference points in the data file. These interpolation techniques are described below.

2-D Polynomial

This interpolation technique provides faster interpolation calculations than distance weighting and multi-surface functions. The following equation is used:

V = a0 + a1x + a2y + a3x² + a4xy + a5y² + . . .

where:
V = data value (elevation value for DEM)
a = polynomial coefficients
x = x coordinate
y = y coordinate

Multi-surface Functions

The multi-surface technique provides the most accurate results for editing DEMs which have been created through automatic extraction. The following equation is used:

V = Σ Wi Qi

where:
V = output data value (elevation value for DEM)
Wi = coefficients which are derived by the least squares method
Qi = distance-related kernels which are actually interpretable as continuous single value surfaces

Source: Wang 1990

Distance Weighting

The weighting function determines how the output data values will be interpolated from a set of reference data points. For each pixel, the values of all reference points are weighted by a value corresponding with the distance between each point and the pixel. The weighting function used in ERDAS IMAGINE is:

W = (S/D − 1)²

where:
S = normalization factor
D = distance from output data point and reference point

The value for any given pixel is calculated by taking the sum of the weighting factors for all reference points multiplied by the data values of those points, and dividing by the sum of the weighting factors:

V = ( Σ (i = 1 to n) Wi × Vi ) / ( Σ (i = 1 to n) Wi )

where:
V = output data value (elevation value for DEM)
i = ith reference point
Wi = weighting factor of point i
Vi = data value of point i
n = number of reference points

Source: Wang 1990
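A compact sketch of the distance weighting calculation above, purely for illustration: the reference points, elevations, and normalization factor S are made-up values, and no ERDAS code is implied. It also assumes the output pixel does not coincide with a reference point (D is never zero).

import numpy as np

def distance_weighted_value(pixel_xy, ref_xy, ref_values, s=10.0):
    """Interpolate one pixel from reference points using W = (S/D - 1)^2."""
    d = np.hypot(ref_xy[:, 0] - pixel_xy[0], ref_xy[:, 1] - pixel_xy[1])
    w = (s / d - 1.0) ** 2
    return np.sum(w * ref_values) / np.sum(w)

ref_xy = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
ref_values = np.array([100.0, 110.0, 105.0, 120.0])   # e.g., DEM elevations
print(distance_weighted_value((1.0, 1.0), ref_xy, ref_values))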

CHAPTER 2
Vector Layers

Introduction

ERDAS IMAGINE is designed to integrate two data types into one system: raster and vector. While the previous chapter explored the characteristics of raster data, this chapter is focused on vector data. This chapter describes vector data, attribute information, and symbolization.

The vector data structure in ERDAS IMAGINE is based on the ARC/INFO data model (developed by ESRI, Inc.). Since the ARC/INFO data model is used in ERDAS IMAGINE, you can use ARC/INFO coverages directly without importing them. You do not need ARC/INFO software or an ARC/INFO license to use the vector capabilities in ERDAS IMAGINE.

See "CHAPTER 10: Geographic Information Systems" for information on editing vector layers and using vector data in a GIS.

Vector data consist of:

• points
• lines
• polygons

Each is illustrated in Figure 16.

[Figure 16: Vector Elements — points, a line with vertices and nodes, and polygons with label points.]

Points

A point is represented by a single x,y coordinate pair. Points can represent the location of a geographic feature or a point that has no area, such as a mountain peak. Label points are also used to identify polygons (see below).

Lines

A line (polyline) is a set of line segments and represents a linear geographic feature, such as a river, road, or utility line. Lines can also represent non-geographical boundaries, such as voting districts, school zones, contour lines, etc. The ending points of a line are called nodes. Each line has two nodes: a from-node and a to-node. The from-node is the first vertex in a line. The to-node is the last vertex in a line. Lines join other lines only at nodes.

Polygons

A polygon is a closed line or closed set of lines defining a homogeneous area, such as soil type, land use, or water body. Polygons can also be used to represent non-geographical features, such as wildlife habitats, state borders, commercial districts, etc. Polygons also contain label points that identify the polygon. The label point links the polygon to its attributes.

Vertex

The points that define a line are vertices. A vertex is a point that defines an element, such as the endpoint of a line segment or a location in a polygon where the line segment defining the polygon changes direction.

[Figure 17: Vertices — a line and a polygon, each defined by three vertices, with a label point shown for the polygon.]

In Figure 17, the line and the polygon are each defined by three vertices.

Coordinates

Vector data are expressed by the coordinates of vertices. The vertices that define each element are referenced with x,y, or Cartesian, coordinates. In some instances, those coordinates may be inches (as in some CAD applications), but often the coordinates are map coordinates, such as State Plane, Universal Transverse Mercator (UTM), or Lambert Conformal Conic. Vector data digitized from an ungeoreferenced image are expressed in file coordinates.

Tics

Vector layers are referenced to coordinates or a map projection system using tic files that contain geographic control points for the layer. Every vector layer must have a tic file. Tics are not topologically linked to other features in the layer and do not have descriptive data associated with them.

Vector Layers

Although it is possible to have points, lines, and polygons in a single layer, a layer typically consists of one type of feature. It is possible to have one vector layer for streams (lines) and another layer for parcels (polygons). This enables the user to isolate data into themes, similar to the themes used in raster layers. Political districts and soil types would probably be in separate layers, even though both are represented with polygons. If the project requires that the coincidence of features in two or more layers be studied, the user can overlay them or create a new layer.

A vector layer is defined as a set of features where each feature has a location (defined by coordinates and topological pointers to other features) and, possibly, attributes (defined as a set of named items or variables) (ESRI 1989). Vector layers contain both the vector features (points, lines, polygons) and the attribute information. Usually, vector layers are also divided by the type of information they represent.

Topology

The spatial relationships between features in a vector layer are defined using topology. In topological vector data, a mathematical procedure is used to define connections between features, identify adjacent polygons, and define a feature as a set of other features (e.g., a polygon is made of connecting lines) (ESRI 1990).

Topology is not automatically created when a vector layer is created. It must be added later using specific functions. Topology must be updated after a layer is edited also.

"Digitizing" on page 47 describes how topology is created for a new or edited vector layer.

See "CHAPTER 10: Geographic Information Systems" for more information about analyzing vector layers.

Vector Files

As mentioned above, the ERDAS IMAGINE vector structure is based in the ARC/INFO data model used for ARC coverages. This georelational data model is actually a set of files using the computer’s operating system for file management and input/output. An ERDAS IMAGINE vector layer is stored in subdirectories on the disk. Vector data are represented by a set of logical tables of information, stored as files within the subdirectory. These files may serve the following purposes:

• define features
• provide feature attributes
• cross-reference feature definition files
• provide descriptive information for the coverage as a whole

A workspace is a location which contains one or more vector layers. Workspaces provide a convenient means for organizing layers into related groups. They also provide a place for the storage of tabular data not directly tied to a particular layer. Each workspace is completely independent. It is possible to have an unlimited number of workspaces and an unlimited number of vector layers in a workspace. Table 1 summarizes the types of files that are used to make up vector layers.

Table 1: Description of File Types

File Type                        File    Description
Feature Definition Files         ARC     Line coordinates and topology
                                 CNT     Polygon centroid coordinates
                                 LAB     Label point coordinates and topology
                                 TIC     Tic coordinates
Feature Attribute Files          AAT     Line (arc) attribute table
                                 PAT     Polygon or point attribute table
Feature Cross-Reference File     PAL     Polygon/line/node cross-reference file
Layer Description Files          BND     Coordinate extremes
                                 LOG     Layer history file
                                 PRJ     Coordinate definition file
                                 TOL     Layer tolerance file

Figure 18 illustrates how a typical vector workspace is set up (ESRI 1992).

[Figure 18: Workspace Structure — a workspace directory (e.g., georgia) containing vector layer subdirectories (e.g., parcels, testdata, demo, roads, streets) and an INFO directory.]

Because vector layers are stored in directories rather than in simple files, you MUST use the utilities provided in ERDAS IMAGINE to copy and rename them. A utility is also provided to update path names that are no longer correct due to the use of regular system commands on vector layers.

See the ESRI documentation for more detailed information about the different vector files.

Attribute Information

Along with points, lines, and polygons, a vector layer can have a wealth of associated descriptive, or attribute, information associated with it. Attribute information is displayed in ERDAS IMAGINE CellArrays. This is the same information that is stored in the INFO database of ARC/INFO. Some attributes are automatically generated when the layer is created. Custom fields can be added to each attribute table. Attribute fields can contain numerical or character data.

The user can select features in the layer based on the attribute information. Likewise, when a row is selected in the attribute CellArray, that feature is highlighted in the Viewer. The attributes for a roads layer may look similar to the example in Figure 19.

[Figure 19: Attribute Example — a CellArray of attributes for a roads layer.]

Using Imported Attribute Data

When external data types are imported into ERDAS IMAGINE, only the required attribute information is imported into the attribute tables (AAT and PAT files) of the new vector layer. The rest of the attribute information is written to one of the following INFO files:

• <layer name>.ACODE — arc attribute information
• <layer name>.PCODE — polygon attribute information
• <layer name>.XCODE — point attribute information

To utilize all of this attribute information, the INFO files can be merged into the PAT and AAT files. Once this attribute information has been merged, it can be viewed in IMAGINE CellArrays and edited as desired, including operations such as exporting attributes or merging attributes. This new information can then be exported back to its original format.

The complete path of the file must be specified when establishing an INFO file name in an ERDAS IMAGINE Viewer application, as shown in the example below:

/georgia/parcels/info!arc!parcels.pcode

Use the Attributes option in the IMAGINE Viewer to view and manipulate vector attribute data, including merging and exporting. (The Raster Attribute Editor is for raster attributes only and cannot be used to edit vector attributes.)

See the ERDAS IMAGINE On-Line Help for more information about using CellArrays.

Displaying Vector Data

Vector data are displayed in Viewers, as are other data types in ERDAS IMAGINE. The user can display a single vector layer, overlay several layers in one Viewer, or display a vector layer(s) over a raster layer(s).

In layers that contain more than one feature (a combination of points, lines, and polygons), the user can select which features to display. For example, if a user is studying parcels, he or she may want to display only the polygons in a layer that also contains street centerlines (lines).

See "CHAPTER 4: Image Display" for a thorough discussion of how images are displayed.

Color Schemes

Vector data are usually assigned class values in the same manner as the pixels in a thematic raster file. These class values correspond to different colors on the display screen. As with a pseudo color image, the user can assign a color scheme for displaying the vector classes.

Symbolization

Vector layers can be displayed with symbolization, meaning that the attributes can be used to determine how points, lines, polygons, and nodes are rendered. For example, if a point layer represents cities and towns, the appropriate symbol could be used at each point based on the population of that area. Points, lines, polygons, and nodes are symbolized using styles and symbols similar to annotation.

Points

Point symbolization options include symbol, size, and color. The symbols available are the same symbols available for annotation.

Lines

Lines can be symbolized with varying line patterns, composition, width, and color. The line styles available are the same as those available for annotation.

Polygons

Polygons can be symbolized as lines or as filled polygons. Polygons symbolized as lines can have varying line styles (see Lines above). For filled polygons, either a solid fill color or a repeated symbol can be selected. When symbols are used, the user selects the symbol to use, the symbol size, symbol color, background color, and the x- and y-separation between symbols. Figure 20 illustrates a pattern fill.

[Figure 20: Symbolization Example — a polygon displayed with a repeated-symbol pattern fill.]

The vector layer will reflect the symbolization that is defined in the Symbology dialog.

See the ERDAS IMAGINE Tour Guides or On-Line Help for information about selecting features and using CellArrays.

Vector Data Sources

Vector data are created by:

• tablet digitizing—maps, photographs, or other hardcopy data can be digitized using a digitizing tablet
• screen digitizing—create new vector layers by using the mouse to digitize on the screen
• using other software packages—many external vector data types can be converted to ERDAS IMAGINE vector layers
• converting raster layers—raster layers can be converted to vector layers

Each of these options is discussed in a separate section below.

Digitizing

In the broadest sense, digitizing refers to any process that converts non-digital data into numbers. However, in ERDAS IMAGINE, the digitizing of vectors refers to the creation of vector data from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing tablet or a mouse on a displayed image.

Any image not already in digital format must be digitized before it can be read by the computer and incorporated into the data base. Most Landsat, SPOT, or other satellite data are already in digital format upon receipt, so it is not necessary to digitize them. However, the user may also have maps, photographs, or other non-digital data that contain information they want to incorporate into the study. Or, the user may want to extract certain features from a digital image to include in a vector layer. Tablet digitizing and screen digitizing enable the user to digitize certain features of a map or photograph, such as roads, bodies of water, voting districts, and so forth.

Tablet Digitizing

Tablet digitizing involves the use of a digitizing tablet to transfer non-digital data such as maps or photographs to vector format. The digitizing tablet contains an internal electronic grid that transmits data to ERDAS IMAGINE on cue from a digitizer keypad operated by the user.

[Figure 21: Digitizing Tablet — a digitizing tablet with a hand-held digitizer keypad (puck).]

Digitizer Set-Up

The map or photograph to be digitized is secured on the tablet, and a coordinate system is established with a set-up procedure.

Digitizer Operation

The hand-held digitizer keypad features a small window with cross-hairs and keypad buttons. Position the intersection of the cross-hairs directly over the point to be digitized. Depending on the type of equipment and the program being used, one of the input buttons is pushed to tell the system which function to perform, such as:

• digitize a point (i.e., transmit map coordinate data)
• connect a point to previous points
• assign a particular value to the point or polygon
• measure the distance between points, etc.

Move the puck along the desired polygon boundaries or lines, digitizing points at appropriate intervals (where lines curve or change direction), until all the points are satisfactorily completed.

You can create a new vector layer from the IMAGINE Viewer. Select the Tablet Input function from the Viewer to use a digitizing tablet to enter new information into that layer.

Newly created vector layers do not contain topological data. You must create topology using the Build or Clean options. This is discussed further in "Chapter 9: Geographic Information Systems."

Digitizing Modes

There are two modes used in digitizing:

• point mode — one point is generated each time a keypad button is pressed
• stream mode — points are generated continuously at specified intervals, while the puck is in proximity to the surface of the digitizing tablet

Measurement

The digitizing tablet can also be used to measure both linear and areal distances on a map or photograph. The digitizer puck is used to outline the areas to measure. The user can measure:

• lengths and angles by drawing a line
• perimeters and areas using a polygonal, rectangular, or elliptical shape
• positions by specifying a particular point

Measurements can be saved to a file, printed, and copied. These operations can also be performed with screen digitizing. Select the Measure function from the IMAGINE Viewer or click on the Ruler tool in the Viewer tool bar to enable tablet or screen measurement.

Screen Digitizing

In screen digitizing, vector data are drawn in the Viewer with a mouse using the displayed image as a reference. These data are then written to a vector layer. Screen digitizing is used for the same purposes as tablet digitizing, such as:

• digitizing roads, bodies of water, political boundaries
• selecting training samples for input to the classification programs
• outlining an area of interest for any number of applications

Create a new vector layer from the IMAGINE Viewer.

Imported Vector Data

Many types of vector data from other software packages can be incorporated into the ERDAS IMAGINE system. These data formats include:

• ARC/INFO GENERATE format files from ESRI, Inc.
• ARC/INFO INTERCHANGE files from ESRI, Inc.
• ARCVIEW Shape files from ESRI, Inc.
• Digital Line Graphs (DLG) from U.S.G.S.
• Digital Exchange Files (DXF) from Autodesk, Inc.
• ETAK MapBase files from ETAK, Inc.
• Initial Graphics Exchange Standard (IGES) files
• Intergraph Design (DGN) files from Intergraph
• Spatial Data Transfer Standard (SDTS) vector files
• Topologically Integrated Geographic Encoding and Referencing System (TIGER) files from the U.S. Census Bureau
• Vector Product Format (VPF) files from the Defense Mapping Agency

See "CHAPTER 3: Raster and Vector Data Sources" for more information on these data.

Raster to Vector Conversion

A raster layer can be converted to a vector layer and used as another layer in a vector data base. Figure 22 illustrates a thematic file in raster format that has been converted to vector format.

Figure 22: Raster Format Converted to Vector Format (a raster soils layer converted to a vector polygon soils layer)

Most commonly, thematic raster data rather than continuous data are converted to vector format, since converting continuous layers may create more vector features than are practical or even manageable.

Convert vector data to raster data, and vice versa, using ERDAS IMAGINE Vector.

CHAPTER 3
Raster and Vector Data Sources

Introduction

This chapter is an introduction to the most common raster and vector data types that can be used with the ERDAS IMAGINE software package. The raster data types covered include:

• visible/infrared satellite data
• radar imagery
• airborne sensor data
• digital terrain models
• scanned or digitized maps and photographs

The vector data types covered include:

• ARC/INFO GENERATE format files
• USGS Digital Line Graphs (DLG)
• AutoCAD Digital Exchange Files (DXF)
• MapBase digital street network files (ETAK)
• U.S. Department of Commerce Initial Graphics Exchange Standard files (IGES)
• U.S. Census Bureau Topologically Integrated Geographic Encoding and Referencing System files (TIGER)

There is an abundance of data available for use in GIS today. In addition to satellite and airborne imagery, raster data sources include digital x-rays, sonar, microscopic imagery, video digitized data, and many other sources.

Importing and Exporting Raster Data

Because of the wide variety of data formats, ERDAS IMAGINE provides two options for importing data:

• Direct import for specific formats
• Generic import for general formats

Direct Import

Table 2 lists some of the raster data formats that can be directly imported to and exported from ERDAS IMAGINE:

Table 2: Raster Data Formats for Direct Import

• ADRG
• ADRI
• ASCII
• AVHRR
• BIL, BIP, BSQ (see also Generic Import, below)
• DTED
• ERDAS 7.X (.LAN, .GIS, .ANT)
• GRID
• Landsat
• RADARSAT
• SPOT
• Sun Raster
• TIFF
• USGS DEM

Once imported, the raster data are converted to the ERDAS IMAGINE file format (.img). Each direct function is programmed specifically for that type of data and cannot be used to import other data types. The direct import function imports the data file values that make up the raster image, as well as the ephemeris or additional data inherent to the data structure. For example, when the user imports Landsat data, ERDAS IMAGINE also imports the georeferencing data for the image. Once imported, this ephemeris data can be viewed using the Data View option (from the Utility menu or the Import dialog).

Raster data formats cannot be exported as vector data formats, unless they are converted with the Vector utilities.

Generic Import

The Generic import option is a flexible program which enables the user to define the data structure for ERDAS IMAGINE. This program allows the import of BIL, BIP, and BSQ data that are stored in left to right, top to bottom row order. Data formats from unsigned 1-bit up to 64-bit floating point can be imported. This program imports only the data file values—it does not import ephemeris data, such as georeferencing information.
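The row order that the Generic importer expects for band interleaved by line (BIL) data can be pictured with a short numpy sketch. This is an illustration only, not part of ERDAS IMAGINE; the file name, dimensions, and data type are placeholders that would come from the data's own documentation.

```python
import numpy as np

rows, cols, bands = 512, 512, 7   # placeholders taken from the data documentation
dtype = np.uint8                  # this sketch assumes 8-bit values

# In BIL order the file stores, for each row, one full line per band in sequence.
raw = np.fromfile("scene.bil", dtype=dtype, count=rows * cols * bands)  # hypothetical file
image = raw.reshape(rows, bands, cols)        # (row, band, column) as stored on disk
image = image.transpose(0, 2, 1)              # rearrange to (row, column, band)
```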

Complex data cannot be imported using this program; however, they can be imported as two real images and then combined into one complex image using the IMAGINE Spatial Modeler. You cannot import tiled or compressed data using the Generic import option.

Importing and Exporting Vector Data

Vector layers can be created within ERDAS IMAGINE by digitizing points, lines, and polygons using a digitizing tablet or the computer screen. Several vector data types, which are available from a variety of government agencies and private companies, can also be imported. Table 3 lists some of the vector data formats that can be imported to, and exported from, ERDAS IMAGINE:

Table 3: Vector Data Formats for Import and Export

• GENERATE
• DXF
• DLG
• ETAK
• IGES
• TIGER

Import and export vector data with the Import/Export function. These vector formats are discussed in more detail in "Vector Data from Other Software Vendors" on page 90.

Once imported, the vector data are automatically converted to ERDAS IMAGINE vector layers. See "CHAPTER 2: Vector Layers" for more information on ERDAS IMAGINE vector layers. You can also convert vector layers to raster format, and vice versa, with the ERDAS IMAGINE Vector utilities.

Satellite Data

There are several data acquisition options available, including photography, aerial sensors, and sophisticated satellite scanners. However, a satellite system offers these advantages:

• Digital data gathered by a satellite sensor can be transmitted over radio or microwave communications links and stored on magnetic tapes, so they are easily processed and analyzed by a computer.
• Many satellites orbit the earth, so the same area can be covered on a regular basis for change detection.
• Once the satellite is launched, the cost for data acquisition is less than that for aircraft data.
• Satellites have very stable geometry, meaning that there is less chance for distortion or skew in the final image.

Use the Import/Export function to import a variety of satellite data.

Satellite System

A satellite system is composed of a scanner with sensors and a satellite platform.

• The scanner is the entire data acquisition system, such as the Landsat Thematic Mapper scanner or the SPOT panchromatic scanner (Lillesand and Kiefer 1987). It includes the sensor and the detectors.
• A sensor is a device that gathers energy, converts it to a signal, and presents it in a form suitable for obtaining information about the environment (Colwell 1983).
• A detector is the device in a sensor system that records electromagnetic radiation. For example, in the sensor system on the Landsat Thematic Mapper scanner there are 16 detectors for each wavelength band (except band 6, which has 4 detectors).

In a satellite system, the total width of the area on the ground covered by the scanner is called the swath width, or width of the total field of view (FOV). FOV differs from IFOV (instantaneous field of view) in that the IFOV is a measure of the field of view of each detector, while the FOV is a measure of the field of view of all the detectors combined.
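The link between IFOV and ground resolution can be illustrated with a small-angle approximation. This sketch is not from the guide; the altitude and IFOV values are assumptions chosen only to give a result of roughly 30 m, in the neighborhood of the Landsat TM design.

```python
def ground_ifov_m(altitude_m, ifov_rad):
    """Approximate ground resolution element of one detector (small-angle approximation)."""
    return altitude_m * ifov_rad

# Illustrative numbers only: a 705 km orbit and a 42.5 microradian IFOV
# give a ground element of roughly 30 m.
print(round(ground_ifov_m(705_000, 42.5e-6), 1))   # ~30.0
```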

Satellite Characteristics

The U.S. Landsat and the French SPOT satellites are two important data acquisition satellites. These systems provide the majority of remotely sensed digital images in use today. The Landsat and SPOT satellites have several characteristics in common:

• They have sun-synchronous orbits, meaning that each orbit crosses the equator at the same local solar time, so data are always collected at the same local time of day over the same region.
• They both record electromagnetic radiation in one or more bands. Multiband data are referred to as multispectral imagery. Single band, or monochrome, imagery is called panchromatic.
• Both scanners can produce nadir views. Nadir is the area on the ground directly beneath the scanner’s detectors.

NOTE: The current SPOT system has the ability to collect off-nadir stereo imagery, as will the future Landsat 7 system.

Image Data Comparison

Figure 23 shows a comparison of the electromagnetic spectrum recorded by Landsat TM, Landsat MSS, SPOT, and NOAA AVHRR data. These data are described in detail in the following sections.

Figure 23: Multispectral Imagery Comparison (spectral coverage, in micrometers, of the Landsat MSS, Landsat TM, SPOT XS, SPOT Panchromatic, and NOAA AVHRR bands; NOAA AVHRR band 5 is not on the NOAA 10 satellite, but is on NOAA 11)

Landsat

In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian program specializing in the acquisition of remotely sensed digital satellite data. The first system was called ERTS (Earth Resources Technology Satellites), and was later renamed to Landsat. There have been several Landsat satellites launched since 1972. Landsats 1, 2, and 3 are no longer operating, but Landsats 4 and 5 are still in orbit gathering data. Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data, and Landsats 4 and 5 collect MSS and Thematic Mapper (TM) data. MSS and TM are discussed in more detail below.

NOTE: Landsat data are available through the Earth Observation Satellite Company (EOSAT) or the EROS Data Center. See "Ordering Raster Data" on page 84 for more information.

MSS

The MSS (multispectral scanner) from Landsats 4 and 5 has a swath width of approximately 185 × 170 km from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely used for general geologic studies as well as vegetation inventories. The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV. A typical scene contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-bit, but it is stored as 8-bit (Lillesand and Kiefer 1987).

Detectors record electromagnetic radiation (EMR) in four bands:

• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting cultural features, such as roads. These bands also show detail in water.
• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in land/water and vegetation discrimination.

Band 1 = Green, 0.50-0.60 µm. This band scans the region between the blue and red chlorophyll absorption bands. It corresponds to the green reflectance of healthy vegetation, and it is also useful for mapping water bodies.

Band 2 = Red, 0.60-0.70 µm. This is the red chlorophyll absorption band of healthy green vegetation and represents one of the most important bands for vegetation discrimination. It is also useful for determining soil boundary and geological boundary delineations and cultural features.

Band 3 = Reflective infrared, 0.70-0.80 µm. This band is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Band 4 = Reflective infrared, 0.80-1.10 µm. This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for vegetation surveys and for penetrating haze (Jensen 1996).

TM

The TM (thematic mapper) scanner is a multispectral scanning system much like the MSS, except that the TM sensor records reflected/emitted electromagnetic energy from the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS.

TM has a swath width of approximately 185 km from a height of approximately 705 km. The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6), which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is necessary for adequate signal strength. However, the thermal band is resampled to 28.5 × 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each pixel has a possible range of data values from 0 to 255.

Detectors record EMR in seven bands:

• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in detecting cultural features such as roads. These bands also show detail in water.
• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be used in land/water discrimination.
• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping (Jensen 1996; Lillesand and Kiefer 1987).

Band 1 = Blue, 0.45-0.52 µm. Useful for mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.

Band 2 = Green, 0.52-0.60 µm. Corresponds to the green reflectance of healthy vegetation. Also useful for cultural feature identification.

Band 3 = Red, 0.63-0.69 µm. Useful for discriminating between many plant species. It is also useful for determining soil boundary and geological boundary delineations as well as cultural features.

Band 4 = Reflective-infrared, 0.76-0.90 µm. This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification, vegetation type and health determination, and emphasizes soil/crop and land/water contrasts.

Band 5 = Mid-infrared, 1.55-1.74 µm. This band is sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. This is also one of the few bands that can be used for snow and cloud differentiation and to discriminate between clouds, snow, and ice. It is also useful for soil moisture and rock type discrimination studies.

Band 6 = Thermal-infrared, 10.40-12.50 µm. This band is useful for vegetation and crop stress detection, heat intensity, and insecticide applications, and for locating thermal pollution. It can also be used to locate geothermal activity.

Band 7 = Mid-infrared, 2.08-2.35 µm. This band is important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.

Figure 24: Landsat MSS vs. Landsat TM (MSS: 4 bands, 1 pixel = 57 x 79 m, radiometric resolution 0-127; TM: 7 bands, 1 pixel = 30 x 30 m, radiometric resolution 0-255)

Band Combinations for Displaying TM Data

Different combinations of the TM bands can be displayed to create different composite effects. The following combinations are commonly used to display images:

NOTE: The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of the monitor.

• Bands 3, 2, 1 create a true color composite. True color means that objects look as they would to the naked eye—similar to a color photograph.
• Bands 4, 3, 2 create a false color composite. False color composites appear similar to an infrared photograph, where objects do not have the same colors or contrasts as they would naturally. For instance, in an infrared image, vegetation appears red, water appears navy or black, and so forth.
• Bands 5, 4, 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue.
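A minimal numpy sketch of these band-to-color-gun assignments, assuming the TM bands are already available as separate 8-bit arrays; the random arrays here are stand-ins for real data, not an ERDAS IMAGINE workflow.

```python
import numpy as np

def composite(red_band, green_band, blue_band):
    """Stack three single-band arrays into an (rows, cols, 3) RGB display array."""
    return np.dstack([red_band, green_band, blue_band]).astype(np.uint8)

# Stand-in 8-bit arrays; in practice these would be TM bands read from the image file.
tm = {b: np.random.randint(0, 256, (100, 100), dtype=np.uint8) for b in (1, 2, 3, 4, 5)}

true_color   = composite(tm[3], tm[2], tm[1])   # bands 3, 2, 1 -> R, G, B
false_color  = composite(tm[4], tm[3], tm[2])   # bands 4, 3, 2 -> R, G, B
pseudo_color = composite(tm[5], tm[4], tm[2])   # bands 5, 4, 2 -> R, G, B
```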

Different color schemes can be used to bring out or enhance the features under study. These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.

See "CHAPTER 4: Image Display" for more information on how images are displayed, "CHAPTER 5: Enhancement" for more information on how images can be enhanced, and "Ordering Raster Data" on page 84 for information on types of Landsat data available.

SPOT

The first Systeme Pour l’observation de la Terre (SPOT) satellite, developed by the French Centre National d’Etudes Spatiales (CNES), was launched in early 1986. The second SPOT satellite was launched in 1990 and the third was launched in 1993. The sensors operate in two modes, multispectral and panchromatic. SPOT is commonly referred to as a pushbroom scanner, meaning that all scanning parts are fixed and scanning is accomplished by the forward motion of the scanner. SPOT pushes 3000/6000 sensors along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit.

The SPOT satellite can observe the same area on the globe once every 26 days. The SPOT scanner normally produces nadir views, but it does have off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the detectors, but off to an angle. Using this off-nadir capability, one area on the earth can be viewed as often as every 3 days. This off-nadir viewing can be programmed from the ground control station, and is quite useful for collecting data in a region not directly in the path of the scanner or in the event of a natural or man-made disaster, where timeliness of data acquisition is crucial. It is also very useful in collecting stereo data from which elevation data can be extracted.

The width of the swath observed varies between 60 km for nadir viewing and 80 km for off-nadir viewing at a height of 832 km (Jensen 1996).

Panchromatic

SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution, contains 1 band—0.51 to 0.73 µm—and is similar to a black and white photograph. It has a radiometric resolution of 8 bits (Jensen 1996).

XS

SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution, and contains 3 bands (Jensen 1996).

Band 1 = Green, 0.50-0.59 µm. Corresponds to the green reflectance of healthy vegetation.

Band 2 = Red, 0.61-0.68 µm. Useful for discriminating between plant species. It is also useful for soil boundary and geological boundary delineations.

Band 3 = Reflective infrared, 0.79-0.89 µm. This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Figure 25: SPOT Panchromatic vs. SPOT XS (Panchromatic: 1 band, 1 pixel = 10 x 10 m; XS: 3 bands, 1 pixel = 20 x 20 m; radiometric resolution 0-255)

See "Ordering Raster Data" on page 84 for information on the types of SPOT data available.

Stereoscopic Pairs

Two observations can be made by the panchromatic scanner on successive days, so that the two images are acquired at angles on either side of the vertical, resulting in stereoscopic imagery. Stereoscopic imagery can also be achieved by using one vertical scene and one off-nadir scene. This type of imagery can be used to produce a single image, or topographic and planimetric maps (Jensen 1996). Topographic maps indicate elevation. Planimetric maps correctly represent horizontal distances between objects (Star and Estes 1990).

See "Topographic Data" on page 81 and "CHAPTER 9: Terrain Analysis" for more information about topographic data and how SPOT stereopairs and aerial photographs can be used to create elevation data and orthographic images.

NOAA Polar Orbiter Data

The National Oceanic and Atmospheric Administration (NOAA) has sponsored several polar orbiting satellites to collect data of the earth. These satellites were originally designed for meteorological applications, but the data gathered have been used in many fields—from agronomy to oceanography (Needham 1986). The first of these satellites to be launched was the TIROS-N in 1978. Since the TIROS-N, five additional NOAA satellites have been launched. Of these, the last three are still in orbit gathering data.

AVHRR

The NOAA AVHRR (Advanced Very High Resolution Radiometer) data are small-scale data and often cover an entire country. The swath width is 2700 km and the satellites orbit at a height of approximately 833 km (Kidwell 1988; Needham 1986). The entire globe can be viewed in 14.5 days. There may be four or five bands, depending on when the data were acquired.

AVHRR images are useful for snow cover mapping, flood monitoring, vegetation mapping, regional soil moisture analysis, wildfire fuel mapping, fire detection, dust and sandstorm monitoring, and various geologic applications (Lillesand and Kiefer 1987).

The AVHRR system allows for direct transmission in real-time of data called High Resolution Picture Transmission (HRPT). It also allows for about ten minutes of data to be recorded over any portion of the world on two recorders on board the satellite. This recorded data are called Local Area Coverage (LAC).

There are three basic formats for AVHRR data which can be imported into ERDAS IMAGINE:

• Local Area Coverage (LAC) — data recorded on board the sensor with a spatial resolution of approximately 1.1 × 1.1 km
• High Resolution Picture Transmission (HRPT) — direct transmission of AVHRR data in real-time with the same resolution as LAC
• Global Area Coverage (GAC) — data produced from LAC data by using only 1 out of every 3 scan lines; GAC data have a spatial resolution of approximately 4 × 4 km

LAC and HRPT have identical formats; the only difference is that HRPT are transmitted directly and LAC are recorded.

AVHRR data are available in 10-bit packed and 16-bit unpacked format. The term packed refers to the way in which the data are written to the tape. Packed data are compressed to fit more data on each tape (Kidwell 1988).

Band 1 = Visible, 0.58-0.68 µm. This band corresponds to the green reflectance of healthy vegetation and is important for vegetation discrimination.

Band 2 = Near-infrared, 0.725-1.10 µm. This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Band 3 = Thermal-infrared, 3.55-3.93 µm. This is a thermal band that can be used for snow and ice discrimination. It is also useful for detecting fires.

Band 4 = Thermal-infrared, 10.50-11.50 µm (NOAA-6, 8, 10); 10.30-11.30 µm (NOAA-7, 9, 11). This band is useful for vegetation and crop stress detection. It can also be used to locate geothermal activity.

Band 5 = Thermal-infrared, 10.50-11.50 µm (NOAA-6, 8, 10); 11.50-12.50 µm (NOAA-7, 9, 11). See band 4.

AVHRR scenes may contain one band, a combination of bands, or all bands. All bands are referred to as a full set, and selected bands are referred to as an extract.

AVHRR data have a radiometric resolution of 10-bits, meaning that each pixel has a possible data file value between 0 and 1023.

Use the Import/Export function to import AVHRR data. See "Ordering Raster Data" on page 84 for information on the types of NOAA data available.
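As an illustration of what "10-bit packed" means, the sketch below unpacks samples assuming the common NOAA packing of three 10-bit values per 32-bit big-endian word; this packing scheme is an assumption to verify against the documentation for a given data set, not something stated in this guide.

```python
import numpy as np

def unpack_10bit(packed_bytes):
    """Unpack 10-bit samples stored three per 32-bit big-endian word.

    Assumes bits 29-20, 19-10, and 9-0 of each word hold consecutive samples
    (the top 2 bits unused), so every value falls in the range 0-1023.
    """
    words = np.frombuffer(packed_bytes, dtype=">u4")
    samples = np.empty(words.size * 3, dtype=np.uint16)
    samples[0::3] = (words >> 20) & 0x3FF
    samples[1::3] = (words >> 10) & 0x3FF
    samples[2::3] = words & 0x3FF
    return samples
```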

Radar Data

Simply put, radar data are produced when:

• a radar transmitter emits a beam of micro or millimeter waves,
• the waves reflect from the surfaces they strike, and
• the backscattered radiation is detected by the radar system’s receiving antenna, which is tuned to the frequency of the transmitted waves.

A radar system can be airborne, spaceborne, or ground-based. The resultant radar data can be used to produce radar images.

Airborne radar systems have typically been mounted on civilian and military aircraft, but in 1978 the radar satellite Seasat-1 was launched. The radar data from that mission and subsequent spaceborne radar systems have been a valuable addition to the data available for use in GIS processing. In the last decade, the importance and applications of radar have grown rapidly.

Advantages of Using Radar Data

Radar data have several advantages over other types of remotely sensed imagery:

• Radar microwaves can penetrate the atmosphere day or night under virtually all weather conditions, providing data even in the presence of haze, light rain, snow, clouds, or smoke.
• Under certain circumstances, radar can partially penetrate arid and hyperarid surfaces, revealing sub-surface features of the earth.
• Although radar does not penetrate standing water, it can reflect the surface action of oceans, lakes, and other bodies of water. Surface eddies, swells, and waves are greatly affected by the bottom features of the water body, and a careful study of surface action can provide accurate details about the bottom features.

Researchers are finding that a combination of the characteristics of radar data and visible/infrared data is providing a more complete picture of the earth.

Radar Sensors

Radar images are generated by two different types of sensors:

• SLAR (Side-looking Airborne Radar) — uses an antenna which is fixed below an aircraft and pointed to the side to transmit and receive the radar signal. (See Figure 26.)
• SAR (Synthetic Aperture Radar) — uses a side-looking, fixed antenna to create a synthetic aperture. SAR sensors are mounted on satellites and the NASA Space Shuttle. The sensor transmits and receives as it is moving. The signals received over a time interval are combined to create the image.

While there is a specific importer for RADARSAT data, most types of radar image data can be imported into ERDAS IMAGINE with the Generic import option of Import/Export.

Both SLAR and SAR systems use side-looking geometry. Figure 26 shows a representation of an airborne SLAR system. (A target is any object or feature that is the subject of the radar scan.)

Figure 26: SLAR Radar (Lillesand and Kiefer 1987) — an airborne SLAR system, showing the range direction, beam width, sensor height at nadir, azimuth direction, azimuth resolution, and previous image lines

Figure 27 shows a graph of the data received from the radiation transmitted in Figure 26. Notice how the data correspond to the terrain in Figure 26. These data can be used to produce a radar image of the target area.

Figure 27: Received Radar Signal — signal strength (DN) plotted against time for the trees, hill, hill shadow, river, and trees in the scene of Figure 26

Active and Passive Sensors

An active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a passive microwave sensor, which simply receives the low-level radiation naturally emitted by targets.

Like the coherent light from a laser, the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area, these waves are no longer in phase. This is due to the different distances they travel from different targets, or single versus multiple bounce scattering. Radar waves are transmitted in phase; once reflected, they are out of phase, interfering with each other and producing speckle noise.

Figure 28: Radar Reflection from Different Sources and Distances (Lillesand and Kiefer 1987) — diffuse reflector, specular reflector, and corner reflector

At present, these bands are commonly used for radar imaging systems:

Table 4: Commonly Used Bands for Radar Imaging

Band    Frequency Range     Wavelength Range    Radar System
X       5.20-10.90 GHz      2.75-5.77 cm        USGS SLAR
C       3.9-6.2 GHz         4.8-7.7 cm          ERS-1, Fuyo 1
L       0.39-1.55 GHz       19.3-76.9 cm        SIR-A,B, Almaz
P       0.225-0.391 GHz     76.7-133.3 cm       AIRSAR

More information about these radar systems is given later in this chapter.
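The wavelength column follows directly from the frequency column, since wavelength is the speed of light divided by frequency. A short sketch (not from the guide) that reproduces the wavelength ranges from the frequency endpoints in Table 4:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def wavelength_cm(frequency_ghz):
    """Return the radar wavelength in centimetres for a frequency given in GHz."""
    return C_LIGHT / (frequency_ghz * 1e9) * 100.0

bands = {"X": (5.20, 10.90), "C": (3.9, 6.2), "L": (0.39, 1.55), "P": (0.225, 0.391)}
for band, (f_lo, f_hi) in bands.items():
    # Higher frequency corresponds to the shorter wavelength end of the range.
    print(f"{band}: {wavelength_cm(f_hi):.2f}-{wavelength_cm(f_lo):.2f} cm")
```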

Radar bands were named arbitrarily when radar was first developed by the military; the letter designations have no special meaning.

NOTE: The C band overlaps the X band. Wavelength ranges may vary slightly between sensors.

Speckle Noise

Once out of phase, the radar waves can interfere constructively or destructively to produce light and dark pixels known as speckle noise. Speckle noise in radar data must be reduced before the data can be utilized. However, the radar image processing programs used to reduce speckle noise also produce changes to the image. Since any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image, the order in which the image processing programs are implemented is crucial. Do not rectify, correct to ground range, or in any way resample the pixel values before removing speckle noise. (A rotation using nearest neighbor might be permissible.)

This consideration, combined with the fact that different applications and sensor outputs necessitate different speckle removal models, has led ERDAS to offer several speckle reduction algorithms in ERDAS IMAGINE Radar.

ERDAS IMAGINE Radar enables the user to:

• import radar data into the GIS as a stand-alone source or as an additional layer with other imagery sources
• remove speckle noise
• enhance edges
• perform texture analysis
• perform radiometric and slant-to-ground range correction

See "CHAPTER 5: Enhancement" for more information on radar imagery enhancement.

Applications for Radar Data

Radar data can be used independently in GIS applications or combined with other satellite data, such as Landsat, SPOT, or AVHRR. Possible GIS applications for radar data include:

• Geology — radar’s ability to partially penetrate land cover and sensitivity to micro relief makes radar data useful in geologic mapping, mineral exploration, and archaeology.
• Classification — a radar scene can be merged with visible/infrared data as an additional layer(s) in vegetation classification for timber mapping, crop monitoring, etc.
• Glaciology — the ability to provide imagery of ocean and ice phenomena makes radar an important tool for monitoring climatic change through polar ice variation. (The ERS-1 satellite, revisiting every 35 days, provides excellent coverage of these specific target areas.)
• Oceanography — radar is used for wind and wave measurement, sea-state and weather forecasting, and monitoring ocean circulation, tides, and polar oceans.
• Hydrology — radar data are proving useful for measuring soil moisture content and mapping snow distribution and water content.
• Ship monitoring — the ability to provide day/night all-weather imaging, as well as detect ships and associated wakes, makes radar a tool which can be used for ship navigation through frozen ocean areas such as the Arctic or North Atlantic Passage.
• Offshore Oil Activities — radar data are used to provide ice updates for offshore drilling rigs, determining weather and sea conditions for drilling and installation operations, and detecting oil spills.
• Pollution monitoring — radar can detect oil on the surface of water and can be used to track the spread of an oil spill.

Current Radar Sensors

Table 5 gives a brief description of currently available radar sensors. This is not a complete list of such sensors, but it does represent the ones most useful for GIS applications.

Table 5: Current Radar Sensors

Sensor      Availability    Resolution    Revisit Time    Scene Area                 Bands
ERS-1, 2    operational     12.5 m        35 days         100 x 100 km               C band
JERS-1      operational     18 m          44 days         75 x 100 km                L band
SIR-A, B    1981, 1984      25 m          NA              30 x 60 km                 L band
SIR-C       1994            25 m          NA              variable swath             L, C, X bands
RADARSAT    operational     10-100 m      3 days          50 x 50 to 500 x 500 km    C band
Almaz-1     1991-1992       15 m          NA              40 x 100 km                C band

Future Radar Sensors

Several radar satellites are planned for launch within the next several years, but only a few programs will be successful. Following are two scheduled programs which are known to be highly achievable.

Light SAR

NASA/JPL is currently designing a radar satellite called Light SAR. Present plans are for this to be a multi-polar sensor operating at L-band.

Almaz 1-b

NPO Mashinostroenia plans to launch and operate Almaz-1b as a commercial program in 1998. Almaz-1b will feature three synthetic aperture radars (SAR) that can collect multipolar, multifrequency (X, P, S band) imagery in high resolution (5-7 m spatial, 20-30 km swath), intermediate (5-15 m spatial, 60-70 km swath), or survey (20-40 m spatial, 120-170 km swath) modes. The Almaz-1b system will include a unique, complex multisensor payload consisting of eight high resolution sensors which can operate in various sensor combinations, including high resolution, two-pass radar stereo and single-pass stereo coverage in the optical and multispectral bandwidths.

Image Data from Aircraft

Image data can also be acquired from multispectral scanners or radar sensors aboard aircraft, as well as satellites. This is useful if there isn’t time to wait for the next satellite to pass over a particular area, or if it is necessary to achieve a specific spatial or spectral resolution that cannot be attained with satellite sensors. For example, this type of data can be beneficial in the event of a natural or man-made disaster, because there is more control over when and where the data are gathered.

Two common types of airborne image data are:

• AIRSAR
• AVIRIS

AIRSAR

AIRSAR (Airborne Synthetic Aperture Radar) is an experimental airborne radar sensor developed by Jet Propulsion Laboratories (JPL), Pasadena, California, under a contract with NASA. AIRSAR data have been available since 1983. This sensor collects data at three frequencies:

• C-band
• L-band
• P-band

Because this sensor measures at three different wavelengths, different scales of surface roughness are obtained. The AIRSAR sensor has an IFOV of 10 m and a swath width of 12 km. AIRSAR data have been used in many applications such as measuring snow wetness, classifying vegetation, and estimating soil moisture.

NOTE: These data are distributed in a compressed format. They must be decompressed before loading with an algorithm available from JPL. See "Addresses to Contact" on page 85 for contact information.

AVIRIS

The AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) was also developed by JPL (Pasadena, California) under a contract with NASA. AVIRIS data have been available since 1987. This sensor produces multispectral data that have 224 narrow bands. These bands are 10 nm wide and cover the spectral range of 0.4-2.4 µm. The swath width is 11 km and the spatial resolution is 20 m. This sensor is flown at an altitude of approximately 20 km. The data are recorded at 10-bit radiometric resolution.

Image Data from Scanning

Hardcopy maps and photographs can be incorporated into the ERDAS IMAGINE system through the use of a scanning camera. In scanning, the map, photograph, transparency, or other object to be scanned is typically placed on a flat surface, and the camera scans across the object to record the image, transferring it from analog to digital data.

In GIS, scanning refers to the transfer of analog data, such as photographs, maps, or other viewable images, into a digital (raster) format. Scanning is remote sensing in a manner of speaking, but the term “remote sensing” is usually reserved for satellite or aerial data collection.

There are many commonly used scanning cameras for GIS and other desktop applications, such as Eikonix (Eikonix Corp., Huntsville, Alabama) or Vexcel (Vexcel Imaging Corp., Boulder, Colorado). Different scanning systems have different setups for scanning.

Eikonix data can be obtained in the ERDAS IMAGINE .img format using the XSCAN™ Tool by Ektron and then imported directly into ERDAS IMAGINE. Many scanners produce a TIFF file, which can be read directly by ERDAS IMAGINE.

Use the Import/Export function to import scanned data.

ADRG Data

ADRG (ARC Digitized Raster Graphic) data, from the Defense Mapping Agency (DMA), are primarily used for military purposes by defense contractors. The data are in 128 × 128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide large amounts of hardcopy graphic data without having to store and maintain the actual hardcopy graphics.

ADRG data consist of digital copies of DMA hardcopy graphics transformed into the ARC system and accompanied by ASCII encoded support files. These digital copies are produced by scanning each hardcopy graphic into three images: red, green, and blue. The data are scanned at a nominal collection interval of 100 microns (254 lines per inch). When these images are combined, they provide a 3-band digital representation of the original hardcopy graphic.

ARC System

The ARC system (Equal Arc-Second Raster Chart/Map) provides a rectangular coordinate and projection system at any scale for the earth’s ellipsoid, based on the World Geodetic System 1984 (WGS 84). The ARC System divides the surface of the ellipsoid into 18 latitudinal bands called zones. Zones 1-9 cover the Northern hemisphere and zones 10-18 cover the Southern hemisphere. Zone 9 is the North Polar region. Zone 18 is the South Polar region.

Distribution Rectangles

For distribution, ADRG are divided into geographic data sets called Distribution Rectangles (DRs). A DR may include data from one or more source charts or maps. The boundary of a DR is a geographic rectangle which typically coincides with chart and map neatlines.

Zone Distribution Rectangles (ZDRs)

Each DR is divided into Zone Distribution Rectangles (ZDRs). There is one ZDR for each ARC System zone covering any part of the DR. The ZDR contains all the DR data that fall within that zone’s limits. ZDRs typically overlap by 1,024 rows of pixels, which allows for easier mosaicking. Each ZDR is stored on the CD-ROM as a single raster image file (.IMG). Included in each image file are all raster data for a DR from a single ARC System zone, and padding pixels needed to fulfill format requirements. The padding pixels are black and have a zero value. The padding pixels are not imported by ERDAS IMAGINE, nor are they counted when figuring the pixel height and width of each image.

ADRG File Format

Each CD-ROM contains up to eight different file types which make up the ADRG format. ERDAS IMAGINE imports three types of ADRG data files:

• .OVR (Overview)
• .IMG (Image)
• .Lxx (Legend or marginalia data)

NOTE: Compressed ADRG (CADRG) is a different format, with its own importer.

The ADRG .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr file formats.

.OVR (overview)

The overview file contains a 16:1 reduced resolution image of the whole DR. There is an overview file for each DR on a CD.

Importing ADRG Subsets

Since DRs can be rather large, it may be beneficial to import a subset of the DR data for the application. ERDAS IMAGINE enables the user to define a subset of the data from the preview image (see Figure 30). You can import from only one ZDR at a time. If a subset covers multiple ZDRs, they must be imported separately and mosaicked with the ERDAS IMAGINE Mosaic option.

Figure 29: ADRG Overview File Displayed in ERDAS IMAGINE Viewer

The white rectangle in Figure 30 represents the DR. Notice how the ZDRs overlap. The subset area in this illustration would have to be imported as three files, one for each zone in the DR. Therefore, the .IMG files for Zones 2 and 4 would also be included in the subset area.

Figure 30: Subset Area with Overlapping ZDRs (Zones 2, 3, and 4)

.IMG (scanned image data)

The .IMG files are the data files containing the actual scanned hardcopy graphic(s). Each .IMG file contains one ZDR plus padding pixels. The ERDAS IMAGINE Import function converts the .IMG data files on the CD-ROM to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer.

.Lxx (legend data)

Legend files contain a variety of diagrams and accompanying information. This is information which typically appears in the margin or legend of the source graphic. This information can be imported into ERDAS IMAGINE and viewed. It can also be added to a map composition with the ERDAS IMAGINE Map Composer.

Each legend file contains information based on one of these diagram types:

• Index (IN) — shows the approximate geographical position of the graphic and its relationship to other graphics in the region.
• Elevation/Depth Tint (EL) — a multicolored graphic depicting the colors or tints used to represent different elevations or depth bands on the printed map or chart.
• Slope (SL) — represents the percent and degree of slope appearing in slope bands.
• Boundary (BN) — depicts the geopolitical boundaries included on the map or chart.
• Accuracy (HA, VA, AC) — depicts the horizontal and vertical accuracies of selected map or chart areas. AC represents a combined horizontal and vertical accuracy diagram.
• Geographic Reference (GE) — depicts the positioning information as referenced to the World Geographic Reference System.
• Grid Reference (GR) — depicts specific information needed for positional determination with reference to a particular grid system.
• Glossary (GL) — gives brief lists of foreign geographical names appearing on the map or chart with their English-language equivalents.
• Landmark Feature Symbols (LS) — landmark feature symbols are used to depict navigationally-prominent entities.

ARC System Charts

The ADRG data on each CD-ROM are based on one of these chart types from the ARC system:

Table 6: ARC System Chart Types

ARC System Chart Type                              Scale
GNC (Global Navigation Chart)                      1:5,000,000
JNC-A (Jet Navigation Chart - Air)                 1:3,000,000
JNC (Jet Navigation Chart)                         1:2,000,000
ONC (Operational Navigation Chart)                 1:1,000,000
TPC (Tactical Pilot Chart)                         1:500,000
JOG-A (Joint Operations Graphic - Air)             1:250,000
JOG-G (Joint Operations Graphic - Ground)          1:250,000
JOG-C (Joint Operations Graphic - Combined)        1:250,000
JOG-R (Joint Operations Graphic - Radar)           1:250,000
ATC (Series 200 Air Target Chart)                  1:200,000
TLM (Topographic Line Map)                         1:50,000

Each ARC System chart type has certain legend files associated with the image(s) on the CD-ROM. The legend files associated with each chart type are checked in Table 7.

Table 7: Legend Files for the ARC System Chart Types (for each ARC System chart — GNC, JNC/JNC-A, ONC, TPC, JOG-A, JOG-G/JOG-C, JOG-R, ATC, and TLM — the table indicates which of the legend file types IN, EL, SL, BN, VA, HA, AC, GE, GR, GL, and LS are provided)

ADRG File Naming Convention

The ADRG file naming convention is based on a series of codes: ssccddzz

• ss = the chart series code (see the table of ARC System charts)
• cc = the country code
• dd = the DR number on the CD-ROM (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.
• zz = the zone rectangle number (01-18)

For example, in the ADRG filename JNUR0101.IMG:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.
• UR = Europe. The data cover the European continent.
• 01 = This is the first Distribution Rectangle on the CD-ROM, providing coverage of the northwestern edge of the image area.
• 01 = This is the first zone rectangle of the Distribution Rectangle.
• .IMG = This file contains the actual scanned image data for a ZDR.

You may change this name when the file is imported into ERDAS IMAGINE. If you do not specify a file name, IMAGINE will use the ADRG file name for the image.
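A small illustration of how the ssccddzz codes can be pulled apart programmatically. This helper is hypothetical (it is not an ERDAS IMAGINE function) and simply mirrors the naming rules described above.

```python
def parse_adrg_name(filename):
    """Split an ADRG file name of the form ssccddzz.ext into its code fields.

    ss = chart series code, cc = country code, dd = Distribution Rectangle
    number (01-99), zz = zone rectangle number (01-18).
    """
    stem, _, ext = filename.partition(".")
    if len(stem) != 8:
        raise ValueError("expected an 8-character ssccddzz name")
    return {
        "series": stem[0:2],
        "country": stem[2:4],
        "dr_number": int(stem[4:6]),
        "zone_rectangle": int(stem[6:8]),
        "extension": ext,
    }

print(parse_adrg_name("JNUR0101.IMG"))
# {'series': 'JN', 'country': 'UR', 'dr_number': 1, 'zone_rectangle': 1, 'extension': 'IMG'}
```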

Legend File Names

Legend file names include a code to designate the type of diagram information contained in the file (see the previous legend file description). For example, the file JNUR01IN.L01 means:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.
• UR = Europe. The data cover the European continent.
• 01 = This is the first Distribution Rectangle on the CD-ROM, providing coverage of the northwestern edge of the image area.
• IN = This indicates that this file is an index diagram from the original hardcopy graphic.
• .L01 = This legend file contains information for the source graphic 01. The source graphics in each DR are numbered beginning with 01 for the northwesternmost source graphic, increasing sequentially west to east, then north to south. Source directories and their files include this number code within their names.

For more detailed information on ADRG file naming conventions, see the Defense Mapping Agency Product Specifications for ARC Digitized Raster Graphics (ADRG), published by DMA Aerospace Center.

ADRI Data

ADRI (ARC Digital Raster Imagery), like ADRG data, are also from the DMA and are currently available only to Department of Defense contractors. ADRI consists of SPOT panchromatic satellite imagery transformed into the ARC system and accompanied by ASCII encoded support files. The data are in 128 × 128 tiled, 8-bit format, stored on 8 mm tape in band sequential format.

See the previous section on ADRG data for more information on the ARC system.

Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR consists of all or part of one or more images mosaicked to meet the ARC bounding rectangle, which encloses a 1 degree by 1 degree geographic area. (See Figure 31.) In ADRI data, each DR contains only one ZDR, with no overlapping areas. Each ZDR is stored as a single raster image file.

Source images are orthorectified to mean sea level using DMA Level I Digital Terrain Elevation Data (DTED) or equivalent data (Air Force Intelligence Support Agency 1991). See more about DTED data on page 83.

Figure 31: Seamless Nine Image DR (nine source images mosaicked into a single Distribution Rectangle)

There are six different file types that make up the ADRI format: two types of data files, three types of header files, and a color test patch file. ERDAS IMAGINE imports the two types of ADRI data files:

• .OVR (Overview)
• .IMG (Image)

The ADRI .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr file formats.

.OVR (overview)

The overview file (.OVR) contains a 16:1 reduced resolution image of the whole DR. There is an overview file for each DR on a tape. The .OVR images show the mosaicking from the source images and the dates when the source images were collected. (See Figure 32.) This does not appear on the ZDR image.

Figure 32: ADRI Overview File Displayed in ERDAS IMAGINE Viewer

.IMG (scanned image data)

The .IMG files contain the actual mosaicked images. Each .IMG file contains one ZDR plus any padding pixels needed to fit the ARC boundaries. Padding pixels are black and have a zero data value. Padding pixels are not imported, nor are they counted in image height or width. The ERDAS IMAGINE Import function converts the .IMG data files to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer.

ADRI File Naming Convention

The ADRI file naming convention is based on a series of codes: ssccddzz

• ss = the image source code:
  • SP (SPOT panchromatic)
  • SX (SPOT multispectral) (not currently available)
  • TM (Landsat Thematic Mapper) (not currently available)
• cc = the country code
• dd = the DR number on the tape (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.
• zz = the zone rectangle number (01-18)

For example, in the ADRI filename SPUR0101.IMG:

• SP = SPOT 10 m panchromatic image.
• UR = Europe. The data cover the European continent.
• 01 = This is the first Distribution Rectangle on the tape, providing coverage of the northwestern edge of the image area.
• 01 = This is the first zone rectangle of the Distribution Rectangle.
• .IMG = This file contains the actual scanned image data for a ZDR.

You may change this name when the file is imported into ERDAS IMAGINE. If you do not specify a file name, IMAGINE will use the ADRI file name for the image.

Topographic Data

Satellite data can also be used to create elevation, or topographic, data through the use of stereoscopic pairs, as discussed above under SPOT. Radar sensor data can also be a source of topographic information, as discussed in "CHAPTER 9: Terrain Analysis." However, most available elevation data are created with stereo photography and topographic maps. ERDAS IMAGINE software can load and use:

• USGS Digital Elevation Models (DEMs)
• Digital Terrain Elevation Data (DTED)

Arc/Second Format

Most elevation data are in arc/second format. Arc/second refers to data in the Latitude/Longitude (Lat/Lon) coordinate system. Each degree of latitude and longitude is made up of 60 minutes, and each minute is made up of 60 seconds. Arc/second data are often referred to by the number of seconds in each pixel. For example, 3 arc/second data have pixels which are 3 × 3 seconds in size.

The data are not rectangular, but follow the arc of the earth’s latitudinal and longitudinal lines. The actual area represented by each pixel is a function of its latitude. Figure 33 illustrates a 1° × 1° area of the earth: there are 1201 pixels in the first row and 1201 pixels in the last row, but the area represented by each pixel increases in size from the top of the file to the bottom of the file. The extracted section in the example has been exaggerated to illustrate this point.

Figure 33: Arc/Second Format (a 1° × 1° block of 1201 × 1201 pixels bounded by lines of latitude and longitude)

A row of data file values from a DEM or DTED file is called a profile. The profiles of DEM and DTED run south to north; that is, the first pixel of the record is the southernmost pixel.
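To make the latitude dependence concrete, the sketch below approximates the ground footprint of a 3 × 3 arc-second pixel. The spherical earth radius is an assumption made only for illustration; real products are referenced to an ellipsoid.

```python
import math

EARTH_RADIUS_M = 6_371_000  # spherical approximation, for illustration only

def arcsecond_pixel_size(latitude_deg, seconds=3):
    """Approximate east-west and north-south ground size (metres) of an N x N arc-second pixel."""
    angle_rad = math.radians(seconds / 3600.0)
    north_south = EARTH_RADIUS_M * angle_rad
    east_west = north_south * math.cos(math.radians(latitude_deg))
    return east_west, north_south

for lat in (0, 30, 60):
    ew, ns = arcsecond_pixel_size(lat)
    print(f"lat {lat:2d} deg: {ew:5.1f} m E-W x {ns:5.1f} m N-S")
```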

DEM

DEMs are digital elevation model data. DEM was originally a term reserved for elevation data provided by the United States Geological Survey (USGS), but it is now used to describe any digital elevation data. DEMs can be:

• purchased from USGS (for US areas only)
• created from stereopairs (derived from satellite data or aerial photographs)

See "CHAPTER 9: Terrain Analysis" for more information on using DEMs.

USGS DEMs

There are two types of DEMs that are most commonly available from USGS:

• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM coordinate system. It has a spatial resolution of 30 × 30 m.
• 1:250,000 scale, available only in arc/second format.

Both types have a 16-bit range of elevation values, meaning each pixel can have a possible elevation of -32,768 to 32,767.

DEM data are stored in ASCII format. The data file values in ASCII format are stored as ASCII characters rather than as zeros and ones like the data file values in binary data.

DEM data files from USGS are initially oriented so that North is on the right side of the image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any IMAGINE program will be correct.

See "Ordering Raster Data" on page 84 for information on ordering DEMs.

DTED

DTED data are produced by the Defense Mapping Agency (DMA) and are available only to US government agencies and their contractors. DTED data are distributed on 9-track tapes and on CD-ROM. There are two types of DTED data available:

• DTED 1 — a 1° × 1° area of coverage
• DTED 2 — a 1° × 1° or less area of coverage

Both are in arc/second format and are distributed in cells. A cell is a 1° × 1° area of coverage. Both have a 16-bit range of elevation values.

Like DEMs, DTED data files are also oriented so that North is on the right side of the image instead of at the top. IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any ERDAS IMAGINE program will be correct.

Using Topographic Data

Topographic data have many uses in a GIS. For example, topographic data can be used in conjunction with other data to:

• calculate the shortest and most navigable path over a mountain range
• assess the visibility from various lookout points or along roads
• simulate travel through a landscape
• determine rates of snow melt
• orthocorrect satellite or airborne images
• create aspect and slope layers
• provide ancillary data for image classification

See "CHAPTER 9: Terrain Analysis" for more information about using topographic and elevation data.
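The reorientation that the Import process performs on USGS DEM and DTED data amounts to a 90° counterclockwise rotation of the pixel array. A toy numpy illustration (the small array is a stand-in for real elevation data, not an actual DEM read):

```python
import numpy as np

# A toy 3 x 4 "elevation" array standing in for data whose profiles
# run south to north, with north on the right edge of the array.
elev = np.array([[1,  2,  3,  4],
                 [5,  6,  7,  8],
                 [9, 10, 11, 12]], dtype=np.int16)

north_up = np.rot90(elev, k=1)   # rotate 90 degrees counterclockwise
print(north_up)
```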

Ordering Raster Data

Table 8 describes the different Landsat, SPOT, AVHRR, and DEM products that can be ordered. Information in this chart does not reflect all the products that are available, but only the most common types that can be imported into ERDAS IMAGINE.

Table 8: Common Raster Data Products

Data Type                   Ground Covered      Pixel Size      # of Bands    Format
Landsat TM Full Scene       185 × 170 km        28.5 m          7             Fast (BSQ)
Landsat TM Quarter Scene    92.5 × 80 km        28.5 m          7             Fast (BSQ)
Landsat MSS Full Scene      185 × 170 km        79 × 56 m       4             BSQ, BIL
SPOT                        60 × 60 km          10 m and 20 m   1-3           BIL
NOAA AVHRR (LAC)            2700 × 2700 km      1.1 km          1-5           10-bit packed or unpacked
NOAA AVHRR (GAC)            4000 × 4000 km      4 km            1-5           10-bit packed or unpacked
USGS DEM 1:24,000           7.5’ × 7.5’         30 m            1             ASCII (geocoded to UTM)
USGS DEM 1:250,000          1° × 1°             3” × 3”         1             ASCII

Addresses to Contact

For more information about these and related products, contact the agencies below:

• Landsat MSS, TM, and ETM data:
EOSAT International Headquarters
4300 Forbes Blvd., Lanham, MD 20706 USA
Telephone: 1-800-344-9933
Fax: 301-552-0507
Internet: www.eosat.com

• Landsat data:
EROS Data Center
Sioux Falls, SD 57198 USA

• SPOT data:
SPOT Image Corporation
1897 Preston White Dr., Reston, VA 22091-4368 USA
Telephone: 703-620-2200
Fax: 703-648-1813
Internet: www.spot.com

• NOAA AVHRR data:
Satellite Data Service Branch
NOAA/National Environment Satellite, Data, and Information Service
World Weather Building, Room 100
Washington, DC 20233 USA

• AVHRR Dundee Format:
NERC Satellite Station
University of Dundee
Dundee, Scotland DD1 4HN

• Cartographic data, including airphotos, DEMs, planimetric data, maps, space images, and related information from federal, state, and private agencies:
National Cartographic Information Center
U.S. Geological Survey
507 National Center
Reston, VA 22092 USA

• ADRG data (available only to defense contractors):
DMA (Defense Mapping Agency)
ATTN: PMSC Combat Support Center
Washington, DC 20315-0010 USA

• ADRI data (available only to defense contractors):
Rome Laboratory/IRRP
Image Products Branch
Griffiss AFB, NY 13440-5700 USA

• ERS-1 radar data:
  RADARSAT International, Inc.
  265 Carling Ave., Suite 204
  Ottawa, Ontario
  Canada K1S 2E1
  Telephone: 613-238-5424
  Fax: 613-238-5425
  Internet: www.rsi.ca

• RADARSAT data:
  RADARSAT International, Inc.
  265 Carling Ave., Suite 204
  Ottawa, Ontario
  Canada K1S 2E1
  Telephone: 613-238-6413
  Fax: 613-238-5425
  Internet: www.rsi.ca

• U.S. Government RADARSAT sales:
  Joel Porter
  Lockheed Martin Astronautics
  M/S: DC4001
  12999 Deer Creek Canyon Rd.
  Littleton, CO 80127
  Telephone: 303-977-3233
  Fax: 303-971-9827
  email: joel.porter@den.mmc.com

• JERS-1 (Fuyo 1) radar data:
  National Space Development Agency of Japan (NASDA)
  Earth Observation Program Office
  Tokyo 105, Japan
  Telephone: 81-3-5470-4254
  Fax: 81-3-3432-3969

• SIR-A, B, C radar data:
  Jet Propulsion Laboratories
  California Institute of Technology
  4800 Oak Grove Dr.
  Pasadena, CA 91109-8099 USA
  Telephone: 818-354-2386
  Internet: www.jpl.nasa.gov

• Almaz radar data:
  NPO Mashinostroenia
  Scientific Engineering Center "Almaz"
  Gagarin st. 33, Reutov, Moscow Region 143952, Russia
  Telephone: 7.095.307-9194
  Fax: 7.095.302-2001
  Email: npo@mashstroy.su

Raster Data from Other Software Vendors

ERDAS IMAGINE also enables the user to import data created by other software vendors. This way, if another type of digital data system is currently in use, or if data is received from another system, it will easily convert to the ERDAS IMAGINE file format for use in ERDAS IMAGINE. The Import function will directly import these raster data types from other software systems:

• ERDAS Ver. 7.X
• GRID
• Sun Raster
• TIFF

Other data types might be imported using the Generic import option.

ERDAS Ver. 7.X

The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two basic types of ERDAS Ver. 7.X data files are indicated by the file name extensions:

• .LAN — a multiband continuous image file (the name is derived from the Landsat satellite)
• .GIS — a single-band thematic data file in which pixels are divided into discrete categories (the name is derived from geographic information system)

.LAN and .GIS image files are stored in the same format. The image data are arranged in a BIL format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure includes:

• a header record at the beginning of the file
• the data file values
• a statistics or trailer file

When you import a .LAN file, each band becomes a continuous raster layer within the .img file. When you import a .GIS file, it becomes an .img file with one thematic raster layer.

Vector to Raster Conversion

Vector data can also be a source of raster data by converting it to raster format.

Convert a vector layer to a raster layer, or vice versa, by using ERDAS IMAGINE Vector.

GRID

GRID is a raster geoprocessing program distributed by Environmental Systems Research Institute, Inc. (Redlands, California). It was designed to function as a complement to the vector data model system, ARC/INFO, a well-known vector GIS which is also distributed by ESRI. The name is taken from the raster data format of presenting information in a grid of cells. GRID files are in a compressed tiled raster data structure. GRID is a spatial analysis and modeling language that enables the user to perform per-cell, per-neighborhood, per-zone, and per-layer analyses.

Sun Raster

A Sun raster file is an image captured from a monitor display. Sun Raster files can be used in desktop publishing applications or any application where a screen capture would be useful. There are two basic ways to create a Sun raster file on a Sun workstation:

• use the OpenWindows Snapshot application
• use the UNIX screendump command

Both methods read the contents of a frame buffer and write the display data to a user-specified file. Depending on the display hardware and options chosen, screendump can create any of the file types listed in Table 9.

Table 9: File Types Created by Screendump

File Type                            Available Compression
1-bit black and white                None, RLE (run-length encoded)
8-bit color paletted (256 colors)    None, RLE
24-bit RGB true color                None, RLE
32-bit RGB true color                None, RLE

The data are stored in BIP format.

TIFF

The Tagged Image File Format (TIFF) was developed by Aldus Corp. (Seattle, Washington) in 1986 in conjunction with major scanner vendors who needed an easily portable file format for raster image data. Today, in addition to GIS, the TIFF format is a widely supported format used in video, fax transmission, medical imaging, satellite imaging, document storage and retrieval, and desktop publishing applications. In addition, the GEOTIFF extensions permit TIFF files to be geocoded.

The TIFF format's main appeal is its flexibility. It handles black and white line images, as well as gray scale and color images, which can be easily transported between different operating systems and computers.

TIFF File Formats

TIFF's great flexibility can also cause occasional problems in compatibility. This is because TIFF is really a family of file formats that are comprised of a variety of elements within the format. Table 10 shows the most common TIFF format elements. The elements supported in ERDAS IMAGINE are checked.

Any TIFF format that contains an unsupported element may not be compatible with ERDAS IMAGINE.

Table 10: The Most Common TIFF Format Elements

Byte Order:        Intel (LSB/MSB); Motorola (MSB/LSB)
Image Type:        Black and white; Gray scale; Inverted gray scale; Color palette; RGB (3-band)
Configuration:     BIP; BSQ
Bits Per Plane**:  1*; 2*; 3; 4; 5; 6; 7; 8
Compression***:    None; CCITT G3 (B&W only); CCITT G4 (B&W only); Packbits; LZW****; LZW with horizontal differencing****

*Must be imported and exported as 4-bit data.
**All bands must contain the same number of bits (i.e., 4, 4, 4 or 8, 8, 8). Multi-band data assigned to different bits cannot be imported into IMAGINE.
***Compression supported on import only.
****LZW is governed by patents and is not supported by the basic version of IMAGINE.

NOTE: The checked items in Table 10 are supported in IMAGINE.

Vector Data from Other Software Vendors

It is possible to directly import several common vector formats into ERDAS IMAGINE. These files become vector layers when imported. These data can then be used for the analyses and, in most cases, exported back to their original format (if desired). These routines are based on ARC/INFO data conversion routines.

Although data can be converted from one type to another by importing a file into IMAGINE and then exporting the IMAGINE file into another format, in most cases, the import and export routines were designed to work together. For example, if a user has information in AutoCAD that they would like to use in the GIS, they can import a DXF file into ERDAS IMAGINE, do the analysis, and then export the data back to DXF format. In most cases, attribute data are also imported into ERDAS IMAGINE. Each section below lists the types of attribute data that are imported.

Use Import/Export to import vector data from other software vendors into ERDAS IMAGINE vector layers.

See "CHAPTER 2: Vector Layers" for more information on ERDAS IMAGINE vector layers. See "CHAPTER 10: Geographic Information Systems" for more information about using vector data in a GIS.

ARCGEN

ARCGEN files are ASCII files created with the ARC/INFO UNGENERATE command. The import ARCGEN program is used to import features to a new layer. Topology is not created or maintained, therefore the coverage must be built or cleaned after it is imported into ERDAS IMAGINE.

ARCGEN files must be properly prepared before they are imported into ERDAS IMAGINE. If there is a syntax error in the data file, the import process may not work. If this happens, you must kill the process, correct the data file, and then try importing again.

See the ARC/INFO documentation for more information about these files.

AutoCAD (DXF)

AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito, California). AutoCAD is a computer-aided design program that enables the user to draw two- and three-dimensional models. This software is frequently used in architecture, engineering, urban planning, and many other applications.

The AutoCAD Drawing Interchange File (DXF) is the standard interchange format used by most CAD systems. The AutoCAD program DXFOUT will create a DXF file that can be converted to an ERDAS IMAGINE vector layer. AutoCAD files can also be output to IGES format using the AutoCAD program IGESOUT.

See "IGES" on page 94 for more information about IGES files.

DXF files can be converted in the ASCII or binary format. The binary format is an optional format for AutoCAD Releases 10 and 11. It is structured just like the ASCII format, only the data are in binary format.

DXF files are composed of a series of related layers. Each layer contains one or more drawing elements or entities. An entity is a drawing element that can be placed into an AutoCAD drawing with a single command. When converted to an ERDAS IMAGINE vector layer, each entity becomes a single feature. Table 11 describes how various DXF entities are converted to IMAGINE.

Table 11: Conversion of DXF Entries

DXF Entity              IMAGINE Feature   Comments
Line, 3DLine            Line              These entities become two point lines. The initial Z value of 3D entities is stored.
Trace, Solid, 3DFace    Line              These entities become four or five point lines. The initial Z value of 3D entities is stored.
Circle, Arc             Line              These entities form lines. Circles are composed of 361 points—one vertex for each degree. The first and last point is at the same location.
Polyline                Line              These entities can be grouped to form a single line having many vertices.
Point, Shape            Point             These entities become point features in a layer.

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported DXF file is exported back to DXF format, this information will also be exported.

Refer to an AutoCAD manual for more information about the format of DXF files.

DLG

Digital Line Graphs (DLG) are furnished by the U.S. Geological Survey and provide planimetric base map information, such as transportation, hydrography, contours, and public land survey boundaries. DLG files are available for the following USGS map series:

• 7.5- and 15-minute topographic quadrangles
• 1:100,000-scale quadrangles
• 1:2,000,000-scale national atlas maps

Most DLGs are in the Universal Transverse Mercator map projection. However, the 1:2,000,000 scale series are in geographic coordinates.

DLGs are topological files that contain nodes, lines, and areas (similar to the points, lines, and polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute information in the form of major and minor code pairs. Code pairs are encoded in two integer fields, each containing six digits. The major code describes the class of the feature (road, stream, etc.) and the minor code stores more specific information about the feature.

DLGs can be imported in standard format (144 bytes per record) and optional format (80 bytes per record). The user can export to DLG-3 optional format.

The ERDAS IMAGINE import process also imports point, line, and polygon attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes), PCODE (polygon attributes), and XCODE (point attributes) files. If an imported DLG file is exported back to DLG format, this information will also be exported.

To maintain the topology of a vector layer created from a DLG file, you must Build or Clean it. See "CHAPTER 10: Geographic Information Systems" for information on this process.

ETAK

ETAK's MapBase is an ASCII digital street centerline map product available from ETAK, Inc. (Menlo Park, California). ETAK files are similar in content to the Dual Independent Map Encoding (DIME) format used by the U.S. Census Bureau, and are useful for building address coverages.

Each record represents a single linear feature with address and political, census, and ZIP code boundary information. ETAK has also included road class designations and, in some areas, major landmark features. There are four possible types of ETAK features:

• DIME or D types — if the feature type is D, a line is created along with a corresponding ACODE (arc attribute) record. The coordinates are stored in Lat/Lon decimal degrees.
• Alternate address or A types — each record contains an alternate address record for a line. These records are written to the attribute file.
• Shape features or S types — shape records are used to add vertices to the lines. The coordinates for these features are in Lat/Lon decimal degrees.
• Landmark or L types — if the feature type is L and the user opts to output a landmark layer, then a point feature is created along with an associated PCODE record.

ERDAS IMAGINE vector data cannot be exported to ETAK format.

IGES

Initial Graphics Exchange Standard (IGES) files are often used to transfer CAD data between systems. IGES Version 3.0 format, published by the U.S. Department of Commerce, is in uncompressed ASCII format only. IGES files can be produced in AutoCAD using the IGESOUT command. The following IGES entities can be converted:

Table 12: Conversion of IGES Entities

IGES Entity                               IMAGINE Feature
IGES Entity 100 (Circular Arc Entities)   Lines
IGES Entity 106 (Copious Data Entities)   Lines
IGES Entity 106 (Line Entities)           Lines
IGES Entity 116 (Point Entities)          Points

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported IGES file is exported back to IGES format, this information will also be exported.


TIGER

Topologically Integrated Geographic Encoding and Referencing System (TIGER) files are line network products of the U.S. Census Bureau. The Census Bureau is using the TIGER system to create and maintain a digital cartographic database that covers the United States, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust Territories of the Pacific.

TIGER/Line is the line network product of the TIGER system. The cartographic base is taken from Geographic Base File/Dual Independent Map Encoding (GBF/DIME), where available, and from the USGS 1:100,000-scale national map series, SPOT imagery, and a variety of other sources in all other areas, in order to have continuous coverage for the entire United States. In addition to line segments, TIGER files contain census geographic codes and, in metropolitan areas, address ranges for the left and right sides of each segment. TIGER files are available in ASCII format on both CD-ROM and tape media. All released versions after April 1989 are supported.

There is a great deal of attribute information provided with TIGER/Line files. Line and point attribute information can be converted into ERDAS IMAGINE format. The ERDAS IMAGINE import process creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported TIGER file is exported back to TIGER format, this information will also be exported.

TIGER attributes include the following:

• Version numbers—TIGER/Line file version number
• Permanent record numbers—each line segment is assigned a permanent record number that is maintained throughout all versions of TIGER/Line files
• Source codes—each line and landmark point feature is assigned a code to specify the original source
• Census feature class codes—line segments representing physical features are coded based on the USGS classification codes in DLG-3 files
• Street attributes—includes street address information for selected urban areas
• Legal and statistical area attributes—legal areas include states, counties, townships, towns, incorporated cities, Indian reservations, and national parks. Statistical areas are areas used during the census-taking, where legal areas are not adequate for reporting statistics.
• Political boundaries—the election precincts or voting districts may contain a variety of areas, including wards, legislative districts, and election districts.
• Landmarks—landmark area and point features include schools, military installations, airports, hospitals, mountain peaks, campgrounds, rivers, and lakes

TIGER files for major metropolitan areas outside of the United States (e.g., Puerto Rico, Guam) do not have address ranges.


Disk Space Requirements

TIGER/Line files are partitioned into counties ranging in size from less than a megabyte to almost 120 megabytes. The average size is approximately 10 megabytes. To determine the amount of disk space required to convert a set of TIGER/Line files, use this rule: the size of the converted layers is approximately the same size as the files used in the conversion. The amount of additional scratch space needed depends on the largest file and whether it will need to be sorted. The amount usually required is about double the size of the file being sorted.
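The rule of thumb above can be written out as a quick estimate. The following Python sketch is only illustrative; the file sizes and the need to sort are values supplied by the user, not read from the TIGER/Line files themselves.

    # Rough disk space estimate for converting a set of TIGER/Line files.
    def tiger_conversion_space_mb(file_sizes_mb, largest_needs_sort=True):
        converted = sum(file_sizes_mb)      # converted layers ~ same size as the input files
        scratch = 2 * max(file_sizes_mb) if largest_needs_sort else 0
        return converted + scratch

    # Example: three county files of 10, 45, and 118 MB
    print(tiger_conversion_space_mb([10, 45, 118]))   # 173 + 236 = 409 MB total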

The information presented in this section, "Vector Data from Other Software Vendors", was obtained from the Data Conversion and the 6.0 ARC Command References manuals, both published by ESRI, Inc., 1992.


CHAPTER 4 Image Display

Introduction

This section defines some important terms that are relevant to image display. Most of the terminology and definitions used in this chapter are based on the X Window System (Massachusetts Institute of Technology) terminology. This may differ from other systems, such as Microsoft Windows NT.

A seat is a combination of an X-server and a host workstation.

• A host workstation consists of a CPU, keyboard, mouse, and a display.
• A display may consist of multiple screens. These screens work together, making it possible to move the mouse from one screen to the next.
• The display hardware contains the memory that is used to produce the image. This hardware determines which types of displays are available (e.g., true color or pseudo color) and the pixel depth (e.g., 8-bit or 24-bit).

Figure 34: Example of One Seat with One Display and Two Screens

Display Memory Size

The size of memory varies for different displays. It is expressed in terms of:

• display resolution, which is expressed as the horizontal and vertical dimensions of memory—the number of pixels that can be viewed on the display screen. Some typical display resolutions are 1152 × 900, 1280 × 1024, and 1024 × 768. For the PC, typical resolutions are 640 × 480, 800 × 600, 1024 × 768, and 1280 × 1024.
• the number of bits for each pixel or pixel depth, as explained below.


Bits for Image Plane

A bit is a binary digit, meaning a number that can have two possible values—0 and 1, or "off" and "on." A set of bits, however, can have many more values, depending upon the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. For example, the number of values that can be expressed by 3 bits is 8 (2³ = 8).

Displays are referred to in terms of a number of bits, such as 8-bit or 24-bit. These bits are used to determine the number of possible brightness values. For example, in a 24-bit display, 24 bits per pixel breaks down to eight bits for each of the three color guns per pixel. The number of possible values that can be expressed by eight bits is 2⁸, or 256. Therefore, on a 24-bit display, each color gun of a pixel can have any one of 256 possible brightness values, expressed by the range of values 0 to 255. The combination of the three color guns, each with 256 possible brightness values, yields 256³ (or 2²⁴ for the 24-bit image display), or 16,777,216 possible colors for each pixel on a 24-bit display. If the display being used is not 24-bit, the example above can be used to calculate the number of possible brightness values and colors that can be displayed.

Pixel

The term pixel is abbreviated from picture element. As an element, a pixel is the smallest part of a digital picture (image). Raster image data are divided by a grid, in which each cell of the grid is represented by a pixel. A pixel is also called a grid cell.

Pixel is a broad term that is used for both:

• the data file value(s) for one data unit in an image (file pixels), or
• one grid location on a display or printout (display pixels).

Usually, one pixel in a file corresponds to one pixel in a display or printout. However, an image can be magnified or reduced so that one file pixel no longer corresponds to one pixel in the display or printout. For example, if an image is displayed with a magnification factor of 2, then one file pixel will take up 4 (2 × 2) grid cells on the display screen.

To display an image, a file pixel that consists of one or more numbers must be transformed into a display pixel with properties that can be seen, such as brightness and color. Whereas the file pixel has values that are relevant to data (such as wavelength of reflected light), the displayed pixel must have a particular color or gray level that represents these data file values.

Colors

Human perception of color comes from the relative amounts of red, green, and blue light that are measured by the cones (sensors) in the eye. Red, green, and blue light can be added together to produce a wide variety of colors—a wider variety than can be formed from the combinations of any three other colors. Red, green, and blue are therefore the additive primary colors.

A nearly infinite number of shades can be produced when red, green, and blue light are combined. On a display, different colors (combinations of red, green, and blue) allow the user to perceive changes across an image. Color displays that are available today yield 2²⁴, or 16,777,216 colors. Each color has a possible 256 different values (2⁸).
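The relationship between pixel depth and the number of displayable values can be checked with a few lines of Python, which is used here purely for illustration and is not part of ERDAS IMAGINE:

    # The number of values expressible by a set of bits is 2 to the power of the bit count.
    bits_per_gun = 8
    values_per_gun = 2 ** bits_per_gun          # 256 brightness values (0-255)

    # A 24-bit display uses 8 bits for each of the three color guns.
    total_colors = values_per_gun ** 3          # 256**3 == 2**24 == 16,777,216
    print(values_per_gun, total_colors)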


Color Guns

On a display, color guns direct electron beams that fall on red, green, and blue phosphors. The phosphors glow at certain frequencies to produce different colors. Color monitors are often called RGB monitors, referring to the primary colors.

The red, green, and blue phosphors on the picture tube appear as tiny colored dots on the display screen. The human eye integrates these dots together, and combinations of red, green, and blue are perceived. Each pixel is represented by an equal number of red, green, and blue phosphors.

Brightness Values

Brightness values (or intensity values) are the quantities of each primary color to be output to each displayed pixel. When an image is displayed, brightness values are calculated for all three color guns, for every pixel. All of the colors that can be output to a display can be expressed with three brightness values—one for each color gun.

Colormap and Colorcells

A color on the screen is created by a combination of red, green, and blue values, where each of these components is represented as an 8-bit value. Therefore, 24 bits are needed to represent a color. Since many systems have only an 8-bit display, a colormap is used to translate the 8-bit value into a color. A colormap is an ordered set of colorcells, which is used to perform a function on a set of input values. To display or print an image, the colormap translates data file values in memory into brightness values for each color gun. Colormaps are not limited to 8-bit displays.

Colormap vs. Lookup Table

The colormap is a function of the display hardware, whereas a lookup table is a function of ERDAS IMAGINE. When a contrast adjustment is performed on an image in IMAGINE, lookup tables are used. However, if the auto-update function is being used to view the adjustments in near real-time, then the colormap is being used to map the image through the lookup table. This process allows the colors on the screen to be updated in near real-time. This chapter explains how the colormap is used to display imagery.

Colorcells

There is a colorcell in the colormap for each data file value. The red, green, and blue values assigned to the colorcell control the brightness of the color guns for the displayed pixel (Nye 1990). The number of colorcells in a colormap is determined by the number of bits in the display (e.g., 8-bit, 24-bit).


For example, if a pixel with a data file value of 40 was assigned a display value (colorcell value) of 24, then this pixel would use the brightness values for the 24th colorcell in the colormap. In the colormap below (Table 13), this pixel would be displayed as blue.

Table 13: Colorcell Example

Colorcell Index    Red    Green    Blue
1                  255    0        0
2                  0      90       170
3                  0      0        0
24                 0      0        255

The colormap is controlled by the window system. There are 256 colorcells in a colormap with an 8-bit display. This means that 256 colors can be displayed simultaneously on the display. With a 24-bit display, there are 256 colorcells for each color: red, green, and blue. This offers 256 × 256 × 256, or 16,777,216, different colors.

When an application requests a color, the server will specify which colorcell contains that color and will return the color. Colorcells can be read-only or read/write.

Read-Only Colorcells

The color assigned to a read-only colorcell can be shared by other application windows, but it cannot be changed once it is set. To change the color of a pixel on the display, it would not be possible to change the color for the corresponding colorcell. Instead, the pixel value would have to be changed and the image redisplayed. For this reason, it is not possible to use auto update operations in ERDAS IMAGINE with read-only colorcells.

Read/Write Colorcells

The color assigned to a read/write colorcell can be changed, but it cannot be shared by other application windows. An application can easily change the color of displayed pixels by changing the color for the colorcell that corresponds to the pixel value. This allows applications to use auto update operations. However, this colorcell cannot be shared by other application windows, and all of the colorcells in the colormap could quickly be utilized.

Changeable Colormaps

Some colormaps can have both read-only and read/write colorcells. This type of colormap allows applications to utilize the type of colorcell that would be most preferred.
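The role of a colormap can be sketched as a simple lookup structure. The Python sketch below is a conceptual illustration only; it does not reproduce the X server or ERDAS IMAGINE internals, and the colormap contents are hypothetical apart from colorcell 24, which follows the Table 13 example.

    # A colormap is an ordered set of colorcells; each colorcell holds the
    # red, green, and blue brightness values for one displayable color.
    colormap = [(0, 0, 0)] * 256       # 256 colorcells for an 8-bit display
    colormap[24] = (0, 0, 255)         # the 24th colorcell is set to blue

    def display_pixel(colorcell_value, colormap):
        """Translate a colorcell value into brightness values for the color guns."""
        return colormap[colorcell_value]

    # A pixel assigned colorcell value 24 is displayed with the blue colorcell.
    print(display_pixel(24, colormap))   # (0, 0, 255)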


Display Types

The possible range of different colors is determined by the display type. ERDAS IMAGINE supports the following types of displays:

• 8-bit PseudoColor
• 15-bit HiColor (for Windows NT)
• 24-bit DirectColor
• 24-bit TrueColor

The above display types are explained in more detail below.

A display may offer more than one visual type and pixel depth. See "ERDAS IMAGINE 8.3 Installing and Configuring" for more information on specific display hardware.

32-bit Displays

A 32-bit display is a combination of an 8-bit PseudoColor and 24-bit DirectColor or TrueColor display. Whether or not it is DirectColor or TrueColor depends on the display hardware.


8-bit PseudoColor

An 8-bit PseudoColor display has a colormap with 256 colorcells. Each cell has a red, green, and blue brightness value, giving 256 combinations of red, green, and blue. The data file value for the pixel is transformed into a colorcell value. The brightness values for the colorcell that is specified by this colorcell value are used to define the color to be displayed.

Figure 35: Transforming Data File Values to a Colorcell Value (the red, green, and blue band values for a pixel are combined into a single colorcell value; in the example, colorcell 4 holds the brightness values 0, 0, 255, producing a blue pixel)

In Figure 35, data file values for a pixel of three continuous raster layers (bands) are transformed to a colorcell value. Since the colorcell value is four, the pixel is displayed with the brightness values of the fourth colorcell (blue).

This display grants a small number of colors to ERDAS IMAGINE. It works well with thematic raster layers containing less than 200 colors and with gray scale continuous raster layers. For image files with three continuous raster layers (bands), the colors will be severely limited because, under ideal conditions, 256 colors are available on an 8-bit display, while 8-bit, 3-band image files can contain over 16,000,000 different colors.

Auto Update

An 8-bit PseudoColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform near real-time color modifications using Auto Update and Auto Apply options.


24-bit DirectColor

A 24-bit DirectColor display enables the user to view up to three bands of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. Since this is a 24-bit display, it offers up to 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³). The data file values for each band are transformed into colorcell values. The colorcell that is specified by these values is used to define the color to be displayed.

Figure 36: Transforming Data File Values to a Colorcell Value (each band's data file value is transformed to its own colorcell value: red band colorcell 1, green band colorcell 2, blue band colorcell 6, giving brightness values 0, 90, and 200 and a blue-green pixel)

In Figure 36, data file values for a pixel of three continuous raster layers (bands) are transformed to separate colorcell values for each band. Since the colorcell value is 1 for the red band, 2 for the green band, and 6 for the blue band, the RGB brightness values are 0, 90, 200. This displays the pixel as a blue-green color.

This type of display grants a very large number of colors to ERDAS IMAGINE and it works well with all types of data.

Auto Update

A 24-bit DirectColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform real-time color modifications using the Auto Update and Auto Apply options.


24-bit TrueColor

A 24-bit TrueColor display enables the user to view up to three continuous raster layers (bands) of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. The data file values for the pixels are transformed into screen values and the colors are based on these values. Therefore, the color for the pixel is calculated without querying the server and the colormap. The colormap for a 24-bit TrueColor display is not available for ERDAS IMAGINE applications. Once a color is assigned to a screen value, it cannot be changed, but the color can be shared by other applications. The screen values are used as the brightness values for the red, green, and blue color guns. Since this is a 24-bit display, it offers 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³).

Figure 37: Transforming Data File Values to Screen Values (each band's data file value becomes a screen value directly: red 0, green 90, blue 200, producing a blue-green pixel)

In Figure 37, data file values for a pixel of three continuous raster layers (bands) are transformed to separate screen values for each band. Since the screen value is 0 for the red band, 90 for the green band, and 200 for the blue band, the RGB brightness values are 0, 90, and 200. This displays the pixel as a blue-green color.

Auto Update

The 24-bit TrueColor display does not use the colormap in ERDAS IMAGINE, and thus does not provide IMAGINE with any real-time color changing capability. Each time a color is changed, the screen values must be calculated and the image must be re-drawn.

Color Quality

The 24-bit TrueColor visual provides the best color quality possible with standard equipment. There is no color degradation under any circumstances with this display.


PC Displays

ERDAS IMAGINE for Microsoft Windows NT supports the following visual types and pixel depths:

• 8-bit PseudoColor
• 15-bit HiColor
• 24-bit TrueColor

8-bit PseudoColor

An 8-bit PseudoColor display for the PC uses the same type of colormap as the X Windows 8-bit PseudoColor display, except that each colorcell has a range of 0 to 63 on most video display adapters, instead of 0 to 255. Therefore, each colorcell has a red, green, and blue brightness value, giving 64 possible brightness values for each of red, green, and blue. The colormap, however, is the same as the X Windows 8-bit PseudoColor display. It has 256 colorcells, allowing 256 different colors to be displayed simultaneously.

15-bit HiColor

A 15-bit HiColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display, except that it offers 32 shades of red, 32 shades of green, and 32 shades of blue, for a total of 32,768 possible color combinations. Some video display adapters allocate 6 bits to the green color gun, allowing approximately 64 thousand (65,536) colors. These adapters use a 16-bit color scheme.

24-bit TrueColor

A 24-bit TrueColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display.
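A 15-bit HiColor value packs 5 bits of each primary color into one 16-bit word, and the 16-bit variant gives the green gun 6 bits. The packing can be illustrated with a short Python sketch; the bit layout shown (red in the high bits) is an assumption for illustration, since adapters differ.

    # Pack 5-bit red, green, and blue components (0-31 each) into a 15-bit value.
    def pack_15bit(r5, g5, b5):
        return (r5 << 10) | (g5 << 5) | b5        # 32 x 32 x 32 = 32,768 colors

    # The 16-bit (5-6-5) scheme gives the green gun 6 bits (0-63).
    def pack_16bit(r5, g6, b5):
        return (r5 << 11) | (g6 << 5) | b5        # 32 x 64 x 32 = 65,536 colors

    print(hex(pack_15bit(31, 0, 31)), hex(pack_16bit(31, 0, 31)))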


Displaying Raster Layers

Image files (.img) are raster files in the IMAGINE format. There are two types of raster layers:

• continuous
• thematic

Thematic raster layers require a different display process than continuous raster layers. This section explains how each raster layer type is displayed.

Continuous Raster Layers

An image file (.img) can contain several continuous raster layers, and therefore, each pixel can have multiple data file values. When displaying an image file with continuous raster layers, it is possible to assign which layers (bands) are to be displayed with each of the three color guns. The data file values in each layer are input to the assigned color gun. The most useful color assignments are those that allow for an easy interpretation of the displayed image. For example:

• a natural-color image will approximate the colors that would appear to a human observer of the scene.
• a color-infrared image shows the scene as it would appear on color-infrared film, which is familiar to many analysts.

Band assignments are often expressed in R,G,B order. For example, the assignment 4,2,1 means that band 4 is assigned to red, band 2 to green, and band 1 to blue. Below are some widely used band to color gun assignments (Faust 1989):

• Landsat TM - natural color: 3,2,1
  This is natural color because band 3 = red and is assigned to the red color gun, band 2 = green and is assigned to the green color gun, and band 1 is blue and is assigned to the blue color gun.
• Landsat TM - color-infrared: 4,3,2
  This is infrared because band 4 = infrared.
• SPOT Multispectral - color-infrared: 3,2,1
  This is infrared because band 3 = infrared.


Contrast Table

When an image is displayed, ERDAS IMAGINE automatically creates a contrast table for continuous raster layers. The red, green, and blue brightness values for each band are stored in this table.

Since the data file values in continuous raster layers are quantitative and related, the brightness values in the colormap are also quantitative and related. The screen pixels represent the relationships between the values of the file pixels by their colors. For example, a screen pixel that is bright red has a high brightness value in the red color gun, and a high data file value in the layer assigned to red, relative to other data file values in that layer.


The brightness values often differ from the data file values, but they usually remain in the same order of lowest to highest. Some meaningful relationships between the values are usually maintained.

Contrast Stretch

Different displays have different ranges of possible brightness values. The range of most displays is 0 to 255 for each color gun.

Since the data file values in a continuous raster layer often represent raw data (such as elevation or an amount of reflected light), the range of data file values is often not the same as the range of brightness values of the display. When these values are used as brightness values, the contrast of the displayed image is poor. Therefore, a contrast stretch is usually performed, which stretches the range of the values to fit the range of the display.

A contrast stretch simply "stretches" the range between the lower and higher data file values, so that the contrast of the displayed image is higher—that is, lower data file values are displayed with the lowest brightness values, and higher data file values are displayed with the highest brightness values.

Figure 38 shows a layer that has data file values from 30 to 40. The colormap stretches the range of colorcell values from 30 to 40 to the range 0 to 255. Since the output values are incremented at regular intervals, this stretch is a linear contrast stretch. (The numbers in Figure 38 are approximations and do not show an exact linear relationship.)

Figure 38: Contrast Stretch and Colorcell Values (input colorcell values 30 to 40 mapped to output brightness values 0 to 255)

See "CHAPTER 5: Enhancement" for more information about contrast stretching. Contrast stretching is performed the same way for display purposes as it is for permanent image enhancement.
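The linear stretch in Figure 38 can be written out directly. The Python sketch below assumes a simple min/max linear mapping; it is an illustration of the idea, not the exact table that ERDAS IMAGINE builds.

    # Linearly stretch data file values in the range 30-40 to brightness values 0-255.
    def linear_stretch(value, in_min=30, in_max=40, out_min=0, out_max=255):
        value = min(max(value, in_min), in_max)                   # clip to the input range
        scale = (out_max - out_min) / (in_max - in_min)
        return round(out_min + (value - in_min) * scale)

    print([linear_stretch(v) for v in range(30, 41)])
    # [0, 26, 51, 76, 102, 128, 153, 178, 204, 230, 255] (approximately the Figure 38 values)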

Statistics Files

To perform a contrast stretch, certain statistics are necessary, such as the mean and the standard deviation of the data file values in each layer.

Use the Image Information utility to create and view statistics for a raster layer.

Usually, not all of the data file values are used in the contrast stretch calculations. The minimum and maximum data file values of each band are often too extreme to produce good results. When the minimum and maximum are extreme in relation to the rest of the data, then the majority of data file values are not stretched across a very wide range, and the displayed image has low contrast.

Figure 39: Stretching by Min/Max vs. Standard Deviation (panels: Original Histogram, Min/Max Stretch, Standard Deviation Stretch; values stretched below 0 or above 255 are not displayed)

The mean and standard deviation of the data file values for each band are used to locate the majority of the data file values. The number of standard deviations above and below the mean can be entered, which determines the range of data used in the stretch.

See "APPENDIX A: Math Topics" for more information on mean and standard deviation.

A two standard deviation linear contrast stretch is applied to stretch pixel values from 0 to 255 of all .img files before they are displayed in the Viewer, unless a saved contrast stretch exists (the file is not changed). This often improves the initial appearance of the data in the Viewer.
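A standard deviation stretch clips the data at a chosen number of standard deviations around the mean before the linear mapping is applied. The following Python sketch shows the idea for a two standard deviation stretch; it is a simplified illustration, not the IMAGINE implementation.

    # Two standard deviation linear contrast stretch of a band to 0-255.
    def std_dev_stretch(values, n_std=2):
        mean = sum(values) / len(values)
        std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        low, high = mean - n_std * std, mean + n_std * std
        out = []
        for v in values:
            v = min(max(v, low), high)              # values beyond +/- 2 std dev saturate
            out.append(round((v - low) / (high - low) * 255))
        return out

    print(std_dev_stretch([10, 12, 13, 14, 15, 16, 90]))   # the outlier 90 saturates at 255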

Use the Contrast Tools dialog, which is accessible from the Lookup Table Modification dialog, to enter the number of standard deviations to be used in the contrast stretch.

24-bit DirectColor and TrueColor Displays

Figure 40 illustrates the general process of displaying three continuous raster layers on a 24-bit DirectColor display. The process would be similar on a TrueColor display except that the colormap would not be used.

Figure 40: Continuous Raster Layer Display Process (band-to-color gun assignments, histograms of each band, ranges of data file values to be displayed, colormap, brightness values in each color gun, color display)

8-bit PseudoColor Display

When displaying continuous raster layers on an 8-bit PseudoColor display, the data file values from the red, green, and blue bands are combined and transformed to a colorcell value in the colormap. This colorcell then provides the red, green, and blue brightness values. Since there are only 256 colors available, a continuous raster layer looks different when it is displayed in an 8-bit display than in a 24-bit display that offers 16 million different colors. However, the ERDAS IMAGINE Viewer performs dithering with the available colors in the colormap to let a smaller set of colors appear to be a larger set of colors.

See "Dithering" on page 116 for more information.

Thematic Raster Layers

A thematic raster layer generally contains pixels that have been classified, or put into distinct categories. Each data file value is a class value, which is simply a number for a particular category. A thematic raster layer is stored in an image (.img) file. Only one data file value—the class value—is stored for each pixel.

Since these class values are not necessarily related, the gradations that are possible in true color mode are not usually useful in pseudo color. The class system gives the thematic layer a discrete look, in which each class can have its own color.

Color Table

When a thematic raster layer is displayed, ERDAS IMAGINE automatically creates a color table. The red, green, and blue brightness values for each class are stored in this table.

RGB Colors

Individual color schemes can be created by combining red, green, and blue in different combinations, and assigning colors to the classes of a thematic layer.

Colors can be expressed numerically, as the brightness values for each color gun. Brightness values of a display generally range from 0 to 255; however, IMAGINE translates the values from 0 to 1. The maximum brightness value for the display device is scaled to 1. The colors listed in Table 14 are based on the range that would be used to assign brightness values in ERDAS IMAGINE. Table 14 contains only a partial listing of commonly used colors. Over 16 million colors are possible on a 24-bit display.

Table 14: Commonly Used RGB Colors

Color           Red     Green   Blue
Red             1       0       0
Red-Orange      1       .392    0
Orange          .608    .588    0
Yellow          1       1       0
Yellow-Green    .490    1       0
Green           0       1       0
Cyan            0       1       1
Blue            0       0       1
Blue-Violet     .392    0       .471
Violet          .588    0       .588
Black           0       0       0
White           1       1       1
Gray            .498    .498    .498
Brown           .373    .227    0

NOTE: Black is the absence of all color (0,0,0) and white is created from the highest values of all three colors (1,1,1).

To lighten a color, increase all three brightness values. To darken a color, decrease all three brightness values.

Use the Raster Attribute Editor to create your own color scheme.

Display a thematic raster layer from the ERDAS IMAGINE Viewer.

24-bit DirectColor and TrueColor Displays

Figure 41 illustrates the general process of displaying thematic raster layers on a 24-bit DirectColor display. The process would be similar on a TrueColor display except that the colormap would not be used.

Figure 41: Thematic Raster Layer Display Process (original image by class; a color scheme assigning red, orange, yellow, violet, and green to classes 1 through 5; class values mapped through the colormap to brightness values in each color gun; color display)

8-bit PseudoColor Display

The colormap is a limited resource which is shared among all of the applications that are running concurrently. Due to the limited resources, ERDAS IMAGINE does not typically have access to the entire colormap.

Using the IMAGINE Viewer

The ERDAS IMAGINE Viewer is a window for displaying raster, vector, and annotation layers. The user can open as many Viewer windows as their window manager supports.

NOTE: The more Viewers that are opened simultaneously, the more RAM memory is necessary.

The ERDAS IMAGINE Viewer not only makes digital images visible quickly, but it can also be used as a tool for image processing and raster GIS modeling. The uses of the Viewer are listed briefly in this section, and described in greater detail in other chapters of the ERDAS Field Guide.

Colormap

ERDAS IMAGINE does not use the entire colormap because there are other applications that also need to use it, including the window manager, terminal windows, ARC View, or a clock. Therefore, there are some limitations to the number of colors that the Viewer can display simultaneously, and flickering may occur as well.

Color Flickering

If an application requests a new color that does not exist in the colormap, the server will assign that color to an empty colorcell. However, if there are not any available colorcells and the application requires a private colorcell, then a private colormap will be created for the application window. Since this is a private colormap, when the cursor is moved out of the window, the server will use the main colormap and the brightness values assigned to the colorcells. So, the colors in the private colormap will not be applied and the screen will flicker. Once the cursor is moved into the application window, the correct colors will be applied for that window.

Resampling

When a raster layer(s) is displayed, the file pixels may be resampled for display on the screen. Resampling is used to calculate pixel values when one raster grid must be fitted to another. When a raster layer(s) is displayed, the raster grid defined by the file must be fit to the grid of screen pixels in the Viewer.

All Viewer operations are file-based. So, any time an image is resampled in the Viewer, the Viewer uses the file as its source. If the raster layer is magnified or reduced, the Viewer re-fits the file grid to the new screen grid.

The resampling methods available are:

• Nearest Neighbor—uses the value of the closest pixel to assign to the output pixel value.
• Bilinear Interpolation—uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.
• Cubic Convolution—uses the data file values of 16 pixels in a 4 × 4 window to calculate an output value with a cubic function.

The default resampling method is Nearest Neighbor.

These are discussed in detail in "CHAPTER 8: Rectification".

Preference Editor

The ERDAS IMAGINE Preference Editor enables the user to set parameters for the ERDAS IMAGINE Viewer that affect the way the Viewer operates.

See the ERDAS IMAGINE On-Line Help for the Preference Editor for information on how to set preferences for the Viewer.

Pyramid Layers

Sometimes a large .img file may take a long time to display in the ERDAS IMAGINE Viewer or to be resampled by an application. The Pyramid Layer option enables the user to display large images faster and allows certain applications to rapidly access the resampled data. Pyramid layers are image layers which are copies of the original layer successively reduced by the power of 2 and then resampled.

If the raster layer is continuous, it is resampled by a method that is similar to cubic convolution. The data file values for sixteen pixels in a 4 × 4 window are used to calculate an output data file value with a filter function. If the raster layer is thematic, then it is resampled using the Nearest Neighbor method.

See "CHAPTER 8: Rectification" for more information on Nearest Neighbor.

The number of pyramid layers created depends on the size of the original image. A larger image will produce more pyramid layers. When the Create Pyramid Layer option is selected, ERDAS IMAGINE automatically creates successively reduced layers until the final pyramid layer can be contained in one block. The default block size is 64 × 64 pixels.

See "CHAPTER 1: Raster Data" for information on block size.

Pyramid layers are added as additional layers in the .img file. The file size is increased by approximately one-third when pyramid layers are created. The actual increase in file size can be determined by multiplying the layer size by this formula:

    1/4 + 1/4^2 + ... + 1/4^n

where:

n = number of pyramid layers

Pyramid layers do not appear as layers which can be processed; they are for viewing purposes only. Therefore, they will not appear as layers in other parts of the ERDAS IMAGINE system (e.g., the Arrange Layers dialog).

The pyramid layer option is available from Import and the Image Information utility.

For example, a file which is 4K × 4K pixels could take a long time to display when the image is fit to the Viewer. The Pyramid Layer option creates additional layers successively reduced from 4K × 4K, to 2K × 2K, 1K × 1K, 512 × 512, 128 × 128, down to 64 × 64. ERDAS IMAGINE then selects the pyramid layer size most appropriate for display in the Viewer window when the image is displayed; IMAGINE selects the pyramid layer that will display the fastest in the Viewer.

Figure 42: Pyramid Layers (Viewer Window; original image 4K × 4K; pyramid layers 2K × 2K, 1K × 1K, 512 × 512, 128 × 128, and 64 × 64 stored in the .img file)

Pyramid layers can be deleted through the Image Information utility. However, when pyramid layers are deleted, they are not removed from the .img file, so the .img file size will not change; ERDAS IMAGINE will utilize this file space, if necessary. Pyramid layers are deleted from viewing and resampling access only; that is, they can no longer be viewed or used in an application.
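The pyramid layer arithmetic can be checked with a short Python sketch. The block size and image dimensions are simply the example values used above; this is not how IMAGINE computes the layers internally.

    # Each pyramid layer halves the rows and columns of the layer before it,
    # so each holds about 1/4 as many pixels. Layers are created until one
    # fits within a 64 x 64 block.
    def pyramid_sizes(rows, cols, block=64):
        sizes = []
        while rows > block or cols > block:
            rows, cols = (rows + 1) // 2, (cols + 1) // 2
            sizes.append((rows, cols))
        return sizes

    layers = pyramid_sizes(4096, 4096)
    print(layers)                     # [(2048, 2048), (1024, 1024), ..., (64, 64)]

    # Extra storage relative to the original layer: sum of 1/4**i over the pyramid layers.
    overhead = sum(0.25 ** (i + 1) for i in range(len(layers)))
    print(round(overhead, 3))         # approaches 1/3 as more layers are added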

For more information about the .img format, see "CHAPTER 1: Raster Data" and "APPENDIX B: File Formats and Extensions".

Dithering

A display is capable of viewing only a limited number of colors simultaneously. For example, an 8-bit display has a colormap with 256 colorcells; therefore, a maximum of 256 colors can be displayed at the same time. If some colors are being used for auto update color adjustment while other colors are still being used for other imagery, the color quality will degrade.

Dithering lets a smaller set of colors appear to be a larger set of colors. If the desired display color is not available, a dithering algorithm mixes available colors to provide something that looks like the desired color.

For a simple example, assume the system can display only two colors, black and white, and the user wants to display gray. This can be accomplished by alternating the display of black and white pixels.

Figure 43: Example of Dithering (black, gray, white)

In Figure 43, dithering is used between a black pixel and a white pixel to obtain a gray pixel.

The colors that the ERDAS IMAGINE Viewer will dither between will be similar to each other, and will be dithered on the pixel level. Using similar colors and dithering on the pixel level makes the image appear smooth.

Dithering allows multiple images to be displayed in different Viewers without refreshing the currently displayed image(s) each time a new image is displayed.
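The black-and-white example can be made concrete with a tiny Python sketch that fills a 2 × 2 patch with black and white pixels in proportion to the desired gray level, rounded to the nearest quarter as described in the Color Patches discussion that follows. It is a conceptual illustration, not the Viewer's dithering algorithm.

    # Approximate a gray level (0.0 = black, 1.0 = white) with a 2 x 2 patch
    # of black (0) and white (1) pixels.
    def dither_patch(gray):
        quarters = round(gray * 4)            # round to the nearest 1/4
        pixels = [1] * quarters + [0] * (4 - quarters)
        return [pixels[0:2], pixels[2:4]]

    print(dither_patch(0.5))   # [[1, 1], [0, 0]] -- half white, half black reads as gray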

Color Patches

When the Viewer performs dithering, it uses patches of 2 × 2 pixels. If the desired color has an exact match, then all of the values in the patch will match it. If the desired color is halfway between two of the usable colors, the patch will contain two pixels of each of the surrounding usable colors. If it is 3/4 of the way between two usable colors, the patch will contain 3 pixels of the color it is closest to and 1 pixel of the color that is second closest. If the desired color is not an even multiple of 1/4 of the way between two allowable colors, it is rounded to the nearest 1/4. The Viewer separately dithers the red, green, and blue components of a desired color.

Figure 44 shows what the color patches would look like if the usable colors were black and white and the desired color was gray.

Figure 44: Example of Color Patches (exact, 25% away, 50% away, 75% away, next color)

Color Artifacts

Since the Viewer requires 2 × 2 pixel patches to represent a color, and actual images typically have a different color for each pixel, artifacts may appear in an image that has been dithered. Usually, the difference in color resolution is insignificant, because adjacent pixels are normally similar to each other. Similarity between adjacent pixels usually smooths out artifacts that would appear.

Viewing Layers

The ERDAS IMAGINE Viewer displays layers as one of the following types of view layers:

• annotation
• vector
• pseudo color
• gray scale
• true color

Annotation View Layer

When an annotation layer (xxx.ovr) is displayed in the Viewer, it is displayed as an annotation view layer.

Vector View Layer

Vector layers are displayed in the Viewer as a vector view layer.

Pseudo Color View Layer

When a raster layer is displayed as a pseudo color layer in the Viewer, the colormap uses the RGB brightness values for the one layer in the RGB table. This is most appropriate for thematic layers. If the layer is a continuous raster layer, the layer would initially appear gray, since there are not any values in the RGB table.

Gray Scale View Layer

When a raster layer is displayed as a gray scale layer in the Viewer, the colormap uses the brightness values in the contrast table for one layer. This layer is then displayed in all three color guns, producing a gray scale image. A continuous raster layer may be displayed as a gray scale view layer.

True Color View Layer

Continuous raster layers should be displayed as true color layers in the Viewer. The colormap uses the RGB brightness values for three layers in the contrast table, one for each color gun to display the set of layers.

Viewing Multiple Layers

It is possible to view as many layers of all types (with the exception of vector layers, which have a limit of 10) at one time in a single Viewer.

The layers are positioned geographically within the window, and resampled to the same scale as previously displayed layers. When multiple layers are magnified or reduced, raster layers are resampled from the file to fit to the new scale. Therefore, raster layers in one Viewer can have different cell sizes.

Display multiple layers from the Viewer. Be sure to turn off the Clear Display check box when you open subsequent layers.

Overlapping Layers

When layers overlap, the order in which the layers are opened is very important. The last layer that is opened will always appear to be "on top" of the previously opened layers. To overlay multiple layers in one Viewer, they must all be referenced to the same map coordinate system.

In a raster layer, it is possible to make values of zero transparent in the Viewer, meaning that they have no opacity. Thus, if a raster layer with zeros is displayed over other layers, the areas with zero values will allow the underlying layers to show through.

Opacity is a component of the color scheme of categorical data displayed in pseudo color. Opacity is a measure of how opaque, or solid, a color is displayed in a raster layer. Opacity can be set at any value in the range of 0% to 100%.

• 100% opacity means that a color is completely opaque, and cannot be seen through.
• 50% opacity lets some color show, and lets some of the underlying layers show through. The effect is like looking at the underlying layers through a colored fog.
• 0% opacity allows underlying layers to show completely.

By manipulating opacity, you can compare two or more layers of raster data that are displayed in a Viewer.

Use the Arrange Layers dialog to re-stack layers in a Viewer so that they overlap in a different order, if needed.

Non-Overlapping Layers

Multiple layers that are opened in the same Viewer do not have to overlap. Layers that cover distinct geographic areas can be opened in the same Viewer. The layers will be automatically positioned in the Viewer window according to their map coordinates, and will be positioned relative to one another geographically. The map coordinate systems for the layers must be the same.
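Opacity blending of an overlapping pixel can be sketched as a weighted average of the top layer's color and the underlying color. The Python formula below is a standard compositing rule used only for illustration; the Viewer's exact behavior may differ.

    # Blend a top-layer color over an underlying color using opacity (0-100%).
    def blend(top_rgb, under_rgb, opacity_percent):
        a = opacity_percent / 100.0
        return tuple(round(a * t + (1 - a) * u) for t, u in zip(top_rgb, under_rgb))

    red, gray = (255, 0, 0), (128, 128, 128)
    print(blend(red, gray, 100))   # (255, 0, 0)     -- completely opaque
    print(blend(red, gray, 50))    # (192, 64, 64)   -- underlying layer shows through
    print(blend(red, gray, 0))     # (128, 128, 128) -- top color fully transparent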

Linking Viewers

Linking Viewers is appropriate when two Viewers cover the same geographic area (at least partially) and are referenced to the same map units. When two Viewers are linked:

• either the same geographic point is displayed in the centers of both Viewers, or a box shows where one view fits inside the other
• scrolling one Viewer affects the other
• the user can manipulate the zoom ratio of one Viewer from another
• any inquire cursors in one Viewer appear in the other, for multiple-Viewer pixel inquiry
• the auto-zoom is enabled, if the Viewers have the same zoom ratio and nearly the same window size

It is often helpful to display a wide view of a scene in one Viewer, and then a close-up of a particular area in another Viewer. When two such Viewers are linked, a box opens in the wide view window to show where the close-up view lies. Any image that is displayed at a magnification (higher zoom ratio) of another image in a linked Viewer is represented in the other Viewer by a box. If several Viewers are linked together, there may be multiple boxes in that Viewer. Figure 45 shows how one view fits inside the other linked Viewer. The link box shows the extent of the larger-scale view.

Figure 45: Linked Viewers

Zoom and Roam

Zooming enlarges an image on the display. When an image is zoomed, it can be roamed (scrolled) so that the desired portion of the image appears on the display screen. Any image that does not fit entirely in the Viewer can be roamed and/or zoomed. Roaming and zooming have no effect on how the image is stored in the file.

The zoom ratio describes the size of the image on the screen in terms of the number of file pixels used to store the image. It is the ratio of the number of screen pixels in the X or Y dimension to the number that are used to display the corresponding file pixels. A zoom ratio greater than 1 is a magnification, which makes the image features appear larger in the Viewer. A zoom ratio less than 1 is a reduction, which makes the image features appear smaller in the Viewer.

Table 15: Overview of Zoom Ratio

A zoom ratio of 1 means...   each file pixel is displayed with 1 screen pixel in the Viewer.
A zoom ratio of 2 means...   each file pixel is displayed with a block of 2 × 2 screen pixels. Effectively, the image is displayed at 200%.
A zoom ratio of 0.5 means... each block of 2 × 2 file pixels is displayed with 1 screen pixel. Effectively, the image is displayed at 50%.

NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that images can be zoomed at virtually any scale (e.g., continuous fractional zoom).

Resampling is necessary whenever an image is displayed with a new pixel grid. The resampling method used when an image is zoomed is the same one used when the image is displayed, as specified in the Open Raster Layer dialog. The default resampling method is Nearest Neighbor.

Zoom the data in the Viewer via the Viewer menu bar, the Viewer tool bar, or the Quick View right-button menu.
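The zoom ratio arithmetic in Table 15 is easy to sketch in code. The following Python fragment is an illustration only, not ERDAS IMAGINE code: it resamples a small file-pixel array to screen pixels with nearest-neighbor selection, the default resampling method noted above.

    import numpy as np

    def zoomed_view(file_pixels, zoom_ratio):
        # zoom_ratio = screen pixels per file pixel (may be fractional)
        rows, cols = file_pixels.shape
        out_rows = int(round(rows * zoom_ratio))
        out_cols = int(round(cols * zoom_ratio))
        # Nearest neighbor: each screen pixel takes the value of the closest file pixel.
        r_idx = np.minimum((np.arange(out_rows) / zoom_ratio).astype(int), rows - 1)
        c_idx = np.minimum((np.arange(out_cols) / zoom_ratio).astype(int), cols - 1)
        return file_pixels[np.ix_(r_idx, c_idx)]

    data = np.arange(16, dtype=np.uint8).reshape(4, 4)
    print(zoomed_view(data, 2.0).shape)   # (8, 8)  - displayed at 200%
    print(zoomed_view(data, 0.5).shape)   # (2, 2)  - displayed at 50%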

Geographic Information

To prepare to run many programs, it may be necessary to determine the data file coordinates, map coordinates, or data file values for a particular pixel or a group of pixels. By displaying the image in the Viewer and then selecting the pixel(s) of interest, important information about the pixel(s) can be viewed.

The Quick View right-button menu gives you options to view information about a specific pixel. Use the Raster Attribute Editor to access information about classes in a thematic layer.

See "CHAPTER 10: Geographic Information Systems" for information about attribute data.

Enhancing Continuous Raster Layers

Working with the brightness values in the colormap is useful for image enhancement. Often, a trial-and-error approach is needed to produce an image that has the right contrast and highlights the right features. By using the tools in the Viewer, it is possible to quickly view the effects of different enhancement techniques, undo enhancements that aren't helpful, and then save the best results to disk.

Use the Raster options from the Viewer to enhance continuous raster layers.

See "CHAPTER 5: Enhancement" for more information on enhancing continuous raster layers.

Creating New Image Files

It is easy to create a new image file (.img) from the layer(s) displayed in the Viewer. The new .img file will contain three continuous raster layers (RGB), regardless of how many layers are currently displayed. The IMAGINE Image Info utility must be used to create statistics for the new .img file before the file is enhanced.

Annotation layers can be converted to raster format, and written to an .img file. Or, vector data can be gridded into an image, overwriting the values of the pixels in the image plane, and incorporated into the same band as the image.

Use the Viewer to .img function to create a new .img file from the currently displayed raster layers.


CHAPTER 5 Enhancement

Introduction

Image enhancement is the process of making an image more interpretable for a particular application (Faust 1989). Enhancement makes important features of raw, remotely sensed data more interpretable to the human eye. Enhancement techniques are often used instead of classification techniques for feature extraction—studying and locating areas and objects on the ground and deriving useful information from images.

The techniques to be used in image enhancement depend upon:

• The user's data — the different bands of Landsat, SPOT, and other imaging sensors were selected to detect certain features. The user must know the parameters of the bands being used before performing any enhancement. (See "CHAPTER 1: Raster Data" for more details.)
• The user's objective — for example, sharpening an image to identify features that can be used for training samples will require a different set of enhancement techniques than reducing the number of bands in the study. The user must have a clear idea of the final product desired before enhancement is performed.
• The user's expectations — what the user thinks he or she will find.
• The user's background — the experience of the person performing the enhancement.

This chapter will briefly discuss the following enhancement techniques available with ERDAS IMAGINE:

• Data correction — radiometric and geometric correction
• Radiometric enhancement — enhancing images based on the values of individual pixels
• Spatial enhancement — enhancing images based on the values of individual and neighboring pixels
• Spectral enhancement — enhancing images by transforming the values of each pixel on a multiband basis
• Hyperspectral image processing — an extension of the techniques used for multispectral datasets
• Fourier analysis — techniques for eliminating periodic noise in imagery
• Radar imagery enhancement — techniques specifically designed for enhancing radar imagery

See "Bibliography" on page 635 to find current literature which will provide a more detailed discussion of image processing enhancement techniques.

Display vs. File Enhancement

With ERDAS IMAGINE, image enhancement may be performed:

• temporarily, upon the image that is displayed in the Viewer (by manipulating the function and display memories), or
• permanently, upon the image data in the data file.

Enhancing a displayed image is much faster than enhancing an image on disk. If one is looking for certain visual effects, it may be beneficial to perform some trial-and-error enhancement techniques on the display. Then, when the desired results are obtained, the values that are stored in the display device memory can be used to make the same changes to the data file.

For more information about displayed images and the memory of the display device, see "CHAPTER 4: Image Display".

Spatial Modeling Enhancements

Two types of models for enhancement can be created in ERDAS IMAGINE:

• Graphical models — use Model Maker (Spatial Modeler) to easily, and with great flexibility, construct models which can be used to enhance the data.
• Script models — for even greater flexibility, use the Spatial Modeler Language to construct models in script form. The Spatial Modeler Language (SML) enables the user to write scripts which can be written, edited, and run from the Spatial Modeler component or directly from the command line. The user can edit models created with Model Maker using the Spatial Modeling Language or Model Maker.

Although a graphical model and a script model look different, they will produce the same results when applied.

Image Interpreter

ERDAS IMAGINE supplies many algorithms constructed as models, ready to be applied with user-input parameters at the touch of a button. These graphical models, created with Model Maker, are listed as menu functions in the Image Interpreter. These functions are mentioned throughout this chapter. Just remember, these are modeling functions which can be edited and adapted as needed with Model Maker or the Spatial Modeler Language.


See "CHAPTER 10: Geographic Information Systems" for more information on Raster Modeling.

The modeling functions available for enhancement in Image Interpreter are briefly described in Table 16.

Table 16: Description of Modeling Functions Available for Enhancement

SPATIAL ENHANCEMENT: These functions enhance the image using the values of individual and surrounding pixels.
  Convolution: Uses a matrix to average small sets of pixels across an image.
  Non-directional Edge: Averages the results from two orthogonal 1st derivative edge detectors.
  Focal Analysis: Enables the user to perform one of several analyses on class values in an .img file using a process similar to convolution filtering.
  Texture: Defines texture as a quantitative characteristic in an image.
  Adaptive Filter: Varies the contrast stretch for each pixel depending upon the DN values in the surrounding moving window.
  Statistical Filter: Produces the pixel output DN by averaging pixels within a moving window that fall within a statistically defined range.
  Resolution Merge: Merges imagery of differing spatial resolutions.
  Crisp: Sharpens the overall scene luminance without distorting the thematic content of the image.

RADIOMETRIC ENHANCEMENT: These functions enhance the image using the values of individual pixels within each band.
  LUT (Lookup Table) Stretch: Creates an output image that contains the data values as modified by a lookup table.
  Histogram Equalization: Redistributes pixel values with a nonlinear contrast stretch so that there are approximately the same number of pixels with each value within a range.
  Histogram Match: Mathematically determines a lookup table that will convert the histogram of one image to resemble the histogram of another.
  Brightness Inversion: Allows both linear and nonlinear reversal of the image intensity range.
  Haze Reduction*: De-hazes Landsat 4 and 5 Thematic Mapper data and panchromatic data.
  Noise Reduction*: Removes noise using an adaptive filter.
  Destripe TM Data: Removes striping from a raw TM4 or TM5 data file.


Table 16: Description of Modeling Functions Available for Enhancement (continued)

SPECTRAL ENHANCEMENT: These functions enhance the image by transforming the values of each pixel on a multiband basis.
  Principal Components: Compresses redundant data values into fewer bands which are often more interpretable than the source data.
  Inverse Principal Components: Performs an inverse principal components analysis.
  Decorrelation Stretch: Applies a contrast stretch to the principal components of an image.
  Tasseled Cap: Rotates the data structure axes to optimize data viewing for vegetation studies.
  RGB to IHS: Transforms red, green, blue values to intensity, hue, saturation values.
  IHS to RGB: Transforms intensity, hue, saturation values to red, green, blue values.
  Indices: Performs band ratios that are commonly used in mineral and vegetation studies.
  Natural Color: Simulates natural color for TM data.

FOURIER ANALYSIS: These functions enhance the image by applying a Fourier Transform to the data. NOTE: These functions are currently view only—no manipulation is allowed.
  Fourier Transform*: Enables the user to utilize a highly efficient version of the Discrete Fourier Transform (DFT).
  Fourier Transform Editor*: Enables the user to edit Fourier images using many interactive tools and filters.
  Inverse Fourier Transform*: Computes the inverse two-dimensional Fast Fourier Transform of the spectrum stored.
  Fourier Magnitude*: Converts the Fourier Transform image into the more familiar Fourier Magnitude image.
  Periodic Noise Removal*: Automatically removes striping and other periodic noise from images.
  Homomorphic Filter*: Enhances imagery using an illumination/reflectance model.

* Indicates functions that are not graphical models.

NOTE: There are other Image Interpreter functions that do not necessarily apply to image enhancement.


Correcting Data

Each generation of sensors shows improved data acquisition and image quality over previous generations. However, some anomalies still exist that are inherent to certain sensors and can be corrected by applying mathematical formulas derived from the distortions (Lillesand and Kiefer 1979). In addition, the natural distortion that results from the curvature and rotation of the earth in relation to the sensor platform produces distortions in the image data, which can also be corrected.

Generally, there are two types of data correction: radiometric and geometric.

Radiometric Correction

Radiometric correction addresses variations in the pixel intensities (digital numbers, or DNs) that are not caused by the object or scene being scanned. These variations include:

• differing sensitivities or malfunctioning of the detectors
• topographic effects
• atmospheric effects

Geometric Correction

Geometric correction addresses errors in the relative positions of pixels. These errors are induced by:

• sensor viewing geometry
• terrain variations

Because of the differences in radiometric and geometric correction between traditional, passively detected visible/infrared imagery and actively acquired radar imagery, the two will be discussed separately. See "Radar Imagery Enhancement" on page 191.

Radiometric Correction: Visible/Infrared Imagery

Striping

Striping or banding will occur if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover. Some Landsat 1, 2, and 3 data have striping every sixth line, due to improper calibration of some of the 24 detectors that were used by the MSS. The stripes are not constant data values, nor is there a constant error factor or bias. The differing response of the errant detector is a complex function of the data value sensed. This problem has been largely eliminated in the newer sensors. Various algorithms have been advanced in current literature to help correct this problem in the older data. Among these algorithms are simple along-line convolution, high-pass filtering, and forward and reverse principal component transformations (Crippen 1989).


Data from airborne multi- or hyperspectral imaging scanners will also show a pronounced striping pattern due to varying offsets in the multi-element detectors. This effect can be further exacerbated by unfavorable sun angle. These artifacts can be minimized by correcting each scan line to a scene-derived average (Kruse 1988).
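A scene-derived average correction of the kind Kruse (1988) describes can be sketched as follows. This is an illustrative fragment, not the ERDAS algorithm: it simply shifts each scan line so that its mean matches the overall scene mean.

    import numpy as np

    def destripe_to_scene_average(band):
        # band: 2D array of one image band (rows = scan lines)
        band = band.astype(np.float32)
        scene_mean = band.mean()
        line_means = band.mean(axis=1, keepdims=True)
        # Remove each line's offset from the scene-wide average.
        return band - line_means + scene_mean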

Use the Image Interpreter or the Spatial Modeler to implement algorithms to eliminate striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data. The Radar Adjust Brightness function will also correct some of these problems.

Line Dropout

Another common remote sensing device error is line dropout. Line dropout occurs when a detector either completely fails to function, or becomes temporarily saturated during a scan (like the effect of a camera flash on the retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector(s) recovers, if it recovers. Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, based on the lines above and below it.

Atmospheric Effects

The effects of the atmosphere upon remotely-sensed data are not considered "errors," since they are part of the signal received by the sensing device (Bernstein 1983). However, it is often important to remove atmospheric effects, especially for scene matching and change detection analysis. Over the past 20 years a number of algorithms have been developed to correct for variations in atmospheric transmission. Four categories will be mentioned here:

• dark pixel subtraction
• radiance to reflectance conversion
• linear regressions
• atmospheric modeling

Use the Spatial Modeler to construct the algorithms for these operations.

Dark Pixel Subtraction

The dark pixel subtraction technique assumes that the pixel of lowest DN in each band should really be zero and hence its radiometric value (DN) is the result of atmosphere-induced additive errors. These assumptions are very tenuous and recent work indicates that this method may actually degrade rather than improve the data (Crane 1971, Chavez et al 1977).
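A minimal sketch of dark pixel subtraction, under the (tenuous, as noted above) assumption that the lowest DN in each band is entirely an atmosphere-induced additive offset:

    import numpy as np

    def dark_pixel_subtraction(image):
        # image: array of shape (bands, rows, cols)
        corrected = image.astype(np.float32).copy()
        for b in range(corrected.shape[0]):
            corrected[b] -= corrected[b].min()   # assumed atmospheric offset per band
        return corrected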


Radiance to Reflectance Conversion

Radiance to reflectance conversion requires knowledge of the true ground reflectance of at least two targets in the image. These can come from either at-site reflectance measurements, or they can be taken from a reflectance table for standard materials. The latter approach involves assumptions about the targets in the image.

Linear Regressions

A number of methods using linear regressions have been tried. These techniques use bispectral plots and assume that the position of any pixel along that plot is strictly a result of illumination. The slope then equals the relative reflectivities for the two spectral bands. At an illumination of zero, the regression plots should pass through the bispectral origin. Offsets from this represent the additive extraneous components, due to atmosphere effects (Crippen 1987).

Atmospheric Modeling

Atmospheric modeling is computationally complex and requires either assumptions or inputs concerning the atmosphere at the time of imaging. The atmospheric model used to define the computations is frequently Lowtran or Modtran (Kneizys et al 1988). This model requires inputs such as atmospheric profile (pressure, temperature, water vapor, ozone, etc.), aerosol type, elevation, solar zenith angle, and sensor viewing angle. Accurate atmospheric modeling is essential in preprocessing hyperspectral data sets where bandwidths are typically 10 nm or less. These narrow bandwidth corrections can then be combined to simulate the much wider bandwidths of Landsat or SPOT sensors (Richter 1990).

Geometric Correction

As previously noted, geometric correction is applied to raw sensor data to correct errors of perspective due to the earth's curvature and sensor motion. Today, some of these errors are commonly removed at the sensor's data processing center. But in the past, some data from Landsat MSS 1, 2, and 3 were not corrected before distribution.

Many visible/infrared sensors are not nadir-viewing; they look to the side. For some applications, such as stereo viewing or DEM generation, this is an advantage. For other applications, it is a complicating factor. In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir. Other pixels, especially those on the view periphery, are viewed off-nadir. For scenes covering very large geographic areas (such as AVHRR), this can be a significant problem. This and other factors, such as earth curvature, result in geometric imperfections in the sensor image. Terrain variations have the same distorting effect but on a smaller (pixel-by-pixel) scale. These factors can be addressed by rectifying the image to a map.

See "CHAPTER 8: Rectification" for more information on geometric correction using rectification and "CHAPTER 7: Photogrammetric Concepts" for more information on orthocorrection.


A more rigorous geometric correction utilizes a DEM and sensor position information to correct these distortions. This is orthocorrection.

Radiometric Enhancement

Radiometric enhancement deals with the individual values of the pixels in the image. It differs from spatial enhancement (discussed on page 143), which takes into account the values of neighboring pixels. Depending on the points and the bands in which they appear, radiometric enhancements that are applied to one band may not be appropriate for other bands. Therefore, the radiometric enhancement of a multiband image can usually be considered as a series of independent, single-band enhancements (Faust 1989). Radiometric enhancement usually does not bring out the contrast of every pixel in an image. Contrast can be lost between some pixels, while gained on others.

[Figure 46 shows two histograms, Original Data and Enhanced Data, plotting frequency against data file values from 0 to 255; reference points j and k are marked on each histogram.]

Figure 46: Histograms of Radiometrically Enhanced Data

In Figure 46, the range between j and k in the histogram of the original data is about one third of the total range of the data. When the same data are radiometrically enhanced, the range between j and k can be widened. Therefore, the pixels between j and k gain contrast—it is easier to distinguish different brightness values in these pixels. However, the pixels outside the range between j and k are more grouped together than in the original histogram, to compensate for the stretch between j and k. Contrast among these pixels is lost.


Contrast Stretching

When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table. For example, Figure 47 shows the graph of a lookup table that increases the contrast of data file values in the middle range of the input data (the range within the brackets). Note that the input range within the bracket is narrow, but the output brightness values for the same pixels are stretched over a wider range. This process is called contrast stretching.
[Figure 47 plots output brightness values (0 to 255) against input data file values (0 to 255); the bracketed middle range of input values maps to a wide range of output values.]

Figure 47: Graph of a Lookup Table

Notice that the graph line with the steepest (highest) slope brings out the most contrast by stretching output values farther apart.

Linear and Nonlinear

The terms linear and nonlinear, when describing types of spectral enhancement, refer to the function that is applied to the data to perform the enhancement. A piecewise linear stretch uses a polyline function to increase contrast to varying degrees over different ranges of the data, as in Figure 48.
[Figure 48 plots output brightness values against input data file values (each 0 to 255) for linear, nonlinear, and piecewise linear lookup tables.]

Figure 48: Enhancement with Lookup Tables


Linear Contrast Stretch

A linear contrast stretch is a simple way to improve the visible contrast of an image. It is often necessary to contrast-stretch raw image data, so that they can be seen on the display. In most raw data, the data file values fall within a narrow range—usually a range much narrower than the display device is capable of displaying. That range can be expanded to utilize the total range of the display device (usually 0 to 255).
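A minimal min-max linear stretch, mapping the narrow input range onto the full 0 to 255 display range, might look like the following sketch (an illustration, not the IMAGINE implementation):

    import numpy as np

    def linear_stretch(band, out_max=255):
        # Map [band.min(), band.max()] linearly onto [0, out_max].
        band = band.astype(np.float32)
        lo, hi = band.min(), band.max()
        if hi == lo:
            return np.zeros(band.shape, dtype=np.uint8)   # flat image: nothing to stretch
        stretched = (band - lo) / (hi - lo) * out_max
        return np.clip(np.round(stretched), 0, out_max).astype(np.uint8)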

A two standard deviation linear contrast stretch is automatically applied to images displayed in the IMAGINE Viewer.

Nonlinear Contrast Stretch

A nonlinear spectral enhancement can be used to gradually increase or decrease contrast over a range, instead of applying the same amount of contrast (slope) across the entire image. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges. The graph of the function in Figure 49 shows one example.

[Figure 49 plots output brightness values against input data file values (each 0 to 255) for a nonlinear lookup table.]

Figure 49: Nonlinear Radiometric Enhancement

Piecewise Linear Contrast Stretch

A piecewise linear contrast stretch allows for the enhancement of a specific portion of data by dividing the lookup table into three sections: low, middle, and high. It enables the user to create a number of straight line segments which can simulate a curve. The user can enhance the contrast or brightness of any section in a single color gun at a time. This technique is very useful for enhancing image areas in shadow or other areas of low contrast.

In ERDAS IMAGINE, the Piecewise Linear Contrast function is set up so that there are always pixels in each data file value from 0 to 255. You can manipulate the percentage of pixels in a particular range but you cannot eliminate a range of data file values.
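A three-section lookup table of this kind can be sketched as straight-line segments between breakpoints. The breakpoints and output values below are arbitrary example numbers, not IMAGINE defaults; note that they honor the rules described next (the values are continuous and strictly increasing).

    import numpy as np

    # Input breakpoints dividing 0-255 into low / middle / high sections,
    # and the output brightness value assigned at each breakpoint (example values only).
    x_breaks = [0, 70, 180, 255]
    y_breaks = [0, 40, 230, 255]

    lut = np.interp(np.arange(256), x_breaks, y_breaks).astype(np.uint8)
    # Applying the LUT to an 8-bit band:  stretched = lut[band]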


A piecewise linear contrast stretch normally follows two rules:

1) The data values are continuous; there can be no break in the values between High, Middle, and Low. Range specifications will adjust in relation to any changes to maintain the data value range.

2) The data values specified can go only in an upward, increasing direction.

The contrast value for each range represents the percent of the available output range that particular range occupies. The brightness value for each range represents the middle of the total range of brightness values occupied by that range. Since rules 1 and 2 above are enforced, as the contrast and brightness values are changed, they may affect the contrast and brightness of other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease, as shown in Figure 50.

[Figure 50 plots LUT value (0 to 100%) against the data value range (0 to 255), divided into Low, Middle, and High sections.]

Figure 50: Piecewise Linear Contrast Stretch

Contrast Stretch on the Display

Usually, a contrast stretch is performed on the display device only, so that the data file values are not changed. Lookup tables are created that convert the range of data file values to the maximum range of the display device. The user can then edit and save the contrast stretch values and lookup tables as part of the raster data .img file. These values will be loaded to the Viewer as the default display values the next time the image is displayed.

In ERDAS IMAGINE, you can permanently change the data file values to the lookup table values. Use the Image Interpreter LUT Stretch function to create an .img output file with the same data values as the displayed contrast stretched image.

See "CHAPTER 1: Raster Data" for more information on the data contained in .img files.

By manipulating the lookup tables as in Figure 51, the maximum contrast in the features of an image can be brought out. Figure 51 shows how the contrast stretch manipulates the histogram of the data, increasing contrast in some areas and decreasing it in others. This is also a good example of a piecewise linear contrast stretch, created by adding breakpoints to the histogram.

Varying the Contrast Stretch

There are variations of the contrast stretch that can be used to change the contrast of values over a specific range, or by a specific amount. The mean and standard deviation are used instead of the minimum and maximum data file values, because the minimum and maximum data file values are usually not representative of most of the data. The mean and standard deviation are used to determine the range of data file values to be translated into brightness values or new data file values. The user can specify the number of standard deviations from the mean that are to be used in the contrast stretch. Usually the data file values that are two standard deviations above and below the mean are used. If the data have a normal distribution, then this range represents approximately 95 percent of the data. The statistics in the .img file contain the mean, standard deviation, and other statistics on each band of data.

(A notable exception occurs when the feature being sought is in shadow. The shadow pixels are usually at the low extreme of the data file values, outside the range of two standard deviations from the mean.)

The use of these statistics in contrast stretching is discussed and illustrated in "CHAPTER 4: Image Display". Statistics terms are discussed in "APPENDIX A: Math Topics".
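The two standard deviation stretch can be sketched directly from the band statistics; values beyond the mean plus or minus two standard deviations are clipped to the ends of the display range. This is an illustration, not the IMAGINE implementation.

    import numpy as np

    def std_dev_stretch(band, n_std=2.0, out_max=255):
        band = band.astype(np.float32)
        mean, std = band.mean(), band.std()
        if std == 0:
            return np.zeros(band.shape, dtype=np.uint8)
        lo, hi = mean - n_std * std, mean + n_std * std
        stretched = (band - lo) / (hi - lo) * out_max
        return np.clip(np.round(stretched), 0, out_max).astype(np.uint8)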

[Figure 51 shows four stages of manipulating a lookup table, with the input and output histograms at each stage: 1. Linear stretch; values are clipped at 255. 2. A breakpoint is added to the linear function, redistributing the contrast. 3. Another breakpoint is added; the breakpoint at the top of the function is moved so that values are not clipped. 4. Contrast at the peak of the histogram continues to increase.]

Figure 51: Contrast Stretch By Manipulating Lookup Tables and the Effect on the Output Histogram

Histogram Equalization

Histogram equalization is a nonlinear stretch that redistributes pixel values so that there are approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the "peaks" of the histogram and lessened at the "tails."

Histogram equalization can also separate pixels into distinct groups, if there are few output values over a wide range. This can have the visual effect of a crude classification.

[Figure 52 compares an original histogram, with a peak and a tail, to the histogram after equalization: pixels at the peak are spread apart and contrast is gained, while pixels at the tail are grouped together and contrast is lost.]

Figure 52: Histogram Equalization

To perform a histogram equalization, the pixel values of an image (either data file values or brightness values) are reassigned to a certain number of bins, which are simply numbered sets of pixels. The pixels are then given new values, based upon the bins to which they are assigned.

The following parameters are entered:

• N - the number of bins to which pixel values can be assigned. If there are many bins or many pixels with the same value(s), some bins may be empty.
• M - the maximum of the range of the output values. The range of the output values will be from 0 to M.

The total number of pixels is divided by the number of bins, equaling the number of pixels per bin, as shown in the following equation:

A = T / N    (Equation 1)

where:
N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin

The pixels of each input value are assigned to bins, so that the number of pixels in each bin is as close to A as possible. Consider Figure 53:

[Figure 53 shows the example histogram: 240 pixels with data file values 0 through 9, where the number of pixels at each value is 5, 5, 10, 15, 60, 60, 40, 30, 10, and 5; the level A = 24 is marked.]

Figure 53: Histogram Equalization Example

There are 240 pixels represented by this histogram. To equalize this histogram to 10 bins, there would be:

240 pixels / 10 bins = 24 pixels per bin = A

To assign pixels to bins, the following equation is used:

Bi = int [ ( (sum of Hk for k = 1 to i-1) + Hi / 2 ) / A ]    (Equation 2)

where:
A = equalized number of pixels per bin (see above)
Hi = the number of pixels with the value i (histogram)
int = integer function (truncating real numbers to integer)
Bi = bin number for pixels with value i

Source: Modified from Gonzalez and Wintz 1977
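Equation 2 can be reproduced directly. The sketch below equalizes the 240-pixel example histogram of Figure 53 into 10 bins (A = 24); the printed bin numbers agree with the output values discussed below (input values 0 through 2 fall in bin 0, value 4 in bin 2, and so on).

    import numpy as np

    H = np.array([5, 5, 10, 15, 60, 60, 40, 30, 10, 5])   # pixels at values 0..9 (Figure 53)
    N = 10                                                 # number of bins
    A = H.sum() / N                                        # pixels per bin = 24 (Equation 1)

    cumulative = np.concatenate(([0], np.cumsum(H)[:-1]))  # sum of Hk for k < i
    B = ((cumulative + H / 2.0) / A).astype(int)           # Equation 2: bin for each input value
    print(B)                                               # [0 0 0 1 2 5 7 8 9 9]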

The 10 bins are rescaled to the range 0 to M. In this example, M = 9, since the input values ranged from 0 to 9, so that the equalized histogram can be compared to the original. The output histogram of this equalized image looks like Figure 54:

[Figure 54 shows the output histogram after equalization, with the input data file values noted inside the bars: output value 0 contains inputs 0, 1, and 2 (20 pixels); output 1 contains input 3 (15 pixels); output 2 contains input 4 (60 pixels); output 5 contains input 5 (60 pixels); output 7 contains input 6 (40 pixels); output 8 contains input 7 (30 pixels); and output 9 contains inputs 8 and 9 (15 pixels). The level A = 24 is marked.]

Figure 54: Equalized Histogram

Effect on Contrast

By comparing the original histogram of the example data with the one above, one can see that the enhanced image gains contrast in the "peaks" of the original histogram—for example, the input range of 3 to 7 is stretched to the range 1 to 8. However, contrast among the "tail" pixels, which usually make up the darkest and brightest regions of the input image, is lost: data values at the "tails" of the original histogram are grouped together—input values 0 through 2 all have the output value of 0. The resulting histogram is not exactly flat, since the pixels can rarely be grouped together into bins with an equal number of pixels. Sets of pixels with the same value are never split up to form equal bins.

Level Slice

A level slice is similar to a histogram equalization in that it divides the data into equal amounts. A level slice on a true color display creates a "stair-stepped" lookup table. The effect on the data is that input file values are grouped together at regular intervals into a discrete number of levels, each with one output brightness value. To perform a true color level slice, the user must specify a range for the output brightness values and a number of output levels. The lookup table is then "stair-stepped" so that there is an equal number of input pixels in each of the output levels.

Histogram Matching

Histogram matching is the process of determining a lookup table that will convert the histogram of one image to resemble the histogram of another. Histogram matching is useful for matching data of the same or adjacent scenes that were scanned on separate days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.

In ERDAS IMAGINE, histogram matching is performed band to band (e.g., band 2 of one image is matched to band 2 of the other image, etc.).

To achieve good results with histogram matching, the two input images should have similar characteristics:

• The general shape of the histogram curves should be similar.
• Relative dark and light features in the image should be the same.
• For some applications, the spatial resolution of the data should be the same.
• The relative distributions of land covers should be about the same, even when matching scenes that are not of the same area.

If one image has clouds and the other does not, then the clouds should be "removed" before matching the histograms. This can be done using the Area of Interest (AOI) function. The AOI function is available from the Viewer menu bar.

To match the histograms, a lookup table is mathematically derived, which serves as a function for converting one histogram to the other, as illustrated in Figure 55.

[Figure 55 shows three histograms of frequency against input values (0 to 255): the source histogram (a), mapped through the lookup table (b), approximates the model histogram (c).]

Figure 55: Histogram Matching
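The matching lookup table can be sketched from the cumulative histograms of the two images: each input value is mapped to the reference value whose cumulative frequency is closest. This is an illustrative implementation, not the IMAGINE Histogram Match function.

    import numpy as np

    def histogram_match_lut(source_band, reference_band, levels=256):
        src_hist, _ = np.histogram(source_band, bins=levels, range=(0, levels))
        ref_hist, _ = np.histogram(reference_band, bins=levels, range=(0, levels))
        src_cdf = np.cumsum(src_hist) / src_hist.sum()
        ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
        # For each source value, find the reference value with the nearest cumulative frequency.
        lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)
        return lut

    # matched_band = histogram_match_lut(band_a, band_b)[band_a]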

Brightness Inversion

The brightness inversion functions produce images with the opposite contrast of the original: dark detail becomes light, and light detail becomes dark. This can also be used to invert a negative image that has been scanned and produce a positive image.

Brightness inversion has two options: inverse and reverse. Both options convert the input data range (commonly 0 - 255) to 0 - 1.0. A min-max remapping is used to simultaneously stretch the image and handle any input bit format. The output image is in floating point format, so a min-max stretch is used to convert the output image into 8-bit format.

Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of the low DN pixels. This function applies the following algorithm:

DNout = 1.0           if 0.0 < DNin < 0.1
DNout = 0.1 / DNin    if 0.1 < DNin < 1

Reverse is a linear function that simply reverses the DN values:

DNout = 1.0 - DNin

Source: Pratt 1986
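The two options can be written directly from the formulas above; the min-max remapping to the 0 to 1.0 range follows the description of the input handling (an illustration only).

    import numpy as np

    def brightness_inversion(band, mode="inverse"):
        band = band.astype(np.float32)
        span = max(float(band.max() - band.min()), 1e-6)
        dn = (band - band.min()) / span                    # min-max remap to 0 - 1.0
        if mode == "reverse":
            return 1.0 - dn                                # DNout = 1.0 - DNin
        # Inverse option: DNout = 1.0 below 0.1, otherwise 0.1 / DNin
        return np.where(dn < 0.1, 1.0, 0.1 / np.maximum(dn, 1e-6))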

Spatial Enhancement

While radiometric enhancements operate on each pixel individually, spatial enhancement modifies pixel values based on the values of surrounding pixels. Spatial enhancement deals largely with spatial frequency, which is the difference between the highest and lowest values of a contiguous set of pixels. Jensen (1986) defines spatial frequency as "the number of changes in brightness value per unit distance for any particular part of an image." Consider the examples in Figure 56:

• zero spatial frequency — a flat image, in which every pixel has the same value
• low spatial frequency — an image consisting of a smoothly varying gray scale
• highest spatial frequency — an image consisting of a checkerboard of black and white pixels

[Figure 56 illustrates images of zero, low, and high spatial frequency.]

Figure 56: Spatial Frequencies

This section contains a brief description of the following:

• Convolution, Crisp, and Adaptive filtering
• Resolution merging

See "Radar Imagery Enhancement" on page 191 for a discussion of Edge Detection and Texture Analysis. These spatial enhancement techniques can be applied to any type of data.

Convolution Filtering

Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen 1996).

A convolution kernel is a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight this average toward particular pixels. These numbers are often called coefficients, because they are used as such in the mathematical equations.

In ERDAS IMAGINE, there are four ways you can apply convolution filtering to an image:

1) The kernel filtering option in the Viewer
2) The Convolution function in Image Interpreter
3) The Radar Edge Enhancement function
4) The Convolution function in Model Maker

Filtering is a broad term, referring to the altering of spatial or spectral features for image enhancement (Jensen 1996). Convolution filtering is one method of spatial filtering. Some texts may use the terms synonymously.

Convolution Example

To understand how one pixel is convolved, imagine that the convolution kernel is overlaid on the data file values of the image (in one band), so that the pixel to be convolved is in the center of the window.

Kernel:            Data:
-1  -1  -1         2  8  6  6  6
-1  16  -1         2  8  6  6  6
-1  -1  -1         2  2  8  6  6
                   2  2  2  8  6
                   2  2  2  2  8

Figure 57: Applying a Convolution Kernel

Figure 57 shows a 3 × 3 convolution kernel being applied to the pixel in the third column, third row of the sample data (the pixel that corresponds to the center of the kernel).

To compute the output value for this pixel, each value in the convolution kernel is multiplied by the image pixel value that corresponds to it. These products are summed, and the total is divided by the sum of the values in the kernel, as shown here:

integer [ ((-1 x 8) + (-1 x 6) + (-1 x 6) + (-1 x 2) + (16 x 8) + (-1 x 6) + (-1 x 2) + (-1 x 2) + (-1 x 8)) /
          ((-1) + (-1) + (-1) + (-1) + 16 + (-1) + (-1) + (-1) + (-1)) ]
= int ((128 - 40) / (16 - 8))
= int (88 / 8)
= int (11)
= 11

When the 2 × 2 set of pixels in the center of this 5 × 5 image is convolved, the output values are:

       1   2   3   4   5
  1    2   8   6   6   6
  2    2  11   5   6   6
  3    2   0  11   6   6
  4    2   2   2   8   6
  5    2   2   2   2   8

Figure 58: Output Values for Convolution Kernel

The kernel used in this example is a high frequency kernel, as explained below. It is important to note that the relatively lower values become lower, and the higher values become higher, thus increasing the spatial frequency of the image.

Convolution Formula

The following formula is used to derive an output data file value for the pixel being convolved (in the center):

V = [ sum over i = 1 to q, sum over j = 1 to q, of (fij x dij) ] / F

where:
fij = the coefficient of a convolution kernel at position i,j (in the kernel)
dij = the data value of the pixel that corresponds to fij
q = the dimension of the kernel, assuming a square kernel (if q = 3, the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is zero
V = the output pixel value

In cases where V is less than 0, V is clipped to 0.

Source: Modified from Jensen 1996; Schowengerdt 1983

The sum of the coefficients (F) is used as the denominator of the equation above, so that the output values will be in relatively the same range as the input values. Since F cannot equal zero (division by zero is not defined), F is set to 1 if the sum is zero.

Zero-Sum Kernels

Zero-sum kernels are kernels in which the sum of all coefficients in the kernel equals zero. When a zero-sum kernel is used, then the sum of the coefficients is not used in the convolution equation, as above. In this case, no division is performed (F = 1), since division by zero is not defined.

This generally causes the output values to be:

• zero in areas where all input values are equal (no edges)
• low in areas of low spatial frequency
• extreme (high values become much higher, low values become much lower) in areas of high spatial frequency
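The formula can be sketched for a single pixel as follows; run on the 5 × 5 sample data and the high-frequency kernel of Figure 57, it reproduces the output value of 11 computed earlier. This is an illustration, not the IMAGINE Convolution function.

    import numpy as np

    def convolve_pixel(data, kernel, row, col):
        # V = sum(f * d) / F, with F = 1 when the kernel coefficients sum to zero.
        half = kernel.shape[0] // 2
        window = data[row - half: row + half + 1, col - half: col + half + 1]
        total = float((kernel * window).sum())
        F = kernel.sum()
        if F == 0:
            F = 1
        return max(int(total / F), 0)       # V is clipped to 0 if negative

    data = np.array([[2, 8, 6, 6, 6],
                     [2, 8, 6, 6, 6],
                     [2, 2, 8, 6, 6],
                     [2, 2, 2, 8, 6],
                     [2, 2, 2, 2, 8]])
    kernel = np.array([[-1, -1, -1],
                       [-1, 16, -1],
                       [-1, -1, -1]])
    print(convolve_pixel(data, kernel, 2, 2))   # 11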

Therefore, a zero-sum kernel is an edge detector, which usually smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is at the edges between homogeneous groups of pixels. The resulting image often consists of only edges and zeros.

Zero-sum kernels can be biased to detect edges in a particular direction. For example, this 3 × 3 kernel is biased to the south (Jensen 1996):

-1  -1  -1
 1  -2   1
 1   1   1

See the section on "Edge Detection" on page 200 for more detailed information.

High-Frequency Kernels

A high-frequency kernel, or high-pass kernel, has the effect of increasing spatial frequency. High-frequency kernels serve as edge enhancers, since they bring out the edges between homogeneous groups of pixels. Unlike edge detectors (such as zero-sum kernels), they highlight edges and do not necessarily eliminate other features.

-1  -1  -1
-1  16  -1
-1  -1  -1

When this kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, like this...

BEFORE              AFTER
204  200  197       204  200  197
201  100  209       201    9  209
198  200  210       198  200  210

...the low value gets lower. Inversely, when the kernel is used on a set of pixels in which a relatively high value is surrounded by lower values...

BEFORE              AFTER
 64   60   57        64   60   57
 61  125   69        61  187   69
 58   60   70        58   60   70

...the high value becomes higher. In either case, spatial frequency is increased by this kernel.

Low-Frequency Kernels

Below is an example of a low-frequency kernel, or low-pass kernel, which decreases spatial frequency:

1  1  1
1  1  1
1  1  1

This kernel simply averages the values of the pixels, causing them to be more homogeneous (homogeneity is low spatial frequency). The resulting image looks either smoother or more blurred.

For information on applying filters to thematic layers, see "Chapter 9: Geographic Information Systems."

Crisp

The Crisp filter sharpens the overall scene luminance without distorting the interband variance content of the image. This is a useful enhancement if the image is blurred due to atmospheric haze, rapid sensor motion, or a broad point spread function of the sensor.

The logic of the algorithm is that the first principal component (PC-1) of an image is assumed to contain the overall scene luminance. The other PCs represent intra-scene variance. Thus, the user can sharpen only PC-1 and then reverse the principal components calculation to reconstruct the original image. Luminance is sharpened, but variance is retained.

The algorithm used for this function is:

1) Calculate principal components of multiband input image.
2) Convolve PC-1 with summary filter.
3) Retransform to RGB space.

Source: ERDAS (Faust 1993)

Resolution Merge

The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal resolution.

See "CHAPTER 1: Raster Data" for a full description of resolution types.

Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT panchromatic has one broad band with very good spatial resolution—10 m. Combining these two images to yield a seven band data set with 10 m resolution would provide the best characteristics of both sensors. A number of models have been suggested to achieve this image merge.

Welch and Ehlers (1987) used forward-reverse RGB to IHS transforms, replacing I (from transformed TM data) with the SPOT panchromatic image. However, this technique is limited to three bands (R,G,B). Chavez (1991), among others, uses the forward-reverse principal components transforms with the SPOT image, replacing PC-1. Another technique (Schowengerdt 1980) combines a high frequency image derived from the high spatial resolution data (i.e., SPOT panchromatic) additively with the high spectral resolution Landsat TM image.

In the above two techniques, it is assumed that the intensity component (PC-1 or I) is spectrally equivalent to the SPOT panchromatic image, and that all the spectral information is contained in the other PCs or in H and S. However, since SPOT data do not cover the full spectral range that TM data do, this assumption does not strictly hold.

The Resolution Merge function has two different options for resampling low spatial resolution data to a higher spatial resolution while retaining spectral information:

• forward-reverse principal components transform
• multiplicative

Principal Components Merge

Because a major goal of this merge is to retain the spectral information of the six TM bands (1-5, 7), this algorithm is mathematically rigorous. It is unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchromatic) image. It is assumed that:

• PC-1 contains only overall scene luminance; all interband variation is contained in the other 5 PCs, and
• Scene luminance in the SWIR bands is identical to visible scene luminance.

With the above assumptions, the forward transform into principal components is made. PC-1 is removed and its numerical range (min to max) is determined. The high spatial resolution image is then remapped so that its histogram shape is kept constant, but it is in the same numerical range as PC-1. It is then substituted for PC-1 and the reverse transform is applied. This remapping is done so that the mathematics of the reverse transform do not distort the thematic information (Welch and Ehlers 1987).

. city planning. etc. often want roads and cultural features (which tend toward high reflection) to be pronounced in the image. However. Adaptive filters attempt to achieve this (Fahnestock and Schowengerdt 1983. ERDAS IMAGINE supplies two adaptive filters with user-adjustable parameters. only multiplication is unlikely to distort the color. division. utilities routing. a filter that “adapts” the stretch to the region of interest (the area within the moving window) would produce a better enhancement. such as SPOT. coastal studies where much of the water detail is spread through a very low DN range and the land detail is spread through a much higher DN range would be such a circumstance. There are many circumstances where this is not the optimum approach. The Adaptive Filter function in Image Interpreter can be applied to undegraded images. Landsat. Peli and Lim 1982. it is argued that of the four possible arithmetic methods to incorporate an intensity image into a chromatic image (addition. Adaptive Filter Contrast enhancement (image stretching) is a widely applicable standard image processing technique. In these cases. However. and multiplication). or PC transform.Spatial Enhancement Multiplicative The second technique in the Image Interpreter uses a simple multiplicative algorithm: (DNTM1) (DNSPOT) = DNnew TM1 The algorithm is derived from the four component technique of Crippen (Crippen 1989). this is desirable. The Image Enhancement function in Radar is better for degraded or difficult images. Schwartz 1977). The algorithm shown above operates on the original image. In this paper. Users involved in urban or suburban studies. For example. in his study Crippen first removed the intensity component via band ratios. spectral indices. even adjustable stretches like the piecewise linear stretch act on the scene globally. subtraction. Field Guide 151 . For many applications. The result is an increased presence of the intensity component. and digitized photographs.

Scenes to be adaptively filtered can be divided into three broad and overlapping categories:

• Undegraded — these scenes have good and uniform illumination overall. Given a choice, these are the scenes one would prefer to obtain from imagery sources such as EOSAT or SPOT.
• Low luminance — these scenes have an overall or regional less-than-optimum intensity. An underexposed photograph (scanned) or shadowed areas would be in this category. These scenes need an increase in both contrast and overall scene luminance.
• High luminance — these scenes are characterized by overall excessively high DN values. Examples of such circumstances would be an over-exposed (scanned) photograph or a scene with a light cloud cover or haze. These scenes need a decrease in luminance and an increase in contrast.

No one filter with fixed parameters can address this wide variety of conditions. In addition, multiband images may require different parameters for each band. Without the use of adaptive filters, the different bands would have to be separated into one-band files, enhanced, and then recombined.

For this function, the image is separated into high and low frequency component images. The low frequency image is considered to be overall scene luminance. These two component parts are then recombined in various relative amounts using multipliers derived from look-up tables. These LUTs are driven by the overall scene luminance:

DNout = K (DNHi) + DNLL

where:
K = user-selected contrast multiplier
Hi = high luminance (derives from the LUT)
LL = local luminance (derives from the LUT)

[Figure 59 plots local luminance (0 to 255) against the low frequency image DN (0 to 255), with the intercept (I) marked.]

Figure 59: Local Luminance Intercept

Figure 59 shows the local luminance intercept, which is the output luminance value that an input luminance value of 0 would be assigned.

Spectral Enhancement

The enhancement techniques that follow require more than one band of data. They can be used to:

• compress bands of data that are similar
• extract new bands of data that are more interpretable to the eye
• apply mathematical transforms and algorithms
• display a wider variety of information in the three available color guns (R,G,B)

Some of these enhancements can be used to prepare data for classification. However, this is a risky practice unless you are very familiar with your data, and the changes that you are making to it. Anytime you alter values, you risk losing some information.

In this documentation, some examples are illustrated with two-dimensional graphs. However, you are not limited to two-dimensional (two-band) data. ERDAS IMAGINE programs allow an unlimited number of bands to be used. Keep in mind that processing such data sets can require a large amount of computer swap space. In practice, the principles outlined below apply to any number of bands.

Principal Components Analysis

Principal components analysis (or PCA) is often used as a method of data compression. It allows redundant data to be compacted into fewer bands—that is, the dimensionality of the data is reduced. The bands of PCA data are non-correlated and independent, and are often more interpretable than the source data (Jensen 1996; Faust 1989).

The process is easily explained graphically with an example of data in two bands. Below is an example of a two-band scatterplot, which shows the relationships of data file values in two bands. The values of one band are plotted against those of the other. If both bands have normal distributions, an ellipse shape results.

Scatterplots and normal distributions are discussed in "APPENDIX A: Math Topics".

[Figure 60 is a scatterplot of Band A data file values against Band B data file values (each 0 to 255), with the histograms of Band A and Band B shown along the axes.]

Figure 60: Two Band Scatterplot Ellipse Diagram

In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid (3 dimensions), or hyperellipsoid (more than 3) is formed if the distributions of each input band are normal or near normal. (The term "ellipse" will be used for general purposes here.)

To perform principal components analysis, the axes of the spectral space are rotated, changing the coordinates of each pixel in spectral space, and the data file values as well. The new axes are parallel to the axes of the ellipse.

First Principal Component

The length and direction of the widest transect of the ellipse are calculated using matrix algebra in a process explained below. The transect, which corresponds to the major (longest) axis of the ellipse, is called the first principal component of the data. The direction of the first principal component is the first eigenvector, and its length is the first eigenvalue (Taylor 1977).

A new axis of the spectral space is defined by this first principal component. The points in the scatterplot are now given new coordinates, which correspond to this new axis. Since, in spectral space, the coordinates of the points are the data file values, new data file values are derived from this process. These values are stored in the first principal component band of a new data file.

[Figure 61 shows the first principal component plotted as a new axis through the two-band scatterplot.]

Figure 61: First Principal Component

The first principal component shows the direction and length of the widest transect of the ellipse. Therefore, as an axis in spectral space, it measures the highest variation within the data. In Figure 62 it is easy to see that the first eigenvalue will always be greater than the ranges of the input bands, just as the hypotenuse of a right triangle must always be longer than the legs.

[Figure 62 compares the range of PC 1 with the ranges of Band A and Band B in the scatterplot.]

Figure 62: Range of First Principal Component

Successive Principal Components

The second principal component is the widest transect of the ellipse that is orthogonal (perpendicular) to the first principal component. As such, the second principal component describes the largest amount of variance in the data that is not already described by the first principal component (Taylor 1977). In a two-dimensional analysis, the second principal component corresponds to the minor axis of the ellipse.

255 PC 2 PC 1 90˚ angle (orthogonal) 0 0 255 Figure 63: Second Principal Component In n dimensions. Computing Principal Components To compute a principal components transformation. The result of the transformation is that the axes in n-dimensional spectral space are shifted and rotated to be relative to the axes of the ellipse. principal components analysis is useful for compressing data into fewer bands. there are n principal components. Each successive principal component: • • is the widest transect of the ellipse that is orthogonal to the previous components in the n-dimensional space of the scatterplot (Faust 1989) accounts for a decreasing amount of the variation in the data which is not already accounted for by previous principal components (Taylor 1977) Although there are n output bands in a principal components analysis. a linear transformation is performed on the data—meaning that the coordinates of each pixel in spectral space (the original data file values) are recomputed using a linear equation. 156 ERDAS . In other applications. the first few bands account for a high proportion of the variance in the data—in some cases. These bands may also show regular noise in the data (for example. useful information can be gathered from the principal component bands with the least variance. These bands can show subtle details in the image that were obscured by higher contrast in the original image. Therefore. the striping in old MSS data) (Faust 1989). almost 100%.

. Source: Faust 1989 A full explanation of this computation can be found in Gonzalez and Wintz 1977... 0 .. v n E Cov ET = V where: Cov = the covariance matrix E T = the matrix of eigenvectors = the transposition function = a diagonal matrix of eigenvalues.. the eigenvectors and eigenvalues of the n principal components must be mathematically derived from the covariance matrix.. 0 0 0 . The matrix V is the covariance matrix of the output principal component file.. > vn .. so that v1 > v2 > v3. Field Guide 157 ... 0 V = 0 v 2 0 .. in which all non-diagonal elements are zeros V V is computed so that its non-zero elements are ordered from greatest to least. the first eigenvalue is the largest and represents the most variance in the data. and the eigenvalues are the variance values for each band. as shown in the following equation: v 1 0 0 . Because the eigenvalues are ordered from v1 to vn. The zeros represent the covariance between bands (there is none).Spectral Enhancement To perform the linear transformation.

Each column of the resulting eigenvector matrix, E, describes a unit-length vector in spectral space, which shows the direction of the principal component (the ellipse axis). The numbers are used as coefficients in the following equation, to transform the original data file values into the principal component values:

    Pe = Σ (k = 1 to n) dk Eke

where:
e = the number of the principal component (first, second, etc.)
Pe = the output principal component value for principal component band e
k = a particular input band
n = the total number of bands
dk = an input data file value in band k
E = the eigenvector matrix, such that Eke = the element of that matrix at row k, column e

Source: Modified from Gonzalez and Wintz 1977

Decorrelation Stretch

The purpose of a contrast stretch is to:

• alter the distribution of the image DN values within the 0 - 255 range of the display device, and
• utilize the full range of values in a linear fashion.

The decorrelation stretch stretches the principal components of an image, not the original image.

A principal components transform converts a multiband image into a set of mutually orthogonal images portraying inter-band variance. Depending on the DN ranges and the variance of the individual input bands, these new images (PCs) will occupy only a portion of the possible 0 - 255 data range. Each PC is separately stretched to fully utilize the data range. The new stretched PC composite image is then retransformed to the original data areas. Either the original PCs or the stretched PCs may be saved as a permanent image file for viewing after the stretch.

NOTE: Storage of PCs as floating point, single precision is probably appropriate in this case.
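The decorrelation stretch sequence described above can be sketched as follows, reusing the hypothetical principal_components helper from the previous example. The 2% linear stretch applied to each component is an illustrative assumption, not the Image Interpreter default.

    def decorrelation_stretch(bands):
        """Stretch each principal component separately, then retransform to band space."""
        eigenvalues, E, pcs = principal_components(bands)
        n = pcs.shape[0]
        flat = pcs.reshape(n, -1)
        stretched = np.empty_like(flat)
        for i in range(n):
            low, high = np.percentile(flat[i], (2, 98))   # assumed 2% linear stretch
            stretched[i] = np.clip((flat[i] - low) / (high - low), 0.0, 1.0) * 255.0
        # E is orthogonal, so applying E to the stretched PCs reverses the transform
        restored = E @ stretched
        return restored.reshape(bands.shape)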

Tasseled Cap

The different bands in a multispectral image can be visualized as defining an N-dimensional space, where N is the number of bands. Each pixel, positioned according to its DN value in each band, lies within the N-dimensional space. This pixel distribution is determined by the absorption/reflection spectra of the imaged material. This clustering of the pixels is termed the data structure (Crist & Kauth 1986).

See "CHAPTER 1: Raster Data" for more information on absorption/reflection spectra.

The data structure can be considered a multi-dimensional hyperellipsoid. The principal axes of this data structure are not necessarily aligned with the axes of the data space (defined as the bands of the input image). They are more directly related to the absorption spectra. For viewing purposes, it is advantageous to rotate the N-dimensional space such that one or two of the data structure axes are aligned with the viewer X and Y axes. In particular, the user could view the axes that are largest for the data structure produced by the absorption peaks of special interest for the application.

For example, a geologist and a botanist are interested in different absorption features. They would want to view different data structures and, therefore, different data structure axes. Both would benefit from viewing the data in a way that would maximize visibility of the data structure of interest.

The Tasseled Cap transformation offers a way to optimize data viewing for vegetation studies. Research has produced three data structure axes which define the vegetation information content (Crist et al 1986, Crist & Kauth 1986):

• Brightness — a weighted sum of all bands, defined in the direction of the principal variation in soil reflectance.
• Greenness — orthogonal to brightness, a contrast between the near-infrared and visible bands. Strongly related to the amount of green vegetation in the scene.
• Wetness — relates to canopy and soil moisture (Lillesand and Kiefer 1987).

A simple calculation (linear combination) then rotates the data space to present any of these axes to the user. These rotations are sensor-dependent, but once defined for a particular sensor (say, Landsat 4 TM), the same rotation will work for any scene taken by that sensor. The increased dimensionality (number of bands) of TM vs. MSS allowed Crist et al to define three additional axes, termed Haze, Fifth, and Sixth. Laurin (1986) has used this haze parameter to devise an algorithm to de-haze Landsat imagery.

See the discussion on "Principal Components Analysis" on page 153.

The Tasseled Cap algorithm implemented in the Image Interpreter provides the correct coefficients for MSS, TM4, and TM5 imagery. For TM4, the calculations are:

    Brightness = .3037 (TM1) + .2793 (TM2) + .4743 (TM3) + .5585 (TM4) + .5082 (TM5) + .1863 (TM7)
    Greenness  = -.2848 (TM1) - .2435 (TM2) - .5436 (TM3) + .7243 (TM4) + .0840 (TM5) - .1800 (TM7)
    Wetness    = .1509 (TM1) + .1973 (TM2) + .3279 (TM3) + .3406 (TM4) - .7112 (TM5) - .4572 (TM7)
    Haze       = .8832 (TM1) - .0819 (TM2) - .4580 (TM3) - .0032 (TM4) - .0563 (TM5) + .0130 (TM7)

Source: Modified from Crist et al 1986, Jensen 1996
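Because the transformation is a fixed linear combination, applying the TM4 coefficients above can be sketched in a few lines of NumPy. The band array layout and names are assumptions made for the example; this is not the Image Interpreter implementation.

    import numpy as np

    # Rows: Brightness, Greenness, Wetness, Haze; columns: TM1, TM2, TM3, TM4, TM5, TM7
    TM4_COEFFS = np.array([
        [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],
        [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],
        [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],
        [ 0.8832, -0.0819, -0.4580, -0.0032, -0.0563,  0.0130],
    ])

    def tasseled_cap(tm_bands):
        """tm_bands: array of shape (6, rows, cols) holding TM1, TM2, TM3, TM4, TM5, TM7."""
        flat = tm_bands.reshape(6, -1).astype(np.float64)
        components = TM4_COEFFS @ flat
        return components.reshape((4,) + tm_bands.shape[1:])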

RGB to IHS

The color monitors used for image display on image processing systems have three color guns. These correspond to red, green, and blue (R, G, B), the additive primary colors. When displaying three bands of a multiband data set, the viewed image is said to be in R,G,B space.

However, it is possible to define an alternate color space that uses Intensity (I), Hue (H), and Saturation (S) as the three positioned parameters (in lieu of R, G, and B). This system is advantageous in that it presents colors more nearly as perceived by the human eye.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).
• Saturation represents the purity of color and also varies linearly from 0 to 1.
• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension (see Figure 64). In Figure 64, 0 - 255 is the selected range; it could be defined as any data range. However, hue must vary from 0 - 360 to define the entire sphere (Buchanan 1979).

Figure 64: Intensity, Hue, and Saturation Color Coordinate System

Source: Buchanan 1979

To use the RGB to IHS transform, use the RGB to IHS function from Image Interpreter.

The algorithm used in the Image Interpreter RGB to IHS transform is (Conrac 1980):

    r = (M - R) / (M - m)
    g = (M - G) / (M - m)
    b = (M - B) / (M - m)

where:
R, G, B are each in the range of 0 to 1.0
M = largest value, R, G, or B
m = least value, R, G, or B
r, g, b = the normalized values used in the intensity, saturation, and hue equations that follow

NOTE: At least one of the r, g, or b values is 0, corresponding to the color with the largest value, and at least one of the r, g, or b values is 1, corresponding to the color with the least value.

The equation for calculating intensity in the range of 0 to 1.0 is:

    I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

    If M = m, S = 0
    If I ≤ 0.5, S = (M - m) / (M + m)
    If I > 0.5, S = (M - m) / (2 - M - m)

The equations for calculating hue in the range of 0 to 360 are:

    If M = m, H = 0
    If R = M, H = 60 (2 + b - g)
    If G = M, H = 60 (4 + r - b)
    If B = M, H = 60 (6 + g - r)

where:
R, G, B are each in the range of 0 to 1.0
r, g, b = the normalized values calculated in the equations above
M = largest value, R, G, or B
m = least value, R, G, or B
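As a minimal sketch (not the Image Interpreter code), the forward transform can be written for a single R, G, B triple already scaled to 0 - 1; the function name is hypothetical.

    def rgb_to_ihs(R, G, B):
        """R, G, B in 0-1; returns I in 0-1, H in 0-360, S in 0-1 (Conrac 1980 formulation)."""
        M, m = max(R, G, B), min(R, G, B)
        I = (M + m) / 2.0
        if M == m:                       # gray pixel: no hue or saturation
            return I, 0.0, 0.0
        S = (M - m) / (M + m) if I <= 0.5 else (M - m) / (2.0 - M - m)
        # Normalized distances used by the hue equations
        r, g, b = (M - R) / (M - m), (M - G) / (M - m), (M - B) / (M - m)
        if R == M:
            H = 60.0 * (2.0 + b - g)
        elif G == M:
            H = 60.0 * (4.0 + r - b)
        else:
            H = 60.0 * (6.0 + g - r)
        return I, H % 360.0, S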

IHS to RGB

The IHS to RGB functions are intended as a complement to the standard RGB to IHS transform. In the IHS to RGB algorithm, a min-max stretch is applied to either intensity (I), saturation (S), or both, so that they more fully utilize the 0 - 1 value range. The values for hue (H), a circular dimension, are 0 - 360. However, depending on the dynamic range of the DN values of the input image, it is possible that I or S or both will occupy only a part of the 0 - 1 range. In this model, a min-max stretch is applied to either I, S, or both, so that they more fully utilize the 0 - 1 value range. After stretching, the full IHS image is retransformed back to the original RGB space. As the parameter Hue is not modified and largely defines what we perceive as color, the resultant image looks very much like the input image.

It is not essential that the input parameters (IHS) to this transform be derived from an RGB to IHS transform. The user could define I and/or S as other parameters, set Hue at 0 - 360, and then transform to RGB space. This is a method of color coding other data sets. In another approach (Daily 1983), H and I are replaced by low- and high-frequency radar imagery. The user can also replace I with radar intensity before the IHS to RGB transform (Holcomb 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge Landsat TM with SPOT panchromatic imagery (Chavez 1991).

NOTE: Use the Spatial Modeler for this analysis.

See the previous section on the RGB to IHS transform for more information.

The algorithm used by ERDAS IMAGINE for the IHS to RGB function is (Conrac 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

    If I ≤ 0.5, M = I (1 + S)
    If I > 0.5, M = I + S - I (S)
    m = 2 * I - M

The equations for calculating R in the range of 0 to 1.0 are:

    If H < 60, R = m + (M - m) (H / 60)
    If 60 ≤ H < 180, R = M
    If 180 ≤ H < 240, R = m + (M - m) ((240 - H) / 60)
    If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

    If H < 120, G = m
    If 120 ≤ H < 180, G = m + (M - m) ((H - 120) / 60)
    If 180 ≤ H < 300, G = M
    If 300 ≤ H ≤ 360, G = m + (M - m) ((360 - H) / 60)

The equations for calculating B in the range of 0 to 1.0 are:

    If H < 60, B = M
    If 60 ≤ H < 120, B = m + (M - m) ((120 - H) / 60)
    If 120 ≤ H < 240, B = m
    If 240 ≤ H < 300, B = m + (M - m) ((H - 240) / 60)
    If 300 ≤ H ≤ 360, B = M
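A corresponding sketch of the inverse transform, again for a single pixel and using the same hypothetical naming:

    def ihs_to_rgb(I, H, S):
        """I, S in 0-1, H in 0-360; returns R, G, B in 0-1 (Conrac 1980 formulation)."""
        M = I * (1 + S) if I <= 0.5 else I + S - I * S
        m = 2 * I - M

        def ramp(degrees):               # rising edge, 60 degrees wide
            return m + (M - m) * (degrees / 60.0)

        if H < 60:
            R, G, B = ramp(H), m, M
        elif H < 120:
            R, G, B = M, m, ramp(120 - H)
        elif H < 180:
            R, G, B = M, ramp(H - 120), m
        elif H < 240:
            R, G, B = ramp(240 - H), M, m
        elif H < 300:
            R, G, B = m, M, ramp(H - 240)
        else:
            R, G, B = m, ramp(360 - H), M
        return R, G, B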

Indices

Indices are used to create output images by mathematically combining the DN values of different bands. These may be simplistic:

    (Band X - Band Y)

or more complex:

    (Band X - Band Y) / (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

    Band X / Band Y

These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.

See "CHAPTER 1: Raster Data" for more information on the absorption/reflection spectra.

Applications

• Indices are used extensively in mineral exploration and vegetation analyses to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences which cannot be observed in the display of the original color bands.
• Indices can also be used to minimize shadow effects in satellite and aircraft multispectral images.
• Black and white images of individual indices or a color combination of three ratios may be generated. Certain combinations of TM ratios are routinely used by geologists for interpretation of Landsat imagery for mineral type. For example: Red 5/7, Green 5/4, Blue 3/1.

Integer Scaling Considerations

The output images obtained by applying indices are generally created in floating point to preserve all numerical precision. If there are two bands, A and B, then:

    ratio = A/B

If A >> B (much greater than), then a normal integer scaling would be sufficient. If A > B and A is never much greater than B, scaling might be a problem in that the data range might only go from 1 to 2 or 1 to 3. Integer scaling in this case would give very little contrast.

For cases in which A < B or A << B, integer scaling would always truncate to 0, and all fractional data would be lost. A multiplication constant factor would also not be very effective in seeing the data contrast between 0 and 1, which may very well be a substantial part of the data image.

One approach to handling the entire ratio range is to actually process the function:

    ratio = atan(A/B)

This gives a better representation for A/B < 1 as well as for A/B > 1 (Faust 1992).

Index Examples

The following are examples of indices which have been preprogrammed in the Image Interpreter in ERDAS IMAGINE:

• IR/R (infrared/red)
• SQRT (IR/R)
• Vegetation Index = IR - R
• Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)
• Transformed NDVI (TNDVI) = SQRT ((IR - R) / (IR + R) + 0.5)
• Iron Oxide = TM 3/1
• Clay Minerals = TM 5/7
• Ferrous Minerals = TM 5/4
• Mineral Composite = TM 5/7, 5/4, 3/1
• Hydrothermal Composite = TM 5/7, 3/1, 4/3

Source: Modified from Sabins 1987, Jensen 1996, Tucker 1979

The following table shows the infrared (IR) and red (R) band for some common sensors (Tucker 1979, Jensen 1996):

    Sensor         IR Band    R Band
    Landsat MSS       7          5
    SPOT XS           3          2
    Landsat TM        4          3
    NOAA AVHRR        2          1

Image Algebra

Image algebra is a general term used to describe operations that combine the pixels of two or more raster layers in mathematical combinations. For example, the calculation:

    (infrared band) - (red band)
    DNir - DNred

yields a simple, yet very useful, measure of the presence of vegetation. At the other extreme is the Tasseled Cap calculation (described in the preceding pages), which uses a more complicated mathematical combination of as many as six bands to define vegetation.

Band ratios, such as:

    TM 5 / TM 7 = clay minerals

are also commonly used. These are derived from the absorption spectra of the material of interest. The numerator is a baseline of background absorption and the denominator is an absorption peak.

See "CHAPTER 1: Raster Data" for more information on absorption/reflection spectra.

The Normalized Difference Vegetation Index (NDVI) is a combination of addition, subtraction, and division:

    NDVI = (IR - R) / (IR + R)
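For example, a minimal NumPy sketch of the NDVI calculation, using the IR and R band assignments from the table above (the array names are hypothetical):

    import numpy as np

    def ndvi(ir_band, red_band):
        """NDVI = (IR - R) / (IR + R), computed in floating point to preserve precision."""
        ir = ir_band.astype(np.float64)
        red = red_band.astype(np.float64)
        total = ir + red
        result = np.zeros_like(ir)
        np.divide(ir - red, total, out=result, where=total != 0)
        return result

    # For Landsat TM, IR is band 4 and R is band 3:
    # vi = ndvi(tm_band4, tm_band3)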

Hyperspectral Image Processing

Hyperspectral image processing is in many respects simply an extension of the techniques used for multi-spectral datasets; indeed, many of the techniques or algorithms currently used for multi-spectral datasets are logically applicable. What is of relevance in evaluating these datasets is not the number of bands per se, but the spectral band-width of the bands (channels). As the bandwidths get smaller, it becomes possible to view the dataset as an absorption spectrum rather than a collection of discontinuous bands. Analysis of the data in this fashion is termed "imaging spectrometry". At present, there is no set number of bands beyond which a dataset is hyperspectral (see the discussion of Figure 7 on page 12 of this manual).

A hyperspectral image data set is recognized as a three-dimensional pixel array. As in a traditional raster image, the x-axis is the column indicator and the y-axis is the row indicator. The z-axis is the band number or, more correctly, the wavelength of that band (channel). A hyperspectral image can be visualized as shown in Figure 65.

Figure 65: Hyperspectral Data Axes

A dataset with narrow contiguous bands can be plotted as a continuous spectrum and compared to a library of known spectra using full profile spectral pattern fitting algorithms. It is possible to obtain spectral libraries of common materials; the JPL and USGS mineral spectra libraries are included in IMAGINE. These are laboratory measured reflectance spectra of reference minerals, often of high purity and defined particle size. The spectrometer is commonly purged with pure nitrogen to avoid absorbance by atmospheric gases.

A serious complication in using this approach is assuring that all spectra are corrected to the same background. The remote sensor records an image after the sunlight has (twice) passed through the atmosphere with variable and unknown amounts of water vapor, CO2, etc. (This atmospheric absorbance curve is shown in Figure 4.) The unknown atmospheric absorbances superimposed upon the Earth surface reflectances make comparison to laboratory spectra, or to spectra taken with a different atmosphere, inexact. Indeed, it has been shown that atmospheric composition can vary within a single scene. This complicates the use of spectral signatures even within one scene. Atmospheric absorption and scattering is discussed on pages 6 through 10 of this manual.

A number of approaches have been advanced to help compensate for this atmospheric contamination of the spectra; it is desired to convert the spectra recorded by the sensor into a form that can be compared to known reference spectra. Two specific techniques, Internal Average Relative Reflectance (IARR) and Log Residuals, are implemented in IMAGINE 8.3. These are introduced briefly on page 130 of this manual for the general case. They have the advantage of not requiring auxiliary input information; the correction parameters are scene-derived. The disadvantage is that they produce relative reflectances (i.e., they can be compared to reference spectra in a semi-quantitative manner only).

Normalize

Pixel albedo is affected by sensor look angle and local topographic effects. For airborne sensors this look angle effect can be large across a scene; it is less pronounced for satellite sensors. Some scanners look to both sides of the aircraft. To help minimize these effects, an "equal area normalization" algorithm can be applied (Zamudio and Atkinson 1990). This calculation shifts each (pixel) spectrum to the same overall average brightness. Correctly applied, this normalization algorithm helps remove albedo variations and topographic effects.

This enhancement must be used with a consideration of whether that assumption is valid for the scene. For an image which contains two (or more) distinctly different regions (e.g., half ocean and half forest), the difference in average scene luminance between the two half-scenes can be large, and the assumption may not hold.

IAR Reflectance

This technique calculates a relative reflectance by dividing each spectrum (pixel) by the scene average spectrum (Kruse 1988). The algorithm is based on the assumption that this scene average spectrum is largely composed of the atmospheric contribution and that the atmosphere is uniform across the scene. However, these assumptions are not always valid. In particular, the average spectrum could contain absorption features related to target materials of interest; the algorithm could then overcompensate for (i.e., remove) these absorbance features. The average spectrum should be visually inspected to check for this possibility. Properly applied, this technique can remove the majority of atmospheric effects.
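The IAR Reflectance idea can be sketched in NumPy for a data cube stored as a (bands, rows, cols) array; this is an illustration, not the IMAGINE implementation.

    import numpy as np

    def iar_reflectance(cube):
        """Divide each pixel spectrum by the scene average spectrum (Kruse 1988)."""
        cube = cube.astype(np.float64)
        scene_average = cube.mean(axis=(1, 2))          # one value per band
        scene_average[scene_average == 0] = 1.0         # guard against zero-valued bands
        return cube / scene_average[:, None, None]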

Log Residuals

The Log Residuals technique was originally described by Green and Craig (1985), but has been variously modified by researchers. The version implemented here is similar to the approach of Lyon (1987). This algorithm corrects the image for atmospheric absorption, systemic instrumental variation, and illuminance differences between pixels. The algorithm can be conceptualized as:

    Output Spectrum = (input spectrum) - (average spectrum) - (pixel brightness) + (image brightness)

All parameters in the above equation are in logarithmic space, hence the name.

Rescale

Many hyperspectral scanners record the data in a format larger than 8-bit. In addition, many of the calculations used to correct the data will be performed with a floating point format to preserve precision. At some point, it will be advantageous to compress the data back into an 8-bit range for effective storage and/or display. This algorithm is designed to maintain the three-dimensional integrity of the data values: when rescaling a data cube, it is necessary to consider all data values within the data cube, not just those within the layer of interest. Any bit format can be input; the output image will always be 8-bit.

When rescaling these data sets, a decision must be made as to which bands to include in the rescaling. Clearly, a "bad" band (i.e., a low S/N layer) should be excluded. Some sensors image in different regions of the electromagnetic (EM) spectrum (e.g., reflective and thermal infra-red, or long- and short-wave reflective infra-red). When rescaling data to be used for imaging spectrometry analysis, it may be appropriate to rescale each EM region separately. The bands to include can be input using the Select Layer option in the IMAGINE Viewer.
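The rescale step can be sketched as a single linear mapping computed from the selected bands only and then applied to every band, so the cube keeps its three-dimensional integrity. The min-max scaling shown here is an illustrative assumption, not the exact IMAGINE algorithm.

    import numpy as np

    def rescale_cube(cube, include_bands):
        """Linearly compress a data cube to 0-255 using the min/max of the selected bands."""
        cube = cube.astype(np.float64)
        selected = cube[include_bands]                 # bands used to derive the scaling
        low, high = selected.min(), selected.max()
        scaled = (cube - low) / (high - low) * 255.0   # same mapping applied to all bands
        return np.clip(scaled, 0, 255).astype(np.uint8)

    # Example: exclude hypothetical bad bands (0-based indices) from the calculation
    # keep = [b for b in range(224) if b not in set(range(26, 29)) | set(range(46, 56))]
    # eight_bit = rescale_cube(data_cube, keep)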

Green and Craig 1985. Automatic Log Residuals — Implements the following algorithms: Normalize.Use this dialog to rescale the image Enter the bands to be included in the calculation here Figure 66: Rescale GUI NOTE: Bands 26 through 28 and 46 through 55 have been deleted from the calculation. IAR Reflectance. as follows: • • Automatic Relative Reflectance — Implements the following algorithms: Normalize. Rescale. Two common processing sequences have been programmed as single automatic enhancements. Kruse 1988.The deleted bands will still be rescaled. 170 ERDAS . Processing Sequence The above (and other) processing steps are utilized to convert the raw image into a form that is easier to interpret. Lyon 1987). Rescale. but they will not be factored into the rescale calculation. Log Residuals. This interpretation often involves comparing the imagery. either visually or automatically. At present there is no widely accepted standard processing sequence to achieve this. to laboratory spectra or other known "end-member" spectra. although some have been advanced in the scientific literature (Zamudio and Atkinson 1990.

Spectrum Average

In some instances, it may be desirable to average together several pixels. For example, in preparing reference spectra for classification, or to save in the Spectral Library, an average spectrum may be more representative than a single pixel. Note that to implement this function it is necessary to define which pixels to average using the IMAGINE AOI tools. This enables the user to average any set of pixels that are defined; they do not need to be contiguous, and there is no limit on the number of pixels averaged. The output from this program is a single pixel with the same number of input bands as the original image.

Figure 67: Spectrum Average GUI (an AOI polygon defines the Area of Interest to be averaged)

Signal to Noise

The signal-to-noise (S/N) ratio is commonly used to evaluate the usefulness or validity of a particular band. In this implementation, S/N is defined as Mean/Std.Dev. in a 3 × 3 moving window. This can be used as a sensor evaluation tool. After running this function on a data set, each layer in the output image should be visually inspected to evaluate its suitability for inclusion into the analysis. Layers deemed unacceptable can be excluded from the processing by using the Select Layers option of the various Graphical User Interfaces (GUIs). A short sketch of this calculation follows Figure 68.

Mean per Pixel

This algorithm outputs a single band, regardless of the number of input bands. By visually inspecting this output image, it is possible to see if particular pixels are "outside the norm". For example, a CCD detector could have several sites (pixels) that are dead or have an anomalous response; these would be revealed in the Mean per Pixel image. While this does not mean that these pixels are incorrect, they should be evaluated in this context. This, too, can be used as a sensor evaluation tool.

Profile Tools

To aid in visualizing this three-dimensional data cube, three basic tools have been designed:

• Spectral Profile — a display that plots the reflectance spectrum of a designated pixel, as shown in Figure 68.

Figure 68: Spectral Profile
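A minimal sketch of the per-band signal-to-noise measure described above (Mean/Std.Dev. in a 3 × 3 moving window), using SciPy to build the local statistics; it is not the IMAGINE implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def band_signal_to_noise(band, window=3):
        """Return a per-pixel S/N image: local mean divided by local standard deviation."""
        band = band.astype(np.float64)
        local_mean = uniform_filter(band, size=window)
        local_sq_mean = uniform_filter(band * band, size=window)
        local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
        return np.divide(local_mean, local_std,
                         out=np.zeros_like(band), where=local_std > 0)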

Figure 70: Three-Dimensional Spatial Profile Field Guide 173 .Hyperspectral Image Processing • Spatial Profile — a display that plots spectral information along a user-defined polyline. The data can be displayed two-dimensionally for a single band. as in Figure 69. Figure 69: Two-Dimensional Spatial Profile The data can also be displayed three-dimensionally for multiple bands. as in Figure 70.

• Surface Profile — a display that allows the operator to designate an x,y area and view any selected layer, z.

Figure 71: Surface Profile

Wavelength Axis

Data tapes containing hyperspectral imagery commonly designate the bands as a simple numerical sequence. When plotted using the profile tools, this yields an x-axis labeled as 1, 2, 3, 4, etc. Elsewhere on the tape, or in the accompanying documentation, is a file which lists the center frequency and width of each band. This information should be linked to the image intensity values for accurate analysis or comparison to other spectra, such as the Spectral Libraries.

Spectral Library

As discussed on page 167, two spectral libraries are presently included in the software package (JPL and USGS). In addition, it is possible to extract spectra (pixels) from a data set or prepare average spectra from an image and save these in a user-derived spectral library. This library can then be used for visual comparison with other image spectra, or it can be used as input signatures in a classification.

Classification

The advent of datasets with very large numbers of bands has pressed the limits of the "traditional classifiers" such as Isodata, Maximum Likelihood, and Minimum Distance, but has not obviated their usefulness. Much research has been directed toward the use of Artificial Neural Networks (ANN) to more fully utilize the information content of hyperspectral images (Merenyi, Taranik, Monor, and Farrand 1996). To date, however, these advanced techniques have proven to be only marginally better, at a considerable cost in complexity and computation. For certain applications, both Maximum Likelihood (Benediktsson, Swain, Ersoy, and Hong 1990) and Minimum Distance (Merenyi, Taranik, Monor, and Farrand 1996) have proven to be appropriate.

"CHAPTER 6: Classification" contains a detailed discussion of these classification techniques.

A second category of classification techniques utilizes the imaging spectroscopy model for approaching hyperspectral datasets. This approach requires a library of possible end-member materials. These can be from laboratory measurements using a scanning spectrometer and reference standards (Clark, Gallagher, and Swayze 1990); the JPL and USGS libraries are compiled this way. Or the reference spectra (signatures) can be scene-derived from either the scene under study or another similar scene (Adams, Smith, and Gillespie 1989).

System Requirements

Because of the large number of bands, a hyperspectral dataset can be surprisingly large. For example, an AVIRIS scene is only 512 × 614 pixels in dimension, which seems small. However, when multiplied by 224 bands (channels) and 16 bits, it requires over 140 megabytes of data storage space. Processing this scene will require correspondingly large swap and temp space. In practice, it has been found that a 48 Mb memory board and 100 Mb of swap space is a minimum requirement for efficient processing. Temporary file space requirements will, of course, depend upon the process being run.

Fourier Analysis

Image enhancement techniques can be divided into two basic categories: point and neighborhood. Point techniques enhance the pixel based only on its value, with no concern for the values of neighboring pixels. These techniques include contrast stretches (non-adaptive), classification, level slices, etc. Neighborhood techniques enhance a pixel based on the values of surrounding pixels. As a result, these techniques require the processing of a possibly large number of pixels for each output pixel. The most common way of implementing these enhancements is via a moving window convolution. However, as the size of the moving window increases, the number of requisite calculations becomes enormous. An enhancement that requires a convolution operation in the spatial domain can be implemented as a simple multiplication in frequency space—a much faster calculation.

In ERDAS IMAGINE, the Fast Fourier Transform (FFT) is used to convert a raster image from the spatial (normal) domain into a frequency domain image. The FFT calculation converts the image into a series of two-dimensional sine waves of various frequencies. The Fourier image itself cannot be easily viewed, but the magnitude of the image can be calculated, which can then be displayed either in the IMAGINE Viewer or in the FFT Editor.

Analysts can edit the Fourier image to reduce noise or remove periodic features, such as striping. Once the Fourier image is edited, it is then transformed back into the spatial domain by using an inverse Fast Fourier Transform. The result is an enhanced version of the original image.

This section focuses on the Fourier editing techniques available in the ERDAS IMAGINE FFT Editor. Some rules and guidelines for using these tools are presented in this document. Also included are some examples of techniques that will generally work for specific applications, such as striping.

NOTE: You may also want to refer to the works cited at the end of this section for more information.

The basic premise behind a Fourier transform is that any one-dimensional function, f(x) (which might be a row of pixels), can be represented by a Fourier series consisting of some combination of sine and cosine terms and their associated coefficients. For example, a line of pixels with a high spatial frequency gray scale pattern might be represented in terms of a single coefficient multiplied by a sin(x) function. High spatial frequencies are those that represent frequent gray scale changes in a short pixel distance. Low spatial frequencies represent infrequent gray scale changes that occur gradually over a relatively large number of pixel distances. A more complicated function, f(x), might have to be represented by many sine and cosine terms with their associated coefficients.

Figure 72: One-Dimensional Fourier Analysis (these graphics are for illustration purposes only and are not mathematically accurate)

Figure 72 shows how a function f(x) can be represented as a linear combination of sine and cosine terms. The Fourier transform of that same function is also shown. A Fourier transform is a linear transformation that allows calculation of the coefficients necessary for the sine and cosine terms to adequately represent the image. This theory is used extensively in electronics and signal processing, where electrical signals are continuous and not discrete.

To handle images, which consist of many one-dimensional rows of pixels, a discrete Fourier transform (DFT) has been developed. Because of the computational load in calculating the values for all the sine and cosine terms along with the coefficient multiplications, a highly efficient version of the DFT was developed and called the Fast Fourier Transform (FFT). To handle two-dimensional images, a two-dimensional FFT has been devised that incrementally uses one-dimensional FFTs in each direction and then combines the results. These images are symmetrical about the origin.

Applications

Fourier transformations are typically used for the removal of noise such as striping, spots, or vibration in imagery by identifying periodicities (areas of high spatial frequency). Fourier editing can be used to remove regular errors in data such as those caused by sensor anomalies (e.g., striping). This analysis technique can also be used across bands as another form of pattern/feature recognition.

Fast Fourier Transform (FFT)

The Fast Fourier Transform (FFT) calculation is:

    F(u,v) ← Σ (x = 0 to M-1) Σ (y = 0 to N-1) [f(x,y) e^(-j2πux/M - j2πvy/N)]    0 ≤ u ≤ M-1, 0 ≤ v ≤ N-1

where:
M = the number of pixels horizontally
N = the number of pixels vertically
u,v = spatial frequency variables
e ≈ 2.71828, the natural logarithm base
j = the imaginary component of a complex number

Source: Modified from Oppenheim 1975 and Press 1988

The number of pixels horizontally and vertically must each be a power of two. If the dimensions of the input image are not a power of two, they are padded up to the next highest power of two. There is more information about this later in this section.

Images computed by this algorithm are saved with an .fft file extension. You should run a Fourier Magnitude transform on an .fft file before viewing it in the ERDAS IMAGINE Viewer. The FFT Editor automatically displays the magnitude without further processing.

Fourier Magnitude

The raster image generated by the FFT calculation is not an optimum image for viewing or editing. Each pixel of a Fourier image is a complex number (i.e., it has two components: real and imaginary). For display as a single image, these components are combined in a root-sum of squares operation. Also, since the dynamic range of Fourier spectra vastly exceeds the range of a typical display device, the Fourier Magnitude calculation involves a logarithmic function.

Finally, a Fourier image is symmetric about the origin (u,v = 0,0). If the origin is plotted at the upper left corner, the symmetry is more difficult to see than if the origin is at the center of the image. Therefore, in the Fourier magnitude image, the origin is shifted to the center of the raster array.

In this transformation, each .fft layer is processed twice. First, the maximum magnitude, |x|max, is computed. Then, the following computation is performed for each FFT element magnitude x:

    y(x) = 255.0 ln [ (x / |x|max) (e - 1) + 1 ]

where:
x = input FFT element
y = the normalized log magnitude of the FFT element
|x|max = the maximum magnitude
e ≈ 2.71828, the natural logarithm base
| | = the magnitude operator

This function was chosen so that y would be proportional to the logarithm of a linear function of x, with y(0) = 0 and y(|x|max) = 255.

Source: ERDAS
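A minimal NumPy sketch of this magnitude display (it does not read or write the .fft file format): compute the two-dimensional FFT, shift the origin to the center of the raster, and apply the logarithmic scaling defined above.

    import numpy as np

    def fourier_magnitude(band):
        """Return an 8-bit log-magnitude image of the 2D FFT, origin shifted to the center."""
        spectrum = np.fft.fftshift(np.fft.fft2(band.astype(np.float64)))
        magnitude = np.abs(spectrum)              # root-sum of squares of real and imaginary
        max_mag = magnitude.max()
        y = 255.0 * np.log((magnitude / max_mag) * (np.e - 1.0) + 1.0)
        return y.astype(np.uint8)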

In Figure 73, Image A is one band of a badly striped Landsat TM scene, and Image B is the Fourier Magnitude image derived from the Landsat image.

Figure 73: Example of Fourier Magnitude

Note that although Image A has been transformed into Image B, these raster images are very different symmetrically. The origin of Image A is at (x,y) = (0,0) in the upper left corner. In Image B, the origin (u,v) = (0,0) is in the center of the raster. The low frequencies are plotted near this origin, while the higher frequencies are plotted further out. Generally, the majority of the information in an image is in the low frequencies. This is indicated by the bright area at the center (origin) of the Fourier image.

It is important to realize that a position in a Fourier image, designated as (u,v), does not always represent the same frequency, because it depends on the size of the input raster image. A large spatial domain image contains components of lower frequency than a small spatial domain image. As mentioned, the units of spatial frequency are inverse length, e.g., m-1. The sampling increments in the spatial and frequency domains are related by:

    Δu = 1 / (M Δx)
    Δv = 1 / (N Δy)

where:
M = horizontal image size in pixels
N = vertical image size in pixels
Δx = pixel size
Δy = pixel size

For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5m) into a Fourier image:

    Δu = Δv = 1 / (512 × 28.5 m) = 6.85 × 10-5 m-1

    u or v        0       1                  2
    Frequency     0       6.85 × 10-5 m-1    13.7 × 10-5 m-1

= 3. 128 × 128. Three possible solutions are available in ERDAS IMAGINE: • • • Subset the image. For example: 300 512 400 mean value 512 Figure 74: The Padding Technique The padding technique is automatically performed by the FFT program. Pad the image — the input raster is increased in size to the next power of two by imbedding it in a field of the mean value of the entire input image. It produces a minimum of artifacts in the output Fourier image.Fourier Analysis If the Landsat TM image was 1024 × 1024: –5 –1 1 ∆u = ∆v = ---------------------------.42 × 10-5 6. etc.e. Field Guide 181 . the frequency represented by a (u.5 u or v 0 1 2 0 Frequency 3. the sample images are 512 × 512 and 1024 × 1024—powers of two.) no padding is used. For the above calculation. 64 × 64. as noted above. If the image is subset using a power of two (i. Resample the image so that its height and width are powers of two. input images will usually not meet this criterion. These were selected because the FFT calculation requires that the height and width of the input image be a power of two (although the image need not be square).v) position depends on the size of the input image.42 × 10 m 1024 × 28. In practice.. 64 × 128.85 × 10-5 So.

Inverse Fast Fourier Transform (IFFT)

The Inverse Fast Fourier Transform (IFFT) computes the inverse two-dimensional Fast Fourier Transform of the stored spectrum.

• The input file must be in the compressed .fft format described earlier (i.e., output from the Fast Fourier Transform or FFT Editor).
• If the original image was padded by the FFT program, the padding will automatically be removed by IFFT.
• This program creates (and deletes, upon normal termination) a temporary file large enough to contain one entire band of .fft data.

The specific expression calculated by this program is:

    f(x,y) ← 1/(N1 N2) Σ (u = 0 to M-1) Σ (v = 0 to N-1) [F(u,v) e^(j2πux/M + j2πvy/N)]    0 ≤ x ≤ M-1, 0 ≤ y ≤ N-1

where:
M = the number of pixels horizontally
N = the number of pixels vertically
u,v = spatial frequency variables
e ≈ 2.71828, the natural logarithm base

Source: Modified from Oppenheim 1975 and Press 1988

Images computed by this algorithm are saved with an .img file extension by default.

Filtering

Operations performed in the frequency (Fourier) domain can be visualized in the context of the familiar convolution function. The mathematical basis of this interrelationship is the convolution theorem, which states that a convolution operation in the spatial domain is equivalent to a multiplication operation in the frequency domain:

    g(x,y) = h(x,y) * f(x,y)  ⇒  G(u,v) = H(u,v) × F(u,v)

where:
f(x,y) = the input image
h(x,y) = the position-invariant operation (convolution kernel)
g(x,y) = the output image
G, F, H = the Fourier transforms of g, f, h

The names high-pass, low-pass, etc., indicate that these convolution functions derive from the frequency domain.

Low-Pass Filtering

The simplest example of this relationship is the low-pass kernel. The name, low-pass kernel, is derived from a filter that would pass low frequencies and block (filter out) high frequencies. In practice, this is easily achieved in the spatial domain by the M = N = 3 kernel:

    1 1 1
    1 1 1
    1 1 1

Obviously, as the size of the image and, particularly, the size of the low-pass kernel increases, the calculation becomes more time-consuming. Depending on the size of the input image and the size of the kernel, it can be faster to generate a low-pass image via Fourier processing. Figure 75 compares direct and Fourier domain processing for finite area convolution.

Figure 75: Comparison of Direct and Fourier Domain Processing (size of neighborhood for calculation vs. size of input image)

Source: Pratt 1991

In the Fourier domain, the low-pass operation is implemented by attenuating the pixels whose frequencies satisfy:

    u² + v² > D0²

D0 is frequently called the "cutoff frequency." As mentioned, the low-pass information is concentrated toward the origin of the Fourier image. Thus, a smaller radius (r) has the same effect as a larger N (where N is the size of a kernel) in a spatial domain low-pass convolution.


As was pointed out earlier, the frequency represented by a particular u,v (or r) position depends on the size of the input image. Thus, a low-pass operation of r = 20 will be equivalent to a spatial low-pass of various kernel sizes, depending on the size of the input image. For example:

    Image Size     Fourier Low-Pass r =     Convolution Low-Pass N =
    64 × 64        50                       3
    64 × 64        30                       3.5
    64 × 64        20                       5
    64 × 64        10                       9
    64 × 64        5                        14
    128 × 128      20                       13
    128 × 128      10                       22
    256 × 256      20                       25
    256 × 256      10                       42

This table shows that using a window on a 64 × 64 Fourier image with a radius of 50 as the cutoff is the same as using a 3 × 3 low-pass kernel on a 64 × 64 spatial domain image.

High-Pass Filtering

Just as images can be smoothed (blurred) by attenuating the high-frequency components of an image using low-pass filters, images can be sharpened and edge-enhanced by attenuating the low-frequency components using high-pass filters. In the Fourier domain, the high-pass operation is implemented by attenuating pixels whose frequencies satisfy:

    u² + v² < D0²

Windows

The attenuation discussed above can be done in many different ways. In ERDAS IMAGINE Fourier processing, five window functions are provided to achieve different types of attenuation:

• Ideal
• Bartlett (triangular)
• Butterworth
• Gaussian
• Hanning (cosine)

Each of these windows must be defined when a frequency domain process is used. This application is perhaps easiest understood in the context of the high-pass and low-pass filter operations. Each window is discussed in more detail below.

Ideal

The simplest low-pass filtering is accomplished using the ideal window, so named because its cutoff point is absolute:

    H(u,v) = 1 if D(u,v) ≤ D0
    H(u,v) = 0 if D(u,v) > D0

Figure 76: An Ideal Cross Section (gain as a function of frequency D(u,v), with the cutoff at D0)

Note that in Figure 76 the cross section is "ideal." All frequencies inside a circle of radius D0 are retained completely (passed), and all frequencies outside the radius are completely attenuated. The point D0 is termed the cutoff frequency. High-pass filtering using the ideal window looks like the illustration below.

    H(u,v) = 0 if D(u,v) ≤ D0
    H(u,v) = 1 if D(u,v) > D0

Figure 77: High-Pass Filtering Using the Ideal Window

All frequencies inside a circle of radius D0 are completely attenuated, and all frequencies outside the radius are retained completely (passed). A major disadvantage of the ideal filter is that it can cause "ringing" artifacts, particularly if the radius (r) is small. The smoother functions (i.e., Butterworth, Hanning, etc.) minimize this effect.

Bartlett

Filtering using the Bartlett window is a triangular function, as shown in the low- and high-pass cross sections in Figure 78.

Figure 78: Filtering Using the Bartlett Window

Butterworth, Gaussian, and Hanning

The Butterworth, Gaussian, and Hanning windows are all "smooth" and greatly reduce the effect of ringing. The differences between them are minor and are of interest mainly to experts. For most "normal" types of Fourier image enhancement, they are essentially interchangeable.

The Butterworth window reduces the ringing effect because it does not contain abrupt changes in value or slope. The low- and high-pass cross sections in Figure 79 illustrate this.

Figure 79: Filtering Using the Butterworth Window

The equation for the low-pass Butterworth window is:

    H(u,v) = 1 / (1 + [D(u,v) / D0]^2n)

NOTE: The Butterworth window approaches its window center gain asymptotically.

The equation for the Gaussian low-pass window is:

    H(u,v) = e^-(x / D0)²

The equation for the Hanning low-pass window is:

    H(u,v) = 1/2 [1 + cos(πx / 2D0)]    for 0 ≤ x ≤ 2D0
    H(u,v) = 0                          otherwise
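To make the window idea concrete, the following sketch (not the FFT Editor itself) builds a low-pass Butterworth window from the equation above and applies it by multiplication in the frequency domain, then returns to the spatial domain with the inverse FFT. The cutoff and order values are arbitrary examples.

    import numpy as np

    def butterworth_low_pass(band, cutoff=30.0, order=2):
        """Low-pass filter a band by multiplying its spectrum with a Butterworth window."""
        rows, cols = band.shape
        spectrum = np.fft.fftshift(np.fft.fft2(band.astype(np.float64)))
        # Distance D(u,v) of each frequency sample from the centered origin
        v, u = np.indices((rows, cols))
        D = np.hypot(u - cols / 2.0, v - rows / 2.0)
        H = 1.0 / (1.0 + (D / cutoff) ** (2 * order))   # H(u,v) from the equation above
        filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * H))
        return np.real(filtered)

    # A high-pass version simply uses 1 - H as the window.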

Fourier Noise Removal

Occasionally, images are corrupted by "noise" that is periodic in nature. An example of this is the scan lines that are present in some TM images. When these images are transformed into Fourier space, the periodic line pattern becomes a radial line. The ERDAS IMAGINE Fourier Analysis functions provide two main tools for reducing noise in images:

• Editing
• Automatic removal of periodic noise

Editing

In practice, it has been found that radial lines centered at the Fourier origin (u,v = 0,0) are best removed using back-to-back wedges centered at (0,0). It is possible to remove these lines using very narrow wedges with the Ideal window. However, the sudden transitions resulting from zeroing out sections of a Fourier image will cause a ringing of the image when it is transformed back into the spatial domain. This effect can be lessened by using a less abrupt window, such as Butterworth.

Other types of noise can produce artifacts, such as lines not centered at u,v = 0,0 or circular spots in the Fourier image. These can be removed using the tools provided in the IMAGINE FFT Editor. As these artifacts are always symmetrical in the Fourier magnitude image, editing tools operate on both components simultaneously. The Fourier Editor contains tools that enable the user to attenuate a circular or rectangular region anywhere on the image. Editing requires operator interaction and a bit of trial and error.

Automatic Periodic Noise Removal

The use of the FFT Editor, as described above, enables the user to selectively and accurately remove periodic noise from any image. The automatic periodic noise removal algorithm has been devised to address images degraded uniformly by striping or other periodic anomalies. Use of this algorithm requires a minimum of input from the user.

The image is first divided into 128 x 128 pixel blocks. The Fourier Transform of each block is calculated, and the log-magnitudes of each FFT block are averaged. The averaging removes all frequency domain quantities except those which are present in each block (i.e., some sort of periodic interference). The average power spectrum is then used as a filter to adjust the FFT of the entire image. When the inverse Fourier Transform is performed, the result is an image which should have any periodic noise eliminated or significantly reduced. This method is partially based on the algorithms outlined in Cannon et al 1983 and Srinivasan et al 1988.

Select the Periodic Noise Removal option from Image Interpreter to use this function.

Homomorphic Filtering

Homomorphic filtering is based upon the principle that an image may be modeled as the product of illumination and reflectance components:

    I(x,y) = i(x,y) × r(x,y)

where:
I(x,y) = image intensity (DN) at pixel x,y
i(x,y) = illumination of pixel x,y
r(x,y) = reflectance at pixel x,y

The illumination image is a function of lighting conditions, shadows, etc. The reflectance image is a function of the object being imaged. A log function can be used to separate the two components (i and r) of the image:

    ln I(x,y) = ln i(x,y) + ln r(x,y)

This transforms the image from multiplicative to additive superposition. With the two component images separated, any linear operation can be performed. In this application, the image is now transformed into Fourier space. Because the illumination component usually dominates the low frequencies, while the reflectance component dominates the higher frequencies, the image may be effectively manipulated in the Fourier domain. By using a filter on the Fourier image which increases the high-frequency components, the reflectance image (related to the target material) may be enhanced, while the illumination image (related to the scene illumination) is de-emphasized. By applying an inverse fast Fourier transform followed by an exponential function, the enhanced image is returned to the normal spatial domain.
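A minimal sketch of this sequence (log, FFT, high-frequency emphasis, inverse FFT, exponential), mirroring the Butterworth window from the previous sketch; the gain and cutoff values are illustrative assumptions, not the Image Interpreter defaults.

    import numpy as np

    def homomorphic_filter(band, cutoff=30.0, low_gain=0.5, high_gain=2.0, order=2):
        """Boost reflectance (high frequencies) and suppress illumination (low frequencies)."""
        log_image = np.log1p(band.astype(np.float64))          # multiplicative -> additive
        spectrum = np.fft.fftshift(np.fft.fft2(log_image))
        rows, cols = band.shape
        v, u = np.indices((rows, cols))
        D = np.hypot(u - cols / 2.0, v - rows / 2.0)
        low_pass = 1.0 / (1.0 + (D / cutoff) ** (2 * order))   # Butterworth low-pass window
        H = low_gain * low_pass + high_gain * (1.0 - low_pass) # i decreased, r increased
        filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * H))
        return np.expm1(np.real(filtered))                     # back out of log space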

The flow chart in Figure 80 summarizes the homomorphic filtering process in ERDAS IMAGINE.

Figure 80: Homomorphic Filtering Process (input image → log → FFT → Butterworth filter, with the illumination component i decreased and the reflectance component r increased → IFFT → exponential → enhanced image)

Select the Homomorphic Filter option from Image Interpreter to use this function.

As mentioned earlier, if an input image is not a power of two, the ERDAS IMAGINE Fourier analysis software will automatically pad the image to the next largest size to make it a power of two. For manual editing, this causes no problems. However, in automatic processing, such as the homomorphic filter, the artifacts induced by the padding may have a deleterious effect on the output image. For this reason, it is recommended that images that are not a power of two be subset before being used in an automatic process.

A detailed description of the theory behind Fourier series and Fourier transforms is given in Gonzalez and Wintz (1977). See also Oppenheim (1975) and Press (1988).

Radar Imagery Enhancement

The nature of the surface phenomena involved in radar imaging is inherently different from that of VIS/IR images. When VIS/IR radiation strikes a surface it is either absorbed, reflected, or transmitted. The absorption is based on the molecular bonds in the (surface) material; thus, this imagery provides information on the chemical composition of the target.

When radar microwaves strike a surface, they are reflected according to the physical and electrical properties of the surface, rather than the chemical composition. The strength of radar return is affected by slope, roughness, and vegetation cover. The conductivity of a target area is related to the porosity of the soil and its water content. Consequently, radar and VIS/IR data are complementary; they provide different information about the target area. An image in which these two data types are intelligently combined can present much more information than either image by itself.

See "CHAPTER 1: Raster Data" and "CHAPTER 3: Raster and Vector Data Sources" for more information on radar data.

This section describes enhancement techniques that are particularly useful for radar imagery. While these techniques can be applied to other types of image data, this discussion will focus on the special requirements of radar imagery enhancement. ERDAS IMAGINE Radar provides a sophisticated set of image processing tools designed specifically for use with radar imagery. This section will describe the functions of ERDAS IMAGINE Radar.

For information on the Radar Image Enhancement function, see the section on "Radiometric Enhancement" on page 132.

Speckle Noise

Speckle noise is commonly observed in radar (microwave or millimeter wave) sensing systems, although it may appear in any type of remotely sensed image utilizing coherent radiation. An active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a passive microwave sensor that simply receives the low-level radiation naturally emitted by targets. Like the light from a laser, the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area, these waves are no longer in phase because of the different distances they travel from targets, or single versus multiple bounce scattering. Once out of phase, radar waves can interact to produce light and dark pixels known as speckle noise.

Speckle noise must be reduced before the data can be effectively utilized. However, the image processing programs used to reduce speckle noise produce changes in the image. Since any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image, you should not rectify, correct to ground range, or in any way resample, enhance, or classify the pixel values before removing speckle noise. Functions using nearest neighbor are technically permissible, but not advisable.

Since different applications and different sensors necessitate different speckle removal models, IMAGINE Radar includes several speckle reduction algorithms:

• Mean filter
• Median filter
• Lee-Sigma filter
• Local Region filter
• Lee filter
• Frost filter
• Gamma-MAP filter

These filters are described below.

NOTE: Speckle noise in radar images cannot be completely removed. However, it can be reduced significantly.

Mean Filter

The Mean filter is a simple calculation. The pixel of interest (center of window) is replaced by the arithmetic average of all values within the window. This filter does not remove the aberrant (speckle) value—it averages it into the data. In theory, a bright and a dark pixel within the same window would cancel each other out. This consideration would argue in favor of a large window size (i.e., 7 × 7). However, averaging results in a loss of detail, which argues for a small window size. In general, this is the least satisfactory method of speckle reduction. It is useful for "quick and dirty" applications or those where loss of resolution is not a problem.

Median Filter

A better way to reduce speckle, but still simplistic, is the Median filter. This filter operates by arranging all DN (digital number) values within the user-defined window in sequential order. The pixel of interest is replaced by the value in the center of this distribution. A Median filter is useful for removing pulse or spike noise. Pulse functions of less than one-half of the moving window width are suppressed or eliminated; step functions or ramp functions are retained.
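As a minimal illustration of these two filters (not the IMAGINE Radar implementations), SciPy provides both as moving-window operations:

    import numpy as np
    from scipy.ndimage import uniform_filter, median_filter

    def mean_speckle_filter(band, window=3):
        """Replace each pixel with the arithmetic average of the moving window."""
        return uniform_filter(band.astype(np.float64), size=window)

    def median_speckle_filter(band, window=3):
        """Replace each pixel with the median of the moving window (edge preserving)."""
        return median_filter(band, size=window)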

Field Guide 193 . NE. NW.Radar Imagery Enhancement ORIGINAL MEAN FILTERED MEDIAN FILTERED Step Ramp Single Pulse Double Pulse Figure 81: Effects of Mean and Median Filters The Median filter is useful for noise suppression in any image. Figure 82 shows a 5 × 5 moving window and the regions of the Local Region filter. West. It does not affect step or ramp functions—it is an edge preserving filter (Pratt 1991). East. SW. South. An example of the application of the Median filter is the removal of dead-detector striping. such as is found in Landsat 4 TM data (Crippen 1989). Local Region Filter The Local Region filter divides the moving window into eight regions based on angular position (North. and SE). It is also applicable in removing pulse function noise which results from the inherent pulsing of microwaves.

Figure 82: Regions of Local Region Filter (the center pixel of the 5 × 5 moving window is the pixel of interest; the surrounding pixels are grouped into regions such as North, NE, and SW)

For each region, the variance is calculated as follows:

    Variance = Σ (DNx,y - Mean)² / (n - 1)

Source: Nagao 1979

The algorithm compares the variance values of the regions surrounding the pixel of interest. The pixel of interest is replaced by the mean of all DN values within the region with the lowest variance, i.e., the most uniform region. A region with low variance is assumed to have pixels minimally affected by wave interference, yet very similar to the pixel of interest. A region of low variance will probably be such for several surrounding pixels. The result is that the output image is composed of numerous uniform areas, the size of which is determined by the moving window size. The resultant output image is an appropriate input to a classification application. In practice, this filter can be utilized sequentially two or three times, increasing the window size.

Sigma and Lee Filters

The Sigma and Lee filters utilize the statistical distribution of the DN values within the moving window to estimate what the pixel of interest should be.

Speckle in imaging radar can be mathematically modeled as multiplicative noise with a mean of 1. The standard deviation of the noise can be mathematically defined as:

    √VARIANCE / MEAN = Coefficient of Variation = sigma (σ)

The coefficient of variation, as a scene-derived parameter, is used as an input parameter in the Sigma and Lee filters. It is also useful in evaluating and modifying visible/infrared (VIS/IR) data for input to a 4-band composite image or in preparing a 3-band ratio color composite (Crippen 1989).

It can be assumed that imaging radar data noise follows a Gaussian distribution. This would yield a theoretical value for Standard Deviation (SD) of .52 for 1-look radar data and SD = .26 for 4-look radar data. Table 17 gives theoretical coefficient of variation values for various look-average radar scenes:

Table 17: Theoretical Coefficient of Variation Values

    # of Looks (scenes)    Coef. of Variation Value
    1                      .52
    2                      .37
    3                      .30
    4                      .26
    6                      .21
    8                      .18

The Lee filters are based on the assumption that the mean and variance of the pixel of interest are equal to the local mean and variance of all pixels within the user-selected moving window.

The actual calculation used for the Lee filter is:

  DNout = [Mean] + K[DNin − Mean]

where:
  Mean = average of pixels in a moving window
  K    = Var(x) / ( [Mean]² σ² + Var(x) )

The variance of x [Var(x)] is defined as:

  Var(x) = ( [Variance within window] + [Mean within window]² ) / ( [Sigma]² + 1 ) − [Mean within window]²

Source: Lee 1981

The Sigma filter is based on the probability of a Gaussian distribution. Briefly, it is assumed that 95.5% of random samples are within a 2 standard deviation (2 sigma) range; thus, filtering at 2 standard deviations should remove this noise. This noise suppression filter replaces the pixel of interest with the average of all DN values within the moving window that fall within the designated range.

As with all the Radar speckle filters, the user must specify a moving window size, the center pixel of which is the pixel of interest. As with the Statistics filter, a coefficient of variation specific to the data set must be input. Finally, the user must specify how many standard deviations to use (2, 1, or 0.5) to define the accepted range.

The statistical filters (Sigma and Statistics) are logically applicable to any data set for preprocessing. Any sensor system has various sources of noise, resulting in a few erratic pixels. This is particularly true of experimental sensor systems that frequently have significant noise problems. In VIS/IR imagery, most natural scenes are found to follow a normal distribution of DN values.
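A minimal sketch of the Lee calculation is shown below, assuming the local mean and variance come from a moving window and that the scene coefficient of variation (sigma) is supplied by the user; the window size and sigma defaults are illustrative only.

```python
# Sketch of the Lee filter: DNout = Mean + K(DNin - Mean).
# "sigma" is the user-supplied scene coefficient of variation
# (0.52 is the theoretical 1-look value, 0.26 the 4-look value).
import numpy as np
from scipy import ndimage

def lee_filter(img, window=7, sigma=0.26):
    img = img.astype(float)
    local_mean = ndimage.uniform_filter(img, size=window)
    local_sq_mean = ndimage.uniform_filter(img * img, size=window)
    local_var = local_sq_mean - local_mean ** 2      # variance within window

    # Var(x) = ([variance within window] + [mean within window]^2) / (sigma^2 + 1)
    #          - [mean within window]^2
    var_x = (local_var + local_mean ** 2) / (sigma ** 2 + 1.0) - local_mean ** 2
    var_x = np.maximum(var_x, 0.0)                   # clamp small negative values

    k = var_x / (local_mean ** 2 * sigma ** 2 + var_x + 1e-12)
    return local_mean + k * (img - local_mean)
```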

These speckle filters can be used iteratively. The user must view and evaluate the resultant image after each pass (the data histogram is useful for this), and then decide if another pass is appropriate and what parameters to use on the next pass. For example, three passes of the Sigma filter with the following parameters is very effective when used with any type of data:

Table 18: Parameters for Sigma Filter

  Pass   Sigma Value   Sigma Multiplier   Window Size
  1      0.26          0.5                3 × 3
  2      0.26          1                  5 × 5
  3      0.26          2                  7 × 7

Similarly, there is no reason why successive passes must be of the same filter. The following sequence is useful prior to a classification:

Table 19: Pre-Classification Sequence

  Pass   Filter         Sigma Value   Sigma Multiplier   Window Size
  1      Lee            0.26          NA                 3 × 3
  2      Lee            0.26          NA                 5 × 5
  3      Local Region   NA            NA                 5 × 5 or 7 × 7

With all speckle reduction filters there is a trade-off between noise reduction and loss of resolution. Each data set and each application will have a different acceptable balance between these two factors. The ERDAS IMAGINE filters have been designed to be versatile and gentle in reducing noise (and resolution).

Frost Filter
The Frost filter is a minimum mean square error algorithm which adapts to the local statistics of the image. The local statistics serve as weighting parameters for the impulse response of the filter (moving window). This algorithm assumes that noise is multiplicative with stationary statistics. The formula used is:

  DN = Σ (over the n × n window) K α e^(−α|t|)

where:
  K   = normalization constant
  I   = local mean
  σ   = local variance
  σ'  = image coefficient of variation value
  |t| = |X − X0| + |Y − Y0|
  n   = moving window size
and
  α = (4 / (n σ'²)) (σ² / I²)

Source: Lopes, Touzi, Nezry, Laur 1990

Gamma-MAP Filter
The Maximum A Posteriori (MAP) filter attempts to estimate the original pixel DN, which is assumed to lie between the local average and the degraded (actual) pixel DN. MAP logic maximizes the a posteriori probability density function with respect to the original image.

Many speckle reduction filters (e.g., Lee, Lee-Sigma, Frost) assume a Gaussian distribution for the speckle noise. Recent work has shown this to be an invalid assumption. Natural vegetated areas have been shown to be more properly modeled as having a Gamma distributed cross section. This algorithm incorporates this assumption. The exact formula used is the cubic equation:

  Î³ − Ī Î² + σ (Î − DN) = 0

where:
  Î  = sought value
  Ī  = local mean
  DN = input value
  σ  = original image variance

Source: Frost, Stiles, Shanmugan, Holtzman 1982

Edge Detection
Edge and line detection are important operations in digital image processing. For example, geologists are often interested in mapping lineaments, which may be fault lines or bedding structures. For this purpose, edge and line detection is a major enhancement technique.

Edge detection could imply amplifying an edge, a line, or a spot (see Figure 83). In selecting an algorithm, it is first necessary to understand the nature of what is being enhanced.

Figure 83: One-dimensional, Continuous Edge and Line Models (ramp edge, step edge, line, and roof edge, each plotted as DN value against x or y, showing DN change, slope, slope midpoint, and width)

• Ramp edge — an edge modeled as a ramp, increasing in DN value from a low to a high level, or vice versa. Distinguished by DN change, slope, and slope midpoint.
• Step edge — a ramp edge with a slope angle of 90 degrees.
• Line — a region bounded on each end by an edge; width must be less than the moving window size.
• Roof edge — a line with a width near zero.

The models in Figure 83 represent ideal theoretical edges. However, real data values will vary to produce a more distorted edge, due to sensor noise, vibration, etc. (see Figure 84). There are no perfect edges in raster data, hence the need for edge detection algorithms.

Figure 84: A Very Noisy Edge Superimposed on an Ideal Edge (actual data values plotted with the ideal model step edge, intensity against position)

Edge detection algorithms can be broken down into 1st-order derivative and 2nd-order derivative operations. Figure 85 shows ideal one-dimensional edge and line intensity curves with the associated 1st-order and 2nd-order derivatives.

Figure 85: Edge and Line Derivatives (for a step edge and a line, the original feature g(x), its 1st derivative ∂g/∂x, and its 2nd derivative ∂²g/∂x² plotted against x)

The 1st-order derivative kernel(s) derives from the simple Prewitt kernel:

  ∂/∂x =   1   0  −1        ∂/∂y =   1   1   1
           1   0  −1                 0   0   0
           1   0  −1                −1  −1  −1

The 2nd-order derivative kernel(s) derives from Laplacian operators:

  ∂²/∂x² = −1   2  −1       ∂²/∂y² = −1  −1  −1
           −1   2  −1                 2   2   2
           −1   2  −1                −1  −1  −1

1st-Order Derivatives (Prewitt)
ERDAS IMAGINE Radar utilizes sets of template matching operators. These operators approximate to the eight possible compass orientations (North, South, East, West, Northeast, Northwest, Southeast, Southwest). The compass names indicate the slope direction creating maximum response. (Gradient kernels with zero weighting, i.e., the sum of the kernel coefficients is zero, have no output in uniform regions.) The detected edge will be orthogonal to the gradient direction.

To avoid positional shift, all operating windows are odd number arrays, with the center pixel being the pixel of interest. Extension of the 3 × 3 impulse response arrays to a larger size is not clear cut—different authors suggest different lines of rationale. For example, it may be advantageous to extend the 3-level (Prewitt 1970) to:

   1   1   1   1   1
   1   1   1   1   1
   0   0   0   0   0
  −1  −1  −1  −1  −1
  −1  −1  −1  −1  −1

or the following might be beneficial:

   2   2   2   2   2
   1   1   1   1   1
   0   0   0   0   0
  −1  −1  −1  −1  −1
  −2  −2  −2  −2  −2

or

   4   4   4   4   4
   2   2   2   2   2
   0   0   0   0   0
  −2  −2  −2  −2  −2
  −4  −4  −4  −4  −4

Larger template arrays provide greater noise immunity, but are computationally more demanding.

Zero-Sum Filters
A common type of edge detection kernel is a zero-sum filter. For this type of filter, the coefficients are designed to add up to zero. Examples of two zero-sum filters are given below:

  Sobel:    horizontal          vertical
            −1  −2  −1           1   0  −1
             0   0   0           2   0  −2
             1   2   1           1   0  −1

  Prewitt:  horizontal          vertical
            −1  −1  −1           1   0  −1
             0   0   0           1   0  −1
             1   1   1           1   0  −1

Prior to edge enhancement, you should reduce speckle noise by using the Radar Speckle Suppression function.

2nd-Order Derivatives (Laplacian Operators)
The second category of edge enhancers is 2nd-order derivative or Laplacian operators. These are best for line (or spot) detection as distinct from ramp edges. ERDAS IMAGINE Radar offers two such arrays:

  Unweighted line:        Weighted line:
    −1   2  −1              −1   2  −1
    −1   2  −1              −2   4  −2
    −1   2  −1              −1   2  −1

Source: Pratt 1991
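As a hedged illustration, the sketch below convolves an image with the horizontal and vertical Sobel zero-sum kernels; combining the two outputs into a gradient magnitude is a common convention, not something specified above.

```python
# Sketch: applying the horizontal and vertical Sobel zero-sum kernels.
import numpy as np
from scipy import ndimage

sobel_horizontal = np.array([[-1, -2, -1],
                             [ 0,  0,  0],
                             [ 1,  2,  1]], dtype=float)
sobel_vertical = np.array([[ 1,  0, -1],
                           [ 2,  0, -2],
                           [ 1,  0, -1]], dtype=float)

img = np.random.rand(256, 256)                 # stand-in for a despeckled band
gx = ndimage.convolve(img, sobel_horizontal)   # zero output in uniform regions
gy = ndimage.convolve(img, sobel_vertical)
edges = np.hypot(gx, gy)                       # gradient magnitude (one convention)
```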

Texture
According to Pratt (1991), "Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments."

As an enhancement, texture is particularly applicable to radar data, although it may be applied to any type of data with varying results. For example, it has been shown (Blom et al 1982) that a three-layer variance image using 15 × 15, 31 × 31, and 61 × 61 windows can be combined into a three-color RGB (red, green, blue) image that is useful for geologic discrimination. The same could apply to a vegetation classification.

The user could also prepare a three-color image using three different functions operating through the same (or different) size moving window(s). However, each data set and application would need different moving window sizes and/or texture measures to maximize the discrimination.

Radar Texture Analysis
While texture analysis has been useful in the enhancement of visible/infrared image data (VIS/IR), it is showing even greater applicability to radar imagery. In part, this stems from the nature of the imaging process itself.

In VIS/IR imaging, the phenomenon involved is absorption at the molecular level. In radar imaging, the interaction of the radar waves with the surface of interest is dominated by reflection involving the surface roughness at the wavelength scale. Also, as we know from array-type antennae, radar is especially sensitive to regularity that is a multiple of its wavelength. This provides for a more precise method for quantifying the character of texture in a radar return. The ability to use radar data to detect texture and provide topographic information about an image is a major advantage over other types of imagery where texture is not a quantitative characteristic.

The texture transforms can be used in several ways to enhance the use of radar imagery. Adding the radar intensity image as an additional layer in a (vegetation) classification is fairly straightforward and may be useful. However, the proper texture image (function and window size) can greatly increase the discrimination. Using known test sites, one can experiment to discern which texture image best aids the classification. For example, the texture image could then be added as an additional layer to the TM bands.

As radar data come into wider use, other mathematical texture definitions will prove useful and will be added to ERDAS IMAGINE Radar. In practice, you will interactively decide which algorithm and window size is best for your data and application.

Texture Analysis Algorithms
While texture has typically been a qualitative measure, it can be enhanced with mathematical algorithms. Many algorithms appear in the literature for specific applications (Haralick 1979, Irons 1981). The algorithms incorporated into ERDAS IMAGINE are those which are applicable in a wide variety of situations and are not computationally over-demanding. This latter point becomes critical as the moving window size increases. Research has shown that very large moving windows are often needed for proper enhancement. For example, Blom (Blom et al 1982) uses up to a 61 × 61 window.

Four algorithms are currently utilized for texture enhancement in ERDAS IMAGINE:
• mean Euclidean distance (1st-order)
• variance (2nd-order)
• skewness (3rd-order)
• kurtosis (4th-order)

These algorithms are shown below (Irons and Petersen 1981):

Mean Euclidean Distance

  Mean Euclidean Distance = Σ [ Σ_λ (x_cλ − x_ijλ)² ]^(1/2) / (n − 1)

where:
  x_ijλ = DN value for spectral band λ and pixel (i,j) of a multispectral image
  x_cλ  = DN value for spectral band λ of the window's center pixel
  n     = number of pixels in a window

Variance

  Variance = Σ (x_ij − M)² / (n − 1)

where:
  x_ij = DN value of pixel (i,j)
  n    = number of pixels in a window
  M    = Mean of the moving window, where Mean = Σ x_ij / n

Skewness

  Skew = Σ (x_ij − M)³ / ( (n − 1) V^(3/2) )

where:
  x_ij = DN value of pixel (i,j)
  n    = number of pixels in a window
  M    = Mean of the moving window (see above)
  V    = Variance (see above)

Kurtosis

  Kurtosis = Σ (x_ij − M)⁴ / ( (n − 1) V² )

where:
  x_ij = DN value of pixel (i,j)
  n    = number of pixels in a window
  M    = Mean of the moving window (see above)
  V    = Variance (see above)

Texture analysis is available from the Texture function in Image Interpreter and from the Radar Texture Analysis function.
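The following sketch computes the 2nd-order (variance) texture measure over a moving window; the 15 × 15 window is an arbitrary example size, and the incremental formula used is algebraically equivalent to the definition above.

```python
# Sketch of the variance texture measure over a moving window.
import numpy as np
from scipy import ndimage

def variance_texture(band, window=15):
    band = band.astype(float)
    n = window * window
    mean = ndimage.uniform_filter(band, size=window)          # M = sum(x) / n
    sq_mean = ndimage.uniform_filter(band * band, size=window)
    # sum((x - M)^2) / (n - 1), expanded as (sum(x^2) - n * M^2) / (n - 1)
    return (sq_mean * n - n * mean ** 2) / (n - 1)
```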

Radiometric Correction: Radar Imagery
The raw radar image frequently contains radiometric errors due to:
• imperfections in the transmit and receive pattern of the radar antenna
• errors due to the coherent pulse (i.e., speckle)
• the inherently stronger signal from a near range (closest to the sensor flight path) than a far range (farthest from the sensor flight path) target

Many imaging radar systems use a single antenna that transmits the coherent radar burst and receives the return echo. However, no antenna is perfect; it may have various lobes, dead spots, and imperfections. This causes the received signal to be slightly distorted radiometrically. In addition, range fall-off will cause far range targets to be darker (less return signal).

These two problems can be addressed by adjusting the average brightness of each range line to a constant—usually the average overall scene brightness (Chavez 1986). This approach is generic; it is not specific to any particular radar sensor.

The Adjust Brightness function in ERDAS IMAGINE works by correcting each range line average. For this to be a valid approach, the number of data values must be large enough to provide good average values. Be careful not to use too small an image. This will depend upon the character of the scene itself. This requires that each line of constant range be long enough to reasonably approximate the overall scene brightness (see Figure 86).

Figure 86: Adjust Brightness Function (each row of data has an average value a; adding the averages of all data rows and dividing by the number of rows, (a1 + a2 + a3 + ... + ax) / x, gives the overall average, and overall average / ax = calibration coefficient of line x; a small subset of the image would not give an accurate average for correcting the entire scene)
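A minimal sketch of the per-line adjustment described above is given below, assuming each image row is a line of constant range; the orientation must be checked against the actual data.

```python
# Sketch of range-line brightness adjustment, assuming each row of the array
# is a line of constant range.
import numpy as np

def adjust_brightness(img):
    img = img.astype(float)
    line_averages = img.mean(axis=1)                 # a1, a2, ..., ax
    overall_average = line_averages.mean()
    # calibration coefficient of line x = overall average / ax
    coefficients = overall_average / line_averages
    return img * coefficients[:, np.newaxis]
```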

Range Lines/Lines of Constant Range
Lines of constant range are not the same thing as range lines:
• Range lines — lines that are perpendicular to the flight of the sensor
• Lines of constant range — lines that are parallel to the flight of the sensor
• Range direction — same as range lines

Because radiometric errors are a function of the imaging geometry, the image must be correctly oriented during the correction process. For the algorithm to correctly address the data set, the user must tell ERDAS IMAGINE whether the lines of constant range are in columns or rows in the displayed image. Figure 87 shows the lines of constant range in columns, parallel to the sides of the display screen.

Figure 87: Range Lines vs. Lines of Constant Range (a display screen with lines of constant range running parallel to the flight (azimuth) direction, and range lines running in the range direction, perpendicular to the flight path)

Slant-to-Ground Range Correction
Radar images also require slant-to-ground range correction, which is similar in concept to orthocorrecting a VIS/IR image. By design, an imaging radar is always side-looking. In operation, the radar sensor determines the range (distance to) each target, as shown in Figure 88. In practice, the depression angle is usually 75° at most.

Figure 88: Slant-to-Ground Range Correction (the antenna, across-track arcs of constant range, depression angle θ, sensor height H at point C, slant range distance Dists from C to target B, and ground range distance Distg from nadir point A to B, with a 90° angle at C)

Assuming that angle ACB is a right angle, the user can approximate:

  Dists ≈ (Distg)(cos θ)

where:
  Dists = slant range distance
  Distg = ground range distance
  cos θ = Dists / Distg

Source: Leberl 1990
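The sketch below applies the approximation above to resample one line of data from slant range onto a regular ground range grid; the depression angle and pixel size are made-up example values, not parameters of any particular sensor.

```python
# Sketch of slant-to-ground range conversion for a single range line, using
# the approximation Dists = Distg * cos(theta) from the text above.
import numpy as np

depression_angle = np.radians(30.0)      # theta (illustrative)
slant_pixel_size = 12.5                  # meters, range input cell size (illustrative)
slant_line = np.random.rand(1000)        # stand-in for one line of slant-range samples

# Ground-range position of each slant-range sample.
slant_dist = np.arange(slant_line.size) * slant_pixel_size
ground_dist = slant_dist / np.cos(depression_angle)

# Resample onto a regular ground-range grid with the same cell size.
ground_grid = np.arange(0.0, ground_dist[-1], slant_pixel_size)
ground_line = np.interp(ground_grid, ground_dist, slant_line)
```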

The correction has the effect of compressing the near range areas more than the far range areas. For many applications, this may not be important. However, to geocode the scene or to register radar to infrared or visible imagery, the scene must be corrected to a ground range format. To do this, the following parameters relating to the imaging geometry are needed:
• Depression angle (θ) — angular distance between sensor horizon and scene center
• Sensor height (H) — elevation of sensor (in meters) above its nadir point
• Beam width — angular distance between near range and far range for entire scene
• Pixel size (in meters) — range input image cell size

This information is usually found in the header file of the data. Use the Data View option to view this information. If it is not contained in the header file, you must obtain this information from the data supplier.

Once the scene is range-format corrected, pixel size can be changed for coregistration with other data sets.

Merging Radar with VIS/IR Imagery
As aforementioned, the phenomena involved in radar imaging are quite different from those in VIS/IR imaging. Because these two sensor types give different information about the same target (chemical vs. physical), they are complementary data sets. If the two images are correctly combined, the resultant image will convey both chemical and physical information and could prove more useful than either image alone.

The methods for merging radar and VIS/IR data are still experimental and open for exploration. The following methods are suggested for experimentation:
• Co-displaying in a Viewer
• RGB to IHS transforms
• Principal components transform
• Multiplicative

The ultimate goal of enhancement is not mathematical or logical purity; it is feature extraction. There are currently no rules to suggest which options will yield the best results for a particular application; you must experiment. The option that proves to be most useful will depend upon the data sets (both radar and VIS/IR), your experience, and your final objective.

Co-Displaying
The simplest and most frequently used method of combining radar with VIS/IR imagery is co-displaying on an RGB color monitor. In this technique, the radar image is displayed with one (typically the red) gun, while the green and blue guns display VIS/IR bands or band ratios. This technique follows from no logical model and does not truly merge the two data sets.

Use the Viewer with the Clear Display option disabled for this type of merge. Select the color guns to display the different layers.

RGB to IHS Transforms
Another common technique uses the RGB to IHS transforms. In this technique, an RGB (red, green, blue) color composite of bands (or band derivatives, such as ratios) is transformed into intensity, hue, saturation color space. The intensity component is replaced by the radar image, and the scene is reverse transformed. This technique integrally merges the two data types.

For more information, see "RGB to IHS" on page 160.

Principal Components Transform
A similar image merge involves utilizing the principal components (PC) transformation of the VIS/IR image. With this transform, more than three components can be used. These are converted to a series of principal components. The first principal component, PC-1, is generally accepted to correlate with overall scene brightness. This value is replaced by the radar image and the reverse transform is applied.

For more information, see "Principal Components Analysis" on page 153.

Multiplicative
A final method to consider is the multiplicative technique. This requires several chromatic components and a multiplicative component, which is assigned to the image intensity. In practice, the chromatic components are usually band ratios or PC's; the radar image is input multiplicatively as intensity (Holcomb 1993).

However, the logic of mathematically merging radar with VIS/IR data sets is inherently different from the logic of the SPOT/TM merges (as discussed under the section in this chapter on Resolution Merge). It cannot be assumed that the radar intensity is a surrogate for, or equivalent to, the VIS/IR intensity. The two sensor merge models using transforms to integrate the two data sets (Principal Components and RGB to IHS) are based on the assumption that the radar intensity correlates with the intensity that the transform derives from the data inputs. The acceptability of this assumption will depend on the specific case.

For example, Landsat TM imagery is often used to aid in mineral exploration. A common display for this purpose is RGB = TM5/TM7, TM5/TM4, TM3/TM1, the logic being that if all three ratios are high, the sites suited for mineral exploration will be bright overall. If the target area is accompanied by silicification, which results in an area of dense angular rock, this should be the case. However, if the alteration zone was basaltic rock to kaolinite/alunite, then the radar return could be weaker than the surrounding rock. In this case, radar would not correlate with high 5/7, 5/4, 3/1 intensity and the substitution would not produce the desired results (Holcomb 1993).
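As an illustration of the multiplicative idea, the sketch below multiplies each chromatic component (for example, the three band ratios mentioned above) by the radar image and rescales the products for display; the 0-255 rescaling is only one possible display choice, not a prescribed step.

```python
# Sketch of a multiplicative radar/VIS-IR merge: each chromatic component
# (e.g. TM5/TM7, TM5/TM4, TM3/TM1) is multiplied by the radar image acting
# as the intensity term, then rescaled for display.
import numpy as np

def multiplicative_merge(chromatic_layers, radar):
    merged = []
    for chromatic in chromatic_layers:
        product = chromatic.astype(float) * radar.astype(float)
        lo, hi = product.min(), product.max()
        merged.append(255.0 * (product - lo) / (hi - lo + 1e-12))
    return np.stack(merged, axis=-1)      # RGB composite for display
```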



CHAPTER 6
Classification

Introduction
Multispectral classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their data file values. If a pixel satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to that criteria. This process is also referred to as image segmentation.

Depending on the type of information the user wants to extract from the original data, classes may be associated with known features on the ground or may simply represent areas that "look different" to the computer. An example of a classified image is a land cover map, showing vegetation, bare land, pasture, urban, etc.

The Classification Process

Pattern Recognition
Pattern recognition is the science—and art—of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye—the human brain automatically sorts certain textures and colors into categories.

In a computer system, spectral pattern recognition can be more scientific. Statistics are derived from the spectral characteristics of all pixels in an image. Then, the pixels are sorted based on mathematical criteria. The classification process breaks down into two parts—training and classifying (using a decision rule).

Training
First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.

Supervised Training
Supervised training is closely controlled by the analyst. In this process, the user selects pixels that represent patterns or landcover features that they recognize, or that they can identify with help from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data, and of the classes desired, is required before classification. By identifying patterns, the user can "train" the computer system to identify pixels with similar characteristics. If the classification is accurate, the resulting classes represent the categories within the data that the user originally identified.

Unsupervised Training
Unsupervised training is more computer-automated. It enables the user to specify some parameters that the computer uses to uncover statistical patterns that are inherent in the data. These patterns do not necessarily correspond to directly meaningful characteristics of the scene, such as contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters of pixels with similar spectral characteristics. In some cases, it may be more important to identify groups of pixels with similar spectral characteristics than it is to sort pixels into recognizable categories.

Unsupervised training is dependent upon the data itself for the definition of classes. This method is usually used when less is known about the data before classification. It is then the analyst's responsibility, after classification, to attach meaning to the resulting classes (Jensen 1996). Unsupervised classification is useful only if the classes can be appropriately interpreted.

Signatures
The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file (.img) to a class. Signatures in ERDAS IMAGINE can be parametric or non-parametric.

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. Supervised and unsupervised training can generate parametric signatures. A set of parametric signatures can be used to train a statistically-based classifier (e.g., maximum likelihood) to define the classes.

A non-parametric signature is not based on statistics, but on discrete objects (polygons or rectangles) in a feature space image. These feature space objects are used to define the boundaries for the classes. A non-parametric classifier will use a set of non-parametric signatures to assign pixels to a class based on their location, either inside or outside the area in the feature space image. Supervised training is used to generate non-parametric signatures (Kloer 1994).

ERDAS IMAGINE enables the user to generate statistics for a non-parametric signature. This function will allow a feature space object to be used to create a parametric signature from the image being classified. However, since a parametric classifier requires a normal distribution of data, the only feature space object for which this would be mathematically valid would be an ellipse (Kloer 1994).

When both parametric and non-parametric signatures are used to classify an image, the user is more able to analyze and visualize the class definitions than either type of signature provides independently (Kloer 1994).

See "APPENDIX A: Math Topics" for information on feature space images and how they are created.

Decision Rule
After the signatures are defined, the pixels of the image are sorted into classes based on the signatures, by use of a classification decision rule. The decision rule is a mathematical algorithm that, using data contained in the signature, performs the actual sorting of pixels into distinct class values.

Parametric Decision Rule
A parametric decision rule is trained by the parametric signatures. These signatures are defined by the mean vector and covariance matrix for the data file values of the pixels in the signatures. When a parametric decision rule is used, every pixel is assigned to a class, since the parametric decision space is continuous (Kloer 1994).

Non-Parametric Decision Rule
A non-parametric decision rule is not based on statistics; therefore, it is independent from the properties of the data. If a pixel is located within the boundary of a non-parametric signature, then this decision rule will assign the pixel to the signature's class. Basically, a non-parametric decision rule determines whether or not the pixel is located inside or outside of a non-parametric signature boundary.

Classification Tips

Classification Scheme
Usually, classification is performed with a set of target classes in mind. Such a set is called a classification scheme (or classification system). The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data (Jensen 1983). The proper classification scheme will include classes that are both important to the study and discernible from the data on hand. Most schemes have a hierarchical structure, which can describe a study area in several levels of detail.

A number of classification schemes have been developed by specialists who have inventoried a geographic region. Some references for professionally-developed schemes are listed below:

• Anderson, J.R., et al. 1976. "A Land Use and Land Cover Classification System for Use with Remote Sensor Data." U.S. Geological Survey Professional Paper 964.
• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of the United States. Washington, D.C.: U.S. Fish and Wildlife Service.
• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use, Cover and Forms Classification System. Florida Department of Transportation, Procedure No. 550-010-001-a.
• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land Use.

Other states or government agencies may also have specialized land use/cover studies.

It is recommended that the classification process is begun by the user defining a classification scheme for the application, using previously developed schemes, like those above, as a general framework.

Iterative Classification
The objective of the ERDAS IMAGINE system is to enable the user to iteratively create and refine signatures and classified .img files to arrive at a desired final classification. The IMAGINE classification utilities are a "tool box" to be used as needed, not a numbered list of steps that must always be followed in order.

The total classification can be achieved with either the supervised or unsupervised methods, or a combination of both. Some examples are below:
• Signatures created from both supervised and unsupervised training can be merged and appended together.
• Signature evaluation tools can be used to indicate which signatures are spectrally similar. This will help to determine which signatures should be merged or deleted. These tools also help define optimum band combinations for classification. Using the optimum band combination may reduce the time required to run a classification process.
• Since classifications (supervised or unsupervised) can be based on a particular area of interest (either defined in a raster layer or an .aoi layer), signatures and classifications can be generated from previous classification results.

Supervised vs. Unsupervised Training
In supervised training, it is important to have a set of desired classes in mind, and then create the appropriate signatures from the data. The user must also have some way of recognizing pixels that represent the classes that he or she wants to extract.

Supervised classification is usually appropriate when the user wants to identify relatively few classes, when the user has selected training sites that can be verified with ground truth data, or when the user can identify distinct, homogeneous regions that represent each class.

On the other hand, if the user wants the classes to be determined by spectral distinctions that are inherent in the data, so that he or she can define the classes later, then the application is better suited to unsupervised training. Unsupervised training enables the user to define many classes easily, and identify classes that are not in contiguous, easily recognized regions.

NOTE: Supervised classification also includes using a set of classes that was generated from an unsupervised classification. Using a combination of supervised and unsupervised classification may yield optimum results, especially with large data sets (e.g., multiple Landsat scenes). For example, unsupervised classification may be useful for generating a basic set of classes, then supervised classification could be used for further definition of the classes.

Classifying Enhanced Data
For many specialized applications, classifying data that have been merged, spectrally merged or enhanced—with principal components, image algebra, or other transformations—can produce very specific and meaningful results. However, without understanding the data and the enhancements used, it is recommended that only the original, remotely-sensed data be classified.

Dimensionality
Dimensionality refers to the number of layers being classified. For example, a data file with 3 layers is said to be 3-dimensional, since 3-dimensional feature space is "plotted" to analyze the data.

Feature space and dimensionality are discussed in "APPENDIX A: Math Topics".

Adding Dimensions
Using programs in ERDAS IMAGINE, the user can add layers to existing .img files. Therefore, the user can incorporate data (called ancillary data) other than remotely-sensed data into the classification. Using ancillary data enables the user to incorporate variables into the classification from, for example, vector layers, previously classified data, or elevation data. The data file values of the ancillary data become an additional feature of each pixel, thus influencing the classification (Jensen 1996).

Limiting Dimensions
Although ERDAS IMAGINE allows an unlimited number of layers of data to be used for one classification, it is usually wise to reduce the dimensionality of the data as much as possible. Often, certain layers of data are redundant or extraneous to the task at hand. Unnecessary data take up valuable disk space, which slows down processing, and cause the computer system to perform more arduous calculations.

Use the Signature Editor to evaluate separability to calculate the best subset of layer combinations. Use the Image Information tool (in the ERDAS IMAGINE icon panel) to delete a layer(s). Use the Image Interpreter functions to merge or subset layers.

Supervised Training
Supervised training requires a priori (already known) information about the data, such as:
• What type of classes need to be extracted? Soil type? Land use? Vegetation?
• What classes are most likely to be present in the data? That is, which types of land cover, soil, or vegetation (or whatever) are represented by the data?

In supervised training, the user relies on his or her own pattern recognition skills and a priori knowledge of the data to help the system determine the statistical criteria (signatures) for data classification.

To select reliable samples, the user should know some information—either spatial or spectral—about the pixels that they want to classify.

The location of a specific characteristic, such as a land cover type, may be known through ground truthing. Ground truthing refers to the acquisition of knowledge about the study area from field work, analysis of aerial photography, personal experience, etc. Ground truth data are considered to be the most accurate (true) data available about the area of study. They should be collected at the same time as the remotely sensed data, so that the data correspond as much as possible (Star and Estes 1990). However, some ground data may not be very accurate due to a number of errors, inaccuracies, and human shortcomings.

Training Samples and Feature Space Objects
Training samples (also called samples) are sets of pixels that represent what is recognized as a discernible pattern, or potential class. The system will calculate statistics from the sample pixels to create a parametric signature for the class.

The following terms are sometimes used interchangeably in reference to training samples. For clarity, they will be used in this documentation as follows:
• Training sample, or sample, is a set of pixels selected to represent a potential class. The data file values for these pixels are used to generate a parametric signature.
• Training field, or training site, is the geographical area(s) of interest (AOI) in the image represented by the pixels in a sample. Usually, it is previously identified with the use of ground truth data.

Feature space objects are user-defined areas of interest (AOIs) in a feature space image. The feature space signature is based on these object(s).

Selecting Training Samples
It is important that training samples be representative of the class that the user is trying to identify. This does not necessarily mean that they must contain a large number of pixels or be dispersed across a wide region of the data. The selection of training samples depends largely upon the user's knowledge of the data, of the study area, and of the classes that he or she wants to extract.

For example, if it is known that oak trees reflect certain frequencies of green and infrared light according to ground truth data, the user may be able to base his or her sample selections on the data (taking atmospheric conditions, sun angle, time, date, and other variations into account).

ERDAS IMAGINE enables the user to identify training samples via one or more of the following methods:
• using a vector layer
• defining a polygon in the image
• identifying a training sample of contiguous pixels with similar spectral characteristics
• identifying a training sample of contiguous pixels within a certain area, with or without similar spectral characteristics
• using a class from a thematic raster layer from an image file of the same area (i.e., the result of an unsupervised classification)

Digitized Polygon
Training samples can be identified by their geographical location (training sites, using maps, ground truth data). The locations of the training sites can be digitized from maps with the ERDAS IMAGINE Vector or AOI tools. Polygons representing these areas are then stored as vector layers. The vector layers can then be used as input to the AOI tools and used as training samples to create signatures.

Use the Vector and AOI tools to digitize training samples from a map. Use the Signature Editor to create signatures from training samples that are identified with digitized polygons.

User-Defined Polygon
Using his or her pattern recognition skills (with or without supplemental ground truth information), the user can identify samples by examining a displayed image of the data and drawing a polygon around the training site(s) of interest. The area within the polygon(s) would be used to create a signature.

Use the AOI tools to define the polygon(s) to be used as the training sample. Use the Signature Editor to create signatures from training samples that are identified with the polygons.

Identify Seed Pixel
With the Seed Properties dialog and AOI tools, the cursor (cross hair) can be used to identify a single pixel (seed pixel) that is representative of the training sample. This seed pixel will be used as a model pixel, against which the pixels that are contiguous to it are compared based on parameters specified by the user.

When one or more of the contiguous pixels is accepted, the mean of the sample is calculated from the accepted pixels. Then, the pixels contiguous to the sample are compared in the same way. This process repeats until no pixels that are contiguous to the sample satisfy the spectral parameters. In effect, the sample "grows" outward from the model pixel with each iteration. These homogenous pixels will be converted from individual raster pixels to a polygon and used as an area of interest (AOI) layer.

Select the Seed Properties option in the Viewer to identify training samples with a seed pixel.

Seed Pixel Method with Spatial Limits
The training sample identified with the seed pixel method can be limited to a particular region by defining the geographic distance and area. Vector layers (polygons or lines) can be displayed as the top layer in the Viewer, and the boundaries can then be used as an AOI for training samples defined under Seed Properties.

Thematic Raster Layer
A training sample can be defined by using class values from a thematic raster layer (see Table 20). The data file values in the training sample will be used to create a signature. The training sample can be defined by as many class values as desired.

NOTE: The thematic raster layer must have the same coordinate system as the image file being classified.

Table 20: Training Sample Comparison

  Method                  Advantages                         Disadvantages
  Digitized Polygon       precise map coordinates;           time-consuming;
                          represents known ground            may overestimate class variance
                          information
  User-Defined Polygon    high degree of user control        time-consuming;
                                                             may overestimate class variance
  Seed Pixel              auto-assisted; less time           may underestimate class variance
  Thematic Raster Layer   allows iterative classifying       must have previously defined
                                                             thematic layer
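A rough sketch of the seed-pixel growing idea follows; the Euclidean spectral-distance test and the threshold parameter are illustrative stand-ins for the actual Seed Properties parameters.

```python
# Sketch of seed-pixel region growing: neighbors are accepted while their
# spectral distance to the running sample mean stays within a threshold.
import numpy as np
from collections import deque

def grow_sample(image, seed, max_distance):
    # image: (rows, cols, bands) data file values; seed: (row, col) tuple
    rows, cols, _ = image.shape
    accepted = np.zeros((rows, cols), dtype=bool)
    accepted[seed] = True
    mean = image[seed].astype(float)
    count = 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not accepted[nr, nc]:
                if np.linalg.norm(image[nr, nc] - mean) <= max_distance:
                    accepted[nr, nc] = True
                    count += 1
                    mean += (image[nr, nc] - mean) / count   # update sample mean
                    frontier.append((nr, nc))
    return accepted       # boolean AOI mask of the grown training sample
```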

Evaluating Training Samples
Selecting training samples is often an iterative process. To generate signatures that accurately represent the classes to be identified, the user may have to repeatedly select training samples, evaluate the signatures that are generated from the samples, and then either take new samples or manipulate the signatures as necessary. Signature manipulation may involve merging, deleting, or appending from one file to another. It is also possible to perform a classification using the known signatures, then mask out areas that are not classified to use in gathering more signatures.

See "Evaluating Signatures" on page 236 for methods of determining the accuracy of the signatures created from your training samples.

Selecting Feature Space Objects
The ERDAS IMAGINE Feature Space tools enable the user to interactively define feature space objects (AOIs) in the feature space image(s). A feature space image is simply a graph of the data file values of one band of data against the values of another band (often called a scatterplot). In ERDAS IMAGINE, a feature space image has the same data structure as a raster image; therefore, feature space images can be used with other IMAGINE utilities, including zoom, color level slicing, virtual roam, and Map Composer.

Figure 89: Example of a Feature Space Image (band 1 plotted against band 2; when a feature space image file (.fsp.img) is displayed in an ERDAS IMAGINE Viewer, the colors reflect the density of points for both bands—bright tones represent a high density and dark tones represent a low density)

The transformation of a multilayer raster image into a feature space image is done by mapping the input pixel values to a position in the feature space image. This transformation defines only the pixel position in the feature space image. It does not define the pixel's value. The pixel values in the feature space image can be the accumulated frequency, which is calculated when the feature space image is defined. The pixel values can also be provided by a thematic raster layer of the same geometry as the source multilayer image. Mapping a thematic layer into a feature space image can be useful for evaluating the validity of the parametric and non-parametric decision boundaries of a classification (Kloer 1994).

Create Non-parametric Signature
The user can define a feature space object (AOI) in the feature space image and use it directly as a non-parametric signature. Since the IMAGINE Viewers for the feature space image and the image being classified are both linked to the IMAGINE Signature Editor, it is possible to mask AOIs from the image being classified to the feature space image, and vice versa. The user can also directly link a cursor in the image Viewer to the feature space Viewer. These functions will help determine a location for the AOI in the feature space image. A single feature space image, but multiple AOIs, can be used to define the signature.

One fundamental difference between using the feature space image to define a training sample and the other traditional methods is that it is a non-parametric signature. The decisions made in the classification process have no dependency on the statistics of the pixels. This helps improve classification accuracies for specific non-normal classes, such as urban and exposed rock (Faust, et al 1991).

This signature is taken within the feature space image, not the image being classified. Once the user has a desired AOI, it can be used as a signature. A decision rule will be used to analyze each pixel in the .img file being classified, and the pixels with the corresponding data file values will be assigned to the feature space class.

Figure 90: Process for Defining a Feature Space Object — (1) display the .img file to be classified in a Viewer (layers 3, 2, 1); (2) create a feature space image from the .img file being classified (layer 1 vs. layer 2); (3) draw an AOI (feature space object) around the desired area in the feature space image. The pixels in the image that correspond to the data file values in the signature (i.e., the feature space object) will be assigned to that class.

See "APPENDIX A: Math Topics" for information on feature space images.

Evaluate Feature Space Signatures
Via the Feature Space tools, it is also possible to use a feature space signature to generate a mask. Once it is defined as a mask, the pixels under the mask will be identified in the image file and highlighted in the Viewer. The image displayed in the Viewer must be the image from which the feature space image was created. This process will help the user to visually analyze the correlations between various spectral bands to determine which combination of bands brings out the desired features in the image.

The user can have as many feature space images with different band combinations as desired. Any polygon or rectangle in these feature space images can be used as a non-parametric signature. However, only one feature space image can be used per signature. The polygons in the feature space image can be easily modified and/or masked until the desired regions of the image have been identified.

Use the Feature Space tools in the Signature Editor to create a feature space image and mask the signature. Use the AOI tools to draw polygons.

Feature Space Signatures

Advantages:
• Provide an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable in a feature space image.
• The classification decision process is fast.

Disadvantages:
• The classification decision process allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.

Unsupervised Training
Unsupervised training requires only minimal initial input from the user. However, the user will have the task of interpreting the classes that are created by the unsupervised training algorithm.

Unsupervised training is also called clustering, because it is based on the natural groupings of pixels in image data when they are plotted in feature space. According to the specified parameters, these groups can later be merged, disregarded, otherwise manipulated, or used as the basis of a signature.

Feature space is explained in "APPENDIX A: Math Topics".

Clusters
Clusters are defined with a clustering algorithm, which often uses all or many of the pixels in the input data file for its analysis. The clustering algorithm has no regard for the contiguity of the pixels that define each cluster.

• The ISODATA clustering method uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge.
• The RGB clustering method is more specialized than the ISODATA method. It applies to three-band, 8-bit data. RGB clustering plots pixels in three-dimensional feature space, and divides that space into sections that are used to define clusters.

Each of these methods is explained below, along with its advantages and disadvantages.

Some of the statistics terms used in this section are explained in "APPENDIX A: Math Topics".

M . which is the maximum percentage of pixels whose class values are allowed to be unchanged between iterations.the maximum number of clusters to be considered. • • 228 ERDAS . T .the maximum number of iterations to be performed. Since each cluster is the basis for a class. it is not biased to the top of the data file. The ISODATA method uses minimum spectral distance to assign a cluster for each candidate pixel. ISODATA Clustering Parameters To perform ISODATA clustering. Because the ISODATA method is iterative. so that those means will shift to the means of the clusters in the data. this number becomes the maximum number of classes to be formed. It is iterative in that it repeatedly performs an entire classification (outputting a thematic raster layer) and recalculates statistics. the user specifies: • N . The process begins with a specified number of arbitrary cluster means or the means of existing signatures. as are the one-pass clustering algorithms. The ISODATA process begins by determining N arbitrary cluster means.ISODATA Clustering ISODATA stands for Iterative Self-Organizing Data Analysis Technique (Tou and Gonzalez 1974). Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering. Some clusters with too few pixels can be eliminated. Self-Organizing refers to the way in which it locates clusters with minimum user input. leaving less than N clusters.a convergence threshold. and then it processes repetitively.

Initial Cluster Means
On the first iteration of the ISODATA algorithm, the means of N clusters can be arbitrarily determined. After each iteration, a new mean for each cluster is calculated, based on the actual spectral locations of the pixels in the cluster, instead of the initial arbitrary calculation. Then, these new means are used for defining clusters in the next iteration. The process continues until there is little change between iterations (Swain 1973).

The initial cluster means are distributed in feature space along a vector that runs between the point at spectral coordinates (µ1−σ1, µ2−σ2, µ3−σ3, ... µn−σn) and the coordinates (µ1+σ1, µ2+σ2, µ3+σ3, ... µn+σn). The initial cluster means are evenly distributed between these two points. Such a vector in two dimensions is illustrated in Figure 91.

Figure 91: ISODATA Arbitrary Clusters (five arbitrary cluster means in two-dimensional spectral space, spaced along the vector from (µA−σA, µB−σB) to (µA+σA, µB+σB), with band A data file values on one axis and band B data file values on the other)

Pixel Analysis
Pixels are analyzed beginning with the upper-left corner of the image and going left to right, block by block. The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel is assigned to the cluster whose mean is the closest. The ISODATA function creates an output .img file with a thematic raster layer and/or a signature file (.sig) as a result of the clustering. At the end of each iteration, an .img file exists that shows the assignments of the pixels to the clusters.

Considering the regular, arbitrary assignment of the initial cluster means, the first iteration of the ISODATA algorithm will always give results similar to those in Figure 92.

Figure 92: ISODATA First Pass (clusters 1 through 5 arranged along the initial vector in a plot of band A data file values against band B data file values)

For the second iteration, the means of all clusters are recalculated, causing them to shift in feature space. The entire process is repeated—each candidate pixel is compared to the new cluster means and assigned to the closest cluster mean.

Figure 93: ISODATA Second Pass (the recalculated cluster means shifted toward the natural groupings of pixels in the band A / band B scatterplot)

Percentage Unchanged
After each iteration, the normalized percentage of pixels whose assignments are unchanged since the last iteration is displayed in the dialog. When this number reaches T (the convergence threshold), the program terminates.
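The sketch below captures the assign/recalculate loop and the convergence test in a few lines. Cluster elimination and the other ISODATA refinements are omitted, and the initialization follows the mean plus/minus standard deviation vector described above.

```python
# Compact sketch of the ISODATA assign/recalculate loop with a convergence
# threshold T.  This is an illustration of the idea, not the ERDAS algorithm.
import numpy as np

def isodata_sketch(pixels, n_clusters, t=0.95, max_iter=20):
    # pixels: (num_pixels, num_bands) array of data file values
    mu, sd = pixels.mean(axis=0), pixels.std(axis=0)
    steps = np.linspace(-1.0, 1.0, n_clusters)[:, np.newaxis]
    means = mu + steps * sd                        # initial arbitrary cluster means
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iter):
        dist = np.linalg.norm(pixels[:, np.newaxis, :] - means, axis=2)
        new_labels = dist.argmin(axis=1)           # closest cluster mean
        unchanged = np.mean(new_labels == labels)  # fraction of unchanged pixels
        labels = new_labels
        for k in range(n_clusters):                # recalculate cluster means
            members = pixels[labels == k]
            if len(members):
                means[k] = members.mean(axis=0)
        if unchanged >= t:                         # convergence threshold T
            break
    return labels, means
```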

It is possible for the percentage of unchanged pixels to never converge or reach T (the convergence threshold). Therefore, it may be beneficial to monitor the percentage, or specify a reasonable maximum number of iterations, M, so that the program will not run indefinitely.

ISODATA Clustering

Advantages:
• Because it is iterative, clustering is not geographically biased to the top or bottom pixels of the data file.
• This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.
• A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:
• The clustering process is time-consuming, because it can repeat many times.
• Does not account for pixel spatial homogeneity.

Recommended Decision Rule
Although the ISODATA algorithm is the most similar to the minimum distance decision rule, the signatures can produce good results with any type of classification. Therefore, no particular decision rule is recommended over others.

In most cases, the signatures created by ISODATA will be merged, deleted, or appended to other signature sets. Use the Merge and Delete options in the Signature Editor to manipulate signatures.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering, generate signatures, and classify the resulting signatures.

The .img file created by ISODATA is the same as the .img file that would be created by a minimum distance classification, except for the nonconvergent pixels (100-T% of the pixels).

RGB Clustering
The RGB Clustering and Advanced RGB Clustering functions in Image Interpreter create a thematic raster layer. However, no signature file is created and no other classification decision rule is used. RGB Clustering differs greatly from the other clustering methods, but it does employ a clustering algorithm and, therefore, it is explained here.

RGB clustering is a simple classification and data compression technique for three bands of data. It is a fast and simple algorithm that quickly compresses a 3-band image into a single band pseudocolor image, without necessarily classifying any particular features. The algorithm plots all pixels in 3-dimensional feature space and then partitions this space into clusters on a grid. In the more simplistic version of this function, each of these clusters becomes a class in the output thematic raster layer.

The advanced version requires that a minimum threshold on the clusters be set, so that only clusters at least as large as the threshold will become output classes. Pixels which do not fall into any of the remaining clusters are assigned to the cluster with the smallest city-block distance from the pixel. In this case, the city-block distance is calculated as the sum of the distances in the red, green, and blue directions in 3-dimensional space.

In practice, each input histogram is scaled so that the partitions divide the histograms between specified limits—either a specified number of standard deviations above and below the mean, or between the minimum and maximum data values for each band. This allows for more color variation in the output file.

The default number of divisions per band is listed below:
• RED is divided into 7 sections (32 for advanced version)
• GREEN is divided into 6 sections (32 for advanced version)
• BLUE is divided into 6 sections (32 for advanced version)
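As a simplified illustration of the simple (non-advanced) version, the sketch below partitions each band's min-max range into the default number of sections and forms one class per grid cell; it omits the thresholding and city-block reassignment of the advanced version.

```python
# Sketch of simple RGB clustering: each band's min-max range is split into a
# fixed number of sections (7/6/6, the simple-version defaults) and each
# pixel's class is the combination of its R, G, and B bin indices.
import numpy as np

def rgb_cluster(red, green, blue, sections=(7, 6, 6)):
    bins = []
    for band, n in zip((red, green, blue), sections):
        band = band.astype(float)
        lo, hi = band.min(), band.max()
        idx = ((band - lo) / (hi - lo + 1e-12) * n).astype(int)
        bins.append(np.clip(idx, 0, n - 1))
    r_idx, g_idx, b_idx = bins
    # One output class value per cell of the 3-D feature-space grid.
    return (r_idx * sections[1] + g_idx) * sections[2] + b_idx
```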

The number of sections should vary according to the histograms of each band. Broad histograms should be divided into more sections, and narrow histograms should be divided into fewer sections (see Figure 94).

Figure 94: RGB Clustering (input histograms for the R, G, and B bands, frequency against data file values 0 to 255, with partitions at values such as 16, 35, 98, and 195; the highlighted cluster contains pixels between 16 and 34 in RED, between 35 and 55 in GREEN, and between 0 and 16 in BLUE)

Partitioning Parameters
It is necessary to specify the number of R, G, and B sections in each dimension of the 3-dimensional scatterplot. The number of classes is calculated based on the current parameters, and it displays on the command screen. It is possible to interactively change these parameters in the RGB Clustering function in the Image Interpreter.

RGB Clustering Advantages
• The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.
• Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.
• (Advanced version only) A highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

RGB Clustering Disadvantages
• Exactly three bands must be input, which is not suitable for all applications.
• Does not always create thematic classes that can be analyzed for informational purposes.

Tips
Some starting values that usually produce good results with the simple RGB clustering are:

R=7, G=6, B=6

which results in 7 X 6 X 6 = 252 classes. To decrease the number of output colors/classes or to darken the output, decrease these values.

For the Advanced RGB clustering function, start with higher values for R, G, and B. Adjust by raising the threshold parameter and/or decreasing the R, G, and B parameter values until the desired number of output classes is obtained.
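The partitioning described above can be sketched in a few lines. The following Python/NumPy example bins a 3-band image into an R x G x B grid between each band's minimum and maximum values (the simpler of the two scaling options); the function name and defaults mirror the text, and the sketch deliberately omits the advanced version's minimum-cluster threshold and city-block reassignment.

```python
import numpy as np

def rgb_cluster(red, green, blue, r_sections=7, g_sections=6, b_sections=6):
    """Bin a 3-band image into an R x G x B grid of clusters (simple version).

    Each band is divided into equal sections between its minimum and maximum
    data values; every pixel falls into exactly one grid cell, and each cell
    becomes a class value in the output layer.
    """
    def section_index(band, n_sections):
        lo, hi = float(band.min()), float(band.max())
        # Scale values into [0, n_sections) and truncate to a section index.
        idx = (band.astype(float) - lo) / (hi - lo + 1e-12) * n_sections
        return np.clip(idx.astype(int), 0, n_sections - 1)

    r_idx = section_index(red, r_sections)
    g_idx = section_index(green, g_sections)
    b_idx = section_index(blue, b_sections)

    # Combine the three section indices into one class value, giving at most
    # 7 x 6 x 6 = 252 classes with the default parameters.
    return r_idx * (g_sections * b_sections) + g_idx * b_sections + b_idx
```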

Signature Files
A signature is a set of data that defines a training sample, feature space object (AOI), or cluster. The signature is used in a classification process. Each classification decision rule (algorithm) requires some signature attributes as input—these are stored in the signature file (.sig). Signatures in ERDAS IMAGINE can be parametric or non-parametric.

The following attributes are standard for all signatures (parametric and non-parametric):
• name — identifies the signature and is used as the class name in the output thematic raster layer. The default signature name is Class <number>.
• color — the color for the signature and is used as the color for the class in the output thematic raster layer. This color is also used with other signature visualization functions, such as alarms, masking, ellipses, etc.
• value — the output class value for the signature. The output class value does not necessarily need to be the class number of the signature. This value should be a positive integer.
• order — the order to process the signatures for order-dependent processes, such as signature alarms and parallelepiped classifications.
• parallelepiped limits — the limits used in the parallelepiped classification.

Parametric Signature
A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. A parametric signature includes the following attributes in addition to the standard attributes for signatures:
• the number of bands in the input image (as processed by the training program)
• the minimum and maximum data file value in each band for each sample or cluster (minimum vector and maximum vector)
• the mean data file value in each band for each sample or cluster (mean vector)
• the covariance matrix for each sample or cluster
• the number of pixels in the sample or cluster

Information on these statistics can be found in "APPENDIX A: Math Topics".

Non-parametric Signature
A non-parametric signature is based on an AOI that the user defines in the feature space image for the .img file being classified. A non-parametric classifier will use a set of non-parametric signatures to assign pixels to a class based on their location, either inside or outside the area in the feature space image.

The format of the .sig file is described in "APPENDIX B: File Formats and Extensions".
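To make the parametric attributes concrete, the sketch below computes them for one training sample. This is an illustrative example only; the dictionary keys are descriptive names, not the actual fields of the .sig file format.

```python
import numpy as np

def parametric_signature(sample_pixels):
    """Compute the statistical attributes of a parametric signature.

    sample_pixels : (n_pixels, n_bands) array of data file values for one
                    training sample or cluster.
    """
    return {
        "n_bands":        sample_pixels.shape[1],
        "minimum_vector": sample_pixels.min(axis=0),
        "maximum_vector": sample_pixels.max(axis=0),
        "mean_vector":    sample_pixels.mean(axis=0),
        # rowvar=False: each column is a band, each row is a pixel.
        "covariance":     np.cov(sample_pixels, rowvar=False),
        "n_pixels":       sample_pixels.shape[0],
    }
```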

Evaluating Signatures
Once signatures are created, they can be evaluated, deleted, renamed, and merged with signatures from other files. Merging signatures enables the user to perform complex classifications with signatures that are derived from more than one training method (supervised and/or unsupervised, parametric and/or non-parametric). The user can evaluate signatures that were created either from supervised or unsupervised training.

Use the Signature Editor to view the contents of each signature, manipulate signatures, and perform your own mathematical tests on the statistics.

Using Signature Data
There are tests to perform that can help determine whether the signature data are a true representation of the pixels to be classified for each class. After analyzing the signatures, it would be beneficial to merge or delete them, eliminate redundant bands from the data, add new bands of data, or perform any other operations to improve the classification.

The evaluation methods in ERDAS IMAGINE include:
• Alarm — using his or her own pattern recognition ability, the user views the estimated classified area for a signature (using the parallelepiped decision rule) against a display of the original image.
• Ellipse — view ellipse diagrams and scatterplots of data file values for every pair of bands.
• Contingency matrix — do a quick classification of the pixels in a set of training samples to see what percentage of the sample pixels are actually classified as expected. These percentages are presented in a contingency matrix. This method is for supervised training only, for which polygons of the training samples exist.
• Divergence — measure the divergence (statistical distance) between signatures and determine band subsets that will maximize the classification.
• Statistics and histograms — analyze statistics and histograms of the signatures to make evaluations and comparisons.

NOTE: If the signature is non-parametric (i.e., a feature space signature), you can use only the alarm evaluation method.

Alarm
The alarm evaluation enables the user to compare an estimated classification of one or more signatures against the original data, as it appears in the ERDAS IMAGINE Viewer. According to the parallelepiped decision rule, the pixels that fit the classification criteria are highlighted in the displayed image. The user also has the option to indicate an overlap by having it appear in a different color. With this test, the user can use his or her own pattern recognition skills, or some ground-truth data, to determine the accuracy of a signature.

Use the Signature Alarm utility in the Signature Editor to perform n-dimensional alarms on the image in the Viewer, using the parallelepiped decision rule. The alarm utility creates a functional layer, and the IMAGINE Viewer allows you to toggle between the image layer and the functional layer.

Ellipse
In this evaluation, ellipses of concentration are calculated with the means and standard deviations stored in the signature file. It is also possible to generate parallelepiped rectangles, means, and labels.

In this evaluation, the mean and the standard deviation of every signature are used to represent the ellipse in 2-dimensional feature space. The ellipse is displayed in a feature space image.

Ellipses are explained and illustrated in "APPENDIX A: Math Topics" under the discussion of Scatterplots.

When the ellipses in the feature space image show extensive overlap, then the spectral characteristics of the pixels represented by the signatures cannot be distinguished in the two bands that are graphed. In the best case, there is no overlap. Some overlap, however, is expected.

Figure 95 shows how ellipses are plotted and how they can overlap. The first graph shows how the ellipses are plotted based on the range of 2 standard deviations from the mean. This range can be altered, changing the ellipse plots. Analyzing the plots with differing numbers of standard deviations is useful for determining the limits of a parallelepiped classification.

[Figure 95: Ellipse Evaluation of Signatures — the left graph (Bands A and B) shows two signature ellipses, bounded by the means plus or minus two standard deviations, with extensive overlap; the right graph (Bands C and D) shows two distinct, non-overlapping signature ellipses.]

By analyzing the ellipse graphs for all band pairs, the user can determine which signatures and which bands will provide accurate classification results.

Use the Signature Editor to create a feature space image and to view an ellipse(s) of signature data.

Contingency Matrix
NOTE: This evaluation classifies all of the pixels in the selected AOIs and compares the results to the pixels of a training sample.

The pixels of each training sample are not always so homogeneous that every pixel in a sample will actually be classified to its corresponding class. Each sample pixel only weights the statistics that determine the classes. However, if the signature statistics for each sample are distinct from those of the other samples, then a high percentage of each sample's pixels will be classified as expected.

In this evaluation, a quick classification of the sample pixels is performed using the minimum distance, maximum likelihood, or Mahalanobis distance decision rule. Then, a contingency matrix is presented, which contains the number and percentages of pixels that were classified as expected.

Use the Signature Editor to perform the contingency matrix evaluation.

Separability
Signature separability is a statistical measure of distance between two signatures. Separability can be calculated for any combination of bands that will be used in the classification, enabling the user to rule out any bands that are not useful in the results of the classification.

For the distance (Euclidean) evaluation, the spectral distance between the mean vectors of each pair of signatures is computed. If the spectral distance between two samples is not significant for any pair of bands, then they may not be distinct enough to produce a successful classification. The spectral distance is also the basis of the minimum distance classification (as explained below). Therefore, computing the distances between signatures will help the user predict the results of a minimum distance classification.

There are three options for calculating the separability. All of these formulas take into account the covariances of the signatures in the bands being compared, as well as the mean vectors of the signatures. The formulas used to calculate separability are related to the maximum likelihood decision rule. Therefore, evaluating signature separability helps the user predict the results of a maximum likelihood classification. The maximum likelihood decision rule is explained below.

Use the Signature Editor to compute signature separability and distance and automatically generate the report.

Refer to "APPENDIX A: Math Topics" for information on the mean vector and covariance matrix.

Divergence
The formula for computing Divergence (Dij) is as follows:

Dij = (1/2) tr[ (Ci − Cj)(Cj^-1 − Ci^-1) ] + (1/2) tr[ (Ci^-1 + Cj^-1)(µi − µj)(µi − µj)^T ]

where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function

Source: Swain and Davis 1978

Transformed Divergence
The formula for computing Transformed Divergence (TD) is as follows:

TDij = 2000 [ 1 − exp( −Dij / 8 ) ]

where:
i and j = the two signatures (classes) being compared
Dij = the divergence between signatures i and j, computed as above

Source: Swain and Davis 1978

Jeffries-Matusita Distance
The formula for computing Jeffries-Matusita Distance (JM) is as follows:

α = (1/8)(µi − µj)^T [ (Ci + Cj)/2 ]^-1 (µi − µj) + (1/2) ln[ |(Ci + Cj)/2| / sqrt( |Ci| × |Cj| ) ]

JMij = sqrt( 2 (1 − e^-α) )

where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
ln = the natural logarithm function
|Ci| = the determinant of Ci (matrix algebra)

Source: Swain and Davis 1978

Separability
Both transformed divergence and Jeffries-Matusita distance have upper and lower bounds. If the calculated divergence is equal to the appropriate upper bound, then the signatures can be said to be totally separable in the bands being studied. A calculated divergence of zero means that the signatures are inseparable.
• TD is between 0 and 2000.
• JM is between 0 and 1414.

A separability listing is a report of the computed divergence for every class pair and one band combination. The listing contains every divergence value for the bands studied for every possible pair of signatures. The separability listing also contains the average divergence and the minimum divergence for the band set. These numbers can be compared to other separability listings (for other band combinations), to determine which set of bands is the most useful for classification.

Weight Factors
As with the Bayesian classifier (explained below with maximum likelihood), weight factors may be specified for each signature. These weight factors are based on a priori (already known) probabilities that any given pixel will be assigned to each class. For example, if the user knows that twice as many pixels should be assigned to Class A as to Class B, then Class A should receive a weight factor that is twice that of Class B.

NOTE: The weight factors do not influence the divergence equations (for TD or JM), but they do influence the report of the best average and best minimum separability.
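The divergence, transformed divergence, and Jeffries-Matusita formulas above can be computed directly from two parametric signatures. The Python/NumPy sketch below follows those formulas (Swain and Davis 1978); the function and variable names are illustrative, and the 2000 scale factor matches the 0-2000 range quoted for TD.

```python
import numpy as np

def divergence(mu_i, cov_i, mu_j, cov_j):
    """Divergence D_ij between two parametric signatures."""
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    d = (mu_i - mu_j).reshape(-1, 1)
    term1 = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv))
    term2 = 0.5 * np.trace((ci_inv + cj_inv) @ d @ d.T)
    return term1 + term2

def transformed_divergence(mu_i, cov_i, mu_j, cov_j):
    """Transformed divergence, scaled to lie between 0 and 2000."""
    return 2000.0 * (1.0 - np.exp(-divergence(mu_i, cov_i, mu_j, cov_j) / 8.0))

def jeffries_matusita(mu_i, cov_i, mu_j, cov_j):
    """Jeffries-Matusita distance between two parametric signatures.

    Returns a value between 0 and sqrt(2); the separability report quoted in
    the text scales this by 1000, giving the 0-1414 range.
    """
    d = (mu_i - mu_j).reshape(-1, 1)
    c_avg = (cov_i + cov_j) / 2.0
    alpha = (float(0.125 * (d.T @ np.linalg.inv(c_avg) @ d))
             + 0.5 * np.log(np.linalg.det(c_avg)
                            / np.sqrt(np.linalg.det(cov_i) * np.linalg.det(cov_j))))
    return float(np.sqrt(2.0 * (1.0 - np.exp(-alpha))))
```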

The weight factors for each signature are used to compute a weighted divergence with the following calculation:

Wij = [ sum(i=1 to c−1) sum(j=i+1 to c) fi fj Uij ] / [ (1/2) ( (sum(i=1 to c) fi)^2 − sum(i=1 to c) fi^2 ) ]

where:
i and j = the two signatures (classes) being compared
Uij = the unweighted divergence between i and j
Wij = the weighted divergence between i and j
c = the number of signatures (classes)
fi = the weight factor for signature i

Probability of Error
The Jeffries-Matusita distance is related to the pairwise probability of error, which is the probability that a pixel assigned to class i is actually in class j. Within a range, this probability can be estimated according to the expression below:

(1/16)(2 − JMij^2)^2 ≤ Pe ≤ 1 − (1/2)(1 + (1/2) JMij^2)

where:
i and j = the signatures (classes) being compared
JMij = the Jeffries-Matusita distance between i and j
Pe = the probability that a pixel will be misclassified from i to j

Source: Swain and Davis 1978

Signature Manipulation
In many cases, training must be repeated several times before the desired signatures are produced. Signatures can be gathered from different sources—different training samples, feature space images, and different clustering programs—all using different techniques. After each signature file is evaluated, the user may merge, delete, or create new signatures. The desired signatures can finally be moved to one signature file to be used in the classification. The user can combine signatures that are derived from different training methods for use in one classification.

The following operations upon signatures and signature files are possible with ERDAS IMAGINE:
• View the contents of the signature statistics
• View histograms of the samples or clusters that were used to derive the signatures
• Delete unwanted signatures
• Merge signatures together, so that they form one larger class when classified
• Append signatures from other files

Use the Signature Editor to view statistics and histogram listings and to delete, merge, append, and rename signatures within a signature file.

Classification Decision Rules
Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that pass the criteria that are established by the decision rule are then assigned to the class for that signature. ERDAS IMAGINE enables the user to classify the data both parametrically with statistical representation, and non-parametrically as objects in feature space. Figure 96 shows the flow of an image pixel through the classification decision making process in ERDAS IMAGINE (Kloer 1994).

If a non-parametric rule is set, the pixel will be tested against all of the signatures with non-parametric definitions. This rule results in the following conditions:
• If the non-parametric test results in one unique class, the pixel will be assigned to that class.
• If the non-parametric test results in zero classes (i.e., the pixel lies outside all the non-parametric decision boundaries), then the unclassified rule will be applied. With this rule, the pixel will either be classified by the parametric rule or left unclassified.
• If the pixel falls into more than one class as a result of the non-parametric test, the overlap rule will be applied. With this rule, the pixel will either be classified by the parametric rule, processing order, or left unclassified.

If a non-parametric rule is not set, then the pixel is classified using only the parametric rule. All of the parametric signatures will be tested.

Non-parametric Rules
ERDAS IMAGINE provides these decision rules for non-parametric signatures:
• parallelepiped
• feature space

Unclassified Options
ERDAS IMAGINE provides these options if the pixel is not classified by the non-parametric rule:
• parametric rule
• unclassified

Overlap Options
ERDAS IMAGINE provides these options if the pixel falls into more than one feature space object:
• parametric rule
• by order
• unclassified

Parametric Rules
ERDAS IMAGINE provides these commonly-used decision rules for parametric signatures:
• minimum distance
• Mahalanobis distance
• maximum likelihood (with Bayesian variation)

[Figure 96: Classification Flow Diagram — a candidate pixel is first tested against the non-parametric rule (if one is set); depending on whether the resulting number of classes is 1, 0, or greater than 1, the pixel is assigned directly, passed to the unclassified options (parametric rule or unclassified), or passed to the overlap options (parametric rule, by order, or unclassified) before the final class assignment.]
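The flow in Figure 96 can be sketched in code. The example below is pseudocode-style Python for illustration only: the signature objects (with assumed `contains`, `class_value`, and `order` attributes), the option names, and the `parametric_rule` callable are all hypothetical and are not ERDAS IMAGINE API calls.

```python
def classify_pixel(pixel, nonparametric_signatures, parametric_rule,
                   unclassified_option="parametric", overlap_option="parametric"):
    """Mirror the decision flow of Figure 96 for a single candidate pixel.

    Returns a class value, or None for an unclassified pixel. The
    parametric_rule callable is assumed to accept an optional restrict_to
    keyword limiting which signatures it tests.
    """
    if not nonparametric_signatures:
        # No non-parametric rule set: only the parametric rule is used.
        return parametric_rule(pixel)

    # Test the pixel against every non-parametric signature (parallelepiped
    # limits or feature space AOI membership).
    hits = [sig for sig in nonparametric_signatures if sig.contains(pixel)]

    if len(hits) == 1:          # one unique class
        return hits[0].class_value
    if len(hits) == 0:          # outside all non-parametric decision boundaries
        return parametric_rule(pixel) if unclassified_option == "parametric" else None

    # More than one class: apply the overlap option.
    if overlap_option == "order":
        return min(hits, key=lambda sig: sig.order).class_value
    if overlap_option == "parametric":
        return parametric_rule(pixel, restrict_to=hits)
    return None
```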

Parallelepiped
In the parallelepiped decision rule, the data file values of the candidate pixel are compared to upper and lower limits. These limits can be either:
• the minimum and maximum data file values of each band in the signature,
• the mean of each band, plus and minus a number of standard deviations, or
• any limits that the user specifies, based on his or her knowledge of the data and signatures. This knowledge may come from the signature evaluation techniques discussed above.

These limits can be set using the Parallelepiped Limits utility in the Signature Editor.

There are high and low limits for every signature in every band. When a pixel's data file values are between the limits for every band in a signature, then the pixel is assigned to that signature's class. Figure 97 is a two-dimensional example of a parallelepiped classification.

[Figure 97: Parallelepiped Classification Using Plus or Minus Two Standard Deviations as Limits — pixels of classes 1, 2, and 3 fall inside rectangular regions bounded by the class means of Band A and Band B (µA2, µB2 for class 2) plus or minus two standard deviations; pixels outside all rectangles remain unclassified.]

The large rectangles in Figure 97 are called parallelepipeds. They are the regions within the limits for each signature.

Overlap Region
In cases where a pixel may fall into the overlap region of two or more parallelepipeds, the user must define how the pixel will be classified.
• The pixel can be classified by the order of the signatures. If one of the signatures is first and the other signature is fourth, the pixel will be assigned to the first signature's class. This order can be set in the ERDAS IMAGINE Signature Editor.
• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against the overlapping signatures only. If neither of these signatures is parametric, then the pixel will be left unclassified. If only one of the signatures is parametric, then the pixel will be assigned automatically to that signature's class.
• The pixel can be left unclassified.

Regions Outside of the Boundaries
If the pixel does not fall into one of the parallelepipeds, then the user must define how the pixel will be classified.
• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against all of the parametric signatures. If none of the signatures is parametric, then the pixel will be left unclassified.
• The pixel can be left unclassified.

Use the Supervised Classification utility in the Signature Editor to perform a parallelepiped classification.

Parallelepiped Decision Rule Advantages
• Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.
• Often useful for a first-pass, broad classification. This decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations are made, thus cutting processing time (e.g., minimum distance, Mahalanobis distance or maximum likelihood).
• Not dependent on normal distributions.

Parallelepiped Decision Rule Disadvantages
• Since parallelepipeds have "corners," pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 98.
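The core of the parallelepiped rule is a per-band limit check. A minimal sketch, assuming limits set at the mean plus or minus two standard deviations (one of the options listed above) and classification by signature order when parallelepipeds overlap; the names are illustrative.

```python
import numpy as np

def parallelepiped_limits(mean_vector, std_vector, n_std=2.0):
    """Low and high limits per band: mean +/- n_std standard deviations."""
    return mean_vector - n_std * std_vector, mean_vector + n_std * std_vector

def parallelepiped_class(pixel, signatures):
    """Assign the pixel to the first signature whose limits contain it in
    every band (classification by signature order); return 0 if none do.

    signatures : list of (class_value, (low_limits, high_limits)) tuples,
                 already sorted by processing order.
    """
    for class_value, (low, high) in signatures:
        if np.all(pixel >= low) and np.all(pixel <= high):
            return class_value
    return 0   # unclassified
```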

[Figure 98: Parallelepiped Corners Compared to the Signature Ellipse — a candidate pixel can fall inside a corner of the rectangular parallelepiped boundary (defined in Band A and Band B) yet lie well outside the signature ellipse around the mean (µA, µB).]

Feature Space
The feature space decision rule determines whether or not a candidate pixel lies within the non-parametric signature in the feature space image. When a pixel's data file values are in the feature space signature, then the pixel is assigned to that signature's class. Figure 99 is a two-dimensional example of a feature space classification. The polygons in this figure are AOIs used to define the feature space signatures.

[Figure 99: Feature Space Classification — AOI polygons drawn in the Band A/Band B feature space define classes 1, 2, and 3; pixels falling outside all polygons remain unclassified.]

Overlap Region
In cases where a pixel may fall into the overlap region of two or more AOIs, the user must define how the pixel will be classified.
• The pixel can be classified by the order of the feature space signatures. If one of the signatures is first and the other signature is fourth, the pixel will be assigned to the first signature's class. This order can be set in the ERDAS IMAGINE Signature Editor.
• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against the overlapping signatures only. If neither of these feature space signatures is parametric, then the pixel will be left unclassified. If only one of the signatures is parametric, then the pixel will be assigned automatically to that signature's class.
• The pixel can be left unclassified.

Regions Outside of the AOIs
If the pixel does not fall into one of the AOIs for the feature space signatures, then the user must define how the pixel will be classified.
• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against all of the parametric signatures. If none of the signatures is parametric, then the pixel will be left unclassified.
• The pixel can be left unclassified.

Use the Decision Rules utility in the Signature Editor to perform a feature space classification.

Feature Space Decision Rule Advantages
• Often useful for a first-pass, broad classification.
• The feature space method is fast.
• Provides an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.

Feature Space Decision Rule Disadvantages
• The feature space decision rule allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.

Minimum Distance
The minimum distance decision rule (also called spectral distance) calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature.

[Figure 100: Minimum Spectral Distance — lines connect a candidate pixel to the means µ1, µ2, and µ3 of three signatures in Band A/Band B feature space.]

In Figure 100, spectral distance is illustrated by the lines from the candidate pixel to the means of the three signatures. The candidate pixel is assigned to the class with the closest mean.

The equation for classifying by spectral distance is based on the equation for Euclidean distance:

SDxyc = sqrt( sum(i=1 to n) (µci − Xxyi)^2 )

where:
n = number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = data file value of pixel x,y in band i
µci = mean of data file values in band i for the sample for class c
SDxyc = spectral distance from pixel x,y to the mean of class c

Source: Swain and Davis 1978

When spectral distance is computed for all possible values of c (all possible classes), the class of the candidate pixel is assigned to the class for which SD is the lowest.
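A minimal sketch of the minimum distance rule, applied to every pixel of an image at once; the array names are illustrative. It also returns the per-pixel distance to the assigned class mean, which is the quantity written to the distance image file discussed later under Evaluating Classification.

```python
import numpy as np

def minimum_distance_classify(image, mean_vectors):
    """Assign each pixel to the class whose mean vector is spectrally closest.

    image        : (rows, cols, n_bands) array of data file values
    mean_vectors : (n_classes, n_bands) array of class mean vectors
    Returns a (rows, cols) array of class indices and the distance image.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # Euclidean spectral distance from every pixel to every class mean.
    dists = np.linalg.norm(pixels[:, None, :] - mean_vectors[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Distance file value: distance to the mean of the assigned class.
    distance_image = dists[np.arange(len(pixels)), labels]
    return labels.reshape(image.shape[:2]), distance_image.reshape(image.shape[:2])
```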

Minimum Distance Decision Rule Advantages
• Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.
• The fastest decision rule to compute, except for parallelepiped.

Minimum Distance Decision Rule Disadvantages
• Pixels which should be unclassified (that is, they are not spectrally close to the mean of any sample, within limits that are reasonable to the user) will become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion of Thresholding on page 254.)
• Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.

Mahalanobis Distance
The Mahalanobis distance algorithm assumes that the histograms of the bands have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied will lead to similarly varied classes, and vice-versa. For example, when classifying urban areas—typically a class whose pixels vary widely—correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis 1978).

0 in the equation. This variation of the maximum likelihood decision rule is known as the Bayesian decision rule (Hord 1982). The maximum likelihood decision rule is based on the probability that a pixel belongs to a particular class. In this case. Slower to compute than parallelepiped or minimum distance. unlike minimum distance or parallelepiped. 252 ERDAS . Maximum Likelihood/Bayesian The maximum likelihood algorithm assumes that the histograms of the bands of data have normal distributions. Mahalanobis distance is parametric. you may have better results with the parallelepiped or minimum distance decision rule. for which D is the lowest. Bayesian Classifier If the user has a priori knowledge that the probabilities are not equal for all classes. Unless the user has a priori knowledge of the probabilities. Disadvantages Tends to overclassify signatures with relatively large values in the covariance matrix. or by performing a first-pass parallelepiped classification. it is recommended that they not be specified. meaning that it relies heavily on a normal distribution of the data in each input band. and that the input bands have normal distributions. If this is not the case. then the covariance matrix of that signature will contain large values. but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed. The basic equation assumes that these probabilities are equal for all classes. May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account. c. these weights default to 1.The equation for the Mahalanobis distance classifier is as follows: D = (X-Mc)T (Covc-1) (X-Mc) where: D= Mahalanobis distance c= a particular class X= the measurement vector of the candidate pixel Mc= the mean vector of the signature of class c Covc= the covariance matrix of the pixels in the signature of class c Covc-1= inverse of Covc T= transposition function The pixel is assigned to the class. If there is a large dispersion of the pixels in a cluster or training sample. Mahalanobis Decision Rule Advantages Takes the variability of classes into account. he or she can specify weight factors for particular classes.

The equation for the maximum likelihood/Bayesian classifier is as follows:

D = ln(ac) − [0.5 ln(|Covc|)] − [0.5 (X − Mc)^T (Covc^-1) (X − Mc)]

where:
D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the sample of class c
ac = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori knowledge)
Covc = the covariance matrix of the pixels in the sample of class c
|Covc| = determinant of Covc (matrix algebra)
Covc^-1 = inverse of Covc (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)

The inverse and determinant of a matrix, along with the difference and transposition of vectors, would be explained in a textbook of matrix algebra.

D is computed for all possible classes, and the pixel is assigned to the class, c, for which D is the lowest.

Maximum Likelihood/Bayesian Decision Rule Advantages
• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.
• Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.

Maximum Likelihood/Bayesian Decision Rule Disadvantages
• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.
• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature will contain large values.
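The two parametric rules above differ only in the terms added to the squared Mahalanobis distance. The sketch below is a per-pixel illustration following the equations above, with the a priori weight defaulting to 1.0 as described; note that the sign convention in the comment is the usual statistical formulation rather than a claim about the ERDAS implementation.

```python
import numpy as np

def mahalanobis_distance(pixel, mean_vector, cov):
    """D = (X - Mc)^T Covc^-1 (X - Mc) for one pixel and one signature."""
    d = pixel - mean_vector
    return float(d @ np.linalg.inv(cov) @ d)

def bayesian_discriminant(pixel, mean_vector, cov, a_priori=1.0):
    """ln(ac) - 0.5 ln|Covc| - 0.5 (X - Mc)^T Covc^-1 (X - Mc)."""
    return (np.log(a_priori)
            - 0.5 * np.log(np.linalg.det(cov))
            - 0.5 * mahalanobis_distance(pixel, mean_vector, cov))

def classify(pixel, signatures, rule="maximum_likelihood"):
    """signatures: list of (class_value, mean_vector, covariance, a_priori)."""
    scores = {}
    for class_value, mean_vector, cov, a_priori in signatures:
        if rule == "mahalanobis":
            scores[class_value] = mahalanobis_distance(pixel, mean_vector, cov)
        else:
            # In the usual statistical formulation the pixel goes to the class
            # with the LARGEST discriminant; negating it lets both rules share
            # a single "lowest score wins" assignment below.
            scores[class_value] = -bayesian_discriminant(pixel, mean_vector,
                                                         cov, a_priori)
    return min(scores, key=scores.get)
```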

Evaluating Classification
After a classification is performed, these methods are available for testing the accuracy of the classification:
• Thresholding — Use a probability image file to screen out misclassified pixels.
• Accuracy Assessment — Compare the classification to ground truth or other data.

Thresholding
Thresholding is the process of identifying the pixels in a classified image that are the most likely to be classified incorrectly. These pixels are put into another class (usually class 0). These pixels are identified statistically, based upon the distance measures that were used in the classification decision rule.

Distance File
When a minimum distance, Mahalanobis distance, or maximum likelihood classification is performed, a distance image file can be produced in addition to the output thematic raster layer. A distance image file is a one-band, 32-bit offset continuous raster layer in which each data file value represents the result of a spectral distance equation, depending upon the decision rule used.
• In a minimum distance classification, each distance value is the Euclidean spectral distance between the measurement vector of the pixel and the mean vector of the pixel's class.
• In a Mahalanobis distance or maximum likelihood classification, the distance value is the Mahalanobis distance between the measurement vector of the pixel and the mean vector of the pixel's class.

The brighter pixels (with the higher distance file values) are spectrally farther from the signature means for the classes to which they were assigned. They are more likely to be misclassified. The darker pixels are spectrally nearer, and more likely to be classified correctly. If supervised training was used, the darkest pixels are usually the training samples.

[Figure 101: Histogram of a Distance Image — the number of pixels plotted against distance value; the histogram peaks near zero and tails off toward higher distance values.]

Figure 101 shows how the histogram of the distance image usually appears. This distribution is called a chi-square distribution, as opposed to a normal distribution, which is a symmetrical bell curve.

Threshold
The pixels that are the most likely to be misclassified have the higher distance file values at the tail of this histogram. At some point that the user defines—either mathematically or visually—the "tail" of this histogram is cut off. The cutoff point is the threshold.

To determine the threshold:
• interactively change the threshold with the mouse, when a distance histogram is displayed while using the threshold function. This option enables the user to select a chi-square value by selecting the cut-off value in the distance histogram, or
• input a chi-square parameter or distance measurement, so that the threshold can be calculated statistically.

In both cases, thresholding has the effect of cutting the tail off of the histogram of the distance image file, representing the pixels with the highest distance values.

Figure 102 shows some example distance histograms. With each example is an explanation of what the curve might mean, and how to threshold it.

[Figure 102: Interactive Thresholding Tips — example distance histograms and their interpretation:
• Smooth chi-square shape — try to find the "breakpoint" where the curve becomes more horizontal, and cut off the tail there.
• Minor mode(s) (peaks) in the curve — probably indicate that the class picked up other features that were not intended; the signature for this class probably represented a polymodal (multi-peaked) distribution.
• Peak of the curve is shifted from 0 — not a good class; indicates that the signature mean is off-center from the pixels it represents.]

Chi-square Statistics
If the minimum distance classifier was used, then the threshold is simply a certain spectral distance. However, if Mahalanobis or maximum likelihood were used, then chi-square statistics are used to compare probabilities (Swain and Davis 1978).

When statistics are used to calculate the threshold, the threshold is more clearly defined as follows: T is the distance value at which C% of the pixels in a class have a distance value greater than or equal to T, where:
T = the threshold for a class
C% = the percentage of pixels that are believed to be misclassified, known as the confidence level

T is related to the distance values by means of chi-square statistics. The value X2 (chi-squared) is used in the equation. X2 is a function of:
• the number of bands of data used—known in chi-square statistics as the number of degrees of freedom
• the confidence level

When classifying an image in ERDAS IMAGINE, the classified image automatically has the degrees of freedom (i.e., number of bands) used for the classification. The chi-square table is built into the threshold application.

NOTE: In this application of chi-square statistics, the value of X2 is an approximation. Chi-square statistics are generally applied to independent variables (having no covariance), which is not usually true of image data. A further discussion of chi-square statistics can be found in a statistics text.

Use the Classification Threshold utility to perform the thresholding.
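When Mahalanobis or maximum likelihood distances are thresholded statistically, the cut-off can be taken from a chi-square quantile with the number of bands as the degrees of freedom. The sketch below uses SciPy's chi-square quantile function, which is an assumption of this example (the ERDAS threshold utility uses its own built-in chi-square table); the function and parameter names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def chi_square_threshold(distance_image, labels, n_bands, keep_fraction=0.95):
    """Move pixels whose distance exceeds the chi-square cut-off to class 0.

    distance_image : per-pixel Mahalanobis distances from the classification
    labels         : classified thematic layer (same shape)
    n_bands        : degrees of freedom (number of bands used)
    keep_fraction  : fraction of the chi-square distribution to keep
                     (the tail beyond it is treated as likely misclassified)
    """
    T = chi2.ppf(keep_fraction, df=n_bands)   # chi-square cut-off value
    thresholded = labels.copy()
    thresholded[distance_image > T] = 0       # tail pixels become class 0
    return thresholded, T
```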

Accuracy Assessment
Accuracy assessment is a general term for comparing the classification to geographical data that are assumed to be true, in order to determine the accuracy of the classification process. Usually, the assumed-true data are derived from ground truth data.

It is usually not practical to ground truth or otherwise test every pixel of a classified image. Therefore, a set of reference pixels is usually used. Reference pixels are points on the classified image for which actual data are (or will be) known. The reference pixels are randomly selected (Congalton 1991).

NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility to perform an accuracy assessment for any thematic layer. This layer did not have to be classified by IMAGINE (e.g., you can run an accuracy assessment on a thematic layer that was classified in ERDAS Version 7.5 and imported into IMAGINE).

Random Reference Pixels
When reference pixels are selected by the analyst, it is often tempting to select the same pixels for testing the classification as were used in the training samples. This biases the test, since the training samples are the basis of the classification. By allowing the reference pixels to be selected at random, the possibility of bias is lessened or eliminated (Congalton 1991).

The number of reference pixels is an important factor in determining the accuracy of the classification. It has been shown that more than 250 reference pixels are needed to estimate the mean accuracy of a class to within plus or minus five percent (Congalton 1991).

ERDAS IMAGINE uses a square window to select the reference pixels. The size of the window can be defined by the user. Three different types of distribution are offered for selecting the random pixels:
• random — no rules will be used
• stratified random — the number of points will be stratified to the distribution of thematic layer classes
• equalized random — each class will have an equal number of random points

Use the Accuracy Assessment utility to generate random reference points.

Accuracy Assessment CellArray
An Accuracy Assessment CellArray is created to compare the classified image with reference data. This CellArray is simply a list of class values for the pixels in the classified .img file and the class values for the corresponding reference pixels. The class values for the reference pixels are input by the user. The CellArray data reside in an .img file.

Use the Accuracy Assessment CellArray to enter reference pixels for the class values.

Error Reports
From the Accuracy Assessment CellArray, two kinds of reports can be derived.
• The error matrix simply compares the reference points to the classified points in a c × c matrix, where c is the number of classes (including class 0).
• The accuracy report calculates statistics of the percentages of accuracy, based upon the results of the error matrix.

When interpreting the reports, it is important to observe the percentage of correctly classified pixels and to determine the nature of errors of the producer and the user.

Use the Accuracy Assessment utility to generate the error matrix and accuracy reports.

Kappa Coefficient
The Kappa coefficient expresses the proportionate reduction in error generated by a classification process, compared with the error of a completely random classification. For example, a value of .82 would imply that the classification process was avoiding 82 percent of the errors that a completely random classification would generate (Congalton 1991). For more information on the Kappa coefficient, see a statistics manual.

Output File
When classifying an .img file, the output file is an .img file with a thematic raster layer. This file will automatically contain the following data:
• class values
• class names
• color table
• statistics
• histogram

The .img file will also contain any signature attributes that were selected in the ERDAS IMAGINE Supervised Classification utility. The class names, values, and colors can be set with the Signature Editor or the Raster Attribute Editor.
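Returning to the error reports above, the error matrix and the Kappa coefficient can be computed directly from the reference and classified class values in the CellArray. A minimal sketch (Congalton 1991 gives the full derivation); the function names are illustrative.

```python
import numpy as np

def error_matrix(reference, classified, n_classes):
    """c x c error matrix: rows = classified class, columns = reference class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        m[cls, ref] += 1
    return m

def kappa_coefficient(matrix):
    """Proportionate reduction in error versus a completely random classification."""
    n = matrix.sum()
    observed = np.trace(matrix) / n                       # overall accuracy
    expected = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / n**2
    return (observed - expected) / (1.0 - expected)
```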


CHAPTER 7
Photogrammetric Concepts

Introduction
This chapter is an introduction to photogrammetric concepts, many of which are equally applicable to traditional and digital photogrammetry. However, the focus here is digital photogrammetry, so topics exclusive to traditional methods are omitted. In addition, some of the presented concepts, such as use of satellite data or automatic image correlation, exist only in the digital realm.

There are numerous sources of image data for both traditional and digital photogrammetry. This document focuses on three main sources: aerial photographs (metric frame cameras), SPOT satellite imagery, and Landsat satellite data. Many of the concepts presented for aerial photographs also pertain to most imagery which has a single perspective center. Likewise, the SPOT concepts have much in common with other sensors that also use a linear Charged Coupled Device (CCD) in a pushbroom fashion. Finally, a significantly different geometric model and approach is discussed for the Landsat satellite, an across-track scanning device.

Definitions
Photogrammetry is the "art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena." (ASP, 1980)

Photogrammetry was invented in 1851 by Laussedat, and has continued to develop over the last 140 years. Over time, the development of photogrammetry has passed through the phases of Plane Table Photogrammetry, Analog Photogrammetry, Analytical Photogrammetry, and has now entered the phase of Digital Photogrammetry.

Prior to the invention of the airplane, photographs taken on the ground were used to extract the relationships between objects using geometric principles. This was during the phase of Plane Table Photogrammetry.

In Analog Photogrammetry, starting with stereomeasurement in 1901, optical or mechanical instruments were used to reconstruct three-dimensional geometry from two overlapping photographs. The main product during this phase was topographic maps.

In Analytical Photogrammetry, the computer replaces some expensive optical and mechanical components by substituting analog measurement and calculation with mathematical computation. The resulting devices were analog/digital hybrids. Analytical aerotriangulation, analytical plotters, and orthophoto projectors were the main developments during this phase. Outputs of analytical photogrammetry can be topographic maps, but can also be digital products, such as digital maps and digital elevation models (DEMs).

Digital Photogrammetry is photogrammetry as applied to digital images that are stored and processed on a computer. Digital images can be scanned from photographs or can be directly captured by digital cameras. Many photogrammetric tasks can be highly automated in digital photogrammetry (e.g., automatic DEM extraction and digital orthophoto generation). Digital Photogrammetry is sometimes called Softcopy Photogrammetry. The output products are in digital form, such as digital maps, DEMs, and digital orthophotos saved on computer storage media. Therefore, they can be easily stored, managed, and applied by the user. With the development of digital photogrammetry, photogrammetric techniques are more closely integrated into Remote Sensing and GIS.

The traditional, and largest, application of photogrammetry is to extract topographic information (e.g., topographic maps) from aerial images. However, photogrammetric techniques have also been applied to process satellite images and close range images in order to acquire topographic or non-topographic information of photographed objects.

Coordinate Systems
There are a variety of coordinate systems used in photogrammetry. This chapter will reference these systems as described below.

Pixel Coordinates
The file coordinates of a digital image are defined in a pixel coordinate system. A pixel coordinate system is usually a coordinate system with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the unit in pixels, as shown by axis c and r in Figure 103. These file coordinates (c,r) can also be thought of as the pixel column and row number. This coordinate system is referenced as pixel coordinates (c,r) in this chapter.

Image Coordinates
An image coordinate system is usually defined as a two-dimensional coordinate system occurring on the image plane with its origin at the image center, as illustrated by axis x and y in Figure 103. Image coordinates are used to describe positions on the film plane. Image coordinate units are usually millimeters or microns. This coordinate system is referenced as image coordinates (x,y) in this chapter.

An image space coordinate system is identical to image coordinates, except that it adds a third axis (z). Image space coordinates are used to describe positions inside the camera and usually use units in millimeters or microns. This coordinate system is referenced as image space coordinates (x,y,z) in this chapter.

[Figure 103: Pixel Coordinates and Image Coordinates — the pixel coordinate system (c,r) has its origin at the upper-left corner of the image; the image coordinate system (x,y) has its origin at the image center.]

Ground Coordinates
A ground coordinate system is usually defined as a three-dimensional coordinate system which utilizes a known map projection. Ground coordinates (X,Y,Z) are usually expressed in feet or meters. The Z value is elevation above mean sea level for a given vertical datum. This coordinate system is referenced as ground coordinates (X,Y,Z) in this chapter.

Geocentric and Topocentric Coordinates
Most photogrammetric applications account for earth curvature in their calculations. This is done by adding a correction value or by computing geometry in a coordinate system which includes curvature. Two such systems are geocentric and topocentric coordinates.

A geocentric coordinate system has its origin at the center of the earth ellipsoid. The ZG-axis equals the rotational axis of the earth, and the XG-axis passes through the Greenwich meridian. The YG-axis is perpendicular to both the ZG-axis and XG-axis, so as to create a three-dimensional coordinate system that follows the right hand rule.

A topocentric coordinate system has its origin at the center of the image projected on the earth ellipsoid. The three perpendicular coordinate axes are defined on a tangential plane at this center point. The plane is called the reference plane or the local datum. The x-axis is oriented eastward, the y-axis northward, and the z-axis is vertical to the reference plane (up).

Basic photogrammetric principles can be presented without adding this additional level of complexity. For simplicity of presentation, the remainder of this chapter will not explicitly reference geocentric or topocentric coordinates.

Work Flow
The work flow of photogrammetry can be summarized in three steps: image acquisition, photogrammetric processing, and product output.

[Figure 104: Sample Photogrammetric Work Flow — Image Acquisition (aerial camera film; digital imagery from satellites), Image Preprocessing (scan aerial film; import digital imagery), Photogrammetric Processing (triangulation, stereopair creation, elevation model generation, orthorectification, map feature collection), and Product Output (orthoimages, orthomaps, topographic database, topographic maps).]

The remainder of this chapter is presented in the same sequence as the items in Figure 104. For each section, when appropriate, the aerial model is presented first, followed by the SPOT model. A Landsat satellite model is described at the end of the orthorectification section.

Image Acquisition

Aerial Camera Film

Exposure Station
Each point in the flight path at which the camera exposes the film is called an exposure station.

[Figure 105: Exposure Stations along a Flight Path — exposure stations marked along Flight Lines 1, 2, and 3 of the airplane's flight path.]

Image Scale
The image scale expresses the average ratio between a distance in the image and the same distance on the ground. It is computed as focal length divided by the flying height above the mean ground elevation. For example, with an altitude of 1,000 m and a focal length of 15 cm, the image scale (SI) would be 1:6667.

NOTE: The flying height above ground is used, versus the altitude above sea level.

Strip of Photographs
A strip of photographs consists of images captured along a flight-line, normally with an overlap of 60%. All photos in the strip are assumed to be taken at approximately the same flying height and with a constant distance between exposure stations. Camera tilt relative to the vertical is assumed to be minimal.
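The scale computation in the Image Scale example above, written out as a tiny worked example (the variable names are illustrative):

```python
focal_length = 0.15       # metres (15 cm)
flying_height = 1000.0    # metres above the mean ground elevation

scale = focal_length / flying_height   # 0.00015
print(f"image scale is 1:{1 / scale:.0f}")   # prints: image scale is 1:6667
```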

Block of Photographs
The photographs from the flight path can be combined to form a block. A block of photographs consists of a number of parallel strips, normally with a sidelap of 20-30%. A regular block of photos is a rectangular block in which the number of photos in each strip is the same. Photogrammetric triangulation is performed on the whole block of photographs to transform images and ground points into a homologous coordinate system. The figure below shows a block of 5 X 2 photographs.

[Figure 106: A Regular (Rectangular) Block of Aerial Photos — two strips flown in the flying direction, with 60% overlap between successive photos along each strip and 20-30% sidelap between strips.]

Digital Imagery from Satellites
Digital image data from satellites are distributed on a variety of media, such as tapes and CD-ROMs. The internal format varies, depending on the specific sensor and data vendor. This section addresses photogrammetric operations on SPOT satellite images, though most of the concepts are universal for any pushbroom sensor. A SPOT scene covers an area of approximately 60 X 60 km. (Off-nadir scenes can cover up to 80 X 60 km, depending on the inclination of the sensors.) The resolution of one pixel corresponds to about 10 X 10 m on the ground for panchromatic images.

Correction Levels for SPOT Imagery
SPOT scenes are delivered at different levels of correction. SPOT Image Corporation provides two correction levels that are of interest:
• Level 1A images correspond to raw camera data to which only radiometric corrections have been applied.
• Level 1B images have been corrected for the earth's rotation and viewing angle. Pixels are resampled from the level 1A camera data by cubic polynomials. This data is internally transformed to level 1A before the triangulation calculations are applied.

Refer to "CHAPTER 3: Raster and Vector Data Sources" for more information on satellite remote sensing and the characteristics of satellite data that can be read into ERDAS.

Scanning Aerial Film
Aerial film must be scanned (digitized) to create a digital image. Once scanned, the digital image can be imported into a digital photogrammetric system. Usually the images are not digitally enhanced prior to photogrammetric processing. Most digital photogrammetric software packages have basic image enhancement tools. The common practice is to perform more sophisticated enhancements on the end products (e.g., orthoimages or orthomosaics).

Scanning Resolution
The storage requirement for digital image data can be huge. For example, a standard panchromatic image is 9 by 9 inches (23 x 23 cm). Scanning at 25 microns (roughly 1000 pixels per inch) results in a file with 9000 rows and 9000 columns. Assuming 8 bits per pixel and no image compression, this file occupies about 81 megabytes. Photogrammetric projects often have hundreds or even thousands of photographs. Therefore, obtaining the optimal pixel size (or scanning density) is often a trade-off between capturing maximum image information and the digital storage burden.
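The storage figure in the scanning example works out as follows (decimal megabytes are used, matching the figure quoted above):

```python
photo_size_inches = 9                  # 9 x 9 inch frame
pixels_per_inch = 1000                 # 25 micron scanning density
rows = cols = photo_size_inches * pixels_per_inch   # 9000 x 9000 pixels
bytes_per_pixel = 1                    # 8 bits per pixel, no compression

megabytes = rows * cols * bytes_per_pixel / 1e6
print(f"about {megabytes:.0f} megabytes")   # about 81 megabytes
```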

Photogrammetric Scanners
Photogrammetric quality scanners are special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar to traditional analog and analytical photogrammetric instruments. These scanners are necessary for digital photogrammetric applications which have high accuracy requirements.

These units usually scan only film (either positive or negative), because film is superior to paper, both in terms of image detail and geometry. These units usually have an RMSE (Root Mean Square Error) positional accuracy of 4 microns or less, and are capable of scanning at a maximum resolution of 5 to 10 microns (5 microns is equivalent to approximately 5,000 pixels per inch).

The needed pixel resolution varies depending on the application. Aerial triangulation and feature collection applications often scan in the 10 to 15 micron range. Orthophoto applications often use 15 to 30 micron pixels. Color film is less sharp than panchromatic; therefore color orthoapplications often use 20 to 40 micron pixels.

Desktop Scanners
Desktop scanners are general purpose devices. They lack the image detail and geometric accuracy of photogrammetric quality units, but they are much less expensive. When using a desktop scanner, the user should make sure that the active area is at least 9 X 9 inches, enabling the entire photo frame to be captured. Calibrating these units improves geometric accuracy, but the results are still inferior to photogrammetric units.

Desktop scanners are appropriate for less rigorous uses, such as digital photogrammetry in support of GIS or remote sensing applications. The image correlation techniques which are necessary for automatic elevation extraction are often sensitive to scan quality. Therefore, elevation extraction can become problematic if the scan quality is only marginal.

Photogrammetric Processing
Photogrammetric processing consists of triangulation, stereopair creation, elevation model generation, orthorectification, and map feature collection.

Triangulation
Triangulation establishes the geometry of the camera or sensor relative to objects on the earth's surface. It is the first and most critical step of photogrammetric processing. Figure 107 illustrates the triangulation work flow.

First, the interior orientation establishes the geometry inside the camera or sensor. For aerial photographs, fiducial marks are measured on the digital imagery and camera calibration information is entered. The interior orientation information for SPOT is already known (they are fixed values). The final step is to calculate the exterior orientation, which establishes the location and attitude (rotation angles) of the camera or sensor during the time of image acquisition. Ground control points aid this process.

Once triangulation is completed, the next step is usually stereopair generation. However, the user could proceed directly to generating elevation models or orthoimages.

[Figure 107: Triangulation Work Flow — uncorrected digital imagery plus camera or sensor information are used to calculate the interior orientation; ground control points are then used to calculate the exterior orientation, producing the triangulation results.]

Aerial Triangulation
The following discussion assumes that a standard metric aerial camera is being used, in which the fiducial marks are readily visible on the scanned images and the camera calibration information is available from an external source.

Aerial triangulation determines the exterior orientation parameters of images and three-dimensional coordinates of unknown points, using ground control points or other kinds of known information. It is an economic technique to measure a large amount of object points with very high accuracy. Aerial triangulation is normally carried out for a block of images, containing a minimum of two images.

The strip, independent model, and bundle methods are the common approaches for implementing triangulation, of which bundle block adjustment is the most mathematically rigorous. In bundle block adjustment, there are usually image coordinate observations, ground coordinate point observations, and possibly observations from GPS and satellite orbit information.

Before the triangulation can be computed, the user should acquire images that overlap in the block, measure the tie points on the images, and digitize some control points.

The observation equation can be represented as follows:

V = AX − L

where:
V = the matrix containing the image coordinate residuals
A = the matrix containing the partial derivatives with respect to the unknown parameters (exterior orientation parameters and XYZ ground coordinates)
X = the matrix containing the corrections to the unknown parameters
L = the matrix containing the observations (i.e., image coordinates and control point coordinates)

The equations can be solved using the iterative least squares adjustment:

X = (A^T P A)^-1 A^T P L

where:
X = the matrix containing the corrections to the unknown parameters
A = the matrix containing the partial derivatives with respect to the unknown parameters
P = the matrix containing the weights of the observations
L = the matrix containing the observations
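One iteration of the least squares solution can be written directly from the matrices above. The sketch below is generic: it does not set up the photogrammetric partial derivatives (which come from the collinearity equations), and A, P, and L are assumed to have been formed already.

```python
import numpy as np

def least_squares_correction(A, P, L):
    """Solve X = (A^T P A)^-1 A^T P L for one iteration of the adjustment.

    A : partial derivatives with respect to the unknown parameters
    P : weights of the observations
    L : observations (image and control point coordinates)
    Returns the corrections X and the residuals V = A X - L.
    """
    N = A.T @ P @ A                      # normal equation matrix
    X = np.linalg.solve(N, A.T @ P @ L)  # corrections to the unknowns
    V = A @ X - L                        # image coordinate residuals
    return X, V
```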

Interior Orientation of an Aerial Photo
The interior orientation of an image defines the geometry within the camera. During triangulation, the interior orientation must be available in order to accurately define the external geometry of the camera. Concepts pertaining to interior orientation are described in this section.

To record an image, light rays reflected by an object on the ground are projected through a lens. Ideally, all light rays are straight and intersect at the perspective center. The light rays are then projected onto the film. The plane of the film is called the focal plane. A virtual focal plane exists between the perspective center and the terrain. The virtual focal plane is the same distance (focal length) from the perspective center as is the plane of the film or scanner. The light rays intersect both planes in the same manner. Virtual focal planes are often more convenient to diagram, and therefore are often used in place of focal planes in photogrammetric diagrams.

NOTE: In the discussion following, the virtual focal plane is called the "image plane," and is used to describe photogrammetric concepts.

The orthogonal distance from the perspective center to the image plane is the focal length of the lens. The perspective center is projected onto a point in the image plane that lies directly beneath it. This point is called the principal point. For purposes of photogrammetric triangulation, a local coordinate system is defined in each image. The location of each point in the image can be expressed in terms of this image coordinate system.

Figure 108: Focal and Image Plane (actual focal plane of the film; perspective center, where all light rays intersect; virtual focal plane, the camera image plane; terrain)

Fiducials are four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure, as illustrated by points F1, F2, F3, and F4 in Figure 109. Fiducials are used to compute the transformation from file coordinates to image coordinates. The image coordinates of the fiducials are provided in a camera calibration report.

Figure 109: Image Coordinates, Fiducials, and Principal Point (file coordinate axes A-XF and A-YF; image coordinate axes x and y; principal point P; fiducials F1 through F4)

The file coordinates of a digital image are defined in a pixel coordinate system. For example, in digital photogrammetry, it is usually a coordinate system with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the unit in pixels, as shown by A-XF and A-YF in Figure 109. These file coordinates (XF, YF) can also be thought of as the pixel column and row number.

Once the file coordinates of fiducials are measured, the transformation from file coordinates to image coordinates can be carried out. Usually the six-parameter affine transformation is used here:

x = a0 + a1 XF + a2 YF
y = b0 + b1 XF + b2 YF

Where
a0, a1, a2 = affine parameters
b0, b1, b2 = affine parameters
XF, YF = file coordinates
x, y = image coordinates
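A minimal sketch of fitting the six affine parameters by least squares from measured fiducials follows. The fiducial values shown are invented placeholders, not real calibration data, and the functions are illustrative rather than any particular software's API.

```python
import numpy as np

def fit_affine(file_xy, image_xy):
    """Fit x = a0 + a1*XF + a2*YF and y = b0 + b1*XF + b2*YF by least squares."""
    file_xy = np.asarray(file_xy, dtype=float)
    image_xy = np.asarray(image_xy, dtype=float)
    XF, YF = file_xy[:, 0], file_xy[:, 1]
    M = np.column_stack([np.ones_like(XF), XF, YF])
    a, *_ = np.linalg.lstsq(M, image_xy[:, 0], rcond=None)   # (a0, a1, a2)
    b, *_ = np.linalg.lstsq(M, image_xy[:, 1], rcond=None)   # (b0, b1, b2)
    return a, b

def file_to_image(a, b, XF, YF):
    """Apply the fitted affine transformation to one file coordinate."""
    return a[0] + a[1] * XF + a[2] * YF, b[0] + b[1] * XF + b[2] * YF

# Placeholder fiducial measurements (file pixels) and calibrated image coordinates (mm):
file_xy  = [[50.0, 60.0], [8950.0, 55.0], [8945.0, 8950.0], [45.0, 8955.0]]
image_xy = [[-106.0, 106.0], [106.0, 106.0], [106.0, -106.0], [-106.0, -106.0]]
a, b = fit_affine(file_xy, image_xy)
```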

Once this transformation is in place, image coordinates are directly obtained and the interior orientation is complete.

Exterior Orientation
The exterior orientation determines the relationship of an image to the ground coordinate system. Each aerial camera image has six exterior orientation parameters: the three coordinates of the perspective center (XO, YO, ZO) in the ground coordinate system, and the three rotation angles (ω, ϕ, κ), as shown in Figure 110.

Figure 110: Exterior Orientation of an Aerial Photo (image point PI at (x, y, -f); principal point PP; perspective center O at (XO, YO, ZO); ground point PG at (XG, YG, ZG); local axes X', Y', Z' with rotation angles ω, ϕ, κ)

Where
PP = principal point
O = perspective center with ground coordinates (XO, YO, ZO)
O-x, O-y, O-z = image space coordinate system, with its origin in the perspective center and its x,y-axes parallel to the image coordinate system axes
XG, YG, ZG = ground coordinates
O-X', O-Y', O-Z' = a local coordinate system which is parallel to the ground coordinate system but has its origin at the perspective center; used for expressing the rotation angles (ω, ϕ, κ)
ω = omega rotation angle around the X'-axis
ϕ = phi rotation angle around the Y'-axis
κ = kappa rotation angle around the Z'-axis
PI = point in the image plane
PG = point on the ground

Collinearity Equations
The relationship among image coordinates, ground coordinates, and orientation parameters is described by the following collinearity equations:

x = -f [r11(X - XO) + r21(Y - YO) + r31(Z - ZO)] / [r13(X - XO) + r23(Y - YO) + r33(Z - ZO)]

y = -f [r12(X - XO) + r22(Y - YO) + r32(Z - ZO)] / [r13(X - XO) + r23(Y - YO) + r33(Z - ZO)]

Where:
x, y = image coordinates
X, Y, Z = ground coordinates
f = focal length
XO, YO, ZO = ground coordinates of the perspective center
r11 ... r33 = coefficients of a 3 X 3 rotation matrix, defined by the angles ω, ϕ, κ, that transforms the image system to the ground system

The collinearity equations are the most basic principle of photogrammetry.
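The collinearity equations can be evaluated directly once the rotation matrix has been built from ω, ϕ, κ. The sketch below forms the matrix as a product of three elementary rotations and projects a ground point into image coordinates; the exact rotation convention (axis order and signs) varies between systems, so this is an illustration under one common convention, not the definitive matrix used by any particular package.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """3 x 3 rotation matrix from the angles omega, phi, kappa (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi),   np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def ground_to_image(X, Y, Z, XO, YO, ZO, omega, phi, kappa, f):
    """Collinearity equations: image coordinates (x, y) of a ground point."""
    r = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = X - XO, Y - YO, Z - ZO
    denom = r[0, 2] * dX + r[1, 2] * dY + r[2, 2] * dZ          # r13, r23, r33
    x = -f * (r[0, 0] * dX + r[1, 0] * dY + r[2, 0] * dZ) / denom   # r11, r21, r31
    y = -f * (r[0, 1] * dX + r[1, 1] * dY + r[2, 1] * dZ) / denom   # r12, r22, r32
    return x, y
```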

Control for Aerial Triangulation
The distribution and density of ground control is a major factor in the accuracy of photogrammetric triangulation. Control points are used to establish a reference frame for the photogrammetric triangulation of a block of images. A control point is a point with known coordinates in the ground coordinate system. Ground control points serve as stable (known) values, so their accuracy determines the accuracy of the triangulation.

Control points for aerial triangulation are created by identifying points with known ground coordinates in the aerial photos. The user selects these points based on their relation to clearly defined and visible ground features. Ground coordinates of control points can be acquired by digitizing existing maps or from geodetic measurements, such as the global positioning system (GPS), a surveying instrument, or Electronic Distance Measuring devices (EDMs). These ground coordinates are typically three-dimensional. They consist of X,Y coordinates along with a Z (elevation of the point). For optimal accuracy, the coordinates should be accurate to within the distance on the ground that is represented by approximately 0.1 to 0.5 pixels in the image.

In triangulation, there can be several types of control points. A full control point specifies map X,Y coordinates, expressed in the units of the specified map projection, and Z coordinates, which are elevation values expressed in units above datum that are consistent with the map coordinate system. Horizontal control only specifies the X,Y coordinates, while vertical control only specifies the Z.

Optimizing control distribution is part art and part science, and goes beyond the scope of this document. However, the example presented in Figure 111 illustrates a specific case. General rules for control distribution within a block are:
• Whenever possible, locate control points that lie on multiple images
• Control is needed around the outside of a block
• Control is needed at certain distances within the block

Figure 111: Control Points in Aerial Photographs (block of 8 X 4 photos); v = control (along all edges of the block and after the 3rd photo of each strip)

For optimal results, control points should be measured by geodetic techniques with an accuracy that corresponds to about 0.1 to 0.5 pixels in the image. For example, if a photograph was scanned with a resolution of 1000 dpi (9000 X 9000 pixels), the pixel size in the image is 25 microns (0.025 mm). For an image scale of 1:40,000, each pixel covers approximately 1.0 meter on the ground. Applying the above rule, the ground control points should be accurate to about 0.1 to 0.5 meters. Digitization of existing maps often does not yield this degree of accuracy.

A greater number of known ground points should be available than will actually be used in the triangulation. These additional points become check points, and can be used to independently verify the degree of accuracy of the triangulation. This verification, called check point analysis, is discussed on page 287.
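The worked example above can be reproduced with a few lines of arithmetic; the scan resolution and photo scale are the values used in the text, and the script is only an illustration of the rule of thumb.

```python
# Required ground accuracy of control points for a scanned aerial photograph.
dpi = 1000                                   # scan resolution (dots per inch)
scale = 40000                                # photo scale denominator (1:40,000)
pixel_mm = 25.4 / dpi                        # pixel size on the film: ~0.025 mm (25 microns)
ground_pixel_m = pixel_mm / 1000.0 * scale   # ground coverage of one pixel: ~1.0 meter
required_accuracy_m = (0.1 * ground_pixel_m, 0.5 * ground_pixel_m)   # 0.1 to 0.5 pixels
print(pixel_mm, ground_pixel_m, required_accuracy_m)
```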

Ground control points need not be available in every image, but can be supplemented by other points which are identified as tie points. A tie point is a point whose ground coordinates are not known, but which can be recognized visually in the overlap or sidelap area between two or more images. Ground coordinates for tie points are computed during the photogrammetric triangulation.

Tie points should be visually well-defined in all images. Ideally, they should show good contrast in two directions, like the corner of a building or a road intersection. Tie points should also be well distributed over the area of the block. Typically, nine tie points in each image are adequate for photogrammetric triangulation of aerial photographs. If a control point already exists in the candidate location of a tie point, the tie point can be omitted.

Figure 112: Ideal Point Distribution Over a Photograph for Aerial Triangulation (tie points in a single image; x, y = the image coordinates)

In a block of aerial photographs with 60% overlap and 25-30% sidelap, nine points are sufficient to tie together the block as well as individual strips.

Figure 113: Tie Points in a Block of Photos

In summary:
• A control point must be visually identifiable in one or more images and have known ground coordinates.
• A tie point is a point that is visually identifiable in at least two images for which ground coordinates are unknown. Nine tie points in each image tie the block together.
• If a control point is in the overlap area, it helps to control both images. If the ground coordinates of a control point are not used in the triangulation, they can serve as a check point for independent analysis of the accuracy of the triangulation.
• If, in later processing, the ground coordinates for a control point are found to have low reliability, the control point can be changed to a tie point.

SPOT Triangulation
The SPOT satellite carries two HRV (High Resolution Visible) sensors, each of which is a pushbroom scanner that takes a sequence of line images while the satellite circles the earth. The satellite orbit is circular, north-south and south-north, about 830 km above the earth, and sun-synchronous. A sun-synchronous orbit is one in which the orbital rotation is the same rate as the earth's rotation.

The image captured by the satellite is called a scene. A scene (SPOT Pan 1A) is composed of 6,000 lines, and each of these lines consists of 6,000 pixels. Each line is exposed for 1.5 milliseconds, so it takes 9 seconds to scan the entire scene. (A scene from SPOT XS 1A is composed of only 3,000 lines and 3,000 columns and has 20 meter pixels, while Pan has 10 meter pixels.)

NOTE: This section will address only the 10 meter Pan scenario.

The focal length of the camera optic is 1084 mm, which is very large relative to the length of the camera (78 mm). The field of view is 4.12 degrees.

Since the motion of the satellite is smooth and practically linear over the length of a scene, the perspective centers of all scan lines of a scene are assumed to lie along a smooth line. For each line scanned, there is a unique perspective center and a unique set of rotation angles. The location of the perspective center relative to the line scanner is constant for each line (interior orientation and focal length). The satellite exposure station is defined as the perspective center in ground coordinates for the center scan line.

Figure 114: Perspective Centers of SPOT Scan Lines (perspective centers of scan lines; motion of satellite; scan lines on image; ground)

A pixel in the SPOT image records the light detected by one of the 6,000 light-sensitive elements in the camera. Each pixel is defined by file coordinates (column and row numbers). The physical dimension of a single, light-sensitive element is 13 X 13 microns. This is the pixel size in image coordinates. The center of the scene is the center pixel of the center scan line. It is the origin of the image coordinate system.

Figure 115: Image Coordinates in a Satellite Scene (6,000 lines or rows by 6,000 pixels or columns)
Where
A = origin of file coordinates
A-XF, A-YF = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes

SPOT Interior Orientation
Figure 116 shows the interior orientation of a satellite scene. For each scan line, a separate bundle of light rays is defined. The transformation between file coordinates and image coordinates is constant.

Figure 116: Interior Orientation of a SPOT Scene (orbiting direction N to S)
where
Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line k, bundled at perspective center Ok

SPOT Exterior Orientation
SPOT satellite geometry is stable and the sensor parameters (e.g., focal length) are well-known. However, the triangulation of SPOT scenes is somewhat unstable because of the narrow, almost parallel bundles of light rays. Light rays in a bundle defined by the SPOT sensor are almost parallel, lessening the importance of the satellite's position. Instead, the inclination angles of the cameras become the critical data.

Inclination is the angle between a vertical on the ground at the center of the scene and a light ray from the exposure station. SPOT has off-nadir viewing capability. Nadir is the point directly below the camera. Off-nadir refers to any point that is not directly beneath the satellite, but is off to an angle (i.e., east or west of the nadir). The scanner can produce a nadir view, and the cameras can be tilted in increments of 0.6 degrees to a maximum of 27 degrees to the east (negative inclination) or west (positive inclination). This angle defines the degree of off-nadir viewing when the scene was recorded.

A stereo-scene is achieved when two images of the same area are acquired on different days from different orbits, one taken east of the other. For this to occur, there must be significant differences in the inclination angles.

The header of the data file of a SPOT scene contains ephemeris data, which provides information about the recording of the data and the satellite orbit. Ephemeris data that are used in satellite triangulation are:
• the position of the satellite in geocentric coordinates (with the origin at the center of the earth) to the nearest second,
• the velocity vector, which is the direction of the satellite's travel,
• attitude changes of the camera, and
• the exact time of exposure of the center scan line of the scene.

Ephemeris data for the orbit are available in the header file of SPOT scenes. They give the satellite's position in three-dimensional, geocentric coordinates at 60-second increments, as well as the exact time of the center scan line of the scene. The velocity vector and some rotational velocities relating to the attitude of the camera are given. The geocentric coordinates included with the ephemeris data are converted to a local ground system for use in triangulation. The center of a satellite scene is interpolated from the header data.

Figure 117: Inclination of a Satellite Stereo-Scene (View from North to South)
Where
C = center of the scene
I- = eastward inclination
I+ = westward inclination
O1, O2 = exposure stations (perspective centers of the imagery)

The orientation angle of a satellite scene is the angle between a perpendicular to the center scan line and the North direction. The spatial motion of the satellite is described by the velocity vector. The real motion of the satellite above the ground is further distorted by earth rotation. The velocity vector of a satellite is the satellite's velocity if measured as a vector through a point on the spheroid. It provides a technique to represent the satellite's speed as if the imaged area were flat instead of being a curved surface.

Figure 118: Velocity Vector and Orientation Angle of a Single Scene
Where
O = orientation angle
C = center of the scene
V = velocity vector

Satellite triangulation provides a model for calculating the spatial relationship between the SPOT sensor and the ground coordinate system for each line of data. This relationship is expressed as the exterior orientation, which consists of:
• the perspective center of the center scan line,
• the change of perspective centers along the orbit,
• the three rotations of the center scan line, and
• the changes of angles along the orbit.

It is assumed that the satellite travels in a smooth motion as a scene is being scanned. Therefore, once the exterior orientation of the center scan line is determined, the exterior orientation of any other scan line is calculated based on the distance of that scan line from the center and the changes of the perspective center location and rotation angles.

In addition to fitting the bundle of light rays to the known points, satellite triangulation also accounts for the motion of the satellite by determining the relationship of the perspective centers and rotation angles of the scan lines.

Bundle adjustment for triangulating a satellite scene is similar to the bundle adjustment used for aerial photos. Least squares adjustment is used to derive a set of parameters that comes the closest to fitting the control points to their known ground coordinates, and to intersecting tie points.

The resulting parameters of satellite bundle adjustment are:
• the ground coordinates of the perspective center of the center scan line,
• the rotation angles for the center scan line,
• the coefficients, from which the perspective center and rotation angles of all other scan lines can be calculated, and
• the ground coordinates of all tie points.

Collinearity Equations
Modified collinearity equations are applied to analyze the exterior orientation of satellite scenes. Each scan line has a unique perspective center and individual rotation angles. When the satellite moves from one scan line to the next, these parameters change. Due to the smooth motion of the satellite in orbit, the changes are small and can be modeled by low order polynomial functions.

Control for SPOT Triangulation
Both control and tie points can be used for satellite triangulation of a stereo scene. For triangulating a single scene, only control points are used; it is then called space resection. A minimum of six control points is necessary for good triangulation results. The best locations for control points in the scene are shown in Figure 119.

Figure 119: Ideal Point Distribution Over a Satellite Scene for Triangulation (control points relative to the horizontal scan lines)

In some cases, there are no reliable control points available in the area for which a DEM or orthophoto is to be created. In this instance, a local coordinate system may be defined, in which the satellite positions, velocity vectors, and rotation angles from the image header are used to define a datum. When a local coordinate system is defined, the coordinate system is limited by the accuracy of the ephemeris information. The coordinate center is the center of the scene expressed in the longitude and latitude taken from the header. The ground coordinates of tie points will be computed in such a case. The resulting DEM would display relative elevations, and the coordinate system would approximately correspond to the real system of this area. This might be especially useful for remote islands, in which case points along the shoreline can be very easily detected as tie points.

Triangulation Accuracy Measures
The triangulation solution usually provides the standard deviation, the residuals of observations, information describing the precision of the computed parameters, and check point analysis to aid in determining the accuracy of triangulation.

Standard Deviation (σ0)
Each time the triangulation program completes one iteration, the σ0 value (square root of the variance of unit weight) is calculated. It gives the mean error of the image coordinate measurements used in the adjustment. This value decreases as the bundle fits better to the control and tie points.

NOTE: The σ0 value usually should not be larger than 0.25 to 0.75 pixels, depending on the geometry of the photo block.

Precision and Residual Values
After the adjustment, the covariance matrix of unknowns, as well as the residuals of the observations, can be computed. The variance matrix and covariance matrix are the theoretical precision of the unknown parameters. Residuals describe how much the measured image coordinates differ from their computed locations after the adjustment.

Check Point Analysis
An independent measure is needed to describe the accuracy of computed ground coordinates of tie points. A check point analysis compares photogrammetrically computed ground coordinates of tie points with their known coordinates, measured independently by geodetic methods or by digitizing existing maps. The result of this analysis is an RMS error, which is split into a horizontal (X,Y) and a vertical (Z) component.

NOTE: In the case of aerial mapping, the vertical accuracy is usually lower than the horizontal accuracy by a factor of 1.5. For satellite stereo-scenes, the vertical accuracy depends on the dimension of the inclination angles (the separation of the two scenes).
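The RMS error of a check point analysis can be computed directly from the differences between the adjusted and the independently measured coordinates. The following is a minimal sketch, assuming the two coordinate lists have already been matched point for point; the function name is invented for illustration.

```python
import numpy as np

def check_point_rms(computed_xyz, known_xyz):
    """RMS error of check points, split into horizontal (X,Y) and vertical (Z) parts."""
    d = np.asarray(computed_xyz, dtype=float) - np.asarray(known_xyz, dtype=float)
    rms_xy = np.sqrt(np.mean(d[:, 0]**2 + d[:, 1]**2))   # horizontal component
    rms_z = np.sqrt(np.mean(d[:, 2]**2))                 # vertical component
    return rms_xy, rms_z
```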

Robust Estimation
Using robust estimation or data-snooping techniques, some gross errors in observations can be detected and flagged. The residual of each image coordinate measurement can be a good reference to help decide whether to remove the observation from further computations. However, the residuals are not equivalent to their observation errors, because of the different redundancy of the different measurements. A better way is to consider a residual together with its redundancy.

Editing Points and Repeating the Adjustment
To increase the accuracy of the triangulation, you may refine the input image coordinates to eliminate the gross errors in the observations. After editing the input image coordinates, the triangulation can be repeated.

Stereo Imagery
To perform photogrammetric stereo operations, two views of the same ground area captured from different locations are required. A stereopair is a set of two images that overlap, providing two views of the terrain in the overlap area. The relief displacement in a stereopair is required to extract three-dimensional information about the terrain.

Though digital photogrammetric principles can be applied to any type of imagery, this document focuses on two main sources: aerial photographs (metric frame cameras) and SPOT satellite imagery. Many of the concepts presented for aerial photographs also pertain to most imagery that has a single perspective center. Likewise, the SPOT concepts have much in common with other sensors that also use a linear Charged Coupled Device (CCD) in a pushbroom fashion.

Aerial Stereopairs
For decades, aerial photographs have been used to create topographic maps in analog and analytical stereoplotters. Aerial photographs are taken by specialized cameras, mounted so that the lens is close to vertical, pointing out of a hole in the bottom of an airplane. Photos are taken in sequence at regular intervals, most commonly along the flight line. Neighboring photos along the flight line usually overlap by 60% or more. A stereopair can be constructed from any two overlapping photographs that share a common area on the ground.

Figure 120: Aerial Stereopair (60% Overlap), showing the flight direction

SPOT Stereopairs
Satellite stereopairs are created by two scenes of the same terrain that are recorded from different viewpoints. The SPOT stereopairs are recorded from different orbits on different days. Because of its off-nadir viewing capability, it is easy to acquire stereopairs from the SPOT satellite.

Figure 121: SPOT Stereopair (80% Overlap), showing the motion of the satellite during scanning on the 1st and 2nd satellite visits, and the terrain

Epipolar Stereopairs
Epipolar stereopairs are created from triangulated, overlapping imagery using the process in Figure 122. Digital photogrammetry creates a new set of digital images by resampling the overlap region into a stereo orientation. This orientation, called epipolar geometry, is characterized by relief displacement only occurring in one dimension (along the flight line). A feature unique to digital photogrammetry is that there is no need to create a relative stereo model before proceeding to absolute map coordinates.

Figure 122: Epipolar Stereopair Creation Work Flow (Uncorrected Digital Imagery and Triangulation Results; Generate Epipolar Stereopair; Epipolar Stereopair)

Generate Elevation Models
Elevation models are generated from overlapping imagery (Figure 123). There are two methods in digital photogrammetry. Method 1 uses the original images and triangulation results. Method 2 uses only the epipolar stereopairs (which are assumed to include geometric information derived from the triangulation results).

Figure 123: Generate Elevation Models Work Flow (Method 1) — Uncorrected Digital Imagery and Triangulation Results; Extract Elevation Information; Generated DTM

Figure 124: Generate Elevation Models Work Flow (Method 2) — Epipolar Stereopair; Extract Elevation Information; Generated DTM

Traditional Methods
The traditional method of deriving elevations was to visualize the stereopair in three dimensions using an analog or analytical stereo plotter. The user would then place points and breaklines at critical terrain locations. An alternative method was to set the pointer to a fixed elevation and then proceed to trace contour lines.

Digital Methods
Both of the traditional methods described above can also be used in digital photogrammetry utilizing specialized stereo viewing hardware. However, a powerful new method is introduced with the advent of all digital systems: image correlation. The general idea is to use pattern matching algorithms to locate the same ground features on two overlapping photographs. The triangulation information is then used to calculate ground (X,Y,Z) values for each correlated feature.

Y. In addition to elevation points. Their usage in the industry are not always consistent. the elevation is computed by a surface interpolation method. DTMs A digital terrain model (DTM) is a discrete expression of terrain surface in a data array. For each grid point. 292 ERDAS . to describe the terrain surface more accurately. Z value. Often. IMAGINE Virtual GIS). Consider DTM as being a general term for elevation models. with DEMs and TINs (defined below) as specific representations. A breakline is an elevation polyline.Y) and the elevations of the ground points and breaklines. in this section the following meanings will be assigned to specific terms. Often. DEMs A digital elevation model (DEM) is a specific representation of DTMs in which the elevation points consist of a regular grid. TINs A triangulated irregular network (TIN) is a specific representation of DTMs in which elevation points can occur at irregular intervals. DEM Interpolation The direct results from most image matching techniques are irregular and discrete object surface points. breaklines are often included in TINs. or it can be represented with irregular points. Some of them have been introduced in "CHAPTER 1: Raster Data. There are many algorithms for DEM interpolation." Other methods. However. TIN based interpolation methods can deal with breaklines more efficiently. are also used for DEM interpolation. DEMs are stored raster files in which each grid cell value contains an elevation value.Elevation Model Definitions There are a variety of terms used to describe various digital elevation models. In order to generate a DEM.g.. A DTM can be extracted from stereo imagery based on the automatic matching of points in the overlap areas of a stereo model. and this terminology will be used consistently. A DTM can be in the regular grid form. The stereo model can be a satellite stereo scene or a pair of digitized aerial photographs. breaklines should be added and used in the DEM interpolation. consisting of a group of planimetric coordinates (X. including Least Square Collocation and Finite Elements. In particular. the irregular set of object points need to be interpolated. in which each vertex has its own X. The resulting DTM can be used as input to geoprocessing software. it can be utilized to produce an orthophoto or used in an appropriate 3-D viewing package (e.

Image Matching
Image matching refers to the automatic acquisition of corresponding image points on the overlapping area of two images. For more information on image matching, see "Image Matching Techniques" on page 294.

Epipolar Image Pair
The epipolar image pair is a stereopair without y-parallax. It can be generated from the original stereopair if the orientation parameters are known. When image matching algorithms are applied to the epipolar stereopair, the search can be constrained to the epipolar line to significantly reduce search times and false matching. However, some of the image detail may be lost during the epipolar resampling process.

Image Pyramid
Because of the large amounts of image data, the image pyramid is usually adopted in the image matching techniques to reduce the computation time and to increase the matching reliability. The pyramid is a data structure consisting of the same image represented several times, at a decreasing spatial resolution each time. Each level of the pyramid contains the image at a particular resolution. The matching process is performed at each level of resolution. The search is first performed at the lowest resolution level and subsequently at each higher level of resolution. Figure 125 shows a four-level image pyramid.

Figure 125: Image Pyramid for Matching at Coarse to Full Resolution
Level 4: 64 x 64 pixels, resolution of 1:8 (matching begins on level 4)
Level 3: 128 x 128 pixels, resolution of 1:4
Level 2: 256 x 256 pixels, resolution of 1:2
Level 1: 512 x 512 pixels, full resolution of 1:1 (matching finishes on level 1)
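A minimal sketch of building such a pyramid by repeated 2 x 2 averaging follows; the number of levels and the reduction method are arbitrary choices here, not necessarily those used by any particular matching package.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Return a list [full resolution, 1:2, 1:4, 1:8, ...] of block-averaged images."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w]
        # average each 2 x 2 block to halve the resolution
        reduced = (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        pyramid.append(reduced)
    return pyramid

# Matching would then start on pyramid[-1] (coarsest) and finish on pyramid[0].
```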

Image Matching Techniques
The image matching methods can be divided into three categories:
• area based matching
• feature based matching
• relation based matching

Area Based Matching
Area based matching can also be called signal based matching. This method determines the correspondence between two image areas according to the similarity of their gray level values. The cross correlation and least squares correlation techniques are well-known methods for area based matching.

Correlation Windows
Area based matching uses correlation windows. These windows consist of a local neighborhood of pixels. One example of correlation windows is square neighborhoods (e.g., 3 X 3, 5 X 5, 7 X 7 pixels). In practice, the windows vary in shape and dimension, based on the matching technique. Area correlation uses the characteristics of these windows to match ground feature locations in one image to ground features on the other.

A reference window is the source window on the first image, which remains at a constant location. Its dimensions are usually square in size (e.g., 3 X 3, 5 X 5, etc.). Search windows are candidate windows on the second image that are evaluated relative to the reference window. During correlation, many different search windows are examined until a location is found that best matches the reference window.

Correlation Calculations
Two correlation calculations are described below: cross correlation and least squares correlation. Most area based matching calculations, including these methods, normalize the correlation windows. Therefore, it is not necessary to balance the contrast or brightness prior to running correlation. Cross correlation is more robust in that it requires a less accurate a priori position than least squares. However, its precision is limited to 1.0 pixels. Least squares correlation can achieve precision levels of 0.1 pixels, but requires an a priori position that is accurate to about 2 pixels. In practice, cross correlation is often followed by least squares.

Cross Correlation
Cross correlation computes the correlation coefficient of the gray values between the template window and the search window, according to the following equation:

ρ = Σ [g1(c1,r1) - ḡ1][g2(c2,r2) - ḡ2] / sqrt( Σ [g1(c1,r1) - ḡ1]^2 · Σ [g2(c2,r2) - ḡ2]^2 )

with

ḡ1 = (1/n) Σ g1(c1,r1)
ḡ2 = (1/n) Σ g2(c2,r2)

where
ρ = the correlation coefficient
g(c,r) = the gray value of the pixel (c,r)
c1, r1 = the pixel coordinates on the left image
c2, r2 = the pixel coordinates on the right image
n = the total number of pixels in the window
i, j = pixel index into the correlation window (the sums are taken over the window)

When using the area based cross correlation, it is necessary to have a good initial position for the two correlation windows. Also, if the contrast in the windows is very poor, the correlation will fail.

Least Squares Correlation
Least squares correlation uses the least squares estimation to derive parameters that best fit a search window to a reference window. This technique accounts for both gray scale and geometric differences, making it especially useful when ground features on one image look somewhat different on the other image (differences which occur when the surface terrain is quite steep or when the viewing angles are quite different).

Least squares correlation is iterative. The parameters calculated during the initial pass are used in the calculation of the second pass, and so on, until an optimum solution has been determined. When least squares correlation fits a search window to the reference window, both radiometric (pixel gray values) and geometric (location, size, and shape of the search window) transformations are calculated. Least squares matching can result in high positional accuracy (about 0.1 pixels). However, it is sensitive to initial approximations. The initial coordinates for the search window prior to correlation must be accurate to about 2 pixels or better.
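A minimal sketch of the cross correlation coefficient above, evaluated at every candidate position of a search area, follows. The window and array sizes are placeholders, and a practical matcher would restrict the search (for example, to the epipolar line) rather than scanning exhaustively.

```python
import numpy as np

def correlation_coefficient(ref, win):
    """Normalized cross correlation of two equally sized gray value windows."""
    g1 = ref - ref.mean()
    g2 = win - win.mean()
    denom = np.sqrt((g1**2).sum() * (g2**2).sum())
    return (g1 * g2).sum() / denom if denom > 0 else 0.0

def best_match(reference, search):
    """Slide the reference window over the search area; return the best offset and rho."""
    rh, rw = reference.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(search.shape[0] - rh + 1):
        for c in range(search.shape[1] - rw + 1):
            rho = correlation_coefficient(reference, search[r:r + rh, c:c + rw])
            if rho > best:
                best, best_rc = rho, (r, c)
    return best_rc, best
```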

NOTE: The coordinate nomenclature in the following formulas differs from the nomenclature established elsewhere in this chapter.

For example, suppose the change in gray values between two correlation windows can be represented as a linear relationship. Also, assume that the change in the window's geometry can be represented by an affine transformation:

g2(c2,r2) = h0 + h1 g1(c1,r1)
c2 = a0 + a1 c1 + a2 r1
r2 = b0 + b1 c1 + b2 r1

Where
c1, r1 = the pixel coordinates in the reference window
c2, r2 = the pixel coordinates in the search window
g1(c1,r1) = the gray value of pixel (c1,r1)
g2(c2,r2) = the gray value of pixel (c2,r2)
h0, h1 = linear gray value transformation parameters
a0, a1, a2 = affine geometric transformation parameters
b0, b1, b2 = affine geometric transformation parameters

Based on this assumption, the error equation for each pixel can be derived, as shown in the following equation:

v = (a1 + a2 c1 + a3 r1) gc + (b1 + b2 c1 + b3 r1) gr - h1 - h2 g1(c1,r1) + ∆g

with ∆g = g2(c2,r2) - g1(c1,r1)

where gc and gr are the gradients of g2(c2,r2).

Feature Based Matching
Feature based matching determines the correspondence between two image features. Most feature based techniques match extracted point features (this is called feature point matching), as opposed to other features, such as lines or complex objects. Poor contrast areas can be avoided with feature based matching.

In order to implement feature based matching, the image features must initially be extracted. There are several well-known operators for feature point extraction. Examples include:
• Moravec Operator
• Dreschler Operator
• Förstner Operator

After the features are extracted, the attributes of the features are compared between two images. The feature pair with the attributes which are the best fit will be recognized as a match.

Relation Based Matching
Relation based matching is also called structure based matching. This kind of matching technique uses not only the image features, but also the relation among the features. With relation based matching, the corresponding image structures can be recognized automatically, without any a priori information. However, the process is time-consuming, since it deals with varying types of information. Relation based matching can also be applied for the automatic recognition of control points.
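As an illustration of feature point extraction, the following is a minimal sketch of the first operator listed above (the Moravec operator): the interest value of a pixel is the minimum over four directions of the sum of squared gray value differences in a small window, and local maxima of that value are kept as candidate feature points. The window size and threshold here are arbitrary choices.

```python
import numpy as np

def moravec(image, window=3, threshold=100.0):
    """Return a list of (row, col) feature points using the Moravec interest operator."""
    img = np.asarray(image, dtype=float)
    half = window // 2
    rows, cols = img.shape
    interest = np.zeros_like(img)
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]            # four directions
    for r in range(half + 1, rows - half - 1):
        for c in range(half + 1, cols - half - 1):
            vals = []
            for dr, dc in shifts:
                w0 = img[r-half:r+half+1, c-half:c+half+1]
                w1 = img[r-half+dr:r+half+1+dr, c-half+dc:c+half+1+dc]
                vals.append(np.sum((w0 - w1) ** 2))
            interest[r, c] = min(vals)                    # minimum over the directions
    # keep pixels whose interest value exceeds the threshold and is a local maximum
    points = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = interest[r, c]
            if v > threshold and v == interest[r-1:r+2, c-1:c+2].max():
                points.append((r, c))
    return points
```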

Orthorectification
Orthorectification takes a raw digital image and applies an elevation model (DTM) and triangulation results to create an orthoimage (digital orthophoto). The DTM can be sourced from an externally derived product, or generated from the raw digital images earlier in the photogrammetric process (see Figures 126 and 127).

Figure 126: Orthorectification Work Flow (Method 1) — Generated DTM, Uncorrected Digital Imagery, and Triangulation Results; Orthorectify Imagery; Generated Orthoimage

Figure 127: Orthorectification Work Flow (Method 2) — External DTM, Uncorrected Digital Imagery, and Triangulation Results; Orthorectify Imagery; Generated Orthoimage

An image or photograph with an orthographic projection is one for which every point looks as if an observer were looking straight down at it, along a line of sight that is orthogonal (perpendicular) to the earth.

Figure 128 illustrates the relationship of a remotely-sensed image to the orthographic projection.

Figure 128: Orthographic Projection (perspective projection from the satellite sensor or camera; orthographic projection; terrain; reference plane at elevation zero)

The resulting image is known as a digital orthophoto or orthoimage. The digital orthophoto is a representation of the surface projected onto the plane of zero elevation, or reference plane, which is called the datum. An aerial photo or satellite scene transformed by the orthographic projection yields a map that is free of most significant geometric distortions.

Geometric Distortions
When a remotely-sensed image or an aerial photograph is recorded, there is inherent geometric distortion caused by terrain and by the angle of the sensor or camera to the ground. In the following material, only the most significant of these distortions are presented. These are terrain, the camera or sensor itself (e.g., radial lens distortion), and the mechanics of acquisition (e.g., for SPOT, earth rotation and change in orbital position during acquisition). In addition, there are distortions caused by earth curvature and atmospheric diffraction.

Aerial and SPOT Orthorectification

Creating Digital Orthophotos
The following are necessary to create a digital orthophoto:
• a satellite image or scanned aerial photograph,
• a digital terrain model (DTM) of the area covered by the image, and
• the orientation parameters of the sensor or camera.

Relief displacement is corrected for by taking each pixel of a DTM and finding the equivalent position in the satellite or aerial image. A brightness value is determined for this location based on resampling of the surrounding pixels. The brightness value, elevation, and orientation are used to calculate the equivalent location in the orthoimage file. In overlap regions of orthoimage mosaics, the digital orthophoto can be used, which minimizes problems with contrast, cloud cover, occlusions, and reflections from water and snow.

Figure 129: Digital Orthophoto — Finding Gray Values (the DTM, the image plane, and the orthoimage gray values)
Where
P = ground point
P1 = image point
O = perspective center (origin)
X, Z = ground coordinates (in DTM file)
f = focal length
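A minimal sketch of the per-cell loop implied by Figure 129 follows: for every DTM cell, the corresponding image position is found (assumed here to be supplied by a collinearity-style helper such as the one sketched earlier in this chapter) and a brightness value is resampled from the source image by bilinear interpolation. The function names, the helper interface, and the orientation argument are illustrative assumptions, not a specific product's API.

```python
import numpy as np

def bilinear(image, x, y):
    """Bilinear resampling of a gray value at a fractional pixel position (column x, row y)."""
    c0, r0 = int(np.floor(x)), int(np.floor(y))
    dc, dr = x - c0, y - r0
    w = image[r0:r0 + 2, c0:c0 + 2]
    return ((1 - dr) * (1 - dc) * w[0, 0] + (1 - dr) * dc * w[0, 1] +
            dr * (1 - dc) * w[1, 0] + dr * dc * w[1, 1])

def orthorectify(image, dtm, xmin, ymin, cell, orientation, to_image_xy):
    """Fill an orthoimage grid aligned with the DTM grid.

    to_image_xy(X, Y, Z, orientation) -> (column, row) is assumed to apply the
    interior and exterior orientation (collinearity equations) of the sensor.
    """
    nrows, ncols = dtm.shape
    ortho = np.zeros_like(dtm, dtype=float)
    for row in range(nrows):
        for col in range(ncols):
            X = xmin + col * cell
            Y = ymin + row * cell
            Z = dtm[row, col]
            c, r = to_image_xy(X, Y, Z, orientation)
            if 0 <= r < image.shape[0] - 1 and 0 <= c < image.shape[1] - 1:
                ortho[row, col] = bilinear(image, c, r)
    return ortho
```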

The resulting orthoimages have similar basic characteristics to images created by other means of rectification, such as polynomial warping or rubber sheeting. On any rectified image, a map ground coordinate can be quickly calculated for a pixel position. However, the orthorectification process almost always explicitly models the ground terrain and sensor attitude (position and rotation angles), which makes it much more accurate for off-nadir imagery, larger image scales, and mountainous regions. Also, orthorectification often requires fewer control points than other methods.

See "CHAPTER 8: Rectification" for a complete explanation of rectification and resampling methods.

Resampling methods used are nearest neighbor, bilinear interpolation, and cubic convolution. Generally, when the cell sizes of orthoimage pixels are selected, they should be similar or larger than the cell sizes of the original image. For SPOT Pan images, a cell size of 10 x 10 meters is appropriate. Choosing a smaller pixel size would oversample the original image. For example, if the image was scanned 9K X 9K, 1 pixel would represent 0.025 mm on the image. Assuming that the image scale (SI) of this photo is 1:40,000, then the cell size on the ground is about 1 m. For the orthoimage, it would be appropriate to choose a pixel spacing of 1 m or larger. Any further enlargement from the original scene to the orthophoto would not improve the image detail.

Landsat Orthorectification
Landsat TM or Landsat MSS sensor systems have a complex geometry which includes factors such as a rotating mirror inside the sensor, changes in orbital position during acquisition, and earth rotation. The resulting "zero level" imagery requires sophisticated rectification that is beyond the capabilities of many end users. For this reason, almost all Landsat data formats have already been preprocessed to minimize these distortions. Applying simple polynomial rectification techniques to these formats usually fulfills most georeferencing needs when the terrain is relatively flat. However, imagery of mountainous regions needs to account for relief displacement for effective georeferencing. A solution to this problem is discussed below.

Vertical imagery taken by a line scanner, such as Landsat TM or Landsat MSS, can be used with an existing DTM to yield an orthoimage. No information about the sensor or the orbit of the satellite is needed. Instead, a transformation is calculated using information about the ground region and image capture. This section illustrates just one example of how to correct the relief distortion by using the polynomial formulation. The correction takes into account:
• elevation
• local earth curvature
• distance from nadir
• flying height above datum

The image coordinates are adjusted for the above factors, and a least squares regression method similar to the one used for rectification is used to get a polynomial transformation between image and ground coordinates.

Approximating the Nadir Line
The nadir is a point directly beneath the satellite. The nadir line is a mathematically derived line based on the nadir. Vertical imagery is assumed to be captured by Landsat satellites. For vertically viewed imagery, the nadir point for a scan line is assumed to be the center of the line. The edges of a Landsat image can be determined by a search based on a gray level threshold between the image and background fill values. Sample points are determined along the edges of the image. Each edge line is then obtained by a least squares regression. Thus the nadir line is found by averaging the left and right edge lines.

(An accompanying diagram shows the image plane with its image edges, a scan line, an image point, the nadir, and the nadir line.)

NOTE: The following formulas do not follow the coordinate system nomenclature established elsewhere in this chapter. The pixel coordinate values are presented as (x,y) instead of (c,r).

For simplicity, each line (the four edges and the nadir line) can be approximated by a straight line without losing generality:

left edge: x = a0 + a1 y
right edge: x = b0 + b1 y
nadir line: x = c0 + c1 y

where
x, y = image coordinates
a0, b0, c0 = intercepts of the straight lines
a1, b1, c1 = slopes of the straight lines

and

c0 = 0.5 * (a0 + b0)
c1 = 0.5 * (a1 + b1)

Determining a Point's Distance from Nadir
The distance (d) of a point from the nadir is calculated based on its position along the scan line, as follows:

d = [sqrt(1 + g1^2) / (1 - c1 g1)] (x - c0 - c1 y)        (1)

where g1 is the slope of the scan line (y = g0 + g1 x), which can be obtained based on the top and bottom edges with a method similar to the one described for the left and right edges.

Displacement
Displacement is the degree of geometric distortion for a point that is not on the nadir line. A point on the nadir line represents an area in the image with zero distortion. The displacement of a point is based on its relationship to the nadir point in the scan line in which the point is located.

Figure 130: Image Displacement (exposure station, image plane, datum, earth ellipsoid, and earth's center)
where
R = radius of local earth curvature at the nadir point
H = flying height of the satellite above datum at the nadir point
∆d = displacement
d = distance of the image point from the nadir point
Z = elevation of the ground point
α = angle between nadir and image point before adjustment for terrain displacement
β = angle between nadir and image point by vertical view

Solving for Displacement (∆d)
The displacement of an image point is expressed in the following equations:

R sin α / (R + H - R cos α) = (R + Z) sin β / ((R + H) - (R + Z) cos β)        (2)

∆d / d = 1 - tan β / tan α        (3)

Considering that α and β are very tiny values, the following approximations can be used with sufficient accuracy:

cos α ≈ 1
cos β ≈ 1

Then, an explicit approximate equation can be derived from equations (1), (2), and (3):

∆d = [sqrt(1 + g1^2) / (1 - c1 g1)] (Z / H) [(R + H) / (R + Z)] (x - c0 - c1 y)        (4)

Solving for Transformations (F1, F2)
Before performing the polynomial transformation, the displacement is taken into account. Thus, the polynomial equations become:

x - [1 / (1 - c1 g1)] (Z / H) [(R + H) / (R + Z)] (x - c0 - c1 y) = F1(X, Y)

y - [g1 / (1 - c1 g1)] (Z / H) [(R + H) / (R + Z)] (x - c0 - c1 y) = F2(X, Y)

where

F1 = A0 + A1 X + A2 Y + A3 X^2 + A4 XY + A5 Y^2 + ...
F2 = B0 + B1 X + B2 Y + B3 X^2 + B4 XY + B5 Y^2 + ...

and where

Ai, Bi = polynomial transformation coefficients
X, Y, Z = 3-D ground coordinates
x, y = image coordinates
F1, F2 = polynomial expressions

Solving for Image Coordinates
Once all the polynomial coefficients are derived using least squares regression, the following pair of equations can easily be solved for the image coordinates (x,y) during orthocorrection:

(1.0 - p) x + c1 p y = F1(X, Y) - c0 p

-g1 p x + (1.0 + c1 g1 p) y = F2(X, Y) - c0 g1 p

where

p = [1 / (1 - c1 g1)] (Z / H) [(R + H) / (R + Z)]
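A minimal sketch of the per-point solve above follows: given the polynomial values F1(X,Y) and F2(X,Y), the edge-line parameters, and the factor p computed from Z, R, and H, the two linear equations are solved for the image coordinates. The coefficients follow the reconstruction above; the function and variable names are invented for illustration.

```python
import numpy as np

def image_coordinates(F1, F2, c0, c1, g1, Z, R, H):
    """Solve the pair of linear equations for (x, y) during orthocorrection."""
    p = (1.0 / (1.0 - c1 * g1)) * (Z / H) * ((R + H) / (R + Z))
    A = np.array([[1.0 - p,  c1 * p],
                  [-g1 * p,  1.0 + c1 * g1 * p]])
    b = np.array([F1 - c0 * p,
                  F2 - c0 * g1 * p])
    x, y = np.linalg.solve(A, b)
    return x, y
```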

Map Feature Collection
Feature collection is the process of identifying, delineating, and labeling various types of natural and man-made phenomena from remotely-sensed images. The features are represented by attribute points, lines, and polygons. General categories of features include elevation models, infrastructure, hydrology, and land cover. There can be many different elements within a general category. For instance, infrastructure can be broken down into roads, utilities, and buildings. To achieve high levels of positional accuracy, photogrammetric processing is applied to the imagery prior to collecting features (see Figures 131 and 132).

Figure 131: Feature Collection Work Flow (Method 1) — Stereopair; Collect Features Stereoscopically; Collected Map Features

Figure 132: Feature Collection Work Flow (Method 2) — Generated Orthoimage; Collect Features Monoscopically; Collected Map Features

Stereoscopic Collection
Method 1 (Figure 131), which uses a stereopair as the image backdrop, is the most common approach. Viewing the stereopair in three dimensions provides greater image content and the ability to obtain three-dimensional feature ground coordinates (X,Y,Z).

Monoscopic Collection
Method 2 (Figure 132), which uses an orthoimage as the image backdrop, works well for non-urban areas and/or smaller image scales. The features are collected from orthoimages while viewing them in mono. Therefore, only X and Y ground coordinates can be obtained. Monoscopic collection from orthoimages has no special hardware requirements.

Product Output

Orthoimages
Orthoimages are the end product of orthorectification. Orthoimages have very good positional accuracy, making them an excellent primary data source for all types of mapping. Once created, these digital images can be enhanced, merged with other data sources, mosaicked with adjacent orthoimages, and input into GIS/Remote sensing systems. The resulting digital file makes an ideal image backdrop for many applications, including feature collection and visualization.

Orthomaps
Orthomaps use orthoimages, or orthoimage mosaics, to produce an imagemap product. This product is similar to a standard map in that it usually includes additional information, such as map coordinate grids, scale bars, north arrows, etc. The images are often clipped to represent the same ground area and map projection as standard map series. In addition, other geographic data can be superimposed on the image. For example, items from a topographic database (e.g., contour lines, roads, and feature descriptions) can improve the interpretability of the orthomap.

Topographic Database
Features obtained from elevation extraction and feature collection can serve as primary inputs into a topographic database. This database can then be utilized by GIS and map publishing systems.

Topographic Maps
Topographic maps are the traditional end product of the photogrammetric process. In the digital era, topographic maps are often produced by map publishing systems which utilize a topographic database.



CHAPTER 8
Rectification

Introduction
Raw, remotely sensed image data gathered by a satellite or aircraft are representations of the irregular surface of the earth. Even images of seemingly "flat" areas are distorted by both the curvature of the earth and the sensor being used. This chapter covers the processes of geometrically correcting an image so that it can be represented on a planar surface, conform to other images, and have the integrity of a map.

A map projection system is any system designed to represent the surface of a sphere or spheroid (such as the earth) on a plane. However, since flattening a sphere to a plane causes distortions to the surface, each map projection system compromises accuracy between certain properties, such as conservation of distance, angle, or area. For example, in equal area map projections, a circle of a specified diameter drawn at any location on the map will represent the same total area. This is useful for comparing land use area, density, and many other applications. However, to maintain equal area, the shapes, angles, and scale in parts of the map may be distorted (Jensen 1996).

There are a number of different map projection methods. Each map projection system is associated with a map coordinate system. There are a number of map coordinate systems for determining location on an image. These coordinate systems conform to a grid, and are expressed as X,Y (column, row) pairs of numbers.

Rectification is the process of transforming the data from one grid system into another grid system using an nth order polynomial. Since the pixels of the new grid may not align with the pixels of the original grid, the pixels must be resampled. Resampling is the process of extrapolating data values for the pixels on the new grid from the values of the source pixels.

Registration
In many cases, images of one area that are collected from different sources must be used together. To be able to compare separate images pixel by pixel, the pixel grids of each image must conform to the other images in the data base. The tools for rectifying image data are used to transform disparate images to the same coordinate system. Registration is the process of making an image conform to another image. A map coordinate system is not necessarily involved. For example, if image A is not rectified and it is being used with image B, then image B must be registered to image A, so that they conform to each other. In this example, image A is not rectified to a particular map projection, so there is no need to rectify image B to a map projection.

Georeferencing
Georeferencing refers to the process of assigning map coordinates to image data. The image data may already be projected onto the desired plane, but not yet referenced to the proper coordinate system. Rectification, by definition, involves georeferencing, since all map projection systems are associated with map coordinates. Image-to-image registration involves georeferencing only if the reference image is already georeferenced. Georeferencing, by itself, involves changing only the map coordinate information in the image file. The grid of the image does not change.

Geocoded data are images that have been rectified to a particular map projection and pixel size, and usually have had radiometric corrections applied. It is possible to purchase image data that is already geocoded. Geocoded data should be rectified only if they must conform to a different projection system or be registered to other rectified data.

Latitude/Longitude
Latitude/Longitude is a spherical coordinate system that is not associated with a map projection. Lat/Lon expresses locations in the terms of a spheroid, not a plane. Therefore, an image is not usually "rectified" to Lat/Lon, although it is possible to convert images to Lat/Lon, and some tips for doing so are included in this chapter.

You can view map projection information for a particular file using the ERDAS IMAGINE Image Information utility. Image Information allows you to modify map information that is incorrect. However, you cannot rectify data using Image Information. You must use the Rectification tools described in this chapter.

The properties of map projections and of particular map projection systems are discussed in "CHAPTER 11: Cartography" and "APPENDIX C: Map Projections."

Orthorectification
Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a digital elevation model (DEM) of the study area. In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended. See "CHAPTER 7: Photogrammetric Concepts" for more information on orthocorrection.

When to Rectify
Rectification is necessary in cases where the pixel grid of the image must be changed to fit a map projection system or a reference image. There are several reasons for rectifying image data:
• scene-to-scene comparisons of individual pixels in applications, such as change detection or thermal inertia mapping (day and night comparison)
• developing GIS data bases for GIS modeling
• identifying training samples according to map coordinates prior to classification
• creating accurate scaled photomaps
• overlaying an image with vector data, such as ARC/INFO
• comparing images that are originally at different scales
• extracting accurate distance and area measurements
• mosaicking images
• performing any other analyses requiring precise geographic locations

Before rectifying the data, one must determine the appropriate coordinate system for the data base. Before selecting a map projection, the primary use for the data base must be considered. If the user is doing a government project, the projection may be pre-determined. A commonly used projection in the United States government is State Plane. Use an equal area projection for thematic or distribution maps and conformal or equal area projections for presentation maps.

To select the optimum map projection and coordinate system, consider the following:
• How large or small an area will be mapped? Different projections are intended for different size areas.
• Where on the globe is the study area? Polar regions and equatorial regions require different projections for maximum accuracy.
• What is the extent of the study area? Circular, east-west, north-south, and oblique areas may all require different projection systems (ESRI 1992).

When to Georeference Only
Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning and digitizing produce images that are planar, but do not contain any map coordinate information. These images need only to be georeferenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information. This involves redefining:
• the map coordinate of the upper left corner of the image, and
• the cell size (the area represented by each pixel).

This information is usually the same for each layer of an image (.img) file, although it could be different. For example, the cell size of band 6 of Landsat TM data is different than the cell size of the other bands.

Use the Image Information utility to modify image file header information that is incorrect.

Disadvantages of Rectification
During rectification, the data file values of rectified pixels must be resampled to fit into a new grid of pixel rows and columns. Although some of the algorithms for calculating these values are highly reliable, some spectral integrity of the data can be lost during rectification. The available resampling methods are discussed in detail later in this chapter.

If map coordinates or map units are not needed in the application, then it may be wiser not to rectify the image. An unrectified image is more spectrally correct than a rectified image.

Classification
Some analysts recommend classification before rectification, since the classification will then be based on the original data values. Another benefit is that a thematic file has only one band to rectify instead of the multiple bands of a continuous file. On the other hand, it may be beneficial to rectify the data first, especially when using Global Positioning System (GPS) data for the ground control points. Since these data are very accurate, the classification may be more accurate if the new coordinates help to locate better training samples.

Thematic Files
Nearest neighbor is the only appropriate resampling method for thematic files, which may be a drawback in some applications.

many references to rectification also apply to image-to-image registration. Field Guide 315 . Disk rectification involves: • • rearranging the pixels of the image onto a new grid. rectification is the conversion of data file coordinates to some other grid and coordinate system. Display rectification is temporary. such as the upper left corner map coordinates and the area represented by each pixel. Compute and test a transformation matrix. which conforms to a plane in the new map projection and coordinate system. regardless of the application: 1. Create an output image file with the new coordinate information in the header. Rectifying or registering image data on disk involves the following general steps. Throughout this documentation. and inserting new information to the header of the file. because a new file is created. called a reference system.When to Rectify Rectification Steps NOTE: Registration and rectification involve similar sets of procedures. Usually. 3. 2. Locate ground control points. The pixels must be resampled to conform to the new grid. but disk rectification is permanent. Images can be rectified on the display (in a Viewer) or on the disk.

Use a digitizing tablet to register an image to a hardcopy map. If a GCP set exists for the top file that is displayed in the Viewer. towers.Ground Control Points Ground control points (GCPs) are specific pixels in an image for which the output map coordinates (or other output coordinates) are known. then those GCPs can be displayed when the GCP Tool is opened. A default point ID string is provided (such as “GCP #1”). vegetation. The point ID is a name given to GCPs in separate files that represent the same geographic location. or buildings. For example." 316 ERDAS . etc. the edges of lakes or other water bodies. From the ground control points. but the user can enter his or her own unique ID strings to set up corresponding GCPs as needed. GCPs in ERDAS IMAGINE Any ERDAS IMAGINE image can have one GCP set associated with it. In the CellArray of GCP data that displays in the GCP Tool. Select many GCPs throughout the scene. Entering GCPs Accurate ground control points are essential for an accurate rectification. • Information on the use and setup of a digitizing tablet is discussed in "CHAPTER 2: Vector Layers. the rectified coordinates for all other points in the image are extrapolated. With both the source and destination Viewers open. Even though only one set of GCPs is associated with an image file. utility corridors. larger features such as urban areas or geologic features may be used. one column shows the point ID of each GCP. enter source coordinates and reference coordinates for image-to-image registration.img) along with the raster layers.) should not be used.Y pairs of coordinates: • • source coordinates — usually data file coordinates in the image being rectified reference coordinates — the coordinates of the map or reference image to which the source image is being registered The term “map coordinates” is sometimes used loosely to apply to reference coordinates and rectified coordinates. GCPs for large-scale imagery might include the intersection of two roads. Landmarks that can vary (e. The more dispersed the GCPs are. These coordinates are not limited to map coordinates. The GCP set is stored in the image file (.g. one GCP set can include GCPs for a number of rectifications by changing the point IDs for different groups of corresponding GCPs. and entered at the keyboard. Such GCPs are called corresponding GCPs. airport runways.. map coordinates are not necessary. GCPs consist of two X. The source and reference coordinates of the ground control points can be entered in the following ways: • • They may be known a priori. For small-scale imagery. in image-to-image registration. the more reliable the rectification will be. Use the mouse to select a pixel from an image in the Viewer.

Landsat TM to SPOT) and avoid stretching resolution spans greater than a cubic convolution radius (a 4 × 4 area).e.e.000) are more suitable for imagery of lower resolution (i.000 scale USGS quadrangles make good base maps for rectifying Landsat TM and SPOT imagery. the user should try to match coarser resolution imagery to finer resolution imagery (i.. accurate base maps must be collected.img and . 1:24. Landsat and SPOT). Mouse Option When entering GCPs with the mouse.000. Field Guide 317 .img file..e. the user should not try to match Landsat MSS to SPOT or Landsat TM to an aerial photograph. if possible. For example. The user should try to match the resolution of the imagery with the scale and projection of the source map.e..gcc. Refer to "APPENDIX B: File Formats and Extensions" for more information on the format of .gcc files. Avoid using maps over 1:250.. How GCPs are Stored GCPs entered with the mouse are stored in the . Coarser maps (i.e. In other words. and those entered at the keyboard or digitized using a digitizing tablet are stored in a separate file with the extension .Ground Control Points Digitizing Tablet Option If GCPs will be digitized from a hardcopy map and a digitizing tablet. 1:250. AVHRR) and finer base maps (i. 1:24..000) are more suitable for imagery of finer resolution (i.

The goal in calculating the coefficients of the transformation matrix is to derive the polynomial equations for which there is the least possible amount of error when they are used to transform the reference coordinates of the GCPs into the source coordinates. the number of GCPs used. It is not always possible to derive coefficients that produce no error. The order is simply the highest exponent used in the polynomial. The matrix consists of coefficients which are used in polynomial equations to convert the coordinates. and their locations relative to one another. GCPs Every GCP influences the coefficients. Depending upon the distortion in the imagery. The size of the matrix depends upon the order of transformation.through nth-order transformations. Usually.Orders of Transformation Polynomial equations are used to convert source file coordinates to rectified map coordinates. ERDAS IMAGINE allows 1st. 318 ERDAS . which is discussed later in this chapter. Transformation Matrix A transformation matrix is computed from the GCPs. GCPs are plotted on a graph and compared to the curve that is expressed by a polynomial. For example. You can specify the order of the transformation you want to use in the Transform Editor. A discussion of polynomials and order is included in "APPENDIX A: Math Topics". complex polynomial equations may be required to express the needed transformation. The distance between the GCP reference coordinate and the curve is called RMS error. The least squares regression method is used to calculate the transformation matrix from the GCPs. The order of transformation is the order of the polynomial used in the transformation. This common method is discussed in statistics textbooks. even if there is not a perfect fit of each GCP to the polynomial that the coefficients represent. 1st-order or 2nd-order transformations are used. in Figure 133. The degree of complexity of the polynomial is expressed as the order of the polynomial. Reference X coordinate GCP Polynomial curve Source X coordinate Figure 133: Polynomial Curve vs.
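The least squares step can be sketched in a few lines of Python outside ERDAS IMAGINE. This is a hypothetical illustration rather than the IMAGINE implementation: it builds the polynomial terms for a chosen order and solves for one coefficient set per output coordinate, mapping the reference coordinates of the GCPs back to source coordinates as described above.

    import numpy as np

    def poly_terms(x, y, order):
        # All combinations x**i * y**j with i + j <= order, one column per term.
        return np.column_stack([x**i * y**j
                                for i in range(order + 1)
                                for j in range(order + 1 - i)])

    def fit_transformation(src_x, src_y, ref_x, ref_y, order=1):
        # Least-squares coefficients that transform reference coordinates of the
        # GCPs into source coordinates (one set for X, one set for Y).
        a = poly_terms(np.asarray(ref_x, float), np.asarray(ref_y, float), order)
        coef_x, *_ = np.linalg.lstsq(a, np.asarray(src_x, float), rcond=None)
        coef_y, *_ = np.linalg.lstsq(a, np.asarray(src_y, float), rcond=None)
        return coef_x, coef_y

    # Three GCPs are enough for a 1st-order fit (hypothetical coordinates):
    cx, cy = fit_transformation([10, 200, 150], [20, 40, 300],
                                [1000, 1950, 1700], [5000, 5100, 6400], order=1)

With exactly the minimum number of GCPs the fit is exact; with more GCPs, the residuals of the fit are the RMS errors discussed later in this chapter.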

and look for systematic errors. and vice versa. rotate scanned quad sheets according to the angle of declination stated in the legend. SPOT and Landsat Level 1B data are already transformed to a plane. except that the user can specify different scaling factors for X and Y. ERDAS IMAGINE provides the following options for 1st-order transformations: • • • • scale offset rotate reflect Scale Scale is the same as the zoom option in the Viewer. A 1st-order transformation can also be used for data that are already projected onto a plane. For example. Field Guide 319 . such as the GCP source and distribution. convert a planar map projection to another planar map projection. Examine other factors first. When doing this type of rectification.Orders of Transformation Linear Transformations A 1st-order transformation is a linear transformation. A linear transformation can change: • • • • location in X and/or Y scale in X and/or Y skew in X and/or Y rotation First-order transformations can be used to project raw imagery to a planar map projection. For rotation. and rotate descending data so that north is up. but may not be rectified to the desired map projection. Offset Offset moves the image by a user-specified number of pixels in the X and Y directions. and when rectifying relatively small image areas. If you are scaling an image in the Viewer. the user can specify any positive or negative number of degrees for clockwise and counterclockwise rotation. The user can reorient skewed Landsat TM data. the zoom option will undo any changes to the scale that you do. Linear transformations may be required before collecting GCPs on the displayed image. Rotation occurs around the center pixel of the image. The user can perform simple linear transformations to an image displayed in a Viewer or to the transformation matrix itself. it is not advisable to increase the order of transformation if at first a high RMS error occurs.

You can perform linear transformations in the Viewer and then load that transformation to the Transform Editor. Figure 134 illustrates how the data are changed in linear transformations. original image change of scale in X change of scale in Y change of skew in X change of skew in Y rotation Figure 134: Linear Transformations 320 ERDAS . or you can perform the linear transformations directly on the transformation matrix.Reflection Reflection options enable the user to perform the following operations: • • • left to right reflection top to bottom reflection left to right and top to bottom reflection (equal to a 180° rotation) Linear adjustments are available from the Viewer or from the Transform Editor.

The transformation matrix for a 1st-order transformation consists of six coefficients, three for each coordinate (X and Y):

    a1   b1
    a2   b2
    a3   b3

These coefficients are used in a 1st-order polynomial as follows (Equation 3):

xo = b1 + b2 xi + b3 yi
yo = a1 + a2 xi + a3 yi

where:
xi and yi are source coordinates (input)
xo and yo are rectified coordinates (output)
the coefficients of the transformation matrix are as above

The position of the coefficients in the matrix and the assignment of the coefficients in the polynomial is an ERDAS IMAGINE convention. Other representations of a 1st-order transformation matrix may take a different form.

Nonlinear Transformations
Transformations of the 2nd-order or higher are nonlinear transformations. These transformations can correct nonlinear distortions. The process of correcting nonlinear distortions is also known as rubber sheeting. Figure 135 (Nonlinear Transformations) illustrates the effects of some nonlinear transformations on an original image, showing some of the possible outputs.
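Returning to the six-coefficient 1st-order case, the convention in Equation 3 can be written out directly. This is a hypothetical helper for illustration, not part of ERDAS IMAGINE:

    def apply_first_order(a1, a2, a3, b1, b2, b3, xi, yi):
        # 1st-order polynomial in the ERDAS IMAGINE coefficient convention.
        xo = b1 + b2 * xi + b3 * yi
        yo = a1 + a2 * xi + a3 * yi
        return xo, yo

    # A pure offset of +10 in X and +5 in Y leaves scale and rotation unchanged:
    print(apply_first_order(5, 0, 1, 10, 1, 0, 100, 200))   # (110, 205)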

Second-order transformations can be used to convert Lat/Lon data to a planar projection, for data covering a large area (to account for the earth's curvature), and with distorted data (for example, due to camera lens distortion). Third-order transformations are used with distorted aerial photographs, on scans of warped maps, and with radar imagery. Fourth-order transformations can be used on very distorted aerial photographs.

The transformation matrix for a transformation of order t contains this number of coefficients (Equation 4):

Σ (i = 1 to t + 1) i

It is multiplied by two for the two sets of coefficients: one set for X, one for Y. An easier way to arrive at the same number is (Equation 5):

(t + 1) × (t + 2)

Clearly, the size of the transformation matrix increases with the order of the transformation.

Higher Order Polynomials
The polynomial equations for a t-order transformation take this form (Equation 6):

xo = A + Bx + Cy + Dx² + Exy + Fy² + ... + Qxⁱyʲ + ... + Ωyᵗ

where:
A, B, C, D, E, F, ..., Q, ..., Ω are coefficients
t is the order of the polynomial
i and j are exponents

All combinations of xⁱ times yʲ are used in the polynomial expression, such that (Equation 7):

i + j ≤ t

The equation for yo takes the same format with different coefficients.

An example of 3rd-order transformation equations for X and Y, using numbers, is:

xo = 5 + 4x - 6y + 10x² - 5xy + 1y² + 3x³ + 7x²y - 11xy² + 4y³
yo = 13 + 12x + 4y + 1x² - 21xy + 11y² - 1x³ + 2x²y + 5xy² + 12y³

These equations use a total of 20 coefficients, or (3 + 1) × (3 + 2) (Equation 8).
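A short, hypothetical Python sketch (not IMAGINE code) that enumerates the xⁱyʲ combinations for a given order and confirms the coefficient counts from Equations 4, 5, and 8:

    def polynomial_exponents(t):
        # All exponent pairs (i, j) with i + j <= t; t = 1 gives (0,0), (0,1), (1,0).
        return [(i, j) for i in range(t + 1) for j in range(t + 1 - i)]

    for t in (1, 2, 3):
        per_coordinate = len(polynomial_exponents(t))
        assert per_coordinate == (t + 1) * (t + 2) // 2
        print(t, per_coordinate, 2 * per_coordinate)   # order, terms per equation, total
    # prints: 1 3 6 / 2 6 12 / 3 10 20  (a 3rd-order pair uses 20 coefficients)

The count per coordinate is also the minimum number of GCPs needed for that order, as discussed below.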

the number of GCPs used is less than the numbers required to actually perform the different orders of transformation. This equation is graphed in Figure 136. NOTE: Because only the X coordinate is used in these examples.Effects of Order The computation and output of a higher-order polynomial equation are more complex than that of a lower-order polynomial equation. To understand the effects of different orders of transformation in image rectification. which is satisfied by this equation (the coefficients are in parentheses): x r = ( 25 ) + ( – 8 ) x i where: xr= the reference X coordinate xi= the source X coordinate EQUATION 9 This equation takes on the same format as the equation of a line (y = mx + b). This enables the user to draw two-dimensional graphs that illustrate the way that higher orders of transformation affect the output image. which are used in the polynomials for rectification. The example below uses only one coordinate (X). Therefore. a 1st-order transformation is also known as a linear transformation. 324 ERDAS . In mathematical terms. Suppose GCPs are entered with these X coordinates: Source X Coordinate (input) Reference X Coordinate (output) 1 2 3 17 9 1 These GCPs allow a 1st-order transformation of the X coordinates. Coefficients like those presented in this example would generally be calculated by the least squares regression method. a 1st-order polynomial is linear. instead of two (X.Y). Therefore. it is helpful to see the output of various orders of polynomials. higher-order polynomials are used to perform more complicated image rectifications.

which illustrates that they cannot be expressed by a 1st-order polynomial. Field Guide 325 . like the one above. what if the second GCP were changed as follows? Source X Coordinate (input) Reference X Coordinate (output) 1 2 3 These points are plotted against each other in Figure 137. a 2nd-order polynomial equation will express these points: x r = ( 31 ) + ( – 16 )x i + ( 2 )x i 2 EQUATION 10 Polynomials of the 2nd-order or higher are nonlinear. 17 7 1 reference X coordinate 16 12 8 4 0 0 1 2 3 4 source X coordinate Figure 137: Transformation Example—2nd GCP Changed A line cannot connect these points. In this case. The graph of this curve is drawn in Figure 138.Orders of Transformation reference X coordinate 16 12 xr = (25) + (-8)xi 8 4 0 0 1 2 3 4 source X coordinate Figure 136: Transformation Example—1st-Order However.

5) 8 4 0 0 1 2 3 4 source X coordinate Figure 139: Transformation Example—4th GCP Added As illustrated in Figure 139. 326 ERDAS .reference X coordinate 16 12 xr = (31) + (-16)xi + (2)xi2 8 4 0 0 1 2 3 4 source X coordinate Figure 138: Transformation Example—2nd-Order What if one more GCP were added to the list? Source X Coordinate (input) Reference X Coordinate (output) 1 2 3 4 17 7 1 5 reference X coordinate 16 12 xr = (31) + (-16)xi + (2)xi2 (4. the order of the transformation could be increased to 3rd-order. So that all of the GCPs would fit. this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. The equation and graph in Figure 140 would then result.

Source X Coordinate (input) Reference X Coordinate (output) xo(1) = 17 xo(2) = 7 xo(3) = 1 xo(4) = 5 EQUATION 11 EQUATION 12 1 2 3 4 xo ( 1 ) > xo ( 2 ) > xo ( 4 ) > xo ( 3 ) 17 >7 >5 >1 input image X coordinates 1 2 1 2 3 3 4 4 1 2 3 3 4 5 6 4 7 2 output image X coordinates 8 9 10 11 12 13 14 15 16 17 18 1 Figure 141: Transformation Example—Effect of a 3rd-Order Transformation Field Guide 327 . In this example. this equation may be unnecessarily complex. a 3rd-order transformation probably would be too high. because the output pixels would be arranged in a different order than the input pixels. However.Orders of Transformation reference X coordinate 16 12 xr = (25) + (-5)xi + (-4)xi2 + (1)xi3 8 4 0 0 1 2 3 4 source X coordinate Figure 140: Transformation Example—3rd-Order Figure 140 illustrates a 3rd-order transformation. in the X direction. Performing a coordinate transformation with this equation may cause unwanted distortions in the output image for the sake of a perfect fit for all the GCPs.

In such a case, a higher order of transformation would probably not produce the desired results.

Minimum Number of GCPs
Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation.

The minimum number of points required to perform a transformation of order t equals (Equation 13):

((t + 1)(t + 2)) / 2

Use more than the minimum number of GCPs whenever possible. Although it is possible to get a perfect fit, it is rare, no matter how many GCPs are used.

For 1st- through 10th-order transformations, the minimum number of GCPs required to perform a transformation is listed in the table below:

Order of Transformation    Minimum GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66

For the best rectification results, you should always use more than the minimum number of GCPs, and they should be well distributed.

Both of these methods require an existing transformation matrix. The user can perform histogram matching to ensure that there is no offset between the images. The search window can be any odd size between 5 × 5 and 21 × 21.000 indicates an exact match. those that do not meet an acceptable level of error can be edited.000. If it is within an acceptable range of accuracy. Since the matching process is based on the reflectance values. This point is determined based on the current transformation matrix. For image to image rectification. then there may be enough GCPs to perform an accurate rectification (depending upon how evenly dispersed the GCPs are).8000 or 0. the user has the option to discard points. The threshold is an absolute value threshold ranging from 0. Values above 0.Orders of Transformation GCP Prediction and Matching Automated GCP prediction enables the user to pick a GCP in either coordinate system and automatically locate that point in the other coordinate system based on the current transformation parameters.000. If a match cannot be made because the absolute value of the correlation is greater than the threshold. A correlation threshold is used to accept or discard points. then use GCP prediction to locate the corresponding GCP on the other image (map).9000 are recommended. select layers that have similar spectral wavelengths. GCP Matching In GCP matching the user can select which layers from the source and destination images to use. The user can also select the radius from the predicted GCP from which the matching operation will search for a spectrally similar pixel. then more GCPs should be gathered before rectifying the image.000 indicates a bad match and a value of 1.000 to +1. GCP matching enables the user to fine tune a rectification for highly accurate results. Examine the automatically generated point and see how accurate it is. The correlation ranges from -1. Histogram matching is discussed in "CHAPTER 5: Enhancement". select a point in either the source or the destination image. If the automatically generated point is not accurate. After selecting several GCPs. GCP Prediction GCP prediction is a useful technique to help determine if enough ground control points have been gathered. A transformation matrix is a set of coefficients used to convert the coordinates to the new projection. Once the GCPs are automatically selected. A value of 0. This saves time in selecting another set of GCPs by hand. Field Guide 329 .000 to 1. Automated GCP matching is a step beyond GCP prediction. GCP prediction can also be used when applying an existing transformation matrix to another image in a data set. Transformation matrices are covered in more detail below. a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix. such as two visible bands or two infrared bands.
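The matching step can be imitated with a normalized correlation search. The sketch below is a hypothetical illustration of the general idea, not the IMAGINE matching code: it assumes two single-band numpy arrays whose GCP neighborhoods lie away from the image borders, a predicted pixel location, and the window, search radius, and threshold parameters described above.

    import numpy as np

    def correlation(a, b):
        # Normalized correlation of two equally sized windows, in the range -1 to +1.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    def match_gcp(source, reference, row, col, window=5, radius=3, threshold=0.8):
        # Search +/- radius pixels around the predicted point for the best match.
        half = window // 2
        template = reference[row - half:row + half + 1, col - half:col + half + 1]
        best_pos, best_score = None, -1.0
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                r, c = row + dr, col + dc
                candidate = source[r - half:r + half + 1, c - half:c + half + 1]
                score = correlation(candidate, template)
                if score > best_score:
                    best_pos, best_score = (r, c), score
        return (best_pos, best_score) if best_score >= threshold else (None, best_score)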

RMS Error
RMS error (root mean square) is the distance between the input (source) location of a GCP and the retransformed location for the same GCP. In other words, it is the difference between the desired output coordinate for a GCP and the actual output coordinate for the same point, when the point is transformed with the transformation matrix.

When a transformation matrix is calculated, the inverse of the transformation matrix is used to retransform the reference coordinates of the GCPs back to the source coordinate system. Unless the order of transformation allows for a perfect fit, there is some discrepancy between the source coordinates and the retransformed reference coordinates.

RMS error is calculated with a distance equation (Equation 14):

RMS error = sqrt( (xr - xi)² + (yr - yi)² )

where:
xi and yi are the input source coordinates
xr and yr are the retransformed coordinates

RMS error is expressed as a distance in the source coordinate system. If data file coordinates are the source coordinates, then the RMS error is a distance in pixel widths. For example, an RMS error of 2 means that the reference pixel is 2 pixels away from the retransformed pixel.

In most cases, a perfect fit for all GCPs would require an unnecessarily high order of transformation. Instead of increasing the order, the user has the option to tolerate a certain amount of error.

Residuals and RMS Error Per GCP
The ERDAS IMAGINE GCP Tool contains columns for the X and Y residuals. Residuals are the distances between the source and retransformed coordinates in one direction. They are shown for each GCP. The X residual is the distance between the source X coordinate and the retransformed X coordinate, and the Y residual is the distance between the source Y coordinate and the retransformed Y coordinate.

If the GCPs are consistently off in either the X or the Y direction, more points should be added in that direction. This is a common problem in off-nadir data.

RMS Error Per GCP
The RMS error of each point is reported to help the user evaluate the GCPs. This is calculated with a distance formula (Equation 15):

Ri = sqrt( XRi² + YRi² )

where:
Ri = the RMS error for GCPi
XRi = the X residual for GCPi
YRi = the Y residual for GCPi

Figure 142 (Residuals and RMS Error Per Point) illustrates the relationship between the residuals and the RMS error per point: the X and Y residuals form the two legs of a right triangle between the source GCP and the retransformed GCP, and the RMS error is the distance between the two points.

Total RMS Error
From the residuals, the following calculations are made to determine the total RMS error, the X RMS error, and the Y RMS error:

Rx = sqrt( (1/n) Σ XRi² )
Ry = sqrt( (1/n) Σ YRi² )

T = sqrt( Rx² + Ry² )    or equivalently    T = sqrt( (1/n) Σ (XRi² + YRi²) )

where the sums are taken over i = 1 to n, and:
Rx = X RMS error
Ry = Y RMS error
T = total RMS error
n = the number of GCPs
i = GCP number
XRi = the X residual for GCPi
YRi = the Y residual for GCPi

Error Contribution by Point
A normalized value representing each point's RMS error in relation to the total RMS error is also reported. This value is listed in the Contribution column of the GCP Tool (Equation 16):

Ei = Ri / T

where:
Ei = error contribution of GCPi
Ri = the RMS error for GCPi
T = total RMS error
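These formulas translate directly into a short script. The sketch below is illustrative only and assumes the source GCP coordinates and their retransformed counterparts are already available as arrays:

    import numpy as np

    def rms_report(src_x, src_y, retrans_x, retrans_y):
        xres = np.asarray(retrans_x, float) - np.asarray(src_x, float)   # X residuals
        yres = np.asarray(retrans_y, float) - np.asarray(src_y, float)   # Y residuals
        r_i = np.sqrt(xres**2 + yres**2)        # RMS error per GCP (Equation 15)
        rx = np.sqrt(np.mean(xres**2))          # X RMS error
        ry = np.sqrt(np.mean(yres**2))          # Y RMS error
        total = np.sqrt(rx**2 + ry**2)          # total RMS error T
        contribution = r_i / total              # error contribution Ei (Equation 16)
        return r_i, rx, ry, total, contribution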

For example. For example. Acceptable accuracy will depend on the image area and the particular project. and the accuracy of the GCPs and ancillary data being used. close enough to use). GCPs acquired from GPS should have an accuracy of about 10 m. an RMS error of 1. If the user is rectifying AVHRR data. Therefore. the type of data being used. it will be advantageous to tolerate a certain amount of error rather than take a higher order of transformation.000-scale maps should have an accuracy of about 20 m. but GCPs from 1:24. if the RMS error tolerance is 2.RMS Error Tolerance of RMS Error In most cases. The amount of RMS error that is tolerated can be thought of as a window around each source coordinate. Field Guide 333 . inside which a retransformed coordinate is considered to be correct (that is. if the user is rectifying Landsat TM data and wants the rectification to be accurate to within 30 meters. It is important to remember that RMS error is reported in pixels. then the retransformed pixel can be 2 pixels away from the source pixel and still be considered accurate.50 might be acceptable. the RMS error should not exceed 0. source pixel 2 pixel RMS error tolerance (radius) Retransformed coordinates within this range are considered correct Figure 143:RMS Error Tolerance Acceptable RMS error is determined by the end use of the data base.50.

Evaluating RMS Error
To determine the order of transformation, the user can assess the relative distortion in going from image to map or map to map. Most rectifications are either 1st-order or 2nd-order. One should start with a 1st-order transformation unless it is known that it will not work. The danger of using higher order rectifications is that the more complicated the equation for the transformation, the less regular and predictable the results will be. To fit all of the GCPs, there may be very high distortion in the image.

It is possible to repeatedly compute transformation matrices until an acceptable RMS error is reached. After each computation of a transformation matrix and RMS error, there are four options:

• Throw out the GCP with the highest RMS error, assuming that this GCP is the least accurate. Another transformation matrix can then be computed from the remaining GCPs. However, if this is the only GCP in a particular region of the image, it may cause greater error to remove it.
• Tolerate a higher amount of RMS error.
• Increase the order of transformation, creating more complex geometric alterations in the image. A transformation matrix can then be computed that can accommodate the GCPs with less error. A closer fit should be possible.
• Select only the points for which you have the most confidence.
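The first option can be automated: refit the transformation and, while the total RMS error exceeds the tolerance, drop the GCP with the highest per-point error. A hypothetical 1st-order sketch, with all names and values chosen for illustration:

    import numpy as np

    def first_order_residuals(src, ref):
        # Fit xo = b1 + b2*x + b3*y (and the same form for yo) by least squares
        # and return the per-GCP RMS errors of the fit.
        a = np.column_stack([np.ones(len(ref)), ref[:, 0], ref[:, 1]])
        cx, *_ = np.linalg.lstsq(a, src[:, 0], rcond=None)
        cy, *_ = np.linalg.lstsq(a, src[:, 1], rcond=None)
        return np.hypot(a @ cx - src[:, 0], a @ cy - src[:, 1])

    def prune_gcps(src, ref, tolerance=0.5, min_points=4):
        # src and ref are (n, 2) arrays of corresponding GCP coordinates.
        src, ref = np.asarray(src, float), np.asarray(ref, float)
        while len(src) > min_points:
            per_gcp = first_order_residuals(src, ref)
            if np.sqrt(np.mean(per_gcp**2)) <= tolerance:   # total RMS error
                break
            worst = int(np.argmax(per_gcp))                 # GCP with highest RMS error
            src = np.delete(src, worst, axis=0)
            ref = np.delete(ref, worst, axis=0)
        return src, ref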

The Restoration™ algorithm was developed by the Environmental Research Institute of Michigan (ERIM). • • Additionally. GCP 2. Bilinear interpolation — uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function. 3. It produces sharper. The input image with source GCPs. Restoration can also provide higher spatial resolution from oversampled images and is well suited for data fusion and GIS data integration applications. the pixel values of the input image are assigned to pixels in the output grid. with reference GCPs shown. a deconvolution algorithm that models known sensor-specific parameters to produce a better estimate of the original scene radiance. so that the GCPs of the two grids fit together. Since the grid of pixels in the source image rarely matches the grid for the reference image. Cubic convolution — uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output value with a cubic function. To compare the two grids. See the IMAGINE Restoration documentation for more information. IMAGINE Restoration. crisper rectified images by preserving and enhancing the high spatial frequency component of the image during the resampling process. The output grid. Using a resampling method. Field Guide 335 . the pixels are resampled so that new data file values for the output file can be calculated. GCP GCP GCP 1.Resampling Methods Resampling Methods The next step in the rectification/registration process is to create the output file. is available as an add-on module to ERDAS IMAGINE. Figure 144: Resampling The following resampling methods are supported in ERDAS IMAGINE: • Nearest neighbor — uses the value of the closest pixel to assign to the output pixel value. the input image is laid over the output grid. 4.

then the program would calculate what this size would be in decimal degrees and automatically update the output cell size. if the user wants the output file cell size to be 30 × 30 meters. which is determined by the transformation matrix and the cell size. then the origin of the image is the upper left corner. For example.e. Enter the nominal cell size in the Nominal Cell Size dialog. he or she can enter meters and calculate the equivalent size in decimal degrees. Otherwise. The output corners (upper left and lower right) of the output file can be specified.0 and not the defaults. The output cell size for a geographic projection (i.In all methods.. If the output units are pixels. it may be beneficial to specify the output corners relative to the reference file system. If an image to image rectification is being performed. In this case. the number of rows and columns of pixels in the output is calculated from the dimensions of the output map. However. 336 ERDAS . the upper left X and upper left Y coordinate will be 0. Since the transformation between angular (decimal degrees) and nominal (meters) measurements varies across the image. so that the images will be coregistered. The default values are calculated so that the entire source file will be resampled to the destination file. “Rectifying” to Lat/Lon The user can specify the nominal cell size if the output coordinate system is Lat/Lon. Lat/Lon) is always in angular units of decimal degrees. the transformation is based on the center of the output file. if the user wants the cell to be a specific size in meters. the origin is the lower left corner.

g. there is usually a “stair stepped” effect around diagonal lines and curves. The retransformed coordinates (xr. Using on linear thematic data (e.yo) of the pixel are retransformed back to the source coordinate system using the inverse of the transformation matrix. which can have data file values based on a qualitative (nominal or ordinal) system or a quantitative (interval or ratio) system.yr) nearest to (xr. the rectified coordinates (xo.yr) is the nearest neighbor.Resampling Methods Nearest Neighbor To determine an output pixel’s nearest neighbor. locating an edge associated with a lineament. The pixel that is closest to the retransformed coordinates (xr. Field Guide 337 . The averaging that is performed with bilinear interpolation and cubic convolution is not suited to a qualitative class value system. roads.yr) Figure 145: Nearest Neighbor Nearest Neighbor Resampling Advantages Transfers original data values without averaging them. (xr. therefore the extremes and subtleties of the data values are not lost. Appropriate for thematic files. Data values may be dropped. as the other methods do. The data file value(s) for that pixel become the data file value(s) of the pixel in the output image. This is an important consideration when discriminating between vegetation types. Disadvantages When this method is used to resample from a larger to a smaller grid size. or determining different levels of turbidity or temperatures in a lake (Jensen 1996).yr) are used in bilinear interpolation and cubic convolution as well. streams) may result in breaks or gaps in a network of linear data. while other values may be duplicated. Suitable for use before classification.. The easiest of the three methods to compute and the fastest to use.
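A minimal sketch of the idea, not the IMAGINE resampler: given the retransformed source coordinates of an output pixel, take the value of the closest input pixel.

    def nearest_neighbor(image, xr, yr):
        # image is a 2D numpy array indexed [row, col]; (xr, yr) are the
        # retransformed source coordinates, xr along columns and yr along rows.
        col = int(round(xr))
        row = int(round(yr))
        col = min(max(col, 0), image.shape[1] - 1)   # clamp to the image edges
        row = min(max(row, 0), image.shape[0] - 1)
        return image[row, col]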

If the data file values are plotted in a graph relative to their distances from one another. 2. and 4.V1) / D Y1 Ym D data file coordinates (Y) Y3 Figure 147: Linear Interpolation 338 ERDAS . The data file value of m (Vm) is a function of the change in the data file value between pixels 3 and 1 (that is.yr) 2 n 3 D 4 r is the location of the retransformed coordinate Figure 146: Bilinear Interpolation To calculate Vr. In this example.yr) and the four closest pixels in the input (source) image (see Figure 146). the task is to calculate a data file value for r (Vr). 1 dy m dx r (xr. Given the data file values of these four pixels on a grid.V1). V3 . the data file value of the rectified pixel is based upon the distances between the retransformed coordinate location (xr. the neighbor pixels are numbered 1. 3. the user can perform linear interpolation. By interpolating Vm and Vn. which is a simple process to illustrate. then a visual linear interpolation is apparent. Calculating a data file value as a function of spatial distance between two pixels V3 data file values Vm V1 (V3 . first Vm and Vn are considered.Bilinear Interpolation In bilinear interpolation.

the equation for calculating the data file value for n (Vn) in the pixel grid is: V4 – V2 V n = ------------------. Similarly.× dx + V m D EQUATION 19 Field Guide 339 .V1 / D) is the slope of the line in the graph above.× dy + V 1 D where: Yi= the Y coordinate for pixel i Vi= the data file value for pixel i dy= the distance between Y1 and Ym in the source coordinate system D= the distance between Y1 and Y3 in the source coordinate system EQUATION 17 If one considers that (V3 .can be calculated in the same manner: Vn – Vm V r = -------------------.yr). then this equation translates to the equation of a line in y = mx + b form. the data file value for r.× dy + V 2 D EQUATION 18 From Vn and Vm. which is at the retransformed coordinate location (xr.Resampling Methods The equation for calculating Vm from V1 and V3 is: V3 – V1 V m = ------------------.

The following is attained by plugging the equations for Vm and Vn into the final equation for Vr:

Vr = [ ((V4 - V2)/D × dy + V2) - ((V3 - V1)/D × dy + V1) ] / D × dx + (V3 - V1)/D × dy + V1

which simplifies to:

Vr = [ V1(D - dx)(D - dy) + V2(dx)(D - dy) + V3(D - dx)(dy) + V4(dx)(dy) ] / D²

In most cases D = 1, since data file coordinates are used as the source coordinates and data file coordinates increment by 1.

Some equations for bilinear interpolation express the output data file value as (Equation 20):

Vr = Σ wi Vi

where wi is a weighting factor.

The equation above could be expressed in a similar format, in which the calculation of wi is apparent (Equation 21):

Vr = Σ (i = 1 to 4) [ (D - Δxi)(D - Δyi) / D² ] × Vi

where:
Δxi = the change in the X direction between (xr,yr) and the data file coordinate of pixel i
Δyi = the change in the Y direction between (xr,yr) and the data file coordinate of pixel i
Vi = the data file value for pixel i
D = the distance between pixels (in X or Y) in the source coordinate system

For each of the four pixels, the data file value is weighted more if the pixel is closer to (xr,yr).
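A hypothetical sketch of the same weighting with D = 1 (data file coordinates) and the four neighbors assumed to lie inside the image; the pixel numbering follows the figure above, with pixels 1 and 3 in the left column and 2 and 4 in the right column:

    import numpy as np

    def bilinear(image, xr, yr):
        # image[row, col]; (xr, yr) are data file coordinates, xr along columns.
        c0, r0 = int(np.floor(xr)), int(np.floor(yr))
        dx, dy = xr - c0, yr - r0
        v1 = image[r0, c0]          # pixel 1: upper left
        v2 = image[r0, c0 + 1]      # pixel 2: upper right
        v3 = image[r0 + 1, c0]      # pixel 3: lower left
        v4 = image[r0 + 1, c0 + 1]  # pixel 4: lower right
        return (v1 * (1 - dx) * (1 - dy) + v2 * dx * (1 - dy)
                + v3 * (1 - dx) * dy + v4 * dx * dy)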

This method is often used when changing the cell size of the data. bilinear interpolation has the effect of a low-frequency convolution. is applied to those 16 input values.and assuming that (xr.yr). Edges are smoothed. More spatially accurate than nearest neighbor. Disadvantages Since pixels are averaged. such that.j) is used.Resampling Methods Bilinear Interpolation Resampling Advantages Results in output images that are smoother. i = int (xr) j = int (yr) .j) make up a 4 × 4 grid of input pixels. and some extremes of the data file values are lost. are averaged to determine the output data file value. as illustrated in Figure 148.. without the “stair stepped” effect that is possible with nearest neighbor. the pixel (i.. The pixels around (i. Field Guide 341 . Cubic Convolution Cubic convolution is similar to bilinear interpolation. in a 4 × 4 array. and an approximation of a cubic function. such as in SPOT/TM merges within the 2 × 2 resampling matrix limit. See "CHAPTER 5: Enhancement" for more about convolution filtering..yr) is expressed in data file coordinates (pixels). except that: • • a set of 16 pixels. To identify the 16 pixels in relation to the retransformed coordinate (xr. rather than a linear function..

yr).Yr) Figure 148: Cubic Convolution Since a cubic. The cubic convolution used in ERDAS IMAGINE is a compromise between low-frequency and high-frequency. The general effect of the cubic convolution will depend upon the data. Different equations have different effects upon the output data file values. function is used to weight the 16 input pixels. Others may tend to sharpen the image.(i. Some convolutions may have more of the effect of a low-frequency filter (like bilinear interpolation). like a high-frequency filter. serving to average and smooth the values.j) (Xr. Several versions of the cubic convolution equation are used in the field.yr) have exponentially less weight than those closer to (xr. the pixels farther from (xr. 342 ERDAS . rather than a linear.

The formula used in ERDAS IMAGINE is (Equation 22):

Vr = Σ (n = 1 to 4) [ V(i-1, j+n-2) × f( d(i-1, j+n-2) + 1 )
                    + V(i,   j+n-2) × f( d(i,   j+n-2) )
                    + V(i+1, j+n-2) × f( d(i+1, j+n-2) - 1 )
                    + V(i+2, j+n-2) × f( d(i+2, j+n-2) - 2 ) ]

where:
i = int (xr)
j = int (yr)
d(i,j) = the distance between a pixel with coordinates (i,j) and (xr,yr)
V(i,j) = the data file value of pixel (i,j)
Vr = the output data file value
a = -0.5 (a constant which differs in other applications of cubic convolution)
f(x) = the following function:

f(x) = (a + 2)|x|³ - (a + 3)|x|² + 1      if |x| < 1
       a|x|³ - 5a|x|² + 8a|x| - 4a        if 1 ≤ |x| < 2
       0                                  otherwise

Source: Atkinson 1985

The most computationally intensive resampling method.5 tends to produce output layers with a mean and standard deviation closer to that of the original data (Atkinson 1985).. Disadvantages Data values may be altered. the mean and standard deviation of the output pixels match the mean and standard deviation of the input pixels more closely than any other resampling method.In most cases. a value for a of -0. The actual effects will depend upon the data being used. 344 ERDAS . This method is recommended when the user is dramatically changing the cell size of the data.e. In most cases. The effect of the cubic curve weighting can both sharpen the image and smooth out noise (Atkinson 1985). Cubic Convolution Resampling Advantages Uses 4 × 4 resampling. such as in TM/aerial photo merges (i. and is therefore the slowest. matches the 4 × 4 window more closely than the 2 × 2 window).
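The weighting function f(x) is the piece most worth seeing in code. The sketch below applies it separably over the surrounding 4 × 4 pixels; this is a common textbook formulation of cubic convolution with a = -0.5, meant only as an illustration and not as the exact IMAGINE implementation.

    import numpy as np

    def cubic_kernel(x, a=-0.5):
        x = abs(x)
        if x < 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        if x < 2:
            return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
        return 0.0

    def cubic_convolution(image, xr, yr, a=-0.5):
        # Weighted sum over the 4 x 4 neighborhood of (xr, yr); the neighborhood
        # is assumed to lie fully inside the image.
        c0, r0 = int(np.floor(xr)), int(np.floor(yr))
        value = 0.0
        for m in range(-1, 3):              # rows of the 4 x 4 window
            for n in range(-1, 3):          # columns of the 4 x 4 window
                weight = cubic_kernel(xr - (c0 + n), a) * cubic_kernel(yr - (r0 + m), a)
                value += image[r0 + m, c0 + n] * weight
        return value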

In this procedure. such as UTM or State Plane. Field Guide 345 . and scale are represented differently. Since vector data are stored by the coordinates of nodes. There are no coordinates between nodes to extrapolate. areas. Some examples of when this is required are listed below (ESRI 1992). Conversion Process To convert the map coordinate system of any georeferenced image.Map to Map Coordinate Conversions Map to Map Coordinate Conversions There are many instances when the user will need to change a map that is already registered to a planar projection to another projection. it is not usually wise to resample data that have already been resampled if the accuracy of data file values is important to the application. Therefore. When the projection used for the files in the data base does not produce the desired properties of a map. Vector Data Converting the map coordinates of vector data is much easier than converting raster data. it is usually wiser to rectify that data to a second map projection system than to “lose a generation” by converting rectified data and resampling it a second time. When it is necessary to combine data from more than one zone of a projection. GCPs are generated automatically along the intersections of a grid that the user specifies. ERDAS IMAGINE provides a shortcut to the rectification process. • • • When combining two maps with different projection characteristics. The program calculates the reference coordinates for the GCPs with the appropriate conversion formula and a transformation that can be used in the regular rectification process. A change in the projection is a geometric change—distances. So. the conversion process requires that pixels be resampled. each coordinate is simply converted using the appropriate conversion formula. If the original unrectified data are available. Resampling causes some of the spectral integrity of the data to be lost (see the disadvantages of the resampling methods explained previously).
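Outside ERDAS IMAGINE, the same node-by-node conversion of vector coordinates can be done with a library such as pyproj. The EPSG codes below are placeholders chosen for the example; substitute whatever source and target systems apply.

    from pyproj import Transformer

    # Geographic WGS84 (Lat/Lon) to UTM zone 16 North, as an example pairing.
    transformer = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

    lon, lat = -84.39, 33.75                      # one vector node
    easting, northing = transformer.transform(lon, lat)
    print(easting, northing)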


which is influenced by slope. intervisibility. Alpine vegetation). Topographic data and its derivative products have many applications. and chemical concentrations. siting of recreation areas. Any series of values. the ERDAS IMAGINE terrain analysis functions can be used on data types other than topographic data. etc.Introduction CHAPTER 9 Terrain Analysis Introduction Terrain analysis involves the processing and graphic simulation of elevation data. Field Guide 347 . may be used. however. for example.e. Slope images are usually color-coded according to the steepness of the terrain at each pixel. Terrain analysis software functions usually work with topographic data (also called terrain data or elevation data). Topographic data are essential for studies of trafficability. Especially useful are products derived from topographic data. be a key to identifying wildlife habitats that are associated with specific elevations.. Slope and aspect images are often an important factor in assessing the suitability of a site for a proposed use. including: • • calculating the shortest and most navigable path over a mountain range for constructing a road or routing a transmission line determining rates of snow melt based on variations in sun shadow. They can. Terrain analysis functions are not restricted to topographic data. and elevation Terrain data are often used as a component in complex GIS modeling or classification routines. aspect images — illustrating the prevailing direction that the slope faces at each pixel. in which an elevation (or Z value) is recorded at each X. Although this chapter mainly discusses the use of topographic data. These include: • • • slope images — illustrating changes in elevation over distance. route design. such as population densities.Y location. Terrain data can also be used for vegetation classification based on species that are terrain-sensitive (i. shaded relief images — illustrating variations in terrain by differentiating areas that would be illuminated or shadowed by a light source simulating the sun. non-point source pollution. (Welch 1990). ground water pressure values. aspect. magnetic and gravity measurements.

they are surveyed at a series of points including the extreme high and low points of the terrain.img file where the value of each pixel is a specific elevation value. DEM (digital elevation models) and DTED (Digital Terrain Elevation Data) are expressed as regularly spaced points. To make topographic data usable in ERDAS IMAGINE.Y. When topographic data are collected in the field. as shown in Figure 149. See "CHAPTER 3: Raster and Vector Data Sources" for more details on DEM and DTED data. 20 30 40 30 50 20 20 31 45 22 39 48 29 38 41 34 34 30 Topographic image with grid overlay. along features of interest that define the topography such as streams and ridge lines. DEMs can be edited with the Raster Editing capabilities of ERDAS IMAGINE. and at various points in between. See "CHAPTER 7: Photogrammetric Concepts" for more information on the digital orthographic process. Elevations are read at each grid intersection point. A DEM is a one band .. ... or DEM. Elevation points can also be generated through digital orthographic methods. they must be represented as a surface. and Z values.. A gray scale is used to differentiate variations in terrain. To create DEM and DTED files.resulting DEM or regularly spaced terrain data points (Z values) Figure 149: Regularly Spaced Terrain Data Points Elevation data are derived from ground surveys and through manual photogrammetric methods. See “Chapter 1: Raster Layers” for more information. Topographic Data Topographic data are usually expressed as a series of points with X. a regular grid is overlaid on the topographic contours. 348 ERDAS .See "CHAPTER 10: Geographic Information Systems" for more information about GIS modeling.

Slope Images
Slope is expressed as the change in elevation over a certain distance. In this case, the certain distance is the size of the pixel. Slope is most often expressed as a percentage, but can also be calculated in degrees.

Use the Slope function in Image Interpreter to generate a slope image.

In ERDAS IMAGINE, the relationship between percentage and degree expressions of slope is as follows:
• a 45° angle is considered a 100% slope
• a 90° angle is considered a 200% slope
• slopes less than 45° fall within the 1 - 100% range
• slopes between 45° and 90° are expressed as 100 - 200% slopes

Slope images are often used in road planning. For example, if the Department of Transportation specifies a maximum of 15% slope on any road, it would be possible to recode all slope values that are greater than 15% as unsuitable for road building.

A 3 × 3 pixel window is used to calculate the slope at each pixel. For a pixel at location X,Y, the elevations around it are used to calculate the slope, as shown below. A hypothetical example is shown with the slope calculation formulas. In Figure 150 (3 × 3 Window Calculates the Slope at Each Pixel), each pixel is 30 × 30 meters. Pixel X,Y has elevation e, and a, b, c, d, f, g, h, and i are the elevations of the pixels around it in a 3 × 3 window:

a = 10 m    b = 20 m    c = 25 m
d = 22 m    e = 30 m    f = 25 m
g = 20 m    h = 24 m    i = 18 m
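The calculation spelled out on the following pages can be condensed into one short function. This is a hypothetical sketch, assuming a 3 × 3 window of elevations supplied row by row and square pixels; it is not the Image Interpreter code.

    import numpy as np

    def slope_from_window(window, pixel_size=30.0):
        # window: 3 x 3 elevations [[a, b, c], [d, e, f], [g, h, i]] in meters.
        a, b, c, d, e, f, g, h, i = np.asarray(window, float).ravel()
        dx = ((c - a) + (f - d) + (i - g)) / (3 * pixel_size)   # average change in X
        dy = ((a - g) + (b - h) + (c - i)) / (3 * pixel_size)   # average change in Y
        s = np.hypot(dx, dy) / 2
        percent = s * 100 if s <= 1 else 200 - 100 / s
        degrees = np.degrees(np.arctan(s))
        return percent, degrees

    # The hypothetical window in the text gives roughly 9.7% (about 5.5 degrees):
    print(slope_from_window([[10, 20, 25], [22, 30, 25], [20, 24, 18]]))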

First, the average elevation changes per unit of distance in the x and y direction (∆x and ∆y) are calculated as:

∆x1 = c - a        ∆y1 = a - g
∆x2 = f - d        ∆y2 = b - h
∆x3 = i - g        ∆y3 = c - i

∆x = (∆x1 + ∆x2 + ∆x3) / (3 × xs)
∆y = (∆y1 + ∆y2 + ∆y3) / (3 × ys)

where:
a through i = elevation values of pixels in a 3 × 3 window, as shown above
xs = x pixel size = 30 meters
ys = y pixel size = 30 meters

So, for the hypothetical example:

∆x1 = 25 - 10 = 15      ∆y1 = 10 - 20 = -10
∆x2 = 25 - 22 = 3       ∆y2 = 20 - 24 = -4
∆x3 = 18 - 20 = -2      ∆y3 = 25 - 18 = 7

∆x = (15 + 3 - 2) / (30 × 3) = 0.177
∆y = (-10 - 4 + 7) / (30 × 3) = -0.078

the slope is: slope in degrees = tan–1 ( s ) × -----..30 = 5.0967 ≤1 percent slope percent slope = s × 100 = 200 – -----180 π 100 s or else slope in degrees = tan–1 ( s ) × ------ For the example.0967 × 100 = 9.67% Field Guide 351 ..Slope Images .54 180 π percent slope = 0.the slope at pixel x.0967 ) × 57.y is calculated as: ( ∆x ) 2 + ( ∆y ) 2 s = ------------------------------------2 ifs s = 0.= tan–1 ( 0.

Y has elevation e. aspect uses a 3x3 window around each pixel to calculate the prevailing direction it faces.f.d. the average changes in elevation in both x and y directions are calculated first.Aspect Images An aspect image is an .y with the following elevation values around it.h. Aspect is expressed in degrees from north.img file that is gray scale-coded according to the prevailing direction of the slope at each pixel. In transportation planning. Figure 151: 3 × 3 Window Calculates the Aspect at Each Pixel ∆x 1 = c – a ∆x 2 = f – d ∆x 3 = i – g ∆y 1 = a – g ∆y 2 = b – h ∆y 3 = c – i ∆x = ( ∆x 1 + ∆x 2 + ∆x 3 ) ⁄ 3 ∆y = ( ∆y 1 + ∆y 2 + ∆y 3 ) ⁄ 3 352 ERDAS . and i are the elevations of the pixels around it in a 3x3 window. Due north is 0 degrees. a b c 10 m 20 m 25 m d e f 22 m 30 m 25 m g h i 20 m 24 m 18 m a. clockwise. A value of 90 degrees is due east.g.c. For pixel x. Each pixel is 30x30 meters in the following example: Pixel X. Aspect files are used in many of the same applications as slope files. Especially in northern climates. and 270 degrees is due west.b. north facing slopes are often avoided. from 0 to 360. Use the Aspect function in Image Interpreter to generate an aspect image. As with slope calculations. these would be exposed to the most severe weather and would hold snow and ice the longest. It would be possible to recode all pixels with north facing aspects as undesirable for road building. 180 degrees is due south. A value of 361 degrees is used to identify flat surfaces such as water bodies. for example.
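The aspect calculation, including the arctangent step on the next page, can be sketched the same way. This is hypothetical illustration code, not the Image Interpreter Aspect function.

    import numpy as np

    def aspect_from_window(window):
        # window: 3 x 3 elevations [[a, b, c], [d, e, f], [g, h, i]].
        # Returns degrees clockwise from north; 361 flags a flat surface.
        a, b, c, d, e, f, g, h, i = np.asarray(window, float).ravel()
        dx = ((c - a) + (f - d) + (i - g)) / 3
        dy = ((a - g) + (b - h) + (c - i)) / 3
        if dx == 0 and dy == 0:
            return 361.0
        theta = np.degrees(np.arctan2(dx, dy))   # quadrant-aware arctan(dx / dy)
        return (180.0 + theta) % 360.0

    # The window used in the text faces roughly west-northwest (about 293.6 degrees):
    print(aspect_from_window([[10, 20, 25], [22, 30, 25], [20, 24, 18]]))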

.= – 2.6 degrees aspect = 180 + 113. = 1.33 3 If ∆x = 0 and ∆y = 0.33 3 – 10 – 4 + 7 ∆y = ---------------------------.i =elevation values of pixels in a 3 × 3 window as shown above 15 + 3 – 2 ∆x = ----------------------. 5.98  – 2.= 5. For the example above.Aspect Images where: a.  ∆y  then aspect is 180 + θ (in degrees).98 radians = 113.6 degrees Field Guide 353 .33 θ = tan–1  -----------. Otherwise. θ is calculated as: ∆x θ = tan–1  ----.33  1. then the aspect is flat (coded to 361 degrees)..6 = 293.

Shaded Relief A shaded relief image provides an illustration of variations in elevation. draped over the terrain.. 30 40 50 this. For example.. It does not calculate the shadow that is cast by topographic features onto the surrounding surface... It is important to note that the relief program identifies shadowed areas—i. Only the portions of the mountain that would be in shadow from a northwest light would be shaded. areas that would be in sunlight are highlighted and areas that would be in shadow are shaded. of snow melt over an area spanned by an elevation surface. The software would not simulate a shadow that the mountain would cast on the southeast side. This condition produces. Use the Shaded Relief function in Image Interpreter to generate a relief image. not this = in sun shaded ≠ Figure 152: Shaded Relief Shaded relief images are an effective graphic tool. Shaded relief images can also be used to enhance subtle detail in gray scale images such as aeromagnetic. They can also be used in analysis. A series of relief images can be generated to simulate the movement of the sun over the landscape.e. Snow melt rates can then be estimated for each pixel based on the amount of time it spends in sun or shadow.g. a high mountain with sunlight coming from the northwest would be symbolized as follows in shaded relief. Based on a user-specified position of the sun. alone or in combination with an .. those that are not in direct sun. e.img file. 354 ERDAS . Shaded relief images are generated from an elevation surface. radar.. gravity maps. etc.

Each pixel is assigned a value between -1 and +1 to indicate the amount of light reflectance at that pixel. • • Negative numbers and zero values represent shadowed areas. Pixels facing north-northwest and westnorthwest would not be quite as bright. (In the example above.img file. These indicate shadowed areas. Positive numbers represent sunny areas. All negative values are set to 0 or to the minimum light level specified by the user. the software compares the user-specified sun position and angle with the angle each pixel faces. pixels facing northwest would be the brightest. Light reflectance in sunny areas falls within a range of values depending on whether the pixel is directly facing the sun or not. The reflectance values are then applied to the original pixel values to get the final result. the surface reflectance values are multiplied by the color lookup values for the .img file along with the elevation surface.) In a relief file that includes an . Field Guide 355 .Shaded Relief In calculating relief. with +1 assigned to the areas of highest reflectance.
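The reflectance value for a single pixel can be approximated from its slope and aspect and the sun position. The sketch below is a hypothetical Lambertian-style illustration of the idea, not the Shaded Relief implementation; values of zero or less mark shadowed pixels.

    import numpy as np

    def relief_shading(slope_deg, aspect_deg, sun_elevation_deg, sun_azimuth_deg):
        # Cosine of the angle between the surface normal and the sun direction,
        # which falls in the range -1 to +1.
        slope = np.radians(slope_deg)
        aspect = np.radians(aspect_deg)
        zenith = np.radians(90.0 - sun_elevation_deg)
        azimuth = np.radians(sun_azimuth_deg)
        return (np.cos(zenith) * np.cos(slope)
                + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))

    # A pixel facing a north-west sun 30 degrees above the horizon is brightly lit:
    print(relief_shading(20.0, 315.0, 30.0, 315.0))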

The following equation produces normalized brightness values (Colby 1991. Topographic effect results from the differences in illumination due to the angle of the sun and the angle of the terrain. making it appear as if it were a flat surface. and that variations in reflectance are due to the amount of incident radiation.Topographic Normalization Digital imagery from mountainous regions often contains a radiometric distortion known as topographic effect. These models normalize the imagery. When using the Topographic Normalization model. the following information is needed: • • • Lambertian Reflectance Model solar elevation and azimuth at time of image acquisition DEM file original imagery file (after atmospheric corrections) The Lambertian Reflectance model assumes that the surface reflects incident solar energy uniformly in all directions. This causes a variation in the image brightness values. Smith et al 1980): BVnormal λ= BV observed λ / cos i where: BVnormal λ = normalized brightness values BVobserved λ= observed brightness values cos i = cosine of the incidence angle 356 ERDAS . The Topographic Normalize function in Image Interpreter uses a Lambertian Reflectance model to normalize topographic effect in VIS/IR imagery. Topographic effect is a combination of: • • • incident illumination — the orientation of the surface with respect to the rays of the sun exitance angle — the amount of reflected energy as a function of the slope angle surface cover characteristics — rugged terrain with high mountains or steep slopes (Hodgson and Shelley 1993) One way to reduce topographic effect in digital imagery is by applying transformations based on the Lambertian or Non-Lambertian reflectance models.
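A hypothetical sketch of the Lambertian correction: compute cos i from the sun position and the terrain slope and aspect (the incidence angle is defined below in the text), then divide the observed brightness values. Pixels where cos i approaches zero are clamped here to avoid division blow-ups; a production implementation would treat them more carefully.

    import numpy as np

    def lambertian_normalize(bv_observed, slope_deg, aspect_deg,
                             sun_elevation_deg, sun_azimuth_deg, min_cos=0.05):
        # BVnormal = BVobserved / cos(i), applied per pixel for one band.
        slope = np.radians(slope_deg)
        aspect = np.radians(aspect_deg)
        zenith = np.radians(90.0 - sun_elevation_deg)
        azimuth = np.radians(sun_azimuth_deg)
        cos_i = (np.cos(zenith) * np.cos(slope)
                 + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
        cos_i = np.maximum(cos_i, min_cos)      # clamp shadowed / grazing pixels
        return np.asarray(bv_observed, float) / cos_i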

Field Guide 357 . For these areas.Topographic Normalization Incidence Angle The incidence angle is defined from: cos i = cos (90 . although more computationally demanding than the Lambertian model. NOTE: The Non-Lambertian model does not detect surfaces that are shadowed by intervening topographic features between each pixel and the sun. This model. may present more accurate results. he formulated the Non-Lambertian model. a line-of-sight algorithm will identify such shadowed pixels. which takes into account variations in the terrain. Smith et al 1980): BVnormal λ= (BVobserved λ cos e) / (cosk i cosk e) where: BVnormal λ BVobserved λ cos i cos e k = = = = = normalized brightness values observed brightness values cosine of the incidence angle cosine of the exitance angle. Non-Lambertian Model Minnaert (1961) proposed that the observed surface does not reflect incident solar energy uniformly in all directions. In a Non-Lambertian Reflectance model. The k value is the slope of the regression line (Hodgson and Shelley 1993): log (BVobserved λ cos e) = log BVnormal λ+ k log (cos i cos e) Use the Spatial Modeler to create a model based on the Non-Lambertian Model. provided that all the observations in this set are the same type of land cover.φn) where: i= the angle between the solar rays and the normal to the surface θs= the elevation of the sun φs= the azimuth of the sun θn= the slope of each surface element φn= the aspect of each surface element If the surface has a slope of 0 degrees. then aspect is undefined and i is simply 90 . Instead.θs) cos θn + sin (90 .θs) sin θn cos (φs . the following equation is used to normalize the brightness values in the image (Colby 1991. or slope angle the empirically derived Minnaert constant Minnaert Constant The Minnaert constant (k) may be found by regressing a set of observed brightness values from the remotely sensed imagery with known slope and aspect values.θs.


CHAPTER 10
Geographic Information Systems

Introduction

The beginnings of geographic information systems (GIS) can legitimately be traced back to the beginnings of man. The earliest known map dates back to 2500 B.C., but there were probably maps before that. Since then, man has been continually improving the methods of conveying spatial information. The mid-eighteenth century brought the use of map overlays to show troop movements in the Revolutionary War. This could be considered an early GIS. The first British census in 1825 led to the science of demography, another application for GIS. During the 1800's, many different cartographers and scientists were all discovering the power of overlays to convey multiple levels of information about an area (Star and Estes 1990).

Frederick Law Olmstead has long been considered the father of Landscape Architecture for his pioneering work in the early 20th century. Many of the methods Olmstead used in Landscape Architecture also involved the use of hand-drawn overlays. This type of analysis was beginning to be used for a much wider range of applications, such as change detection, urban planning, and resource management (Rado 1992).

The first system to be called a GIS was the Canadian Geographic Information System, developed in 1962 by Roger Tomlinson of the Canada Land Inventory. Unlike earlier systems that were developed for a specific application, this system was designed to store digitized map data and land-based attributes in an easily accessible format for all of Canada. This system is still in operation today (Parent and Church 1987).

In 1969, Ian McHarg's influential work, Design with Nature, was published. This work on land suitability/capability analysis (SCA), a system designed to analyze many data layers to produce a plan map, discussed the use of overlays of spatially referenced data layers for resource planning and management (Star and Estes 1990).

The era of modern GIS really started in the 1970s, as analysts began to program computers to automate some of the manual processes. Software companies like ESRI (Redlands, CA) and ERDAS developed software packages that could input, display, and manipulate geographic data to create new layers of information. The steady advances in features and power of the hardware over the last ten years and the decrease in hardware costs have made GIS technology accessible to a wide range of users. The growth rate of the GIS industry in the last several years has exceeded even the most optimistic projections.

Today, a geographic information system (or GIS) is a unique system designed to input, store, retrieve, manipulate, and analyze layers of geographic data to produce interpretable information. A GIS should also be able to create reports and maps (Marble 1990). The GIS data base may include computer images, hardcopy maps, statistical data, or any other data that is needed in a study. Although the term GIS is commonly used to describe software packages, a true GIS includes knowledgeable staff, a training program, budgets, marketing, hardware, data, and software (Walker and Miller 1990). GIS technology can be used in almost any geography-related discipline, from Landscape Architecture to natural resource management to transportation routing.

The central purpose of a GIS is to turn geographic data into useful information—the answers to real-life questions—questions such as:

• How will we monitor the influence of global climatic changes on the earth's resources?
• How should political districts be redrawn in a growing metropolitan area?
• Where is the best place for a shopping center that will be most convenient to shoppers and least harmful to the local ecology?
• What areas should be protected to ensure the survival of endangered species?
• How can communities be better prepared to face natural disasters, such as earthquakes, tornadoes, hurricanes, and floods?

Information vs. Data

Information, as opposed to data, is independently meaningful. It is relevant to a particular problem or question:

• "The land cover at coordinate N875250, E757261 has a data file value 8," is data.
• "Land cover with a value of 8 are on slopes too steep for development," is information.

The user can input data into a GIS and output information. The information the user wishes to derive determines the type of data that must be input. For example, if one is looking for a suitable refuge for bald eagles, zip code data is probably not needed, while land cover data may be useful.

For this reason, the first step in any GIS project is usually an assessment of the scope and goals of the study. Once the project is defined, the user can begin the process of building the data base. A custom data base must be created for the particular project and study area, and it must be designed to meet the needs of the organization and its objectives. ERDAS IMAGINE provides all the tools required to build and manipulate a GIS data base.

Successful GIS implementation typically includes two major steps:

• data input
• analysis

Data input involves collecting the necessary data layers into the image data base. In the analysis phase, these data layers will be combined and manipulated in order to create new layers and to extract meaningful information from them. This chapter discusses these steps in detail.

Data Input

Acquiring the appropriate data for a project involves creating a data base of layers that encompass the study area. A data base created with ERDAS IMAGINE can consist of:

• continuous layers (satellite imagery, aerial photographs, elevation data, etc.)
• thematic layers (land use, vegetation, hydrology, soils, slope, etc.)
• vector layers (streets, utility and communication lines, parcels, etc.)
• statistics (frequency of an occurrence, population demographics, etc.)
• attribute data (characteristics of roads, land, imagery, etc.)

The ERDAS IMAGINE software package employs a hierarchical, object-oriented architecture that utilizes both raster imagery and topological vector data. Raster images are stored in .img files, and vector layers are coverages based on the ARC/INFO data model. The seamless integration of these two data types enables the user to reap the benefits of both data formats in one system.

Figure 153: Data Input (raster data input such as Landsat TM, SPOT panchromatic, aerial photographs, soils data, and land cover; vector data input such as roads, census data, ownership parcels, political boundaries, and landmarks; raster and vector attributes; GIS analyst using ERDAS IMAGINE)

Raster data might be more appropriate in the following applications:

• site selection
• natural resource management
• petroleum exploration
• mission planning
• change detection

On the other hand, vector data may be better suited for these applications:

• urban planning
• tax assessment and planning
• traffic engineering
• facilities management

The advantage of an integrated raster and vector system such as ERDAS IMAGINE is that one data structure does not have to be chosen over the other. Both data formats can be used and the functions of both types of systems can be accessed. Depending upon the project, only raster or vector data may be needed, but most applications benefit from using both.

Themes and Layers

A data base usually consists of files with data of the same geographical area, with each file containing different types of information. For example, a data base for the city recreation department might include files of all the parks in the area. These files might depict park boundaries, county and municipal boundaries, vegetation types, soil types, drainage basins, slope, roads, etc. Each of these files contains different information—each is a different theme. The concept of themes has evolved from early GISs, in which transparent overlays were created for each theme and combined (overlaid) in different ways to derive new information.

A single theme may require more than a simple raster or vector file to fully describe it. In addition to the image, there may be attribute data that describe the information, a color scheme, or meaningful annotation for the image. The full collection of data that describe a certain theme is called a layer.

Depending upon the goals of a project, it may be helpful to combine several themes into one layer. For example, if a user wanted to propose a new park site, he or she might create one layer that shows roads, land cover, land ownership, slope, etc., and indicate through the use of colors and/or annotation which areas would be best for the new site. This one layer would then include many separate themes. Much of GIS analysis is concerned with combining individual themes into one or more layers that answer the questions driving the analysis. This chapter explores these analysis techniques.

Continuous Layers

Continuous raster layers are quantitative (measuring a characteristic) and have related, continuous values. Continuous raster layers can be multiband (e.g., Landsat TM) or single band (e.g., SPOT panchromatic). Satellite images, aerial photographs, elevation data, scanned maps, and other continuous raster layers can be incorporated into a data base and provide a wealth of information that is not available in thematic layers or vector layers.

Once used only for image processing, continuous data are now being incorporated into GIS data bases and used in combination with thematic data to influence processing algorithms or as backdrop imagery on which to display the results of analyses. In fact, these layers often form the foundation of the data base. Extremely accurate base maps can be created from rectified satellite images or aerial photographs. Then, all other layers that are added to the data base can be registered to this base map. Current satellite data and aerial photographs are also effective in updating outdated vector data. The vectors can be overlaid on the raster backdrop and updated dynamically to reflect new or changed features, such as roads, utility lines, or land use. This chapter will explore the many uses of continuous data in a GIS.

See "CHAPTER 1: Raster Data" for more information on continuous data.

Thematic Layers

Thematic data are typically represented as single layers of information stored as .img files and containing discrete classes. Classes are simply categories of pixels which represent the same condition. An example of a thematic layer is a vegetation classification with discrete classes representing coniferous forest, deciduous forest, wetlands, agriculture, urban, etc.

A thematic layer is sometimes called a variable, because it represents one of many characteristics about the study area. Since thematic layers usually have only one "band," they are usually displayed in pseudo color mode, where particular colors are often assigned to help others visualize the information. For example, blues are usually used for water features, greens for healthy vegetation, etc.

See "CHAPTER 4: Image Display" for information on pseudo color display.

Class Numbering Systems

As opposed to the data file values of continuous raster layers, which are generally multiband and statistically related, the data file values of thematic raster layers can have a nominal, ordinal, interval, or ratio relationship (Star and Estes 1990).

• Nominal classes represent categories with no particular order. Usually, these are characteristics that are not associated with quantities (e.g., soil type or political area).
• Ordinal classes are those that have a sequence, such as "poor," "good," "better," and "best." An ordinal class numbering system is often created from a nominal system, in which classes have been ranked by some criteria. In the case of the recreation department data base used in the previous example, the final layer may rank the proposed park sites according to their overall suitability.
• Interval classes also have a natural sequence, but the distance between each value is meaningful as well. This numbering system might be used for temperature data.
• Ratio classes differ from interval classes only in that ratio classes have a natural zero point, such as rainfall amounts.

The variable being analyzed and the way that it contributes to the final product determines the class numbering system used in the thematic layers. Layers that have one numbering system can easily be recoded to a new system. This is discussed in detail under "Recoding" on page 378.

Classification

Thematic layers can be generated from remotely sensed data (e.g., Landsat TM, SPOT) by using the ERDAS IMAGINE Image Interpreter, Classification, and Spatial Modeler tools. A frequent and popular application is the creation of land cover classification schemes through the use of both supervised (user-assisted) and unsupervised (automatic) pattern-recognition algorithms contained within ERDAS IMAGINE. The output is a single thematic layer which represents specific classes based on the approach selected.

See "CHAPTER 6: Classification" for more information.

Vector Data Converted to Raster Format

Vector layers can be converted to raster format if the raster format is more appropriate for an application. Typical vector layers, such as communication lines, streams, boundaries, and other linear features, can easily be converted to raster format within ERDAS IMAGINE for further analysis.

Use the Vector Utilities menu from the Vector icon in the IMAGINE icon panel to convert vector layers to raster format.

Other sources of raster data are discussed in "CHAPTER 3: Raster and Vector Data Sources".

Statistics

Both continuous and thematic layers include statistical information. Thematic layers contain the following information:

• a histogram of the data values, which is the total number of pixels in each class
• a list of class names that correspond to class values
• a list of class values
• a color table, stored as brightness values in red, green, and blue, which make up the colors of each class when the layer is displayed

For thematic data, these statistics are called attributes and may be accompanied by many other types of information, as described below.

Use the Image Information option in the ERDAS IMAGINE icon panel to generate or update statistics for .img files.

See "CHAPTER 1: Raster Data" for more information about the statistics stored with continuous layers.

Vector Layers

The vector layers used in ERDAS IMAGINE are based on the ARC/INFO data model and consist of points, lines, and polygons. These layers are topologically complete, meaning that the spatial relationships between features are maintained. Vector layers can be used to represent transportation routes, communication lines, tax parcels, school zones, voting districts, landmarks, population density, utility corridors, etc. Vector layers can be analyzed independently or in combination with continuous and thematic raster layers.

Vector data can be acquired from several private and governmental agencies. Vector data can also be created in ERDAS IMAGINE by digitizing on the screen, using a digitizing tablet, or converting other data types to vector format.

See "CHAPTER 2: Vector Layers" for more information on the characteristics of vector data.

Attributes

Text and numerical data that are associated with the classes of a thematic layer or the features in a vector layer are called attributes. This information can take the form of character strings, integer numbers, or floating point numbers. Attributes work much like the data that are handled by data base management software. The user may define fields, which are categories of information about each class. A record is the set of all attribute data for one class. Each record is like an index card, containing information about one class or feature in a file of many index cards, which contain similar information for the other classes or features.

In ERDAS IMAGINE, raster and vector attributes are handled slightly differently, so a separate section on each follows. Attribute information for raster layers is stored in the image (.img) file. Vector attribute information is stored in an INFO file. Both are viewed in ERDAS IMAGINE CellArrays, which allow the user to display and manipulate the information. In both cases, there are fields that are automatically generated by the software, but more fields can be added as needed to fully describe the data.

Raster Attributes

In ERDAS IMAGINE, raster attributes for .img files are accessible from the Raster Attribute Editor. The Raster Attribute Editor contains a CellArray, which is similar to a table or spreadsheet that not only displays the information, but includes options for importing, exporting, copying, editing, and other operations. Figure 154 shows the attributes for a land cover classification layer.

Figure 154: Raster Attributes for lnlandc.img

Most thematic layers contain the following attribute fields:

• Class Name
• Class Value
• Color table (red, green, and blue values)
• Opacity percentage
• Histogram (number of pixels in the file that belong to the class)

As many additional attribute fields as needed can be defined for each class.

See "CHAPTER 6: Classification" for more information about the attribute information that is automatically generated when new thematic layers are created in the classification process.

Viewing Raster Attributes

Simply viewing attribute information can be a valuable analysis tool. Depending on the type of information associated with the layers of a data base, processing may be further refined by comparing the attributes of several files. When both the raster layer and its associated attribute information are displayed, the user can select features in one using the other. For example, to locate the class name associated with a particular polygon in a displayed image, simply click on that polygon with the mouse and that row is highlighted in the Raster Attribute Editor.

Attribute information is accessible in several places throughout ERDAS IMAGINE. In some cases it is read-only and in other cases it is a fully functioning editor.

Manipulating Raster Attributes

The applications for manipulating attributes are as varied as the applications for GIS. The attribute information in a data base will depend on the goals of the project. Some of the attribute editing capabilities in ERDAS IMAGINE include:

• import/export ASCII information to and from other software packages, such as spreadsheets and word processors
• cut, copy, and paste individual cells, rows, or columns to and from the same Raster Attribute Editor or among several Raster Attribute Editors
• generate reports that include all or a subset of the information in the Raster Attribute Editor
• use formulas to populate cells
• directly edit cells by entering in new information

The Raster Attribute Editor in ERDAS IMAGINE also includes a color cell column, so that class (object) colors can be viewed or changed. In addition to direct user manipulation, attributes can be changed by other programs. For example, some of the Image Interpreter functions calculate statistics that are automatically added to the Raster Attribute Editor. Models that read and/or modify attribute information can also be written.

There is more information on GIS modeling, starting on page 383.

See "CHAPTER 5: Enhancement" for more information on the Image Interpreter.

Vector Attributes

Vector attributes are stored in the Vector Attributes CellArrays. The user can simply view attributes or use them to:

• select features in a vector layer for further processing
• determine how vectors are symbolized
• label features

Figure 155 shows the attributes for a roads layer.

Figure 155: Vector Attributes CellArray

See "CHAPTER 2: Vector Layers" for more information about vector attributes.


Analysis
ERDAS IMAGINE Analysis Tools

In ERDAS IMAGINE, GIS analysis functions and algorithms are accessible through three main tools:

• script models created with the Spatial Modeler Language
• graphical models created with Model Maker
• pre-packaged functions in Image Interpreter

Spatial Modeler Language

The Spatial Modeler Language is the basis for all ERDAS IMAGINE GIS functions and it is the most powerful. It is a modeling language that enables the user to create script (text) models for a variety of applications. Models may be used to create custom algorithms that best suit the user's data and objectives.

Model Maker

Model Maker is essentially the Spatial Modeler Language linked to a graphical interface. This enables the user to create graphical models using a palette of easy-to-use tools. Graphical models can be run, edited, saved in libraries, or converted to script form and edited further, using the Spatial Modeler Language.

NOTE: References to the Spatial Modeler in this chapter mean that the named procedure can be accomplished using both Model Maker and the Spatial Modeler Language.

Image Interpreter

The Image Interpreter houses a set of common functions that were all created using either Model Maker or the Spatial Modeler Language. They have been given a dialog interface to match the other processes in ERDAS IMAGINE. In most cases, these processes can be run from a single dialog. However, the actual models are also provided with the software to enable customized processing.

Many of the functions described in the following sections can be accomplished using any of these tools. Model Maker is also easy to use and requires many of the same steps that would be performed when drawing a flow chart of an analysis. The Spatial Modeler Language is intended for more advanced analyses, and has been designed using natural language commands and simple syntax rules. Some applications may require a combination of these tools.

Customizing ERDAS IMAGINE Tools

ERDAS Macro Language (EML) enables the user to create and add new and/or customized dialogs. If new capabilities are needed, they can be created with the C Programmers' Toolkit. Using these tools, a GIS that is completely customized to a specific application and its preferences can be created.

The ERDAS Macro Language and the C Programmers’ Toolkit are part of the ERDAS IMAGINE Developers’ Toolkit.


See the ERDAS IMAGINE On-Line Help for more information about the Developers' Toolkit.

Analysis Procedures

Once the data base (layers and attribute data) is assembled, the layers can be analyzed and new information extracted. Some information can be extracted simply by looking at the layers and visually comparing them to other layers. However, new information can be retrieved by combining and comparing layers using the procedures outlined below:

• Proximity analysis — the process of categorizing and evaluating pixels based on their distances from other pixels in a specified class or classes.
• Contiguity analysis — enables the user to identify regions of pixels in the same class and to filter out small regions.
• Neighborhood analysis — any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning. This is similar to the convolution filtering performed on continuous data. Several types of analyses can be performed, such as boundary, density, mean, sum, etc.
• Recoding — enables the user to assign new class values to all or a subset of the classes in a layer.
• Overlaying — creates a new file with either the maximum or minimum value of the input layers.
• Indexing — adds the values of the input layers.
• Matrix analysis — outputs the coincidence of values in the input layers.
• Graphical modeling — enables the user to combine data layers in an unlimited number of ways. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.
• Script modeling — offers all of the capabilities of graphical modeling with the ability to perform more complex functions, such as conditional looping.

Using an Area of Interest (AOI)

Any of these functions can be performed on a single layer or multiple layers. The user can also select a particular area of interest (AOI) that is defined in a separate file (AOI layer, thematic raster layer, or vector layer) or an area of interest that is selected immediately preceding the operation by entering specific coordinates or by selecting the area in a Viewer.


Proximity Analysis

Many applications require some measurement of distance, or proximity. For example, a real estate developer would be concerned with the distance between a potential site for a shopping center and an interchange to a major highway. A proximity analysis determines which pixels of a layer are located at specified distances from pixels in a certain class or classes. A new thematic layer (.img file) is created, which is categorized by the distance of each pixel from specified classes of the input layer. This new file then becomes a new layer of the data base and provides a buffer zone around the specified class(es). In further analysis, it may be beneficial to weight other factors, based on whether they fall in or outside the buffer zone. Figure 156 shows a layer containing lakes and streams and the resulting layer after a proximity analysis is run to create a buffer zone around all of the water features.

Figure 156: Proximity Analysis (the original layer, containing a lake and streams, and the layer after the proximity analysis is performed, with buffer zones around the water features)

Use the Search (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform a proximity analysis.
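As an illustration of the idea, not of the ERDAS IMAGINE implementation, the sketch below uses Python with NumPy and SciPy to derive a simple buffer-zone layer from a thematic array; the class values for water and the buffer distance are hypothetical.

import numpy as np
from scipy import ndimage

def buffer_zone(thematic, water_classes, max_distance, cell_size=1.0):
    """Return a layer coded 1 inside the buffer around the given classes, else 0."""
    water = np.isin(thematic, water_classes)
    # Distance from every pixel to the nearest water pixel (in map units).
    distance = ndimage.distance_transform_edt(~water, sampling=cell_size)
    return ((distance > 0) & (distance <= max_distance)).astype(np.uint8)

# Hypothetical layer: class 3 = lake, class 4 = stream, 30 m cells, 90 m buffer.
layer = np.zeros((6, 6), dtype=np.uint8)
layer[2, 2] = 3
layer[0, 5] = 4
print(buffer_zone(layer, water_classes=[3, 4], max_distance=90, cell_size=30.0))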


Contiguity Analysis

A contiguity analysis is concerned with the ways in which pixels of a class are grouped together. Groups of contiguous pixels in the same class, called raster regions, or "clumps," can be identified by their sizes and manipulated. One application of this tool would be an analysis for locating helicopter landing zones that require at least 250 contiguous pixels at 10 meter resolution. Contiguity analysis can be used to:

• further divide a large class into separate raster regions, or
• eliminate raster regions that are too small to be considered for an application.

Filtering Clumps

In cases where very small clumps are not useful, they can be filtered out according to their sizes. This is sometimes referred to as eliminating the "salt and pepper" effects, or "sieving." In Figure 157, all of the small clumps in the original (clumped) layer are eliminated.

Figure 157: Contiguity Analysis (the clumped layer and the sieved layer)

Use the Clump and Sieve (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform contiguity analysis.
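A rough equivalent of clumping and sieving can be sketched in Python with SciPy. This is only an illustration of the concept, not the Clump and Sieve function itself, and the class value and minimum clump size are made up for the example.

import numpy as np
from scipy import ndimage

def clump_and_sieve(thematic, class_value, min_pixels):
    """Label contiguous regions of one class and drop regions smaller than min_pixels."""
    mask = (thematic == class_value)
    clumps, n_clumps = ndimage.label(mask)          # raster regions ("clumps")
    sizes = np.bincount(clumps.ravel())             # pixels per clump (index 0 = background)
    keep = sizes >= min_pixels
    keep[0] = False                                 # never keep the background
    return np.where(keep[clumps], clumps, 0)        # small clumps become 0

# Hypothetical layer with two clumps of class 5; only clumps of 3 or more pixels survive.
layer = np.array([[5, 5, 0, 0],
                  [5, 0, 0, 5],
                  [0, 0, 0, 0]])
print(clump_and_sieve(layer, class_value=5, min_pixels=3))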


Neighborhood Analysis

With a process similar to the convolution filtering of continuous raster layers, thematic raster layers can also be filtered. The GIS filtering process is sometimes referred to as "scanning," but is not to be confused with data capture via a digital camera. Neighborhood analysis is based on local or neighborhood characteristics of the data (Star and Estes 1990). Every pixel is analyzed spatially, according to the pixels that surround it. The number and the location of the surrounding pixels is determined by a scanning window, which is defined by the user. These operations are known as focal operations. The scanning window can be:

• circular, with a maximum diameter of 512 pixels
• doughnut-shaped, with a maximum outer radius of 256
• rectangular, up to 512 × 512 pixels, with the option to mask out certain pixels

Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform neighborhood analysis. The scanning window used in Image Interpreter can be 3 × 3, 5 × 5, or 7 × 7. The scanning window in Model Maker is user-defined and can be up to 512 × 512.

Defining Scan Area

The user may define the area of the file to be scanned. The scanning window will move only through this area as the analysis is performed. The area may be defined in one or all of the following ways:

• Specify a rectangular portion of the file to scan. The output layer will contain only the specified area.
• Specify an area of interest that is defined by an existing AOI layer, an annotation overlay, or a vector layer. The area(s) within the polygon will be scanned, and the other areas will remain the same. The output layer will be the same size as the input layer or the selected rectangular portion.
• Specify a class or classes in another thematic layer to be used as a mask. The pixels in the scanned layer that correspond to the pixels of the selected class or classes in the mask layer will be scanned, while the other pixels will remain the same.


Figure 158: Using a Mask (a mask layer and a target layer)

In Figure 158, class 2 in the mask layer was selected for the mask. Only the corresponding (shaded) pixels in the target layer will be scanned—the other values will remain unchanged.

Neighborhood analysis creates a new thematic layer. There are several types of analysis that can be performed upon each window of pixels, as described below:

• Boundary — detects boundaries between classes. The output layer contains only boundary pixels. This is useful for creating boundary or edge lines from classes, such as a land/water interface.
• Density — outputs the number of pixels that have the same class value as the center (analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the analyzed pixel. This is often useful in assessing vegetation crown closure.
• Diversity — outputs the number of class values that are present within the window. Diversity is also a measure of heterogeneity (difference).
• Majority — outputs the class value that represents the majority of the class values in the window. The value is user-defined. This option operates like a low-frequency filter to clean up a "salt and pepper" layer.
• Maximum — outputs the greatest class value within the window. This can be used to emphasize classes with the higher class values or to eliminate linear features or boundaries.
• Mean — averages the class values. If class values represent quantitative data, then this option can work like a convolution filter. This is mostly used on ordinal or interval data.
• Median — outputs the statistical median of the class values in the window. This option may be useful if class values represent quantitative data.
• Minimum — outputs the least or smallest class value within the window. The value is user-defined. This can be used to emphasize classes with the low class values.


• Minority — outputs the least common of the class values that are within the window. This option can be used to identify the least common classes. It can also be used to highlight disconnected linear features.
• Rank — outputs the number of pixels in the scan window whose value is less than the center pixel.
• Standard deviation — outputs the standard deviation of class values in the window.
• Sum — totals the class values. In a file where class values are ranked, totaling enables the user to further rank pixels based on their proximity to high-ranking pixels.

Figure 159: Sum Option of Neighborhood Analysis (Image Interpreter)
Output of one iteration of the sum operation: 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48

In Figure 159, the Sum option of Neighborhood (Image Interpreter) is applied to a 3 × 3 window of pixels in the input layer. In the output layer, the analyzed pixel will be given a value based on the total of all of the pixels in the window.

The analyzed pixel is always the center pixel of the scanning window. In this example, only the pixel in the third column and third row of the file is “summed.”
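The focal (scan) idea can be mimicked outside the software. The following Python/SciPy sketch applies a 3 × 3 sum to every pixel of a small array; the sample values echo the example above, and the edge handling here simply treats pixels outside the layer as zero.

import numpy as np
from scipy import ndimage

def focal_sum(layer, size=3):
    """Sum the values in a size x size scanning window centered on each pixel."""
    window = np.ones((size, size))
    return ndimage.convolve(layer.astype(float), window, mode="constant", cval=0.0)

layer = np.array([[2, 8, 6, 6, 6],
                  [2, 8, 6, 6, 6],
                  [2, 2, 8, 6, 6],
                  [2, 2, 2, 8, 6],
                  [2, 2, 2, 2, 8]])
result = focal_sum(layer)
print(result[2, 2])   # 48.0: the 3 x 3 window centered on the third row, third column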


Recoding

Class values can be recoded to new values. Recoding involves the assignment of new values to one or more classes. Recoding is used to:

• reduce the number of classes
• combine classes
• assign different class values to existing classes

When an ordinal, ratio, or interval class numbering system is used, recoding can be used to assign classes to appropriate values. Recoding is often performed to make later steps easier. For example, in creating a model that will output "good," "better," and "best" areas, it may be beneficial to recode the input layers so all of the "best" classes have the highest class values. In the following example (Table 21), a land cover layer is recoded so that the most environmentally sensitive areas (Riparian and Wetlands) have higher class values.

Table 21: Example of a Recoded Land Cover Layer

Value   New Value   Class Name
0       0           Background
1       4           Riparian
2       1           Grassland and Scrub
3       1           Chaparral
4       4           Wetlands
5       1           Emergent Vegetation
6       1           Water

Use the Recode (GIS Analysis) function in Image Interpreter or Spatial Modeler to recode layers.
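Recoding is essentially a lookup from old class values to new ones. The short Python sketch below is an illustration only, not the Recode function; it applies the Table 21 mapping to a small hypothetical land cover array.

import numpy as np

# New value for each old class value 0-6 (see Table 21).
recode_table = np.array([0, 4, 1, 1, 4, 1, 1])

land_cover = np.array([[1, 2, 2, 0],
                       [4, 4, 3, 5],
                       [6, 6, 5, 1]])

recoded = recode_table[land_cover]   # index the lookup table with the class values
print(recoded)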


Overlaying

Thematic data layers can be overlaid to create a composite layer. The output layer contains either the minimum or the maximum class values of the input layers. For example, if an area was in class 5 in one layer, and in class 3 in another, and the maximum class value dominated, then the same area would be coded to class 5 in the output layer, as shown in Figure 160.

Figure 160: Overlay (Basic Overlay, and an Application Example in which an Original Slope layer (1-5 = flat slopes, 6-9 = steep slopes) is recoded to a Recoded Slope layer (0 = flat slopes, 9 = steep slopes) and overlaid with a Land Use layer (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands) to produce an Overlay Composite (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands, 9 = steep slopes, with land use masked))

The application example in Figure 160 shows the result of combining two layers—slope and land use. The slope layer is first recoded to combine all steep slopes into one value. When overlaid with the land use layer, the highest data file values (the steep slopes) dominate in the output layer.

Use the Overlay (GIS Analysis) function in Image Interpreter or Spatial Modeler to overlay layers.
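To show the arithmetic behind a maximum overlay (again only a sketch in Python/NumPy, not the Overlay function itself), the recoded slope layer and the land use layer from the example can be combined as follows; the sample arrays are hypothetical.

import numpy as np

recoded_slope = np.array([[0, 9, 9],
                          [0, 0, 9],
                          [0, 0, 0]])          # 0 = flat, 9 = steep

land_use = np.array([[4, 2, 5],
                     [1, 3, 3],
                     [2, 2, 5]])               # 1-5 = land use classes

# Maximum overlay: the steep-slope value (9) dominates wherever it occurs.
composite = np.maximum(recoded_slope, land_use)
print(composite)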


Indexing

Thematic layers can be indexed (added) to create a composite layer. The output layer contains the sums of the input layer values. For example, the intersection of class 3 in one layer and class 5 in another would result in class 8 in the output layer, as shown in Figure 161.

Figure 161: Indexing (Basic Index, and an Application Example in which Slope, Soils, and Access layers (9 = good, 5 = fair, 1 = poor) are weighted by importance (Slope ×2, Soils ×1, Access ×1) and added to calculate the output values)

The application example in Figure 161 shows the result of indexing. In this example, the user wants to develop a new subdivision, and the most likely sites are where there is the best combination (highest value) of good soils, good slope, and good access. Since good slope is a more critical factor to the user than good soils or good access, a weighting factor is applied to the slope layer. A weighting factor has the effect of multiplying all input values by some constant. In this example, slope is given a weight of 2.

Use the Index (GIS Analysis) function in the Image Interpreter or Spatial Modeler to index layers.
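The indexing example can be expressed as a weighted sum. The Python/NumPy sketch below is illustrative only; the layer values are hypothetical and the weight of 2 on slope follows the example above.

import numpy as np

slope = np.array([[9, 9, 1],
                  [5, 1, 9]])     # 9 = good, 5 = fair, 1 = poor
soils = np.array([[9, 5, 1],
                  [9, 9, 5]])
access = np.array([[9, 9, 9],
                   [1, 5, 9]])

# Slope is the most critical factor, so it is weighted by 2.
suitability = 2 * slope + 1 * soils + 1 * access
print(suitability)    # the highest values mark the best combination of factors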


Matrix Analysis

Matrix analysis produces a thematic layer that contains a separate class for every coincidence of classes in two layers. The output is best described with a matrix diagram:

                                     input layer 2 data values (columns)
                                      0     1     2     3     4     5
input layer 1 data values (rows)  0   0     0     0     0     0     0
                                  1   0     1     2     3     4     5
                                  2   0     6     7     8     9    10
                                  3   0    11    12    13    14    15

In this diagram, the classes of the two input layers represent the rows and columns of the matrix. The output classes are assigned according to the coincidence of any two input classes.

All combinations of 0 and any other class are coded to 0, because 0 is usually the background class, representing an area that is not being studied. Unlike overlaying or indexing, the resulting class values of a matrix operation are unique for each coincidence of two input class values. In this example, the output class value at column 1, row 3 is 11, and the output class at column 3, row 1 is 3. If these files were indexed (summed) instead of matrixed, both combinations would be coded to class 4.

Use the Matrix (GIS Analysis) function in Image Interpreter or Spatial Modeler to matrix layers.
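The numbering in the diagram can be reproduced with a small Python/NumPy sketch. This is an illustration of the coincidence coding shown above, not the Matrix function itself, and it assumes input layer 1 has classes 0-3 and input layer 2 has classes 0-5.

import numpy as np

def matrix_overlay(layer1, layer2, n_classes2=5):
    """Assign a unique output class to every coincidence of non-zero input classes."""
    out = (layer1 - 1) * n_classes2 + layer2
    # Any combination involving class 0 (background) is coded to 0.
    return np.where((layer1 == 0) | (layer2 == 0), 0, out)

a = np.array([[3, 1, 0],
              [2, 3, 1]])
b = np.array([[1, 3, 4],
              [5, 5, 0]])
print(matrix_overlay(a, b))
# e.g. layer 1 class 3 with layer 2 class 1 -> 11; layer 1 class 1 with layer 2 class 3 -> 3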


Modeling

Modeling is a powerful and flexible analysis tool. Modeling is the process of creating new layers from combining or operating upon existing layers. Modeling enables the user to create a small set of layers—perhaps even a single layer—which, at a glance, contains many types of information about the study area. For example, if a user wants to find the best areas for a bird sanctuary, taking into account vegetation, availability of water, climate, and distance from highly developed areas, he or she would create a thematic layer for each of these criteria. Then, each of these layers would be input to a model. The modeling process would create one thematic layer, showing only the best areas for the sanctuary. The set of procedures that define the criteria is called a model. In ERDAS IMAGINE, models can be created graphically and resemble a flow chart of steps, or they can be created using a script language. Although these two types of models look different, they are essentially the same—input files are defined, functions and/or operators are specified, and outputs are defined. The model is run and a new output layer(s) is created. Models can utilize analysis functions that have been previously defined, or new functions can be created by the user.

Use the Model Maker function in Spatial Modeler to create graphical models and the Spatial Modeler Language to create script models.

Data Layers

In modeling, the concept of layers is especially important. Before computers were used for modeling, the most widely used approach was to overlay registered maps on paper or transparencies, with each map corresponding to a separate theme. Today, digital files replace these hardcopy layers and allow much more flexibility for recoloring, recoding, and reproducing geographical information (Steinitz, Parker, and Jordan 1976). In a model, the corresponding pixels at the same coordinates in all input layers are addressed as if they were physically overlaid like hardcopy maps.


Graphical Modeling

Graphical modeling enables the user to "draw" models using a palette of tools that defines inputs, functions, and outputs. This type of modeling is very similar to drawing flowcharts, in that the user identifies a logical flow of steps needed to perform the desired action. Modeling is performed using a graphical editor that eliminates the need to learn a programming language. Complex models can be developed easily and then quickly edited and re-run on another data set. Through the extensive functions and operators available in the ERDAS IMAGINE graphical modeling program, the user can analyze many layers of data in very few steps, without creating intermediate files that occupy extra disk space.

Image Processing and GIS

In ERDAS IMAGINE, the traditional GIS functions (e.g., overlay, index, recode, proximity analysis, neighborhood analysis, etc.) can be performed in models, as well as image processing functions. Both thematic and continuous layers can be input into models that accomplish many objectives at once. For example, suppose there is a need to assess the environmental sensitivity of an area for development. An output layer can be created that ranks most to least sensitive regions based on several factors, such as slope, land cover, and floodplain. To visualize the location of these areas, the output thematic layer can be overlaid onto a high resolution, continuous raster layer (e.g., SPOT panchromatic) that has had a convolution filter applied. All of this can be accomplished in a single model (as shown in Figure 162).

Use the Model Maker function in Spatial Modeler to create graphical models.

Figure 162: Graphical Model for Sensitivity Analysis

See the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating the environmental sensitivity model in Figure 162. Descriptions of all of the graphical models delivered with ERDAS IMAGINE are available in the On-Line Help.

Model Structure

A model created with Model Maker is essentially a flow chart that defines:

• the input image(s), matrix(ces), table(s), and scalar(s) to be analyzed
• calculations, functions, or operations to be performed on the input data
• the output image(s) to be created

The graphical models created in Model Maker all have the same basic structure: input, function, output. The number of inputs, functions, and outputs can vary, but the overall form remains constant. All components must be connected to one another before the model can be executed. The model on the left in Figure 163 is the most basic form. The model on the right is more complex, but it retains the same input/function/output flow.

Figure 163: Graphical Model Structure (a basic model and a more complex model, each built from inputs, functions, and outputs)

Graphical models are stored in ASCII files with the .gmd extension. There are several sample graphical models delivered with ERDAS IMAGINE that can be used as is or edited for more customized processing. See the On-Line Help for instructions on editing existing models.

Model Maker Functions

The functions available in Model Maker are divided into 19 categories:

Category         Description
Analysis         Includes convolution filtering, histogram matching, contrast stretch, principal components, and more.
Arithmetic       Perform basic arithmetic functions including addition, subtraction, multiplication, division, factorial, and modulus.
Bitwise          Use bitwise and, or, exclusive or, and not.
Boolean          Perform logical functions including and, or, and not.
Color            Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue, saturation).
Conditional      Run logical tests using conditional statements and either...if...or...otherwise.
Data Generation  Create raster layers from map coordinates, column numbers, or row numbers. Create a matrix or table from a list of scalars.
Descriptor       Read attribute information and map a raster through an attribute column.
Distance         Perform distance functions, including proximity analysis.
Exponential      Use exponential operators, including natural and common logarithmic, power, and square root.
Focal (Scan)     Perform neighborhood analysis functions, including boundary, density, diversity, majority, mean, minority, rank, standard deviation, sum, and others.
Global           Analyze an entire layer and output one value, such as diversity, maximum, mean, minimum, standard deviation, sum, and more.
Matrix           Multiply, divide, and transpose matrices, as well as convert a matrix to a table and vice versa.
Other            Includes over 20 miscellaneous functions for data type conversion, various tests, and other utilities.
Relational       Includes equality, inequality, greater than, less than, greater than or equal, less than or equal, and others.
Statistical      Includes density, diversity, majority, mean, rank, standard deviation, and more.
String           Manipulate character strings.
Surface          Calculate aspect and degree/percent slope and produce shaded relief.
Trigonometric    Use common trigonometric functions, including sine/arcsine, cosine/arccosine, tangent/arctangent, and hyperbolic arcsine, arccosine, sine, cosine, and tangent.

These functions are also available for script modeling.

See the ERDAS IMAGINE Tour Guides manual and the on-line Spatial Modeler Language manual for complete instructions on using Model Maker and more detailed information about the available functions and operators.

Objects

Within Model Maker, an object is an input to or output from a function. The four basic object types used in Model Maker are:

• raster
• scalar
• matrix
• table

Raster

A raster object is a single layer or set of layers. Rasters are typically used to specify and manipulate data from image (.img) files.

Scalar

A scalar object is a single numeric value. Scalars are often used as weighting factors.

Matrix

A matrix object is a set of numbers arranged in a two-dimensional array. A matrix has a fixed number of rows and columns. Matrices may be used to store convolution kernels or the neighborhood definition used in neighborhood functions. They can also be used to store covariance matrices, eigenvector matrices, or matrices of linear combination coefficients.

Table

A table object is a series of numeric values or character strings. A table has one column and a fixed number of rows. Tables are typically used to store columns from the Raster Attribute Editor or a list of values which pertains to the individual layers of a set of layers. For example, a table with four rows could be used to store the maximum value from each layer of a four layer image file. A table may consist of up to 32,767 rows. Information in the table can be attributes, calculated (e.g., histograms), or user-defined.

The graphics used in Model Maker to represent each of these objects are shown in Figure 164.

Figure 164: Modeling Objects

Data Types

The four object types described above may be any of the following data types:

• Binary — either 0 (false) or 1 (true)
• Integer — integer values from -2,147,483,648 to 2,147,483,647 (signed 32-bit integer)
• Float — floating point data (double precision)
• String — a character string (for table objects only)

Input and output data types do not have to be the same. Using the Spatial Modeler Language, the user can change the data type of input files before they are processed.

Output Parameters

Since it is possible to have several inputs in one model, the working window and the pixel cell size of the output data can optionally be defined.

Working Window

Raster layers of differing areas can be input into one model. However, the image area, or working window, must be specified in order to use it in the model calculations. Either of the following options can be selected:

• Union — the model will operate on the union of all input rasters. (This is the default.)
• Intersection — the model will use only the area of the rasters that is common to all input rasters.

Input layers must be referenced to the same coordinate system (i.e., UTM, Lat/Lon, State Plane, etc.).

Pixel Cell Size

Input rasters may also be of differing resolution (pixel size), so the user must select the output cell size as either:

• Minimum — the minimum cell size of the input layers will be used (this is the default setting).
• Maximum — the maximum cell size of the input layers will be used.
• Other — specify a new cell size.

Using Attributes in Models

With the criteria function in Model Maker, attribute data can be used to determine output values. The criteria function can be used to build a table of conditions that must be satisfied to output a particular row value for an attribute (or cell value) associated with the selected raster.

The inputs to a criteria function are rasters. The columns of the criteria table represent either attributes associated with a raster layer or the layer itself, if the cell values are of direct interest. Criteria which must be met for each output column are entered in a cell in that column (e.g., >5). Multiple sets of criteria may be entered in multiple rows. The output raster will contain the first row number of a set of criteria that were met for a raster cell.

Example

For example, consider the sample thematic layer, parks.img, that contains the following attribute information:

Table 22: Attribute Information for parks.img

Class Name        Histogram   Acres    Path Condition   Turf Condition   Car Spaces
Grant Park        2456        403.90   Fair             Good             127
Piedmont Park     5167        547.88   Good             Fair             94
Candler Park      763         128.33   Excellent        Excellent        65
Springdale Park   548         46.45    None             Excellent        0

A simple model could create one output layer that showed only the parks in need of repairs. The following logic would be coded into the model: "If Turf Condition is not Good or Excellent, and if Path Condition is not Good or Excellent, then the output class value is 1. Otherwise, the output class value is 2."

More than one input layer could also be used. For example, a model could be created, using the input layers parks.img and soils.img, which would show the soil types for parks with Fair or Poor turf condition. Attributes can be used from every input file.
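The criteria logic above can be sketched in ordinary Python to make the conditional explicit. This is only an illustration of the rule, not the Model Maker criteria function, and the attribute values are taken from the hypothetical Table 22.

# Attribute table for the hypothetical parks.img layer (class value -> conditions).
parks = {
    1: {"name": "Grant Park",      "path": "Fair",      "turf": "Good"},
    2: {"name": "Piedmont Park",   "path": "Good",      "turf": "Fair"},
    3: {"name": "Candler Park",    "path": "Excellent", "turf": "Excellent"},
    4: {"name": "Springdale Park", "path": "None",      "turf": "Excellent"},
}

GOOD = {"Good", "Excellent"}

def output_class(class_value):
    """Return 1 for parks in need of repairs, otherwise 2."""
    attrs = parks[class_value]
    if attrs["turf"] not in GOOD and attrs["path"] not in GOOD:
        return 1
    return 2

for value, attrs in parks.items():
    print(attrs["name"], "->", output_class(value))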

The following is a slightly more complex example: if a user had a land cover file and wanted to create a file of pine forests larger than 10 acres, the criteria function could be used to output values only for areas that satisfied the conditions of being both pine forest and larger than 10 acres. The output file would have two classes: pine forests larger than 10 acres and background. If the user wanted the output file to show varying sizes of pine forest, he or she would simply add more conditions to the criteria table.

Comparisons of attributes can also be combined with mathematical and logical functions on the class values of the input file(s). With these capabilities, highly complex models can be created.

See the ERDAS IMAGINE Tour Guides manual or the On-Line Help for specific instructions on using the criteria function.

Script Modeling

The Spatial Modeler Language is a script language used internally by Model Maker to execute the operations specified in the graphical models that are created. The Spatial Modeler Language can also be used to write models directly. It includes all of the functions available in Model Maker, plus:

• conditional branching and looping
• the ability to use complex and color data types
• more flexibility in using raster objects and attributes

Graphical models created with Model Maker can be output to a script file (text only) in the Spatial Modeler Language. These scripts can then be edited with a text editor using the Spatial Modeler Language syntax and re-run or saved in a library. Script models can also be written from scratch in the text editor. They are stored in ASCII .mdl files. Generating script models from graphical models may aid in learning the Spatial Modeler Language.

The Text Editor is available from the ERDAS IMAGINE icon panel and from the Script Library (Spatial Modeler).

In Figure 165, both the graphical and script models are shown for a tasseled cap transformation. Notice how even the annotation on the graphical model is included in the automatically generated script model.

Tasseled Cap Transformation Models

Figure 165: Graphical and Script Models For Tasseled Cap Transformation

The script model generated from the graphical model is:

# TM Tasseled Cap Transformation
# of Lake Lanier, Georgia
#
# declarations
#
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";
#
# set cell size for the model
#
SET CELLSIZE MIN;
#
# set window for the model
#
SET WINDOW UNION;
#
# load matrix n2_Custom_Matrix
#
n2_Custom_Matrix = MATRIX(3, 7:
    0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
    -0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
    0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);
#
# function definitions
#
n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;
QUIT;

Convert graphical models to scripts using Model Maker. Open existing script models from the Script Librarian (Spatial Modeler).

img". Each statement falls into one of the following categories: • • • • • • Declaration — defines objects to be manipulated within the model Assignment — assigns a value to an object Show and View — enables the user to see and interpret results from the model Set — defines the scope of the model or establishes default values used by the Modeler Macro Definition — defines substitution text associated with a macro name Quit — ends execution of the model The Spatial Modeler Language also includes flow control structures. which cause a set of statements to be executed as a group. Set Example The following set statements are used: SET CELLSIZE MIN. FLOAT MATRIX n2_Custom_Matrix.img". so that the user can utilize conditional branching and looping in the models and statement block structures. FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.Statements A script model consists primarily of one or more statements. 392 ERDAS . Declaration Example In the script model in Figure 165. SET WINDOW UNION. the following lines form the declaration portion of the model: INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.

Assignment Example

The following assignment statements are used:

n2_Custom_Matrix = MATRIX(3, 7:
    0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
    -0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
    0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);

n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;

For script model syntax rules, descriptions of all available functions and operators, and sample models, see the on-line Spatial Modeler Language manual.

Data Types

In addition to the data types utilized by Graphical Modeling, script model objects can store data in the following data types:

• Complex — complex data (double precision)
• Color — three floating point numbers in the range of 0.0 to 1.0, representing intensity of red, green, and blue

Variables

Variables are objects in the Modeler which have been associated with a name using a declaration statement. The declaration statement defines the data type and object type of the variable. The declaration may also associate a raster variable with certain layers of an image file or a table variable with an attribute table. Assignment statements are used to set or change the value of a variable.
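The LINEARCOMB operation in the tasseled cap script multiplies each pixel's band vector by the coefficient matrix. A Python/NumPy sketch of that arithmetic (purely illustrative; the ERDAS function also handles file input/output, data typing, and windowing) might look like this, with a hypothetical 7-band image array:

import numpy as np

# 3 x 7 tasseled cap coefficient matrix from the script model (brightness, greenness, wetness).
coeffs = np.array([
    [ 0.33183,  0.33121,  0.55177,  0.42514,  0.48087,  0.0,  0.25252],
    [-0.24717, -0.16263, -0.40639,  0.85468,  0.05493,  0.0, -0.11749],
    [ 0.13929,  0.22490,  0.40359,  0.25178, -0.70133,  0.0, -0.45732],
])

# Hypothetical TM image: 7 bands of 100 x 100 pixels.
tm_bands = np.random.randint(0, 256, size=(7, 100, 100)).astype(float)

# Linear combination: each output layer is a weighted sum of the input bands.
tassel = np.tensordot(coeffs, tm_bands, axes=([1], [0]))
print(tassel.shape)   # (3, 100, 100): brightness, greenness, wetness layers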

Vector Analysis

Most of the operations discussed in the previous pages of this chapter focus on raster data. However, in a complete GIS data base, both raster and vector layers will be present. One of the most common applications involving the combination of raster and vector data is the updating of vector layers using current raster imagery as a backdrop for vector editing. For example, if a vector data base is more than one or two years old, then there are probably errors due to changes in the area (new roads, moved roads, new development, etc.). When displaying existing vector layers over a raster layer, the user can dynamically update the vector layer by digitizing new or changed features on the screen.

Vector layers can also be used to indicate an area of interest (AOI) for further processing. Assume the user wants to run a site suitability model on only areas designated for commercial development in the zoning ordinances. By selecting these zones in a vector polygon layer, the user could restrict the model to only those areas in the raster input files.

Editing Vector Coverages

Editable features are polygons (as lines), lines, label points, and nodes. There can be multiple features selected with a mixture of any and all feature types. Editing operations and commands can be performed on multiple or single selections. In addition to the basic editing operations (e.g., cut, copy, paste, delete), the user can also perform the following operations on the line features in multiple or single selections:

• spline — smooths or generalizes all currently selected lines using a specified grain tolerance
• generalize — weeds vertices from selected lines using a specified tolerance
• split/unsplit — makes two lines from one by adding a node or joins two lines by removing a node
• densify — adds vertices to selected lines at a user-specified tolerance
• reshape (for single lines only) — enables the user to move the vertices of a line

Reshaping (adding, deleting, or moving a vertex or node) can be done on a single selected line. Table 23 lists the general editing operations and the feature types that will support each of those operations.

Table 23: General Editing Operations and Supporting Feature Types

            Add    Delete   Move   Reshape
Points      yes    yes      yes    no
Lines       yes    yes      yes    yes
Polygons    yes    yes      yes    no
Nodes       yes    yes      yes    no

The Undo utility may be applied to any edits. The software stores all edits in sequential order, so that continually pressing Undo will reverse the editing.

Constructing Topology

To create spatial relationships between features in a vector layer, it is necessary to create topology. After a vector layer is edited, the topology must be constructed to maintain the topological relationships between features. You must also reconstruct the topology of vector layers imported into ERDAS IMAGINE.

Building and Cleaning Coverages

Either the build or clean option can be used to construct topology. Build recognizes only existing intersections (nodes), whereas clean creates intersections (nodes) wherever lines cross one another. Build processes points, lines, and polygons, but clean processes only lines and polygons. The differences in these two options are summarized in Table 24 (ESRI 1990).

When topology is constructed, each feature is assigned an internal number. These numbers are then used to determine line connectivity and polygon contiguity. Once calculated, these values are recorded and stored in that layer's associated attribute table. When topology is constructed, feature attribute tables are created with several automatically created fields. Different fields are stored for the different types of layers.

The automatically generated fields for a line layer are:

• FNODE# — the internal node number for the beginning of a line (from-node)
• TNODE# — the internal number for the end of a line (to-node)
• LPOLY# — the internal number for the polygon to the left of the line (will be zero for layers containing only lines and no polygons)
• RPOLY# — the internal number for the polygon to the right of the line (will be zero for layers containing only lines and no polygons)
• LENGTH — length of each line, measured in layer units
• Cover# — internal line number (values assigned by ERDAS IMAGINE)
• Cover-ID — user-ID (values modified by the user)

The automatically generated fields for a point or polygon layer are:

• AREA — area of each polygon, measured in layer units (will be zero for layers containing only points and no polygons)
• PERIMETER — length of each polygon boundary, measured in layer units (will be zero for layers containing only points and no polygons)
• Cover# — internal polygon number (values assigned by ERDAS IMAGINE)
• Cover-ID — user-ID (values modified by the user)
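The AREA and PERIMETER fields are plain geometric measurements in layer units. As a rough illustration only (not the algorithm ERDAS IMAGINE uses), the following Python sketch computes both quantities for a closed ring of (x, y) vertices using the shoelace formula; the sample coordinates are hypothetical.

    import math

    def polygon_area_perimeter(vertices):
        """Area (shoelace formula) and perimeter of a closed ring of (x, y) points.

        The first and last vertex are assumed to be the same point, and
        coordinates are in layer units (e.g., meters).
        """
        area2 = 0.0
        perimeter = 0.0
        for (x1, y1), (x2, y2) in zip(vertices[:-1], vertices[1:]):
            area2 += x1 * y2 - x2 * y1
            perimeter += math.hypot(x2 - x1, y2 - y1)
        return abs(area2) / 2.0, perimeter

    # A 100 m x 50 m rectangle: area 5000, perimeter 300.
    ring = [(0, 0), (100, 0), (100, 50), (0, 50), (0, 0)]
    print(polygon_area_perimeter(ring))   # (5000.0, 300.0)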

Table 24: Comparison of Building and Cleaning Coverages

    Capabilities                       Build     Clean
    Processes:
      Polygons                         Yes       Yes
      Lines                            Yes       Yes
      Points                           Yes       No
    Numbers features                   Yes       Yes
    Calculates spatial measurements    Yes       Yes
    Creates intersections              No        Yes
    Processing speed                   Faster    Slower

Construct topology using the Vector Utilities menu from the Vector icon in the IMAGINE icon panel.

You should not build or clean a layer that is displayed in a Viewer, nor should you try to display a layer that is being built or cleaned.

Errors

Constructing topology also helps to identify errors in the layer. Some of the common errors found are:

• Lines with less than two nodes
• Polygons that are not closed
• Polygons that have no label point or too many label points
• User-IDs that are not unique

Constructing topology can identify the errors mentioned above. When topology is constructed, line intersections are created, the lines that make up each polygon are identified, and a label point is associated with each polygon. Until topology is constructed, no polygons exist and lines that cross each other are not connected at a node, since there is no intersection.

When the build or clean options are used to construct the topology of a vector layer, potential node errors are marked with special symbols. These symbols are listed below (ESRI 1990).

A dangling node, represented by a square symbol, refers to the unconstructed node of a dangling line. Every line begins and ends at a node point. So if a line does not close properly, or was digitized past an intersection, it will register as a dangling node. In some cases, a dangling node may be acceptable. For example, in a street centerline map, cul-de-sacs are often represented by dangling nodes.

Pseudo nodes, drawn with a diamond symbol, occur where a single line connects with itself (an island) or where only two lines intersect. Pseudo nodes do not necessarily indicate an error or a problem. Acceptable pseudo nodes may represent an island (a spatial pseudo node) or the point where a road changes from pavement to gravel (an attribute pseudo node).

In polygon layers there may be label errors—usually no label point for a polygon, or more than one label point for a polygon. In the latter case, two or more points may have been mistakenly digitized for a polygon, or it may be that a line does not intersect another line, resulting in an open polygon.

Figure 166: Layer Errors (pseudo node [island], dangling nodes, no label point in polygon, label points in one polygon due to dangling node)

Errors detected in a layer can be corrected by changing the tolerances set for that layer and building or cleaning again, or by editing the layer manually, then running build or clean.

Refer to the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on editing vector layers.


CHAPTER 11
Cartography

Introduction

Maps and mapping are the subject of the art and science known as cartography—creating 2-dimensional representations of our 3-dimensional Earth. These representations were once hand-drawn with paper and pen. But today, map production is largely automated—and the final output is not always paper. The capabilities of a computer system are invaluable to map users, who often need to know much more about an area than can be reproduced on paper, no matter how large that piece of paper is or how small the annotation is. Maps stored on a computer can be queried, analyzed, and updated quickly.

In the past, map making was carried out by mapping agencies who took the analyst's (be they surveyors, photogrammetrists, or draftsmen) information and created a map to illustrate that information. But now, in many cases, the analyst is the cartographer and can design his maps to best suit the data and the end user.

As the veteran GIS and image processing authority Roger F. Tomlinson said, "Mapped and related statistical data do form the greatest storehouse of knowledge about the condition of the living space of mankind." With this thought in mind, it only makes sense that maps be created as accurately as possible and be as accessible as possible.

This chapter defines some basic cartographic terms and explains how maps are created within the ERDAS IMAGINE environment. This chapter concentrates on the production of digital maps.

Use the ERDAS IMAGINE Map Composer to create hardcopy and softcopy maps and presentation graphics.

See "CHAPTER 12: Hardcopy Output" for information about printing hardcopy maps.

Types of Maps

A map is a graphic representation of spatial relationships on the earth or other planets. Maps can take on many forms and sizes, depending on the intended use of the map. Maps no longer refer only to hardcopy output. In this manual, the maps discussed begin as digital files and may be printed later as desired. Some of the different types of maps are defined below.

Aspect — A map that shows the prevailing direction that a slope faces at each pixel. Aspect maps are often color-coded to show the eight major compass directions or any of 360 degrees.

Base — A map portraying background reference information onto which other information is placed. Base maps usually show the location and extent of natural earth surface features and permanent man-made objects. Raster imagery, orthophotos, and orthoimages are often used as base maps.

Bathymetric — A map portraying the shape of a water body or reservoir using isobaths (depth contours).

Cadastral — A map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation.

Choropleth — A map portraying properties of a surface using area symbols. Area symbols usually represent categorized classes of the mapped phenomenon.

Composite — A map on which the combined information from different thematic maps is presented.

Contour — A map in which lines are used to connect points of equal elevation. Lines are often spaced in increments of ten or twenty feet or meters.

Derivative — A map created by altering, combining, or through the analysis of other maps.

Index — A reference map that outlines the mapped area, identifies all of the component maps for the area if several map sheets are required, and identifies all adjacent map sheets.

Inset — A map that is an enlargement of some congested area of a smaller scale map, and that is usually placed on the same sheet with the smaller scale main map.

Isarithmic — A map that uses isarithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. Also called an isometric map.

Isopleth — A map on which isopleths (lines representing quantities that cannot exist at a point, such as population density) are used to represent some selected quantity.

Morphometric — A map representing morphological features of the earth's surface.

Outline — A map showing the limits of a specific set of mapping entities, such as counties, NTS quads, etc. Outline maps usually contain a very small number of details over the desired boundaries with their descriptive codes.

Planimetric — A map showing only the horizontal position of geographic objects, without topographic features or elevation contours.

Relief — Any map that appears to be, or is, 3-dimensional. Also called a shaded relief map.

Slope — A map which shows changes in elevation over distance. Slope maps are usually color-coded according to the steepness of the terrain at each pixel.

Thematic — A map illustrating the class characterizations of a particular spatial variable such as soils, land cover, hydrology, etc.

Topographic — A map depicting terrain relief.

Viewshed — A map showing only those areas visible (or invisible) from a specified point(s). Also called a line-of-sight map or a visibility map.

In ERDAS IMAGINE, maps are stored as a map file with a .map extension.

See "APPENDIX B: File Formats and Extensions" for information on the format of the .map file.

Thematic Maps

Thematic maps comprise a large portion of the maps that many organizations create. For this reason, this map type will be explored in more detail.

Thematic maps may be subdivided into two groups:

• qualitative
• quantitative

A qualitative map shows the spatial distribution or location of a kind of nominal data. For example, a map showing corn fields in the United States would be a qualitative map. It would not show how much corn is produced in each location, or production relative to the other areas.

A quantitative map displays the spatial aspects of numerical data. A map showing corn production (volume) in each area would be a quantitative map. Quantitative maps show ordinal (less than/greater than) and interval/ratio (how much different) scale data (Dent 1985).

You can create thematic data layers from continuous data (aerial photography and satellite images) using the ERDAS IMAGINE classification capabilities. See "Chapter 6: Classification" for more information.

Base Information

Thematic maps should include a base of information so that the reader can easily relate the thematic data to the real world. This base may be as simple as an outline of counties, states, or countries, to something more complex, such as an aerial photograph or satellite image. For example, in a thematic map showing flood plains in the Mississippi River valley, the user could overlay the thematic data onto a line coverage of state borders or a satellite image of the area. The satellite image can provide more detail about the areas bordering the flood plains. This may be valuable information when planning emergency response and resource management efforts for the area. Satellite images can also provide very current information about an area, and can assist the user in assessing the accuracy of a thematic image.

In the past, it was difficult and expensive to produce maps that included both thematic and continuous data, but technological advances have made this easy. In ERDAS IMAGINE, you can include multiple layers in a single map composition.

See Map Composition on page 432 for more information about creating maps.

Color Selection

The colors used in thematic maps may or may not have anything to do with the class or category of information shown. Cartographers usually try to use a color scheme that highlights the primary purpose of the map. The map reader's perception of colors also plays an important role. Most people are more sensitive to red, followed by green, yellow, blue, and purple. Although color selection is left entirely up to the map designer, some guidelines have been established (Robinson and Sale 1969):

• When mapping interval or ordinal data, the higher ranks and greater amounts are generally represented by darker colors.
• Use blues for water.
• When mapping elevation data, start with blues for water, greens in the lowlands, ranging up through yellows and browns to reds in the higher elevations. This progression should not be used for series other than elevation.
• In land cover mapping, use yellows and tans for dryness and sparse vegetation and greens for lush vegetation.
• Use browns for land forms.
• In temperature mapping, use red, orange, and yellow for warm temperatures and blue, green, and gray for cool temperatures.

Use the Raster Attributes option in the Viewer to select and modify class colors.

Annotation

A map is more than just an image(s) on a background. Since a map is a form of communication, it must convey information that may not be obvious by looking at the image. Therefore, maps usually contain several annotation elements to explain the map. Annotation is any explanatory material that accompanies a map to denote graphical features on the map. This annotation may take the form of:

• scale bars
• legends
• neatlines, tick marks, and grid lines
• symbols (north arrows, etc.)
• labels (rivers, mountains, cities, etc.)
• descriptive text (title, copyright, credits, production notes, etc.)

The annotation listed above is made up of single elements. The basic annotation elements in ERDAS IMAGINE include:

• rectangles (including squares)
• ellipses (including circles)
• polygons and polylines
• text

These elements can be used to create more complex annotation, such as legends, scale bars, etc. These annotation components are actually groups of the basic elements and can be ungrouped and edited like any other graphic. The user can also create his or her own groups to form symbols that are not in the IMAGINE symbol library. (Symbols are discussed in more detail under "Symbols" on page 411.)

Create annotation using the Annotation tool palette in the Viewer or in a map composition.

How Annotation is Stored

An annotation layer is a set of annotation elements that is drawn in a Viewer or Map Composer window and stored in a file. Annotation that is created in a Viewer window is stored in a separate file from the other data in the Viewer. These annotation files are called overlay files (.ovr extension). Map annotation that is created in a Map Composer window is also stored in a .ovr file, which is named after the map composition. For example, the annotation for a file called lanier.map would be lanier.map.ovr.

See "APPENDIX B: File Formats and Extensions" for information on the format of the .ovr file.

Scale

Map scale is a statement that relates distance on a map to distance on the earth's surface. It is perhaps the most important information on a map, since the level of detail and map accuracy are both factors of the map scale. Scale is directly related to the map extent, or the area of the earth's surface to be mapped. If a relatively small area is to be mapped, such as a neighborhood or subdivision, then the scale can be larger. If a large area is to be mapped, such as an entire continent, the scale must be smaller. As a rule, the smaller the scale, the less detailed the map can be. Generally, anything smaller than 1:250,000 is considered small-scale.

Scale can be reported in several ways, including:

• representative fraction
• verbal statement
• scale bar

Representative Fraction

Map scale is often noted as a simple ratio or fraction called a representative fraction. A map in which one inch on the map equals 24,000 inches on the ground could be described as having a scale of 1:24,000 or 1/24,000. The units on both sides of the ratio must be the same.

Verbal Statement

A verbal statement of scale describes the distance on the map to the distance on the ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to 16 miles. The units on the map and on the ground do not have to be the same in a verbal statement. One-inch and 6-inch maps of the British Ordnance Survey are often referred to by this method (1 inch to 1 mile, 6 inches to 1 mile) (Robinson and Sale 1969).

Use the Text tool to create a verbal statement.

Scale Bars

A scale bar is a graphic annotation element that describes map scale. It shows the distance on paper that represents a geographical distance on the map. Maps often include more than one scale bar to indicate various measurement systems, such as kilometers and miles.

Figure 167: Sample Scale Bars (miles and kilometers)

Use the Scale Bar tool in the Annotation tool palette to automatically create representative fractions and scale bars.
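Converting a representative fraction into a verbal statement is simple unit arithmetic. The following Python sketch is illustrative only (the function name is invented here); it reproduces the 1:1,000,000 "approximately 1 inch to 16 miles" example above.

    def verbal_statement(scale_denominator):
        """Ground distance represented by one map inch, for a 1:n scale."""
        ground_inches = scale_denominator          # 1 inch on the map
        ground_feet = ground_inches / 12.0
        ground_miles = ground_inches / 63360.0     # 63,360 inches per mile
        return ground_feet, ground_miles

    feet, miles = verbal_statement(24000)
    print(f"1:24,000 -> 1 inch represents {feet:.0f} ft ({miles:.3f} mi)")
    # 1:24,000 -> 1 inch represents 2000 ft (0.379 mi)

    feet, miles = verbal_statement(1000000)
    print(f"1:1,000,000 -> 1 inch represents about {miles:.1f} miles")   # ~15.8 miles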

Common Map Scales

The user can create maps with an unlimited number of scales; however, there are some commonly used scales. Table 25 lists these scales and their equivalents (Robinson and Sale 1969).

Table 25: Common Map Scales. For each map scale, the table gives the ground distance represented by 1/40 inch, 1 centimeter, and 1 inch on the map, and the map distance by which 1 kilometer and 1 mile on the ground are represented. The scales listed are 1:2,000; 1:5,000; 1:10,000; 1:15,840; 1:20,000; 1:24,000; 1:25,000; 1:31,680; 1:50,000; 1:62,500; 1:63,360; 1:75,000; 1:80,000; 1:100,000; 1:125,000; 1:250,000; 1:500,000; and 1:1,000,000.

Table 26 shows the number of pixels per inch for selected scales and pixel sizes.

Table 26: Pixels per Inch. Rows give the pixel size in meters (1, 2, 2.5, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, 700, 800, 900, and 1000 m); columns give the map scale (1"=100' 1:1200, 1"=200' 1:2400, 1"=500' 1:6000, 1"=1000' 1:12000, 1"=1500' 1:18000, 1"=2000' 1:24000, 1"=4167' 1:50000, and 1"=1 mile 1:63360).

Courtesy of D. Cunningham and D. Way, The Ohio State University.
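The entries in Table 26 follow directly from the scale and the pixel size: one map inch covers (scale denominator × 0.0254) meters on the ground, and dividing by the pixel size gives pixels per inch. A short Python sketch of that arithmetic (illustrative only, not an ERDAS IMAGINE function):

    def pixels_per_inch(scale_denominator, pixel_size_m):
        """Number of ground pixels spanned by one inch on a map at 1:n scale."""
        ground_meters_per_map_inch = scale_denominator * 0.0254
        return ground_meters_per_map_inch / pixel_size_m

    # 30 m pixels on a 1:24,000 map: 24,000 * 0.0254 / 30 = ~20.3 pixels per inch.
    print(round(pixels_per_inch(24000, 30), 2))    # 20.32
    # 10 m pixels on a 1:63,360 (1 inch = 1 mile) map.
    print(round(pixels_per_inch(63360, 10), 2))    # 160.93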

Table 27 lists the number of acres and hectares per pixel for various pixel sizes.

Table 27: Acres and Hectares per Pixel

    Pixel Size (m)    Acres       Hectares
    1                 0.0002      0.0001
    2                 0.0010      0.0004
    2.5               0.0015      0.0006
    5                 0.0062      0.0025
    10                0.0247      0.0100
    15                0.0556      0.0225
    20                0.0988      0.0400
    25                0.1544      0.0625
    30                0.2224      0.0900
    35                0.3027      0.1225
    40                0.3954      0.1600
    45                0.5004      0.2025
    50                0.6178      0.2500
    75                1.3900      0.5625
    100               2.4710      1.0000
    150               5.5598      2.2500
    200               9.8842      4.0000
    250               15.4440     6.2500
    300               22.2394     9.0000
    350               30.2703     12.2500
    400               39.5367     16.0000
    450               50.0386     20.2500
    500               61.7761     25.0000
    600               88.9576     36.0000
    700               121.0812    49.0000
    800               158.1468    64.0000
    900               200.1546    81.0000
    1000              247.1044    100.0000

Courtesy of D. Cunningham and D. Way, The Ohio State University.
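The acres and hectares columns are simply the pixel area converted to those units. A minimal Python sketch of the conversion (illustrative only; it assumes square pixels and the standard 4,046.86 m² per acre):

    SQ_METERS_PER_HECTARE = 10_000.0
    SQ_METERS_PER_ACRE = 4_046.86        # 1 acre = 43,560 sq ft

    def area_per_pixel(pixel_size_m):
        """Hectares and acres covered by one square pixel of the given size."""
        sq_m = pixel_size_m ** 2
        return sq_m / SQ_METERS_PER_HECTARE, sq_m / SQ_METERS_PER_ACRE

    for size in (10, 30, 100):
        ha, ac = area_per_pixel(size)
        print(f"{size:>4} m pixel: {ha:.4f} ha, {ac:.4f} acres")
    #   10 m pixel: 0.0100 ha, 0.0247 acres
    #   30 m pixel: 0.0900 ha, 0.2224 acres
    #  100 m pixel: 1.0000 ha, 2.4711 acres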

Legends

A legend is a key to the colors, symbols, and line styles that are used in a map. Legends are especially useful for maps of categorical data displayed in pseudo color, where each color represents a different feature or category. A legend can also be created for a single layer of continuous data, displayed in gray scale. Legends are likewise used to describe all unknown or unique symbols utilized. Symbols in legends should appear exactly the same size and color as they appear on the map (Robinson and Sale 1969).

Figure 168: Sample Legend (pasture, forest, swamp, developed)

Use the Legend tool in the Annotation tool palette to automatically create color legends. Symbol legends are not created automatically, but can be created manually.

Neatlines, Tick Marks, and Grid Lines

Neatlines, tick marks, and grid lines serve to provide a georeferencing system for map detail and are based on the map projection of the image shown.

• A neatline is a rectangular border around the image area of a map. It differs from the map border in that the border usually encloses the entire map, not just the image area.
• Tick marks are small lines along the edge of the image area or neatline that indicate regular intervals of distance.
• Grid lines are intersecting lines that indicate regular intervals of distance, based on a coordinate system. Usually, they are an extension of tick marks. It is often helpful to place grid lines over the image area of a map. This is becoming less common on thematic maps, but is really up to the map designer. If the grid lines will help readers understand the content of the map, they should be used.

Figure 169: Sample Neatline, Tick Marks, and Grid Lines (neatline, grid lines, tick marks)

Grid lines may also be referred to as a graticule. Graticules are discussed in more detail in "Map Projections" on page 416.

Use the Grid/Tick tool in the Annotation tool palette to create neatlines, tick marks, and grid lines. Tick marks and grid lines can also be created over images displayed in a Viewer. See the On-Line Help for instructions.

Plan Profile Function Figure 170: Sample Symbols Field Guide 411 . (Dent 1985). They are traditionally used to represent amounts that vary from place to place. squares. and triangles. Therefore. they represent tangible objects. such as trees. etc. on a map of a state park. such as population density. amount of rainfall. For example. trees. For example. the symbol for a house might be a square. etc. a set of symbols is devised to represent real-world objects. Profile symbols generally represent vertical objects. Both replicative and abstract symbols are composed of one or more of the following annotation elements: • • • point line area Symbol Types These basic elements can be combined to create three different types of replicative symbols: • • • plan — formed after the basic outline of the object it represents. such as coastlines. There are two major classes of symbols: • • replicative abstract Replicative symbols are designed to look like their real-world counterparts. such as circles. and houses. railroads. since most houses are rectangular. a symbol of a tent would indicate the location of a camping area.Symbols Symbols Since maps are a greatly reduced version of the real-world. objects cannot be depicted in their true shape or size. oil wells. Abstract symbols usually take the form of geometric shapes. function — formed after the activity that a symbol represents. windmills. profile — formed like the profile of an object.

Symbols can have different sizes, colors, and patterns to indicate different meanings within a map. The use of size, color, and pattern is generally used to show qualitative or quantitative differences among areas marked. For example, if a circle is used to show cities and towns, larger circles would be used to show areas with higher population. A specific color could be used to indicate county seats. Since symbols are not drawn to scale, their placement is crucial to effective communication.

Use the Symbol tool in the Annotation tool palette and the symbol library to place symbols in maps.

Labels and Descriptive Text

Place names and other labels convey important information to the reader about the features on the map. Any features that will help orient the reader or are important to the content of the map should be labeled. Descriptive text on a map can include the map title and subtitle, captions, credits, production notes, or other explanatory material.

Title

The map title usually draws attention by virtue of its size. It focuses the reader's attention on the primary purpose of the map. The title may be omitted, however, if captions are provided outside of the image area (Dent 1985).

Credits

Map credits (or source) can include the data source and acquisition date, accuracy information, and other details that are required or helpful to readers. For example, if the user includes data which they do not own in a map, they must give credit to the owner.

Use the Text tool in the Annotation tool palette to add labels and descriptive text to maps.

Typography and Lettering

The choice of type fonts and styles and how names are lettered can make the difference between a clear and attractive map and a jumble of imagery and text. As with many other aspects of map design, this is a very subjective area and many organizations already have guidelines to use. This section is intended as an introduction to the concepts involved and to convey traditional guidelines, where available.

If your organization does not have a set of guidelines for the appearance of maps and you plan to produce many in the future, it would be beneficial to develop a style guide specifically for mapping. ERDAS IMAGINE enables you to make map templates to facilitate the development of map standards within your organization. This will ensure that all of the maps produced follow the same conventions, regardless of who actually makes the map.

Type Styles

Type style refers to the appearance of the text and may include font, size, and style (bold, italic, underline, etc.). Although the type styles used in maps are purely a matter of the designer's taste, the following techniques help to make maps more legible (Robinson and Sale 1969; Dent 1985).

• Do not use too many different typefaces in a single map. Generally, one or two styles are enough when also using the variations of those type faces (e.g., bold, italic, underline, etc.). When using two typefaces, use a serif and a sans serif, rather than two different serif fonts or two different sans serif fonts [e.g., Sans (sans serif) and Roman (serif) could be used together in one map].
• Avoid ornate text styles because they can be difficult to read.
• Exercise caution in using very thin letters that may not reproduce well. On the other hand, using letters that are too bold may obscure important information in the image.
• Use different sizes of type for showing varying levels of importance. For example, on a map with city and town labels, city names will usually be in a larger type size than the town names. Use no more than four to six different type sizes.
• Put more important text in labels, titles, and names in all capital letters and lesser important text in lowercase with initial capitals. (Studies have found that capital letters are more difficult to read, therefore lowercase letters might improve the legibility of the map, although names in which the letters must be spread out across a large area are better in all capital letters.)
• In the past, hydrology, landform, and other natural features were labeled in italic. However, this is not strictly adhered to by map makers today, although water features are still nearly always labeled in italic. This is a matter of personal preference.

Figure 171: Sample Sans Serif and Serif Typefaces with Various Styles Applied (Sans 10 pt and Roman 10 pt in regular, italic, bold, bold italic, and all caps)

Use the Styles dialog to adjust the style of text.

Lettering

Lettering refers to the way in which place names and other labels are added to a map. Letter spacing, orientation, and position are the three most important factors in lettering. Many organizations have developed their own rules for lettering. Here again, there are no set rules for how lettering is to appear. Much is determined by the purpose of the map and the end user. Here is a list of guidelines that have been used by cartographers in the past (Robinson and Sale 1969; Dent 1985).

• Names should be either entirely on land or water—not overlapping both.
• Lettering should generally be oriented to match the orientation structure of the map. In large-scale maps this means parallel with the upper and lower edges, and in small-scale maps, in line with the parallels of latitude.
• Type should not be curved (i.e., different from preceding bullet) unless it is necessary to do so.
• If lettering must be disoriented, it should never be set in a straight line, but should always have a slight curve.
• Names should be letter spaced (space between individual letters - kerning) as little as necessary.
• Where the continuity of names and other map data, such as lines and tones, conflicts with the lettering, the data, not the names, should be interrupted.
• Lettering should never be upside-down in any respect.
• Lettering that refers to point locations should be placed above or below the point, preferably above and to the right.
• The letters identifying linear features (roads, rivers, railroads, etc.) should not be spaced. The word(s) should be repeated along the feature as often as necessary to facilitate identification. These labels should be placed above the feature and river names should slant in the direction of the river flow (if the label is italic).
• For geographical names, use the native language of the intended map user. For an English-speaking audience, the name "Germany" should be used, rather than "Deutschland."

Figure 172: Good Lettering vs. Bad Lettering (Better: Atlanta, Savannah, GEORGIA; Worse: Atlanta, Savannah, G e o r g i a)

Text Color

Many cartographers argue that all lettering on a map should be black. However, the map may be well served by incorporating color into its design. In fact, studies have shown that coding labels by color can improve a reader's ability to find information (Dent 1985).

Map Projections

This section is adapted from Map Projections for Use with the Geographic Information System by Lee and Walsh, 1984.

A map projection is the manner in which the spherical surface of the earth is represented on a flat (two-dimensional) surface. This can be accomplished by direct geometric projection or by a mathematically derived transformation. There are many kinds of projections, but all involve transfer of the distinctive global patterns of parallels of latitude and meridians of longitude onto an easily flattened surface, or developable surface. The three most common developable surfaces are the cylinder, cone, and plane (Figure 173). A plane is already flat, while a cylinder or cone may be cut and laid out flat. Thus, map projections may be classified into three general families: cylindrical, conical, and azimuthal or planar.

Map projections are selected in the Projection Chooser. The Projection Chooser is accessible from the ERDAS IMAGINE icon panel, and from several other locations.

Properties of Map Projections

Regardless of what type of projection is used, it is inevitable that some error or distortion will occur in transforming a spherical surface into a flat surface. Ideally, a distortion-free map has four valuable properties:

• conformality
• equivalence
• equidistance
• true direction

Each of these properties is explained below. No map projection can be true in all of these properties. Therefore, each projection is devised to be true in selected properties, or most often, a compromise among selected properties. Projections that compromise in this manner are known as compromise projections.

Conformality is the characteristic of true shape, wherein a projection preserves the shape of any small geographical area, without stretching. This is accomplished by exact transformation of angles around points. One necessary condition is the perpendicular intersection of grid lines as on the globe. A conformal map or projection is one that has the property of true shape. The property of conformality is important in maps which are used for analyzing, guiding, or recording motion, as in navigation.

Equivalence is the characteristic of equal area, meaning that areas on one portion of a map are in scale with areas in any other portion. Preservation of equivalence involves inexact transformation of angles around points and thus, is mutually exclusive with conformality except along one or two selected lines. The property of equivalence is important in maps that are used for comparing density and distribution data, as in populations.

Equidistance is the characteristic of true distance measuring. The scale of distance is constant over the entire map. This property can be fulfilled on any given map from one, or at most two, points in any direction or along certain lines. Typically, reference lines such as the equator or a meridian are chosen to have equidistance and are termed standard parallels or standard meridians. Equidistance is important in maps that are used for analyzing measurements (i.e., road distances).

True direction is characterized by a direction line between two points that crosses reference lines, for example, meridians, at a constant angle or azimuth. An azimuth is an angle measured clockwise from a meridian, going north to east. The line of constant or equal direction is termed a rhumb line. The property of constant direction makes it comparatively easy to chart a navigational course. However, on a spherical surface, the shortest surface distance between two points is not a rhumb line, but a great circle, being an arc of a circle whose center is the center of the earth. Along a great circle, azimuths constantly change (unless the great circle is the equator or a meridian). Thus, a more desirable property than true direction may be where great circles are represented by straight lines. This characteristic is most important in aviation. Note that all meridians are great circles, but the only parallel that is a great circle is the equator.
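The difference between a rhumb line and a great circle can be made concrete with a little spherical trigonometry. The following Python sketch (a spherical earth of radius 6,371 km is assumed, and the place coordinates are approximate) computes the great-circle distance and the initial azimuth of the route; the azimuth keeps changing along the way, which is why a constant-azimuth rhumb line between the same points is longer.

    import math

    def great_circle(lat1, lon1, lat2, lon2, radius_km=6371.0):
        """Great-circle distance (haversine) and initial azimuth between two points."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat = p2 - p1
        dlon = math.radians(lon2 - lon1)
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        distance = 2 * radius_km * math.asin(math.sqrt(a))
        azimuth = math.degrees(math.atan2(
            math.sin(dlon) * math.cos(p2),
            math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)))
        return distance, azimuth % 360

    # New York to London (approximate coordinates).
    print(great_circle(40.7, -74.0, 51.5, -0.1))   # roughly (5570 km, 51 degrees)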

Figure 173: Projection Types (Regular Cylindrical, Transverse Cylindrical, Oblique Cylindrical, Regular Conic, Polar Azimuthal [planar], Oblique Azimuthal [planar])

Projection Types

Although a great number of projections have been devised, the majority of them are geometric or mathematical variants of the basic direct geometric projection families described below. Choice of the projection to be used will depend upon the true property or combination of properties desired for effective cartographic analysis.

Azimuthal Projections

Azimuthal projections, also called planar projections, are accomplished by drawing lines from a given perspective point through the globe onto a tangent plane. This is conceptually equivalent to tracing a shadow of a figure cast by a light source. A tangent plane intersects the global surface at only one point and is perpendicular to a line passing through the center of the sphere. Thus, these projections are symmetrical around a chosen center or central meridian. Choice of the projection center determines the aspect, or orientation, of the projection surface.

Azimuthal projections may be centered:

• on the poles (polar aspect)
• at a point on the equator (equatorial aspect)
• at any other orientation (oblique aspect)

The origin of the projection lines—that is, the perspective point—may also assume various positions. For example, it may be:

• the center of the earth (gnomonic)
• an infinite distance away (orthographic)
• on the earth's surface, opposite the projection plane (stereographic)

Conical Projections

Conical projections are accomplished by intersecting, or touching, a cone with the global surface and mathematically projecting lines onto this developable surface. A tangent cone intersects the global surface to form a circle. Along this line of intersection, the map will be error-free and possess equidistance. Usually, this line is a parallel, termed the standard parallel.

Cones may also be secant, and intersect the global surface, forming two circles that will possess equidistance. In this case, the cone slices underneath the global surface, between the standard parallels. Note that the use of the word "secant," in this instance, is only conceptual and not geometrically accurate. Conceptually, the conical aspect may be polar, equatorial, or oblique. Only polar conical projections are supported in ERDAS IMAGINE.

Figure 174: Tangent and Secant Cones (Tangent: one standard parallel; Secant: two standard parallels)

Cylindrical Projections

Cylindrical projections are accomplished by intersecting, or touching, a cylinder with the global surface. The surface is mathematically projected onto the cylinder, which is then "cut" and "unrolled." A tangent cylinder will intersect the global surface on only one line to form a circle, as with a tangent cone. This central line of the projection is commonly the equator and will possess equidistance. If the cylinder is rotated 90 degrees from the vertical (i.e., the long axis becomes horizontal), then the aspect becomes transverse, wherein the central line of the projection becomes a chosen standard meridian as opposed to a standard parallel. A secant cylinder, one slightly less in diameter than the globe, will have two lines possessing equidistance.

Figure 175: Tangent and Secant Cylinders (Tangent: one standard parallel; Secant: two standard parallels)

Perhaps the most famous cylindrical projection is the Mercator, which became the standard navigational map, possessing true direction and conformality.

Other Projections

The projections discussed so far are projections that are created by projecting from a sphere (the earth) onto a plane, cone, or cylinder. Many other projections cannot be created so easily.

Modified projections are modified versions of another projection. For example, the Space Oblique Mercator projection is a modification of the Mercator projection. These modifications are made to reduce distortion, often by including additional standard lines or a different pattern of distortion.

Pseudo projections have only some of the characteristics of another class projection. For example, the Sinusoidal is called a pseudocylindrical projection because all lines of latitude are straight and parallel, and all meridians are equally spaced. However, it cannot truly be a cylindrical projection, because all meridians except the central meridian are curved. This results in the Earth appearing oval instead of rectangular (ESRI 1991).

Geographical and Planar Coordinates

Map projections require a point of reference on the earth's surface. Most often this is the center, or origin, of the projection. This point is defined in two coordinate systems:

• geographical
• planar

Geographical

Geographical, or spherical, coordinates are based on the network of latitude and longitude (Lat/Lon) lines that make up the graticule of the earth. Within the graticule, lines of longitude are called meridians, which run north/south, with the prime meridian at 0˚ (Greenwich, England). Meridians are designated as 0˚ to 180˚, east or west of the prime meridian. The 180˚ meridian (opposite the prime meridian) is the International Dateline.

Lines of latitude are called parallels, which run east/west. Parallels are designated as 0˚ at the equator to 90˚ at the poles. The equator is the largest parallel. Latitude and longitude are defined with respect to an origin located at the intersection of the equator and the prime meridian. Lat/Lon coordinates are reported in degrees, minutes, and seconds. Map projections are various arrangements of the earth's latitude and longitude lines onto a plane.

Planar

Planar, or Cartesian, coordinates are defined by a column and row position on a planar grid (X,Y). The origin of a planar coordinate system is typically located south and west of the origin of the projection. Coordinates increase from 0,0 going east and north. The origin of the projection, being a "false" origin, is defined by values of false easting and false northing. In practice, this eliminates negative coordinate values and allows locations on a map projection to be defined by positive coordinate pairs. Values of false easting are read first and may be in meters or feet. Grid references always contain an even number of digits, and the first half refers to the easting and the second half the northing.
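False easting and false northing are simply offsets added to the projection-origin coordinates. The short Python sketch below is illustrative only; the sample point is hypothetical, and the 500,000 m offset is the UTM-style false easting that keeps points west of the central meridian positive.

    def to_grid(x_proj, y_proj, false_easting, false_northing):
        """Shift projection-origin coordinates to positive grid coordinates."""
        return x_proj + false_easting, y_proj + false_northing

    print(to_grid(-243_817.0, 3_456_789.0, 500_000.0, 0.0))
    # (256183.0, 3456789.0) -- both values positive, easting listed first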

Available Map Projections

In ERDAS IMAGINE, map projection information appears in the Projection Chooser, which is used to georeference images and to convert map coordinates from one type of projection to another. The Projection Chooser provides the following projections:

USGS Projections
• Albers Conical Equal Area
• Azimuthal Equidistant
• Equidistant Conic
• Equirectangular
• General Vertical Near-Side Perspective
• Geographic (Lon/Lat)
• Gnomonic
• Lambert Azimuthal Equal Area
• Lambert Conformal Conic
• Mercator
• Miller Cylindrical
• Modified Transverse Mercator
• Oblique Mercator (Hotine)
• Orthographic
• Polar Stereographic
• Polyconic
• Sinusoidal
• Space Oblique Mercator
• State Plane
• Stereographic
• Transverse Mercator
• UTM
• Van Der Grinten I

External Projections
• Bipolar Oblique Conic Conformal
• Cassini-Soldner
• Laborde Oblique Mercator
• Modified Polyconic
• Modified Stereographic
• Mollweide Equal Area
• Plate Carrée
• Rectified Skew Orthomorphic
• Robinson Pseudocylindrical
• Southern Orientated Gauss Conformal
• Winkel's Tripel

Choice of the projection to be used will depend upon the desired major property and the region to be mapped (Table 28). After choosing the desired map projection, several parameters are required for its definition (Table 29). These parameters fall into three general classes:

• definition of the spheroid
• definition of the surface viewing window
• definition of scale

When prompted, a menu of spheroids displays, along with appropriate prompts that enable the user to specify these parameters.

Units

Use the units of measure that are appropriate for the map projection type.

• Lat/Lon coordinates are expressed in decimal degrees. For example, the user can use the DD function to convert coordinates in degrees, minutes, seconds format to decimal. For 30˚51'12'':

      dd(30,51,12) = 30.85333
      -dd(30,51,12) = -30.85333
      or 30:51:12 = 30.85333

  The user can also enter Lat/Lon coordinates in radians.
• State Plane coordinates are expressed in feet.
• All other coordinates are expressed in meters.

Note also that values for longitude west of Greenwich, England, and values for latitude south of the equator are to be entered as negatives.
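The dd conversion is ordinary sexagesimal arithmetic. A Python equivalent (not the Spatial Modeler's own dd function) reproduces the example above:

    def dd(degrees, minutes, seconds):
        """Convert a degrees:minutes:seconds angle to decimal degrees."""
        sign = -1.0 if degrees < 0 else 1.0
        return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

    print(round(dd(30, 51, 12), 5))    # 30.85333
    print(round(-dd(30, 51, 12), 5))   # -30.85333, e.g., south latitude or west longitude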

Table 28: Map Projections (map projection; construction; property; use)

0  Geographic — N/A; N/A; Data entry, spherical coordinates
1  Universal Transverse Mercator — Cylinder (see #9); Conformal; Data entry, plane coordinates
2  State Plane — (see #4, 9, 20); Conformal; Data entry, plane coordinates
3  Albers Conical Equal Area — Cone; Equivalent; Middle latitudes, E-W expanses
4  Lambert Conformal Conic — Cone; Conformal; Middle latitudes, E-W expanses, flight (straight great circles)
5  Mercator — Cylinder; Conformal, true direction; Non-polar regions, navigation (straight rhumb lines)
6  Polar Stereographic — Plane; Conformal, true direction; Polar regions
7  Polyconic — Cone; Compromise; N-S expanses
8  Equidistant Conic — Cone; Equidistant; Middle latitudes, E-W expanses
9  Transverse Mercator — Cylinder; Conformal; N-S expanses
10 Stereographic — Plane; Conformal; Hemispheres, continents
11 Lambert Azimuthal Equal Area — Plane; Equivalent; Square or round expanses
12 Azimuthal Equidistant — Plane; Equidistant; Polar regions, radio/seismic work (straight great circles)
13 Gnomonic — Plane; Compromise; Navigation, seismic work (straight great circles)
14 Orthographic — Plane; Compromise; Globes, pictorial
15 General Vertical Near-Side Perspective — Plane; Compromise; Hemispheres or less
16 Sinusoidal — Pseudo-Cylinder; Equivalent; N-S expanses or equatorial regions
17 Equirectangular — Cylinder; Compromise; City maps, computer plotting (simplistic)
18 Miller Cylindrical — Cylinder; Compromise; World maps
19 Van der Grinten I — N/A; Compromise; World maps
20 Oblique Mercator — Cylinder; Conformal; Oblique expanses (e.g., Hawaiian islands), satellite tracking
21 Space Oblique Mercator — Cylinder; Conformal; Mapping of Landsat imagery
22 Modified Transverse Mercator — Cylinder; Conformal; Alaska

426 ERDAS . Numbers are used for reference only and correspond to the numbers used in Table 1.Table 29: Projection Parameters Projection type (#) a Parameter Definition of Spheroid Spheroid selections X X X X X X X X X X X X X X X X X X X 3 4 5 6 7 8b 9 1 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 9 b 20 21 b 2 2 Definition of Surface Viewing Window False easting False northing Longitude of central meridian Latitude of origin of projection Longitude of center of projection Latitude of center of projection Latitude of first standard parallel Latitude of second standard parallel Latitude of true scale Longitude below pole X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X Definition of Scale Scale factor at central meridian Height of perspective point above sphere Scale factor at center of projection X X X a. Additional parameters required for definition of the map projection are described in the text of Appendix C. b. Parameters for definition of map projection types 0-2 are not applicable and are described in the text.

Choosing a Map Projection

Map Projections Uses in a GIS

Selecting a map projection for the GIS data base will enable the user to (Maling 1992):

• decide how to best display the area of interest or illustrate the results of analysis
• register all imagery to a single coordinate system for easier comparisons
• test the accuracy of the information and perform measurements on the data

Deciding Factors

Depending on the user's applications and the uses for the maps created, one or several map projections may be used. Many factors must be weighed when selecting a projection, including:

• type of map
• special properties that must be preserved
• types of data to be mapped
• map accuracy
• scale

If the user is mapping a relatively small area, virtually any map projection will do. It is in mapping large areas (entire countries, continents, and the world) that the choice of map projection becomes more critical. In small areas, the amount of distortion in a particular projection is barely, if at all, noticeable. In large areas, there may be little or no distortion in the center of the map, but distortion will increase outward toward the edges of the map.

Guidelines

Since the sixteenth century, there have been three fundamental rules regarding map projection use (Maling 1992):

• if the country to be mapped lies in the tropics, use a cylindrical projection
• if the country to be mapped lies in the temperate latitudes, use a conical projection
• if the map is required to show one of the polar regions, use an azimuthal projection

These rules are no longer held so strongly. There are too many factors to consider in map projection selection for broad generalizations to be effective today. The purpose of a particular map and the merits of the individual projections must be examined before an educated choice can be made. However, there are some guidelines that may help a user select a projection (Pearson 1990):

• Statistical data should be displayed using an equal area projection to maintain proper proportions (although shape may be sacrificed).
• Equal area projections are well suited to thematic data.
• Where shape is important, use a conformal projection.

Spheroids

The previous discussion of direct geometric map projections assumes that the earth is a sphere, and for many maps this is satisfactory. However, due to rotation of the earth around its axis, the planet bulges slightly at the equator. This flattening of the sphere makes it an oblate spheroid, which is an ellipse rotated around its shorter axis.

Figure 176: Ellipse (major axis, minor axis, semi-major axis, semi-minor axis)

An ellipse is defined by its semi-major (long) and semi-minor (short) axes. The amount of flattening of the earth is expressed as the ratio:

    f = (a - b) / a                                   EQUATION 23

where:
    a = the equatorial radius (semi-major axis)
    b = the polar radius (semi-minor axis)

Most map projections use eccentricity (e²) rather than flattening. The relationship is:

    e² = 2f - f²                                      EQUATION 24

The flattening of the earth is about 1 part in 300 and becomes significant in map accuracy at a scale of 1:100,000 or larger.

Calculation of a map projection requires definition of the spheroid (or ellipsoid) in terms of axes lengths and eccentricity squared (or radius of the reference sphere). Several principal spheroids are in use by one or more countries. Differences are due primarily to calculation of the spheroid for a particular region of the earth's surface. Only recently have satellite tracking data provided spheroid determinations for the entire earth. However, these spheroids may not give the "best fit" for a particular region. In North America, the spheroid in use is the Clarke 1866 for NAD27 and GRS 1980 for NAD83 (State Plane).
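Equations 23 and 24 are easy to check numerically. The following Python sketch (illustrative only) evaluates them for the Clarke 1866 axes listed in Table 30:

    def flattening_and_eccentricity(a, b):
        """Flattening f = (a - b) / a and eccentricity squared e2 = 2f - f**2."""
        f = (a - b) / a
        e2 = 2 * f - f ** 2
        return f, e2

    # Clarke 1866 semi-major and semi-minor axes, in meters.
    f, e2 = flattening_and_eccentricity(6378206.4, 6356583.8)
    print(f"1/f = {1/f:.2f}, e2 = {e2:.6f}")   # 1/f = ~294.98, e2 = ~0.006768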

If other regions are to be mapped, different spheroids should be used. Upon choosing a desired projection type, the user has the option to choose from the following list of spheroids:

Clarke 1866, Clarke 1880, Bessel, New International 1967, International 1909, WGS 72, Everest, WGS 66, GRS 1980, Airy, Modified Everest, Modified Airy, Walbeck, Southeast Asia, Australian National, Krasovsky, Hough, Mercury 1960, Modified Mercury 1968, Sphere of Radius 6370977m, WGS 84, Helmert, Sphere of Nominal Radius of Earth

The spheroids listed above are the most commonly used. There are many other spheroids available, and they are listed in the Projection Chooser. These additional spheroids are not documented in this manual.

You can use the ERDAS IMAGINE Developers' Toolkit to add your own map projections and spheroids to IMAGINE.

The semi-major and semi-minor axes of all supported spheroids are listed in Table 30, as well as the principal uses of these spheroids.

Table 30: Spheroids (spheroid; semi-major axis / semi-minor axis; use)

Clarke 1866 — 6378206.4 / 6356583.8 — North America and the Philippines
Clarke 1880 — 6378249.145 / 6356514.86955 — France and Africa
Bessel (1841) — 6377397.155 / 6356078.96284 — Central Europe, Chile, and Indonesia
New International 1967 — 6378157.5 / 6356772.2 — As International 1909 below, more recent calculation
International 1909 (= Hayford) — 6378388.0 / 6356911.94613 — Remaining parts of the world not listed here
WGS 72 (World Geodetic System 1972) — 6378135.0 / 6356750.519915 — NASA (satellite)
Everest (1830) — 6377276.3452 / 6356075.4133 — India, Burma, and Pakistan
WGS 66 (World Geodetic System 1966) — 6378145.0 / 6356759.769356 — As WGS 72 above, older version
GRS 1980 (Geodetic Reference System) — 6378137.0 / 6356752.31414 — Expected to be adopted in North America for 1983 earth-centered coordinate system (satellite)
Airy (1940) — 6377563.0 / 6356256.91 — England
Modified Everest — 6377304.063 / 6356103.039 — As Everest above, more recent version
Modified Airy — 6377341.89 / 6356036.143 — As Airy above, more recent version
Walbeck (1819) — 6376896.0 / 6355834.8467 — Soviet Union, up to 1910
Southeast Asia — 6378155.0 / 6356773.3205 — As named
Australian National (1965) — 6378160.0 / 6356774.719 — Australia
Krasovsky (1940) — 6378245.0 / 6356863.0188 — Former Soviet Union and some East European countries
Hough — 6378270.0 / 6356794.343479 — As International 1909 above, with modification of ellipse axes
Mercury 1960 — 6378166.0 / 6356784.283666 — Early satellite, rarely used
Modified Mercury 1968 — 6378150.0 / 6356768.337303 — As Mercury 1960 above, more recent calculation
Sphere of Radius 6370997 m — 6370997.0 / 6370997.0 — A perfect sphere with the same surface area as the Clarke 1866 spheroid

Table 30: Spheroids (continued)

WGS 84 — 6378137.0 / 6356752.31424517929 — As WGS 72, more recent calculation
Helmert — 6378200.0 / 6356818.16962789092 — Egypt
Sphere of Nominal Radius of Earth — 6370997.0 / 6370997.0 — A perfect sphere

Map Composition

Learning Map Composition

Cartography and map composition may seem like an entirely new discipline to many GIS and image processing analysts—and that is partly true. But, by learning the basics of map design, the results of a user's analyses can be communicated much more effectively. Many GIS analysts may already know more about cartography than they realize, simply because they have access to map making software. Perhaps the first maps you made were imitations of existing maps, but that is how we learn. This chapter is certainly not a textbook on cartography; it is merely an overview of some of the issues involved in creating cartographically-correct products. Map composition is also much easier than in the past, when maps were hand drawn.

Plan the Map

After the user's analysis is complete, he or she can begin map composition. The first step in creating a map is to plan its contents and layout. The following questions will aid in the planning process:

• How will this map be used?
• Will the map have a single theme or many?
• Is this a single map, or is it part of a series of similar maps?
• Who is the intended audience? What is the level of their knowledge about the subject matter?
• Will it remain in digital form and be viewed on the computer screen or will it be printed?
• If it is going to be printed, how big will it be? Will it be printed in color or black and white?
• Are there map guidelines already set up by your organization?

The answers to these questions will help to determine the type of information that must go into the composition and the layout of that information. It is helpful for the user to know how they want the map to look before starting the ERDAS IMAGINE Map Composer. Doing so will ensure that all of the necessary data layers are available, and will make the composition phase go quickly.

For example, suppose you are going to do a series of maps about global deforestation for presentation to Congress, and you are going to print these maps in color on an electrostatic printer. This scenario might lead to the following conclusions:

• A format (layout) should be developed for the series, so that all the maps produced have the same style.
• The colors used should be chosen carefully, since the maps will be printed in color.
• Political boundaries might need to be included, since they will influence the types of actions that can be taken in each deforested area.
• Cultural features (roads, urban centers, etc.) may be added for locational reference.
• The typeface size and style to be used for titles, captions, and labels will have to be larger than for maps printed on 8.5" x 11.0" sheets.
• The type styles selected should be the same for all maps.
• Select symbols that are widely recognized, and make sure they are all explained in a legend.
• Include a statement about the accuracy of each map, since these maps may be used in very high-level decisions.

Once this information is in hand, the user can actually begin sketching the look of the map on a sheet of paper.

See the Map Composer section of the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating a map. Refer to the On-Line Help for details about how Map Composer works.

Map Accuracy
Maps are often used to influence legislation, promote a cause, or enlighten a particular group before decisions are made. In these cases, map accuracy is of the utmost importance. There are many factors that influence map accuracy: the projection used, scale, base data, generalization, etc. The analyst/cartographer must be aware of these factors before map production begins, because the accuracy of the map will, in a large part, determine its usefulness. It is usually up to individual organizations to perform accuracy assessment and decide how those findings are reflected in the products they produce. However, several agencies have established guidelines for map makers.

US National Map Accuracy Standard
The United States Bureau of the Budget has developed the US National Map Accuracy Standard in an effort to standardize accuracy reporting on maps. These guidelines are summarized below (Fisher 1991):
• On scales smaller than 1:20,000, not more than 10 percent of points tested should be more than 1/50 inch in horizontal error, where points refer only to points that can be well-defined on the ground.
• On maps with scales larger than 1:20,000, the corresponding error term is 1/30 inch.
• At no more than 10 percent of the elevations tested will contours be in error by more than one half of the contour interval.
• Accuracy should be tested by comparison of actual map data with survey data of higher accuracy (not necessarily with ground truth).
• If maps have been tested and do meet these standards, a statement should be made to that effect in the legend.
• Maps that have been tested but fail to meet the requirements should omit all mention of the standards on the legend.

USGS Land Use and Land Cover Map Guidelines
The United States Geological Survey (USGS) has set standards of their own for land use and land cover maps (Fisher 1991):
• The minimum level of accuracy in identifying land use and land cover categories is 85%.
• The several categories shown should have about the same accuracy.
• Accuracy should be maintained between interpreters and times of sensing.

Field Guide 435 . Up to only 10% of pedons may be of other soil types than those named if they do present a major hindrance to land management. care must be taken in pursuing this avenue if it is necessary to maintain a particular level of accuracy. If the hardcopy maps that are digitized are outdated. the digitized map may negatively influence the overall accuracy of the data base. No single included soil type may occupy more than 10% of the area of the map unit.Map Accuracy USDA SCS Soils Maps Guidelines The United States Department of Agriculture (USDA) has set standards for Soil Conservation Service (SCS) soils maps (Fisher 1991): • • • Up to 25% of the pedons may be of other soil types than those named if they do not present a major hindrance to land management. Digitized Hardcopy Maps Another method of expanding the data base is by digitizing existing hardcopy maps. or were not produced using the same accuracy standards that are currently in use. Although this may seem like an easy way to gather more information.


CHAPTER 12
Hardcopy Output

Introduction
Hardcopy output refers to any output of image data to paper. These topics are covered in this chapter:
• printing maps
• the mechanics of printing

Printing Maps
ERDAS IMAGINE enables the user to create and output a variety of types of hardcopy maps.

Scaled Maps
A scaled map is a georeferenced map that has been projected to a map projection, and is accurately laid-out and referenced to represent distances and locations. A scaled map usually has a legend, with several referencing features, that includes a scale, such as “1 inch = 1000 feet”. The scale is often expressed as a ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on the ground.

See "CHAPTER 8: Rectification" for information on rectifying and georeferencing images and "CHAPTER 11: Cartography" for information on creating maps.

Printing Large Maps
Some scaled maps will not fit on the paper that is used by the printer. These methods are used to print and store large maps:
• A book map is laid out like the pages of a book. Each page fits on the paper used by the printer. There is a border, but no tick marks on every page.
• A paneled map is designed to be spliced together into a large paper map. Therefore, borders and tick marks appear on the outer edges of the large map.

Figure 177: Layout for a Book Map and a Paneled Map (both layouts show the map composition, neatlines, and tick marks)

Scale and Resolution
The following scales and resolutions will be noticeable during the process of creating a map composition and sending the composition to a hardcopy device:
• spatial resolution of the image
• display scale of the map composition
• map scale of the image(s)
• map composition to paper scale
• device resolution

Spatial Resolution
Spatial resolution is the area on the ground represented by each raw image data pixel, or the area that one pixel represents.

Display Scale
Display scale is the distance on the screen as related to one unit on paper. For example, if the map composition is 24 inches by 36 inches, it would not be possible to view the entire composition on the screen. Therefore, the scale could be set to 1:0.25 so that the entire map composition would be in view.

Map Scale
The map scale is the distance on a map as related to the true distance on the ground, measured in map units. The map scale is defined when the user creates an image area in the map composition. One map composition can have multiple image areas set at different scales. These areas may need to be shown at different scales for different applications.

Map Composition to Paper Scale
This scale is the original size of the map composition as related to the desired output size on paper.

Device Resolution
The number of dots that are printed per unit, for example, 300 dots per inch (DPI).

Use the ERDAS IMAGINE Map Composer to define the above scales and resolutions.

Map Scaling Examples
The ERDAS IMAGINE Map Composer enables the user to define a map size, as well as the size and scale for the image area within the map composition. The examples in this section focus on the relationship between these factors and the output file created by Map Composer for the specific hardcopy device or file format. Figure 178 is the map composition that will be used in the examples. This composition was originally created using IMAGINE Map Composer at a size of 22” × 34”, and the hardcopy output must be in two different formats:
• It must be output to a PostScript printer on an 8.5” × 11” piece of paper.
• A TIFF file must be created and sent to a film recorder having a 1,000 DPI resolution.

Figure 178: Sample Map Composition

Output to PostScript Printer
Since the map was created at 22” × 34”, the map composition to paper scale will need to be calculated so that the composition will fit on an 8.5” × 11” piece of paper. To determine the map composition to paper scale factor, it is necessary to calculate the most limiting direction. Since the printable area for the printer is approximately 8.1” × 8.6”, these numbers will be used in the calculation:
• 8.1” / 22” = 0.36 (horizontal direction)
• 8.6” / 34” = 0.23 (vertical direction)
The vertical direction is the most limiting; therefore, the map composition to paper scale would be set for 0.23.

If the specified size of the map (width and height) is greater than the printable area for the printer, then the composition will be paneled, and the output hardcopy map will be paneled. See the hardware manual of the hardcopy device for information about the printable area of the device.

Use the Print Map Composition dialog to output a map composition to a PostScript printer.

Output to TIFF
The limiting factor in this example is not page size, but disk space (600 MB total). A three-band .img file must be created in order to convert the map composition to a .tif file. Once the .img file is created and exported to TIFF format, it can be sent to a film recorder that accepts .tif files. The .tif file will be output to a film recorder with a 1,000 DPI device resolution. If this scale is set for a 1 to 1 ratio, the .img file could be very large. To determine the number of megabytes for the map composition, the X and Y dimensions need to be calculated:
• X = 22 inches * 1,000 dots/inch = 22,000
• Y = 34 * 1,000 = 34,000
• 22,000 * 34,000 * 3 = 2244 MB (multiplied by 3 since there are 3 bands)

Although this appears to be an unmanageable file size, it is possible to reduce the file size with little image degradation. Due to the three bands and the high resolution, the .img file created from the map composition must be less than half of the available disk space to accommodate the .tif file, since the total disk space is only 600 megabytes. Dividing the map composition by three in both X and Y directions (2,244 MB / 3 / 3) results in approximately a 250 megabyte file. This file size is small enough to process and leaves enough room for the .tif file. This division is accomplished by specifying a 1/3 or 0.333 map composition to paper scale when outputting the map composition to an .img file. Remember, the file must be enlarged three times to compensate for the reduction applied during the .img file creation, before the .img to .tif conversion.
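The arithmetic in this example is simple enough to script. The following Python sketch (the variable names and layout are assumptions made for illustration, not part of ERDAS IMAGINE) reproduces the scale-factor and file-size calculations above:

```python
# Map composition to paper scale: the smaller ratio is the limiting direction.
comp_w, comp_h = 22.0, 34.0        # map composition size (inches)
print_w, print_h = 8.1, 8.6        # printable area of the printer (inches)

scale = min(print_w / comp_w, print_h / comp_h)
print(f"composition-to-paper scale = {scale:.2f}")        # ~0.23

# Size of a 3-band, 8-bit file rasterized at 1,000 DPI (1 byte per band per dot).
dpi = 1000
bands = 3
size_mb = (comp_w * dpi) * (comp_h * dpi) * bands / 1e6
print(f"full-resolution size = {size_mb:.0f} MB")          # 2244 MB

# Reducing both dimensions by 1/3 divides the size by 9.
print(f"reduced size = {size_mb / 9:.0f} MB")              # ~250 MB
```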

Use the ERDAS IMAGINE Print Map Composition dialog to output a map composition to an .img file.

Mechanics of Printing
This section describes the mechanics of transferring an image or map composition from a data file to a hardcopy map.

Halftone Printing
Halftoning is the process of converting a continuous tone image into a pattern of dots. A newspaper photograph is a common example of halftoning. The dots for halftoning are a fixed density: either a dot is there or it is not there. By using different patterns of dots, colors can have different intensities. To make a color illustration, halftones in the primary colors (cyan, magenta, and yellow), plus black, are overlaid. The halftone dots of different colors, in close proximity, create the effect of blended colors in much the same way that phosphorescent dots on a color computer monitor combine red, green, and blue to create other colors.

If a very large image file is being printed onto a small piece of paper, data file pixels will be skipped to accommodate the reduction. For scaled maps, each output pixel may contain one or more dot patterns.

Hardcopy Devices
The following hardcopy devices use halftoning to output an image or map composition:
• CalComp Electrostatic Plotters
• Canon PostScript Intelligent Processing Unit
• Linotronic Imagesetter
• Tektronix Inkjet Printer
• Tektronix Phaser Printer
• Versatec Electrostatic Plotter

See the user's manual for the hardcopy device for more information about halftone printing.

Continuous Tone Printing
Continuous tone printing enables the user to output color imagery using the four process colors (cyan, magenta, yellow, and black). By using varying percentages of these colors, it is possible to create a wide range of colors. The output is smoother than halftoning because the dots for continuous tone printing can vary in density. The quality of the output picture is similar to a photograph.

Example
There are different processes by which continuous tone printers generate a map. One example is a process called thermal dye transfer. The entire image or map composition is loaded into the printer's memory. The printer converts digital data from the host computer into a continuous tone image. While the paper moves through the printer, heat is used to transfer the dye from a ribbon, which has the dyes for all of the four process colors, to the paper. The density of the dot depends on the amount of heat applied by the printer to transfer the dye. The amount of heat applied is determined by the brightness values of the input image. This allows the printer to control the amount of dye that is transferred to the paper to create a continuous tone image.

Hardcopy Devices
The following hardcopy devices use continuous toning to output an image or map composition:
• IRIS Color Inkjet Printer
• Kodak XL7700 Continuous Tone Printer
• Tektronix Phaser II SD

NOTE: The above printers do not necessarily use the thermal dye transfer process to generate a map.

See the user's manual for the hardcopy device for more information about continuous tone printing.

Contrast and Color Tables
ERDAS IMAGINE contrast and color tables are used for some printing processes, just as they are used in displaying an image. The translation of data file values to brightness values is performed entirely by the software program. For continuous raster layers, they are loaded from the IMAGINE contrast table. For thematic layers, they are loaded from the color table.

RGB to CMY Conversion
Since a printer uses ink instead of light to create a visual image, the primary colors of pigment (cyan, magenta, and yellow) are used in printing, instead of the primary colors of light (red, green, and blue). The RGB primary colors are the opposites of the CMY colors, meaning, for example, that the presence of cyan in a color means an equal lack of red. The primary colors of pigment are subtractive: cyan, magenta, and yellow can be combined to make black through a subtractive process, whereas the primary colors of light are additive: red, green, and blue combine to make white (Gonzalez and Wintz 1977).

The data file values that are sent to the printer and the contrast and color tables that accompany the data file are all in the RGB color scheme. The RGB brightness values in the contrast and color tables must be converted to CMY values. To convert the values, each RGB brightness value is subtracted from the maximum brightness value to produce the brightness value for the opposite color:

C = MAX - R
M = MAX - G
Y = MAX - B

where:
MAX = the maximum brightness value
R = red value from lookup table
G = green value from lookup table
B = blue value from lookup table
C = calculated cyan value
M = calculated magenta value
Y = calculated yellow value

Black Ink
Although, theoretically, cyan, magenta, and yellow combine to create black ink, the color that results is often a dark, muddy brown. Many printers also use black ink for a truer black.

NOTE: Black ink is not available on all printers. Consult the user's manual for your printer.

Images often appear darker when printed than they do when displayed on the display device. Therefore, it may be beneficial to improve the contrast and brightness of an image before it is printed. Use the programs discussed in "CHAPTER 5: Enhancement" to brighten or enhance an image before it is printed.
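As a simple sketch of the subtraction described above (illustrative only, not the actual ERDAS IMAGINE printing code), the conversion of one RGB lookup-table entry to CMY values could be written in Python as follows:

```python
def rgb_to_cmy(r, g, b, max_value=255):
    """Convert RGB brightness values to CMY by subtracting from the maximum."""
    c = max_value - r   # cyan is the opposite of red
    m = max_value - g   # magenta is the opposite of green
    y = max_value - b   # yellow is the opposite of blue
    return c, m, y

print(rgb_to_cmy(200, 120, 40))   # (55, 135, 215)
```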


APPENDIX A
Math Topics

Introduction
This appendix is a cursory overview of some of the basic mathematical concepts that are applicable to image processing. Its purpose is to educate the novice reader, and to put these formulas and concepts into the context of image processing and remote sensing applications.

Summation
A commonly used notation throughout this and other discussions is the Sigma (Σ), used to denote a summation of values. For example, the notation

    10
    ∑ i
    i=1

is the sum of all values of i, ranging from 1 to 10, which equals:

    1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55
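The same summation can be written directly in most programming languages; a one-line Python equivalent is shown below for reference.

```python
total = sum(range(1, 11))   # 1 + 2 + ... + 10
print(total)                # 55
```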

Similarly, the value i may be a subscript, which denotes an ordered set of values. For example:

    4
    ∑ Qi = 3 + 5 + 7 + 2 = 17
    i=1

where:
Q1 = 3
Q2 = 5
Q3 = 7
Q4 = 2

Statistics

Histogram
In ERDAS IMAGINE image data files (.img), each data file value (defined by its row, column, and band) is a variable. IMAGINE supports the following data types:
• 1, 2, and 4-bit
• 8, 16, and 32-bit signed
• 8, 16, and 32-bit unsigned
• 32 and 64-bit floating point
• 64 and 128-bit complex floating point

Distribution, as used in statistics, is the set of frequencies with which an event occurs, or that a variable will have a particular value. A histogram is a graph of data frequency or distribution. For a single band of data, the horizontal axis of a histogram is the range of all possible data file values. The vertical axis is the number of pixels that have each data value.
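A histogram is straightforward to compute from raw data file values. The short Python sketch below (using NumPy on a made-up band, not the IMAGINE statistics code) counts the pixels at each value of an 8-bit band:

```python
import numpy as np

band = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)  # stand-in for one band

# One count per possible 8-bit data file value (0-255).
histogram = np.bincount(band.ravel(), minlength=256)

print(histogram[100])   # number of pixels whose data file value is 100
```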

Figure 179: Histogram (X axis: data file values, 0 to 255; Y axis: number of pixels)

Figure 179 shows the histogram for a band of data in which Y pixels have data value X. For example, in this graph, 300 pixels (Y) have the data file value of 100 (X).

Bin Functions
Bins are used to group ranges of data values together for better manageability. Histograms and other descriptor columns for 1, 2, 4, and 8-bit data are easy to handle, since they contain a maximum of 256 rows. However, to have a row in a descriptor table for every possible data value in floating point, complex, and 32-bit integer data would yield an enormous amount of information. Therefore, the bin function is provided to serve as a data reduction tool.

Example of a Bin Function
Suppose a user has a floating point data layer with values ranging from 0.0 to 1.0. The user could set up a descriptor table of 100 rows, with each row or bin corresponding to a data range of .01 in the layer. The bins would look like the following:

Bin Number    Data Range
0             X < 0.01
1             0.01 ≤ X < 0.02
2             0.02 ≤ X < 0.03
...
98            0.98 ≤ X < 0.99
99            0.99 ≤ X

Then, row 23 of the histogram table would contain the number of pixels in the layer whose value fell between .23 and .24.

Types of Bin Functions
The bin function establishes the relationship between data values and rows in the descriptor table. There are four types of bin functions used in ERDAS IMAGINE image layers:

• DIRECT: one bin per integer value. Used by default for 1, 2, 4, and 8-bit integer data, but may be used for other data types as well. The direct bin function may include an offset for negative data or data in which the minimum value is greater than zero. For example, a direct bin with 900 bins and an offset of -601 would look like the following:

Bin Number    Data Range
0             X ≤ -600.5
1             -600.5 < X ≤ -599.5
...
599           -2.5 < X ≤ -1.5
600           -1.5 < X ≤ -0.5
601           -0.5 < X < 0.5
602           0.5 ≤ X < 1.5
603           1.5 ≤ X < 2.5
...
898           296.5 ≤ X < 297.5
899           297.5 ≤ X

• LINEAR: establishes a linear mapping between data values and bin numbers, as in our first example, mapping the data range 0.0 to 1.0 to bin numbers 0 to 99. The bin number is computed by:

    bin = numbins * (x - min) / (max - min)
    if (bin < 0) bin = 0
    if (bin >= numbins) bin = numbins - 1

where:
bin = resulting bin number
numbins = number of bins
x = data value
min = lower limit (usually minimum data value)
max = upper limit (usually maximum data value)

• LOG: establishes a logarithmic mapping between data values and bin numbers. The bin number is computed by:

    bin = numbins * (ln (1.0 + ((x - min)/(max - min))) / ln (2.0))
    if (bin < 0) bin = 0
    if (bin >= numbins) bin = numbins - 1

• EXPLICIT: explicitly defines the mapping between each bin number and data range.
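A minimal Python version of the LINEAR bin function, following the formula above, could be written as follows (illustrative only; the function name is an assumption):

```python
def linear_bin(x, numbins, lo, hi):
    """Map a data value x in [lo, hi] to a bin number in [0, numbins - 1]."""
    bin_number = int(numbins * (x - lo) / (hi - lo))
    return max(0, min(bin_number, numbins - 1))   # clamp, as in the formula above

# The first example: 100 bins over the data range 0.0 to 1.0.
print(linear_bin(0.235, 100, 0.0, 1.0))   # 23
```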

Mean
The mean (µ) of a set of values is its statistical average, such that, if Qi represents a set of k values:

    µ = (Q1 + Q2 + Q3 + ... + Qk) / k

or

         k
    µ = (∑ Qi) / k
        i=1

The mean of data with a normal distribution is the value at the peak of the curve, the point where the distribution balances.

Normal Distribution
Our general ideas about an average, whether it be average age, average test score, or the average amount of spectral reflectance from oak trees in the spring, are made visible in the graph of a normal distribution, or bell curve.

Figure 180: Normal Distribution (X axis: data file values, 0 to 255; Y axis: number of pixels)

Average usually refers to a central value on a bell curve, as shown by the peak of the bell curve. In a normal distribution, most values are at or near the middle. Values that are more extreme are more rare, as shown by the tails at the ends of the curve.

The Normal Distributions are a family of bell shaped distributions that turn up frequently under certain special circumstances. For example, a normal distribution would occur if one were to compare the bands in a desert image. The bands would be very similar, but would vary slightly.

Each Normal Distribution uses just two parameters, σ and µ, to control the shape and location of the resulting probability graph through the equation:

    f(x) = (1 / (σ √(2π))) e^( -(x - µ)² / (2σ²) )

where:
x = the quantity whose distribution is being approximated
π and e = famous mathematical constants

The parameter µ controls how much the bell is shifted horizontally so that its average will match the average of the distribution of x, while σ adjusts the width of the bell to try to encompass the spread of the given distribution.

In choosing to approximate a distribution by the nearest of the Normal Distributions, we describe the many values in the bin function of its distribution with just two parameters. It is a significant simplification that can greatly ease the computational burden of many operations, but like all simplifications, it reduces the accuracy of the conclusions we can draw.

The normal distribution is the most widely encountered model for probability. Many natural phenomena can be predicted or estimated according to "the law of averages" that is implied by the bell curve (Larsen and Marx 1981).

A normal distribution in remotely sensed data is meaningful; it is a sign that some characteristic of an object can be measured by the average amount of electromagnetic radiation that the object reflects. This relationship between the data and a physical scene or object is what makes image processing applicable to various types of land analysis.

The mean and standard deviation are often used by computer programs that process and analyze image data.
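For reference, the normal probability density above can be evaluated directly; a small Python sketch is shown below.

```python
import math

def normal_pdf(x, mu, sigma):
    """Value of the normal density with mean mu and standard deviation sigma at x."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

print(normal_pdf(100.0, 100.0, 15.0))   # the peak of the curve, at x = mu
```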

Variance
The mean of a set of values locates only the average value; it does not adequately describe the set of values by itself. It is helpful to know how much the data varies from its mean. However, a simple average of the differences between each value and the mean equals zero in every case, by definition of the mean. Therefore, the squares of these differences are averaged so that a meaningful number results (Larsen and Marx 1981).

In theory, the variance is calculated as follows:

    Var Q = E ⟨ (Q - µQ)² ⟩

where:
E = expected value (weighted average)
² = squared to make the distance a positive number

In practice, the use of this equation for variance does not usually reflect the exact nature of the values that are used in the equation. These values are usually only samples of a large data set, and are, therefore, not known; the mean and variance of the entire data set are estimated. The equation used in practice is shown below. This is called the "minimum variance unbiased estimator" of the variance, or the sample variance (notated σQ²):

           k
    σQ² ≈ (∑ (Qi - µQ)²) / (k - 1)
          i=1

where:
i = a particular pixel
k = the number of pixels (the higher the number, the better the approximation)

The theory behind this equation is discussed in chapters on "Point Estimates" and "Sufficient Statistics," and covered in most statistics texts.

NOTE: The variance is expressed in units squared (e.g., square inches, square data values, etc.), so it may result in a number that is much higher than any of the original values.
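The sample variance estimator above translates directly into code; a brief Python sketch (using plain lists rather than IMAGINE data structures) is given below.

```python
def sample_variance(values):
    """Minimum variance unbiased estimator: sum of squared deviations over (k - 1)."""
    k = len(values)
    mean = sum(values) / k
    return sum((q - mean) ** 2 for q in values) / (k - 1)

print(sample_variance([3, 5, 7, 2]))   # 4.9166...
```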

Standard Deviation
Since the variance is expressed in units squared, a more useful value is the square root of the variance, which is expressed in units and can be related back to the original values (Larsen and Marx 1981). The square root of the variance is the standard deviation. Based on the equation for sample variance (s²), the sample standard deviation (sQ) for a set of values Q is computed as follows:

               k
    sQ = sqrt((∑ (Qi - µQ)²) / (k - 1))
              i=1

In any distribution:
• approximately 68% of the values are within one standard deviation of µ, that is, between µ-s and µ+s
• more than 1/2 of the values are between µ-2s and µ+2s
• more than 3/4 of the values are between µ-3s and µ+3s

An example of a simple application of these rules is seen in the ERDAS IMAGINE Viewer. When 8-bit data are displayed in the Viewer, IMAGINE automatically applies a 2 standard deviation stretch that remaps all data file values between µ-2s and µ+2s (more than 1/2 of the data) to the range of possible brightness values on the display device. Standard deviations are used because the lowest and highest data file values may be much farther from the mean than 2s.

For more information on contrast stretch, see "CHAPTER 5: Enhancement."
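The two-standard-deviation stretch described above can be sketched in a few lines of NumPy; this is a simplified illustration of the idea, not the Viewer's actual implementation.

```python
import numpy as np

def two_sd_stretch(band, out_max=255):
    """Linearly remap values between mu - 2s and mu + 2s to 0..out_max, clipping the tails."""
    mu, s = band.mean(), band.std()
    lo, hi = mu - 2 * s, mu + 2 * s
    stretched = (band.astype(np.float64) - lo) / (hi - lo) * out_max
    return np.clip(stretched, 0, out_max).astype(np.uint8)

band = np.random.normal(120, 20, size=(256, 256))
result = two_sd_stretch(band)
print(result.min(), result.max())   # 0 255
```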

k 454 ERDAS . the standard deviation describes how a fixed percentage of the data varies from the mean.Parameters As described above. they can be used to estimate other calculations about the data. whereas variance is the average square of the differences between values and their mean in one band. ERDAS IMAGINE classification algorithms that use signature files (. Covariance In many image processing procedures. The closer that the distribution of the data resembles a normal curve. to vary with each other. Theoretically speaking. but in different bands. Covariance measures the tendencies of data file values in the same pixel. since the mean and standard deviation of each sample or cluster are stored in the file to represent the distribution of the values. the more accurate the parametric estimates of the data will be. covariance is expressed in units squared. it is much more convenient to estimate calculations with a mean and standard deviation than it is to repeatedly sample the actual data. Algorithms that use parameters are parametric. the relationships between two bands of data are important. covariance is the average product of the differences of corresponding values in two different bands from their respective means. Compare the following equation for covariance to the previous one for variance: Cov QR = E 〈 ( Q – µ Q ) ( R – µ R )〉 where: Q and R = data file values in two bands E = expected value In practice. When the mean and standard deviation are known. The mean and standard deviation are known as parameters. These bands must be linear. in relation to the means of their respective bands. which are sufficient to describe a normal curve (Johnston 1980).sig) are parametric. In computer programs. the sample covariance is computed with this equation: ∑ ( Qi – µQ ) ( Ri – µ R ) --------------------------------------------------------C QR ≈ i = 1 k where: i = a particular pixel k = the number of pixels Like variance.

Covariance Matrix
The covariance matrix is an n × n matrix that contains all of the variances and covariances within n bands of data. Below is an example of a covariance matrix for 4 bands of data:

              band A     band B     band C     band D
    band A    VarA       CovAB      CovAC      CovAD
    band B    CovBA      VarB       CovBC      CovBD
    band C    CovCA      CovCB      VarC       CovCD
    band D    CovDA      CovDB      CovDC      VarD

The covariance matrix is symmetrical; for example, CovAB = CovBA. The covariance of one band of data with itself is the variance of that band:

            k                                  k
    C QQ = (∑ (Qi - µQ)(Qi - µQ)) / (k - 1) = (∑ (Qi - µQ)²) / (k - 1)
           i=1                                i=1

Therefore, the diagonal of the covariance matrix consists of the band variances.

The covariance matrix is an organized format for storing variance and covariance information on a computer system, so that it needs to be computed only once. Also, the matrix itself can be used in matrix equations, as in principal components analysis.

See "Matrix Algebra" on page 462 for more information on matrices.
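A covariance matrix for n bands can be computed with a few lines of NumPy; the sketch below stacks the bands as variables and is only an illustration of the structure described above, not the IMAGINE statistics code.

```python
import numpy as np

# Three bands of a small image, each flattened to one row of observations.
bands = np.random.randint(0, 256, size=(3, 64 * 64)).astype(np.float64)

# Rows are variables (bands), columns are observations (pixels); divisor is k - 1.
cov = np.cov(bands)

print(cov.shape)                                              # (3, 3)
print(np.allclose(cov, cov.T))                                # True: the matrix is symmetrical
print(np.allclose(np.diag(cov), bands.var(axis=1, ddof=1)))   # diagonal = band variances
```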

Dimensionality of Data
Spectral dimensionality is determined by the number of sets of values being used in a process. In image processing, each band of data is a set of values. An image with four bands of data is said to be 4-dimensional (Jensen 1996).

NOTE: The letter n is used consistently in this documentation to stand for the number of dimensions (bands) of image data.

Measurement Vector
The measurement vector of a pixel is the set of data file values for one pixel in all n bands. Although image data files are stored band-by-band, it is often necessary to extract the measurement vectors for individual pixels.

Figure 181: Measurement Vector (one pixel in Bands 1, 2, and 3, with values V1, V2, V3; n = 3)

According to Figure 181:
i = a particular band
Vi = the data file value of the pixel in band i

then the measurement vector for this pixel is:

    | V1 |
    | V2 |
    | V3 |

See "Matrix Algebra" on page 462 for an explanation of vectors.

Mean Vector
When the measurement vectors of several pixels are analyzed, a mean vector is often calculated. This is the vector of the means of the data file values in each band. It has n elements.

Figure 182: Mean Vector (a training sample in Bands 1, 2, and 3; the mean of the sample values in each band is µ1, µ2, µ3)

According to Figure 182:
i = a particular band
µi = the mean of the data file values of the pixels being studied, in band i

then the mean vector for this training sample is:

    | µ1 |
    | µ2 |
    | µ3 |
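Extracting a measurement vector and a mean vector from band-by-band storage is a simple indexing operation. The NumPy sketch below assumes, purely for illustration, that the image is held as an array with shape (bands, rows, columns).

```python
import numpy as np

# Image stored band-by-band: shape (n bands, rows, columns).
image = np.random.randint(0, 256, size=(3, 100, 100))

# Measurement vector of the pixel at row 10, column 20: one value per band.
measurement_vector = image[:, 10, 20]

# Mean vector of a training sample (here, a 5 x 5 window): one mean per band.
sample = image[:, 0:5, 0:5]
mean_vector = sample.reshape(3, -1).mean(axis=1)

print(measurement_vector.shape, mean_vector.shape)   # (3,) (3,)
```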

Feature Space
Many algorithms in image processing compare the values of two or more bands of data. The programs that perform these functions abstractly plot the data file values of the bands being studied against each other. An example of such a plot in two dimensions (two bands) is illustrated in Figure 183.

Figure 183: Two Band Plot (Band A data file value 180, Band B data file value 85, both axes 0 to 255)

In Figure 183, the pixel that is plotted has a measurement vector of:

    | 180 |
    |  85 |

NOTE: If the image is 2-dimensional, the plot doesn't always have to be 2-dimensional.

The graph above implies physical dimensions for the sake of illustration. Actually, these dimensions are based on spectral characteristics, represented by the digital image data. As opposed to physical space, the pixel above is plotted in feature space. Feature space is an abstract space that is defined by spectral units, such as an amount of electromagnetic radiation.

Feature Space Images
Several techniques for the processing of multiband data make use of a two-dimensional histogram, or feature space image. This is simply a graph of the data file values of one band of data against the values of another band.

Figure 184: Two Band Scatterplot (Band A data file values plotted against Band B data file values, both axes 0 to 255)

The scatterplot pictured in Figure 184 can be described as a simplification of a 2-dimensional histogram, where the data file values of one band have been plotted against the data file values of another band. This figure shows that when the values in the bands being plotted have jointly normal distributions, the feature space forms an ellipse.

This ellipse is used in several algorithms, specifically, for evaluating training samples for image classification. Also, two-dimensional feature space images with ellipses are helpful to illustrate principal components analysis.

See "CHAPTER 5: Enhancement" for more information on principal components analysis, "CHAPTER 6: Classification" for information on training sample evaluation, and "CHAPTER 8: Rectification" for more information on orders of transformation.

n-Dimensional Histogram
If 2-dimensional data can be plotted on a 2-dimensional histogram, as above, then n-dimensional data can, abstractly, be plotted on an n-dimensional histogram, defining n-dimensional spectral space. Each point on an n-dimensional scatterplot has n coordinates in that spectral space, a coordinate for each axis. The n coordinates are the elements of the measurement vector for the corresponding pixel.

When all data sets (bands) have jointly normal distributions, the scatterplot forms a hyperellipsoid. The prefix "hyper" refers to an abstract geometrical shape, which is defined in more than three dimensions.

In some image enhancement algorithms (most notably, principal components), the points in the scatterplot are replotted, or the spectral space is redefined in such a way that the coordinates are changed, thus transforming the measurement vector of the pixel.

NOTE: In this documentation, 2-dimensional examples are used to illustrate concepts that apply to any number of dimensions of data. The 2-dimensional examples are best suited for creating illustrations to be printed.

Spectral Distance
Euclidean spectral distance is distance in n-dimensional spectral space. It is a number that allows two measurement vectors to be compared for similarity. The spectral distance between two pixels can be calculated as follows:

             n
    D = sqrt(∑ (di - ei)²)
            i=1

where:
D = spectral distance
n = number of bands (dimensions)
i = a particular band
di = data file value of pixel d in band i
ei = data file value of pixel e in band i

This is the equation for Euclidean distance; in two dimensions (when n = 2), it can be simplified to the Pythagorean Theorem (c² = a² + b²), or in this case:

    D² = (di - ei)² + (dj - ej)²
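Euclidean spectral distance between two measurement vectors can be computed directly from the equation above; a short Python sketch follows.

```python
import math

def spectral_distance(d, e):
    """Euclidean distance between two n-band measurement vectors."""
    return math.sqrt(sum((di - ei) ** 2 for di, ei in zip(d, e)))

print(spectral_distance([180, 85], [100, 120]))   # 87.32...
```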

Polynomials

Order
A polynomial is a mathematical expression consisting of variables and coefficients. A coefficient is a constant, which is multiplied by a variable in the expression. The variables in polynomial expressions can be raised to exponents. The highest exponent in a polynomial determines the order of the polynomial.

A polynomial with one variable, x, takes this form:

    A + Bx + Cx² + Dx³ + .... + Ωx^t

where:
A, B, C, D ... Ω = coefficients
t = the order of the polynomial

NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not the complexity, of the transformation is changed. Mathematically, Ω cannot be 0.

A polynomial with two variables, x and y, takes this form:

    A + Bx + Cy + Dx² + Exy + Fy² + .... + Qx^i y^j + .... + Wy^t

where:
A, B, C, D, E, F ... W = coefficients
t = the order of the polynomial
i and j = exponents

All combinations of x^i times y^j are used in the polynomial expression, such that i + j ≤ t.

A numerical example of 3rd-order transformation equations for x and y is:

    xo = 5 + 4x - 6y + 10x² - 21xy + 1y² - 1x³ + 2x²y + 5xy² + 12y³
    yo = 13 + 12x + 4y + 1x² - 5xy + 1y² + 3x³ + 7x²y - 11xy² + 4y³

Transformation Matrix
Polynomial equations are used in image rectification to transform the coordinates of an input file to the coordinates of another system. The order of the polynomial used in this process is the order of transformation. In the case of first order image rectification, the variables in the polynomials (x and y) are the source coordinates of a ground control point (GCP). The coefficients are computed from the GCPs and stored as a transformation matrix.

A detailed discussion of GCPs, orders of transformation, and transformation matrices is included in "CHAPTER 8: Rectification."

Matrix Algebra
A matrix is a set of numbers or values arranged in a rectangular array. If a matrix has i rows and j columns, it is said to be an i by j matrix. A one-dimensional matrix, having one column (i by 1), is one of many kinds of vectors. For example, the measurement vector of a pixel is an n-element vector of the data file values of the pixel, where n is equal to the number of bands.

See "CHAPTER 5: Enhancement" for information on eigenvectors.

Matrix Notation
Matrices and vectors are usually designated with a single capital letter, such as M. For example:

        | 2.2   4.6  |
    M = | 6.1   8.3  |
        | 10.0  12.4 |

One element of the array (one value) is designated with a lower case letter and its position, which is its row and column (in that order) in the matrix:

    m3,2 = 12.4

With column vectors, it is simpler to use only one number to designate the position:

        | 2.8  |
    G = | 6.5  |
        | 10.1 |

    G2 = 6.5

Matrix Multiplication
A simple example of the application of matrix multiplication is a 1st-order transformation matrix, where:

    xo = a1 + a2 xi + a3 yi
    yo = b1 + b2 xi + b3 yi

xi and yi = source coordinates
xo and yo = rectified coordinates
the coefficients of the transformation matrix are as above

The coefficients are stored in a 2 by 3 matrix:

    C = | a1  a2  a3 |
        | b1  b2  b3 |

Then, the above could be expressed by a matrix equation:

    | xo |   | a1  a2  a3 |   | 1  |
    | yo | = | b1  b2  b3 | * | xi |
                              | yi |

or R = CS, where:
S = a matrix of the source coordinates (3 by 1)
C = the transformation matrix (2 by 3)
R = the matrix of rectified coordinates (2 by 1)

The sizes of the matrices are shown above to demonstrate a rule of matrix multiplication. To multiply two matrices, the first matrix must have the same number of columns as the second matrix has rows. For example, if the first matrix is a by b, and the second matrix is m by n, then b must equal m, and the product matrix will have the size a by n.

The formula for multiplying two matrices is:

             m
    (fg)ij = ∑ fik gkj     for every i from 1 to a, and for every j from 1 to n
            k=1

where:
i = a row in the product matrix
j = a column in the product matrix
f = an (a by b) matrix
g = an (m by n) matrix (b must equal m)
fg is an a by n matrix

Transposition
The transposition of a matrix is derived by interchanging its rows and columns. Transposition is denoted by T, as in the example below (Cullen 1972).

        | 2   3  |
    G = | 6   4  |          GT = | 2  6  10 |
        | 10  12 |               | 3  4  12 |

For more information on transposition, see "Computing Principal Components" in "CHAPTER 5: Enhancement" and "Classification Decision Rules" in "CHAPTER 6: Classification."
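Matrix multiplication and transposition are available in any linear algebra package. The NumPy sketch below applies a 1st-order transformation matrix C (with made-up coefficients, for illustration only) to a source coordinate vector S, and transposes the example matrix G from above.

```python
import numpy as np

# 1st-order transformation matrix (2 x 3) with illustrative coefficients.
C = np.array([[10.0, 0.5, 0.0],
              [20.0, 0.0, 0.5]])

S = np.array([1.0, 400.0, 600.0])   # [1, xi, yi]: source coordinate vector (3 x 1)
R = C @ S                           # rectified coordinates (2 x 1): R = CS
print(R)                            # [210. 320.]

G = np.array([[2, 3], [6, 4], [10, 12]])
print(G.T)                          # [[ 2  6 10]
                                    #  [ 3  4 12]]
```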

APPENDIX B
File Formats and Extensions

Introduction
This appendix describes all of the file formats and extensions that are used within ERDAS IMAGINE software. The files used within the ERDAS IMAGINE system, their extensions, and their formats are conventions of ERDAS, Inc. However, this does not include files that are introduced into IMAGINE by third party products. Please refer to the product's documentation for information on those files.

Topics include:
• IMAGINE file extensions
• .img Files
• Hierarchical File Architecture (HFA) System
• IMAGINE Machine Independent Format (MIF)
• MIF Data Dictionary

ERDAS IMAGINE File Extensions
A file name extension is a suffix, usually preceded by a period, that often identifies the type of data in a file. The part of the file name before the extension can be used in a manner that is helpful to the user and others. All of the types of files used within IMAGINE are listed in Table 31 by their extensions. ERDAS IMAGINE automatically assigns the default extension when the user is prompted to enter a file name. The list in Table 31 does not include files that are used by third party products; please refer to the product's documentation for information on those files.

Files with an ASCII format are simply text files which can be viewed with the IMAGINE Text Editor utility. IMAGINE HFA (hierarchical file architecture) files can be viewed with the IMAGINE HfaView utility.

img HFA .img file.fsp.gms instead. Feature Space Image file — stores the same information as an ..cff ASCII HFA .Table 31: ERDAS IMAGINE File Extensions Extension .img HFA ASCII HFA .img file.clb . Attributes) — used to augment a directly readable format (e. Text form of .aux Format HFA HFA Description Area of Interest file— stores a user-defined area of an .img files. created by performing a Fast Fourier Transformation on an .aoi .img HFA .klb . New .gmd HFA ASCII .g. flow chart). descriptor tables.gms HFA . the transformation itself for any geometric model. created with the Spatial Modeler Model Maker. and file information.gcc . . Line style Library file .img Files section in this chapter for more information on . Fast Fourier Transform file — stores raster layers in a compressed format. contrast and color tables.fls . Coefficient file — stores transformation matrices created by rectifying a file. Ground Control Coordinates file — stores ground control points.atx . Geometric Model file — contains the parameters of a transformation and.eml HFA ASCII ASCII Image chip — a greatly reduced preview of a raster image.flb .ifft.img file. optionally. Fill styles Library file File List file — stores the list of files that is used for mosaicking images.chp .. SGI FIT) when the format does not handle such information. Use . Kernel Library file — stores convolution kernels. See the .e. Inverse Fast Fourier Transform file — stores raster layers created by performing an inverse Fast Fourier Transformation on an .llb ASCII HFA 466 ERDAS .aux file — used for input purposes only.fft HFA .eml files can be created with the ERDAS Macro Language and incorporated into the IMAGINE interface.img file contains. Note: This format is now obsolete. transformation). Graphical Model file — stores scripts that draw the graphical model (i.img file plus the information required to create the feature space image (e. pyramid layers. Auxiliary information (Projection. It includes everything that an .g. Color Library file ERDAS Macro Language file — stores scripts which control the operation of the IMAGINE graphical user interface.. Image file — stores single or multiple raster layers.

The .ovr Format HFA HFA (ver.sig ASCII HFA . It does not store any graphical model (i.map. Permanent Model files — stores the permanent version of the .sml .tlb HFA HFA Field Guide 467 . Each panel consists of two files.panel_xx. MapMaker processes the .img file that stores the magnitude of a Fourier Transform image file.panel_xx extension.name. One is the name file with the extension .plt. Panel Name file — stores the name of the panel data file and any fonts used by the panel (the font names are present only for PostScript output) Panel Data file — stores actual processed data output by MapMaker. which names the various fonts used in the panel along with name of the actual file that contains the panel output.plb . 8.3) ASCII (pre-8.. It can be viewed with the Viewer.plt. Projection Library file Plot file — stores the names of the panel files produced by MapMaker.stores the IMAGINE On-Line Help documentation.plt. legends.map . which was created by the Classification Signature Editor or imported from ERDAS Version 7.e. On-Line Help file —.mdl files that are provided by ERDAS. scales) Model file — stores Spatial Modeler scripts..plt.g. If the destination device was a PostScript device.map file to produce one or more map panels.pdf .ERDAS IMAGINE File Extensions Table 31: ERDAS IMAGINE File Extensions Extension . Symbol Library file — stores annotation symbols for the symbol library.mdl ASCII .plt file contains the complete path names (one per line) of the panel name files. Map/Overlay file — stores annotation layers created in Map Composer outside of the map frame (e.5. Signature file — stores a signature set.plt FrameMaker HFA ASCII ASCII ASCII .mag. If only a . or on an image in a Viewer.panel_xx. then this file is an HFA file that contains one or three layers of raster imagery. Text style Library file .olh . Preference Definition file — stores information that is used by the Preference Editor. grids.img .panel_xx ASCII ASCII/HFA . then this is an ASCII file that contains PostScript commands. flow chart) information. in a blank Viewer.3) HFA Description Magnitude Image file — an . lines. then a temporary . The other is the panel file itself with the . Overlay file — stores an annotation layer that was created in a map frame.mdl file is created when a model is run. This file is necessary for running a model.pmdl .ovr .gmd file exists. Map file — stores map frames created with Map Composer. If the output device was a non-PostScript raster device.na me .

ERDAS IMAGINE .img Files
ERDAS IMAGINE uses .img files to store raster data. These files use the HFA structure. Figure 185 shows the different objects of data stored in an .img file. The user's file may not have all of this information present, because these objects are not definitive; that is, data may be added or removed when a process is run (e.g., add ground control points). Also, other sources, such as programs incorporated by third party vendors, may add objects to the .img file.

Figure 185: Examples of Objects Stored in an .img File (Ground Control, Covariance Matrix, Sensor Info., Projection Info., Attribute Data, Statistics, Map Info., Layer_1 Info., Layer_2 Info., ... Layer_n Info., Pyramid Layers, Data File Values)

The information stored in an .img file can be used to help the user visualize how different processes change the data. For example, if the user runs a filter over the file and creates a new file, the statistics for the two files can be compared to see how the filter changed the data.

Use the IMAGINE Image Information utility or the HfaView utility to view the information that is stored in an .img file. The information in the Image Info and HfaView utilities should be modified with caution because IMAGINE programs use this information for data input. If it is incorrect, there will be errors in the output data for these programs.

The objects of an .img file are described in more detail on the following pages.

Sensor Information
When importing satellite imagery, there is usually a header file on the tape or CD-ROM that is separate from the data. This object contains ephemeris information about the sensor, such as:
• date and time scene was scanned
• calibration information of the sensor
• orientation of the sensor
• original dimensions for data
• data storage format
• number of bands

The data presented are dependent upon the sensor. Each sensor provides different types of information. The sensor object is named:

    <format type>_Header

Some examples of the various sensor types are listed in the chart below.

Sensor            Sensor Object
ADRG              ADRG_Header
Landsat TM        TM_Header
NOAA AVHRR        AVHRR_Header
RADARSAT          RADARSAT_Header
SPOT              SPOT_Header

Use the HfaView utility to view the header file.

Raster Layer Information
Each raster layer within an .img file has its own ancillary data, including the following parameters:
• height and width (rows and columns)
• layer type (continuous or thematic)
• data type (signed 8-bit, floating point, etc.)
• compression (see below)
• block size (see below)

This information is usually the same for each layer. These parameters are defined when the raster layer is created or imported into IMAGINE. Use the Image Information utility to view the parameters.

Compression
When importing a file into IMAGINE, the user has the option to compress the data. Currently, IMAGINE uses the run-length compression method. The amount that the data are compressed depends on the data in the layer. For example, if the layer contains large, homogenous areas (e.g., blocks of water), then compressing the layer would save on disk space. However, if the layer is very heterogenous, run-length compression would not save much disk space. Data will be compressed only when it is stored. IMAGINE automatically uncompresses data before the layer is run through a process. The time that it takes to uncompress the data is minimal.

Use the Import function to compress data when it is imported into IMAGINE.

Block Size
IMAGINE software uses a tiled format to store raster layers. The raster layer is divided into tiles (i.e., blocks) when IMAGINE creates or imports an .img file. The tiled format allows raster layers to be displayed and resampled quickly. The size of this block can be defined when the user either creates the file or imports it. The default block size is 64 pixels by 64 pixels.

NOTE: The default block size is acceptable for most applications and should not need to be changed.
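Run-length compression, described under Compression above, simply replaces repeated values with (count, value) pairs, which is why large homogenous areas compress well and heterogenous data does not. The sketch below is a generic illustration of the idea, not the encoder used inside ERDAS IMAGINE.

```python
def run_length_encode(values):
    """Encode a sequence as (count, value) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1        # extend the current run
        else:
            runs.append([1, v])     # start a new run
    return runs

# A homogenous row compresses to a few runs; a heterogenous row does not.
print(run_length_encode([0, 0, 0, 0, 5, 5, 9]))   # [[4, 0], [2, 5], [1, 9]]
```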

ERDAS IMAGINE .img Files 64 pixels 64 pixels 512 columns 512 rows Figure 186: Example of a 512 x 512 Layer with a Block Size of 64 x 64 Pixels Field Guide 471 .

. if the user does not want to include zero file values in the statistics calculation (and they are currently included). For example.g. statistics should be created for a layer. and attributes for each class. which is preferred. Rebuilding statistics for a raster layer may be necessary. and blue values). includes the following information: • • histogram contrast table Thematic Raster Layer For a thematic raster layer. the statistics could be used to see if the layer has a normal distribution of data. green. If they do not exist. Use the Raster Attribute Editor to view or modify the contents of these attribute tables. includes the following information: • • • • histogram class names class values color table (red. the attribute table object. if a user planning to use the ISODATA classifier. by default.Attribute Data Continuous Raster Layer The attribute table object for a continuous raster layer. such as the area. 472 ERDAS . Statistics The following statistics are calculated for each raster layer: • • • • • minimum and maximum data file values mean of the data file values median of the data file values mode of the data file values standard deviation of the data file values See "APPENDIX A: Math Topics" for more information on these statistics. Knowing the statistics for a layer will aid the user in determining the process to be used in extracting the features that are of the most interest. contrast tools) will not run without layer statistics. For example. by default. the statistics could be rebuilt without zero file values. Certain Viewer functions (e. Attribute data can also include additional information for thematic raster layers. These statistics are based on the data file values of the pixels in the layer. opacity.

they must be correct.ERDAS IMAGINE . When you import a file. or rebuild statistics for a raster layer. Since IMAGINE programs use these data. or change map information for a raster layer in an . the information in the Image Information utility will be inactive and shaded. If the layer has been georeferenced.img file. If incorrect information is entered. the map information may not have imported correctly. create. meters. use the Image Info utility to update the information. inches. add. If this occurs. then the data for this file will no longer be valid.g. the following information will be stored in the raster layer: • • • upper left X. If the statistics do not exist.. Map Information Map information for a raster layer will be created only when the layer has been georeferenced.Y coordinates pixel size map unit used for measurement (e.img Files Use the Image Information utility to view. The user should add or change the map information only when he or she has valid map information to enter. feet) See "CHAPTER 11: Cartography" for information on map data. Use the Image Information utility to view. Field Guide 473 .

Map Projection Information
If the raster layer has been georeferenced, the following projection information will be generated for the layer:
• map projection
• spheroid
• zone number

See "APPENDIX C: Map Projections" for information on map projections.

If the layer has not been georeferenced, the information in the Image Information utility will be inactive and shaded. This may occur when you import a file.

Use the Image Information utility to view, add, or change the map projection for a raster layer in an .img file. Since IMAGINE programs use these data, they need to be correct. If the user enters incorrect information, then the data for this layer will no longer be valid. Do not add or change the map projection unless the projection listed in the Image Info utility is incorrect or missing. Changing the map projection with the Projections Editor dialog will not rectify the layer.

Pyramid Layers
IMAGINE gives the user the option to "pyramid" large raster layers for faster processing and display in the Viewer. When the user generates pyramid layers, reduced subsampled raster layers are created from the original raster layer. The number of pyramid layers that are created depends on the size of the raster layer and the block size.

For example, a raster layer that is 4k × 4k pixels could take a long time to display when using the Fit To Window option in the Viewer. Using the Create Pyramid Layers option, IMAGINE would create additional raster layers successively reduced from 4k × 4k, to 2k × 2k, 1k × 1k, 512 × 512, 256 × 256, 128 × 128, down to 64 × 64. Then IMAGINE would select the pyramid layer size most appropriate for display in the Viewer window.

Pyramid layers can be created using the Image Information utility or when the raster layer is imported. You can also use the Image Information utility to delete pyramid layers.

See "CHAPTER 4: Image Display" for more information on pyramid layers.
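Conceptually, each pyramid level is a copy of the layer reduced by a factor of two. The NumPy sketch below builds such a stack by simple subsampling; it is only an illustration of the idea, and the actual Create Pyramid Layers option may use a different reduction method.

```python
import numpy as np

def build_pyramid(layer, min_size=64):
    """Return successively subsampled copies of a raster layer, down to min_size."""
    levels = []
    current = layer
    while min(current.shape) > min_size:
        current = current[::2, ::2]        # keep every other row and column
        levels.append(current)
    return levels

layer = np.zeros((4096, 4096), dtype=np.uint8)
print([lvl.shape for lvl in build_pyramid(layer)])
# [(2048, 2048), (1024, 1024), (512, 512), (256, 256), (128, 128), (64, 64)]
```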

When they are written to the file.Machine Independent Format Machine Independent Format MIF Data Elements ERDAS IMAGINE uses the Machine Independent Format (MIF) to store data in a fashion which can be read by a variety of machines. This data type can be used for thematic data with 4 or fewer classes. 7 U1 _7 byte 0 6 U1_ 6 5 U1_ 5 4 U1_ 4 3 U1_ 3 2 U1_ 2 1 U1_ 1 0 U1_ 0 EMIF_T_U2 (Unsigned 2-bit Integer) U2 is for unsigned 2-bit integers (0 . they are automatically expanded to give one value per byte in memory. When they are written to the file. they are automatically compressed to place four values into one output byte. they are automatically expanded to give one value per byte. When the data are read from a MIF file. When the data are read from a MIF file they are automatically expanded to give one value per byte in memory. Each MIF file is made up of one or more of the data elements explained below. This data type can be used for bitmap images with “yes/no” conditions. EMIF_T_U1 (Unsigned 1-bit Integer) U1 is for unsigned 1-bit integers (0 . 7 U2_3 byte 0 5 U2_2 3 U2_1 1 U2_0 EMIF_T_U4 (Unsigned 4-bit Integer) U4 is for unsigned 4-bit integers (0 . Files created using this package on one machine will be readable from another machine with no explicit data translation. they are automatically compressed to place eight values into one output byte. 7 U4_1 byte 0 3 U4_0 Field Guide 475 . This data type can be used for thematic data with 16 or fewer classes. When these data are read from a MIF file. When they are written to the file it is automatically compressed to place two values into one output byte.1).3).15). This format provides support for converting data between the IMAGINE standard data format and that of the specific host's architecture.

stored in Intel byte order. 15 integer byte 1 byte 0 476 ERDAS . The least significant byte is stored first.EMIF_T_UCHAR (8-bit Unsigned Integer) This stores an 8-bit unsigned integer. The least significant byte is stored first. 7 integer byte 0 EMIF_T_USHORT (16-bit Unsigned Integer) This stores a 16-bit unsigned integer. It is most typically used to stored characters and raster imagery. 15 integer byte 1 byte 0 EMIF_T_SHORT (16-bit Signed Integer) This stores a 16-bit two-complement signed integer. stored in Intel byte order. 7 integer byte 0 EMIF_T_CHAR (8-bit Signed Integer) This stores an 8-bit signed integer.

31 integer byte 3 byte 2 byte 1 byte 0 Field Guide 477 . 15 integer byte 1 byte 0 EMIF_T_ULONG (32-bit Unsigned Integer) This stores a 32-bit unsigned integer. The least significant byte is stored first. The list of strings associated with the type are defined in the data dictionary which is defined below. The least significant byte is stored first. stored in Intel byte order. however most UNIX systems only allow 2-Gigabyte files. this element appears in the data dictionary as an EMIF_T_ULONG element. This allows for indexing into a 4Gigabyte file. stored in Intel byte order. the EMIF_T_PTR will be expanded to an 8-byte format which will allow indexing using 64 bits which allow addressing of 16 billion Gigabytes of file space. The least significant byte is stored first. The first item in the list is indicated by 0. Byte 0 is the first byte. etc. NOTE: Currently. 31 integer byte 3 byte 2 byte 1 byte 0 EMIF_T_PTR (32-bit Unsigned Integer) This stores a 32-bit unsigned integer. byte 1 is the second. 31 integer byte 3 byte 2 byte 1 byte 0 EMIF_T_LONG (32-bit Signed Integer) This stores a 32-bit two-complement signed integer value.Machine Independent Format EMIF_T_ENUM (Enumerated Data Types) This stores an enumerated data type as a 16-bit unsigned integer. which is used to provide a byte address within the file. In future versions of the file format. stored in Intel byte order.
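Because MIF integers are stored least significant byte first (Intel byte order), they can be read portably on any machine by unpacking them explicitly as little-endian. The Python sketch below uses the standard struct module to illustrate the byte order; it is not part of the MIF access library.

```python
import struct

raw = bytes([0x2A, 0x00, 0x00, 0x00])        # 32-bit value 42, least significant byte first

(ulong_value,) = struct.unpack("<I", raw)    # "<" forces little-endian regardless of host
(long_value,) = struct.unpack("<i", raw)     # same bytes read as a signed 32-bit integer

print(ulong_value, long_value)               # 42 42
```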

1 = negative) exp = 11 bit excess 1023 exponent fraction = 53 bits of precision (+1 hidden bit) 63 52 51 s byte 7 exp byte 6 fraction byte 5 byte 4 byte 3 byte 2 byte 1 byte 0 478 ERDAS . 31 integer byte 3 byte 2 byte 1 byte 0 EMIF_T_FLOAT (Single Precision Floating Point) Single precision floating point values are IEEE floating point values. The least significant byte is stored first.EMIF_T_TIME (32-bit Unsigned Integer) This stores a 32-bit unsigned integer. This is the standard used in UNIX time keeping. s = sign (0 = positive. which represents the number of seconds since 00:00:00 1 JAN 1970. 1 = negative) exp = 8 bit excess 127 exponent fraction = 24 bits of precision (+1 hidden bit) 31 30 22 s byte 3 exp byte 2 fraction byte 1 byte 0 EMIF_T_DOUBLE (Double Precision Floating Point) Double precision floating point data are IEEE double precision. s = sign (0 = positive.

EMIF_T_COMPLEX (Single Precision Complex)

A complex data element has a real part and an imaginary part. Each part is an IEEE single precision floating point value (s = sign, 0 = positive, 1 = negative; exp = 8 bit excess 127 exponent; fraction = 24 bits of precision, +1 hidden bit).
Real part: the first single precision value (byte 3 through byte 0).
Imaginary part: the second single precision value (byte 7 through byte 4).

EMIF_T_DCOMPLEX (Double Precision Complex)

A complex data element has a real part and an imaginary part. Each part is an IEEE double precision floating point value (s = sign, 0 = positive, 1 = negative; exp = 11 bit excess 1023 exponent; fraction = 53 bits of precision, +1 hidden bit).
Real part: the first double precision value (byte 7 through byte 0).
Imaginary part: the second double precision value (byte 15 through byte 8).

EMIF_T_BASEDATA (Matrix of Numbers)

A Basedata is a generic two-dimensional array of values. It can store any of the types of data used by IMAGINE. It is a variable length object whose size is determined by the data type, the number of rows, and the number of columns. This is used in the IMAGINE Spatial Modeler.

numrows: This indicates the number of rows of data in this item (a 32-bit integer, byte 3 through byte 0).

numcolumns: This indicates the number of columns of data in this item (a 32-bit integer, byte 7 through byte 4).

datatype: This indicates the type of data stored here (a 16-bit integer, byte 9 and byte 8). The types are:

DataType   Type              BytesPerObject
0          EMIF_T_U1         1/8
1          EMIF_T_U2         1/4
2          EMIF_T_U4         1/2
3          EMIF_T_UCHAR      1
4          EMIF_T_CHAR       1
5          EMIF_T_USHORT     2
6          EMIF_T_SHORT      2
7          EMIF_T_ULONG      4
8          EMIF_T_LONG       4
9          EMIF_T_FLOAT      4
10         EMIF_T_DOUBLE     8
11         EMIF_T_COMPLEX    8
12         EMIF_T_DCOMPLEX   16

objecttype: This indicates the object type of the data (a 16-bit integer, byte 11 and byte 10). The valid values are:

0 = SCALAR: This indicates that the object is a single value. The numcolumns should be 1, since a scalar has a single value.
1 = TABLE: This indicates that the object is an array.
2 = MATRIX: This indicates the number of rows and columns is greater than one. This is used for Coefficient matrices, etc.
3 = RASTER: This indicates that the number of rows and columns is greater than one and the data are just a part of a larger raster object. This would be the case for blocks of images which are written to the file.

data: This is the actual data. The number of bytes is given as: bytecount = numrows * numcolumns * BytesPerObject

EMIF_M_INDIRECT (Indication of Indirect Data)

This is used when the following data belongs to an indirect reference of data, for example, when one object is defined by referring to another object. (The size of the object is inherent in the data definitions.) The first four bytes provide the object repeat count (byte 3 through byte 0). The next four bytes provide the file pointer which points to the data comprising the object (byte 7 through byte 4).

EMIF_M_PTR (Indication of Indirect Data)

This is used when the following data belong to an indirect reference of data of variable length, for example, when one object is defined by referring to another object. This is identical in file format to the EMIF_M_INDIRECT element. Its main difference is in the memory resident object which gets created: whereas only the data gets placed into memory when the EMIF_M_INDIRECT element is read in, in the case of the EMIF_M_PTR the count and data pointer are placed into memory.

The first four bytes provide the object repeat count (a 32-bit integer, byte 3 through byte 0). The next four bytes provide the file pointer which points to the data comprising the object (a 32-bit integer, byte 7 through byte 4).

MIF Data Dictionary

IMAGINE HFA files have a data dictionary that describes the contents of each of the different types of nodes. The dictionary is a compact ASCII string which is usually placed at the end of the file, with a pointer to the start of the dictionary stored in the header of the file. The syntax of the dictionary string is:

Dictionary: ObjectDefinition[ObjectDefinition...] .
This is the complete collection of object type definitions. The dictionary is one or more ObjectDefinitions terminated by a period.

ObjectDefinition: {ItemDefinition[ItemDefinition...]}name,
This is a complete definition of a single object. Each object is defined like a structure in C, and consists of one or more items. An ObjectDefinition is one or more ItemDefinitions enclosed in braces {}, followed by a name, and terminated by a comma.

ItemDefinition: number:[*|p]ItemType[EnumData]name,
This is the complete definition of a single Item. Each item is composed of an ItemType and a name. An ItemDefinition is a number followed by a colon, followed optionally by either an asterisk or a p, followed by an ItemType, followed optionally by EnumData, followed by an item name, and terminated by a comma. The ItemType indicates the type of data and the name indicates the name by which the item will be known. The * and the p both indicate that when the data are read into memory, they will not be placed directly into the structure being built, but that a new structure will be allocated and filled with the data; the pointer to that structure is placed into the initial structure. The asterisk indicates that the number of items in the indirect object is given by the number in the item definition. The p indicates that the number is variable. In both cases, the count precedes the data in the input stream.

EnumData: number:name,[<name>,...]
EnumData is a number, followed by a colon, followed by one or more names each of which is terminated by a comma. This is the complete set of names associated with an individual enum type. The number defines the number of names which will follow.

name
Any sequence of alphanumeric characters excluding the comma.

number
A positive integer number. This is composed of any sequence of these digits: 0,1,2,3,4,5,6,7,8,9.

ItemType: 1|2|4|c|C|s|S|l|L|f|d|t|m|M|b|e|o|x
This is used to indicate the type of an item. The following table indicates how the characters correspond to one of the basic EMIF_T types.

The following table describes the single character codes used to identify the ItemType in the MIF Dictionary Definition. The Interpretation column describes the type of data indicated by the item type. The Number of Bytes column is the number of bytes that the data type will occupy in the MIF file. If the number of bytes is not fixed, then it is given as dynamic.

ItemType   Number of Bytes   Interpretation
1          1                 EMIF_T_U1
2          1                 EMIF_T_U2
4          1                 EMIF_T_U4
c          1                 EMIF_T_UCHAR
C          1                 EMIF_T_CHAR
e          2                 EMIF_T_ENUM
s          2                 EMIF_T_USHORT
S          2                 EMIF_T_SHORT
t          4                 EMIF_T_TIME
l          4                 EMIF_T_ULONG
L          4                 EMIF_T_LONG
f          4                 EMIF_T_FLOAT
d          8                 EMIF_T_DOUBLE
m          8                 EMIF_T_COMPLEX
M          16                EMIF_T_DCOMPLEX
b          dynamic           EMIF_T_BASEDATA
o          dynamic           Previously defined object. This indicates that the description of the following data has been previously defined in the dictionary. This is like using a previously defined structure in a structure definition.
x          dynamic           Defined object for this entry. This indicates that the description of the following data follows. This is like using a structure definition within a structure definition.
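To see how an ObjectDefinition maps onto an in-memory structure, consider the definition of the Ehfa_HeaderTag object given later in this chapter, {16:clabel,1:lheaderPtr,}Ehfa_HeaderTag,. Per the table above, 16:c declares an array of 16 EMIF_T_CHAR named label and 1:l declares a single EMIF_T_ULONG named headerPtr. A rough C equivalent is sketched below (a sketch only; the 32-bit value must still be assembled least significant byte first on big-endian hosts, as noted earlier):

    #include <stdint.h>

    /* C equivalent of the MIF ObjectDefinition
     *   {16:clabel,1:lheaderPtr,}Ehfa_HeaderTag,
     * 16:c -> array of 16 EMIF_T_CHAR; 1:l -> one EMIF_T_ULONG. */
    typedef struct {
        char     label[16];   /* holds the string "EHFA_HEADER_TAG" */
        uint32_t headerPtr;   /* file offset of the Ehfa_File record */
    } Ehfa_HeaderTag;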

ERDAS IMAGINE HFA File Format

Many of the files created and used by ERDAS IMAGINE are stored in a hierarchical file architecture (HFA). This format allows any number of different types of data elements to be stored in the file in a tree structured fashion.

Hierarchical File Architecture

The hierarchical file architecture maintains an object-oriented representation of data in an IMAGINE disk file through use of a tree structure. This tree is built of nodes which contain a variety of types of data. Each object is called an entry and occupies one node in the tree. Each object has a name and a type; the type refers to a description of the data contained by that object. Additionally, each object may contain a pointer to a subtree of more nodes. The contents of the nodes (as well as the structural information) are saved in the file in a machine independent format (MIF), which allows the files to be shared between computers of differing architectures. All entries are stored in MIF and can be accessed directly by name.

Use the IMAGINE HfaView utility to view the objects of a file that uses the HFA format.

Figure 187: HFA File Structure (the figure shows a Header and Dictionary, a Root Node, and nodes Node_1 through Node_5, each with its own Data, arranged as a tree)

Nodes and Objects

Each node within the HFA tree structure contains an object, and each object has its own data. The types of objects in a file are dependent upon the type of file. For example, an .img file will have different objects than an .ovr file because these files store different types of data. The list of objects in a file is not fixed; that is, objects may be added or removed depending on the data in the file (e.g., all .img files with continuous raster layers will not have a node for ground control points).

Figure 188 is an example of an HFA file structure for a thematic raster layer in an .img file. If there were more attributes in the IMAGINE Raster Attribute Editor, then they would appear as objects under the Descriptor Table object.

Figure 188: HFA File Structure Example (the example tree shows Layer_1 (Eimg_Layer) with children Statistics (Esta_Statistics), Descriptor Table (Edsc_Table), and Projection (Eprj_ProParameters); under the Descriptor Table are #Bin Function# (Edsc_BinFunction), Red, Green, Blue, Class_Names, and Histogram, each an Edsc_Column)

Pre-defined HFA File Object Types

There are three categories of pre-defined HFA File Object Types found in .img files:

• Basic HFA File Object Types
• .img Object Types
• External File Format Header Object Types

These sections list each object with two different detailed definitions. The first definition shows how the object appears in the data dictionary in the HFA file. The second definition is a table that shows the type, name, and description of each item in the object.

An item within an object can be an element or another object. If an item is an element, then the item type is one of the basic types previously given, with the EMIF_T_ prefix omitted. For example, the item type for EMIF_T_CHAR would be shown as CHAR. If the item is an array, then the number of elements is given in square brackets [n] after the type. For example, the type for an item with an array of 16 EMIF_T_CHAR would appear as CHAR[16]. If an item is a previously defined object type, then the type is simply the name of the previously defined item. If the item is an indirect item of fixed size (it is a pointer to an item), then the type is followed by an asterisk “*.” For example, a pointer to an item with an array of 16 EMIF_T_CHAR would appear as CHAR[16] *. If the item is an indirect item of variable size (it is a pointer to an item and the number of items), then the type is followed by a “p.” For example, a pointer to an item with a variable sized array of characters would look like CHAR p.

NOTE: If the item type is shown as PTR, then this item will be encoded in the data dictionary as a ULONG element.

Basic Objects of an HFA File

This is a list of the types of basic objects found in all HFA files:

• Ehfa_HeaderTag
• Ehfa_File
• Ehfa_Entry

Ehfa_HeaderTag

The Ehfa_HeaderTag is used as a unique signature at the beginning of an ERDAS IMAGINE HFA file. It must always occupy the first 20 bytes of the file.

{16:clabel,1:lheaderPtr,}Ehfa_HeaderTag,

Type       Name       Description
CHAR[16]   label      This contains the string “EHFA_HEADER_TAG”.
PTR        headerPtr  The file pointer to the Ehfa_File header record.

Ehfa_File

The Ehfa_File is composed of several main parts, including the free list, the dictionary, and the object tree. Each node of the tree consists of two parts. The first part is the entry, which contains the node name, node type, and parent/child information. The second part is the data for the node. The dictionary must be read and decoded before any of the other objects in the file can be decoded. As blocks of space are released in the file, they are placed on the free list so that they may be reused later; this list is searched first whenever new space is needed. This entry is used to keep track of these items in the file, since they may begin anywhere in the file.

{1:Lversion,1:lfreeList,1:lrootEntryPtr,1:SentryHeaderLength,1:ldictionaryPtr,}Ehfa_File,

Type   Name               Description
LONG   version            This defines the version number of the ehfa file. It is currently 1.
PTR    freeList           This points to the list of freed blocks within the file.
PTR    rootEntryPtr       This points to the root node of the object tree.
SHORT  entryHeaderLength  This defines the length of the entry portion of each node.
PTR    dictionaryPtr      This points to the starting position in the file of the MIF Dictionary.
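To make the layout concrete, here is a hedged C sketch (hypothetical file name and helper names; error handling kept minimal) that reads the 20-byte Ehfa_HeaderTag and then the Ehfa_File record it points to, assuming the fields appear in the file in the order listed above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Read little-endian values from the current file position. */
    static uint32_t read_u32(FILE *fp)
    {
        unsigned char b[4];
        fread(b, 1, 4, fp);
        return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
               ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    }

    static uint16_t read_u16(FILE *fp)
    {
        unsigned char b[2];
        fread(b, 1, 2, fp);
        return (uint16_t)(b[0] | (b[1] << 8));
    }

    int main(void)
    {
        FILE *fp = fopen("example.img", "rb");     /* hypothetical file name */
        if (!fp) return 1;

        /* Ehfa_HeaderTag: 16-byte label followed by the Ehfa_File pointer. */
        char label[17] = {0};
        fread(label, 1, 16, fp);
        uint32_t headerPtr = read_u32(fp);
        if (strncmp(label, "EHFA_HEADER_TAG", 15) != 0) { fclose(fp); return 1; }

        /* Ehfa_File: version, freeList, rootEntryPtr, entryHeaderLength, dictionaryPtr. */
        fseek(fp, (long)headerPtr, SEEK_SET);
        uint32_t version           = read_u32(fp);
        uint32_t freeList          = read_u32(fp);
        uint32_t rootEntryPtr      = read_u32(fp);
        uint16_t entryHeaderLength = read_u16(fp);
        uint32_t dictionaryPtr     = read_u32(fp);

        printf("version=%u root=%u entryHeaderLength=%u dictionary=%u freeList=%u\n",
               version, rootEntryPtr, entryHeaderLength, dictionaryPtr, freeList);
        fclose(fp);
        return 0;
    }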

Ehfa_Entry

The Ehfa_Entry contains the header information for each node in the object tree, including the name and type of the node as well as the parent/child information.

{1:lnext,1:lprev,1:lparent,1:lchild,1:ldata,1:LdataSize,64:cname,32:ctype,1:tmodTime,}Ehfa_Entry,

Type      Name      Description
PTR       next      This is a file pointer which gives the location of the next node in the tree at the current level. If this is the last node at this level, then this contains 0.
PTR       prev      This is a file pointer which gives the location of the previous node in the tree at the current level. If this is the first node at this level, then this contains 0.
PTR       parent    This is a file pointer which gives the location of the parent for this node. This is 0 for the root node.
PTR       child     This is a file pointer which gives the location of the first of the list of children for this node. If there are no children, then this contains 0.
PTR       data      This points to the data for this node. If there is no data for this node, then it contains 0.
LONG      dataSize  This contains the number of bytes contained in the data record associated with this node.
CHAR[64]  name      This contains a NULL terminated string that is the name for this node. The string can be no longer than 64 bytes, including the NULL terminator byte.
CHAR[32]  type      This contains a NULL terminated string which names the type of data to be found at this node. The type must match one of the types found in the data dictionary. The type name can be no longer than 32 bytes, including the NULL terminator byte.
TIME      modTime   This contains the time of the last modification to the data in this node.
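The next, child, and parent pointers are all that is needed to walk the whole object tree. A hedged C sketch of such a traversal (hypothetical helper names; it assumes the field order shown above and relies on the name and type strings being NULL terminated, as the format requires):

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t read_u32(FILE *fp)
    {
        unsigned char b[4];
        fread(b, 1, 4, fp);
        return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
               ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    }

    /* Print the name and type of every node, starting from the file offset of
     * an Ehfa_Entry (for the whole file, start at rootEntryPtr). */
    static void walk_tree(FILE *fp, uint32_t entryOffset, int depth)
    {
        while (entryOffset != 0) {
            fseek(fp, (long)entryOffset, SEEK_SET);
            uint32_t next  = read_u32(fp);
            (void)read_u32(fp);                /* prev     */
            (void)read_u32(fp);                /* parent   */
            uint32_t child = read_u32(fp);
            (void)read_u32(fp);                /* data     */
            (void)read_u32(fp);                /* dataSize */
            char name[64], type[32];
            fread(name, 1, 64, fp);
            fread(type, 1, 32, fp);
            printf("%*s%s (%s)\n", depth * 2, "", name, type);
            if (child != 0)
                walk_tree(fp, child, depth + 1);
            entryOffset = next;                /* siblings at this level */
        }
    }

A call such as walk_tree(fp, rootEntryPtr, 0) would list every entry in the file, much as the HfaView utility does.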

HFA Object Directory for .img files

The following section defines the list of objects which comprise IMAGINE image files (.img extension). This is not a complete list, because users and developers may create new items and add them to any ERDAS IMAGINE file.

Eimg_Layer

An Eimg_Layer object is the base node for a single layer of imagery. This object describes the basic information for the layer, including its width and height in pixels, its data type, and the width and height of the blocks used to store the image. Other information, such as the actual pixel data, map information, projection information, etc., is stored as child objects under this node. The child objects that are usually found under the Eimg_Layer include:

• RasterDMS (an Edms_State which actually contains the imagery)
• Descriptor_Table (an Edsc_Table object which contains the histogram and other pixel value related data)
• Projection (an Eprj_ProParameters object which contains the projection information)
• Map_Info (an Eprj_MapInfo object which contains the map information)
• Ehfa_Layer (an Ehfa_Layer object which describes the type of data in the layer)

an Emif_String is of type CHAR p (i.c128. The height of the layer in pixels. 0=”thematic” 1=”athematic” ENUM pixelType The type of the pixels.u2. The height of each block in the layer.f32.1:lblockHeight.1:lheight.c64. Type LONG LONG ENUM width height layerType Name Description The width of the layer in pixels. {0:pcstring. s8. The type of layer. 1:lblockWidth.e.f64. Field Guide 491 .u16.1e13:u1.s16.1:e3:thematic.} Eimg_Layer. layerType. NOTE: In the following definitions.athematic..pixelType.s32.}Emif_String).ERDAS IMAGINE HFA File Format {1:lwidth.u32.u4.u8. 0=”u1” 1=”u2” 2=”u4” 3=”u8” 4=”s8” 5=”u16” 6=”s16” 7=”u32” 8=”s32” 9=”f32” 10=”f64” 11=”c64” 12=”c128” LONG LONG blockWidth blockHeight The width of each block in the layer.fft of real valued data.

The object is written as a child of the root with the name DependentFile.dependent. It contains the original name of the layer of which it is a child in the original imagery file being served by this . {1:oEmif_String.aux file.ImageLayerName. Type Emif_String Name dependent Description The dependent file name.Eimg_DependentFile The Eimg_DependentFile object contains the base name of the file for which the current file is serving as an .aux. Eimg_DependentLayerName The Eimg_DependentLayerName object normally exists as the child of an Eimg_Layer in an .}Eimg_DependentLayerName. {1:oEmif_String.}Eimg_DependentFile. 492 ERDAS .aux files serving imagery files of a format supported by a RasterFormats DLL Instance which does not define a FileLayerNamesSet interface function (because these DLL Instances are obviously incapable of supporting layer name changes). It only exists in . Type Emif_String Name ImageLayerName Description The original dependent layer name.aux.

f32. 0 =”thematic” 1 =”athematic” ENUM pixelType The type of the pixels. The height of each block in the layer. This node will have an Edms_State node called RasterDMS and an Ehfa_Layer node called Ehfa_layer under it.u16. s8.u2.u4. The node of this form are named _ss_2. etc. 0=”u1” 1=”u2” 2=”u4” 3=”u8” 4=”s8” 5=”u16” 6=”s16” 7=”u32” 8=”s32” 9=”f32” 10=”f64” 11=”c64” 12=”c128” LONG LONG blockWidth blockHeight The width of each block in the layer.1:lheight.1:e3:thematic.1e13:u1.c128. Field Guide 493 .s16.c64. 1:lblockWidth.f64.} Eimg_Layer_SubSample. SubSampled by 4. This will be present if pyramid layers have been computed. etc.pixelType. This stands for SubSampled by 2. Type LONG LONG ENUM width height layerType Name Description The width of the layer in pixels. _ss_8.athematic. _ss_4. The height of the layer in pixels. layerType.ERDAS IMAGINE HFA File Format Eimg_Layer_SubSample An Eimg_Layer_SubSample object is a node which contains a subsampled version of the layer defined by the parent node. The type of layer. {1:lwidth.s32.1:lblockHeight.u32.fft of real valued data.u8.

algorithm.units.nameList. {1:oEmif_String. Emif_String units Eimg_RRDNamesList The Eimg_RRDNamesList object contains a list of layers of a resolution different (reduced) than the original. Type Emif_String Name projection Description The name of the map projection system associated with the MapToPixelTransform sibling object. A list of the reduced resolution layers associated with the parent layer. {1:oEmif_String.0:poEmif_String. Type Emif_String Emif_String p Name algorithm nameList Description The name of the algorithm used to compute the layers in nameList. The name of the map units of the coordinates returned by the transforming layer pixel coordinates through the inverse of the MapToPixelTransform sibling object. As a child of an Eimg_Layer.}Eimg_MapInformation.}Eimg_RRDNamesList.Eimg_NonInitializedValue The Eimg_NonInitializedValue object is used to record the value that is to be assigned to any uninitialized blocks of raster data in a layer.1:oEmif_String. As a child of an Eimg_Layer. These are full layer names. it will have the name MapInformation. {1:*bvalueBD. Type BASEDATA * Name valueBD Description A basedata structure containing the excluded values Eimg_MapInformation The Eimg_MapInformation object contains the map projection system and the map units applicable to the MapToPixelXForm object that is its sibling.projection. it will have the name RRDNamesList. 494 ERDAS .}Eimg_NonInitializedValue.

scalar Statistics of a layer. the object will be named CovarianceParameters.1:lSkipFactorY.ERDAS IMAGINE HFA File Format Eimg_StatisticsParameters830 The Eimg_StatisticsParameters830 object contains statistics parameters that control the computation of certain statistics. or the Histogram of a layer. The values excluded during this computation. Type ENUM type Name Description The type of layer. In these cases. In the case of raster data. The parameters can apply to the computation of Covariance.}Ehfa_Layer. The skip factor in X. StatisticsParameters. The skip factor in Y.1:*oEdsc_BinFunction. The CovarianceParameters will exist as a sibling of the Covariance.BinFunction.LayerNames. {1:e2:raster. The bin function used for this computation (statistics and histogram only).1:ldictionaryPtr.vector. the vector layers have not been implemented. Type Emif_String p BASEDATA * Emif_String LONG LONG Edsc_BinFunction * Name LayerNames ExcludedValues AOIname SkipFactorX SkipFactorY BinFunction Description The list of (full) layer names that were involved in this computation (covariance only). Ehfa_Layer The Ehfa_Layer is used to indicate the type of layer. it points to a dictionary pointer which describes the contents of each block via the RasterDMS definition given below. {0:poEmif_String.type.AOIname. 1:lSkipFactorX.} Eimg_StatisticsParameters830. and HistogramParameters.1:*bExcludedValues. 0=”raster” 1=”vector” ULONG dictionaryPtr This points to a dictionary entry which describes the data. The name of the AOI file used to limit the computation. and the StatisticsParameters and HistogramParameters will be children of the Eimg_Layer to which they apply. Field Guide 495 .1:oEmif_String. The initial design for the IMAGINE files allowed for both raster and vector layers. Currently.

Single precision complex M .Unsigned 1-bit 2 .Unsigned 16-bit S . <t>.Signed 32-bit f .Unsigned 4-bit c .Signed 16-bit l .RasterDMS The RasterDMS object definition must be present in the EMIF dictionary pointed to by an Ehfa_Layer object that is of type “raster”.Double precision floating point m .Double precision complex 496 ERDAS .Signed 8-bit s . The physical representation of the raster data is actually managed by the DMS system through objects of type Ehfa_Layer and Edms_State.}RasterDMS.Unsigned 32-bit L . which can have any one of the following values: 1 . of data file values in a block of the raster layer (which is simply <block width> * <block height>) and the data value type.Unsigned 2-bit 4 . <n>.Single precision floating point d . Type <t>[<n>] data Name Description The data is described in terms of total number. It describes the logical make-up of a block of raster data in the Ehfa_Layer. {<n>:<t>data. The RasterDMS definition should describe the raster data in terms of total number of data values in a block and the type of data value.Unsigned 8-bit C .

The stream of bytes is to be interpreted as a sequence of bytes which defines the data as indicated by the data type.true. The scheme for compressed data is described below. since the multiple file scheme has not been implemented.}Edms_VirtualBlockInfo. The ESRI GRID compression is a two stage runlength encoding. but not in the file.logvalid.1:loffset. and how to unpack the data from the block. This indicates whether the block actually contains valid data.ERDAS IMAGINE HFA File Format Edms_VirtualBlockInfo An Edms_VirtualBlockInfo object describes a single raster data block of a layer. The number of bytes in the block. the data are simply packed into the block one pixel value at a time. This allows blocks to exist in the map. {1:SfileCode.1:e2:no compression. The number indicates the file in which the block is located. 0=”no compression” 1=”ESRI GRID compression” No compression indicates that the data located at offset are uncompressed data.1:e2:false. how many bytes are in the data block. For uncompressed data the unpacking is straight forward. All non-integer data are uncompressed.1:lsize.1:LblockHeight. This points to the byte location in the file where the block data actually resides. It describes where to find the data in the file. Field Guide 497 . 0=”false” 1=”true” PTR LONG ENUM offset size logvalid ENUM compressionType This indicates the type of compression used for this block. For uncompressed blocks.ESRI GRID compression. Type SHORT fileCode Name Description This is included to allow expansion of the layer into multiple files. Each pixel is read from the block as indicated by its data type.compressionType. Currently this is always 0.

The compression scheme used by ERDAS IMAGINE is a two level run-length encoding scheme. If the data are an integral type, then the following steps are performed:

• The minimum and maximum values for a block are determined, and the minimum is subtracted from each of the values.
• The byte size of the output pixels is determined by examining the difference between the maximum and the minimum. If the difference is less than or equal to 256, then 8-bit data are used; if the difference is less than 65,536, then 16-bit data are used; otherwise 32-bit data are used.
• A run-length encoding scheme is used to encode runs of the same pixel value.

NOTE: No compression scheme is used if the data are non-integral.

The data minimum value occupies the first 4 bytes of the block. The number of run-length segments occupies the next 4 bytes, and the next 4 bytes are an offset into the block which indicates where the compressed pixel values begin. These values are encoded in the standard MIF format (unsigned long, or ULONG). The next byte indicates the number of bits per pixel (1, 2, 4, 8, 16, 32). The block is therefore laid out as:

min | num segments | data offset | numbits per value | data counts | data values

Following the header is the list of segment counts; there is one segment count per pixel value. Following the segment counts, if present, are the pixel values. The data values are compressed into the remaining space, packed into as many bits per pixel as indicated by the numbitspervalue field.

Each data count is encoded as follows. There may be 1, 2, 3, or 4 bytes per count. The first two bits of the first count byte contain 0, 1, 2, or 3, indicating that the count is contained in 1, 2, 3, or 4 bytes. The rest of the first byte (6 bits) represents the six most significant bits of the count; each following byte, if present, represents decreasing significance:

byte 0: byte count + high 6 bits | byte 1: next 8 bits | byte 2: next 8 bits | byte 3: next 8 bits

NOTE: This order is different than the rest of the package. This was done so that the high byte with the encoded byte count would be first in the byte stream. This pattern is repeated as many times as indicated by the numsegments field.
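A hedged C sketch of the segment-count decoding just described (the helper name is hypothetical; it assumes the 2-bit length prefix and most-significant-first ordering stated above):

    #include <stdint.h>
    #include <stddef.h>

    /* Decode one run-length segment count.  The top two bits of the first
     * byte give the number of additional bytes (0-3); the remaining six bits
     * are the most significant bits of the count, and any following bytes
     * are appended in order of decreasing significance. */
    static uint32_t decode_count(const unsigned char *p, size_t *bytesUsed)
    {
        unsigned extra = (p[0] >> 6) & 0x3;   /* 0..3 additional bytes    */
        uint32_t count = p[0] & 0x3F;         /* high 6 bits of the count */
        for (unsigned i = 1; i <= extra; i++)
            count = (count << 8) | p[i];
        *bytesUsed = (size_t)extra + 1;
        return count;
    }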

Basically. {1:lnumvirtualblocks. Edms_State The Edms_State describes the location of each of the blocks of a single layer of imagery. Currently this object is unused and reserved for future expansion. 1:tmodTime. 1:e2:no compression. 0=”no compression” 1=”ESRI GRID compression” No compression indicates that the data located at offset are uncompressed data.compressionType. The ESRI GRID compression is a two stage run-length encoding. The maximum block number in the group. This indicates the type of compression used for this block. this type is not being used and is reserved for future expansion.1:lnextobjectnum. The freelist consists of an array of min/max pairs which indicate unused contiguous blocks of data which lie within the allocated layer space.freelist. {1:Lmin. The number of pixels represented by one block.0:poEdms_FreeIDList.1:lnumobjectsperblock.RLC compression.}Edms_FreeIDList.}Edms_State Type LONG LONG LONG Name numvirtualblocks numobjectsperblock nextobjectnum Description The number of blocks in this layer. Type LONG LONG min max Name Description The minimum block number in the group.ERDAS IMAGINE HFA File Format Edms_FreeIDList An Edms_FreeIDList is used to track blocks which have been freed from the layer. Currently. this object is an index of all of the blocks in the layer. 0:poEdms_VirtualBlockInfo.blockinfo.1:Lmax. ENUM compressionType Field Guide 499 . The stream of bytes is to be interpreted as a sequence of bytes which defines the data as indicated by the data type.

Edms_FreeIDList p freelist TIME modTime Edsc_Table An Edsc_Table is a base node used to store columns of information. Type LONG Name numRows Description This defines the number of rows in the table. 1:dminLimit. Table 32 describes how the binning functions are used.1:e4:direct. Currently. 0=”direct” 1=”linear” 2=” exponential” 3=”explicit” DOUBLE DOUBLE BASEDATA minLimit maxLimit binLimits The lowest value defined by the bin function. {1:lnumRows.linear.Type Edms_VirtualBlockInfo p Name blockinfo Description This is the table of entries which describes the state and location of each block in the layer. this type is not being used and is reserved for future expansion. The highest value defined by the bin function.} Edsc_Table. Type LONG ENUM Name numBins binFunction Type Description The number of bins. This is the time of the last modification to this layer.} Edsc_BinFunction.1:*bbinLimits.logarithmic. 500 ERDAS . {1:lnumBins.binFunction Type. The limits used to define the bins. Edsc_BinFunction The Edsc_BinFunction describes how pixel values from the associated layer are to be mapped into an index for the columns.explicit. The type of bin function.1:dmaxLimit. This serves simply as a parent node for each of the columns which are a part of the table.

Field Guide 501 . The data type of this column 0=”integer” (EMIF_T_LONG) 1=”real” (EMIF_T_DOUBLE) 2=”complex” (EMIF_T_DCOMPLEX) 3=”string” (EMIF_T_CHAR) ENUM dataType LONG maxNumChars The maximum string length (for string data only). This allows a very large range of data. The types of information stored in columns are given in the following table. then the index is 1. The data are compared against the limits set in the binLimit table.1:LcolumnDataPtr. If the pixel is less than or equal to the next value. If the pixel is less than or equal to the first value. then the index is 0. {1:lnumRows. Starting point of column data in the file.1:e4:integer. or even floating point data. etc.string. 1:lmaxNumChar.real. The formula used is index = numBins*(log(1+(value-minLimit)) / (maxLimit-minLimit). EXPLICIT Edsc_Column The columns of information which are stored in a table are stored in this format. then value 0 is indexed into location 0. For example.} Edsc_Column. Type LONG PTR Name numRows columnDataPtr Description The number of rows in this column.dataType. LINEAR EXPONENTIAL Exponential binning is used to compress data with a large dynamic range. Linear binning means that the pixel value is first scaled by the formula: index = (value-minLimit)*numBins/(maxLimit-minLimit). Explicit binning is used to map the data into indices using an arbitrary set of boundaries. etc. It is 0 if the type is not a String. if the minimum value is zero. to be used to index into a table. This points to the location in the file which contains the data.ERDAS IMAGINE HFA File Format Table 32: Usage of Binning Functions Bin Type DIRECT Description Direct binning means that the pixel value minus the minimum is used as is with no translation to index into the columns. 1 is indexed into 1.comples.
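A hedged C sketch of the direct and linear cases from Table 32 (the helper names are hypothetical; the exponential and explicit cases would follow the formula and the binLimits table described above, and in practice the result would also be clamped to the range 0 to numBins - 1, which the table does not spell out):

    /* Map a pixel value to a column index for the DIRECT and LINEAR
     * bin functions of an Edsc_BinFunction. */
    static long bin_index_direct(double value, double minLimit)
    {
        return (long)(value - minLimit);
    }

    static long bin_index_linear(double value, double minLimit,
                                 double maxLimit, long numBins)
    {
        /* index = (value - minLimit) * numBins / (maxLimit - minLimit) */
        return (long)((value - minLimit) * (double)numBins /
                      (maxLimit - minLimit));
    }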

The range of the value is from 0. The range of the value is from 0. This is found in the descriptor table of almost every thematic layer. It defines the blue component of the color for each class. This is found in the descriptor table of almost every thematic layer. This is found in the descriptor table of almost every thematic layer. This is found in the descriptor table of most continuous raster layers. This is the Y coordinate for the point. The range of the value is from 0.0 to 1. It defines the name for each class.5 means that 50% of the underlying pixel would show through. This is found in the GCP_Table in files which have ground control points.0.0. This is found in the GCP_Table in files which have ground control points. It defines the green component of the color for each class. It defines the number of pixels which fall into each bin. Class_Names string Red real Green real Blue real Opacity real Contrast real GCP_Names string GCP_xCoords real GCP_yCoords real GCP_Color string 502 ERDAS .0. This is found in the GCP_Table in files which have ground control points. A value of 0. It defines the opacity associated with the class. A value of 0 means that the color will be solid. This is the name of the color that is used to display this point.0 means that all of the pixel value in the underlying layer would show through. This is found in the descriptor table of almost every thematic layer. The table is stored as normalized values from 0. This is the table of names for the points. This is the X coordinate for the point. It is used to define an intensity stretch which is normally used to improve contrast.0. It defines the red component of the color for each class.0 to 1. This is found in the descriptor table of almost every thematic layer.0 to 1. This is found in the GCP_Table in files which have ground control points. and 1.0 to 1.Name Histogram Data Type real Description This is found in the descriptor table of almost every layer.
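As an example of reading one of these columns, the following hedged C sketch loads a Histogram column (data type “real”, i.e. EMIF_T_DOUBLE values) using the numRows and columnDataPtr fields from its Edsc_Column record. The helper name is hypothetical, and it assumes a little-endian host so the IEEE doubles can be read directly; a big-endian host would need to swap the 8 bytes of each value.

    #include <stdio.h>
    #include <stdlib.h>

    /* Read a "real" (EMIF_T_DOUBLE) Edsc_Column such as Histogram.
     * numRows and columnDataPtr come from the column's Edsc_Column record. */
    static double *read_real_column(FILE *fp, long numRows, long columnDataPtr)
    {
        double *values = malloc((size_t)numRows * sizeof(double));
        if (!values) return NULL;
        fseek(fp, columnDataPtr, SEEK_SET);
        /* Assumption: little-endian host, matching the file's byte order. */
        if (fread(values, sizeof(double), (size_t)numRows, fp) != (size_t)numRows) {
            free(values);
            return NULL;
        }
        return values;
    }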

AUTO-APPLY. Each Eded_ColumnAttributes_1 is a child of the Edsc_Column containing the data for the descriptor column.formulamode. 0 = NO 1 = YES Alignment of this column in CellArray.1:e5:NO_COLOR. The name of the descriptor column. The properties for a color column are stored as a child of the Eded_ColumnAttributes_1 for the red component of the color column.0:pcname.alignment.BLUE. {1:lposition.RIGHT. Mode for formula application. 0 = LEFT 1 = CENTER 2 = RIGHT The format for display of numeric data. whether the column is editable.editable.GREEN.CENTER. and width of the column.COLOR. The positions for all descriptor columns are sorted and the columns are displayed in ascending order. The properties include the position of the descriptor column within the CellArray.1:dcolumnwidth. format. and whether the column is a component of a color column.0:pcgreenname.APPLY.colorflag. 0:pcunits.ERDAS IMAGINE HFA File Format Eded_ColumnAttributes_1 The Eded_ColumnAttributes_1 stores the descriptor column properties which are used by the Raster Attribute Editor for the format and layout of the descriptor column display in the Raster Attribute Editor CellArray. Specifies whether this column is editable.0:pcformula. for all columns except color columns. The width of the CellArray column The name of the units for numeric data stored in the column. 0 = DEFAULT 1 = APPLY 2 = AUTO-APPLY The formula for the column. 1:e3:DEFAULT. CHAR P name ENUM editable ENUM alignment CHAR P ENUM format formulamode CHAR P DOUBLE CHAR P formula columnwidth units Field Guide 503 . the formula (if any) for the column. 1:e3:LEFT. the name. alignment.0:pcformat.}Eded_ColumnAttributes_1. the units (for numeric data).1:e2:FALSE.TRUE. Color columns have no corresponding Edsc_Column.RED. 0:pcbluename. This is the same as the name of the parent Edsc_Column node. Type LONG position Name Description The position of this descriptor column in the Raster Attribute Editor CellArray.

Empty string for other column types. {1:dminimum. This may exclude values as defined by the user. The median of all of the pixels in the image.1:dstddev. The maximum of all of the pixels in the image. This may exclude values as defined by the user. This may exclude values as defined by the user. This may exclude values as defined by the user.} Esta_Statistics.1d:mode. CHAR P greenname CHAR P bluename Esta_Statistics The Esta_Statistics is used to describe the statistics for a layer. The standard deviation of the pixels in the image. Type DOUBLE DOUBLE DOUBLE DOUBLE DOUBLE DOUBLE Name minimum maximum mean median mode stddev Description The minimum of all of the pixels in the image. a component of a color column. Empty string for other column types. This may exclude values as defined by the user. 0 = NO_COLOR 1 = RED 2 = GREEN 3 = BLUE 4 = COLOR Name of green component column associated with color column.Type ENUM colorflag Name Description Indicates whether column is a color column. or a normal column.1:dmedian.1:dmaximum. The mean of all of the pixels in the image. 504 ERDAS .1:dmean. This may exclude values as defined by the user. Name of blue component column associated with color column. The mode of all of the pixels in the image.

{1:LskipFactorX.}Esta_ExcludedValues. Type BASEDATA * Name valueBD Description A basedata structure containing the excluded values Field Guide 505 . Type BASEDATA Name covariance Description A basedata structure containing the covariance matrix Esta_SkipFactors The Esta_SkipFactors object is used to record the skip factors that were used when the statistics or histogram was calculated for a raster layer or when the covariance was calculated for an .img file.}Esta_Covariance.ERDAS IMAGINE HFA File Format Esta_Covariance The Esta_Covariance object is used to record the covariance matrix for the layers in an .img file. Type LONG LONG Name skipFactorX skipFactorY Description The horizontal sampling interval used for statistics measured in image columns/sample The vertical sampling interval used for statistics measured in image rows/sample Esta_ExcludedValues The Esta_ExcludedValues object is used to record the values that were excluded from consideration when the statistics or histogram was calculated for a raster layer or when the covariance was calculated for a .}Esta_SkipFactors. {1:*bvalueBD.img file {1:bcovariance.1:LskipFactorY.

The seven parameters of a parametric datum which describe the translations.1:deSquared. The radius of the spheroid in meters.1:e3:EPRJ_DATUM_PARAMETRIC.1:db.}Eprj_Spheroid. NAD83 and HARN.1:da.img file.EPRJ_DATUM_GRID.}Eprj_Datum. DOUBLE DOUBLE DOUBLE DOUBLE a b eSquared radius 506 ERDAS . {0:pcsphereName. Type CHAR ENUM Name datumname type Description The datum name. EPRJ_DATUM_REGRESSION.1:dradius. {0:pcdatumname.type. The semi-major axis of the ellipsoid in meters. Type CHAR p Name sphereName Description The name of the spheroid/ellipsoid. The semi-minor axis of the ellipsoid in meters.0:pcgridname.Eprj_Datum The Eprj_Datum object is used to record the datum information which is part of the projection information for an . The name of a grid datum file which stores the coordinate shifts among North America Datums NAD27. The datum type which could be one of three different types: parametric type.tab. The eccentricity of the ellipsoid. DOUBLE params CHAR gridname Eprj_Spheroid The Eprj_Spheroid is used to describe spheroid parameters used to describe the shape of the earth. squared. This name is can be found in: <$IMAGINE_HOME>/etc/spheroid.0:pdparams. rotations and scale change between the current datum and the reference datum WGS84. grid type and regression type.
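The spheroid parameters are partly redundant, since the squared eccentricity can be derived from the two axes. The relation below is standard geodesy rather than something stated in the text, so treat it only as a cross-check when reading or writing Eprj_Spheroid records.

    /* Standard relation between the semi-major axis a, the semi-minor axis b,
     * and the squared eccentricity stored in Eprj_Spheroid. */
    static double spheroid_e_squared(double a, double b)
    {
        return (a * a - b * b) / (a * a);
    }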

}Eprj_ProParameters.proSpheroid. 0:pcproExeName.1:lproZone.1:lproNumber.ERDAS IMAGINE HFA File Format Eprj_ProParameters The Eprj_Parameters is used to define the map projection for a layer.EPRJ_EXTERNAL. {1:e2:EPRJ_INTERNAL. Field Guide 507 . 0=”EPRJ_INTERNAL” 1=” EPRJ_EXTERNAL” LONG proNumber The projection number for internal projections.proType. 1:*oEprj_Spheroid.0:pcproName. The current internal projections are: 0=”Geographic(Latitude/Longitude)” 1=”UTM” 2=”State Plane” 3=”Albers Conical Equal Area” 4=”Lambert Conformal Conic” 5=”Mercator” 6=”Polar Stereographic” 7=”Polyconic” 8=”Equidistant Conic” 9=”Transverse Mercator” 10=”Sterographic” 11=”Lambert Azimuthal Equal-area” 12=”Azimuthal Equidistant” 13=”Gnomonic” 14=”Orthographic” 15=”General Vertical Near-Side Perspective” 16=”Sinusoidal” 17=”Equirectangular” 18=”Miller Cylindrical” 19=”Van der Grinten I” 20=”Oblique Mercator (Hotine)” 21=”Space Oblique Mercator” 22=”Modified Transverse Mercator” CHAR p proExeName The name of the executable to run for an external projection.0:pdproParams. Type ENUM Name proType Description This defines whether the projection is internal or external.

LONG DOUBLE p Eprj_Spheroid * proZone proParams proSpheroid The following table defines the contents of the proParams array which is defined above. See the proceeding description for the Eprj_Spheroid object. n is the index into the array. -1=South 0: 0=NAD27. Name 0 1 2 3 ”Geographic(Latitude/Longitude)” ”UTM” ”State Plane” ”Albers Conical Equal Area” None Used 3: 1=North. The array of parameters for the projection. This will be one of the names given above in the description of proNumber. 1=NAD83 2: Latitude of 1st standard parallel 3: Latitude of 2nd standard parallel 4: Longitude of central meridian 5: Latitude of origin of projection 6: False Easting 7: False Northing 4 ”Lambert Conformal Conic” 2: Latitude of 1st standard parallel 3: Latitude of 2nd standard parallel 4: Longitude of central meridian 5: Latitude of origin of projection 6: False Easting 7: False Northing 5 ”Mercator” 4: Longitude of central meridian 5: Latitude of origin of projection 6: False Easting 7: False Northing Parameters 508 ERDAS . The Parameters column defines the meaning of the various elements of the proParams array for the different projections.Type CHAR p Name proName Description The name of the projection. The parameters of the spheroid used to approximate the earth. Each one is described by one or more statements of the form n: Description. The zone number for internal State Plane or UTM projections.

7 ”Polyconic” 4: Longitude of central meridian 5: Latitude of origin of projection 6: False Easting 7: False Northing 8 ”Equidistant Conic” 2: Latitude of standard parallel (Case 0) 2: Latitude of 1st Standard Parallel (Case 1) 3: Latitude of 2nd standard Parallel (Case 1) 4: Longitude of central meridian 5: Latitude of origin of projection 6: False Easting 7: False Northing 8: 0=Case 0. 1=Case 1. 9 ”Transverse Mercator” 2: Scale Factor at Central Meridian 4: Longitude of center of projection 5: Latitude of center of projection 6: False Easting 7: False Northing 10 ”Stereographic” 4: Longitude of center of projection 5: Latitude of center of projection 6: False Easting 7: False Northing 11 ”Lambert Azimuthal Equal-area” 4: Longitude of center of projection 5: Latitude of center of projection 6: False Easting 7: False Northing Field Guide 509 . 5: Latitude of true scale. 6: False Easting 7: False Northing.ERDAS IMAGINE HFA File Format Name 6 ”Polar Stereographic” Parameters 4: Longitude directed straight down below pole of map.

6: False Easting 7: False Northing 18 ”Miller Cylindrical” 4: Longitude of central meridian 6: False Easting 7: False Northing 19 ”Van der Grinten I” 4: Longitude of central meridian 6: False Easting 7: False Northing 510 ERDAS .Name 12 ”Azimuthal Equidistant” Parameters 4: Longitude of center of projection 5: Latitude of center of projection 6: False Easting 7: False Northing 13 ”Gnomonic” 4: Longitude of center of projection 5: Latitude of center of projection 6: False Easting 7: False Northing 14 ”Orthographic” 4: Longitude of center of projection 5: Latitude of center of projection 6: False Easting 7: False Northing 15 ”General Vertical Near-Side Perspective 2: Height of perspective point above sphere. 4: Longitude of center of projection 5: Latitude of center of projection 6: False Easting 7: False Northing 16 ”Sinusoidal” 4: Longitude of central meridian 6: False Easting 7: False Northing 17 ”Equirectangular” 4: Longitude of central meridian 5: Latitude of True Scale.

8: Longitude of 1st Point defining central line (Case 0) 9: Latitude of 1st Point defining central line (Case 0) 10: Longitude of 2nd Point defining central line. 6: False Easting 7: False Northing. 12: 0=Case 0.ERDAS IMAGINE HFA File Format Name 20 ”Oblique Mercator (Hotine)” Parameters 2: Scale Factor at center of projection 3: Azimuth east of north for central line. (Case 0) 11: Latitude of 2nd Point defining central line (Case 0). 1=Case 1 21 ”Space Oblique Mercator” 4: Landsat Vehicle ID (1-5) 5: Orbital Path Number (1-251 or 1-233) 6: False Easting 7: False Northing 22 ”Modified Transverse Mercator” 6: False Easting 7: False Northing Field Guide 511 . (Case 1) 4: Longitude of point of origin (Case 1) 5: Latitude of point of origin.

Eprj_Coordinate

An Eprj_Coordinate is a pair of doubles used to define an X and Y location.

{1:dx,1:dy,}Eprj_Coordinate,

Type    Name  Description
DOUBLE  x     The X value of the coordinate.
DOUBLE  y     The Y value of the coordinate.

Eprj_Size

The Eprj_Size is a pair of doubles used to define a rectangular size.

{1:dwidth,1:dheight,}Eprj_Size,

Type    Name    Description
DOUBLE  width   The width of the rectangle (the X direction).
DOUBLE  height  The height of the rectangle (the Y direction).

Eprj_MapInfo

The Eprj_MapInfo object is used to define the basic map information for a layer. It defines the map coordinates for the center of the upper left and lower right pixels, as well as the cell size and the name of the map projection.

{0:pcproName,1:*oEprj_Coordinate,upperLeftCenter,1:*oEprj_Coordinate,lowerRightCenter,1:*oEprj_Size,pixelSize,0:pcunits,}Eprj_MapInfo,

Type               Name              Description
CHAR p             proName           The name of the projection.
Eprj_Coordinate *  upperLeftCenter   The coordinates of the center of the upper left pixel.
Eprj_Coordinate *  lowerRightCenter  The coordinates of the center of the lower right pixel.
Eprj_Size *        pixelSize         The size of the pixel in the image.
CHAR *             units             The units of the above values.
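The map information above is enough to georeference a pixel. A hedged C sketch follows (hypothetical names; it assumes the common convention that map X increases with pixel column and map Y decreases with pixel row, which should be confirmed against the actual upperLeftCenter and lowerRightCenter values in a given file):

    /* Convert a pixel (column, row) to map coordinates using Eprj_MapInfo
     * values.  Coordinates refer to pixel centers, matching upperLeftCenter. */
    typedef struct { double x, y; } MapPoint;

    static MapPoint pixel_to_map(double col, double row,
                                 double ulCenterX, double ulCenterY,
                                 double pixelWidth, double pixelHeight)
    {
        MapPoint p;
        p.x = ulCenterX + col * pixelWidth;    /* assumption: X grows eastward    */
        p.y = ulCenterY - row * pixelHeight;   /* assumption: Y shrinks southward */
        return p;
    }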

titleList. The ordered list of powers for the polynomial. As a child of an Eimg_Layer. The number of terms in the polynomial. The polynomial vectors.}Efga_Polynomial.1:Lnumdimtransforms. Field Guide 513 .1:bpolycoefmtx. {1:Lorder. XForm1. XFormi.1:bpolycoefvector.1:numdimpolynomial. {0:poEmif_String.. 1:*exponentList. .ERDAS IMAGINE HFA File Format Efga_Polynomial The Efga_Polynomial is used to store transformation coefficients created by the IMAGINE GCP Tool.. Type LONG LONG LONG LONG LONG * BASEDATA BASEDATA order numdimtransform numdimpolynomial termcount exponentlist polycoefmtx polycoefvector Name Description The order of the polynomial. The number of dimensions of the polynomial (always 2). it will have the name MapToPixelXForm.. The polynomial coefficients. The components are written as children of the Exfr_GenericXFormHeader with names XForm0. The design of component XFormi is defined by the specific GeometricModels DLL instance that controls XForms of the title specified as the ith title string in the Exfr_GenericXFormHeader unless XFormi is of type Exfr_ASCIIXform (see below). where i is the number of components listed by the Exfr_GenericXFormHeader. The number of dimensions of the transformation (always 2). Type Emif_String titleList Name Description The list of titles of the component XForms that are children of this node.1:Ltermcount.}Exfr_GenericXFormHeader. Exfr_GenericXFormHeader The Exfr_GenericXFormHeader contains a list of GeometricModels titles for the component XForms making up a composite Exfr_XForm.

{0:pcxForm. This is the nth order polynomial coefficient used to convert from map coordinates to pixel coordinates.” The “Calibration” node will have the four children described below. Type CHAR p xForm Name Description An ASCII string representation of an XForm component. Calibration_Node An object of type Calibration_Node is an empty object — it contains no data.Exfr_ASCIIXForm An Exfr_ASCIIXForm is an ASCII string representation of an Exfr_XForm component controlled by a DLL that does not have an XFormConvertToMIF function defined but does define an XFormSprintf function.}Exfr_ASCIIXForm. A node of this type will be a child of the root node and will be named “Calibration. The nominal map information associated with the transformation. Node Projection Map_Info ObjectType Eprj_ProParameters Eprj_MapInfo Description The projection associated with the output coordinate system. This is the nth order polynomial used to convert from pixel coordinates to map coordinates InversePolynomial Efga_Polynomial ForwardPolynomial Efga_Polynomial 514 ERDAS .There is no dictionary definition for this object type. A node of this type simply serves as the parent node of four related child objects. The children of the Calibration_Node are used to provide information which converts pixel coordinates to map coordinates and vice versa.

Vector Layers

The vector data structure in ERDAS IMAGINE is based on the ARC/INFO data model (developed by ESRI, Inc.). See "CHAPTER 2: Vector Layers" for more information on vector layers. Refer to the ARC/INFO users manuals for detailed information on the vector data structure.


APPENDIX C
Map Projections

Introduction

This appendix is an alphabetical listing of the map projections supported in ERDAS IMAGINE. It is divided into two sections:

• USGS projections, beginning on page 518
• External projections, beginning on page 585

The external projections were implemented outside of ERDAS IMAGINE so that users could add to these using the Developers’ Toolkit. The projections in each section are presented in alphabetical order. For general information about map projection types, refer to "CHAPTER 11: Cartography".

The information in this appendix is adapted from:

• Map Projections for Use with the Geographic Information System (Lee and Walsh 1984)
• Map Projections—A Working Manual (Snyder 1987)

Other sources are noted in the text.

Rectify an image to a particular map projection using the ERDAS IMAGINE Rectification tools. Use the rectification tools to actually georeference an image to a new map projection system. View, add, or change projection information using the Image Information option.

NOTE: You cannot rectify to a new map projection using the Image Information option. You should change map projection information using Image Information only if you know the information to be incorrect.

USGS Projections The following USGS map projections are supported in ERDAS IMAGINE and are described in this section: Albers Conical Equal Area Azimuthal Equidistant Equidistant Conic Equirectangular General Vertical Near-side Perspective Geographic (Lat/Lon) Gnomonic Lambert Azimuthal Equal Area Lambert Conformal Conic Mercator Miller Cylindrical Modified Transverse Mercator Oblique Mercator (Hotine) Orthographic Polar Stereographic Polyconic Sinusoidal Space Oblique Mercator State Plane Stereographic Transverse Mercator UTM Van der Grinten I 518 ERDAS .

Linear scale is true on the standard parallels. but not at the pole. the two standard parallels are 29.S. the standard parallels are 8˚N and 18˚N. There is no areal deformation. Maximum scale error is 1. United States Base Map (48 states). When this projection is used for the continental U.5˚ and 45. Graticule spacing Linear scale Uses The Albers Conical Equal Area projection is mathematically based on a cone that is conceptually secant on two parallels. but are farthest apart between the standard parallels and closer together on the north and south edges. Parallels are arcs of concentric circles concave toward a pole. Maps based on the Albers Conical Equal Area for Alaska use standard parallels 55˚N and 65˚N. and the standard parallels are correct in scale and in every direction. It retains its properties at various scales.5˚N. Field Guide 519 .5˚ North.25% on a map of the United States (48 states) with standard parallels of 29. for Hawaii. and individual sheets can be joined along their edges. Albers Conical Equal Area has concentric arcs for parallels and equally spaced radii for meridians. Like other conics. Used for large countries with an eastwest orientation. The North or South Pole is represented by an arc.e. Parallel spacing decreases away from the standard parallels and increases between them. The National Atlas of the United States.USGS Projections Albers Conical Equal Area Summary Construction Property Meridians Parallels Cone Equal area Meridians are straight lines converging on the polar axis. Meridians and parallels intersect each other at right angles. Parallels are not equally spaced. The graticule spacing preserves the property of equivalence of area. Meridian spacing is equal on the standard parallels and decreases toward the poles. in the National Atlas of 1970. and the Geologic map of the United States are based on the standard parallels of 29.S. Used for thematic maps. there is no angular distortion (i. Albers Conical Equal Area is the projection exclusively used by the USGS for sectional maps of all 50 states of the U. This projection produces very accurate area and distance measurements in the middle latitudes (Figure 189). Thus.5˚N and 45. This projection possesses the property of equal area. Albers Conical Equal Area is well-suited to countries or continents where north-south depth is about 3/5 the breadth of east-west... meridians intersect parallels at right angles) and conformality exists along the standard parallels.5˚N.5˚N and 45. The graticule is symmetrical. Thus.

the standard parallels. Note that the first standard parallel is the southernmost.e. It is very often convenient to make them large enough to prevent negative coordinates from occurring within the region of the map projection." Latitude of 1st standard parallel Latitude of 2nd standard parallel Enter two values for the desired control lines of the projection. the origin of the rectangular coordinate system should fall outside of the map projection to the south and west. Then.Prompts The following prompts display in the Projection Chooser once Albers Conical Equal Area is selected. False easting at central meridian False northing at origin Enter values of false easting and false northing. i. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography. Spheroid Name: Datum Name: Select the spheroid and datum to use. Respond to the prompts as described. define the origin of the map projection in both spherical and rectangular coordinates. Longitude of central meridian Latitude of origin of projection Enter values for longitude of the desired central meridian and latitude of the origin of projection. corresponding to the intersection of the central meridian and the latitude of the origin of projection.. These values must be in meters. 520 ERDAS . That is.

Field Guide 521 . the standard parallels are 20˚N and 60˚N.USGS Projections Figure 189: Albers Conical Equal Area Projection In Figure 189. Note the change in spacing of the parallels.

Parallels Oblique aspect: the parallels are complex curves. Equatorial aspect: the meridians are complex curves concave toward a straight central meridian. Meridians Oblique aspect: the meridians are complex curves concave toward the point of tangency. which is a circle.Azimuthal Equidistant Summary Construction Property Plane Equidistant Polar aspect: the meridians are straight lines radiating from the point of tangency.S. Angular and area deformation increase away from the point of tangency. Polar aspect: the parallels are concentric circles. except the outer meridian of a hemisphere. Parallel spacing is equidistant. The Azimuthal Equidistant projection is used for radio and seismic work. Equatorial aspect: the parallels are complex curves concave toward the nearest pole. Polar aspect: the meridian spacing is equal and increases away from the point of tangency. as every place in the world will be shown at its true distance and direction from the point of tangency. Graticule spacing Linear scale Oblique and equatorial aspects: linear scale is true from the point of tangency. In all aspects. Polar aspect: linear scale is true from the point of tangency along the meridians only. The U. the equator is straight. Geological Survey uses the oblique aspect in the National Atlas and for large-scale mapping of Micronesia. Uses 522 ERDAS . the projection shows distances true to scale when measured between the point of tangency and any other point on the map. The polar aspect is used as the emblem of the United Nations.

Respond to the prompts as described. Longitude of center of projection Latitude of center of projection Enter values for the longitude and latitude of the desired center of the projection. That is. Meridians are equally spaced. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. for example—and distance measurements will be true from that central point. though the other hemisphere can be portrayed. This projection can also be used to center on any point on the earth—a city. Also. Linear scale distortion is moderate and increases toward the periphery. the origin of the rectangular coordinate system should fall outside of the map projection to the south and west. This projection is used mostly for polar projections because latitude rings divide meridians at equal intervals with a polar aspect (Figure 190). False easting False northing Enter values of false easting and false northing corresponding to the center of the projection. but generally less than one hemisphere is portrayed. Distances are not correct or true along parallels. Prompts The following prompts display in the Projection Chooser if Azimuthal Equidistant is selected. Field Guide 523 . and all distances and directions are shown accurately from the central point." Define the center of the map projection in both spherical and rectangular coordinates. Spheroid Name: Datum Name: Select the spheroid and datum to use. and the projection is neither equal area nor conformal. straight lines radiating from the center of this projection represent great circles. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography. It has true direction and true distance scaling from the point of tangency. but is much distorted.USGS Projections The Azimuthal Equidistant projection is mathematically based on a plane tangent to the earth. The entire earth can be represented.

Figure 190: Polar Aspect of the Azimuthal Equidistant Projection

This projection is commonly used in atlases for polar maps.

Equidistant Conic

Summary
Construction: Cone
Property: Equidistant
Meridians: Meridians are straight lines converging on a polar axis but not at the pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole.
Graticule spacing: Meridian spacing is true on the standard parallels and decreases toward the pole. Parallels are placed at true scale along the meridians. Meridians and parallels intersect each other at right angles. The graticule is symmetrical.
Linear scale: Linear scale is true along all meridians and along the standard parallel or parallels.
Uses: The Equidistant Conic projection is used in atlases for portraying mid-latitude areas. It is good for representing regions with a few degrees of latitude lying on one side of the equator. It was used in the former Soviet Union for mapping the entire country (ESRI 1992).

With Equidistant Conic (Simple Conic) projections, correct distance is achieved along the line(s) of contact with the cone. It can be used with either one (A) or two (B) standard parallels. This projection is neither conformal nor equal area, but the north-south scale along meridians is correct. The North or South Pole is represented by an arc, and parallels are equidistantly spaced. Because scale distortion increases with increasing distance from the line(s) of contact, the Equidistant Conic is used mostly for mapping regions predominantly east-west in extent. The USGS uses the Equidistant Conic in an approximate form for a map of Alaska.

Prompts
The following prompts display in the Projection Chooser if Equidistant Conic is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

One or two standard parallels?
Latitude of standard parallel
Enter one or two values for the desired control line(s) of the projection, i.e., the standard parallel(s). Note that if two standard parallels are used, the first is the southernmost.

Define the origin of the projection in both spherical and rectangular coordinates.

Longitude of central meridian
Latitude of origin of projection
Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
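As a worked illustration of how one or two standard parallels enter the projection, the sketch below uses the standard spherical Equidistant Conic forward equations (after Snyder). It is a simplified example under a spherical-earth assumption; the function name and defaults are hypothetical.

    import math

    def equidistant_conic(lat, lon, lat0, lon0, lat1, lat2=None, radius=6370997.0):
        # Spherical Equidistant (Simple) Conic forward equations; degrees in, meters out.
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        phi1 = math.radians(lat1)
        if lat2 is None:
            n = math.sin(phi1)                           # one standard parallel
        else:
            phi2 = math.radians(lat2)                    # two standard parallels
            n = (math.cos(phi1) - math.cos(phi2)) / (phi2 - phi1)
        G = math.cos(phi1) / n + phi1
        rho = radius * (G - phi)      # parallels are placed at true scale along meridians
        rho0 = radius * (G - phi0)
        theta = n * (lam - lam0)
        return rho * math.sin(theta), rho0 - rho * math.cos(theta)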

Equirectangular (Plate Carrée)

Summary
Construction: Cylinder
Property: Compromise
Meridians: All meridians are straight lines.
Parallels: All parallels are straight lines.
Graticule spacing: Equally spaced parallel meridians and latitude lines cross at right angles.
Linear scale: The scale is correct along all meridians and along the standard parallels (ESRI 1992).
Uses: Used for simple portrayals of the world or regions with minimal geographic data, such as index maps (ESRI 1992). Best used for city maps.

Also called Simple Cylindrical, Equirectangular is composed of equally spaced, parallel meridians and latitude lines that cross at right angles on a rectangular map. Each rectangle formed by the grid is equal in area, shape, and size. Equirectangular is not conformal nor equal area, but it does contain less distortion than the Mercator in polar regions. Scale is true on all meridians and on the central parallel. The equator is the standard parallel, true to scale and free of distortion; however, this projection may be centered anywhere. Directions due north, south, east, and west are true, but all other directions are distorted.

Because of its simplicity, this projection is valuable for its ease in computer plotting. It is useful for mapping small areas, such as city maps, or other small areas with map scales small enough to reduce the obvious distortion. The USGS uses Equirectangular for index maps of the conterminous U.S., with insets of Alaska, Hawaii, and various islands; neither scale nor projection is marked to avoid implying that the maps are suitable for normal geographic information.

Prompts
The following prompts display in the Projection Chooser if Equirectangular is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian
Latitude of true scale
Enter a value for the longitude of the desired central meridian to center the projection and the latitude of true scale.

False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
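The simplicity that makes Equirectangular easy to plot by computer is visible in its forward equations, sketched below for a spherical earth. The function name and defaults are illustrative assumptions, not the ERDAS IMAGINE implementation.

    import math

    def equirectangular(lat, lon, lon0=0.0, lat_true=0.0, radius=6370997.0):
        # Plate Carree forward equations; lat_true is the latitude of true scale
        # (the equator by default), so meridians and parallels form an even grid.
        x = radius * math.radians(lon - lon0) * math.cos(math.radians(lat_true))
        y = radius * math.radians(lat)
        return x, y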

General Vertical Near-side Perspective

Summary
Construction: Plane
Property: Compromise
Meridians: The central meridian is a straight line in all aspects. In the polar aspect all meridians are straight. In the equatorial aspect the equator is straight (ESRI 1992). Meridians are elliptical arcs that are not evenly spaced. Other meridians and parallels are usually arcs of circles or ellipses, but some may be parabolas or hyperbolas.
Parallels: Polar aspect: parallels are concentric circles that are not evenly spaced. Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly spaced.
Graticule spacing: Meridians are evenly spaced and spacing increases from the center of the projection. Parallels on vertical polar aspects are concentric circles. Nearly all other parallels are elliptical arcs, except that certain angles of tilt may cause some parallels to be shown as parabolas or hyperbolas. The central meridian and a particular parallel (if shown) are straight lines.
Linear scale: Radial scale decreases from true scale at the center to zero on the projection edge. The scale perpendicular to the radii decreases, but not as rapidly (ESRI 1992).
Uses: Often used to show the earth or other planets and satellites as seen from space. Used as an aesthetic presentation, rather than for technical applications (ESRI 1992).

General Vertical Near-side Perspective presents a picture of the earth as if a photograph were taken at some distance less than infinity. It is a variation of the General Perspective projection in which the “camera” precisely faces the center of the earth. Like all perspective projections, General Vertical Near-side Perspective cannot illustrate the entire globe on one map—it can represent only part of one hemisphere. The map user simply identifies area of coverage, distance of view, and angle of view.

Prompts
The following prompts display in the Projection Chooser if General Vertical Near-side Perspective is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Height of perspective point above sphere
Enter a value for the desired height of the perspective point above the sphere in the same units as the radius.

Then, define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
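The role of the "height of perspective point" prompt can be illustrated with the standard spherical forward equations for a vertical near-side perspective (after Snyder). This is a hedged sketch under a spherical-earth assumption; the function name is hypothetical.

    import math

    def vertical_perspective(lat, lon, lat0, lon0, height, radius=6370997.0):
        # Spherical General Vertical Near-side Perspective forward equations.
        # height is the perspective point's height above the sphere, in the same
        # units as the radius (meters here), matching the prompt described above.
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        P = 1.0 + height / radius          # distance of the viewpoint in earth radii
        cos_c = (math.sin(phi0) * math.sin(phi) +
                 math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
        if cos_c < 1.0 / P:
            return None                    # the point lies beyond the visible horizon
        k = (P - 1.0) / (P - cos_c)
        x = radius * k * math.cos(phi) * math.sin(lam - lam0)
        y = radius * k * (math.cos(phi0) * math.sin(phi) -
                          math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
        return x, y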

Geographic (Lat/Lon)

The Geographic is a spherical coordinate system composed of parallels of latitude (Lat) and meridians of longitude (Lon) (Figure 191). Both divide the circumference of the earth into 360 degrees, which are further subdivided into minutes and seconds (60 sec = 1 minute, 60 min = 1 degree).

Because the earth spins on an axis between the North and South Poles, this allows construction of concentric, parallel circles, with a reference line exactly at the north-south center, termed the equator. The series of circles north of the equator are termed north latitudes and run from 0˚ latitude (the equator) to 90˚ North latitude (the North Pole), and similarly southward. Position in an east-west direction is determined from lines of longitude. These lines are not parallel and they converge at the poles; however, they intersect lines of latitude perpendicularly.

Unlike the equator in the latitude system, there is no natural zero meridian. In 1884, it was finally agreed that the meridian of the Royal Observatory in Greenwich, England, would be the prime meridian. Thus, the origin of the geographic coordinate system is the intersection of the equator and the prime meridian. Note that the 180˚ meridian is the international date line.

If the user chooses Geographic from the Projection Chooser, the following prompts will display:

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Note that in responding to prompts for other projections, values for longitude are negative west of Greenwich and values for latitude are negative south of the equator.
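The degree-minute-second subdivision and the sign convention for values entered in the Projection Chooser can be summarized in a few lines of Python. This is a small illustrative helper, not part of ERDAS IMAGINE.

    def dms_to_decimal(degrees, minutes, seconds, direction):
        # 60 sec = 1 minute and 60 min = 1 degree; longitudes west of Greenwich
        # and latitudes south of the equator are entered as negative values.
        dd = degrees + minutes / 60.0 + seconds / 3600.0
        return -dd if direction in ("W", "S") else dd

    print(dms_to_decimal(84, 15, 30, "W"))   # approximately -84.2583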

Figure 191: Geographic

Figure 191 shows the graticule of meridians and parallels on the global surface.

Gnomonic

Summary
Construction: Plane
Property: Compromise
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are straight lines.
Parallels: Polar aspect: the parallels are concentric circles. Oblique and equatorial aspects: parallels are ellipses, parabolas, or hyperbolas concave toward the poles (except for the equator, which is straight).
Graticule spacing: Polar aspect: the meridian spacing is equal and increases away from the pole. The parallel spacing increases very rapidly from the pole. Oblique and equatorial aspects: the graticule spacing increases very rapidly away from the center of the projection.
Linear scale: Linear scale and angular and areal deformation are extreme, rapidly increasing away from the center of the projection.
Uses: The Gnomonic projection is used in seismic work because seismic waves travel in approximately great circles. It is used with the Mercator projection for navigation.

Gnomonic is a perspective projection that projects onto a tangent plane from a position in the center of the earth. Because of the close perspective, this projection is limited to less than a hemisphere. However, it is the only projection which shows all great circles as straight lines. Because great circles are straight, this projection is useful for air and sea navigation. Rhumb lines are curved, which is the opposite of the Mercator projection.

With a polar aspect, the latitude intervals increase rapidly from the center outwards. With an equatorial or oblique aspect, the equator is straight; meridians are straight and parallel, while intervals between parallels increase rapidly from the center and parallels are convex to the equator.

" Define the center of the map projection in both spherical and rectangular coordinates. These values must be in meters. Spheroid Name: Datum Name: Select the spheroid and datum to use. the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.Prompts The following prompts display in the Projection Chooser if Gnomonic is selected. That is. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. 534 ERDAS . Respond to the prompts as described. False easting False northing Enter values of false easting and false northing corresponding to the center of the projection. Longitude of center of projection Latitude of center of projection Enter values for the longitude and latitude of the desired center of the projection.

Lambert Azimuthal Equal Area

Summary
Construction: Plane
Property: Equal Area
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels: Polar aspect: parallels are concentric circles. Oblique and equatorial aspects: the parallels are complex curves. The equator on the equatorial aspect is a straight line.
Graticule spacing: Polar aspect: the meridian spacing is equal and increases, and the parallel spacing is unequal and decreases toward the periphery of the projection. The graticule spacing, in all aspects, retains the property of equivalence of area.
Linear scale: Linear scale is better than most azimuthals, but not as good as the equidistant. Angular deformation increases toward the periphery of the projection. Scale decreases radially toward the periphery of the map projection. Scale increases perpendicular to the radii toward the periphery.
Uses: The polar, oblique, and equatorial aspects are used by the U.S. Geological Survey in the National Atlas. The polar aspect is used by the U.S. Geological Survey for the Circum-Pacific Map.

The Lambert Azimuthal Equal Area projection is mathematically based on a plane tangent to the earth. It is the only projection that can accurately represent both area and true direction from the center of the projection (Figure 192). This central point can be located anywhere. Concentric circles are closer together toward the edge of the map, and the scale distorts accordingly. This projection generally represents only one hemisphere, and it retains the property of equivalence of area. It is well-suited to square or round land masses.

In the polar aspect, latitude rings decrease their intervals from the center outwards. In the equatorial aspect, parallels are curves flattened in the middle. Meridians are also curved, except for the central meridian, and spacing decreases toward the edges.
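The equal-area behavior, with radial scale shrinking while transverse scale grows, follows directly from the standard spherical forward equations, sketched below (after Snyder). The code is a minimal illustration under a spherical-earth assumption, not the ERDAS IMAGINE routine.

    import math

    def lambert_azimuthal_equal_area(lat, lon, lat0, lon0, radius=6370997.0):
        # Spherical Lambert Azimuthal Equal Area forward equations.
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        denom = 1.0 + (math.sin(phi0) * math.sin(phi) +
                       math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
        # Radial scale shrinks by k while transverse scale grows by 1/k, so area
        # is preserved everywhere except the single antipodal point (denom = 0).
        k = math.sqrt(2.0 / denom)
        x = radius * k * math.cos(phi) * math.sin(lam - lam0)
        y = radius * k * (math.cos(phi0) * math.sin(phi) -
                          math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
        return x, y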

Prompts
The following prompts display in the Projection Chooser if Lambert Azimuthal Equal Area is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

In Figure 192, three views of the Lambert Azimuthal Equal Area projection are shown: A) Polar aspect, showing one hemisphere; B) Equatorial aspect, frequently used in old atlases for maps of the eastern and western hemispheres; C) Oblique aspect, centered on 40˚N.

Figure 192: Lambert Azimuthal Equal Area Projection

Lambert Conformal Conic

Summary
Construction: Cone
Property: Conformal
Meridians: Meridians are straight lines converging at a pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole and centered at a pole.
Graticule spacing: Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule is symmetrical. The graticule spacing retains the property of conformality.
Linear scale: Linear scale is true on standard parallels. Maximum scale error is 2.5% on a map of the United States (48 states) with standard parallels at 33˚N and 45˚N.
Uses: Used for large countries in the mid-latitudes having an east-west orientation. The United States (50 states) Base Map uses standard parallels at 37˚N and 65˚N. Some of the National Topographic Map Series 7.5-minute and 15-minute quadrangles and the State Base Map series are constructed on this projection. The latter series uses standard parallels of 33˚N and 45˚N. Aeronautical charts for Alaska use standard parallels at 55˚N and 65˚N. The National Atlas of Canada uses standard parallels at 49˚N and 77˚N.

This projection is very similar to Albers Conical Equal Area, described previously. It is mathematically based on a cone that is tangent at one parallel or, more often, that is conceptually secant on two parallels (Figure 193). Areal distortion is minimal, but increases away from the standard parallels. The North or South Pole is represented by a point—the other pole cannot be shown. Great circle lines are approximately straight. It retains its properties at various scales, and sheets can be joined along their edges. This projection, like Albers, is most valuable in middle latitudes, especially in a country sprawling east to west like the U.S. The standard parallels for the U.S. are 33˚ and 45˚N.

The major property of this projection is its conformality. At all coordinates, meridians and parallels cross at right angles. The correct angles produce correct shapes. Also, great circles are approximately straight. The conformal property of Lambert Conformal Conic and the straightness of great circles make it valuable for landmark flying.

In comparison with Albers Conical Equal Area, Lambert Conformal Conic possesses true shape of small areas, whereas Albers possesses equal area. Unlike Albers, parallels of Lambert Conformal Conic are spaced at increasing intervals the farther north or south they are from the standard parallels.

Since 1962, Lambert Conformal Conic has been used for the International Map of the World between 84˚N and 80˚S. Lambert Conformal Conic is the State Plane coordinate system projection for states of predominant east-west expanse.
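For readers who want to see how the two standard parallels control the cone, the sketch below implements the standard spherical Lambert Conformal Conic forward equations (after Snyder). It is a simplified spherical-earth example, not the ellipsoidal formulation actually used for State Plane or topographic mapping; the function name and the sample origin latitude of 23˚N are assumptions for illustration.

    import math

    def lambert_conformal_conic(lat, lon, lat0, lon0, lat1, lat2, radius=6370997.0):
        # Spherical Lambert Conformal Conic forward equations with two standard
        # parallels lat1 and lat2; degrees in, meters out.
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        t = lambda p: math.tan(math.pi / 4.0 + p / 2.0)
        n = (math.log(math.cos(phi1) / math.cos(phi2)) /
             math.log(t(phi2) / t(phi1)))                  # cone constant
        F = math.cos(phi1) * t(phi1) ** n / n
        rho = radius * F / t(phi) ** n
        rho0 = radius * F / t(phi0) ** n
        theta = n * (lam - lam0)
        return rho * math.sin(theta), rho0 - rho * math.cos(theta)

    # Standard parallels commonly used for the conterminous United States: 33˚N and 45˚N.
    x, y = lambert_conformal_conic(40.0, -96.0, 23.0, -96.0, 33.0, 45.0)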

Prompts
The following prompts display in the Projection Chooser if Lambert Conformal Conic is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Latitude of 1st standard parallel
Latitude of 2nd standard parallel
Enter two values for the desired control lines of the projection, i.e., the standard parallels. Note that the first standard parallel is the southernmost.

Then, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian
Latitude of origin of projection
Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough to ensure that there will be no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 193: Lambert Conformal Conic Projection

In Figure 193, the standard parallels are 20˚N and 60˚N. Note the change in spacing of the parallels.

Mercator

Summary
Construction: Cylinder
Property: Conformal
Meridians: Meridians are straight and parallel.
Parallels: Parallels are straight and parallel.
Graticule spacing: Meridian spacing is equal and the parallel spacing increases away from the equator. The graticule spacing retains the property of conformality. The graticule is symmetrical. Meridians intersect parallels at right angles.
Linear scale: Linear scale is true along the equator only (line of tangency), or along two parallels equidistant from the equator (the secant form). Scale can be determined by measuring one degree of latitude, which equals 60 nautical miles, 69 statute miles, or 111 kilometers.
Uses: An excellent projection for equatorial regions. Otherwise the Mercator is a special-purpose map projection best suited for navigation. Secant constructions are used for large-scale coastal charts. The use of the Mercator map projection as the base for nautical charts is universal. Examples are the charts published by the National Ocean Survey, U.S. Dept. of Commerce.

This famous cylindrical projection was originally designed by Flemish map maker Gerhardus Mercator in 1569 to aid navigation (Figure 194). Meridians and parallels are straight lines and cross at 90˚ angles. Angular relationships are preserved. However, to preserve conformality, parallels are placed increasingly farther apart with increasing distance from the equator. Due to extreme scale distortion in high latitudes, the projection is rarely extended beyond 80˚N or S unless the latitude of true scale is other than the equator. Distance scales are usually furnished for several latitudes.

This projection can be thought of as being mathematically based on a cylinder tangent at the equator. Any straight line is a constant-azimuth (rhumb) line. Areal enlargement is extreme away from the equator; poles cannot be represented. Shape is true only within any small area. It is a reasonably accurate projection within a 15˚ band along the line of tangency.

Rhumb lines, which show constant direction, are straight. For this reason a Mercator map was very valuable to sea navigators. However, rhumb lines are not the shortest path; great circles are the shortest path. Most great circles appear as long arcs when drawn on a Mercator map.
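The increasing separation of parallels away from the equator can be seen directly in the spherical Mercator forward equations. The short Python sketch below is an illustrative example under a spherical-earth assumption; it also prints the scale factor, which grows as the secant of the latitude.

    import math

    def mercator(lat, lon, lon0=0.0, radius=6370997.0):
        # Spherical Mercator forward equations; the equator is the line of true scale.
        x = radius * math.radians(lon - lon0)
        y = radius * math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0))
        return x, y

    # Parallels are placed increasingly farther apart away from the equator,
    # and the scale factor grows as 1/cos(latitude):
    for lat in (0, 30, 60, 80):
        y = mercator(lat, 0.0)[1]
        print(lat, round(y), round(1.0 / math.cos(math.radians(lat)), 2))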

Prompts
The following prompts display in the Projection Chooser if Mercator is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian
Latitude of true scale
Enter values for the longitude of the desired central meridian and the latitude at which true scale is desired. Selection of a parameter other than the equator can be useful for making maps in extreme north or south latitudes.

False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of true scale. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 194: Mercator Projection

In Figure 194, all angles are shown correctly; therefore, small shapes are true (i.e., the map is conformal). Rhumb lines are straight, which makes it useful for navigation.

Miller Cylindrical

Summary
Construction: Cylinder
Property: Compromise
Meridians: All meridians are straight lines.
Parallels: All parallels are straight lines.
Graticule spacing: Meridians are parallel and equally spaced, the lines of latitude are parallel, and the distance between them increases toward the poles. Both poles are represented as straight lines. Meridians and parallels intersect at right angles (ESRI 1992).
Linear scale: While the standard parallels, or lines true to scale and free of distortion, are at latitudes 45˚N and S, only the equator is standard.
Uses: Useful for world maps.

Miller Cylindrical is a modification of the Mercator projection (Figure 195). It is similar to the Mercator from the equator to 45˚, but beyond 45˚ the latitude line intervals are modified so that the distance between them increases less rapidly. Thus, Miller Cylindrical lessens the extreme exaggeration of the Mercator. Miller Cylindrical also includes the poles as straight lines, whereas the Mercator does not. Meridians are equidistant, while parallels are spaced farther apart the farther they are from the equator. Meridians and parallels are straight lines intersecting at right angles. Miller Cylindrical is not equal-area, equidistant, or conformal. Miller Cylindrical is used for world maps and in several atlases.

Prompts
The following prompts display in the Projection Chooser if Miller Cylindrical is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian
Enter a value for the longitude of the desired central meridian to center the projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 195: Miller Cylindrical Projection

This projection resembles the Mercator, but has less distortion in polar regions. Miller Cylindrical is neither conformal nor equal area.
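The way Miller Cylindrical slows the growth of parallel spacing relative to the Mercator is visible in its standard spherical forward equations, sketched below. The function name and defaults are illustrative assumptions.

    import math

    def miller_cylindrical(lat, lon, lon0=0.0, radius=6370997.0):
        # Miller Cylindrical forward equations for a sphere. The 0.4/0.8 factors
        # slow the growth of parallel spacing relative to the Mercator, so the
        # poles plot at a finite distance from the equator.
        x = radius * math.radians(lon - lon0)
        y = radius * math.log(math.tan(math.pi / 4.0 + 0.4 * math.radians(lat))) / 0.8
        return x, y

    print(miller_cylindrical(90.0, 0.0)[1])   # the pole maps to a finite y value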

Modified Transverse Mercator

Summary
Construction: Cone
Property: Equidistant
Meridians: On pre-1973 editions of the Alaska Map E, meridians are curved concave toward the center of the projection. On post-1973 editions, the meridians are straight.
Parallels: Parallels are arcs concave to the pole.
Graticule spacing: Meridian spacing is approximately equal and decreases toward the pole. Parallels are approximately equally spaced. The graticule is symmetrical on post-1973 editions of the Alaska Map E.
Linear scale: Linear scale is more nearly correct along the meridians than along the parallels.
Uses: The U.S. Geological Survey's Alaska Map E at the scale of 1:2,500,000. The Bathymetric Maps Eastern Continental Margin U.S.A., published by the American Association of Petroleum Geologists, uses the straight meridians on its Modified Transverse Mercator and is more equivalent to the Equidistant Conic map projection.

In 1972, the USGS devised a projection specifically for the revision of a 1954 map of Alaska which, like its predecessors, was based on the Polyconic projection. This projection was drawn to a scale of 1:2,000,000 and published at 1:2,500,000 (map “E”) and 1:1,584,000 (map “B”). Graphically prepared by adapting coordinates for the Universal Transverse Mercator projection, it is identified as the Modified Transverse Mercator projection. It resembles the Transverse Mercator in a very limited manner and cannot be considered a cylindrical projection. It resembles the Equidistant Conic projection for the ellipsoid in actual construction. It is found to be most closely equivalent to the Equidistant Conic for the Clarke 1866 ellipsoid, with the scale along the meridians reduced to 0.9992 of true scale and the standard parallels at latitude 66.09˚ and 53.50˚N. The projection was also used in 1974 for a base map of the Aleutian-Bering Sea Region published at 1:2,500,000 scale.

Prompts
The following prompts display in the Projection Chooser if Modified Transverse Mercator is selected. Respond to the prompts as described.

False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Oblique Mercator (Hotine)

Summary
Construction: Cylinder
Property: Conformal
Meridians: Meridians are complex curves concave toward the line of tangency, except each 180th meridian is straight.
Parallels: Parallels are complex curves concave toward the nearest pole.
Graticule spacing: Graticule spacing increases away from the line of tangency and retains the property of conformality.
Linear scale: Linear scale is true along the line of tangency, or along two lines of equidistance from and parallel to the line of tangency.
Uses: Useful for plotting linear configurations that are situated along a line oblique to the earth’s equator. Examples are: NASA Surveyor Satellite tracking charts, ERTS flight indexes, strip charts for navigation, and the National Geographic Society’s maps “West Indies,” “Countries of the Caribbean,” “Hawaii,” and “New Zealand.”

Oblique Mercator is a cylindrical, conformal projection that intersects the global surface along a great circle. It is equivalent to a Mercator projection that has been altered by rotating the cylinder so that the central line of the projection is a great circle path instead of the equator. Shape is true only within any small area. Areal enlargement increases away from the line of tangency. It is a reasonably accurate projection within a 15˚ band along the line of tangency.

The USGS uses the Hotine version of Oblique Mercator. The Hotine version is based on a study of conformal projections published by British geodesist Martin Hotine in 1946-47. Prior to the implementation of the Space Oblique Mercator, the Hotine version was used for mapping Landsat satellite imagery.

Prompts
The following prompts display in the Projection Chooser if Oblique Mercator (Hotine) is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Scale factor at center of projection
Designate the desired scale factor along the central line of the projection. A value of 1.0 indicates true scale only along the central line. A value of less than, but close to, one is often used to lessen scale distortion away from the central line. This parameter may be used to modify scale distortion away from this central line.

Latitude of point of origin of projection
False easting
False northing
The center of the projection is defined by rectangular coordinates of false easting and false northing. The origin of rectangular coordinates on this projection occurs at the nearest intersection of the central line with the earth’s equator. To shift the origin to the intersection of the latitude of the origin entered above and the central line of the projection, compute coordinates of the latter point with zero false eastings and northings, reverse the signs of the coordinates obtained, and use these for false eastings and northings. These values must be in meters. It is very often convenient to add additional values so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Do you want to enter either:
A) Azimuth East of North for central line and the longitude of the point of origin
B) The latitude and longitude of the first and second points defining the central line
These formats differ slightly in definition of the central line of the projection.

Format A
For format A the additional prompts are:
Azimuth east of north for central line
Longitude of point of origin
Format A defines the central line of the projection by the angle east of north to the desired great circle path and by the latitude and longitude of the point along the great circle path from which the angle is measured.

Appropriate values should be entered.

Format B
For format B the additional prompts are:
Longitude of 1st point defining central line
Latitude of 1st point defining central line
Longitude of 2nd point defining central line
Latitude of 2nd point defining central line
Format B defines the central line of the projection by the latitude of a point on the central line which has the desired scale factor entered previously and by the longitude and latitude of two points along the desired great circle path. Appropriate values should be entered.

Orthographic

Summary
Construction: Plane
Property: Compromise
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are ellipses, concave toward the center of the projection. Equatorial aspect: the meridians are ellipses concave toward the straight central meridian.
Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are ellipses concave toward the poles. Equatorial aspect: the parallels are straight and parallel.
Graticule spacing: Polar aspect: meridian spacing is equal and increases, and the parallel spacing decreases from the point of tangency. Oblique and equatorial aspects: the graticule spacing decreases away from the center of the projection.
Linear scale: Scale is true on the parallels in the polar aspect and on all circles centered at the pole of the projection in all aspects. Scale decreases along lines radiating from the center of the projection.
Uses: The U.S. Geological Survey uses the Orthographic map projection in the National Atlas.

The Orthographic projection is geometrically based on a plane tangent to the earth, and the point of projection is at infinity (Figure 196). The earth appears as it would from outer space. Light rays that cast the projection are parallel and intersect the tangent plane at right angles. This projection is a truly graphic representation of the earth and is a projection in which distortion becomes a visual aid. It is the most familiar of the azimuthal map projections. Directions from the center of the projection are true.
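The view-from-infinity geometry corresponds to the standard spherical Orthographic forward equations, sketched below as a minimal Python example. The function name and defaults are illustrative assumptions.

    import math

    def orthographic(lat, lon, lat0, lon0, radius=6370997.0):
        # Spherical Orthographic forward equations: parallel light rays projected
        # onto a tangent plane, i.e. the view of the globe from infinity.
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        cos_c = (math.sin(phi0) * math.sin(phi) +
                 math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
        if cos_c < 0.0:
            return None                    # the point is on the hidden hemisphere
        x = radius * math.cos(phi) * math.sin(lam - lam0)
        y = radius * (math.cos(phi0) * math.sin(phi) -
                      math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
        return x, y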

This projection is limited to one hemisphere and shrinks those areas toward the periphery. In the polar aspect, latitude ring intervals decrease from the center outwards at a much greater rate than with Lambert Azimuthal. In the equatorial aspect, the central meridian and parallels are straight, with spaces closing up toward the outer edge. The Orthographic projection seldom appears in atlases. Its utility is more pictorial than technical. Orthographic has been used as a basis for artistic maps by Rand McNally and the USGS.

Three views of the Orthographic projection are shown in Figure 196: A) Polar aspect; B) Equatorial aspect; C) Oblique aspect, centered at 40˚N and showing the classic globelike view.

Prompts
The following prompts display in the Projection Chooser if Orthographic is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 196: Orthographic Projection

Polar Stereographic

Summary
Construction: Plane
Property: Conformal
Meridians: Meridians are straight.
Parallels: Parallels are concentric circles.
Graticule spacing: The distance between parallels increases with distance from the central pole.
Linear scale: The scale increases with distance from the center. If a standard parallel is chosen rather than one of the poles, this latitude represents the true scale, and the scale nearer the pole is reduced.
Uses: Polar regions (conformal).

Polar Stereographic is an azimuthal projection obtained by projecting from the opposite pole (Figure 197). All of either the northern or southern hemispheres can be shown, but not both. This projection produces a circular map with one of the poles at the center. The central point is either the North Pole or the South Pole. The point of tangency is a single point—either the North Pole or the South Pole. If the plane is secant instead of tangent, the point of global contact is a line of latitude (ESRI 1992).

Meridians are straight and radiating; parallels are concentric circles. Even though scale and area are not constant with Polar Stereographic, this projection, like all stereographic projections, possesses the property of conformality. Of all the polar aspect planar projections, this is the only one that is conformal. Polar Stereographic stretches areas toward the periphery, and scale increases for areas farther from the central pole. The projection is equivalent to the polar aspect of the Stereographic projection on a spheroid.

The Polar Stereographic may be used to accommodate all regions not included in the UTM coordinate system, regions north of 84˚N and 80˚S. This form is called Universal Polar Stereographic (UPS). In the UPS system, the scale factor at the pole is made 0.994, thus making the standard parallel (latitude of true scale) approximately 81˚07’N or S. The Astrogeology Center of the Geological Survey at Flagstaff, Arizona, has been using the Polar Stereographic projection for the mapping of polar areas of every planet and satellite for which there is sufficient information.
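The relationship between the UPS scale factor of 0.994 and the standard parallel near 81˚07' can be checked with the standard spherical north polar Stereographic equations, sketched below. This is an illustrative spherical example; the function names are hypothetical and the ellipsoidal UPS formulas differ slightly.

    import math

    def polar_stereographic_north(lat, lon, lon0=0.0, k0=1.0, radius=6370997.0):
        # Spherical north polar Stereographic forward equations with a central
        # scale factor k0; k0 = 0.994 corresponds to the UPS convention above.
        phi, dlam = math.radians(lat), math.radians(lon - lon0)
        rho = 2.0 * radius * k0 * math.tan(math.pi / 4.0 - phi / 2.0)
        return rho * math.sin(dlam), -rho * math.cos(dlam)

    def scale_factor(lat, k0=0.994):
        # Scale factor at a given latitude for the north polar aspect.
        return 2.0 * k0 / (1.0 + math.sin(math.radians(lat)))

    # With k0 = 0.994, true scale (scale factor 1.0) occurs near 81 degrees 07 minutes N:
    print(math.degrees(math.asin(2.0 * 0.994 - 1.0)))   # approximately 81.11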

Prompts
The following prompts display in the Projection Chooser if Polar Stereographic is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography." Ellipsoid projections of the polar regions normally use the International 1909 spheroid (ESRI 1992).

Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude directed straight down below pole of map
Enter a value for the longitude directed straight down below the pole for a north polar aspect, or straight up from the pole for a south polar aspect. This is equivalent to centering the map with a desired meridian.

Latitude of true scale
Enter a value for the latitude at which true scale is desired. For secant projections, specify the latitude of true scale as any line of latitude other than 90˚N or S. For tangential projections, specify the latitude of true scale as the North Pole, 90 00 00, or the South Pole, -90 00 00 (ESRI 1992).

False easting
False northing
Enter values of false easting and false northing corresponding to the pole. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 197: Polar Stereographic Projection and its Geometric Construction

This projection is conformal and is the most scientific projection for polar regions.

Polyconic

Summary
Construction: Cone
Property: Compromise
Meridians: The central meridian is a straight line, but all other meridians are complex curves.
Parallels: Parallels (except the equator) are nonconcentric circular arcs. The equator is a straight line.
Graticule spacing: All parallels are arcs of circles, but not concentric. All meridians, excepting the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.
Linear scale: The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (ESRI 1992).
Uses: Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (ESRI 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.

Polyconic was developed in 1820 by Ferdinand Hassler specifically for mapping the eastern coast of the U.S. (Figure 198). Polyconic projections are made up of an infinite number of conic projections tangent to an infinite number of parallels. These conic projections are placed in relation to a central meridian. Polyconic projections compromise properties such as equal area and conformality, although the central meridian is held true to scale. This projection is used mostly for north-south oriented maps. Distortion increases greatly the farther east and west an area is from the central meridian.

Prompts
The following prompts display in the Projection Chooser if Polyconic is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian
Latitude of origin of projection
Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 198: Polyconic Projection of North America

In Figure 198, the central meridian is 100˚W. This projection is used by the U.S. Geological Survey for topographic quadrangle maps.

Sinusoidal

Summary
Construction: Pseudo-cylinder
Property: Equal area
Meridians: Meridians are sinusoidal curves, curved toward a straight central meridian.
Parallels: All parallels are straight, parallel lines.
Graticule spacing: Meridian spacing is equal and decreases toward the poles. Parallel spacing is equal. The graticule spacing retains the property of equivalence of area.
Linear scale: Linear scale is true on the parallels and the central meridian.
Uses: Used as an equal area projection to portray areas that have a maximum extent in a north-south direction. Used as a world equal-area projection in atlases to show distribution patterns. Used by the U.S. Geological Survey as the base for maps showing prospective hydrocarbon provinces of the world and sedimentary basins of the world.

Sometimes called the Sanson-Flamsteed, Sinusoidal is a projection with some characteristics of a cylindrical projection—often called a pseudo-cylindrical type. The central meridian is the only straight meridian—all others become sinusoidal curves. All parallels are straight and the correct length. Parallels are also the correct distance from the equator, which, for a complete world map, is twice as long as the central meridian. Sinusoidal maps achieve the property of equal area but not conformality. The equator and central meridian are distortion free, but distortion becomes pronounced near outer meridians, especially in polar regions.

Sinusoidal is particularly suited for less than world areas, especially those bordering the equator, such as South America or Africa. Interrupting a Sinusoidal world or hemisphere map can lessen distortion. The interrupted Sinusoidal contains less distortion because each interrupted area can be constructed to contain a separate central meridian. Central meridians may be different for the northern and southern hemispheres and may be selected to minimize distortion of continents or oceans. Sinusoidal is also used by the USGS as a base map for showing prospective hydrocarbon provinces and sedimentary basins of the world.
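The equal-area property follows from scaling each parallel by the cosine of its latitude, as the standard spherical Sinusoidal forward equations show. The sketch below is illustrative only; the function name and defaults are assumptions.

    import math

    def sinusoidal(lat, lon, lon0=0.0, radius=6370997.0):
        # Sinusoidal (Sanson-Flamsteed) forward equations for a sphere. The
        # cos(lat) factor models meridian convergence, which preserves area.
        phi = math.radians(lat)
        x = radius * math.radians(lon - lon0) * math.cos(phi)
        y = radius * phi
        return x, y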

Prompts
The following prompts display in the Projection Chooser if Sinusoidal is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian
Enter a value for the longitude of the desired central meridian to center the projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Space Oblique Mercator

Summary
Construction: Cylinder
Property: Conformal
Meridians: All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.
Parallels: All parallels are curved lines.
Graticule spacing: The line of tangency is conceptual and there are no graticules.
Linear scale: Scale is true along the groundtrack, and varies approximately 0.01% within sensing range (ESRI 1992).
Uses: Used for georectification of, and continuous mapping from, satellite imagery. Standard format for data from Landsats 4 and 5 (ESRI 1992).

The Space Oblique Mercator (SOM) projection is nearly conformal and has little scale distortion within the sensing range of an orbiting mapping satellite such as Landsat. It is the first projection to incorporate the earth’s rotation with respect to the orbiting satellite. The method of projection used is the modified cylindrical, for which the central line is curved and defined by the groundtrack of the orbit of the satellite. The Space Oblique Mercator projection is specifically designed to minimize distortion within sensing range of a mapping satellite as it orbits the Earth. It can be used for the rectification of, and continuous mapping from, satellite imagery. It is the standard format for data from Landsats 4 and 5. Plots for adjacent paths do not match without transformation (ESRI 1991).

The Space Oblique Mercator projection is defined by USGS. According to USGS, the X axis passes through the descending node for each daytime scene. The Y axis is perpendicular to the X axis, to form a Cartesian coordinate system. The direction of the X axis in a daytime Landsat scene is in the direction of the satellite motion — south. The Y axis is directed east. For SOM projections used by EOSAT, the axes are switched: the X axis is directed east and the Y axis is directed south.

Prompts
The following prompts display in the Projection Chooser if Space Oblique Mercator is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:

Landsat vehicle ID (1-5)
Specify whether the data are from Landsat 1, 2, 3, 4, or 5.

Orbital path number (1-251 or 1-233)
For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.

False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

State Plane

The State Plane is an X,Y coordinate system (not a map projection) whose zones divide the U.S. into over 130 sections, each with its own projection surface and grid network (Figure 199). With the exception of very narrow States, such as Delaware, New Jersey, and New Hampshire, most States are divided into two to ten zones. Zone boundaries follow state and county lines, and, because each zone is small, distortion is less than one in 10,000. Each zone has a centrally located origin and a central meridian which passes through this origin.

The Transverse Mercator projection is used for zones extending mostly in a north-south direction. The Lambert Conformal projection is used for zones extending mostly in an east-west direction. Alaska, Florida, and New York use either Transverse Mercator or Lambert Conformal for different areas. The Aleutian panhandle of Alaska is prepared on the Oblique Mercator projection.

Two zone numbering systems are currently in use—the U.S. Geological Survey (USGS) code system and the National Ocean Service (NOS) code system (Tables 33 and 34)—but other numbering systems exist.

Prompts
The following prompts will appear in the Projection Chooser if State Plane is selected. Respond to the prompts as described.

State Plane Zone
Enter either the USGS zone code number as a positive value, or the NOS zone code number as a negative value.

NAD27 or 83
Either North America Datum 1927 (NAD27) or North America Datum 1983 (NAD83) may be used to perform the State Plane calculations.
• NAD27 is based on the Clarke 1866 spheroid.
• NAD83 is based on the GRS 1980 spheroid.
Some zone numbers have been changed or deleted from NAD27. Tables for both NAD27 and NAD83 zone numbers follow (Tables 33 and 34). These tables include both USGS and NOS code systems.
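The sign convention for the State Plane Zone prompt can be expressed as a small helper. The following Python snippet is purely illustrative (the function is hypothetical and not part of ERDAS IMAGINE); the sample codes are Alabama East's USGS and NOS entries from Table 33.

    def interpret_state_plane_zone(code):
        # Positive entries are USGS zone codes; negative entries are NOS zone
        # codes, following the sign convention of the State Plane Zone prompt.
        if code > 0:
            return ("USGS", code)
        if code < 0:
            return ("NOS", -code)
        raise ValueError("zone code must be nonzero")

    print(interpret_state_plane_zone(3101))   # ('USGS', 3101) - Alabama East, USGS code
    print(interpret_state_plane_zone(-101))   # ('NOS', 101)   - the same zone's NOS code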

Figure 199: Zones of the State Plane Coordinate System

The following abbreviations are used in Table 33 and Table 34:
Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic

USGS Projections Table 33: NAD27 State Plane coordinate system zone numbers. projection types. and zone code numbers for the United States Code Number State Alabama Zone Name East West Type Tr Merc Tr Merc Oblique Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Lambert Lambert Tr Merc Tr Merc Tr Merc Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Tr Merc USGS 3101 3126 6101 6126 6151 6176 6201 6226 6251 6276 6301 6326 -----3151 3176 3201 3226 3251 3276 3301 3326 3351 3376 3401 3426 3451 3476 3501 3526 3551 -101 -102 NOS Alaska 1 2 3 4 5 6 7 8 9 10 -5001 -5002 -5003 -5004 -5005 -5006 -5007 -5008 -5009 -5010 -5302 -201 -202 -203 -301 -302 -401 -402 -403 -404 -405 -406 -407 -501 -502 -503 -600 -700 American Samoa Arizona ------East Central West Arkansas North South California I II III IV V VI VII Colorado North Central South Connecticut Delaware District of Columbia --------------- Use Maryland or Virginia North Field Guide 565 .

Table 33: NAD27 State Plane coordinate system zone numbers. projection types. and zone code numbers for the United States (Continued) Code Number State Florida Zone Name East West North Type Tr Merc Tr Merc Lambert Tr Merc Tr Merc Polycon Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Tr Merc Tr Merc Lambert Lambert Lambert USGS 3601 3626 3576 3651 3676 ------5876 5901 5926 5951 5976 3701 3726 3751 3776 3801 3826 3851 3876 3901 3926 3951 3976 4001 4026 4051 6426 4076 4101 4126 4151 4176 -901 -902 -903 NOS Georgia East West -1001 -1002 -5400 -5101 -5102 -5103 -5104 -5105 -1101 -1102 -1103 -1201 -1202 -1301 -1302 -1401 -1402 -1501 -1502 -1601 -1602 -1701 -1702 -1703 -1801 -1802 -1900 -2001 -2002 Guam Hawaii ------1 2 3 4 5 Idaho East Central West Illinois East West Indiana East West Iowa North South Kansas North South Kentucky North South Louisiana North South Offshore Maine East West Maryland Massachusetts ------Mainland Island 566 ERDAS .

USGS Projections Table 33: NAD27 State Plane coordinate system zone numbers. projection types. and zone code numbers for the United States (Continued) Code Number State Michigan (Tr Merc) Zone Name East Central West Type Tr Merc Tr Merc Tr Merc Lambert Lambert Lambert Lambert Lambert Lambert Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Lambert Lambert Lambert Lambert Lambert Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Tr Merc Lambert Lambert USGS 4201 4226 4251 6351 6376 6401 4276 4301 4326 4351 4376 4401 4426 4451 4476 4501 4526 4551 4576 4601 4626 4651 4676 4701 4726 4751 4776 4801 4826 4851 4876 4901 NOS -2101 -2102 -2103 -2111 -2112 -2113 -2201 -2202 -2203 -2301 -2302 -2401 -2402 -2403 -2501 -2502 -2503 -2601 -2602 -2701 -2702 -2703 -2800 -2900 -3001 -3002 -3003 -3101 -3102 -3103 -3104 -3200 Michigan (Lambert) North Central South Minnesota North Central South Mississippi East West Missouri East Central West Montana North Central South Nebraska North South Nevada East Central West New Hampshire New Jersey New Mexico ----------------East Central West New York East Central West Long Island North Carolina -------- Field Guide 567 .

Table 33: NAD27 State Plane coordinate system zone numbers. Croix Tennessee Texas ----------------North North Central Central South Central South Utah North Central South Vermont Virginia -------North South Virgin Islands Washington -------North South 568 ERDAS . projection types. and zone code numbers for the United States (Continued) Code Number State North Dakota Zone Name North South Type Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Tr Merc Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Tr Merc Lambert Lambert Lambert Lambert Lambert USGS 4926 4951 4976 5001 5026 5051 5076 5101 5126 5151 6001 5176 5201 5226 5251 5276 6051 5301 5326 5351 5376 5401 5426 5451 5476 5501 5526 5551 5576 6026 5601 5626 NOS -3301 -3302 -3401 -3402 -3501 -3502 -3601 -3602 -3701 -3702 -5201 -3800 -3901 -3902 -4001 -4002 -5202 -4100 -4201 -4202 -4203 -4204 -4205 -4301 -4302 -4303 -4400 -4501 -4502 -5201 -4601 -4602 Ohio North South Oklahoma North South Oregon North South Pennsylvania North South Puerto Rico Rhode Island South Carolina --------------North South South Dakota North South St.

Table 33: NAD27 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States (Continued)

State           Zone Name       Type      USGS Code   NOS Code
West Virginia   North           Lambert   5651        -4701
West Virginia   South           Lambert   5676        -4702
Wisconsin       North           Lambert   5701        -4801
Wisconsin       Central         Lambert   5726        -4802
Wisconsin       South           Lambert   5751        -4803
Wyoming         East            Tr Merc   5776        -4901
Wyoming         East Central    Tr Merc   5801        -4902
Wyoming         West Central    Tr Merc   5826        -4903
Wyoming         West            Tr Merc   5851        -4904

Table 34: NAD83 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States

State                  Zone Name       Type       USGS Code   NOS Code
Alabama                East            Tr Merc    3101        -101
Alabama                West            Tr Merc    3126        -102
Alaska                 1               Oblique    6101        -5001
Alaska                 2               Tr Merc    6126        -5002
Alaska                 3               Tr Merc    6151        -5003
Alaska                 4               Tr Merc    6176        -5004
Alaska                 5               Tr Merc    6201        -5005
Alaska                 6               Tr Merc    6226        -5006
Alaska                 7               Tr Merc    6251        -5007
Alaska                 8               Tr Merc    6276        -5008
Alaska                 9               Tr Merc    6301        -5009
Alaska                 10              Lambert    6326        -5010
Arizona                East            Tr Merc    3151        -201
Arizona                Central         Tr Merc    3176        -202
Arizona                West            Tr Merc    3201        -203
Arkansas               North           Lambert    3226        -301
Arkansas               South           Lambert    3251        -302
California             I               Lambert    3276        -401
California             II              Lambert    3301        -402
California             III             Lambert    3326        -403
California             IV              Lambert    3351        -404
California             V               Lambert    3376        -405
California             VI              Lambert    3401        -406
Colorado               North           Lambert    3451        -501
Colorado               Central         Lambert    3476        -502
Colorado               South           Lambert    3501        -503
Connecticut            -------         Lambert    3526        -600
Delaware               -------         Tr Merc    3551        -700
District of Columbia   -------         Use Maryland or Virginia
Florida                East            Tr Merc    3601        -901
Florida                West            Tr Merc    3626        -902
Florida                North           Lambert    3576        -903
Georgia                East            Tr Merc    3651        -1001
Georgia                West            Tr Merc    3676        -1002
Hawaii                 1               Tr Merc    5876        -5101
Hawaii                 2               Tr Merc    5901        -5102
Hawaii                 3               Tr Merc    5926        -5103
Hawaii                 4               Tr Merc    5951        -5104
Hawaii                 5               Tr Merc    5976        -5105
Idaho                  East            Tr Merc    3701        -1101
Idaho                  Central         Tr Merc    3726        -1102
Idaho                  West            Tr Merc    3751        -1103
Illinois               East            Tr Merc    3776        -1201
Illinois               West            Tr Merc    3801        -1202
Indiana                East            Tr Merc    3826        -1301
Indiana                West            Tr Merc    3851        -1302
Iowa                   North           Lambert    3876        -1401
Iowa                   South           Lambert    3901        -1402
Kansas                 North           Lambert    3926        -1501
Kansas                 South           Lambert    3951        -1502
Kentucky               North           Lambert    3976        -1601
Kentucky               South           Lambert    4001        -1602
Louisiana              North           Lambert    4026        -1701
Louisiana              South           Lambert    4051        -1702
Louisiana              Offshore        Lambert    6426        -1703
Maine                  East            Tr Merc    4076        -1801
Maine                  West            Tr Merc    4101        -1802
Maryland               -------         Lambert    4126        -1900
Massachusetts          Mainland        Lambert    4151        -2001
Massachusetts          Island          Lambert    4176        -2002
Michigan               North           Lambert    6351        -2111
Michigan               Central         Lambert    6376        -2112
Michigan               South           Lambert    6401        -2113
Minnesota              North           Lambert    4276        -2201
Minnesota              Central         Lambert    4301        -2202
Minnesota              South           Lambert    4326        -2203
Mississippi            East            Tr Merc    4351        -2301
Mississippi            West            Tr Merc    4376        -2302
Missouri               East            Tr Merc    4401        -2401
Missouri               Central         Tr Merc    4426        -2402
Missouri               West            Tr Merc    4451        -2403
Montana                -------         Lambert    4476        -2500
Nebraska               -------         Lambert    4551        -2600
Nevada                 East            Tr Merc    4601        -2701
Nevada                 Central         Tr Merc    4626        -2702
Nevada                 West            Tr Merc    4651        -2703
New Hampshire          -------         Tr Merc    4676        -2800
New Jersey             -------         Tr Merc    4701        -2900
New Mexico             East            Tr Merc    4726        -3001
New Mexico             Central         Tr Merc    4751        -3002
New Mexico             West            Tr Merc    4776        -3003
New York               East            Tr Merc    4801        -3101
New York               Central         Tr Merc    4826        -3102
New York               West            Tr Merc    4851        -3103
New York               Long Island     Lambert    4876        -3104
North Carolina         -------         Lambert    4901        -3200
North Dakota           North           Lambert    4926        -3301
North Dakota           South           Lambert    4951        -3302
Ohio                   North           Lambert    4976        -3401
Ohio                   South           Lambert    5001        -3402
Oklahoma               North           Lambert    5026        -3501
Oklahoma               South           Lambert    5051        -3502
Oregon                 North           Lambert    5076        -3601
Oregon                 South           Lambert    5101        -3602
Pennsylvania           North           Lambert    5126        -3701
Pennsylvania           South           Lambert    5151        -3702
Puerto Rico            -------         Lambert    6001        -5201
Rhode Island           -------         Tr Merc    5176        -3800
South Carolina         -------         Lambert    5201        -3900
South Dakota           North           Lambert    5251        -4001
South Dakota           South           Lambert    5276        -4002
Tennessee              -------         Lambert    5301        -4100
Texas                  North           Lambert    5326        -4201
Texas                  North Central   Lambert    5351        -4202
Texas                  Central         Lambert    5376        -4203
Texas                  South Central   Lambert    5401        -4204
Texas                  South           Lambert    5426        -4205
Utah                   North           Lambert    5451        -4301
Utah                   Central         Lambert    5476        -4302
Utah                   South           Lambert    5501        -4303
Vermont                -------         Tr Merc    5526        -4400
Virginia               North           Lambert    5551        -4501
Virginia               South           Lambert    5576        -4502
Virgin Islands         -------         Lambert    6026        -5201
Washington             North           Lambert    5601        -4601
Washington             South           Lambert    5626        -4602
West Virginia          North           Lambert    5651        -4701
West Virginia          South           Lambert    5676        -4702
Wisconsin              North           Lambert    5701        -4801
Wisconsin              Central         Lambert    5726        -4802
Wisconsin              South           Lambert    5751        -4803
Wyoming                East            Tr Merc    5776        -4901
Wyoming                East Central    Tr Merc    5801        -4902
Wyoming                West Central    Tr Merc    5826        -4903
Wyoming                West            Tr Merc    5851        -4904

Stereographic

Summary

Construction: Plane
Property: Conformal
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are arcs of circles concave toward a straight central meridian. In the equatorial aspect, the outer meridian of the hemisphere is a circle centered at the projection center.
Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are nonconcentric arcs of circles concave toward one of the poles, with one parallel being a straight line. Equatorial aspect: the parallels are nonconcentric arcs of circles concave toward the poles; the equator is straight. In the equatorial aspect, all parallels except the equator are circular arcs.
Graticule spacing: The graticule spacing increases away from the center of the projection in all aspects, and it retains the property of conformality. In the polar aspect, latitude rings are spaced farther apart with increasing distance from the pole.
Linear scale: Scale increases toward the periphery of the projection.
Uses: The polar aspect is used for topographic maps and navigational charts. The American Geographical Society uses this projection as the basis for its "Map of the Arctic." The U.S. Geological Survey uses it as the basis for maps of Antarctica. It is also used in geophysics for solving problems in spherical geometry.

Stereographic is a perspective projection in which points are projected from a position on the opposite side of the globe onto a plane tangent to the earth (Figure 200). It is the only azimuthal projection that preserves truth of angles and local shape. All of one hemisphere can easily be shown, but it is impossible to show both hemispheres in their entirety from one center. Scale increases and parallels become more widely spaced farther from the center. The Stereographic projection is the most widely used azimuthal projection, mainly used for portraying large, continent-size areas of similar extent in all directions.

The Stereographic is the only azimuthal projection which is conformal, and it was often used in the 16th and 17th centuries for maps of hemispheres. Figure 200 shows two views: A) Equatorial aspect; B) Oblique aspect, centered on 40˚N.

Prompts

The following prompts display in the Projection Chooser if Stereographic is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection; that is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
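For readers who want to experiment with the geometry outside of the software, the following sketch implements the standard spherical-form equations of the oblique Stereographic projection (after Snyder), applying a false easting and false northing exactly as described in the prompts above. The function name and the use of Python are illustrative only; this is not ERDAS IMAGINE code, and the ellipsoidal formulas actually used for a selected spheroid and datum differ.

    import math

    def stereographic_spherical(lon_deg, lat_deg, lon0_deg, lat0_deg,
                                radius=6370997.0, k0=1.0,
                                false_easting=0.0, false_northing=0.0):
        # Spherical-form oblique Stereographic projection (after Snyder).
        # Returns (x, y) in the same units as `radius`; angles are in degrees.
        lam, phi = math.radians(lon_deg), math.radians(lat_deg)
        lam0, phi1 = math.radians(lon0_deg), math.radians(lat0_deg)

        # The denominator goes to zero only for the antipode of the projection
        # center, which cannot be shown from one center in any case.
        denom = 1.0 + (math.sin(phi1) * math.sin(phi)
                       + math.cos(phi1) * math.cos(phi) * math.cos(lam - lam0))
        k = 2.0 * radius * k0 / denom

        x = k * math.cos(phi) * math.sin(lam - lam0) + false_easting
        y = k * (math.cos(phi1) * math.sin(phi)
                 - math.sin(phi1) * math.cos(phi) * math.cos(lam - lam0)) + false_northing
        return x, y

    # Example: a point 10 degrees east of a projection centered at 40N, 100W.
    print(stereographic_spherical(-90.0, 40.0, -100.0, 40.0))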

Figure 200: Stereographic Projection (A: Equatorial aspect; B: Oblique aspect)

Transverse Mercator

Summary

Construction: Cylinder
Property: Conformal
Meridians: Meridians are complex curves concave toward a straight central meridian that is tangent to the globe. The straight central meridian intersects the equator and one meridian at a 90˚ angle.
Parallels: Parallels are complex curves concave toward the nearest pole; the equator is straight.
Graticule spacing: Parallels are spaced at their true distances on the straight central meridian. Graticule spacing increases away from the tangent meridian. The graticule retains the property of conformality.
Linear scale: Linear scale is true along the line of tangency, or along two lines equidistant from, and parallel to, the line of tangency.
Uses: Used where the north-south dimension is greater than the east-west dimension. Used as the base for the U.S. Geological Survey's 1:250,000-scale series, and for some of the 7.5-minute and 15-minute quadrangles of the National Topographic Map Series.

Transverse Mercator is similar to the Mercator projection except that the axis of the projection cylinder is rotated 90˚ from the vertical (polar) axis. The contact line is then a chosen meridian instead of the equator, and this central meridian runs from pole to pole. It loses the properties of straight meridians and straight parallels of the standard Mercator projection (except for the central meridian, the two meridians 90˚ away, and the equator).

Scale is true along the central meridian or along two straight lines equidistant from, and parallel to, the central meridian. Transverse Mercator also loses the straight rhumb lines of the Mercator map, but it is a conformal projection. It cannot be edge-joined in an east-west direction if each sheet has its own central meridian.

In the United States, Transverse Mercator is the projection used in the State Plane coordinate system for states with predominant north-south extent. The entire earth from 84˚N to 80˚S is mapped with a system of projections called the Universal Transverse Mercator.

Prompts

The following prompts display in the Projection Chooser if Transverse Mercator is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Scale factor at central meridian
Designate the desired scale factor at the central meridian. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.

Finally, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian
Latitude of origin of projection
Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough so that there will be no negative coordinates within the region of the map projection; that is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
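The effect of the scale factor prompt can be illustrated with the spherical form of the projection, for which the point scale factor has a simple closed form. The sketch below is an approximation for illustration only (it is not the ellipsoidal computation performed by ERDAS IMAGINE); it shows how a factor slightly below one reduces scale on the central meridian and lets scale return to true along two flanking lines.

    import math

    def tm_scale_factor(lon_deg, lat_deg, central_meridian_deg, k0=0.9996):
        # Point scale factor of the spherical-form Transverse Mercator
        # (after Snyder): k = k0 / sqrt(1 - B**2), with
        # B = cos(lat) * sin(lon - central meridian).
        b = math.cos(math.radians(lat_deg)) * math.sin(
            math.radians(lon_deg - central_meridian_deg))
        return k0 / math.sqrt(1.0 - b * b)

    # With k0 = 0.9996 the central meridian is slightly reduced and scale
    # grows back toward (and past) 1.0 away from it.
    for offset_deg in (0.0, 1.0, 2.0, 3.0):
        print(offset_deg, round(tm_scale_factor(-93.0 + offset_deg, 40.0, -93.0), 6))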

UTM

The Universal Transverse Mercator (UTM) is an international plane (rectangular) coordinate system developed by the U.S. Army that extends around the world from 84˚N to 80˚S. The world is divided into 60 zones, each covering six degrees of longitude. Each zone extends three degrees eastward and three degrees westward from its central meridian, and zones are numbered consecutively west to east from the 180˚ meridian (Figure 201, Table 35).

Transverse Mercator is a transverse form of the Mercator cylindrical projection. The projection cylinder is rotated 90˚ from the vertical (polar) axis and can then be placed to intersect at a chosen central meridian. The UTM system specifies the central meridian of each zone, and the Transverse Mercator projection is then applied to each UTM zone. With a separate projection for each UTM zone, a high degree of accuracy is possible (one part in 1000 maximum distortion within each zone). If the map to be projected extends beyond the border of the UTM zone, the entire map may be projected for any UTM zone specified by the user. See "Transverse Mercator" on page 578 for more information.

Prompts

The following prompts display in the Projection Chooser if UTM is chosen.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

UTM Zone
Is the data North or South of the equator?
All values in Table 35 are in full degrees east (E) or west (W) of the Greenwich prime meridian (0).

Figure 201: Zones of the Universal Transverse Mercator Grid in the United States

Table 35: UTM zones, central meridians, and longitude ranges

Zone  Central Meridian  Range        Zone  Central Meridian  Range
1     177W              180W-174W    31    3E                0-6E
2     171W              174W-168W    32    9E                6E-12E
3     165W              168W-162W    33    15E               12E-18E
4     159W              162W-156W    34    21E               18E-24E
5     153W              156W-150W    35    27E               24E-30E
6     147W              150W-144W    36    33E               30E-36E
7     141W              144W-138W    37    39E               36E-42E
8     135W              138W-132W    38    45E               42E-48E
9     129W              132W-126W    39    51E               48E-54E
10    123W              126W-120W    40    57E               54E-60E
11    117W              120W-114W    41    63E               60E-66E
12    111W              114W-108W    42    69E               66E-72E
13    105W              108W-102W    43    75E               72E-78E
14    99W               102W-96W     44    81E               78E-84E
15    93W               96W-90W      45    87E               84E-90E
16    87W               90W-84W      46    93E               90E-96E
17    81W               84W-78W      47    99E               96E-102E
18    75W               78W-72W      48    105E              102E-108E
19    69W               72W-66W      49    111E              108E-114E
20    63W               66W-60W      50    117E              114E-120E
21    57W               60W-54W      51    123E              120E-126E
22    51W               54W-48W      52    129E              126E-132E
23    45W               48W-42W      53    135E              132E-138E
24    39W               42W-36W      54    141E              138E-144E
25    33W               36W-30W      55    147E              144E-150E
26    27W               30W-24W      56    153E              150E-156E
27    21W               24W-18W      57    159E              156E-162E
28    15W               18W-12W      58    165E              162E-168E
29    9W                12W-6W       59    171E              168E-174E
30    3W                6W-0         60    177E              174E-180E
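Because the zone layout in Table 35 is completely regular, the zone number and central meridian for any longitude can be computed directly rather than looked up. The short sketch below (illustrative only; it is not ERDAS IMAGINE code) reproduces the relationship shown in the table.

    def utm_zone(lon_deg):
        # Zone number (1-60) for a longitude in decimal degrees
        # (negative = west of Greenwich); zones run eastward from 180W.
        return int((lon_deg + 180.0) // 6.0) % 60 + 1

    def utm_central_meridian(zone):
        # Central meridian of a zone, in decimal degrees (negative = west).
        return zone * 6 - 183

    # Atlanta, Georgia (about 84.4W) falls in zone 16 with central meridian 87W,
    # which agrees with the 90W-84W range listed for zone 16 in Table 35.
    zone = utm_zone(-84.4)
    print(zone, utm_central_meridian(zone))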

Van der Grinten I

Summary

Construction: Miscellaneous
Property: Compromise
Meridians: Meridians are circular arcs concave toward a straight central meridian.
Parallels: Parallels are circular arcs concave toward the poles, except for a straight equator.
Graticule spacing: Meridian spacing is equal at the equator. The parallels are spaced farther apart toward the poles. The central meridian and equator are straight lines. The poles commonly are not represented. The graticule spacing results in a compromise of all properties.
Linear scale: Linear scale is true along the equator. Scale increases rapidly toward the poles.
Uses: Used by the U.S. Geological Survey to show distribution of mineral resources on the sea floor.

The Van der Grinten I projection produces a map that is neither conformal nor equal area (Figure 202). It compromises all properties and represents the earth within a circle. All lines are curved except the central meridian and the equator, and parallels are spaced farther apart toward the poles, which are usually not represented. The Van der Grinten I projection resembles the Mercator, but it is not conformal. It avoids the excessive stretching of the Mercator and the shape distortion of many of the equal area projections. The projection is used by the National Geographic Society for world maps, and it has been used to show distribution of mineral resources on the ocean floor.

Prompts

The following prompts display in the Projection Chooser if Van der Grinten I is selected. Respond to the prompts as described.

Spheroid Name:
Datum Name:
Select the spheroid and datum to use. The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian
Enter a value for the longitude of the desired central meridian to center the projection.

False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection; that is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 202: Van der Grinten I Projection

External Projections

The following external projections are supported in ERDAS IMAGINE and are described in this section. Some of these projections were discussed in the previous section; those descriptions are not repeated here. Simply refer to the page number in parentheses for more information.

NOTE: ERDAS IMAGINE does not support datum shifts for these external projections.

• Albers Conical Equal Area (see page 519)
• Azimuthal Equidistant (see page 522)
• Bipolar Oblique Conic Conformal
• Cassini-Soldner
• Conic Equidistant (see page 525)
• Laborde Oblique Mercator
• Lambert Azimuthal Equal Area (see page 535)
• Lambert Conformal Conic (see page 538)
• Mercator (see page 541)
• Modified Polyconic
• Modified Stereographic
• Mollweide Equal Area
• Oblique Mercator (see page 548)
• Orthographic (see page 551)
• Plate Carrée (see page 527)
• Rectified Skew Orthomorphic
• Regular Polyconic (see page 557)
• Robinson Pseudocylindrical
• Sinusoidal (see page 559)
• Southern Orientated Gauss Conformal
• Stereographic (see page 575)
• Stereographic (Oblique) (see page 575)
• Transverse Mercator (see page 578)
• Universal Transverse Mercator (see page 580)
• Van der Grinten (see page 583)
• Winkel's Tripel

Bipolar Oblique Conic Conformal

Summary

Construction: Cone
Property: Conformal
Meridians: Meridians are complex curves concave toward the center of the projection.
Parallels: Parallels are complex curves concave toward the nearest pole.
Graticule spacing: Graticule spacing increases away from the lines of true scale and retains the property of conformality.
Linear scale: Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.
Uses: Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.

The Bipolar Oblique Conic Conformal projection was developed by O.M. Miller and William A. Briesemeister in 1941 specifically for mapping North and South America, and it maintains conformality for these regions. It is based upon the Lambert Conformal Conic, using two oblique conic projections side-by-side. The two oblique conics are joined with the poles 104˚ apart. A great circle arc 104˚ long begins at 20˚S and 110˚W, cuts through Central America, and terminates at 45˚N and approximately 19˚59'36"W. The scale of the map is then increased by approximately 3.5%. The origin of the coordinates is made 17˚15'N, 73˚02'W. Refer to "Lambert Conformal Conic" on page 538 for more information.

Prompts

The following prompts display in the Projection Chooser if Bipolar Oblique Conic Conformal is selected.

Projection Name
Spheroid Type
Datum Name

Cassini-Soldner

Summary

Construction: Cylinder
Property: Compromise
Meridians: The central meridian, each meridian 90˚ from the central meridian, and the equator are straight lines. Other meridians are complex curves.
Parallels: Parallels are complex curves.
Graticule spacing: Complex curves for all meridians and parallels, except for the equator, the central meridian, and each meridian 90˚ away from the central meridian, all of which are straight.
Linear scale: Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form, and nearly so for the ellipsoid.
Uses: Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.

The Cassini projection was devised by C. F. Cassini de Thury in 1745 for the survey of France. Mathematical analysis by J. G. von Soldner in the early 19th century led to more accurate ellipsoidal formulas. Today, it has largely been replaced by the Transverse Mercator projection, although it is still in limited use outside of the United States. It was one of the major topographic mapping projections until the early 20th century.

The spherical form of the projection bears the same relation to the Equidistant Cylindrical or Plate Carrée projection that the spherical Transverse Mercator bears to the regular Mercator. Instead of having the straight meridians and parallels of the Equidistant Cylindrical, the Cassini has complex curves for each, except for the equator, the central meridian, and each meridian 90˚ away from the central meridian, all of which are straight.

There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If it is given a reduced scale factor, the lines of true scale are two straight lines on the map, parallel to and equidistant from the central meridian, and there is no distortion along them instead.

The scale is correct along the central meridian and also along any straight line perpendicular to the central meridian. It gradually increases in a direction parallel to the central meridian as the distance from that meridian increases, but the scale is constant along any straight line on the map that is parallel to the central meridian. Therefore, Cassini-Soldner is more suitable for regions that are predominantly north-south in extent, such as Great Britain, than regions extending in other directions. The projection is neither equal area nor conformal, but it has a compromise of both features.

The Cassini-Soldner projection was adopted by the Ordnance Survey for the official survey of Great Britain during the second half of the 19th century. A system equivalent to the oblique Cassini-Soldner projection was used in early coordinate transformations for ERTS (now Landsat) satellite imagery, but it was changed to Oblique Mercator (Hotine) in 1978 and to the Space Oblique Mercator in 1982.

Prompts

The following prompts display in the Projection Chooser if Cassini-Soldner is selected.

Projection Name
Spheroid Type
Datum Name

Laborde Oblique Mercator

In 1928, Laborde combined a conformal sphere with a complex-algebra transformation of the Oblique Mercator projection for the topographic mapping of Madagascar. This variation is now known as the Laborde Oblique Mercator. The central line is a great circle arc. See "Oblique Mercator (Hotine)" on page 548 for more information.

Prompts

The following prompts display in the Projection Chooser if Laborde Oblique Mercator is selected.

Projection Name
Spheroid Type
Datum Name

Modified Polyconic

Summary

Construction: Cone
Property: Compromise
Meridians: All meridians are straight.
Parallels: Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.
Graticule spacing: The top and bottom parallels of each sheet are nonconcentric circular arcs. The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.
Linear scale: Scale is true along each parallel and along two meridians, but no parallel is "standard."
Uses: Used for the International Map of the World series until 1962.

The Modified Polyconic projection was devised by Lallemand of France, and in 1909 it was adopted by the International Map Committee (IMC) in London as the basis for the 1:1,000,000-scale International Map of the World (IMW) series.

The projection differs from the ordinary Polyconic in two principal features: all meridians are straight, and there are two meridians that are made true to scale. Adjacent sheets exactly fit together not only north to south, but east to west. There is still a gap when mosaicking in all directions, in that there is a gap between each diagonal sheet and either one or the other adjacent sheet.

In 1962, a U.N. conference on the IMW adopted the Lambert Conformal Conic and the Polar Stereographic projections to replace the Modified Polyconic. See "Polyconic" on page 557 for more information.

Prompts

The following prompts display in the Projection Chooser if Modified Polyconic is selected.

Projection Name
Spheroid Type
Datum Name

Modified Stereographic

Summary

Construction: Plane
Property: Conformal
Meridians: All meridians are normally complex curves, although some may be straight under certain conditions.
Parallels: All parallels are complex curves, although some may be straight under certain conditions.
Graticule spacing: The graticule is normally not symmetrical about any axis or point.
Linear scale: Scale is true along irregular lines, but the map is usually designed to minimize scale variation throughout a selected region.
Uses: Used for maps of continents in the Eastern Hemisphere, for the Pacific Ocean, and for maps of Alaska and the 50 United States.

The meridians and parallels of the Modified Stereographic projection are generally curved, and there is usually no symmetry about any point or line. There are limitations to these transformations: most of them can only be used within a limited range. As the distance from the projection center increases, the meridians, parallels, and shorelines begin to exhibit loops, overlapping, and other undesirable curves. A world map using the GS50 (50-State) projection is almost illegible, with meridians and parallels intertwined like wild vines.

Prompts

The following prompts display in the Projection Chooser if Modified Stereographic is selected.

Projection Name
Spheroid Type
Datum Name

Mollweide Equal Area

Summary

Construction: Pseudo-cylinder
Property: Equal area
Meridians: All of the meridians are ellipses. The central meridian is a straight line, and 90˚ meridians are circular arcs (Pearson 1990).
Parallels: The equator and parallels are straight lines perpendicular to the central meridian, but they are not equally spaced.
Graticule spacing: Linear graticules include the central meridian and the equator (ESRI 1992). Meridians are equally spaced along the equator and along all other parallels. The parallels are straight parallel lines, but they are not equally spaced. The poles are points.
Linear scale: Scale is true along latitudes 40˚44'N and S. Distortion increases with distance from these lines and becomes severe at the edges of the projection (ESRI 1992).
Uses: Often used for world maps (Pearson 1990). Suitable for thematic or distribution mapping of the entire world, frequently in interrupted form (ESRI 1992).

The second oldest pseudo-cylindrical projection that is still in use (after the Sinusoidal) was presented by Carl B. Mollweide (1774 - 1825) of Halle, Germany, in 1805. It is an equal area projection of the earth within an ellipse. It has had a profound effect on world map projections in the 20th century, especially as an inspiration for other important projections, such as the Van der Grinten.

The Mollweide is normally used for world maps and occasionally for a very large region, such as the Pacific Ocean. This is because only two points on the Mollweide are completely free of distortion unless the projection is interrupted. These are the points at latitudes 40˚44'12"N and S on the central meridian(s).

The world is shown in an ellipse with the equator, its major axis, twice as long as the central meridian, its minor axis. The meridians 90˚ east and west of the central meridian form a complete circle. All other meridians are elliptical arcs which, with their opposite numbers on the other side of the central meridian, form complete ellipses that meet at the poles.

Prompts

The following prompts display in the Projection Chooser if Mollweide Equal Area is selected.

Projection Name
Spheroid Type
Datum Name
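For reference, the spherical form of the Mollweide projection can be computed from an auxiliary angle that is found iteratively. The sketch below follows the standard spherical formulation (after Snyder) and is illustrative only; it is not the routine used by ERDAS IMAGINE, and the function name is hypothetical.

    import math

    def mollweide(lon_deg, lat_deg, lon0_deg=0.0, radius=6370997.0):
        # Spherical-form Mollweide. The auxiliary angle theta satisfies
        # 2*theta + sin(2*theta) = pi*sin(lat) and is solved here by
        # Newton-Raphson iteration; the poles are handled separately
        # because the derivative vanishes there.
        lam = math.radians(lon_deg - lon0_deg)
        phi = math.radians(lat_deg)

        if abs(phi) > math.pi / 2.0 - 1e-10:
            theta = math.copysign(math.pi / 2.0, phi)
        else:
            theta = phi
            for _ in range(50):
                f = 2.0 * theta + math.sin(2.0 * theta) - math.pi * math.sin(phi)
                fprime = 2.0 + 2.0 * math.cos(2.0 * theta)
                delta = -f / fprime
                theta += delta
                if abs(delta) < 1e-12:
                    break

        x = (2.0 * math.sqrt(2.0) / math.pi) * radius * lam * math.cos(theta)
        y = math.sqrt(2.0) * radius * math.sin(theta)
        return x, y

    # The projected equator is twice as long as the projected central meridian,
    # matching the 2:1 ellipse described above.
    print(mollweide(180.0, 0.0)[0], 2.0 * mollweide(0.0, 90.0)[1])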

Rectified Skew Orthomorphic

Martin Hotine (1898 - 1968) called the Oblique Mercator projection the Rectified Skew Orthomorphic projection. See "Oblique Mercator (Hotine)" on page 548 for more information.

Prompts

The following prompts display in the Projection Chooser if Rectified Skew Orthomorphic is selected.

Projection Name
Spheroid Type
Datum Name

Robinson Pseudocylindrical

Summary

Construction: Pseudo-cylinder
Property: Compromise
Meridians: Meridians are elliptical arcs, equally spaced, and concave toward the central meridian.
Parallels: Parallels are straight lines and are parallel. The individual parallels are evenly divided by the meridians (Pearson 1990).
Graticule spacing: Parallels are equally spaced straight lines between 38˚N and S, and the spacing decreases beyond these limits. The poles are 0.53 times the length of the equator. Meridians are equally spaced and resemble elliptical arcs, concave toward the central meridian.
Linear scale: Generally, scale is made true along latitudes 38˚N and S. Scale is constant along any given latitude, and for the latitude of opposite sign (ESRI 1992).
Uses: Developed for use in general and thematic world maps. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (ESRI 1992).

The Robinson Pseudocylindrical projection provides a means of showing the entire earth in an uninterrupted form. The continents appear as units and are in relatively correct size and location. Poles are represented as lines, and the central meridian is a straight line 0.51 times the length of the equator. The projection is based upon tabular coordinates instead of mathematical formulas (ESRI 1992).

Prompts

The following prompts display in the Projection Chooser if Robinson Pseudocylindrical is selected.

Projection Name
Spheroid Type
Datum Name

Southern Orientated Gauss Conformal

Southern Orientated Gauss Conformal is another name for the Transverse Mercator projection, after mathematician Friedrich Gauss (1777 - 1855). It is also called the Gauss-Krüger projection. See "Transverse Mercator" on page 578 for more information.

Prompts

The following prompts display in the Projection Chooser if Southern Orientated Gauss Conformal is selected.

Projection Name
Spheroid Type
Datum Name

Winkel's Tripel

Summary

Construction: Modified azimuthal
Property: Neither conformal nor equal area
Meridians: Central meridian is straight. Other meridians are curved, are equally spaced along the equator, and are concave toward the central meridian.
Parallels: Equator and the poles are straight. Other parallels are curved and concave toward the nearest pole.
Graticule spacing: Symmetry is maintained along the central meridian or the equator. Equidistant spacing of parallels.
Linear scale: Scale is true along the central meridian and constant along the equator.
Uses: Used for world maps.

Winkel's Tripel was formulated in 1921 by Oswald Winkel of Germany. It is a combined projection that is the arithmetic mean of the Plate Carrée and Aitoff's projection (Maling).

Prompts

The following prompts display in the Projection Chooser if Winkel's Tripel is selected.

Projection Name
Spheroid Type
Datum Name

or triangle.4 . alarm . Analytical Photogrammetry . AVIRIS data have been available since 1987.2. Usually.A Glossary A absorption spectra . yearly rainfall.already or previously known.the solar imaging sensors that both emit and receive radiation.see ARC Digitized Raster Graphic. a priori . abstract symbol .the comparison of a classification to geographical data that is assumed to be true.a Russian radar satellite that completed its mission in 1992. a list of the percentages of accuracy. active sensors . usually used before the signature statistics are calculated.the electromagnetic radiation wavelengths that are absorbed by specific materials of interest. under a contract with NASA. etc. ADRI .in classification accuracy assessment. accuracy assessment .the computer replaces some expensive optical and mechanical components by substituting analog measurement and calculation with mathematical computation. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) . These bands are 10 nm wide and cover the spectral range of . the assumed-true data are derived from ground truthing. ADRG . square. Almaz .an experimental airborne radar sensor developed by Jet Propulsion Laboratories (JPL). such as population density. accuracy report . Airborne Synthetic Aperture Radar (AIRSAR) . California. Field Guide 595 .a sensor developed by JPL (Pasadena. These symbols often represent amounts that vary from place to place. California) under a contract with NASA that produces multispectral data with 224 narrow bands. An alarm highlights an area on the display which is an approximation of the area that would be classified with a signature. computed from the error matrix. such as a circle.see ARC Digital Raster Imagery.optical or mechanical instruments used to reconstruct threedimensional geometry from two overlapping photographs.a test of a training sample. Pasadena. The original data can then be compared to the highlighted area. Analog Photogrammetry . AIRSAR data have been available since 1983. aerial stereopair .4 nm.two photos taken at adjacent exposure stations.an annotation symbol that has a geometric shape.

AOIs can be stored in separate . aspect . aspect map . SPOT multispectral.a unit of measure that can be applied to data in the Lat/Lon coordinate system. based on the World Geodetic System 1984 (WGS 84). south.the orientation. neatlines. In ERDAS IMAGINE. scale bars. ARC system (Equal Arc-Second Raster Chart/Map) .aoi files. These data are primarily used for military purposes by defense contractors. annotation layer . ARC Digitized Raster Graphic (ADRG) . attribute .vector data created with the ARC/INFO UNGENERATE command. and symbols which denote geographical features. in “3 arc/second” data. lines. that are used to aid in the classification process. grid lines. ARC Digital Raster Imagery (ADRI) .Defense Mapping Agency (DMA) data that consist of SPOT panchromatic. or the direction that a surface faces. area based matching . Each pixel represents the distance covered by one second of latitude or longitude. other than remotely sensed data. These data are available only to Department of Defense contractors. line.the data. or polygon that is selected as a training sample or as the image area to be used in an operation.the tabular information associated with a raster or vector layer. area .a system that provides a rectangular coordinate and projection system at any scale for the earth’s ellipsoid. polygons.a set of annotation elements that is drawn in a Viewer or Map Composer window and stored in a file (. ARC GENERATE data . arc . west.a map that is color-coded according to the prevailing direction of the slope at each pixel. 596 ERDAS . area of interest (AOI) . annotation consists of text.data from the Defense Mapping Agency (DMA) that consist of digital copies of DMA hardcopy graphics transformed into the ARC system and accompanied by ASCII encoded support files. each pixel represents an area three seconds latitude by three seconds longitude.Glossary A ancillary data . legends.the explanatory material accompanying an image or map. For example. ellipses. with respect to the directions of the compass: north. east. annotation .ovr extension). or Landsat TM satellite imagery transformed into the ARC system and accompanied by ASCII encoded support files. aspect image .a point.an image matching technique that determines the correspondence between two image areas according to the similarity of their gray level values.a thematic raster image which shows the prevailing direction that each pixel faces.see line.a measurement of a surface. arc/second . rectangles. tick marks.

Advanced Very High Resolution Radiometer data. B band . A form of data storage in which the values for each band are ordered within a given pixel. near-infrared.a map portraying the shape of a water body or reservoir using isobaths (depth contours). bin function . bins . going north to east. This file can be edited.” banding .a map portraying background reference information onto which other information is placed. or creating new bands from other sources. the sum of a set of values divided by the number of values in the set. Pixels are sorted into a specified number of bins. etc. base map .1× 1. Base maps usually show the location and extent of natural surface features and permanent man-made features.a file that is created in the batch mode of ERDAS IMAGINE. All steps are recorded for a later run.the statistical mean. The pixels are arranged sequentially on the tape. azimuthal projection .B average . bathymetric map . Sometimes called “channel. BIP . Field Guide 597 .a mathematical function that establishes the relationship between data file values and rows in a descriptor table.an angle measured clockwise from a meridian. thermal.a set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red. infrared.band interleaved by line. representing the probabilities that pixels will be assigned to each class.see striping. All bands of data for a given line are stored consecutively within the file.1 km or 4 × 4 km. A form of data storage in which each record in the file contains a scan line (row) of data for one band. AVHRR . The Bayesian classifier allows the application of a priori weighting factors. Bayesian .band interleaved by pixel.ordered sets of pixels.a variation of the maximum likelihood classifier. batch mode . BIL . Small-scale imagery produced by an NOAA polar orbiting satellite. The pixels are then given new values based upon the bins to which they are assigned. green. It has a spatial resolution of 1. bilinear interpolation .a resampling method that uses the data file values of four pixels in a 2 by 2 window to calculate an output data file value by computing a weighted average of the input data file values with a bilinear function. based on the Bayes Law of probability.a mode of operating ERDAS IMAGINE in which steps are recorded for later use.a map projection that is created from projecting the surface of the earth to the surface of a plane.) or some other user-defined information created by combining or enhancing the original bands. blue. azimuth . batch file .
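The bilinear interpolation entry above describes a weighted average of the four pixels in a 2 by 2 window. A minimal sketch of that calculation (illustrative only; it is not ERDAS IMAGINE code, and edge handling is ignored) follows.

    def bilinear(raster, x, y):
        # Resample a value at fractional column x and row y from a 2-D
        # array-like `raster`, using the weighted average of the
        # surrounding 2 x 2 window of pixels.
        x0, y0 = int(x), int(y)      # upper left pixel of the 2 x 2 window
        dx, dy = x - x0, y - y0      # fractional offsets, each in [0, 1)

        ul = raster[y0][x0]
        ur = raster[y0][x0 + 1]
        ll = raster[y0 + 1][x0]
        lr = raster[y0 + 1][x0 + 1]

        top = ul * (1 - dx) + ur * dx
        bottom = ll * (1 - dx) + lr * dx
        return top * (1 - dy) + bottom * dy

    # The value one quarter of the way between four pixels:
    print(bilinear([[10, 20], [30, 40]], 0.25, 0.25))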

expressed in units of the specified map projection. build . A measure of data storage density for magnetic tapes. but only 4.a map laid out like the pages of a book. lines. the number of values that can be expressed by 3 bits is 8 (23 = 8). brightness value . bpi .a specific area around a feature that is isolated for or from further analysis. meaning a number that can have two possible values 0 and 1. depending upon the number of bits used.Glossary B bit .on a map. ϕ). based upon. book map . block of photographs . There are neatlines and tick marks on all sides of every page. or “off” and “on.the quantity of a primary color (red.000 bytes. or reducible to a true or false condition.the process of constructing the topology of a vector layer by processing points. A data storage format in which each band is contained in a separate file.” “display value. byte . ω. See clean.the number of logical records in each physical record. bundle attitude . Also called “intensity value. green. buffer zone . bundle . 598 ERDAS .defined by the perspective center. For example.000 columns due to a blocking factor of 7. not just the image area as does a neatline. however.” BSQ . breakline . blocking factor . can have many more values. For instance.a neighborhood analysis technique that is used to detect boundaries between thematic classes. a record may contain 28. in which each vertex has its own X. Y. a line that usually encloses the entire map.the unit of photogrammetric triangulation after each point measured in an image is connected with the perspective center by a straight light ray.logical. Boolean .formed by the combined exposures of a flight. so that further analyses will exclude these areas that are often unsuitable for development.bits per inch.defined by a spatial rotation matrix consisting of three angles (κ. bundle location . border .” “pixel value.band sequential. Z value. buffer zones are often generated around streams in site assessment studies. blue) to be output to a pixel on the display device.” “function memory value. Each page fits on the paper used by the printer. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used.” “screen value.an elevation polyline. The block consists of a number of parallel strips with a sidelap of 20-30%. blocked .” A set of bits.a binary digit. boundary . and polygons.8 bits of data.a method of storing data on 9-track tapes so that there are more logical records in each physical record. There is one bundle of light rays for each image. For example.

character . One character usually occupies one byte when stored on a computer. one cell in the image may represent an area 30 feet by 30 feet on the ground. For example. the center of a satellite image. grid cell.the physical or spectral distance that is measured as the sum of distances that are perpendicular to one another. class value . the “tail” represents the pixels that are most likely to be classified incorrectly. Field Guide 599 .in aerial photography.the area that one pixel represents. whose curve is characterized by a “tail” that represents the highest and least frequent data values. choropleth map . CCT . cartography . DTED (Digital Terrain Elevation Data) are distributed in cells. 2.a data file value of a thematic file which identifies a pixel as belonging to a particular class. a 1˚ × 1˚ area of coverage. city-block distance . measured in map units. chi-square distribution . check point .a map portraying properties of a surface using area symbols.see thematic data. classification .a map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation. class . or punctuation symbol. In classification thresholding. categorical data .1.a coordinate system in which data are organized on a grid and points on the grid are referenced by their X. Sometimes called “pixel size.a non-symmetrical data distribution. Classes are usually formed through classification of a continuous raster layer.see computer compatible tape. calibration certificate/report . Cartesian . check point analysis .a set of pixels in a GIS file which represent areas that share some condition. a pixel. cell size . cell .” center of the scene .additional ground points used to independently verify the degree of accuracy of a triangulation.a read-only storage device read by a CD-ROM player.C C cadastral map . Area symbols usually represent categorized classes of the mapped phenomenon. CD-ROM .the art and science of creating maps.the act of using check points to independently verify the degree of accuracy of a triangulation.Y coordinates.the process of assigning the pixels of a continuous raster image to discrete categories. letter. the manufacturer of the camera specifies the interior orientation in the form of a certificate or report.a number.the center pixel of the center scan line.

On a color printer. clustering .(or classification system) a set of target classes.Glossary C classification accuracy table .unsupervised training. classification scheme . magenta. color guns . or a constant in a polynomial expression. a program that accesses a server utility that is on another machine on the network. from a classified file to be tested. ERDAS IMAGINE supports several color printers. as well as text.a map projection that compromises among two or more of the map projection properties of conformality.on a computer on a network. and blue values assigned to the colorcell control the brightness of the color guns for the displayed pixel.on a display device. clean . and sometimes black ink to paper.the location where the data file values are stored in the colormap.for accuracy assessment.one number in a matrix. and a list of the classified values of the same pixels.a map on which the combined information from different thematic maps is presented.the process of constructing the topology of a vector layer by processing lines and polygons. a list of known values of reference pixels. clump . Also called raster region.the natural groupings of pixels when plotted in spectral space. green. color scheme . client . yellow. coefficient of variation . clusters .a printer that prints color or black-and-white imagery. and true direction. and blue brightness values to classes when a layer is displayed. colormap . equivalence. green. supported by some ground truth or other a priori knowledge of the true class.a scene-derived parameter that is used as input to the Sigma and Local Statistics radar enhancement filters. the process of generating signatures based on the natural groupings of pixels in image data when they are plotted in spectral space. composite map . color guns are the devices that apply cyan. 600 ERDAS . which is used to perform a function on a set of input values.an ordered set of colorcells. The red. The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data. coefficient . compromise projection . and blue phosphors that are illuminated on the picture tube in varying brightnesses to create different colors. equidistance. and orientation parameters colorcell . the red. ground coordinates. See build. collinearity .a contiguous group of pixels in one class.a set of lookup tables that assigns red.a non-linear mathematical model that photogrammetric triangulation is based upon. color printer . Collinearity equations describe the relationship among image coordinates. green.

the property of a map projection to represent true shape.a term used to describe raster data layers that contain quantitative and related values.a matrix which contains the number and percentages of pixels that were classified as expected.the percentage of pixels that are believed to be misclassified. Contrast stretching is often used in displaying continuous raster layers.the process of reassigning a range of values to another range. confidence level .000. or true shape.C computer compatible tape (CCT) . See continuous data. conformality .a value used in rectification to determine whether to accept or discard ground control points.” can be identified by their sizes and manipulated.. Groups of contiguous pixels in the same class.a map projection that is created from projecting the surface of the earth to the surface of a cone.a point with known coordinates in the ground coordinate system. The connectivity radius is used in connectivity analysis.a magnetic tape used to transfer and store digital data. such as remotely sensed images (e.the distance (in pixels) that pixels can be from one another to be considered contiguous. conformal .g. connectivity radius . In two-dimensional coordinate systems. conic projection .).a study of the ways in which pixels of a class are grouped together spatially.a type of raster data that are quantitative (measuring a characteristic) and have related. contrast stretch . etc.000 to 1. coordinate system . continuous data .a map or map projection that has the property of conformality.a method for expressing location. expressed in the units of the specified map projection. The threshold is an absolute value threshold ranging from 0. convolution filtering . locations are expressed by a column and row. convolution kernel . continuous . This is accomplished by exact transformation of angles around points. The numbers in the matrix serve to weight this average toward particular pixels. contour map . Used to change the spatial frequency characteristics of an image. contiguity analysis . SPOT. correlation threshold .a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way.a map in which a series of lines connects points of equal elevation. wherein a projection preserves the shape of any small geographical area. Field Guide 601 . called raster regions. or “clumps. contingency matrix . control point . continuous values. usually according to a linear function. since the range of data file values is usually much narrower than the range of brightness values on the display device. Landsat. also called x and y.the process of averaging small sets of pixels across an image.

1. strings. 602 ERDAS . cross correlation . a collection of numbers. cylindrical projection . attribute information. 2. in the context of remote sensing.a square matrix which contains all of the variances and covariances within the bands in a data file. current directory . and other kinds of data which represent one area of interest.windows which consist of a local neighborhood of pixels.a method of resampling which uses the data file values of sixteen pixels in a 4 by 4 window to calculate an output data file value with a cubic function. One example is square neighborhoods (e. Examples of popular databases include SYBASE. 7 X 7 pixels). 5 X 5. 3 X 3.Glossary D correlation windows .a calculation which computes the correlation coefficient of the gray values between the template window and the search window.a filter used to sharpen the overall scene luminance without distorting the interband variance content of the image. vector layers. a set of continuous and thematic raster layers. D dangling node . Covariance is defined as the average product of the differences between the data file values in each band and the mean of each band. covariance matrix . credits .” It is the default path. dBase. database (one word) . but in different bands. cubic convolution .in ERDAS IMAGINE.” it is the directory that you are “in.a relational data structure usually used to store tabular information. accuracy information.a line that does not close to form a polygon. or facts that require some processing before they are meaningful. and other details that are required or helpful to readers. a computer file containing numbers which represent a remotely sensed image. etc.also called “default directory. A data base is usually part of a geographic information system.g. data file . data base (two words) . or that extends past an intersection. corresponding GCPs .the ground control points that are located in the same geographic location as the selected GCPs.measures the tendencies of data file values for the same pixel. INFO. but were selected in different files.. the text that can include the data source and acquisition date.a computer file that contains numbers which represent an image. crisp filter . Oracle. to vary with each other in relation to the means of their respective bands. These bands must be linear. data . and can be processed to display that image. covariance .on maps.a map projection that is created from projecting the surface of the earth to the surface of a cylinder.

” “brightness value.” datum .1. the number of bands in the classified file. can be produced with terrain analysis programs. desktop scanners . density . DEMs are available from the USGS at 1:24. but are much less expensive. developable surface .the device in a sensor system that records electromagnetic radiation.a technique used to stretch the principal components of an image.the process of adding vertices to selected lines at a user-specified tolerance. not the original image. digitized raster graphic (DRG) .Y) and the elevations of the ground points and breaklines. DEM .continuous raster layers in which data file values represent elevation. Field Guide 603 . Digital Photogrammetry .a digital replica of Defense Mapping Agency hardcopy graphic products. decision rule . digital elevation model (DEM). digital terrain model (DTM) .photogrammetry as applied to digital images that are stored and processed on a computer. detector .” “digital number (DN). derivative map . descriptor .an equation or algorithm that is used to classify image data after signatures have been created. default directory .a discrete expression of topography in a data array. 2. combining.general purpose devices which lack the image detail and geometric accuracy of photogrammetric quality units.000 and 1:250. degrees of freedom .000 scale. decorrelation stretch .each number in an image file. or analyzing other maps. a neighborhood analysis technique that outputs the number of pixels that have the same value as the analyzed pixel in a userspecified window.a flat surface.see reference plane. such as the surface of a cone or a cylinder. digital orthophoto . and IMAGINE OrthoMAX. Also called “file value. consisting of a group of planimetric coordinates (X. 9-track tapes are commonly stored at 1600 and 6250 bpi.a map created by altering. yielding a map that is free of most significant geometric distortions.” “image file value.D data file value . the number of bits per inch on a magnetic tape.An aerial photo or satellite scene which has been transformed by the orthogonal projection.” “pixel.when chi-square statistics are used in thresholding. The decision rule is used to process the data file values based upon the signature statistics.see current directory. densify . See also ADRG.see attribute. or a surface that can be easily flattened by being cut and unrolled.see digital elevation model. Digital images can be scanned from photographs or can be directly captured by digital cameras.

For example. In ERDAS IMAGINE. in which directories can also contain many levels of subdirectories. distance . 16-bit file that can be created in the classification process.the geographic data sets into which ADRG data are divided.one grid location on a display device or printout. usually to be stored on a computer.Digital Line Graph. divergence .a neighborhood analysis technique that outputs the number of different values within a user-specified window. distance image file .the set of frequencies with which an event occurs. diversity .any process that converts non-digital data into numeric data.. display resolution .Glossary D digitizing . distribution rectangles (DRs) . It displays a visible image from a data file or from some user operation. dithering .the computer hardware consisting of a memory board and a monitor. display memory .a display technique that is used in ERDAS IMAGINE to allow a smaller set of colors appear to be a larger set of colors. Usually. horizontally and vertically (i. or the set of probabilities that a variable will have a particular value.e. a data file with 3 bands is said to be 3-dimensional. display driver . digitizing refers to the creation of vector data from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing tablet. display pixel . bands that diminish the results of the classification can be ruled out.a statistical measure of distance between two or more signatures. dimensionality . distribution . Divergence can be calculated for any combination of bands that will be used in the classification. DLG . displacement .a term referring to the number of bands being classified.the number of pixels that can be viewed on the display device monitor.the degree of geometric distortion for a point that is not on the nadir line.the ERDAS IMAGINE utility that interfaces between the computer running IMAGINE software and the display device. since 3-dimensional spectral space is plotted to analyze the data.see Euclidean distance.a one-band. directories are arranged in a tree structure. or a mouse on a display device.an area of a computer disk that is designated to hold a set of files. Distance image files generally have a chi-square distribution. in which each data file value represents the result of the distance equation used in the program. display device . directory . spectral distance. 604 ERDAS . 512 × 512 or 1024 × 1024). A vector data format created by the USGS.the subset of image memory that is actually viewed on the display screen.

see radiometric resolution. Ellipse plots are often used to test signatures before classification.a two-dimensional figure that is formed in a two-dimensional scatterplot when both bands plotted have normal distributions.usually three EOFs marking the end of a tape. electromagnetic radiation . eigenvalue . element . Unlike an edge detector. elevation data . usually a zero-sum kernel. Enhancement can make important features of raw. See also principal components.the energy transmitted through space in the form of electric and magnetic waves. E edge detector . DXF .the matrices of dots used to represent brightness values on hardcopy maps and images.Data Exchange Format. it does not necessarily eliminate other features. end-of-volume mark (EOV) . edge enhancer .the skipping of pixels during the display or processing of the scanning process.the length of a principal component which measures the variance of a principal component band. or a polygon.E dot patterns . characterized by frequency or wavelength. such as a point.the range of electromagnetic radiation extending from cosmic waves to radio waves. used by AutoCAD software. DEM. double precision .the process of making an image more interpretable for a particular application. Field Guide 605 .a measure of accuracy in which 15 significant digits can be stored for a coordinate. enhancement . ellipse . dynamic range . downsampling . remotely sensed data more interpretable to the human eye. which is at the edges between homogeneous groups of pixels.a high-frequency convolution kernel that brings out the edges between homogeneous groups of pixels. eigenvector . The ellipse is defined by the standard deviations of the input bands.see digital terrain model.see terrain data. a line. electromagnetic spectrum .the direction of a principal component represented as coefficients in an eigenvector matrix which is computed from the eigenvalues.usually a half-inch strip of blank tape which signifies the end of a file that is stored on magnetic tape. it only highlights edges.an entity of vector data. DTM .a convolution kernel. A format for storing vector data in ASCII files. end-of-file mark (EOF) . which smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high. See also principal components.

entity - an AutoCAD drawing element that can be placed in an AutoCAD drawing with a single command.
EOSAT - Earth Observation Satellite Company. A private company that directs the Landsat satellites and distributes Landsat imagery.
ephemeris data - data contained in the header of the data file of a SPOT scene; provides information about the recording of the data and the satellite orbit.
epipolar stereopair - a stereopair without y-parallax.
equal area - see equivalence.
equatorial aspect - a map projection that is centered around the equator or a point on the equator.
equidistance - the property of a map projection to represent true distances from an identified point.
equivalence - the property of a map projection to represent all areas in true proportion to one another.
error matrix - in classification accuracy assessment, a square matrix showing the number of reference pixels that have the same values as the actual classified points.
ERS-1 - the European Space Agency's (ESA) radar satellite launched in July 1991; currently provides the most comprehensive radar data available. ERS-2 is scheduled for launch in 1994.
ETAK MapBase - an ASCII digital street centerline map product available from ETAK, Inc. (Menlo Park, California).
Euclidean distance - the distance, either in physical or abstract (e.g., spectral) space, that is computed based on the equation of a straight line (a worked sketch follows this group of entries).
exposure station - during image acquisition, each point in the flight path at which the camera exposes the film.
extend - the process of moving selected dangling lines up a specified distance so that they intersect existing lines.
extension - the three letters after the period in a file name that usually identify the type of file.
extent - 1. the image area to be displayed in a Viewer. 2. the area of the earth's surface to be mapped.
exterior orientation - all images of a block of aerial photographs in the ground coordinate system are computed during photogrammetric triangulation, using a limited number of points with known coordinates. The exterior orientation of an image consists of the exposure station and the camera attitude at this moment.
exterior orientation parameters - the perspective center's ground coordinates in a specified map projection and three rotation angles around the coordinate axes.
extract - selected bands of a complete set of NOAA AVHRR data.
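The Euclidean distance entry above amounts to the straight-line formula D = sqrt(sum over bands of (a - b)^2). A minimal sketch, assuming two pixel vectors of band values chosen only for illustration:

    import math

    # Data file values of two pixels (or a pixel and a signature mean)
    # across three hypothetical bands.
    pixel_a = [45.0, 80.0, 120.0]
    pixel_b = [50.0, 70.0, 118.0]

    # Straight-line (Euclidean) distance in spectral space.
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(pixel_a, pixel_b)))
    print(distance)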

F

false color - a color scheme in which features have "expected" colors. For instance, vegetation is green, water is blue, etc. These are not necessarily the true colors of these features.
false easting - an offset between the x-origin of a map projection and the x-origin of a map. Usually used so that no x-coordinates are negative.
false northing - an offset between the y-origin of a map projection and the y-origin of a map. Usually used so that no y-coordinates are negative.
fast format - a type of BSQ format used by EOSAT to store Landsat TM (Thematic Mapper) data.
feature based matching - an image matching technique that determines the correspondence between two image features.
feature collection - the process of identifying, delineating, and labeling various types of natural and man-made phenomena from remotely-sensed images.
feature extraction - the process of studying and locating areas and objects on the ground and deriving useful information from images.
feature space - an abstract space that is defined by spectral units (such as an amount of electromagnetic radiation).
feature space area of interest - a user-selected area of interest (AOI) that is selected from a feature space image.
feature space image - a graph of the data file values of one band of data against the values of another band (often called a scatterplot); a worked sketch follows this group of entries.
fiducial center - the center of an aerial photo.
fiducials - four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure. Fiducials are used to compute the transformation from data file to image coordinates.
field - in an attribute data base, a category of information about each class or feature, such as "Class name" and "Histogram."
field of view - in perspective views, an angle which defines how far the view will be generated to each side of the line of sight.
file coordinates - the location of a pixel within the file in x,y coordinates. The upper left file coordinate is usually 0,0.
file pixel - the data file value for one data unit in an image file.
file specification or filespec - the complete file name, including the drive and path. If a drive or path is not specified, the file is assumed to be in the current drive and directory.
filled - referring to polygons; a filled polygon is solid or has a pattern, but is not transparent. An unfilled polygon is simply a closed vector which outlines the area of the polygon.
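A feature space image, as defined above, is essentially a two-dimensional histogram of one band plotted against another. The sketch below is a simplified illustration; the random test data and the NumPy histogram2d call are assumptions for the example, not the IMAGINE feature space tool.

    import numpy as np

    # Two hypothetical 8-bit bands of the same scene.
    band1 = np.random.randint(0, 256, size=(100, 100))
    band2 = np.random.randint(0, 256, size=(100, 100))

    # Count how many pixels fall into each (band1 value, band2 value) cell.
    feature_space, _, _ = np.histogram2d(band1.ravel(), band2.ravel(),
                                         bins=256, range=[[0, 256], [0, 256]])
    print(feature_space.shape)  # 256 x 256 grid of pixel counts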

filtering - the removal of spatial or spectral features for data enhancement. Convolution filtering is one method of spatial filtering. Some texts may use the terms "filtering" and "spatial filtering" synonymously.
flip - the process of reversing the from-to direction of selected lines or links.
focal length - the orthogonal distance from the perspective center to the image plane.
focal operations - filters which use a moving window to calculate new values for each pixel in the image based on the values of the surrounding pixels (a worked sketch follows this group of entries).
focal plane - the plane of the film or scanner used in obtaining an aerial photo.
Fourier analysis - an image enhancement technique that was derived from signal processing.
from-node - the first vertex in a line.
full set - all bands of an NOAA AVHRR (Advanced Very High Resolution Radiometer) data set.
function memories - areas of the display device memory that store the lookup tables, which translate image memory values into brightness values.
function symbol - an annotation symbol that represents an activity. For example, on a map of a state park, a symbol of a tent would indicate the location of a camping area.
Fuyo 1 (JERS-1) - the Japanese radar satellite launched in February 1992.

G

GAC - see global area coverage.
GCP - see ground control point.
GCP matching - for image to image rectification, a ground control point (GCP) selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix.
GCP prediction - the process of picking a ground control point (GCP) in either coordinate system and automatically locating that point in the other coordinate system based on the current transformation parameters.
generalize - the process of weeding vertices from selected lines using a specified tolerance.
geocentric coordinate system - a coordinate system which has its origin at the center of the earth ellipsoid. The ZG-axis equals the rotational axis of the earth, and the XG-axis passes through the Greenwich meridian. The YG-axis is perpendicular to both the ZG-axis and XG-axis, so as to create a three-dimensional coordinate system that follows the right hand rule.
geocoded data - an image(s) that has been rectified to a particular map projection and cell size and has had radiometric corrections applied.
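Convolution filtering, referred to under filtering and focal operations above, slides a kernel over the image and computes each output value from the surrounding input pixels. The following is a minimal sketch using a zero-sum kernel of the kind described under edge detector; the function, edge handling, and test data are assumptions for the example, not the IMAGINE convolution engine.

    import numpy as np

    def convolve(image, kernel):
        # Moving-window (focal) convolution; border pixels are left unchanged
        # in this sketch.
        k = kernel.shape[0] // 2
        out = image.astype(float).copy()
        for r in range(k, image.shape[0] - k):
            for c in range(k, image.shape[1] - k):
                window = image[r - k:r + k + 1, c - k:c + k + 1]
                out[r, c] = np.sum(window * kernel)
        return out

    image = np.arange(36, dtype=float).reshape(6, 6)
    kernel = np.array([[-1.0, -1.0, -1.0],
                       [-1.0,  8.0, -1.0],
                       [-1.0, -1.0, -1.0]])  # zero-sum, high-frequency kernel
    print(convolve(image, kernel))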

geographic information system (GIS) - a unique system designed for a particular application that stores, enhances, combines, and analyzes layers of geographic data to produce interpretable information. A GIS may include computer images, hardcopy maps, statistical data, and any other data needed for a study, as well as computer software and human knowledge. GISs are used for solving complex geographic planning and management problems.
geographical coordinates - a coordinate system for explaining the surface of the earth. Geographical coordinates are defined by latitude and by longitude (Lat/Lon), with respect to an origin located at the intersection of the equator and the prime (Greenwich) meridian.
geometric correction - the correction of errors of skew, rotation, and perspective in raw, remotely sensed data.
georeferencing - the process of assigning map coordinates to image data and resampling the pixels of the image to conform to the map projection grid.
gigabyte (Gb) - about one billion bytes.
GIS - see geographic information system.
GIS file - a single-band ERDAS Ver. 7.X data file in which pixels are divided into discrete categories.
global area coverage (GAC) - a type of NOAA AVHRR (Advanced Very High Resolution Radiometer) data with a spatial resolution of 4 × 4 km.
global operations - functions which calculate a single value for an entire area, rather than for each pixel like focal functions.
.gmd file - the ERDAS IMAGINE graphical model file created with Model Maker (Spatial Modeler).
gnomonic - an azimuthal projection obtained from a perspective at the center of the earth.
graphical model - a model created with Model Maker (Spatial Modeler). Graphical models are put together like flow charts and are stored in .gmd files.
graphical modeling - a technique used to combine data layers in an unlimited number of ways using icons to represent input data, functions, and output data. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.
graticule - the network of parallels of latitude and meridians of longitude applied to the global surface and projected onto maps.
gray scale - a "color" scheme with a gradation of gray tones ranging from black to white.
great circle - an arc of a circle for which the center is the center of the earth. A great circle is the shortest possible surface route between two points on the earth (a worked sketch follows this group of entries).
grid cell - a pixel.
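Great circle distance between two points given in geographical coordinates can be approximated with the haversine formula, which treats the earth as a sphere. The sketch below is illustrative only; the function name, the 6371 km mean radius, and the sample coordinates are assumptions for the example, not an ERDAS routine.

    import math

    def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        # Haversine formula: shortest surface distance on a sphere.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * radius_km * math.asin(math.sqrt(a))

    # Two example points (latitude, longitude in decimal degrees).
    print(great_circle_km(33.75, -84.39, 52.20, 0.12))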

grid lines - intersecting lines that indicate regular intervals of distance based on a coordinate system. Sometimes called a graticule.
ground control point (GCP) - a specific pixel in image data for which the output map coordinates (or other output coordinates) are known. GCPs are used for computing a transformation matrix, for use in rectifying an image.
ground coordinate system - a three-dimensional coordinate system which utilizes a known map projection. Ground coordinates (X,Y,Z) are usually expressed in feet or meters.
ground truth - data that are taken from the actual area being studied. Ground truth data are considered to be the most accurate (true) data available about the area of study.
ground truthing - the acquisition of knowledge about the study area from field work, analysis of aerial photography, personal experience, etc.

H

halftoning - the process of using dots of varying size or arrangements (rather than varying intensity) to form varying degrees of a color.
hardcopy output - any output of digital computer (softcopy) data to paper.
header file - a file usually found before the actual image data on tapes or CD-ROMs that contains information about the data, such as number of bands, upper left coordinates, map projection, etc.
header record - the first part of an image file that contains general information about the data in the file, such as the number of columns and rows, number of bands, data base coordinates of the upper left corner, and the pixel depth. The contents of header records vary depending on the type of data.
high-frequency kernel - a convolution kernel that increases the spatial frequency of an image. Also called "high-pass kernel."
High Resolution Picture Transmission (HRPT) - the direct transmission of AVHRR data in real-time with the same resolution as Local Area Coverage (LAC).
High Resolution Visible (HRV) sensor - a pushbroom scanner on a SPOT satellite that takes a sequence of line images while the satellite circles the earth.
histogram - a graph of data distribution, or a chart of the number of pixels that have each possible data file value. For a single band of data, the horizontal axis of a histogram graph is the range of all possible data file values; the vertical axis is the number of pixels that have each data value.
histogram equalization - the process of redistributing pixel values so that there are approximately the same number of pixels with each value within a range. The result is a nearly flat histogram (a worked sketch follows this group of entries).
histogram matching - the process of determining a lookup table that will convert the histogram of one band of an image or one color gun to resemble another histogram.
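Histogram equalization, defined above, can be expressed as building a lookup table from the cumulative histogram so that output values are spread almost evenly across the range. A minimal sketch for an 8-bit band; the random test data and the scaling details are assumptions for the example, not the IMAGINE implementation.

    import numpy as np

    def equalize(band, levels=256):
        # Histogram, then cumulative distribution, then a lookup table that
        # stretches the cumulative counts across the output range.
        hist, _ = np.histogram(band, bins=levels, range=(0, levels))
        cdf = hist.cumsum()
        lut = np.round((cdf - cdf.min()) / float(cdf.max() - cdf.min())
                       * (levels - 1)).astype(np.uint8)
        return lut[band]

    band = np.random.randint(60, 120, size=(50, 50))  # a low-contrast band
    print(np.unique(equalize(band)))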

horizontal control - the horizontal distribution of control points in aerial triangulation (x,y).
hue - a component of IHS (intensity, hue, saturation) that represents the color of a pixel. It varies from 0 to 360; blue = 0 (and 360) and cyan = 300.
hyperspectral sensors - sensors that collect data in a very large number of narrow spectral bands, such as the AVIRIS with 224 bands.

I

IGES - Initial Graphics Exchange Standard. A vector data exchange format published by the U.S. Department of Commerce; IGES files are in uncompressed ASCII format only.
IHS - intensity, hue, saturation. An alternate color space from RGB (red, green, blue). This system is advantageous in that it presents colors more nearly as perceived by the human eye. See intensity, hue, saturation.
image - a picture or representation of an object or scene; remotely sensed images are digital representations of the earth.
image algebra - any type of algebraic function that is applied to the data file values in one or more bands.
image center - the center of the aerial photo or satellite scene.
image coordinate system - a coordinate system in which point locations are expressed in image (pixel or photo) coordinates rather than ground coordinates.
image file - a file containing raster image data. Image files in ERDAS IMAGINE have the extension .img; image files from the ERDAS Ver. 7.X series have the extension .LAN or .GIS.
image matching - the automatic acquisition of corresponding image points on the overlapping area of two images.
image memory - the portion of display device memory that holds the image data being displayed; the display memory is a subset of image memory.
image processing - the manipulation and analysis of digital image data, including (but not limited to) enhancement.