ERDAS Field Guide™

December 2010

Copyright © 2010 ERDAS, Inc.
All rights reserved. Printed in the United States of America. The information contained in this document is the exclusive property of ERDAS, Inc. This work is protected under United States copyright law and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, except as expressly permitted in writing by ERDAS, Inc. All requests should be sent to the attention of: Manager, Technical Documentation, ERDAS, Inc., 5051 Peachtree Corners Circle, Suite 100, Norcross, GA 30092-2500 USA.

The information contained in this document is subject to change without notice.

Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a project at the Los Alamos National Laboratory, funded by the U.S. Government, managed under contract by the University of California (University), and is under exclusive commercial license to LizardTech, Inc. It is used under license from LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents pending. The U.S. Government and the University have reserved rights in MrSID technology, including without limitation: (a) The U.S. Government has a nonexclusive, nontransferable, irrevocable, paid-up license to practice or have practiced throughout the world, for or on behalf of the United States, inventions covered by U.S. Patent No. 5,710,835, and has other rights under 35 U.S.C. § 200-212 and applicable implementing regulations; (b) If LizardTech's rights in the MrSID Technology terminate during the term of this Agreement, you may continue to use the Software. Any provisions of this license which could reasonably be deemed to do so would then protect the University and/or the U.S. Government; and (c) The University has no obligation to furnish any know-how, technical assistance, or technical data to users of MrSID software and makes no warranty or representation as to the validity of U.S. Patent 5,710,835 nor that the MrSID Software will not infringe any patent or other proprietary right. For further information about these provisions, contact LizardTech, 1008 Western Ave., Suite 200, Seattle, WA 98104.

ERDAS, ERDAS IMAGINE, Stereo Analyst, IMAGINE Essentials, IMAGINE Advantage, IMAGINE Professional, IMAGINE VirtualGIS, Mapcomposer, Viewfinder, and Imagizer are registered trademarks of ERDAS, Inc. Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.

Table of Contents

List of Figures
List of Tables
Preface
    Introduction
    Conventions Used in this Book

Raster Data
    Introduction
    Image Data
        Bands
        Coordinate Systems
    Remote Sensing
        Absorption / Reflection Spectra
    Resolution
        Spectral Resolution
        Spatial Resolution
        Radiometric Resolution
        Temporal Resolution
    Data Correction
        Line Dropout
        Striping
    Data Storage
        Storage Formats
        Storage Media
        Calculating Disk Space
        ERDAS IMAGINE Format (.img)
    Image File Organization
        Consistent Naming Convention
        Keeping Track of Image Files
    Geocoded Data
    Using Image Data in GIS
        Subsetting and Mosaicking
        Enhancement
        Multispectral Classification
    Editing Raster Data
        Editing Continuous (Athematic) Data
        Interpolation Techniques
    Image Compression
        Dynamic Range Run-Length Encoding (DR RLE)
        ECW Compression

Vector Data
    Introduction
        Points
        Lines
        Polygons
        Vertex
        Coordinates
        Vector Layers
        Topology
        Vector Files
    Attribute Information
    Displaying Vector Data
        Color Schemes
        Symbolization
    Vector Data Sources
    Digitizing
        Tablet Digitizing
        Screen Digitizing
    Imported Vector Data
    Raster to Vector Conversion
    Other Vector Data Types
        Shapefile Vector Format
        SDE
        SDTS
        ArcGIS Integration

Raster and Vector Data Sources
    Importing and Exporting
        Raster Data
        Raster Data Sources
        Annotation Data
        Generic Binary Data
        Vector Data
    Optical Satellite Data
        Satellite System
        Satellite Characteristics
        ALOS
        ASTER
        EROS A and EROS B
        FORMOSAT-2
        GeoEye-1
        IKONOS
        IRS
        KOMPSAT 1-2
        Landsat 1-5
        Landsat 7
        LPGS and NLAPS Processing Systems
        NOAA Polar Orbiter Data
        OrbView-3
        QuickBird
        RapidEye
        SeaWiFS
        SPOT 1-3
        SPOT 4
        SPOT 5
        WorldView-1
        WorldView-2
    Radar Satellite Data
        Advantages of Using Radar Data
        Radar Sensor Types
        Speckle Noise
        Applications for Radar Data
        Radar Sensors
    Image Data from Aircraft
        Aircraft Radar Imagery
        Aircraft Optical Imagery
    Image Data from Scanning
        Photogrammetric Scanners
        Desktop Scanners
        Aerial Photography
        DOQs
    ADRG Data
        ARC System
        ADRG File Format
        .OVR (overview)
        .IMG (scanned image data)
        .Lxx (legend data)
        ADRG File Naming Convention
    ADRI Data
        .OVR (overview)
        .IMG (scanned image data)
        ADRI File Naming Convention
    Raster Product Format
        CIB
        CADRG
    Topographic Data
        DEM
        DTED
        Using Topographic Data
    GPS Data
        Introduction
        Satellite Position
        Differential Correction
        Applications of GPS Data
    Ordering Raster Data
        Addresses to Contact
    Raster Data from Other Software Vendors
        ERDAS Ver. 7.X
        GRID and GRID Stacks
        JFIF (JPEG)
        JPEG2000
        MrSID
        SDTS
        SUN Raster
        TIFF
        GeoTIFF
    Vector Data from Other Software Vendors
        ARCGEN
        AutoCAD (DXF)
        DLG
        ETAK
        IGES
        TIGER

Image Display
    Display Memory Size
    Pixel
    Colors
    Colormap and Colorcells
    Display Types
        8-bit PseudoColor
        24-bit DirectColor
        24-bit TrueColor
        PC Displays
    Displaying Raster Layers
        Continuous Raster Layers
        Thematic Raster Layers
    Using the Viewer
        Pyramid Layers
        Dithering
        Viewing Layers
        Viewing Multiple Layers
        Zoom and Roam
        Geographic Information
        Enhancing Continuous Raster Layers
        Creating New Image Files

Geographic Information Systems
    Information vs. Data
    Data Input
    Continuous Layers
    Thematic Layers
        Statistics
    Vector Layers
    Attributes
        Raster Attributes
        Vector Attributes
    Analysis
        ERDAS IMAGINE Analysis Tools
        Analysis Procedures
    Proximity Analysis
    Contiguity Analysis
    Neighborhood Analysis
    Recoding
    Overlaying
    Indexing
    Matrix Analysis
    Modeling
    Graphical Modeling
        Model Maker Functions
        Objects
        Data Types
        Output Parameters
        Using Attributes in Models
    Script Modeling
        Statements
        Data Types
        Variables
    Vector Analysis
        Editing Vector Layers
    Constructing Topology
        Building and Cleaning Coverages

Cartography
    Types of Maps
        Thematic Maps
    Annotation
    Scale
    Legends
    Neatlines, Tick Marks, and Grid Lines
    Symbols
    Labels and Descriptive Text
        Typography and Lettering
    Projections
        Properties of Map Projections
        Projection Types
    Geographical and Planar Coordinates
    Available Map Projections
    Choosing a Map Projection
        Map Projection Uses in a GIS
        Deciding Factors
        Guidelines
    Spheroids
        Non-Earth Spheroids
    Map Composition
        Learning Map Composition
        Plan the Map
    Map Accuracy
        US National Map Accuracy Standard
        USGS Land Use and Land Cover Map Guidelines
        USDA SCS Soils Maps Guidelines
        Digitized Hardcopy Maps

Rectification
    Registration
    Georeferencing
    Latitude/Longitude
    Orthorectification
    When to Rectify
        When to Georeference Only
        Disadvantages of Rectification
        Rectification Steps
    Ground Control Points
        GCPs in ERDAS IMAGINE
        Entering GCPs
        GCP Prediction and Matching
    Polynomial Transformation
        Linear Transformations
        Nonlinear Transformations
        Effects of Order
        Minimum Number of GCPs
    Rubber Sheeting
        Triangle-Based Finite Element Analysis
        Triangulation
        Triangle-based rectification
        Linear transformation
        Nonlinear transformation
        Check Point Analysis
    RMS Error
        Residuals and RMS Error Per GCP
        Total RMS Error
        Error Contribution by Point
        Tolerance of RMS Error
        Evaluating RMS Error
    Resampling Methods
        Rectifying to Lat/Lon
        Nearest Neighbor
        Bilinear Interpolation
        Cubic Convolution
        Bicubic Spline Interpolation
    Map-to-Map Coordinate Conversions
        Conversion Process
        Vector Data

Hardcopy Output
    Printing Maps
        Scaled Maps
        Printing Large Maps
        Scale and Resolution
        Map Scaling Examples
    Mechanics of Printing
        Halftone Printing
        Continuous Tone Printing
        Contrast and Color Tables
        RGB to CMY Conversion

Map Projections
    USGS Projections
        Alaska Conformal
        Albers Conical Equal Area
        Azimuthal Equidistant
        Behrmann
        Bonne
        Cassini
        Cylindrical Equal Area
        Double Stereographic
        Eckert I
        Eckert II
        Eckert III
        Eckert IV
        Eckert V
        Eckert VI
        EOSAT SOM
        EPSG Coordinate Systems
        Equidistant Conic
        Equidistant Cylindrical
        Equirectangular (Plate Carrée)
        Gall Stereographic
        Gauss Kruger
        General Vertical Near-side Perspective
        Geographic (Lat/Lon)
        Gnomonic
        Hammer
        Interrupted Goode Homolosine
        Interrupted Mollweide
        Krovak
        Lambert Azimuthal Equal Area
        Lambert Conformal Conic
        Lambert Conic Conformal (1 Standard Parallel)
        Loximuthal
        Mercator
        Miller Cylindrical
        MGRS
        Modified Transverse Mercator
        Mollweide
        New Zealand Map Grid
        Oblated Equal Area
        Oblique Mercator (Hotine)
        Orthographic
        Plate Carrée
        Polar Stereographic
        Polyconic
        Quartic Authalic
        Robinson
        RSO
        Sinusoidal
        Space Oblique Mercator
        Space Oblique Mercator (Formats A & B)
        State Plane
        Stereographic
        Stereographic (Extended)
        Transverse Mercator
        Two Point Equidistant
        UTM
        Van der Grinten I
        Wagner IV
        Wagner VII
        Winkel I
    External Projections
        Bipolar Oblique Conic Conformal
        Cassini-Soldner
        Laborde Oblique Mercator
        Minimum Error Conformal
        Modified Polyconic
        Modified Stereographic
        Mollweide Equal Area
        Rectified Skew Orthomorphic
        Robinson Pseudocylindrical
        Southern Orientated Gauss Conformal
        Swiss Cylindrical
        Winkel’s Tripel

Mosaic
    Input Image Mode
        Exclude Areas
        Image Dodging
        Color Balancing
        Histogram Matching
    Intersection Mode
        Set Overlap Function
        Automatically Generate Cutlines For Intersection
        Geometry-based Cutline Generation
    Output Image Mode
        Output Image Options
        Run Mosaic To Disc

Enhancement
    Display vs. File Enhancement
    Spatial Modeling Enhancements
    Correcting Data Anomalies
        Radiometric Correction: Visible/Infrared Imagery
        Atmospheric Effects
        Geometric Correction
    Radiometric Enhancement
        Contrast Stretching
        Histogram Equalization
        Histogram Matching
        Brightness Inversion
    Spatial Enhancement
        Convolution Filtering
        Crisp
        Resolution Merge
        Adaptive Filter
    Wavelet Resolution Merge
        Wavelet Theory
        Algorithm Theory
        Prerequisites and Limitations
        Spectral Transform
    Spectral Enhancement
        Principal Components Analysis
        Decorrelation Stretch
        Tasseled Cap
        RGB to IHS
        IHS to RGB
        Indices
    Hyperspectral Image Processing
    Independent Component Analysis
        Component Ordering
        Band Generation for Multispectral Imagery
        Remote Sensing Applications for ICs
        Tips and Tricks
    Fourier Analysis
        FFT
        Fourier Magnitude
        IFFT
        Filtering
        Windows
        Fourier Noise Removal
        Homomorphic Filtering
    Radar Imagery Enhancement
        Speckle Noise
        Edge Detection
        Texture
        Radiometric Correction: Radar Imagery
        Merging Radar with VIS/IR Imagery

Classification
    The Classification Process
        Pattern Recognition
        Training
        Signatures
        Decision Rule
        Output File
    Classification Tips
        Classification Scheme
        Iterative Classification
        Supervised vs. Unsupervised Training
        Classifying Enhanced Data
        Dimensionality
    Supervised Training
        Training Samples and Feature Space Objects
    Selecting Training Samples
        Evaluating Training Samples
    Selecting Feature Space Objects
    Unsupervised Training
        ISODATA Clustering
        RGB Clustering
    Signature Files
    Evaluating Signatures
        Alarm
        Ellipse
        Contingency Matrix
        Separability
        Signature Manipulation
    Classification Decision Rules
        Nonparametric Rules
        Parametric Rules
        Parallelepiped
        Feature Space
        Minimum Distance
        Mahalanobis Distance
        Maximum Likelihood/Bayesian
    Fuzzy Methodology
        Fuzzy Classification
        Fuzzy Convolution
    Expert Classification
        Knowledge Engineer
        Knowledge Classifier
    Evaluating Classification
        Thresholding
        Accuracy Assessment

Photogrammetric Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
What is Photogrammetry? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595 Types of Photographs and Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596 Why use Photogrammetry? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597 Photogrammetry/ Conventional Geometric Correction . . . . . . . . . . . . . . . . . . . . . 597 Single Frame Orthorectification/Block Triangulation . . . . . . . . . . . . . . . . . . . . . . .598 Image and Data Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600 Photogrammetric Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601 Desktop Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601 Scanning Resolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602 Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .603 Terrestrial Photography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606 Interior Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607 Principal Point and Focal Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608 Fiducial Marks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608 Lens Distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610 Exterior Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611 The Collinearity Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613 Photogrammetric Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614 Space Resection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615 Space Forward Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615 Bundle Block Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616 Least Squares Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619 Self-calibrating Bundle Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622 Automatic Gross Error Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622 GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623 GCP Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .624 Processing Multiple Strips of Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625 Tie Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626 Automatic Tie Point Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627 Image Matching Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628

Area Based Matching . . . . . . . . . . . . . . . . . . 628
Feature Based Matching . . . . . . . . . . . . . . . . . . 631
Relation Based Matching . . . . . . . . . . . . . . . . . . 631
Image Pyramid . . . . . . . . . . . . . . . . . . 631
Satellite Photogrammetry . . . . . . . . . . . . . . . . . . 633
SPOT Interior Orientation . . . . . . . . . . . . . . . . . . 633
SPOT Exterior Orientation . . . . . . . . . . . . . . . . . . 635
Collinearity Equations & Satellite Block Triangulation . . . . . . . . . . . . . . . . . . 636
Orthorectification . . . . . . . . . . . . . . . . . . 640

Terrain Analysis . . . . . . . . . . . . . . . . . . 645
Terrain Data . . . . . . . . . . . . . . . . . . 645
Slope Images . . . . . . . . . . . . . . . . . . 646
Aspect Images . . . . . . . . . . . . . . . . . . 647
Shaded Relief . . . . . . . . . . . . . . . . . . 650
Topographic Normalization . . . . . . . . . . . . . . . . . . 652
Lambertian Reflectance Model . . . . . . . . . . . . . . . . . . 653
Non-Lambertian Model . . . . . . . . . . . . . . . . . . 654

Radar Concepts . . . . . . . . . . . . . . . . . . 657
IMAGINE OrthoRadar Theory . . . . . . . . . . . . . . . . . . 657
Parameters Required for Orthorectification . . . . . . . . . . . . . . . . . . 657
Algorithm Description . . . . . . . . . . . . . . . . . . 657
IMAGINE StereoSAR DEM Theory . . . . . . . . . . . . . . . . . . 667
Introduction . . . . . . . . . . . . . . . . . . 667
Input . . . . . . . . . . . . . . . . . . 667
Subset . . . . . . . . . . . . . . . . . . 668
Despeckle . . . . . . . . . . . . . . . . . . 671
Degrade . . . . . . . . . . . . . . . . . . 672
Coregister . . . . . . . . . . . . . . . . . . 672
Match . . . . . . . . . . . . . . . . . . 673
Degrade . . . . . . . . . . . . . . . . . . 673
Height . . . . . . . . . . . . . . . . . . 678
IMAGINE InSAR Theory . . . . . . . . . . . . . . . . . . 679
Introduction . . . . . . . . . . . . . . . . . . 679
Electromagnetic Wave Background . . . . . . . . . . . . . . . . . . 679
The Interferometric Model . . . . . . . . . . . . . . . . . . 680
Image Coregistration . . . . . . . . . . . . . . . . . . 682
Phase Noise Reduction . . . . . . . . . . . . . . . . . . 687
Phase Flattening . . . . . . . . . . . . . . . . . . 690
Phase Unwrapping . . . . . . . . . . . . . . . . . . 692
Conclusions . . . . . . . . . . . . . . . . . . 696

Math Topics . . . . . . . . . . . . . . . . . . 697
Summation . . . . . . . . . . . . . . . . . . 697
Statistics . . . . . . . . . . . . . . . . . . 697
Histogram . . . . . . . . . . . . . . . . . . 698
Bin Functions . . . . . . . . . . . . . . . . . . 698
Mean . . . . . . . . . . . . . . . . . . 698
Normal Distribution . . . . . . . . . . . . . . . . . . 700
Variance . . . . . . . . . . . . . . . . . . 701
Standard Deviation . . . . . . . . . . . . . . . . . . 702
Parameters . . . . . . . . . . . . . . . . . . 703
Covariance . . . . . . . . . . . . . . . . . . 704
Covariance Matrix . . . . . . . . . . . . . . . . . . 704
Dimensionality of Data . . . . . . . . . . . . . . . . . . 705
Measurement Vector . . . . . . . . . . . . . . . . . . 706
Mean Vector . . . . . . . . . . . . . . . . . . 706
Feature Space . . . . . . . . . . . . . . . . . . 707
Feature Space Images . . . . . . . . . . . . . . . . . . 708
n-Dimensional Histogram . . . . . . . . . . . . . . . . . . 708
Spectral Distance . . . . . . . . . . . . . . . . . . 709
Order . . . . . . . . . . . . . . . . . . 710
Polynomials . . . . . . . . . . . . . . . . . . 710
Transformation Matrix . . . . . . . . . . . . . . . . . . 710
Matrix Algebra . . . . . . . . . . . . . . . . . . 711
Matrix Notation . . . . . . . . . . . . . . . . . . 712
Matrix Multiplication . . . . . . . . . . . . . . . . . . 712
Transposition . . . . . . . . . . . . . . . . . . 713

Glossary . . . . . . . . . . . . . . . . . . 717
Bibliography . . . . . . . . . . . . . . . . . . 777
Works Cited . . . . . . . . . . . . . . . . . . 777
Related Reading . . . . . . . . . . . . . . . . . . 792
Index . . . . . . . . . . . . . . . . . . 795

List of Figures

Figure 1: Pixels and Bands in a Raster Image . . . . . . . . . . . . . . . . . . 2
Figure 2: Typical File Coordinates . . . . . . . . . . . . . . . . . . 4
Figure 3: Electromagnetic Spectrum . . . . . . . . . . . . . . . . . . 5
Figure 4: Sun Illumination Spectral Irradiance at the Earth's Surface . . . . . . . . . . . . . . . . . . 7
Figure 5: Factors Affecting Radiation . . . . . . . . . . . . . . . . . . 8
Figure 6: Reflectance Spectra . . . . . . . . . . . . . . . . . . 10
Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region . . . . . . . . . . . . . . . . . . 12
Figure 8: IFOV . . . . . . . . . . . . . . . . . . 16
Figure 9: Brightness Values . . . . . . . . . . . . . . . . . . 17
Figure 10: Landsat TM—Band 2 (Four Types of Resolution) . . . . . . . . . . . . . . . . . . 17
Figure 11: Band Interleaved by Line (BIL) . . . . . . . . . . . . . . . . . . 20
Figure 12: Band Sequential (BSQ) . . . . . . . . . . . . . . . . . . 21
Figure 13: Image Files Store Raster Layers . . . . . . . . . . . . . . . . . . 26
Figure 14: Example of a Thematic Raster Layer . . . . . . . . . . . . . . . . . . 27
Figure 15: Examples of Continuous Raster Layers . . . . . . . . . . . . . . . . . . 28
Figure 16: Vector Elements . . . . . . . . . . . . . . . . . . 41
Figure 17: Vertices . . . . . . . . . . . . . . . . . . 42
Figure 18: Workspace Structure . . . . . . . . . . . . . . . . . . 45
Figure 19: Attribute CellArray . . . . . . . . . . . . . . . . . . 46
Figure 20: Symbolization Example . . . . . . . . . . . . . . . . . . 48
Figure 21: Digitizing Tablet . . . . . . . . . . . . . . . . . . 49
Figure 22: Raster Format Converted to Vector Format . . . . . . . . . . . . . . . . . . 52
Figure 23: Multispectral Imagery Comparison . . . . . . . . . . . . . . . . . . 68
Figure 24: Landsat MSS vs. Landsat TM . . . . . . . . . . . . . . . . . . 79
Figure 25: SPOT Panchromatic vs. SPOT XS . . . . . . . . . . . . . . . . . . 89
Figure 26: SLAR Radar . . . . . . . . . . . . . . . . . . 94
Figure 27: Received Radar Signal . . . . . . . . . . . . . . . . . . 94
Figure 28: Radar Reflection from Different Sources and Distances . . . . . . . . . . . . . . . . . . 95
Figure 29: ADRG Overview File Displayed in a Viewer . . . . . . . . . . . . . . . . . . 112
Figure 30: Subset Area with Overlapping ZDRs . . . . . . . . . . . . . . . . . . 113
Figure 31: Seamless Nine Image DR . . . . . . . . . . . . . . . . . . 117
Figure 32: ADRI Overview File Displayed in a Viewer . . . . . . . . . . . . . . . . . . 118
Figure 33: Arc/second Format . . . . . . . . . . . . . . . . . . 122
Figure 34: Common Uses of GPS Data . . . . . . . . . . . . . . . . . . 127
Figure 35: Example of One Seat with One Display and Two Screens . . . . . . . . . . . . . . . . . . 145
Figure 36: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . 150

Figure 37: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . 151
Figure 38: Transforming Data File Values to Screen Values . . . . . . . . . . . . . . . . . . 152
Figure 39: Contrast Stretch and Colorcell Values . . . . . . . . . . . . . . . . . . 155
Figure 40: Stretching by Min/Max vs. Standard Deviation . . . . . . . . . . . . . . . . . . 156
Figure 41: Continuous Raster Layer Display Process . . . . . . . . . . . . . . . . . . 157
Figure 42: Thematic Raster Layer Display Process . . . . . . . . . . . . . . . . . . 160
Figure 43: Pyramid Layers . . . . . . . . . . . . . . . . . . 165
Figure 44: Example of Dithering . . . . . . . . . . . . . . . . . . 166
Figure 45: Example of Color Patches . . . . . . . . . . . . . . . . . . 167
Figure 46: Data Input . . . . . . . . . . . . . . . . . . 176
Figure 47: Raster Attributes for lnlandc.img . . . . . . . . . . . . . . . . . . 181
Figure 48: Vector Attributes CellArray . . . . . . . . . . . . . . . . . . 183
Figure 49: Proximity Analysis . . . . . . . . . . . . . . . . . . 186
Figure 50: Contiguity Analysis . . . . . . . . . . . . . . . . . . 187
Figure 51: Using a Mask . . . . . . . . . . . . . . . . . . 188
Figure 52: Sum Option of Neighborhood Analysis (Image Interpreter) . . . . . . . . . . . . . . . . . . 190
Figure 53: Overlay . . . . . . . . . . . . . . . . . . 192
Figure 54: Indexing . . . . . . . . . . . . . . . . . . 193
Figure 55: Graphical Model for Sensitivity Analysis . . . . . . . . . . . . . . . . . . 196
Figure 56: Graphical Model Structure . . . . . . . . . . . . . . . . . . 197
Figure 57: Modeling Objects . . . . . . . . . . . . . . . . . . 200
Figure 58: Graphical and Script Models For Tasseled Cap Transformation . . . . . . . . . . . . . . . . . . 204
Figure 59: Layer Errors . . . . . . . . . . . . . . . . . . 210
Figure 60: Sample Scale Bars . . . . . . . . . . . . . . . . . . 217
Figure 61: Sample Legend . . . . . . . . . . . . . . . . . . 220
Figure 62: Sample Symbols . . . . . . . . . . . . . . . . . . 221
Figure 63: Sample Neatline, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . 222
Figure 64: Sample Sans Serif and Serif Typefaces with Various Styles Applied . . . . . . . . . . . . . . . . . . 225
Figure 65: Good Lettering vs. Bad Lettering . . . . . . . . . . . . . . . . . . 226
Figure 66: Projection Types . . . . . . . . . . . . . . . . . . 229
Figure 67: Tangent and Secant Cylinders . . . . . . . . . . . . . . . . . . 230
Figure 68: Tangent and Secant Cones . . . . . . . . . . . . . . . . . . 231
Figure 69: Ellipse . . . . . . . . . . . . . . . . . . 241
Figure 70: Polynomial Curve vs. GCPs . . . . . . . . . . . . . . . . . . 259
Figure 71: Linear Transformations . . . . . . . . . . . . . . . . . . 261
Figure 72: Nonlinear Transformations . . . . . . . . . . . . . . . . . . 262
Figure 73: Transformation Example—1st-Order . . . . . . . . . . . . . . . . . . 265
Figure 74: Transformation Example—2nd GCP Changed . . . . . . . . . . . . . . . . . . 265

Figure 75: Transformation Example—2nd-Order . . . . . . . . . . . . . . . . . . 266
Figure 76: Transformation Example—4th GCP Added . . . . . . . . . . . . . . . . . . 266
Figure 77: Transformation Example—3rd-Order . . . . . . . . . . . . . . . . . . 267
Figure 78: Transformation Example—Effect of a 3rd-Order Transformation . . . . . . . . . . . . . . . . . . 267
Figure 79: Triangle Network . . . . . . . . . . . . . . . . . . 269
Figure 80: Residuals and RMS Error Per Point . . . . . . . . . . . . . . . . . . 272
Figure 81: RMS Error Tolerance . . . . . . . . . . . . . . . . . . 273
Figure 82: Resampling . . . . . . . . . . . . . . . . . . 275
Figure 83: Nearest Neighbor . . . . . . . . . . . . . . . . . . 276
Figure 84: Bilinear Interpolation . . . . . . . . . . . . . . . . . . 278
Figure 85: Linear Interpolation . . . . . . . . . . . . . . . . . . 278
Figure 86: Cubic Convolution . . . . . . . . . . . . . . . . . . 282
Figure 87: Layout for a Book Map and a Paneled Map . . . . . . . . . . . . . . . . . . 290
Figure 88: Sample Map Composition . . . . . . . . . . . . . . . . . . 292
Figure 89: Albers Conical Equal Area Projection . . . . . . . . . . . . . . . . . . 305
Figure 90: Polar Aspect of the Azimuthal Equidistant Projection . . . . . . . . . . . . . . . . . . 308
Figure 91: Behrmann Cylindrical Equal-Area Projection . . . . . . . . . . . . . . . . . . 310
Figure 92: Bonne Projection . . . . . . . . . . . . . . . . . . 312
Figure 93: Cassini Projection . . . . . . . . . . . . . . . . . . 314
Figure 94: Cylindrical Equal-Area Projection . . . . . . . . . . . . . . . . . . 316
Figure 95: Eckert I Projection . . . . . . . . . . . . . . . . . . 320
Figure 96: Eckert II Projection . . . . . . . . . . . . . . . . . . 322
Figure 97: Eckert III Projection . . . . . . . . . . . . . . . . . . 324
Figure 98: Eckert IV Projection . . . . . . . . . . . . . . . . . . 326
Figure 99: Eckert V Projection . . . . . . . . . . . . . . . . . . 328
Figure 100: Eckert VI Projection . . . . . . . . . . . . . . . . . . 330
Figure 101: Equidistant Conic Projection . . . . . . . . . . . . . . . . . . 334
Figure 102: Equirectangular Projection . . . . . . . . . . . . . . . . . . 337
Figure 103: Geographic Projection . . . . . . . . . . . . . . . . . . 343
Figure 104: Hammer Projection . . . . . . . . . . . . . . . . . . 347
Figure 105: Interrupted Goode Homolosine Projection . . . . . . . . . . . . . . . . . . 349
Figure 106: Interrupted Mollweide Projection . . . . . . . . . . . . . . . . . . 350
Figure 107: Lambert Azimuthal Equal Area Projection . . . . . . . . . . . . . . . . . . 355
Figure 108: Lambert Conformal Conic Projection . . . . . . . . . . . . . . . . . . 358
Figure 109: Loximuthal Projection . . . . . . . . . . . . . . . . . . 362
Figure 110: Mercator Projection . . . . . . . . . . . . . . . . . . 365
Figure 111: Miller Cylindrical Projection . . . . . . . . . . . . . . . . . . 367
Figure 112: MGRS Grid Zones . . . . . . . . . . . . . . . . . . 368

Figure 113: Mollweide Projection . . . . . . . . . . . . . . . . . . 373
Figure 114: Oblique Mercator Projection . . . . . . . . . . . . . . . . . . 378
Figure 115: Orthographic Projection . . . . . . . . . . . . . . . . . . 381
Figure 116: Plate Carrée Projection . . . . . . . . . . . . . . . . . . 382
Figure 117: Polar Stereographic Projection and its Geometric Construction . . . . . . . . . . . . . . . . . . 385
Figure 118: Polyconic Projection of North America . . . . . . . . . . . . . . . . . . 387
Figure 119: Quartic Authalic Projection . . . . . . . . . . . . . . . . . . 389
Figure 120: Robinson Projection . . . . . . . . . . . . . . . . . . 391
Figure 121: Sinusoidal Projection . . . . . . . . . . . . . . . . . . 394
Figure 122: Space Oblique Mercator Projection . . . . . . . . . . . . . . . . . . 396
Figure 123: Zones of the State Plane Coordinate System . . . . . . . . . . . . . . . . . . 399
Figure 124: Stereographic Projection . . . . . . . . . . . . . . . . . . 410
Figure 125: Two Point Equidistant Projection . . . . . . . . . . . . . . . . . . 415
Figure 126: Zones of the Universal Transverse Mercator Grid in the United States . . . . . . . . . . . . . . . . . . 417
Figure 127: Van der Grinten I Projection . . . . . . . . . . . . . . . . . . 420
Figure 128: Wagner IV Projection . . . . . . . . . . . . . . . . . . 422
Figure 129: Wagner VII Projection . . . . . . . . . . . . . . . . . . 424
Figure 130: Winkel I Projection . . . . . . . . . . . . . . . . . . 426
Figure 131: Winkel's Tripel Projection . . . . . . . . . . . . . . . . . . 442
Figure 132: Histograms of Radiometrically Enhanced Data . . . . . . . . . . . . . . . . . . 463
Figure 133: Graph of a Lookup Table . . . . . . . . . . . . . . . . . . 464
Figure 134: Enhancement with Lookup Tables . . . . . . . . . . . . . . . . . . 465
Figure 135: Nonlinear Radiometric Enhancement . . . . . . . . . . . . . . . . . . 466
Figure 136: Piecewise Linear Contrast Stretch . . . . . . . . . . . . . . . . . . 467
Figure 137: Contrast Stretch Using Lookup Tables, and Effect on Histogram . . . . . . . . . . . . . . . . . . 469
Figure 138: Histogram Equalization . . . . . . . . . . . . . . . . . . 470
Figure 139: Histogram Equalization Example . . . . . . . . . . . . . . . . . . 471
Figure 140: Equalized Histogram . . . . . . . . . . . . . . . . . . 472
Figure 141: Histogram Matching . . . . . . . . . . . . . . . . . . 473
Figure 142: Spatial Frequencies . . . . . . . . . . . . . . . . . . 475
Figure 143: Applying a Convolution Kernel . . . . . . . . . . . . . . . . . . 476
Figure 144: Output Values for Convolution Kernel . . . . . . . . . . . . . . . . . . 477
Figure 145: Local Luminance Intercept . . . . . . . . . . . . . . . . . . 483
Figure 146: Schematic Diagram of the Discrete Wavelet Transform - DWT . . . . . . . . . . . . . . . . . . 486
Figure 147: Inverse Discrete Wavelet Transform - DWT-1 . . . . . . . . . . . . . . . . . . 487
Figure 148: Wavelet Resolution Merge . . . . . . . . . . . . . . . . . . 488
Figure 149: Two Band Scatterplot . . . . . . . . . . . . . . . . . . 492

Figure 150: First Principal Component . . . . . . . . . . . . . . . . . . 493
Figure 151: Range of First Principal Component . . . . . . . . . . . . . . . . . . 493
Figure 152: Second Principal Component . . . . . . . . . . . . . . . . . . 494
Figure 153: Intensity, Hue, and Saturation Color Coordinate System . . . . . . . . . . . . . . . . . . 499
Figure 154: One-Dimensional Fourier Analysis . . . . . . . . . . . . . . . . . . 512
Figure 155: Example of Fourier Magnitude . . . . . . . . . . . . . . . . . . 514
Figure 156: The Padding Technique . . . . . . . . . . . . . . . . . . 516
Figure 157: Comparison of Direct and Fourier Domain Processing . . . . . . . . . . . . . . . . . . 518
Figure 158: An Ideal Cross Section . . . . . . . . . . . . . . . . . . 520
Figure 159: High-Pass Filtering Using the Ideal Window . . . . . . . . . . . . . . . . . . 521
Figure 160: Filtering Using the Bartlett Window . . . . . . . . . . . . . . . . . . 521
Figure 161: Filtering Using the Butterworth Window . . . . . . . . . . . . . . . . . . 522
Figure 162: Homomorphic Filtering Process . . . . . . . . . . . . . . . . . . 524
Figure 163: Effects of Mean and Median Filters . . . . . . . . . . . . . . . . . . 527
Figure 164: Regions of Local Region Filter . . . . . . . . . . . . . . . . . . 528
Figure 165: One-dimensional, Continuous Edge, and Line Models . . . . . . . . . . . . . . . . . . 533
Figure 166: A Noisy Edge Superimposed on an Ideal Edge . . . . . . . . . . . . . . . . . . 533
Figure 167: Edge and Line Derivatives . . . . . . . . . . . . . . . . . . 534
Figure 168: Adjust Brightness Function . . . . . . . . . . . . . . . . . . 540
Figure 169: Range Lines vs. Lines of Constant Range . . . . . . . . . . . . . . . . . . 541
Figure 170: Example of a Feature Space Image . . . . . . . . . . . . . . . . . . 554
Figure 171: Process for Defining a Feature Space Object . . . . . . . . . . . . . . . . . . 556
Figure 172: ISODATA Arbitrary Clusters . . . . . . . . . . . . . . . . . . 559
Figure 173: ISODATA First Pass . . . . . . . . . . . . . . . . . . 560
Figure 174: ISODATA Second Pass . . . . . . . . . . . . . . . . . . 560
Figure 175: RGB Clustering . . . . . . . . . . . . . . . . . . 563
Figure 176: Ellipse Evaluation of Signatures . . . . . . . . . . . . . . . . . . 568
Figure 177: Classification Flow Diagram . . . . . . . . . . . . . . . . . . 575
Figure 178: Parallelepiped Classification With Two Standard Deviations as Limits . . . . . . . . . . . . . . . . . . 576
Figure 179: Parallelepiped Corners Compared to the Signature Ellipse . . . . . . . . . . . . . . . . . . 578
Figure 180: Feature Space Classification . . . . . . . . . . . . . . . . . . 578
Figure 181: Minimum Spectral Distance . . . . . . . . . . . . . . . . . . 580
Figure 182: Knowledge Engineer Editing Window . . . . . . . . . . . . . . . . . . 586
Figure 183: Example of a Decision Tree Branch . . . . . . . . . . . . . . . . . . 587
Figure 184: Split Rule Decision Tree Branch . . . . . . . . . . . . . . . . . . 587
Figure 185: Knowledge Classifier Classes of Interest . . . . . . . . . . . . . . . . . . 588
Figure 186: Histogram of a Distance Image . . . . . . . . . . . . . . . . . . 590
Figure 187: Interactive Thresholding Tips . . . . . . . . . . . . . . . . . . 591

Figure 188: Exposure Stations Along a Flight Path . . . . . . . . . . . . . . . . . . 600
Figure 189: A Regular Rectangular Block of Aerial Photos . . . . . . . . . . . . . . . . . . 601
Figure 190: Pixel Coordinates and Image Coordinates . . . . . . . . . . . . . . . . . . 604
Figure 191: Image Space and Ground Space Coordinate System . . . . . . . . . . . . . . . . . . 605
Figure 192: Terrestrial Photography . . . . . . . . . . . . . . . . . . 606
Figure 193: Internal Geometry . . . . . . . . . . . . . . . . . . 607
Figure 194: Pixel Coordinate System vs. Image Space Coordinate System . . . . . . . . . . . . . . . . . . 609
Figure 195: Radial vs. Tangential Lens Distortion . . . . . . . . . . . . . . . . . . 610
Figure 196: Elements of Exterior Orientation . . . . . . . . . . . . . . . . . . 612
Figure 197: Space Forward Intersection . . . . . . . . . . . . . . . . . . 616
Figure 198: Photogrammetric Configuration . . . . . . . . . . . . . . . . . . 617
Figure 199: GCP Configuration . . . . . . . . . . . . . . . . . . 625
Figure 200: GCPs in a Block of Images . . . . . . . . . . . . . . . . . . 625
Figure 201: Point Distribution for Triangulation . . . . . . . . . . . . . . . . . . 626
Figure 202: Tie Points in a Block . . . . . . . . . . . . . . . . . . 627
Figure 203: Image Pyramid for Matching at Coarse to Full Resolution . . . . . . . . . . . . . . . . . . 632
Figure 204: Perspective Centers of SPOT Scan Lines . . . . . . . . . . . . . . . . . . 634
Figure 205: Image Coordinates in a Satellite Scene . . . . . . . . . . . . . . . . . . 635
Figure 206: Interior Orientation of a SPOT Scene . . . . . . . . . . . . . . . . . . 636
Figure 207: Inclination of a Satellite Stereo-Scene (View from North to South) . . . . . . . . . . . . . . . . . . 638
Figure 208: Velocity Vector and Orientation Angle of a Single Scene . . . . . . . . . . . . . . . . . . 639
Figure 209: Ideal Point Distribution Over a Satellite Scene for Triangulation . . . . . . . . . . . . . . . . . . 641
Figure 210: Orthorectification . . . . . . . . . . . . . . . . . . 642
Figure 211: Digital Orthophoto—Finding Gray Values . . . . . . . . . . . . . . . . . . 642
Figure 212: Regularly Spaced Terrain Data Points . . . . . . . . . . . . . . . . . . 646
Figure 213: 3 × 3 Window Calculates the Slope at Each Pixel . . . . . . . . . . . . . . . . . . 648
Figure 214: Slope Calculation Example . . . . . . . . . . . . . . . . . . 649
Figure 215: 3 × 3 Window Calculates the Aspect at Each Pixel . . . . . . . . . . . . . . . . . . 650
Figure 216: Aspect Calculation Example . . . . . . . . . . . . . . . . . . 651
Figure 217: Shaded Relief . . . . . . . . . . . . . . . . . . 652
Figure 218: Doppler Cone . . . . . . . . . . . . . . . . . . 664
Figure 219: Sparse Mapping and Output Grids . . . . . . . . . . . . . . . . . . 665
Figure 220: Magnitude and Phase Data as shown in the complex plane . . . . . . . . . . . . . . . . . . 666
Figure 221: IMAGINE StereoSAR DEM Process Flow . . . . . . . . . . . . . . . . . . 668
Figure 222: SAR Image Intersection . . . . . . . . . . . . . . . . . . 669
Figure 223: UL Corner of the Reference Image . . . . . . . . . . . . . . . . . . 674
Figure 224: UL Corner of the Match Image . . . . . . . . . . . . . . . . . . 674
Figure 225: Image Pyramid . . . . . . . . . . . . . . . . . . 675

Figure 226: Electromagnetic Wave . . . . . . . . . . . . . . . . . . 680
Figure 227: Variation of Electric Field in Time . . . . . . . . . . . . . . . . . . 681
Figure 228: Effect of Time and Distance on Energy . . . . . . . . . . . . . . . . . . 682
Figure 229: Geometric Model for an Interferometric SAR System . . . . . . . . . . . . . . . . . . 683
Figure 230: Differential Collection Geometry . . . . . . . . . . . . . . . . . . 686
Figure 231: Interferometric Phase Image without Filtering . . . . . . . . . . . . . . . . . . 691
Figure 232: Interferometric Phase Image with Filtering . . . . . . . . . . . . . . . . . . 691
Figure 233: Interferometric Phase Image without Phase Flattening . . . . . . . . . . . . . . . . . . 692
Figure 234: Electromagnetic Wave Traveling through Space . . . . . . . . . . . . . . . . . . 693
Figure 235: One-dimensional Continuous vs. Wrapped Phase Function . . . . . . . . . . . . . . . . . . 694
Figure 236: Sequence of Unwrapped Phase Images . . . . . . . . . . . . . . . . . . 695
Figure 237: Wrapped vs. Unwrapped Phase Images . . . . . . . . . . . . . . . . . . 696
Figure 238: Histogram . . . . . . . . . . . . . . . . . . 698
Figure 239: Normal Distribution . . . . . . . . . . . . . . . . . . 701
Figure 240: Measurement Vector . . . . . . . . . . . . . . . . . . 706
Figure 241: Mean Vector . . . . . . . . . . . . . . . . . . 707
Figure 242: Two Band Plot . . . . . . . . . . . . . . . . . . 708
Figure 243: Two-band Scatterplot . . . . . . . . . . . . . . . . . . 709


List of Tables
Description of File Types . . . . . . . . . . . . . . . . . . 44
Raster Data Formats . . . . . . . . . . . . . . . . . . 56
Annotation Data Formats . . . . . . . . . . . . . . . . . . 63
Vector Data Formats . . . . . . . . . . . . . . . . . . 64
AVNIR-2 Sensor Characteristics . . . . . . . . . . . . . . . . . . 69
PRISM Sensor Characteristics . . . . . . . . . . . . . . . . . . 70
ASTER Characteristics . . . . . . . . . . . . . . . . . . 70
EROS A - EROS B Characteristics . . . . . . . . . . . . . . . . . . 71
FORMOSAT-2 Characteristics . . . . . . . . . . . . . . . . . . 72
GeoEye-1 Characteristics . . . . . . . . . . . . . . . . . . 72
KOMPSAT-1 and KOMPSAT-2 Characteristics . . . . . . . . . . . . . . . . . . 75
AVHRR Data Characteristics . . . . . . . . . . . . . . . . . . 83
QuickBird Characteristics . . . . . . . . . . . . . . . . . . 85
RapidEye Characteristics . . . . . . . . . . . . . . . . . . 86
WorldView-1 Characteristics . . . . . . . . . . . . . . . . . . 91
WorldView-2 Characteristics . . . . . . . . . . . . . . . . . . 92
Commonly Used Bands for Radar Imaging . . . . . . . . . . . . . . . . . . 95
PALSAR Sensor Characteristics . . . . . . . . . . . . . . . . . . 98
COSMO-SkyMed Imaging Characteristics . . . . . . . . . . . . . . . . . . 98
JERS-1 Bands and Wavelengths . . . . . . . . . . . . . . . . . . 102
RADARSAT Beam Mode Resolution . . . . . . . . . . . . . . . . . . 103
RADARSAT-2 Characteristics . . . . . . . . . . . . . . . . . . 104
SIR-C/X-SAR Bands and Frequencies . . . . . . . . . . . . . . . . . . 106
TerraSAR-X Imaging Characteristics . . . . . . . . . . . . . . . . . . 106
Daedalus TMS Bands and Wavelengths . . . . . . . . . . . . . . . . . . 108
ARC System Chart Types . . . . . . . . . . . . . . . . . . 114
Legend Files for the ARC System Chart Types . . . . . . . . . . . . . . . . . . 115
Common Raster Data Products . . . . . . . . . . . . . . . . . . 127
File Types Created by Screendump . . . . . . . . . . . . . . . . . . 136
Common TIFF Format Elements . . . . . . . . . . . . . . . . . . 136
Conversion of DXF Entries . . . . . . . . . . . . . . . . . . 140
Conversion of IGES Entities . . . . . . . . . . . . . . . . . . 142
Colorcell Example . . . . . . . . . . . . . . . . . . 148
Commonly Used RGB Colors . . . . . . . . . . . . . . . . . . 159
Example of a Recoded Land Cover Layer . . . . . . . . . . . . . . . . . . 190
Model Maker Functions . . . . . . . . . . . . . . . . . . 198
General Editing Operations and Supporting Feature Types . . . . . . . . . . . . . . . . . . 207
Comparison of Building and Cleaning Coverages . . . . . . . . . . . . . . . . . . 208
Common Map Scales . . . . . . . . . . . . . . . . . . 217
Pixels per Inch . . . . . . . . . . . . . . . . . . 218
Acres and Hectares per Pixel . . . . . . . . . . . . . . . . . . 219
Map Projections . . . . . . . . . . . . . . . . . . 237
Projection Parameters . . . . . . . . . . . . . . . . . . 238
Earth Spheroids for use with ERDAS IMAGINE . . . . . . . . . . . . . . . . . . 243
Non-Earth Spheroids for use with ERDAS IMAGINE . . . . . . . . . . . . . . . . . . 246
NAD27 State Plane Coordinate System for the United States . . . . . . . . . . . . . . . . . . 399
NAD83 State Plane Coordinate System for the United States . . . . . . . . . . . . . . . . . . 404
UTM Zones, Central Meridians, and Longitude Ranges . . . . . . . . . . . . . . . . . . 417
Description of Modeling Functions Available for Enhancement . . . . . . . . . . . . . . . . . . 457
Theoretical Coefficient of Variation Values . . . . . . . . . . . . . . . . . . 529
Training Sample Comparison . . . . . . . . . . . . . . . . . . 553
Scanning Resolutions . . . . . . . . . . . . . . . . . . 602
SAR Parameters Required for Georeferencing . . . . . . . . . . . . . . . . . . 657
STD_LP_HD Correlator . . . . . . . . . . . . . . . . . . 675

Preface
Introduction
The purpose of the ERDAS Field Guide™ is to provide background information on why one might use particular geographic information system (GIS) and image processing functions and how the software is manipulating the data, rather than what buttons to push to actually perform those functions. This book is also aimed at a diverse audience: from those who are new to geoprocessing to those savvy users who have been in this industry for years. For the novice, the ERDAS Field Guide provides a brief history of the field, an extensive glossary of terms, and notes about applications for the different processes described. For the experienced user, the ERDAS Field Guide includes the formulas and algorithms that are used in the code, so that he or she can see exactly how each operation works.

Although the ERDAS Field Guide is primarily a reference to basic image processing and GIS concepts, it is geared toward ERDAS IMAGINE® users and the functions within ERDAS IMAGINE software, such as GIS analysis, image processing, cartography and map projections, graphics display hardware, statistics, and remote sensing. However, in some cases, processes and functions are described that may not be in the current version of the software, but planned for a future release. There may also be functions described that are not available on your system, due to the actual package that you are using.

The enthusiasm with which the first four editions of the ERDAS Field Guide were received has been extremely gratifying, both to the authors and to Leica Geosystems GIS & Mapping, LLC as a whole. First conceived as a helpful manual for users, the ERDAS Field Guide is now being used as a textbook, lab manual, and training guide throughout the world. The ERDAS Field Guide will continue to expand and improve to keep pace with the profession. Suggestions and ideas for future editions are always welcome, and should be addressed to the Technical Writing department of Engineering at Leica Geosystems, in Norcross, Georgia.

Conventions Used in this Book

The following paragraphs are used throughout the ERDAS Field Guide and other ERDAS IMAGINE documentation. These paragraphs contain strong warnings or important tips.

These paragraphs lead you to other chapters in the ERDAS Field Guide or other manuals for additional information.


These paragraphs give you additional information.

These paragraphs provide software version specific information.

NOTE: Notes give additional instructions.


Raster Data
Introduction
The ERDAS IMAGINE system incorporates the functions of both image processing and GIS. These functions include importing, viewing, altering, and analyzing raster and vector data sets. This chapter is an introduction to raster data, including:

• remote sensing
• data storage formats
• different types of resolution
• radiometric correction
• geocoded data
• raster data in GIS

See "Vector Data" on page 41 for more information on vector data.

Image Data

In general terms, an image is a digital picture or representation of an object. Remotely sensed image data are digital representations of the Earth. Image data are stored in data files, also called image files, on magnetic tapes, computer disks, or other media. The data consist only of numbers. These representations form images when they are displayed on a screen or are output to hardcopy.

Each number in an image file is a data file value. Data file values are sometimes referred to as pixels. The term pixel is abbreviated from picture element. A pixel is the smallest part of a picture (the area being scanned) with a single value. The data file value is the measured brightness value of the pixel at a specific wavelength.

Raster image data are laid out in a grid similar to the squares on a checkerboard. Each cell of the grid is represented by a pixel, also known as a grid cell. In remotely sensed image data, each pixel represents an area of the Earth at a specific location. The data file value assigned to that pixel is the record of reflected radiation or emitted heat from the Earth's surface at that location. Data file values may also represent elevation, as in digital elevation models (DEMs).


The terms pixel and data file value are not interchangeable in ERDAS IMAGINE. Pixel is used as a broad term with many meanings, one of which is data file value. One pixel in a file may consist of many data file values. When an image is displayed or printed, other types of values are represented by a pixel.

See "Image Display" on page 145 for more information on how images are displayed.

Bands

Image data may include several bands of information. Each band is a set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, and so forth) or some other user-defined information created by combining or enhancing the original bands, or creating new bands from other sources. ERDAS IMAGINE programs can handle an unlimited number of bands of image data in a single file.

Figure 1: Pixels and Bands in a Raster Image
(A raster grid with three bands, showing one pixel location shared by all bands.)
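The band/row/column layout sketched in Figure 1 can be illustrated with a few lines of Python. This is only an illustration of the organization described above, not ERDAS IMAGINE code, and the array values are invented for the example:

    import numpy as np

    # A tiny raster with 3 bands, 2 rows, and 2 columns, indexed as
    # (band, row, column). Each entry is one data file value: the
    # measured brightness of one pixel in one band.
    image = np.array([
        [[10, 12], [11, 13]],   # band 1
        [[40, 42], [41, 43]],   # band 2
        [[90, 92], [91, 93]],   # band 3
    ])

    # All three data file values for the pixel at row 0, column 1:
    print(image[:, 0, 1])       # prints [12 42 92]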

See "Enhancement" on page 455 for more information on combining or enhancing bands of data. Bands vs. Layers In ERDAS IMAGINE, bands of data are occasionally referred to as layers. Once a band is imported into a GIS, it becomes a layer of information which can be processed in various ways. Additional layers can be created and added to the image file (.img extension) in ERDAS IMAGINE, such as layers created by combining existing layers.

Read more about image files in ERDAS IMAGINE Format (.img) on page 25.


Layers vs. Viewer Layers

The Viewer permits several images to be layered, in which case each image (including a multiband image) may be a layer.

Numeral Types

The range and the type of numbers used in a raster layer determine how the layer is displayed and processed. For example, a layer of elevation data with values ranging from -51.257 to 553.401 would be treated differently from a layer using only two values to show land and water. The data file values in raster layers generally fall into these categories:

• Nominal data file values are simply categorized and named. The actual value used for each category has no inherent meaning—it is simply a class value. An example of a nominal raster layer would be a thematic layer showing tree species.

• Ordinal data are similar to nominal data, except that the file values put the classes in a rank or order. For example, a layer with classes numbered and named: 1 - Good, 2 - Moderate, and 3 - Poor is an ordinal system.

• Interval data file values have an order, but the intervals between the values are also meaningful. Interval data measure some characteristic, such as elevation or degrees Fahrenheit, which does not necessarily have an absolute zero. (The difference between two values in interval data is meaningful.)

• Ratio data measure a condition that has a natural zero, such as electromagnetic radiation (as in most remotely sensed data), rainfall, or slope.

Nominal and ordinal data lend themselves to applications in which categories, or themes, are used. Therefore, these layers are sometimes called categorical or thematic. Likewise, interval and ratio layers are more likely to measure a condition, causing the file values to represent continuous gradations across the layer. Such layers are called continuous.

Coordinate Systems

The location of a pixel in a file or on a displayed or printed image is expressed using a coordinate system. In two-dimensional coordinate systems, locations are organized in a grid of columns and rows. Each location on the grid is expressed as a pair of coordinates known as X and Y. The X coordinate specifies the column of the grid, and the Y coordinate specifies the row. Image data organized into such a grid are known as raster data. There are two basic coordinate systems used in ERDAS IMAGINE:

• file coordinates—indicate the location of a pixel within the image (data file)
• map coordinates—indicate the location of a pixel in a map

File Coordinates

File coordinates refer to the location of the pixels within the image (data) file. File coordinates for the pixel in the upper left corner of the image always begin at 0, 0. Figure 2: Typical File Coordinates
(The grid's columns, x, run left to right and its rows, y, run top to bottom; the origin 0, 0 is the upper left pixel, and the highlighted pixel lies at file coordinates x, y = 3, 1.)

Map Coordinates

Map coordinates may be expressed in one of a number of map coordinate or projection systems. The type of map coordinates used by a data file depends on the method used to create the file (remote sensing, scanning an existing map, and so forth). In ERDAS IMAGINE, a data file can be converted from one map coordinate system to another.
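The relationship between file coordinates and map coordinates can be sketched in a few lines of Python. This is an illustration only, not ERDAS IMAGINE code; it assumes the simplest common case of a north-up image with a known upper left corner and square cells, and the function name is invented:

    def file_to_map(col, row, ul_x, ul_y, cell_size):
        # Return the map coordinates of the center of the pixel at
        # file coordinates (col, row). The upper left corner of the
        # image is at map position (ul_x, ul_y); map x increases with
        # column, and map y decreases as the row number increases.
        x = ul_x + (col + 0.5) * cell_size
        y = ul_y - (row + 0.5) * cell_size
        return x, y

    # The pixel at file coordinates (3, 1) on a 30-meter grid:
    print(file_to_map(3, 1, 500000.0, 4200000.0, 30.0))
    # prints (500105.0, 4199955.0)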

For more information on map coordinates and projection systems, see "Cartography" on page 211 or "Map Projections" on page 297. See "Rectification" on page 251 for more information on changing the map coordinate system of a data file.

Remote Sensing

Remote sensing is the acquisition of data about an object or scene by a sensor that is far from the object (Colwell, 1983). Aerial photography, satellite imagery, and radar are all forms of remotely sensed data. Usually, remotely sensed data refer to data of the Earth collected from sensors on satellites or aircraft. Most of the images used as input to the ERDAS IMAGINE system are remotely sensed. However, you are not limited to remotely sensed data.


This section is a brief introduction to remote sensing. There are many books available for more detailed information, including Colwell, 1983; Swain and Davis, 1978; and Slater, 1980 (see "Bibliography" on page 777).

Electromagnetic Radiation Spectrum

The sensors on remote sensing platforms usually record electromagnetic radiation. Electromagnetic radiation (EMR) is energy transmitted through space in the form of electric and magnetic waves (Star and Estes, 1990). Remote sensors are made up of detectors that record specific wavelengths of the electromagnetic spectrum. The electromagnetic spectrum is the range of electromagnetic radiation extending from cosmic waves to radio waves (Jensen, 1996).

All types of land cover (rock types, water bodies, and so forth) absorb a portion of the electromagnetic spectrum, giving a distinguishable signature of electromagnetic radiation. Armed with the knowledge of which wavelengths are absorbed by certain features and the intensity of the reflectance, you can analyze a remotely sensed image and make fairly accurate assumptions about the scene. Figure 3 illustrates the electromagnetic spectrum (Suits, 1983; Star and Estes, 1990).

Figure 3: Electromagnetic Spectrum
(Wavelengths in micrometers, μm, one millionth of a meter. Ultraviolet lies below 0.4 μm; Visible spans 0.4 - 0.7 μm, with Blue at 0.4 - 0.5, Green at 0.5 - 0.6, and Red at 0.6 - 0.7; Near-infrared spans 0.7 - 2.0 μm; Middle-infrared spans 2.0 - 5.0 μm; Far-infrared, the thermal region, spans 8.0 - 15.0 μm. The near- and middle-infrared make up the reflected SWIR region, the far-infrared is the thermal LWIR region, and radar wavelengths lie beyond.)

SWIR and LWIR

The near-infrared and middle-infrared regions of the electromagnetic spectrum are sometimes referred to as the short wave infrared region (SWIR). This is to distinguish this area from the thermal or far infrared region, which is often referred to as the long wave infrared region (LWIR). The SWIR is characterized by reflected radiation whereas the LWIR is characterized by emitted radiation.
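The wavelength ranges shown in Figure 3 can be expressed as a small lookup, shown here as a Python sketch (illustrative only; the function name is invented):

    def spectral_region(wavelength_um):
        # Name the spectral region for a wavelength in micrometers,
        # using the ranges labeled in Figure 3.
        if 0.4 <= wavelength_um < 0.5:
            return "visible (blue)"
        if 0.5 <= wavelength_um < 0.6:
            return "visible (green)"
        if 0.6 <= wavelength_um < 0.7:
            return "visible (red)"
        if 0.7 <= wavelength_um < 2.0:
            return "near-infrared (reflected, SWIR)"
        if 2.0 <= wavelength_um <= 5.0:
            return "middle-infrared (reflected, SWIR)"
        if 8.0 <= wavelength_um <= 15.0:
            return "far-infrared (emitted, thermal LWIR)"
        return "outside the ranges labeled in Figure 3"

    print(spectral_region(0.55))   # visible (green)
    print(spectral_region(11.0))   # far-infrared (emitted, thermal LWIR)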


Absorption / Reflection Spectra

When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image data, it is necessary to understand how vegetation, soils, water, and other land covers reflect and absorb radiation. The study of the absorption and reflection of EMR waves is called spectroscopy.

Spectroscopy

Most commercial sensors, with the exception of imaging radar sensors, are passive solar imaging sensors. Passive solar imaging sensors can only receive radiation waves; they cannot transmit radiation. (Imaging radar sensors are active sensors that emit a burst of microwave radiation and receive the backscattered radiation.)

The use of passive solar imaging sensors to characterize or identify a material of interest is based on the principles of spectroscopy. Therefore, to fully utilize a visible/infrared (VIS/IR) multispectral data set and properly apply enhancement algorithms, it is necessary to understand these basic principles. Spectroscopy reveals the:

• absorption spectra—the EMR wavelengths that are absorbed by specific materials of interest
• reflection spectra—the EMR wavelengths that are reflected by specific materials of interest

Absorption Spectra

Absorption is based on the molecular bonds in the (surface) material. Which wavelengths are absorbed depends upon the chemical composition and crystalline structure of the material. For pure compounds, these absorption bands are so specific that the SWIR region is often called an infrared fingerprint.

Atmospheric Absorption

In remote sensing, the sun is the radiation source for passive sensors. However, the sun does not emit the same amount of radiation at all wavelengths. Figure 4 shows the solar irradiation curve, which is far from linear.


Figure 4: Sun Illumination Spectral Irradiance at the Earth's Surface
(Two curves of spectral irradiance, Eλ in Wm⁻²μm⁻¹, plotted against wavelength from 0.3 to 3.0 μm: the solar irradiation curve outside the atmosphere and the solar irradiation curve at sea level, whose dips show absorption by H2O, CO2, and O3. Source: Modified from Chahine et al, 1983)

Solar radiation must travel through the Earth's atmosphere before it reaches the Earth's surface. As it travels through the atmosphere, radiation is affected by four phenomena (Elachi, 1987):

• absorption—the amount of radiation absorbed by the atmosphere
• scattering—the amount of radiation scattered away from the field of view by the atmosphere
• scattering source—divergent solar irradiation scattered into the field of view
• emission source—radiation re-emitted after absorption

Figure 5: Factors Affecting Radiation
(A diagram of incoming radiation illustrating the four phenomena listed above: absorption, scattering, scattering source, and emission source. Source: Elachi, 1987)

Absorption is not a linear phenomenon—it is logarithmic with concentration (Flaschka, 1969). In addition, the concentration of atmospheric gases, especially water vapor, is variable. The other major gases of importance are carbon dioxide (CO2) and ozone (O3), which can vary considerably around urban areas. Thus, the extent of atmospheric absorbance varies with humidity, elevation, proximity to (or downwind of) urban smog, and other factors.

Scattering is modeled as Rayleigh scattering with a commonly used algorithm that accounts for the scattering of short wavelength energy by the gas molecules in the atmosphere (Pratt, 1991). Scattering is variable with both wavelength and atmospheric aerosols. Aerosols differ regionally (ocean vs. desert) and daily (for example, Los Angeles smog has different concentrations daily).

Scattering source and emission source may account for only 5% of the variance. These factors are minor, but they must be considered for accurate calculation. After interaction with the target material, the reflected radiation must travel back through the atmosphere and be subjected to these phenomena a second time to arrive at the satellite.
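The Field Guide does not give the Rayleigh formula itself; the textbook proportionality (scattered intensity varies with the inverse fourth power of wavelength) can, however, be sketched as follows. This Python fragment is an illustration, not the algorithm referenced above, and the function name is invented:

    def relative_rayleigh(wavelength_um, reference_um=0.55):
        # Rayleigh-scattered intensity at one wavelength relative to a
        # reference wavelength; intensity is proportional to 1 / wavelength**4.
        return (reference_um / wavelength_um) ** 4

    print(relative_rayleigh(0.45))  # blue scatters ~2.2x more than green
    print(relative_rayleigh(0.85))  # near-infrared scatters far less (~0.17x)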

The mathematical models that attempt to quantify the total atmospheric effect on the solar illumination are called radiative transfer equations. Some of the most commonly used are Lowtran (Kneizys et al, 1988) and Modtran (Berk et al, 1989).

See "Enhancement" on page 455 for more information on atmospheric modeling.

Reflectance Spectra

After rigorously defining the incident radiation (solar irradiation at target), it is possible to study the interaction of the radiation with the target material. When an electromagnetic wave (solar illumination in this case) strikes a target surface, three interactions are possible (Elachi, 1987):

• reflection
• transmission
• scattering

It is the reflected radiation, generally modeled as bidirectional reflectance (Clark and Roush, 1984), that is measured by the remote sensor.

Remotely sensed data are made up of reflectance values. The resulting reflectance values translate into discrete digital numbers (or values) recorded by the sensing device. These gray scale values fit within a certain bit range (such as 0 to 255, which is 8-bit data) depending on the characteristics of the sensor.

Each satellite sensor detector is designed to record a specific portion of the electromagnetic spectrum. For example, Landsat Thematic Mapper (TM) band 1 records the 0.45 to 0.52 μm portion of the spectrum and is designed for water body penetration, making it useful for coastal water mapping. It is also useful for soil/vegetation discriminations, forest type mapping, and cultural features identification (Lillesand and Kiefer, 1987).

The characteristics of each sensor provide the first level of constraints on how to approach the task of enhancing specific features, such as vegetation or urban areas. Therefore, when choosing an enhancement technique, one should pay close attention to the characteristics of the land cover types within the constraints imposed by the individual sensors.
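The translation of a reflectance value into a discrete digital number can be sketched as a simple quantization. Real sensors also apply detector gains and offsets; this Python fragment shows only the binning into an n-bit range, and the function name is invented:

    def to_digital_number(reflectance, bits=8):
        # Quantize a reflectance in [0, 1] into one of 2**bits gray levels.
        levels = 2 ** bits
        dn = int(round(reflectance * (levels - 1)))
        return max(0, min(levels - 1, dn))

    print(to_digital_number(0.37))           # 8-bit data, 0 to 255: prints 94
    print(to_digital_number(0.37, bits=11))  # an 11-bit sensor, 0 to 2047: prints 757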

The use of VIS/IR imagery for target discrimination, whether the target is mineral, vegetation, man-made, or even the atmosphere itself, is based on the reflectance spectrum of the material of interest (see Figure 6). Every material has a characteristic spectrum based on the chemical composition of the material. When sunlight (the illumination source for VIS/IR imagery) strikes a target, certain wavelengths are absorbed by the chemical bonds; the rest are reflected back to the sensor. It is, in fact, the wavelengths that are not returned to the sensor that provide information about the imaged area.

Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2, and so forth). If the atmosphere absorbs a large percentage of the radiation, it becomes difficult or impossible to use that particular wavelength(s) to study the Earth. For the present Landsat and Systeme Pour l'observation de la Terre (SPOT) sensors, only the water vapor bands are considered strong enough to exclude the use of their spectral absorption region. Absorption by other atmospheric gases was not extensive enough to eliminate the use of the spectral region for present day broad band sensors. Figure 6 shows how Landsat TM bands 5 and 7 were carefully placed to avoid these regions.

Figure 6: Reflectance Spectra
(Laboratory reflectance spectra, in percent, of kaolinite, green vegetation, and silt loam over 0.4 to 2.4 μm, with the positions of Landsat MSS bands 4 through 7, Landsat TM bands 1 through 7, and the atmospheric absorption bands marked above the curves. Source: Modified from Fraser, 1986; Crist et al, 1986; Sabins, 1987)

NOTE: This chart is for comparison purposes only. It is not meant to show actual values. The spectra are offset to better display the lines.

An inspection of the spectra reveals the theoretical basis of some of the indices in the ERDAS IMAGINE Image Interpreter. Consider the vegetation index TM4/TM3. It is readily apparent that for vegetation this value could be very large; for soils, the value could be much smaller; and for clay minerals, the value could be near zero. Conversely, when the clay ratio TM5/TM7 is considered, the opposite applies.

Hyperspectral Data

As remote sensing moves toward the use of more and narrower bands (for example, AVIRIS with 224 bands each only 10 nm wide), absorption by specific atmospheric gases must be considered. These multiband sensors are called hyperspectral sensors. As more and more of the incident radiation is absorbed by the atmosphere, the digital number (DN) values of that band get lower, eventually becoming useless—unless one is studying the atmosphere. Someone wanting to measure the atmospheric content of a specific gas could utilize the bands of specific absorption.

NOTE: Hyperspectral bands are generally measured in nanometers (nm).

Figure 6 shows the spectral bandwidths of the channels for the Landsat sensors plotted above the absorption spectra of some common natural materials (kaolin clay, silty loam soil, and green vegetation). Note that while the spectra are continuous, the Landsat channels are segmented or discontinuous. We can still use the spectra in interpreting the Landsat data. For example, a Normalized Difference Vegetation Index (NDVI) ratio for the three would be very different and, therefore, could be used to discriminate between the three materials. Similarly, the ratio TM5/TM7 is commonly used to measure the concentration of clay minerals. Evaluation of the spectra shows why.

Figure 7 shows detail of the absorption spectra of three clay minerals. Because of the wide bandpass (2080 to 2350 nm) of TM band 7, it is not possible to discern between these three minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor has a large number of approximately 10 nm wide bands. With this data set, it would be possible to discriminate between these three clay minerals, again using band ratios. For example, a color composite image prepared from RGB = 2160nm/2190nm, 2220nm/2250nm, 2350nm/2488nm could produce a color-coded clay mineral image-map. With the proper selection of band ratios, mineral identification becomes possible.
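The ratio indices named above reduce to simple per-pixel arithmetic. A minimal NumPy sketch follows, with invented 2 x 2 values standing in for TM bands; the NDVI line shows the standard normalized difference form, discussed further in "Enhancement":

    import numpy as np

    # Invented 2 x 2 reflectance values standing in for four TM bands.
    tm3 = np.array([[0.08, 0.10], [0.30, 0.25]])  # red
    tm4 = np.array([[0.40, 0.45], [0.32, 0.26]])  # near-infrared
    tm5 = np.array([[0.30, 0.28], [0.35, 0.33]])  # mid-infrared
    tm7 = np.array([[0.15, 0.14], [0.34, 0.32]])  # mid-infrared, 2080-2350 nm

    vegetation_ratio = tm4 / tm3        # large over vegetation, near zero over clay
    clay_ratio = tm5 / tm7              # high where band 7 absorption is strong
    ndvi = (tm4 - tm3) / (tm4 + tm3)    # NDVI = (TM4 - TM3) / (TM4 + TM3)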

The commercial airborne multispectral scanners are used in a similar fashion. The Airborne Imaging Spectrometer from the Geophysical & Environmental Research Corp. (GER) has 79 bands in the UV, visible, SWIR, and thermal-infrared regions. The Airborne Multispectral Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the visible, SWIR, and thermal-infrared regions. To properly utilize these hyperspectral sensors, you must understand the phenomenon involved and have some idea of the target materials being sought.

Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region (reflectance, in percent, of kaolinite, montmorillonite, and illite from 2000 to 2600 nm, with the Landsat TM band 7 bandpass of 2080 to 2350 nm marked)

Source: Modified from Sabins, 1987

NOTE: Spectra are offset vertically for clarity.

See "Enhancement" on page 455 for more information on the NDVI ratio.

The characteristics of Landsat, AVIRIS, and other data types are discussed in "Raster and Vector Data Sources" on page 55.

Imaging Radar Data

Radar remote sensors can be broken into two broad categories: passive and active. The passive sensors record the very low intensity, microwave radiation naturally emitted by the Earth. Because of the very low intensity, these images have low spatial resolution (that is, large pixel size).

It is the active sensors, termed imaging radar, that are introducing a new generation of satellite imagery to remote sensing. To produce an image, these satellites emit a directed beam of microwave energy at the target, and then collect the backscattered (reflected) radiation from the target scene. The microwave energy emitted by an active radar sensor is coherent and defined by a narrow bandwidth. Because they must emit a powerful burst of energy, these satellites require large solar collectors and storage batteries. For this reason, they cannot operate continuously; some satellites are limited to 10 minutes of operation per hour.

The following table summarizes the bandwidths used in remote sensing.

Band Designation*      Wavelength (λ), cm    Frequency (υ), GHz (10^9 cycles · sec^-1)
Ka (0.86 cm)           0.8 to 1.1            40.0 to 26.5
K                      1.1 to 1.7            26.5 to 18.0
Ku                     1.7 to 2.4            18.0 to 12.5
X (3.0 cm, 3.2 cm)     2.4 to 3.8            12.5 to 8.0
C                      3.8 to 7.5            8.0 to 4.0
S                      7.5 to 15.0           4.0 to 2.0
L (23.5 cm, 25.0 cm)   15.0 to 30.0          2.0 to 1.0
P                      30.0 to 100.0         1.0 to 0.3

*Wavelengths commonly used in imaging radars are shown in parentheses.

A key element of a radar sensor is the antenna. For a given position in space, the resolution of the resultant image is a function of the antenna size. This is termed a real-aperture radar (RAR). At some point, it becomes impossible to make a large enough antenna to create the desired spatial resolution. To get around this problem, processing techniques have been developed which combine the signals received by the sensor as it travels over the target. Thus, the antenna is perceived to be as long as the sensor path during backscatter reception. This is termed a synthetic aperture, and the sensor a synthetic aperture radar (SAR).

The received signal is termed a phase history or echo hologram. It contains a time history of the radar signal over all the targets in the scene, and is itself a low resolution RAR image. In order to produce a high resolution image, this phase history is processed through a hardware/software system called an SAR processor. The SAR processor software requires operator input parameters, such as information about the sensor flight path and the radar sensor's characteristics, to process the raw signal data into an image. These input parameters depend on the desired result or intended application of the output imagery.

One of the most valuable advantages of imaging radar is that it creates images from its own energy source and therefore is not dependent on sunlight. Thus, one can record uniform imagery any time of the day or night. In addition, the microwave frequencies at which imaging radars operate are largely unaffected by the atmosphere, which allows image collection through cloud cover or rain storms. However, the backscattered signal can be affected. Radar images collected during heavy rainfall are often seriously attenuated, which decreases the signal-to-noise ratio (SNR). In addition, the atmosphere does cause perturbations in the signal phase, which decreases the resolution of output products, such as the SAR image or generated DEMs.

Resolution

Resolution is a broad term commonly used to describe:

• the number of pixels you can display on a display device, or
• the area on the ground that a pixel represents in an image file.

These broad definitions are inadequate when describing remotely sensed data. Four distinct types of resolution must be considered:

• spectral—the specific wavelength intervals that a sensor can record
• spatial—the area on the ground represented by each pixel
• radiometric—the number of possible data file values in each band (indicated by the number of bits into which the recorded energy is divided)

• temporal—how often a sensor obtains imagery of a particular area

These four domains contain separate information that can be extracted from the raw data.

Spectral Resolution

Spectral resolution refers to the specific wavelength intervals in the electromagnetic spectrum that a sensor can record (Simonett et al, 1983). For example, band 1 of the Landsat TM sensor records energy between 0.45 and 0.52 μm in the visible part of the spectrum.

Wide intervals in the electromagnetic spectrum are referred to as coarse spectral resolution, and narrow intervals are referred to as fine spectral resolution. For example, the SPOT panchromatic sensor is considered to have coarse spectral resolution because it records EMR between 0.51 and 0.73 μm. On the other hand, band 3 of the Landsat TM sensor has fine spectral resolution because it records EMR between 0.63 and 0.69 μm (Jensen, 1996).

NOTE: The spectral resolution does not indicate how many levels the signal is broken into.

Spatial Resolution

Spatial resolution is a measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel (Simonett et al, 1983). The finer the resolution, the lower the number. For instance, a spatial resolution of 79 meters is coarser than a spatial resolution of 10 meters.

Scale

The terms large-scale imagery and small-scale imagery often refer to spatial resolution. Scale is the ratio of distance on a map as related to the true distance on the ground (Star and Estes, 1990). Large-scale in remote sensing refers to imagery in which each pixel represents a small area on the ground, such as SPOT data, with a spatial resolution of 10 m or 20 m. Small scale refers to imagery in which each pixel represents a large area on the ground, such as Advanced Very High Resolution Radiometer (AVHRR) data, with a spatial resolution of 1.1 km.

This terminology is derived from the fraction used to represent the scale of the map, such as 1:50,000. Small-scale imagery is represented by a small fraction (one over a very large number). Large-scale imagery is represented by a larger fraction (one over a smaller number). Generally, anything smaller than 1:250,000 is considered small-scale imagery.

NOTE: Scale and spatial resolution are not always the same thing. An image always has the same spatial resolution, but it can be presented at different scales (Simonett et al, 1983).

Instantaneous Field of View

Spatial resolution is also described as the instantaneous field of view (IFOV) of the sensor, although the IFOV is not always the same as the area represented by each pixel. The IFOV is a measure of the area viewed by a single detector in a given instant in time (Star and Estes, 1990). For example, Landsat MSS data have an IFOV of 79 × 79 meters, but there is an overlap of 11.5 meters in each pass of the scanner, so the actual area represented by each pixel is 56.5 × 79 meters (usually rounded to 57 × 79 meters).

Even though the IFOV is not the same as the spatial resolution, it is important to know the number of pixels into which the total field of view for the image is broken. Objects smaller than the stated pixel size may still be detectable in the image if they contrast with the background, such as roads, drainage patterns, and so forth.

On the other hand, objects the same size as the stated pixel size (or larger) may not be detectable if there are brighter or more dominant objects nearby. In Figure 8, a house sits in the middle of four pixels. If the house has a reflectance similar to its surroundings, the data file values for each of these pixels reflect the area around the house, not the house itself, since the house does not dominate any one of the four pixels. However, if the house has a significantly different reflectance than its surroundings, it may still be detectable.

Figure 8: IFOV (a house centered among four 20 m × 20 m pixels)

Radiometric Resolution

Radiometric resolution refers to the dynamic range, or number of possible data file values, in each band. This is referred to by the number of bits into which the recorded energy is divided.

For instance, in 8-bit data, the data file values range from 0 to 255 for each pixel, but in 7-bit data, the data file values for each pixel range from 0 to 127.

Figure 9 illustrates 8-bit and 7-bit data. The sensor measures the EMR in its range. The total intensity of the energy from 0 to the maximum amount the sensor measures is broken down into 256 brightness values for 8-bit data, and 128 brightness values for 7-bit data.

Figure 9: Brightness Values (the same 0 to maximum-intensity range divided into 256 levels, 0 to 255, for 8-bit data and 128 levels, 0 to 127, for 7-bit data)
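The relationship between bit depth and the number of brightness values is a power of two. The short sketch below is a reading aid for this relationship, not ERDAS IMAGINE code; the rescaling helper at the end is a hypothetical illustration of mapping one radiometric range onto another.

```python
def brightness_levels(bits):
    """Number of possible data file values for a given bit depth."""
    return 2 ** bits

assert brightness_levels(8) == 256   # 8-bit data: values 0 to 255
assert brightness_levels(7) == 128   # 7-bit data: values 0 to 127

def rescale_8bit_to_7bit(value):
    """Map an 8-bit value (0-255) onto the 7-bit range (0-127)."""
    return round(value * 127 / 255)
```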

Temporal Resolution

Temporal resolution refers to how often a sensor obtains imagery of a particular area. For example, the Landsat satellite can view the same area of the globe once every 16 days. SPOT, on the other hand, can revisit the same area every three days.

NOTE: Temporal resolution is an important factor to consider in change detection studies.

Figure 10 illustrates all four types of resolution.

Figure 10: Landsat TM—Band 2 (Four Types of Resolution): spatial resolution, 1 pixel = 79 m × 79 m; radiometric resolution, 8-bit (0 to 255); spectral resolution, 0.52 to 0.60 μm; temporal resolution, the same area viewed every 16 days.

Source: EOSAT

Data Correction

There are several types of errors that can be manifested in remotely sensed data. Among these are line dropout and striping. These errors can be corrected to an extent in GIS by radiometric and geometric correction functions.

NOTE: Radiometric errors are usually already corrected in data from EOSAT or SPOT.

See "Enhancement" on page 455 for more information on radiometric and geometric correction.

Line Dropout

Line dropout occurs when a detector either completely fails to function or becomes temporarily saturated during a scan (like the effect of a camera flash on a human retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector(s) recovers, if it recovers. Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, based on the lines above and below it.

You can correct line dropout using the 5 × 5 Median Filter from the Radar Speckle Suppression function. The Convolution and Focal Analysis functions in the ERDAS IMAGINE Image Interpreter also correct for line dropout.

Striping

Striping or banding occurs if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.

Use ERDAS IMAGINE Image Interpreter or ERDAS IMAGINE Spatial Modeler to implement algorithms that eliminate striping. The ERDAS IMAGINE Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data.

Data Storage

Image data can be stored on a variety of media—tapes, CD-ROMs, or DVD-ROMs, for example—but how the data are stored (for example, the structure) is more important than what they are stored on.

All computer data are in binary format. The basic unit of binary data is a bit. A bit can have two possible values—0 and 1, or "off" and "on" respectively. A set of bits, however, can have many more values, depending on the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. A byte is 8 bits of data.

Generally, file size and disk space are referred to by number of bytes. For example, a PC may have 256 megabytes of RAM (random access memory), or a file may need 55,698 bytes of disk space. 1,024 bytes = 1 kilobyte. A megabyte (Mb) is about one million bytes. A gigabyte (Gb) is about one billion bytes.

Storage Formats

Image data can be arranged in several ways on a tape or other media. The most common storage formats are:

• BIL (band interleaved by line)
• BSQ (band sequential)
• BIP (band interleaved by pixel)

For a single band of data, all formats (BIL, BIP, and BSQ) are identical, as long as the data are not blocked.

Blocked data are discussed under Storage Media on page 22.

BIL

In BIL (band interleaved by line) format, each record in the file contains a scan line (row) of data for one band (Slater, 1980). All bands of data for a given line are stored consecutively within the file, as shown in Figure 11.

Figure 11: Band Interleaved by Line (BIL) (a header, followed by Line 1 Band 1 through Line 1 Band x, Line 2 Band 1 through Line 2 Band x, and so on through Line n Band x, followed by a trailer)

NOTE: Although a header and trailer file are shown in this diagram, not all BIL data contain header and trailer files.

BSQ

In BSQ (band sequential) format, each entire band is stored consecutively in the same file (Slater, 1980). This format is advantageous, in that:

• one band can be read and viewed easily, and
• multiple bands can be easily loaded in any order.

Figure 12: Band Sequential (BSQ) (header file(s), followed by an image file containing Band 1, Line 1 through Line n, an end-of-file marker, an image file for Band 2, and so on through Band x, followed by trailer file(s))

Landsat TM data are stored in a type of BSQ format known as fast format. Fast format data have the following characteristics:

• Files are not split between tapes. If a band starts on the first tape, it ends on the first tape.
• An end-of-file (EOF) marker follows each band.
• An end-of-volume marker marks the end of each volume (tape). An end-of-volume marker consists of three end-of-file markers.
• There is one header file per tape.
• There are no header records preceding the image data.
• Regular products (not geocoded) are normally unblocked. Geocoded products are normally blocked (EOSAT).

ERDAS IMAGINE imports all of the header and image file information.

See Geocoded Data on page 31 for more information on geocoded data.

BIP

In BIP (band interleaved by pixel) format, the values for each band are ordered within a given pixel, and the pixels are arranged sequentially on the tape (Slater, 1980). The sequence for BIP format is:

Pixel 1, Band 1
Pixel 1, Band 2
Pixel 1, Band 3
.
.
.
Pixel 2, Band 1
Pixel 2, Band 2
Pixel 2, Band 3
.
.
.
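The three interleaving schemes store the same values in different orders. The following sketch, an assumption-laden illustration rather than an importer, uses NumPy to rearrange a hypothetical unblocked 3-band image into BIL, BSQ, and BIP order; real files may add the headers, trailers, and blocking described in this section.

```python
import numpy as np

# Hypothetical image: b bands, n lines (rows), m columns, held as (band, line, column).
b, n, m = 3, 4, 5
image = np.arange(b * n * m, dtype=np.uint8).reshape(b, n, m)

# BSQ: each entire band stored consecutively -> (band, line, column) order.
bsq = image.reshape(-1)

# BIL: each record is one line of one band; all bands of a line are
# stored consecutively -> (line, band, column) order.
bil = image.transpose(1, 0, 2).reshape(-1)

# BIP: all band values for a pixel stored together -> (line, column, band) order.
bip = image.transpose(1, 2, 0).reshape(-1)

# Reading BIL back: reshape to (line, band, column), reorder to (band, line, column).
restored = bil.reshape(n, b, m).transpose(1, 0, 2)
assert np.array_equal(restored, image)
```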

Storage Media

Today, most raster data are available on a variety of storage media to meet the needs of users, depending on the system hardware and devices available. When ordering data, it is sometimes possible to select the type of media preferred. The most common forms of storage media are discussed in the following section:

• 9-track tape
• 4 mm tape
• 8 mm tape
• 1/4” cartridge tape
• CD-ROM/optical disk
• DVD-ROM

Other types of storage media are:

• floppy disk (3.5” or 5.25”)
• film, photograph, or paper
• videotape

Tape

The data on a tape can be divided into logical records and physical records. A record is the basic storage unit on a tape.

• A logical record is a series of bytes that form a unit. For example, all the data for one line of an image may form a logical record.
• A physical record is a consecutive series of bytes on a magnetic tape, followed by a gap, or blank space, on the tape.

Blocked Data

For reasons of efficiency, data can be blocked to fit more on a tape. Blocked data are sequenced so that there are more logical records in each physical record. The number of logical records in each physical record is the blocking factor. For instance, a record may contain 28,000 bytes but only 4000 columns, due to a blocking factor of 7.

Tape Contents

Tapes are available in a variety of sizes and storage capacities. To obtain information about the data on a particular tape, read the tape label or box, or read the header file. Often, there is limited information on the outside of the tape. Therefore, it may be necessary to read the header files on each tape for specific information, such as:

• number of tapes that hold the data set
• number of columns (in pixels)
• number of rows (in pixels)
• data storage format—BIL, BSQ, BIP
• pixel depth—4-bit, 8-bit, 10-bit, 12-bit, 16-bit
• number of bands
• blocking factor
• number of header files and header records

4 mm Tapes

The 4 mm tape is a relative newcomer in the world of GIS. This tape is a mere 2” × .75” in size, but it can hold up to 2 Gb of data. This petite cassette offers an obvious shipping and storage advantage because of its size.

8 mm Tapes

The 8 mm tape offers the advantage of storing vast amounts of data. The 8 mm tape is a 2.5” × 4” cassette, which makes it easy to ship and handle. Tapes are available in 5 and 10 Gb storage capacities (although some tape drives cannot handle the 10 Gb size).

1/4” Cartridge Tapes

This tape format falls between the 8 mm and 9-track in physical size and storage capacity. The tape is approximately 4” × 6” in size and stores up to 150 Mb of data.

9-Track Tapes

A 9-track tape is an older format that was the standard for two decades. It is a large circular tape approximately 10” in diameter. It requires a 9-track tape drive as a peripheral device for retrieving data. The size and storage capability make 9-track less convenient than 8 mm or 1/4” tapes. Depending on the length of the tape, 9-tracks can store between 120-150 Mb of data. However, 9-track tapes are still widely used.

A single 9-track tape may be referred to as a volume. The complete set of tapes that contains one image is referred to as a volume set.

The storage format of a 9-track tape in binary format is described by the number of bits per inch, bpi, on the tape. The number of bits per inch on a tape is also referred to as the tape density. The tapes most commonly used have either 1600 or 6250 bpi.

CD-ROM

Data such as ADRG and Digital Line Graphs (DLG) are most often available on CD-ROM, although many types of data can be requested in CD-ROM format. A CD-ROM is an optical read-only storage device which can be read with a CD player. CD-ROMs offer the advantage of storing large amounts of data in a small, compact device. Up to 644 Mb can be stored on a CD-ROM. Also, since this device is read-only, it protects the data from accidentally being overwritten, erased, or changed from its original integrity. This is the most stable of the current media storage types, and data stored on CD-ROM are expected to last for decades without degradation.

DVD-ROM

DVD-ROM is an optical disk storage device which is read by a DVD drive in a computer. DVDs are available in single-sided or double-sided format, and each side can have one or two layers. A single-sided, single-layer disk has 4.7 Gb storage capacity. Single-sided, double-layer DVDs can store about 8.5 Gb. Double-sided, double-layer DVDs can store about 15.9 Gb. Development of next-generation DVDs continues.

Calculating Disk Space

To calculate the amount of disk space a raster file requires on an ERDAS IMAGINE system, use the following formula:

[(x × y × b) × n] × 1.4 = output file size

Where:

x = columns
y = rows
b = number of bytes per pixel
n = number of bands

The factor 1.4 adds 30% to the file size for pyramid layers and 10% for miscellaneous adjustments.

NOTE: This output file size is approximate.

For example, to load a 3 band, 16-bit file with 500 rows and 500 columns, about 2,100,000 bytes of disk space is needed:

[((500 × 500) × 2) × 3] × 1.4 = 2,100,000, or 2.1 Mb

See Pyramid Layers on page 162 for more information.

Bytes Per Pixel

The number of bytes per pixel is listed below:

4-bit data: .5
8-bit data: 1.0
16-bit data: 2.0

NOTE: On the PC, disk space is shown in bytes. On the workstation, disk space is shown as kilobytes (1,024 bytes).
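The disk space formula translates directly into code. The small sketch below simply restates the calculation above; the function name is hypothetical, and the bytes-per-pixel table mirrors the list in this section.

```python
BYTES_PER_PIXEL = {4: 0.5, 8: 1.0, 16: 2.0}  # pixel depth in bits -> bytes per pixel

def output_file_size(rows, columns, bits_per_pixel, bands):
    """Approximate disk usage: raw size plus 30% for pyramid layers
    and 10% for miscellaneous adjustments (the 1.4 factor)."""
    b = BYTES_PER_PIXEL[bits_per_pixel]
    return ((rows * columns * b) * bands) * 1.4

# The example from the text: a 3-band, 16-bit file, 500 rows x 500 columns.
print(output_file_size(500, 500, 16, 3))  # 2100000.0 bytes, about 2.1 Mb
```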

ERDAS IMAGINE Format (.img)

In ERDAS IMAGINE, file name extensions identify the file type. When data are imported into ERDAS IMAGINE, they are converted to the ERDAS IMAGINE file format and stored in image files, along with related information such as histograms, lookup tables, and so forth. ERDAS IMAGINE image files (.img) can contain two types of raster layers:

• thematic
• continuous

An image file can store a combination of thematic and continuous layers, or just one type.

Figure 13: Image Files Store Raster Layers (an image file (.img) contains raster layers, which may be thematic raster layers, continuous raster layers, or both)

ERDAS Version 7.5 Users

For Version 7.5 users, when importing a GIS file from Version 7.5, it becomes an image file with one thematic raster layer. When importing a LAN file, each band becomes a continuous raster layer within an image file.

Thematic Raster Layer

Thematic data are raster layers that contain qualitative, categorical information about an area. A thematic layer is contained within an image file. Thematic layers lend themselves to applications in which categories or themes are used. Thematic raster layers are used to represent data measured on a nominal or ordinal scale, such as:

• soils
• land use
• land cover
• roads
• hydrology

NOTE: Thematic raster layers are displayed as pseudo color layers.

Figure 14: Example of a Thematic Raster Layer (a soils layer)

See "Image Display" on page 145 for information on displaying thematic raster layers.

Continuous Raster Layer

Continuous data are raster layers that contain quantitative (measuring a characteristic on an interval or ratio scale) and related, continuous values. Continuous raster layers can be multiband (for example, Landsat TM data) or single band (for example, SPOT panchromatic data). The following types of data are examples of continuous raster layers:

• Landsat
• SPOT
• digitized (scanned) aerial photograph
• DEM
• slope
• temperature

NOTE: Continuous raster layers can be displayed as either a gray scale raster layer or a true color raster layer.

Figure 15: Examples of Continuous Raster Layers (a Landsat TM image and a DEM)

Tiled Data

Data in the .img format are tiled data. Tiled data are stored in tiles that can be set to any size. The default tile size for image files is 512 × 512 pixels.

Image File Contents

The image files contain the following additional information about the data:

• the data file values
• statistics
• lookup tables
• map coordinates
• map projection

This additional information can be viewed using the Image Information function located on the Viewer's tool bar.

Statistics

In ERDAS IMAGINE, the file statistics are generated from the data file values in the layer and incorporated into the image file. This statistical information is used to create many program defaults, and helps you make processing decisions.

Pyramid Layers

Sometimes a large image takes longer than normal to display in the Viewer. The pyramid layer option enables you to display large images faster. Pyramid layers are image layers which are successively reduced by the power of 2 and resampled. The Pyramid Layer option is available in the Image Information function located on the Viewer's tool bar and from the Import function.

See "Image Display" on page 145 for more information on pyramid layers.

See the On-Line Help for detailed information on ERDAS IMAGINE file formats.

Image File Organization

Data are easy to locate if the data files are well organized. Well organized files also make data more accessible to anyone who uses the system. Using consistent naming conventions and the ERDAS IMAGINE Image Catalog helps keep image files well organized and accessible.

Consistent Naming Convention

Many processes create an output file, and every time a file is created, it is necessary to assign a file name. The name that is used can either cause confusion about the process that has taken place, or it can clarify and give direction. For example, if the name of the output file is image.img, it is difficult to determine the contents of the file. On the other hand, if a standard nomenclature is developed in which the file name refers to a process or to the contents of the file, it is possible to determine the progress of a project and the contents of a file by examining the directory.

Develop a naming convention that is based on the contents of the file. This helps everyone involved know what the file contains. For example, in a project to create a map composition for Lake Lanier, a directory for the files may look similar to the one below:

lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img

From this listing, one can make some educated guesses about the contents of each file based on the naming conventions used. For example, lanierTM.img is probably a Landsat TM scene of Lake Lanier. The file lanierUTM.img was probably created when lanierTM.img was rectified to a UTM map projection. The file lanier.map is probably a map composition that has map frames with lanierTM.img and lanierSPOT.img data in them.

Keeping Track of Image Files

Using a database to store information about images enables you to track image files (.img) without having to know the name or location of the file. The database can be queried for specific parameters (for example, size, type, map projection), and the database returns a list of image files that match the search criteria. This file information helps to quickly determine which image(s) to use, where each is located, and its ancillary data. An image database is especially helpful when there are many image files and even many on-going projects. For example, you could use the database to search for all of the image files of Georgia that have a UTM map projection.

Use the ERDAS IMAGINE Image Catalog to track and store information for image files (.img) that are imported and created in ERDAS IMAGINE.

NOTE: All information in the Image Catalog database, except archive information, is extracted from the image file header. Therefore, if this information is modified in the Image Information utility, it is necessary to recatalog the image in order to update the information in the Image Catalog database.

ERDAS IMAGINE Image Catalog

The ERDAS IMAGINE Image Catalog database is designed to serve as a library and information management system for image files (.img) that are imported and created in ERDAS IMAGINE. The information for the image files is displayed in the Image Catalog CellArray™. This CellArray enables you to view all of the ancillary data for the image files in the database. When records are queried based on specific criteria, the image files that match the criteria are highlighted in the CellArray. It is also possible to graphically view the coverage of the selected image files on a map in a canvas window.

In addition, the ERDAS IMAGINE Image Catalog database enables you to archive image files to external devices. When it is necessary to store some data on a tape, the archived image files are copies of the files on disk—nothing is removed from the disk. Once a file is archived, it can be removed from the disk, if you like. The Image Catalog CellArray shows which tape the image file is stored on, and the file can be easily retrieved from the tape device to a designated disk directory.

Geocoded Data

Geocoding, also known as georeferencing, is the geographical registration or coding of the pixels in an image. Geocoded data are images that have been rectified to a particular map projection and pixel size.

Raw, remotely-sensed image data are gathered by a sensor on a platform, such as an aircraft or satellite. In this raw form, the image data are not referenced to a map projection. Rectification is the process of projecting the data onto a plane and making them conform to a map projection system.

It is possible to geocode raw image data with the ERDAS IMAGINE rectification tools. Geocoded data are also available from Space Imaging EOSAT and SPOT.

See "Map Projections" on page 297 for detailed information on the different projections available. See "Rectification" on page 251 for information on geocoding raw imagery with ERDAS IMAGINE.

Using Image Data in GIS

ERDAS IMAGINE provides many tools designed to extract the necessary information from the images in a database. The following chapters in this book describe many of these processes. This section briefly describes some basic image file techniques that may be useful for any application.

Subsetting and Mosaicking

Within ERDAS IMAGINE, there are options available to make additional image files from those acquired from EOSAT, SPOT, and so forth. These options involve combining files, mosaicking, and subsetting.

It may be useful to combine data from two different dates into one file. This is called multitemporal imagery, and it is particularly useful for change detection studies. For example, a user may want to combine Landsat TM from one date with TM data from a later date, then perform a classification based on the combined data. You can also incorporate elevation data into an existing image file as another band, or create new bands through various enhancement techniques.

ERDAS IMAGINE programs allow image data with an unlimited number of bands. Image files can be created with more than seven bands, but the most common satellite data types—Landsat and SPOT—have seven or fewer bands.

To combine two or more image files, each file must be georeferenced to the same coordinate system, or to each other.

See "Rectification" on page 251 for information on georeferencing images.

Subset

Subsetting refers to breaking out a portion of a large file into one or more smaller files. Often, image files contain areas much larger than a particular study area. In these cases, it is helpful to reduce the size of the image file to include only the area of interest (AOI). This not only eliminates the extraneous data in the file, but also speeds up processing due to the smaller amount of data to process, which can be important when dealing with multiband data.

The ERDAS IMAGINE Import option often lets you define a subset area of an image to preview or import. You can also use the Subset option from ERDAS IMAGINE Image Interpreter to define a subset area.

Mosaic

On the other hand, the study area in which you are interested may span several image files. In this case, it is necessary to combine the images to create one large file. This is called mosaicking. To create a mosaicked image, use the Mosaic Images option from the Data Preparation menu. To mosaic, each image file must be georeferenced to the same coordinate system.
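Conceptually, subsetting is a crop of an image array, and mosaicking pastes georeferenced arrays into a common grid. The sketch below illustrates only the array bookkeeping for a subset; the scene dimensions and offsets are hypothetical, and the ERDAS IMAGINE Subset and Mosaic Images options additionally handle projections, resampling, and metadata that this toy ignores.

```python
import numpy as np

def subset(image, rows, cols):
    """Break out a rectangular area of interest (AOI) from a
    (bands, rows, columns) image array."""
    r0, r1 = rows
    c0, c1 = cols
    return image[:, r0:r1, c0:c1].copy()

# Hypothetical 7-band scene; keep only a 1000 x 1000 pixel study area.
scene = np.zeros((7, 5000, 5000), dtype=np.uint8)
aoi = subset(scene, (2000, 3000), (1500, 2500))
print(aoi.shape)  # (7, 1000, 1000) -- less data to store and process
```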

Enhancement

Image enhancement is the process of making an image more interpretable for a particular application (Faust, 1989). Enhancement can make important features of raw, remotely sensed data and aerial photographs more interpretable to the human eye. Enhancement techniques are often used instead of classification for extracting useful information from images.

There are many enhancement techniques available. They range in complexity from a simple contrast stretch, where the original data file values are stretched to fit the range of the display device, to principal components analysis, where the number of image file bands can be reduced and new bands created to account for the most variance in the data.

See "Enhancement" on page 455 for more information on enhancement techniques.

Multispectral Classification

Image data are often used to create thematic files through multispectral classification. This entails using spectral pattern recognition to identify groups of pixels that represent a common characteristic of the scene, such as soil type or vegetation.

See "Classification" on page 545 for a detailed explanation of classification procedures.

Editing Raster Data

ERDAS IMAGINE provides raster editing tools for editing the data values of thematic and continuous raster data. This is primarily a correction mechanism that enables you to correct bad data values which produce noise, such as spikes and holes in imagery. The raster editing functions can be applied to the entire image or to a user-selected area of interest (AOI). With raster editing, data values in thematic data can also be recoded according to class. Recoding is a function that reassigns data values to a region or to an entire class of pixels.

See "Geographic Information Systems" on page 173 for information about recoding data.

The ERDAS IMAGINE raster editing functions allow the use of focal and global spatial modeling functions for computing the values to replace noisy pixels or areas in continuous or thematic data.

Focal operations are filters that calculate the replacement value based on a window (3 × 3, 5 × 5, and so forth) and replace the pixel of interest with the replacement value. This function therefore affects one pixel at a time, and the number of surrounding pixels that influence the value is determined by the size of the moving window. These functions, specifically the Majority option, are more applicable to thematic data. Global operations calculate the replacement value for an entire area rather than affecting one pixel at a time.

See "Enhancement" on page 455 for information about reducing data noise using spatial filtering.

See the ERDAS IMAGINE On-Line Help for information about using and selecting AOIs.

The raster editing tools are available in the Viewer.

Editing Continuous (Athematic) Data

The ERDAS IMAGINE Raster Editing functionality was originally designed to edit DEMs, but it can also be used with images of other continuous data sources, such as radar, SPOT, Landsat, and digitized photographs. This discussion of raster editing focuses on DEM editing.

Editing DEMs

DEMs occasionally contain spurious pixels or bad data. These spikes, holes, and other noises caused by automatic DEM extraction can be corrected by editing the raster data values and replacing them with meaningful values.

When editing continuous raster data, you can modify or replace original pixel values with the following:

• a constant value—enter a known constant value for areas such as lakes.
• the average of the buffering pixels—replace the original pixel value with the average of the pixels in a specified buffer area around the AOI. This is used where the constant values of the AOI are not known, but the area is flat or homogeneous with little variation (for example, a lake).
• the original data value plus a constant value—add a negative constant value to the original data values to compensate for the height of trees and other vertical features in the DEM. This technique is commonly used in forested areas.
• spatial filtering—filter data values to eliminate noise such as spikes or holes in the data.
• interpolation techniques (discussed below).

Interpolation Techniques

While the previously listed raster editing techniques are perfectly suitable for some applications, the following interpolation techniques provide the best methods for raster editing:

• 2-D polynomial—surface approximation
• multisurface functions—with least squares prediction

• distance weighting

Each pixel's data value is interpolated from the reference points in the data file. These interpolation techniques are described below:

2-D Polynomial

This interpolation technique provides faster interpolation calculations than distance weighting and multisurface functions. The following equation is used:

V = a0 + a1x + a2y + a3x^2 + a4xy + a5y^2 + . . .

Where:

V = data value (elevation value for DEM)
a = polynomial coefficients
x = x coordinate
y = y coordinate

Multisurface Functions

The multisurface technique provides the most accurate results for editing DEMs that have been created through automatic extraction. The following equation is used:

V = Σ Wi Qi

Where:

V = output data value (elevation value for DEM)
Wi = coefficients which are derived by the least squares method
Qi = distance-related kernels which are actually interpretable as continuous single value surfaces

Source: Wang, Z., 1990

Distance Weighting

The weighting function determines how the output data values are interpolated from a set of reference data points. For each pixel, the values of all reference points are weighted by a value corresponding with the distance between each point and the pixel. The weighting function used in ERDAS IMAGINE is:

W = (S/D - 1)^2

Where:

S = normalization factor
D = distance between the output data point and the reference point

The value for any given pixel is calculated by taking the sum of the weighting factors for all reference points multiplied by the data values of those points, and dividing by the sum of the weighting factors:

V = ( Σ (i = 1 to n) Wi × Vi ) / ( Σ (i = 1 to n) Wi )

Where:

V = output data value (elevation value for DEM)
i = ith reference point
Wi = weighting factor of point i
Vi = data value of point i
n = number of reference points

Source: Wang, Z., 1990
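The distance weighting equations can be combined into one routine. The following sketch implements them directly for a single output pixel; the normalization factor S and the reference point layout are assumptions chosen for the example, not values prescribed by ERDAS IMAGINE, and the code assumes no reference point coincides with the output pixel.

```python
import math

def distance_weight(s, d):
    """Weighting function W = (S/D - 1)^2 for a reference point at distance D."""
    return (s / d - 1.0) ** 2

def interpolate_pixel(x, y, reference_points, s):
    """Weighted average of reference data values:
    V = sum(W_i * V_i) / sum(W_i), with W_i from distance_weight()."""
    num, den = 0.0, 0.0
    for (px, py, value) in reference_points:
        d = math.hypot(px - x, py - y)  # assumed nonzero (point not at the pixel)
        w = distance_weight(s, d)
        num += w * value
        den += w
    return num / den

# Hypothetical DEM reference points (x, y, elevation) around a hole at (10, 10).
refs = [(8, 10, 102.0), (12, 10, 98.0), (10, 8, 101.0), (10, 13, 99.0)]
print(interpolate_pixel(10, 10, refs, s=20.0))  # interpolated elevation
```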

Image Compression

Dynamic Range Run-Length Encoding (DR RLE)

The IMAGINE IMG format can use a simple lossless form of compression that can be referred to as Dynamic Range Run-Length Encoding (DR RLE). This combines two different forms of compression to give a quick and effective reduction of dataset size. This compression is most effective on thematic data, but can also be effective on continuous data with low variance or low dynamic range.

Dynamic Range refers to the span from the minimum pixel value to the maximum pixel value found in a dataset. It can be computed by taking the difference of the maximum (VMAX) and the minimum (VMIN) value, plus one: RDYNAMIC = VMAX - VMIN + 1. For example, 8 bit data with a minimum value of 1 and a maximum value of 254 would have a dynamic range of 254, while 16 bit data with a minimum value of 1023 and a maximum value of 1278 would have a dynamic range of 256.

The dynamic range of the data is often several times smaller than the range of the pixel type in which it is stored. In the second case above, the dynamic range of the data was 256, but the natural range of 16 bit data (two bytes) is 65536. If the dynamic range is less than the natural range of the data type, then a smaller data type can be used to store the data, with a resulting savings in space. In this case, a single byte can be used to store the data by computing a compressed value (VCOMPRESSED) by subtracting the minimum value from the pixel value: VCOMPRESSED = VPIXEL - VMIN. The data can then be saved with one value per byte instead of one value per two bytes. This, of course, requires the minimum value (VMIN) to be saved along with the data. To recover the data, the minimum is added to the compressed value: VPIXEL = VCOMPRESSED + VMIN.

Run-Length Encoding is a compression technique based on the observation that there are often sequential occurrences (runs) of pixel values in an image. Under these circumstances, space can often be saved by counting the number of repeats (NRUN) of a value (VRUN) that occur in succession, and then storing the count and only one occurrence of the value. Run-Length Encoded data is stored as a sequence of pairs that consist of the count and the value (NRUN, VRUN). For example, if the pixel value 0 occurred 100 times in a row, then the value (VRUN) would be 0 and the count (NRUN) would be 100. If a single byte is then used to store each of the count and the value, then 2 bytes would be used instead of 100.

By first applying Dynamic Range compression and then Run-Length Encoding, a high degree of lossless compression can be obtained for many types of data. By operating on a block of data at a time, the Dynamic Range compression can have a greater effect, because the data within a block is often more similar in range than the data across the whole image. Note that it is possible to produce a greater amount of data when applying Dynamic Range Run-Length Encoding; in this case, the data are stored as the original uncompressed block.
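A minimal model of the two steps might look like the following sketch. It applies the dynamic range shift and then run-length encodes the shifted values; it ignores the block structure, bit packing, and field layout documented below, so it illustrates the idea rather than the actual IMG implementation, and it assumes a non-empty input.

```python
def dr_rle_compress(pixels):
    """Dynamic Range shift followed by Run-Length Encoding.
    Returns the block minimum and a list of (count, value) pairs."""
    vmin = min(pixels)
    shifted = [p - vmin for p in pixels]      # VCOMPRESSED = VPIXEL - VMIN
    runs = []
    count, current = 1, shifted[0]
    for value in shifted[1:]:
        if value == current:
            count += 1
        else:
            runs.append((count, current))     # one (NRUN, VRUN) pair per run
            count, current = 1, value
    runs.append((count, current))
    return vmin, runs

def dr_rle_decompress(vmin, runs):
    """Invert the encoding: expand the runs, then add the minimum back."""
    return [value + vmin for count, value in runs for _ in range(count)]

data = [1023, 1023, 1023, 1024, 1278, 1278]
vmin, runs = dr_rle_compress(data)            # vmin=1023, runs=[(3,0),(1,1),(2,255)]
assert dr_rle_decompress(vmin, runs) == data
```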

The compressed data for each block is stored as follows. The first four fields appear only once per block, at the front of the data stream; the segment fields are repeated "numsegments" times for the block.

Name              Type            Description
Min               EMIF_T_LONG     The minimum value observed in the block of data.
Numsegments       EMIF_T_LONG     The number of runs in the block.
Dataoffset        EMIF_T_LONG     The number of bytes after the start of this data at which the segment data starts.
Numbitspervalue   EMIF_T_UCHAR    The number of bits used per data value. It will be either 1, 2, 4, 8, 16, or 32.
Countnumbytes     EMIF_T_UCHAR    The number of bytes for the count. It has the value 1, 2, 3, or 4.
Count[0]          EMIF_T_UCHAR    Present for countnumbytes = 1, 2, 3, 4
Count[1]          EMIF_T_UCHAR    Present for countnumbytes = 2, 3, 4
Count[2]          EMIF_T_UCHAR    Present for countnumbytes = 3, 4
Count[3]          EMIF_T_UCHAR    Present for countnumbytes = 4
Data[0]           EMIF_T_UCHAR    Present for numbitspervalue = 1, 2, 4, 8, 16, or 32
Data[1]           EMIF_T_UCHAR    Present for numbitspervalue = 16 or 32
Data[2]           EMIF_T_UCHAR    Present for numbitspervalue = 32
Data[3]           EMIF_T_UCHAR    Present for numbitspervalue = 32

ECW Compression

Enhanced Compressed Wavelet (ECW) format significantly reduces the size of image files with minimal deterioration in quality. Wavelet compression technology offers very high quality results at high compression rates. You can typically compress a color image to less than 5% of its original size (a 20:1 compression ratio) and compress a grayscale image to less than 10% of its original size (a 10:1 compression ratio). This means that, at 20:1 compression, 10GB of color imagery compresses down to 500MB, which is small enough to fit on to a single CD-ROM. You may actually achieve higher compression rates where your source image has a structure well suited to compression. ECW compression is more efficient when it is used to compress large image files; the minimum size image that can be compressed by the ECW method is 128 x 128 pixels.

Because the compressed imagery is composed of multi-resolution wavelet levels, you can experience fast roaming and zooming on the imagery, even on slower media such as CD-ROM.

In addition to reducing storage requirements, you can also use free imagery plug-ins for GIS and office applications to read compressed imagery with a wide range of software applications.

What is Target Compression Ratio

When compressing images, there is a tradeoff between the degree of compression achieved and the quality of the resulting image when it is subsequently decoded. The highest rates of compression can only be achieved by discarding some less important data from the image, known as lossy decompression. The target compression ratio is an abstract figure representing your choice in this tradeoff. It approximates the likely ratio of input file size to output file size given certain parameters for compression. It is important to note that the target ratio makes no guarantees about the actual output size that will be achieved, because this is dependent on the nature of the input data. Images with certain features (for example, air photos showing large regions of a similar color, like oceans or forests) are easier to compress than others (completely random images).

Specify Quality Level rather than Output File Size

The concept of ECW and JPEG2000 format compression is that you are compressing to a specified quality level rather than to a specified file size. The goal is visual similarity in quality levels between multiple files, not similar file size. Choose a level of quality that benefits your needs, and use the target compression ratio to compress images within that quality range.

ECW Compression Ratios

When exporting to ECW images, you select a target compression ratio. Higher values give more file size compression; lower values give better image quality. Recommended values are 1 to 40 for color images and 1 to 25 for greyscale. This is a target only, and the actual amount of compression will vary depending on the image qualities and the amount of spatial variation. Except when compressing very small files (less than 2MB in size), the actual compression ratio achieved is often significantly larger than the target compression ratio set by the user.
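Because the target ratio approximates the ratio of input size to output size, a rough size estimate is a single division. The snippet below is only that back-of-the-envelope estimate, not part of any ECW API; as noted above, the achieved ratio is often larger, so the real output is frequently smaller.

```python
def estimated_output_size(input_bytes, target_ratio):
    """Rough expected output size for a given target compression ratio."""
    return input_bytes / target_ratio

# The 20:1 example from the text: 10 GB of color imagery -> about 500 MB.
print(estimated_output_size(10 * 1024**3, 20) / 1024**2)  # ~512 MB
```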

Preserve Image Quality

When you specify a Target Compression Ratio, the compression engine uses this value as a measure of how much information content to preserve in the image. The compression engine uses multiple wavelet encoding techniques simultaneously and adapts the best techniques depending on the area being compressed. It is important to understand that encoding techniques are applied after image quantization and do not affect the quality. If your image has areas that are conducive to compression (for example, desert areas or bodies of water), a greater rate of compression may be achieved while still keeping the desired information content and quality, even though the compression ratio may be higher than that which was requested.

Vector Data

Introduction

ERDAS IMAGINE is designed to integrate two data types, raster and vector, into one system. While the previous chapter explored the characteristics of raster data, this chapter is focused on vector data; it describes vector data, attribute information, and symbolization.

The vector data structure in ERDAS IMAGINE is based on the ArcInfo data model (developed by ESRI, Inc.). Since the ArcInfo data model is used in ERDAS IMAGINE, you can use ArcInfo coverages directly without importing them. You do not need ArcInfo software or an ArcInfo license to use the vector capabilities in ERDAS IMAGINE.

See "Geographic Information Systems" on page 173 for information on editing vector layers and using vector data in a GIS.

Vector data consist of:

• points
• lines
• polygons

Each is illustrated in Figure 16.

Figure 16: Vector Elements (points, a line with vertices and nodes, and polygons with label points)

Points

A point is represented by a single x, y coordinate pair. Points can represent the location of a geographic feature or a point that has no area, such as a mountain peak. Label points are also used to identify polygons (see Figure 17).

Lines

A line (polyline) is a set of line segments and represents a linear geographic feature, such as a river, road, or utility line. Lines can also represent nongeographical boundaries, such as voting districts, school zones, contour lines, and so forth.

Polygons

A polygon is a closed line or closed set of lines defining a homogeneous area, such as soil type, land use, or water body. Polygons can also be used to represent nongeographical features, such as wildlife habitats, state borders, commercial districts, and so forth. Polygons also contain label points that identify the polygon. The label point links the polygon to its attributes.

Vertex

The points that define a line are vertices. A vertex is a point that defines an element, such as the endpoint of a line segment or a location in a polygon where the line segment defining the polygon changes direction. The ending points of a line are called nodes. Each line has two nodes: a from-node and a to-node. The from-node is the first vertex in a line. The to-node is the last vertex in a line. Lines join other lines only at nodes. A series of lines in which the from-node of the first line joins the to-node of the last line is a polygon.

Figure 17: Vertices (a line and a polygon, each defined by three vertices; the polygon also contains a label point)

In Figure 17, the line and the polygon are each defined by three vertices.

Coordinates

Vector data are expressed by the coordinates of vertices. The vertices that define each element are referenced with x, y, or Cartesian, coordinates. In some instances, those coordinates may be inches [as in some computer-aided design (CAD) applications], but often the coordinates are map coordinates, such as State Plane, Universal Transverse Mercator (UTM), or Lambert Conformal Conic. Vector data digitized from an ungeoreferenced image are expressed in file coordinates.
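The relationships among vertices, nodes, lines, label points, and polygons can be summarized with simple data structures. The following sketch is a conceptual model only; the ArcInfo coverage files that actually store these elements are described later in this chapter, and the sample coordinates are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

Coordinate = Tuple[float, float]  # (x, y), in file or map coordinates

@dataclass
class Line:
    """A polyline: an ordered list of vertices whose end points are nodes."""
    vertices: List[Coordinate]

    @property
    def from_node(self) -> Coordinate:
        return self.vertices[0]    # first vertex of the line

    @property
    def to_node(self) -> Coordinate:
        return self.vertices[-1]   # last vertex of the line

@dataclass
class Polygon:
    """A closed set of lines, plus a label point linking it to attributes."""
    boundary: List[Line]
    label_point: Coordinate

road = Line([(0.0, 0.0), (1.0, 2.0), (3.0, 2.5)])  # three vertices, two nodes
parcel = Polygon(boundary=[road], label_point=(1.5, 1.5))  # toy example
```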

Tics

Vector layers are referenced to coordinates or a map projection system using tic files that contain geographic control points for the layer. Every vector layer must have a tic file. Tics are not topologically linked to other features in the layer and do not have descriptive data associated with them.

Vector Layers

Although it is possible to have points, lines, and polygons in a single layer, a layer typically consists of one type of feature. It is possible to have one vector layer for streams (lines) and another layer for parcels (polygons). A vector layer is defined as a set of features where each feature has a location (defined by coordinates and topological pointers to other features) and, possibly, attributes (defined as a set of named items or variables) (ESRI, 1989). Vector layers contain both the vector features (points, lines, polygons) and the attribute information.

Usually, vector layers are also divided by the type of information they represent. This enables the user to isolate data into themes, similar to the themes used in raster layers. Political districts and soil types would probably be in separate layers, even though both are represented with polygons. If the project requires that the coincidence of features in two or more layers be studied, the user can overlay them or create a new layer.

See "Geographic Information Systems" on page 173 for more information about analyzing vector layers.

Topology

The spatial relationships between features in a vector layer are defined using topology. In topological vector data, a mathematical procedure is used to define connections between features, identify adjacent polygons, and define a feature as a set of other features (e.g., a polygon is made of connecting lines) (Environmental Systems Research Institute, 1990).

Topology is not automatically created when a vector layer is created. It must be added later using specific functions. Topology must also be updated after a layer is edited. Digitizing on page 49 describes how topology is created for a new or edited vector layer.

Vector Files

As mentioned above, the ERDAS IMAGINE vector structure is based on the ArcInfo data model used for ARC coverages. This georelational data model is actually a set of files using the computer's operating system for file management and input/output. An ERDAS IMAGINE vector layer is stored in subdirectories on the disk. Vector data are represented by a set of logical tables of information, stored as files within the subdirectory. These files may serve the following purposes:

• define features
• provide feature attributes
• cross-reference feature definition files
• provide descriptive information for the coverage as a whole

A workspace is a location that contains one or more vector layers. Workspaces provide a convenient means for organizing layers into related groups. They also provide a place for the storage of tabular data not directly tied to a particular layer. Each workspace is completely independent. It is possible to have an unlimited number of workspaces and an unlimited number of vector layers in a workspace. Table 1 summarizes the types of files that are used to make up vector layers.

Table 1: Description of File Types

File Type                      File    Description
Feature Definition Files       ARC     Line coordinates and topology
                               CNT     Polygon centroid coordinates
                               LAB     Label point coordinates and topology
                               TIC     Tic coordinates
Feature Attribute Files        AAT     Line (arc) attribute table
                               PAT     Polygon or point attribute table
Feature Cross-Reference File   PAL     Polygon/line/node cross-reference file
Layer Description Files        BND     Coordinate extremes
                               LOG     Layer history file
                               PRJ     Coordinate definition file
                               TOL     Layer tolerance file

Figure 18 illustrates how a typical vector workspace is set up (Environmental Systems Research Institute, 1992).

Figure 18: Workspace Structure (a workspace containing the vector layers georgia, parcels, testdata, and demo, with an INFO subdirectory serving layers such as roads and streets)

Because vector layers are stored in directories rather than in simple files, you MUST use the utilities provided in ERDAS IMAGINE to copy and rename them. A utility is also provided to update path names that are no longer correct due to the use of regular system commands on vector layers.

See the ESRI documentation for more detailed information about the different vector files.

Attribute Information

Along with points, lines, and polygons, a vector layer can have a wealth of associated descriptive, or attribute, information. Attribute information is displayed in CellArrays. This is the same information that is stored in the INFO database of ArcInfo. Some attributes are automatically generated when the layer is created. Custom fields can be added to each attribute table. Attribute fields can contain numerical or character data.

You can select features in the layer based on the attribute information. Likewise, when a row is selected in the attribute CellArray, that feature is highlighted in the Viewer. The attributes for a roads layer may look similar to the example in Figure 19.

Figure 19: Attribute CellArray

Using Imported Attribute Data

When external data types are imported into ERDAS IMAGINE, only the required attribute information is imported into the attribute tables (AAT and PAT files) of the new vector layer. The rest of the attribute information is written to one of the following INFO files:

• <layer name>.ACODE—arc attribute information
• <layer name>.PCODE—polygon attribute information
• <layer name>.XCODE—point attribute information

To utilize all of this attribute information, the INFO files can be merged into the PAT and AAT files. Once this attribute information has been merged, it can be viewed in CellArrays and edited as desired. This new information can then be exported back to its original format.

The complete path of the file must be specified when establishing an INFO file name in a Viewer application, such as exporting attributes or merging attributes, as shown in the following example:

/georgia/parcels/info!arc!parcels.pcode

Use the Show Attributes option in the IMAGINE Workspace to view and manipulate vector attribute data, including merging and exporting.

See the ERDAS IMAGINE On-Line Help for more information about using CellArrays.

Displaying Vector Data

Vector data are displayed in Viewers, as are the other data types in ERDAS IMAGINE. You can display a single vector layer, overlay several layers in one Viewer, or display a vector layer(s) over a raster layer(s). In layers that contain more than one feature (a combination of points, lines, and polygons), you can select which features to display. For example, if you are studying parcels, you may want to display only the polygons in a layer that also contains street centerlines (lines).

See "Image Display" on page 145 for a thorough discussion of how images are displayed.

Color Schemes

Vector data are usually assigned class values in the same manner as the pixels in a thematic raster file. These class values correspond to different colors on the display screen. As with a pseudo color image, you can assign a color scheme for displaying the vector classes.

Symbolization

Vector layers can be displayed with symbolization, meaning that the attributes can be used to determine how points, lines, and polygons are rendered. Points, lines, polygons, and nodes are symbolized using styles and symbols similar to annotation. For example, if a point layer represents cities and towns, the appropriate symbol could be used at each point based on the population of that area.

Points

Point symbolization options include symbol, size, symbol color, background color, and the x- and y-separation between symbols. The symbols available are the same symbols available for annotation.

Lines

Lines can be symbolized with varying line patterns, composition, width, and color. The line styles available are the same as those available for annotation.

Polygons

Polygons can be symbolized as lines or as filled polygons. Polygons symbolized as lines can have varying line styles (see Lines on page 47). For filled polygons, either a solid fill color or a repeated symbol can be selected. When symbols are used, you select the symbol to use, the symbol size, symbol color, background color, and the x- and y-separation between symbols. Figure 20 illustrates a pattern fill.

See the On-Line Help for information about selecting features and using CellArrays.

Figure 20: Symbolization Example

The vector layer reflects the symbolization that is defined in the Symbology dialog.

Vector Data Sources
Vector data are created by:

• tablet digitizing—maps, photographs, or other hardcopy data can be digitized using a digitizing tablet
• screen digitizing—create new vector layers by using the mouse to digitize on the screen
• using other software packages—many external vector data types can be converted to ERDAS IMAGINE vector layers
• converting raster layers—raster layers can be converted to vector layers

Each of these options is discussed in a separate section.

Digitizing
In the broadest sense, digitizing refers to any process that converts nondigital data into numbers. However, in ERDAS IMAGINE, the digitizing of vectors refers to the creation of vector data from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing tablet or a mouse on a displayed image.

Any image not already in digital format must be digitized before it can be read by the computer and incorporated into the database. Most Landsat or other satellite data are already in digital format upon receipt, so it is not necessary to digitize them. However, you may also have maps, photographs, or other nondigital data that contain information you want to incorporate into the study. Or, you may want to extract certain features from a digital image to include in a vector layer. Tablet digitizing and screen digitizing enable you to digitize certain features of a map or photograph, such as roads, bodies of water, voting districts, and so forth.

Tablet Digitizing
Tablet digitizing involves the use of a digitizing tablet to transfer nondigital data such as maps or photographs to vector format. The digitizing tablet contains an internal electronic grid that transmits data to ERDAS IMAGINE on cue from a digitizer keypad operated by you.

Figure 21: Digitizing Tablet

Digitizer Setup
The map or photograph to be digitized is secured on the tablet, and a coordinate system is established with a setup procedure.

Digitizer Operation
The handheld digitizer keypad features a small window with a crosshair and keypad buttons. Position the intersection of the crosshair directly over the point to be digitized. Depending on the type of equipment and the program being used, one of the input buttons is pushed to tell the system which function to perform, such as:

• digitize a point (that is, transmit map coordinate data)
• connect a point to previous points
• assign a particular value to the point or polygon

These operations can also be performed with screen digitizing.

You can create a new vector layer from the Viewer. Select the Tablet Input function from the Viewer to use a digitizing tablet to enter new information into that layer.

Newly created vector layers do not contain topological data. You must create topology using the Build or Clean options. This is discussed further in "Geographic Information Systems" on page 173.

Digitizing Modes
There are two modes used in digitizing:

• point mode—one point is generated each time a keypad button is pressed
• stream mode—points are generated continuously at specified intervals, while the puck is in proximity to the surface of the digitizing tablet

Measurement
The digitizing tablet can also be used to measure both linear and areal distances on a map or photograph. The digitizer puck is used to outline the areas to measure. Move the puck along the desired polygon boundaries or lines, digitizing points at appropriate intervals (where lines curve or change direction), until all the points are collected. You can measure:

• lengths and angles by drawing a line
• perimeters and areas using a polygonal, rectangular, or elliptical shape
• positions by specifying a particular point

Measurements can be saved to a file, printed, and copied.

Select the Measure function from the Viewer or click on the Ruler tool in the Viewer tool bar to enable tablet or screen measurement.
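The perimeter and area values produced by such measurements follow directly from the digitized vertex coordinates. The sketch below shows the standard arithmetic (the shoelace formula) with made-up coordinates; it illustrates the computation only and is not the ERDAS IMAGINE implementation.

# Perimeter and area of a digitized polygon from its vertex coordinates,
# using the shoelace formula; illustrative only, with made-up coordinates.
import math

def perimeter_and_area(pts):
    per, twice_area = 0.0, 0.0
    n = len(pts)
    for i in range(n):
        (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
        per += math.hypot(x2 - x1, y2 - y1)          # edge length
        twice_area += x1 * y2 - x2 * y1              # signed cross term
    return per, abs(twice_area) / 2.0

print(perimeter_and_area([(0, 0), (100, 0), (100, 50), (0, 50)]))
# -> (300.0, 5000.0) in map units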

Inc. bodies of water.G.S. ArcInfo INTERCHANGE files from ESRI. Inc. Inc. ETAK MapBase files from ETAK. political boundaries selecting training samples for input to the classification programs outlining an area of interest for any number of applications Create a new vector layer from the Viewer. such as: • • • digitizing roads.S.S. Initial Graphics Exchange Standard (IGES) files Intergraph Design (DGN) files from Intergraph Spatial Data Transfer Standard (SDTS) vector files Topologically Integrated Geographic Encoding and Referencing System (TIGER) files from the U. These data formats include: • • • • • • • • • • ArcInfo GENERATE format files from ESRI. ArcView Shapefiles from ESRI. vector data are drawn with a mouse in the Viewer using the displayed image as a reference. Digital Exchange Files (DXF) from Autodesk. Inc. Imported Vector Data Many types of vector data from other software packages can be incorporated into the ERDAS IMAGINE system. Digital Line Graphs (DLG) from U. Vector Data 51 .Screen Digitizing In screen digitizing. Inc. Census Bureau Vector Product Format (VPF) files from the Defense Mapping Agency • See "Raster and Vector Data Sources" on page 55 for more information on these data. These data are then written to a vector layer. Screen digitizing is used for the same purposes as tablet digitizing.

Raster to Vector Conversion
A raster layer can be converted to a vector layer and used as another layer in a vector database. Convert vector data to raster data, and vice versa, using IMAGINE Vector™. The following diagram illustrates a thematic file in raster format that has been converted to vector format.

Figure 22: Raster Format Converted to Vector Format (a raster soils layer and the same soils layer converted to a vector polygon layer)

Most commonly, thematic raster data rather than continuous data are converted to vector format, since converting continuous layers may create more vector features than are practical or even manageable.
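The same thematic raster-to-polygon conversion can be sketched with the open-source GDAL/OGR Python bindings. This is offered only as an illustration of the operation, not as the IMAGINE Vector utility; the file and field names are hypothetical.

# A minimal sketch of thematic raster-to-polygon conversion using GDAL/OGR.
from osgeo import gdal, ogr

src = gdal.Open("soils.img")
band = src.GetRasterBand(1)

drv = ogr.GetDriverByName("ESRI Shapefile")
dst = drv.CreateDataSource("soils_polys.shp")
layer = dst.CreateLayer("soils", srs=None, geom_type=ogr.wkbPolygon)
layer.CreateField(ogr.FieldDefn("CLASS", ogr.OFTInteger))

# Each contiguous run of same-valued pixels becomes one polygon;
# the pixel value is written to the CLASS attribute (field index 0).
gdal.Polygonize(band, None, layer, 0)
dst = None  # flush the shapefile to disk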

Other Vector Data Types
While this chapter has focused mainly on the ArcInfo coverage format, there are other types of vector formats that you can use in ERDAS IMAGINE. The two primary types are:

• shapefile
• Spatial Database Engine (SDE)

Shapefile Vector Format
The shapefile vector format was designed by ESRI. You can now use shapefile format (extension .shp) in ERDAS IMAGINE. You can now:

• display shapefiles
• create shapefiles
• edit shapefiles
• attribute shapefiles
• symbolize shapefiles
• print shapefiles

The shapefile contains spatial data, such as boundary information, and can be loaded as a regular ERDAS IMAGINE data file.

SDE
Like the shapefile format, the Spatial Database Engine (SDE) is a vector format designed by ESRI. The data layers are stored in a relational database management system (RDBMS) such as Oracle, or SQL Server. Some of the features of SDE include:

• storage of large, untiled spatial layers for fast retrieval
• powerful and flexible query capabilities using the SQL where clause
• operation in a client-server environment
• multiuser access to the data

ERDAS IMAGINE has the capability to act as a client to access SDE vector layers stored in a database. To do this, it uses a wizard interface to connect ERDAS IMAGINE to a SDE database, and selects one of the vector layers. Additionally, it can join business tables with the vector layer, and generate a subset of features by imposing attribute constraints (for example, a SQL where clause). The definition of the vector layer as extracted from a SDE database is stored in a <layername>.sdv file.

Currently, ERDAS IMAGINE's SDE capability is read-only. For example, features can be queried and AOIs can be created, but not edited. ERDAS IMAGINE supports the SDE projection systems.

SDTS
SDTS stands for Spatial Data Transfer Standard. SDTS is used to transfer spatial data between computer systems. Such data includes attribute, georeferencing, data quality report, data dictionary, and supporting metadata.

According to the USGS, the implementation of SDTS is of significant interest to users and producers of digital spatial data because of the potential for increased access to and sharing of spatial data, the reduction of information loss in data exchange, the elimination of the duplication of data acquisition, and the increase in the quality and integrity of spatial data (United States Geological Survey, 1999c).

The components of SDTS are broken down into six parts. The first three parts are related, but independent, and are concerned with the transfer of spatial data. The last three parts provide definitions for rules and formats for applying SDTS to the exchange of data. The parts of SDTS are as follows:

• Part 1—Logical Specifications
• Part 2—Spatial Features
• Part 3—ISO 8211 Encoding
• Part 4—Topological Vector Profile
• Part 5—Raster Profile
• Part 6—Point Profile

ArcGIS Integration
ArcGIS Integration is the method you use to access the data in a geodatabase. ERDAS IMAGINE has always supported ESRI data formats such as coverages and shapefiles, and now, using ArcGIS Vector Integration, ERDAS IMAGINE can also access CAD and VPF data on the internet.

The term geodatabase is the short form of geographic database. The geodatabase is hosted inside of a relational database management system that provides services for managing geographic data. The services include validation rules, relationships, and topological associations. There are two types of geodatabases: personal and enterprise. The personal geodatabases are for use by an individual or small group, and the enterprise geodatabases are for use by large groups. Industrial strength host systems such as Oracle support the organizational structure of enterprise geodatabases.

It is important to remember that when you delete a personal database connection, the entire database is deleted from disk. When you delete a database connection on an enterprise database, only the connection is broken, and nothing in the geodatabase is deleted.

The organization of both personal and enterprise geodatabases starts with a workspace that contains both spatial and non-spatial datasets such as feature classes, raster datasets, and tables. An example of a feature dataset would be U.S. Agriculture. Within the datasets are feature classes; an example of a feature class would be U.S. Hydrology. Within every feature class are particular features like wells and lakes. Each feature class will be symbolized by only one type of geometry, such as points symbolizing wells or polygons symbolizing lakes.

Raster and Vector Data Sources

Introduction
This chapter is an introduction to the most common raster and vector data types that can be used with the ERDAS IMAGINE software package. The raster data types covered include:

• visible/infrared satellite data
• radar imagery
• airborne sensor data
• scanned or digitized maps and photographs
• digital terrain models (DTMs)

The vector data types covered include:

• ArcInfo GENERATE format files
• AutoCAD Digital Exchange Files (DXF)
• United States Geological Survey (USGS) Digital Line Graphs (DLG)
• MapBase digital street network files (ETAK)
• U.S. Department of Commerce Initial Graphics Exchange Standard files (IGES)
• U.S. Census Bureau Topologically Integrated Geographic Encoding and Referencing System files (TIGER)

Importing and Exporting Raster Data
There is an abundance of data available for use in GIS today. In addition to satellite and airborne imagery, raster data sources include digital x-rays, sonar, microscopic imagery, video digitized data, and many other sources. Because of the wide variety of data formats, ERDAS IMAGINE provides two options for importing data:

• import for specific formats
• generic import for general formats

Import
Table 2 lists some of the raster data formats that can be imported to, exported from, directly read from, and directly written to ERDAS IMAGINE. There is a distinct difference between import and direct read. Import means that the data is converted from its original format into another format (for example, IMG, TIFF, or GRID Stack), which can be read directly by ERDAS IMAGINE. Direct read formats are those formats which the Viewer and many of its associated tools can read immediately without any conversion process.

NOTE: Annotation and Vector data formats are listed separately.

Table 2: Raster Data Formats
The table indicates, for each data type, whether it can be imported, exported, directly read, and directly written. The data types are:

ADRG; ADRI; Alaska SAR Facility (.L); Algorithm (.alg) IMAGINE 2010; ALOS AVNIR-2 JAXA CEOS; ALOS PRISM JAXA CEOS; ALOS PRISM JAXA CEOS IMG; ALOS Palsar ERSDAC CEOS; ALOS Palsar ERSDAC VEXCEL; ALOS Palsar JAXA CEOS; ARCGEN; Arc Coverage; ArcInfo & Space Imaging BIL, BIP, BSQ; ASCII Raster; ASRP; ASTER (EOS HDF Format); AVHRR (NOAA); AVHRR (Dundee Format); AVHRR (Sharp); BigGeoTIFF; BigTIFF; Chip from BigTIFF; BIL, BIP, BSQ (Generic Binary); Bitmap; CADRG (Compressed ADRG); CIB (Controlled Image Base); COSMO-SkyMed; DAEDALUS; USGS DEM; DigitalGlobe TIL; DOQ; DOQ (JPEG); DTED; ECW; ENVI (.hdr); Envisat (.N1*); ER Mapper; EROS-A, EROS-B; ERS (I-PAF CEOS); ERS (Conae-PAF CEOS); ERS (Tel Aviv-PAF CEOS); ERS (D-PAF CEOS); ERS (UK-PAF CEOS); FIT; FORMOSAT DIMAP (.dim); Generic Binary (BIL, BIP, BSQ); GeoEye-1; GeoPDF; GeoTIFF; GIF (.gif); GIS (Erdas 7.x); GRASS; GRID; GRID Stack; GRID Stack 7.x; GRD (Surfer: ASCII/Binary); HDF; HYDICE (.cub); IMAGINE (.img); Image Web Server ECWP (.url); Intergraph CCITT Group 4; Intergraph COT; Intergraph ISAT; IRS-1C/1D (EOSAT Fast Format C); IRS-1C/1D (EUROMAP Fast Format C); IRS-1C/1D (Super Structured Format); JFIF (JPEG); JPEG2000; Landsat-7 Fast-L7A ACRES; Landsat-7 Fast-L7A EROS; Landsat-7 Fast-L7A Eurimage; LAN (Erdas 7.x); Layout file (.ixw) IMAGINE 2010; Map composition (.map); MODIS (EOS HDF Format); MrSID; MSS Landsat; MultiGen OpenFlight FLT; NASDA CEOS; NITF 1.1; NITF 2.x; NSIF; NLAPS Data Format (NDF); ORACLE Spatial GeoRaster (.ogr); PCIDSK (.pix); PCX; PNG (.png); QuickBird; RADARSAT (Acres CEOS); RADARSAT (JAXA CEOS); RADARSAT (West Freugh CEOS); RADARSAT (Vancouver CEOS); RADARSAT-2; Raster Product Format; RAW (.raw); SDE; SDE Raster (.sdi); SDTS; SeaWiFS L1B and L2A (OrbView and HDF); Session file (.ixs) IMAGINE 2010; ShoeBox file (.ixp) IMAGINE 2010; SOCET SET Support (.sup) LPS required; SPOT (CAP/SPIM); SPOT CCRS; SPOT DIMAP (.dim); SPOT Fast Format; SPOT (GeoSpot); SPOT (NASDA CAP); SPOT SICORP MetroView; Sub-Image (.sbi); SUN Raster; Surfer Grid (.grd); TARGA (.tga); TerraSAR-X (TSX1*.xml); THEOS DIMAP (.dim); TIFF; TIL (DigitalGlobe); TM Landsat Acres Fast Format; TM Landsat Acres Standard Format; TM Landsat EOSAT Fast Format; TM Landsat EOSAT Standard Format; TM Landsat ESA Fast Format; TM Landsat ESA Standard Format; TM Landsat-7 Eurimage CEOS (Multispectral); TM Landsat-7 Eurimage CEOS (Panchromatic); TM Landsat-7 HDF Format; TM Landsat IRS Fast Format; TM Landsat IRS Standard Format; TM Landsat-7 Fast-L7A ACRES; TM Landsat-7 Fast-L7A EROS; TM Landsat-7 Fast-L7A Eurimage; TM Landsat Radarsat Fast Format; TM Landsat Radarsat Standard Format; Unrestricted Access Image (.uai); USRP; VEXCEL SLC (PASL*.SLC); VITec (.vit); View (.vue); Virtual Mosaic (.vmc); Virtual Stack (.vsk); Web Coverage Service proxy (.wcs); Web Map Service proxy (.wms); WorldView-1; WorldView-2

Notes on Table 2: See Generic Binary Data on page 63. Direct read of generic binary data requires an accompanying header file in the ESRI ArcInfo, Space Imaging, or ERDAS IMAGINE formats. The import function converts raster data to the ERDAS IMAGINE file format (.img), or other formats directly writable by ERDAS IMAGINE.

The import function imports the data file values that make up the raster image, as well as the ephemeris or additional data inherent to the data structure. For example, when the user imports Landsat data, ERDAS IMAGINE also imports the georeferencing data for the image. Each direct function is programmed specifically for that type of data and cannot be used to import other data types. Raster data formats cannot be exported as vector data formats unless they are converted with the Vector utilities.

Statistics and pyramid layers (.rrd files) are not created in export functions. These files are created when data are imported into IMAGINE, and the export function is used when this data is exported out of IMAGINE. If you need statistics and pyramid layers, please use the Image Command tool.

Raster Data Sources

NITFS
NITFS stands for the National Imagery Transmission Format Standard. NITFS was first introduced in 1990 and was for use by the government and intelligence agencies. NITFS is now the standard for military organizations as well as commercial industries. NITFS is designed to pack numerous image compositions with complete annotation, text attachments, and imagery-associated metadata.

According to Jordan and Beck, NITFS is an unclassified format that is based on ISO/IEC 12087-5, Basic Image Interchange Format (BIIF). The NITFS implementation of BIIF is documented in U.S. Military Standard 2500B, establishing a standard data format for digital imagery and imagery-related products.

Jordan and Beck list the following attributes of NITF files:

• provide a common basis for storage and digital interchange of images and associated data among existing and future systems
• support interoperability by simultaneously providing a data format for shared access applications while also serving as a standard message format for dissemination of images and associated data (text, symbols, labels) via digital communications
• require minimal preprocessing and post-processing of transmitted data
• support variable image sizes and resolution
• minimize formatting overhead, particularly for those users transmitting only a small amount of data or with limited bandwidth
• provide universal features and functions without requiring commonality of hardware or proprietary software

Moreover, NITF files support the following:

• multiple images
• annotation on images
• ASCII text files to accompany imagery and annotation
• metadata to go with imagery, annotation and text

The process of translating NITFS files is a cross-translation process. One system's internal representation for the files and their associated data is processed and put into the NITF format. In ERDAS IMAGINE, the IMAGINE NITF™ software accepts such information and assembles it into one file in the standard NITF format. The receiving system reformats the NITF file, and converts it for the receiving system's internal representation of the files and associated data.

Source: Jordan and Beck, 1999

Annotation Data
Annotation data can also be imported directly. Table 3 lists the Annotation formats. There is a distinct difference between import and direct read. Import means that the data is converted from its original format into another format (for example, IMG, TIFF, or GRID Stack), which can be read directly by ERDAS IMAGINE. Direct read formats are those formats which the Viewer and many of its associated tools can read immediately without any conversion process.

Table 3: Annotation Data Formats
Annotation (.ovr); ANT (Erdas 7.x); AOI (Area of Interest) (.aoi); ASCII To Point Annotation; DXF To Annotation

Generic Binary Data
The Generic Binary import option is a flexible program which enables the user to define the data structure for ERDAS IMAGINE. This program allows the import of BIL, BIP, and BSQ data that are stored in left to right, top to bottom row order. Data formats from unsigned 1-bit up to 64-bit floating point can be imported. This program imports only the data file values—it does not import ephemeris data, such as georeferencing information. However, this ephemeris data can be viewed using the Data View option (from the Utility menu or the Import dialog).
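What the Generic Binary dialog asks for (rows, columns, bands, interleaving, and data type) is exactly what is needed to interpret such a file. A minimal NumPy sketch, with hypothetical dimensions and file name:

# Reading a generic-binary BIL file once its structure is known.
import numpy as np

rows, cols, bands = 1024, 1024, 4
dtype = np.dtype(">u2")  # for example, 16-bit unsigned, big-endian

raw = np.fromfile("scene.bil", dtype=dtype, count=rows * cols * bands)

# BIL order: for each row, one line per band is stored in sequence.
bil = raw.reshape(rows, bands, cols)
band1 = bil[:, 0, :]  # first band as a rows x cols array

# BSQ order would instead be reshape(bands, rows, cols);
# BIP order would be reshape(rows, cols, bands).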

Vector Data Vector layers can be created within ERDAS IMAGINE by digitizing points. they can be imported as two real images and then combined into one complex image using the Spatial Modeler. lines.x) • • • • • • • • • • • Import • Export • Direct Read Direct Write • 64 Raster and Vector Data Sources . IMG. You cannot import tiled or compressed data using the Generic Binary import option. can also be imported. or GRID Stack). and exported from. Import means that the data is converted from its original format into another format (for example. Table 4: “Vector Data Formats” lists some of the vector data formats that can be imported to. TIFF. which can be read directly by ERDAS IMAGINE.Complex data cannot be imported using this program. Table 4: Vector Data Formats Data Type ARCGEN ArcGIS Geodatabase (. and polygons using a digitizing tablet or the computer screen.gbd) Arc Interchange Arc_Interchange to Coverage Arc_Interchange to Grid ASCII To Point Coverage Coverage to DXF Coverage to Arc_Interchange DFAD DGN (Intergraph IGDS) DIG Files (Erdas 7. Direct read formats are those formats which the Viewer and many of its associated tools can read immediately without any conversion process. however. ERDAS IMAGINE: There is a distinct difference between import and direct read. Several vector data types. which are available from a variety of government agencies and private companies.

dgn File) IGES MIF/MID (MapInfo) to Coverage ORACLE Spatial Feature (. the vector data are automatically converted to ERDAS IMAGINE vector layers. You can also convert vector layers to raster format.ogv) SDE SDTS Shapefile Terramodel TIGER VirtualGIS TIN Mesh VirtualGIS TIN World VPF • • • • • • • • • • • Import • • • • • • • • Export • Direct Read Direct Write • • • Once imported.Table 4: Vector Data Formats Data Type DLG DXF to Annotation DXF to Coverage ETAK IGDS (Intergraph . Raster and Vector Data Sources 65 . and vice versa. These vector formats are discussed in more detail in Vector Data from Other Software Vendors on page 138. Import and export vector data with the Import/Export function. See "Vector Data" on page 41 for more information on ERDAS IMAGINE vector layers. with the IMAGINE Vector utilities.
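The reverse direction, vector to raster, can be sketched the same way with the open-source GDAL/OGR bindings. Again this is an illustration under assumed file and attribute names, not the IMAGINE Vector utilities themselves.

# Burning a vector layer into a raster grid with GDAL/OGR.
from osgeo import gdal, ogr

vec = ogr.Open("parcels.shp")
layer = vec.GetLayer()
xmin, xmax, ymin, ymax = layer.GetExtent()

cell = 30.0  # output cell size in map units (hypothetical)
cols = int((xmax - xmin) / cell)
rows = int((ymax - ymin) / cell)

drv = gdal.GetDriverByName("GTiff")
out = drv.Create("parcels.tif", cols, rows, 1, gdal.GDT_Byte)
out.SetGeoTransform((xmin, cell, 0, ymax, 0, -cell))

# Write the value of each feature's CLASS attribute into band 1.
gdal.RasterizeLayer(out, [1], layer, options=["ATTRIBUTE=CLASS"])
out = None  # flush to disk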

Optical Satellite Data
There are several data acquisition options available including photography, aerial sensors, and sophisticated satellite scanners. However, a satellite system offers these advantages:

• Digital data gathered by a satellite sensor can be transmitted over radio or microwave communications links and stored on DVDs, CDs, or magnetic tapes, so they are easily processed and analyzed by a computer.
• Many satellites orbit the Earth, so the same area can be covered on a regular basis for change detection.
• Once the satellite is launched, the cost for data acquisition is less than that for aircraft data.
• Satellites have very stable geometry, meaning that there is less chance for distortion or skew in the final image.

There are two types of satellite data access: direct access to many raster data formats for the use of files in their native format, and the Import and Export functions for data exchange.

Satellite System
A satellite system is composed of a scanner with sensors and a satellite platform.

• The scanner is the entire data acquisition system, such as the Landsat TM scanner or the SPOT panchromatic scanner (Lillesand and Kiefer, 1987). It includes the sensor and the detectors.
• A sensor is a device that gathers energy, converts it to a signal, and presents it in a form suitable for obtaining information about the environment (Colwell, 1983).
• A detector is the device in a sensor system that records electromagnetic radiation. For example, in the sensor system on the Landsat TM scanner there are 16 detectors for each wavelength band (except band 6, which has 4 detectors).

In a satellite system, the total width of the area on the ground covered by the scanner is called the swath width, or width of the total field of view (FOV). FOV differs from IFOV in that the IFOV is a measure of the field of view of each detector, while the FOV is a measure of the field of view of all the detectors combined.
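The swath width and the ground size of each detector's IFOV follow from simple geometry. The sketch below works through that relation; the FOV and IFOV values are assumptions chosen only to roughly reproduce typical TM-class figures, not values taken from this guide.

# Back-of-envelope relation between field of view and swath width
# for a nadir-looking scanner; numbers are illustrative.
import math

altitude_km = 705.0          # e.g., a Landsat 4/5-class orbit height
fov_degrees = 14.9           # hypothetical total FOV
swath_km = 2 * altitude_km * math.tan(math.radians(fov_degrees / 2))
print(f"swath width ~= {swath_km:.0f} km")      # ~184 km

ifov_microrad = 42.5         # hypothetical per-detector IFOV
pixel_m = altitude_km * 1000 * ifov_microrad * 1e-6
print(f"ground IFOV ~= {pixel_m:.0f} m")        # ~30 m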

Satellite Characteristics
The U.S. Landsat and the French SPOT satellites are two important data acquisition satellites. These systems provide the majority of remotely-sensed digital images in use today. The Landsat and SPOT satellites have several characteristics in common:

• Both scanners can produce nadir views. Nadir is the area on the ground directly beneath the scanner's detectors.
• They have sun-synchronous orbits, meaning that they rotate around the Earth at the same rate as the Earth rotates on its axis, so data are always collected at the same local time of day over the same region.
• They both record electromagnetic radiation in one or more bands. Multiband data are referred to as multispectral imagery. Single band, or monochrome, imagery is called panchromatic.

NOTE: The current SPOT system has the ability to collect off-nadir stereo imagery.

Image Data Comparison
Figure 23 shows a comparison of the electromagnetic spectrum recorded by Landsat TM, Landsat MSS, SPOT, and National Oceanic and Atmospheric Administration (NOAA) AVHRR data. These data are described in detail in the following sections.

Figure 23: Multispectral Imagery Comparison (Landsat MSS and TM bands, SPOT XS and Pan bands, and NOAA AVHRR bands plotted against wavelength in micrometers)

ALOS
Advanced Land Observing Satellite mission (ALOS) is a project operated by the Japan Aerospace Exploration Agency (JAXA). ALOS was launched from the Tanegashima Space Center in Japan in 2006. ALOS enhances the land observing technology of its predecessors JERS-1 and ADEOS, and is used for cartography, regional observation, disaster monitoring, and resource surveying.

ALOS orbits at an altitude of 691 kilometers at an inclination of 98 degrees. The orbit is sun-synchronous sub-recurrent, and the repeat cycle is 46 days with a sub cycle of 2 days.

ALOS has three remote-sensing instruments: the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) for digital elevation mapping, the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2) for land coverage observation, and the Phased Array type L-band Synthetic Aperture Radar (PALSAR) for all-weather, day/night land observation. Each of the three remote-sensing instruments is discussed in the ALOS AVNIR-2, ALOS PALSAR, and ALOS PRISM sections.

ALOS AVNIR-2
AVNIR-2, Advanced Visible and Near Infrared Radiometer type 2, launched in 2006, is a visible and near infrared radiometer on board the ALOS satellite mission. The AVNIR-2 provides better spatial landcoverage maps and land-use classification maps for monitoring regional environments.

Table 5: AVNIR-2 Sensor Characteristics
Number of Bands: 4
Wavelength: Band 1: 0.42 to 0.50 μm; Band 2: 0.52 to 0.60 μm; Band 3: 0.61 to 0.69 μm; Band 4: 0.76 to 0.89 μm
Spatial Resolution: 10 m (at Nadir)
Swath Width: 70 km (at Nadir)
Number of Detectors: 7000 per band
Pointing Angle: -44 to +44 degrees
Bit Length: 8 bits
Source: Japan Aerospace Exploration Agency, 2003.

ALOS PRISM
PRISM, Panchromatic Remote-sensing Instrument for Stereo Mapping, launched in 2006, is a panchromatic radiometer on board the ALOS satellite mission. The radiometer has 2.5 m spatial resolution at nadir and its extracted data provides digital surface models. PRISM has three independent optical systems for viewing nadir, forward, and backward, producing a stereoscopic image along the satellite's track. The nadir-viewing telescope covers a width of 70 km; the forward and backward viewing telescopes each cover 35 km.

Source: Japan Aerospace Exploration Agency, 2007.

PRISM's wide field of view (FOV) provides three fully overlapped stereo images of a 35 km width without mechanical scanning or yaw steering of the satellite.

Table 6: PRISM Sensor Characteristics
Number of Bands: 1 (panchromatic)
Wavelength: 0.52 to 0.77 μm
Number of Optics: 3 (Nadir, Forward, Backward)
Base-to-Height Ratio: 1.0 (between Forward and Backward view)
Spatial Resolution: 2.5 m (at Nadir)
Swath Width: 70 km (Nadir only); 35 km (Triplet mode)
Pointing Angle: -1.5 to +1.5 degrees (Triplet mode, Cross-track direction)
Bit Length: 8 bits
Source: Japan Aerospace Exploration Agency, 2003c.

ASTER
ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) is an instrument flying on Terra, a satellite launched in December 1999 as part of NASA's Earth Observing System (EOS). ASTER is a cooperative effort between NASA, Japan's Ministry of Economy, Trade and Industry (METI), and Japan's Earth Remote Sensing Data Analysis Center (ERSDAC).

Compared with the Landsat Thematic Mapper and Japan's JERS-1 OPS scanner, the ASTER instrument is the next generation in remote sensing imaging. ASTER captures high resolution data in the visible to thermal infrared wavelength spectrum and provides stereo viewing capability for DEM creation. The ASTER instrument consists of three subsystems: Visible and Near Infrared (VNIR), Shortwave Infrared (SWIR), and Thermal Infrared (TIR).

Table 7: ASTER Characteristics
Spectral range (wavelengths in microns):
VNIR: Band 1: 0.52 - 0.60; Band 2: 0.63 - 0.69; Band 3: 0.76 - 0.86
SWIR: Band 4: 1.600 - 1.700; Band 5: 2.145 - 2.185; Band 6: 2.185 - 2.225; Band 7: 2.235 - 2.285; Band 8: 2.295 - 2.365; Band 9: 2.360 - 2.430
TIR: Band 10: 8.125 - 8.475; Band 11: 8.475 - 8.825; Band 12: 8.925 - 9.275; Band 13: 10.25 - 10.95; Band 14: 10.95 - 11.65
Ground Resolution: VNIR: 15 m; SWIR: 30 m; TIR: 90 m
Swath Width: 60 km for all subsystems

9 0.EROS B Characteristics Characteristic Geometry of orbit Orbit Altitude Swath Width Ground Sampling Distance EROS A sun-synchronous ~ 500 km 14 km at nadir 1.2. 2004.285 Band 8 2.4.V.0. They subsequently launched their second satellite.65 Ground Resolution Swath Width 15 m 60 km 30 m 60 km 90 m 60 km Source: National Aeronautics and Space Administration. 2008 FORMOSAT-2 The FORMOSAT-2 satellite.76 . was developed by ImageSat International N. EROS A imaging techniques offer panchromatic images in basic type and as stereo pairs.95 Band 14 10.11.365 Band 9 2.360 .235 . was the first remote sensing satellite developed by National Space Organization (NSPO). is a Netherlands Antilles company with offices in Cyprus and Israel.V. The main mission of FORMOSAT-2 is to capture satellite images of Taiwan island and surrounding oceanic regions.7 m at nadir from 510 km for TDI stages 1.9 m at nadir from 510 km EROS B sun-synchronous ~ 500 km 7 km at nadir 0.5 to 0. launched in December 2000.430 TIR Band 13 10. ImageSat International N.8 0. EROS B. launched in May 2004.9 Source: ImageSat International N.2. in April 2006. stereo pair.V. EROS B imaging techniques offer panchromatic images in basic.86 SWIR Band 7 2. Raster and Vector Data Sources 71 .25 . and terrestrial and oceanic regions of the entire Earth. and mosaic types.2.5 to 0.8 m at nadir from 510 km for all other TDI stages Spectral Bandwidth 0. triplet.295 .10.Table 7: ASTER Characteristics Characterisic VNIR Band 3 0.95 . EROS A and EROS B The first Earth Remote Observation Satellite (EROS A). Table 8: EROS A .

FORMOSAT-2 onboard sensors include a Remote Sensing Instrument and ISUAL (Imager of Sprites and Upper Atmospheric Lightning).

Table 9: FORMOSAT-2 Characteristics
Geometry of orbit: sun-synchronous
Orbit Altitude: 891 km
Swath Width: 24 km
Sensor Resolution: panchromatic: 2 m; multispectral: 8 m
Source: National Space Organization, 2008 and European Space Agency, 2010b.

GeoEye-1
The GeoEye-1 satellite, launched in 2008, was developed by GeoEye, a company formed through the combination of ORBIMAGE and Space Imaging. GeoEye-1 orbits at an altitude of 681 km, or 423 miles, in a sun-synchronous orbit. GeoEye-1 data collection capacity is up to 700,000 square km per day of pan area and up to 350,000 square km per day of pan-sharpened multispectral area.

Table 10: GeoEye-1 Characteristics
Geometry of orbit: sun-synchronous
Orbit Altitude: 681 km
Orbit Inclination: 98 degrees
Swath Width: 15.2 km at nadir
Area Size, single point: 225 sq km (15 km x 15 km)
Area Size, large area: 15,000 sq km (300 km x 50 km)
Area Size, cell size: 10,000 sq km (100 km x 100 km)
Area Size, stereo area: 6,270 sq km (224 km x 28 km)
Sensor Resolution (nominal at Nadir): panchromatic: 0.41 m (1.34 feet); multispectral: 1.65 m (5.41 feet)
Spectral Bandwidth, Panchromatic: 450 to 800 nm

Spectral Bandwidth, Multispectral: 450 to 510 nm (blue); 510 to 580 nm (green); 655 to 690 nm (red); 780 to 920 nm (near infrared)
Source: GeoEye, 2008.

IKONOS
The IKONOS satellite was launched in September 1999. IKONOS orbits at an altitude of 423 miles, or 681 kilometers. The resolution of the panchromatic sensor is 1 m. The resolution of the multispectral scanner is 4 m. The swath width is 13 km at nadir. The accuracy without ground control is 12 m horizontally, and 10 m vertically; with ground control it is 2 m horizontally, and 3 m vertically. The revisit time is 2.9 days at 1 m resolution, and 1.5 days at 1.5 m resolution.

Band 1, Blue: 0.45 to 0.52 μm
Band 2, Green: 0.52 to 0.60 μm
Band 3, Red: 0.63 to 0.69 μm
Band 4, NIR: 0.76 to 0.90 μm
Panchromatic: 0.45 to 0.90 μm
Source: Space Imaging, 1999a; Center for Health Applications of Aerospace Related Technologies, 2000a.

IRS
IRS-1C
The IRS-1C satellite was launched in December of 1995. The repeat coverage of IRS-1C is every 24 days. The IRS-1C satellite has three sensors on board with which to capture images of the Earth. Those sensors are as follows:

LISS-III
LISS-III has a spatial resolution of 23 m, with the exception of the SW Infrared band, which is 70 m. Bands 2, 3, and 4 have a swath width of 142 kilometers; band 5 has a swath width of 148 km. Repeat coverage occurs every 24 days at the Equator.

Band 1, Blue: ---
Band 2, Green: 0.52 to 0.59 μm
Band 3, Red: 0.62 to 0.68 μm
Band 4, NIR: 0.77 to 0.86 μm
Band 5, MIR: 1.55 to 1.70 μm
Source: National Remote Sensing Agency, 1998

Panchromatic Sensor
The panchromatic sensor has 5.8 m spatial resolution, as well as stereo capability, with ± 26° off-nadir viewing. Its swath width is 70 km. The revisit time is every five days.

Band Pan: 0.5 to 0.75 μm
Source: National Remote Sensing Agency, 1998

Wide Field Sensor (WiFS)
WiFS has a 188 m spatial resolution, and repeat coverage every five days at the Equator. The swath width is 774 km.

Band 1, Red: 0.62 to 0.68 μm
Band 2, NIR: 0.77 to 0.86 μm
Source: Space Imaging, 1998

IRS-1D
IRS-1D was launched in September of 1997. IRS-1D's sensors were copied from IRS-1C, which was launched in December 1995. It collects imagery at a spatial resolution of 5.8 m. Repeat coverage is every 24 days at the Equator.

Imagery collected by IRS-1D is distributed in black and white format. The panchromatic imagery "reveals objects on the Earth's surface (such) as transportation networks, large ships, parks and opens space, and built-up urban areas" (Space Imaging, 1999b). This information can be used to classify land cover in applications such as urban planning and agriculture. The Space Imaging facility located in Norman, Oklahoma has been obtaining IRS-1D data since 1997.

For band and wavelength data on IRS-1D, see IRS on page 73.

Source: Space Imaging, 1998.

KOMPSAT 1-2
Korea Aerospace Research Institute (KARI) has developed the KOMPSAT-1 (KOrea Multi-Purpose SATellite) and KOMPSAT-2 satellite systems for surveillance of large scale disasters, acquisition of high resolution images for GIS, and composition of printed and digitized maps. KOMPSAT-1, launched in December 1999, carries an Electro-Optical Camera (EOC) sensor and KOMPSAT-2, launched in July 2006, carries a Multi-Spectral Camera (MSC) sensor. Through a third party mission agreement, European Space Agency makes a sample dataset of European cities available from these missions.

Table 11: KOMPSAT-1 and KOMPSAT-2 Characteristics
Geometry of orbit: KOMPSAT-1: sun-synchronous circular polar; KOMPSAT-2: sun-synchronous circular
Orbit Altitude: KOMPSAT-1: 685 km; KOMPSAT-2: 685 km
Swath Width: KOMPSAT-1: 24 km EOC; KOMPSAT-2: ~ 15 km
Resolution: KOMPSAT-1: 6 m EOC; KOMPSAT-2: 1 m panchromatic, 4 m multispectral
Spectral Bandwidth: KOMPSAT-1: 500 - 900 nm panchromatic; KOMPSAT-2: 450 - 900 nm panchromatic, multispectral (4 bands)
Source: European Space Agency, 2010c

Landsat 1-5
In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian program specializing in the acquisition of remotely sensed digital satellite data. The first system was called ERTS (Earth Resources Technology Satellites), and was later renamed to Landsat. There have been several Landsat satellites launched since 1972. Landsats 1, 2, 3, and 4 are no longer operating, but Landsat 5 is still in orbit gathering data. Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5 collected MSS and TM data. MSS and TM are discussed in more detail in the following sections.

NOTE: Landsat data are available through the EROS Data Center. See Ordering Raster Data on page 127 for more information.

MSS
The Multispectral Scanner from Landsats 4 and 5 has a swath width of approximately 185 × 170 km from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely used for general geologic studies as well as vegetation inventories.

The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV. A typical scene contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-bit, but it is stored as 8-bit (Lillesand and Kiefer, 1987).

Detectors record electromagnetic radiation (EMR) in four bands:

• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting cultural features, such as roads. These bands also show detail in water.
• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in land/water and vegetation discrimination.

Band 1, Green (0.50 to 0.60 μm): This band scans the region between the blue and red chlorophyll absorption bands. It corresponds to the green reflectance of healthy vegetation, and it is also useful for mapping water bodies.
Band 2, Red (0.60 to 0.70 μm): This is the red chlorophyll absorption band of healthy green vegetation and represents one of the most important bands for vegetation discrimination. It is also useful for determining soil boundary and geological boundary delineations and cultural features.
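From the row, column, and band figures above, the uncompressed storage for one MSS scene can be estimated directly. A quick check, using the 8-bit storage noted in the text:

# Rough uncompressed size of one MSS scene from the figures above
# (values stored as 8-bit even though the radiometric resolution is 6-bit).
rows, cols, bands, bytes_per_pixel = 2340, 3240, 4, 1
size_mb = rows * cols * bands * bytes_per_pixel / 1024**2
print(f"~{size_mb:.0f} MB per scene")   # ~29 MB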

Band 3, NIR (0.70 to 0.80 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.
Band 4, NIR (0.80 to 1.10 μm): This band is useful for vegetation surveys and for penetrating haze (Jensen, 1996).
Source: Center for Health Applications of Aerospace Related Technologies, 2000b.

TM
The TM scanner is a multispectral scanning system much like the MSS, except that the TM sensor records reflected/emitted electromagnetic energy from the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS.

TM has a swath width of approximately 185 km from a height of approximately 705 km. The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6), which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is necessary for adequate signal strength. However, the thermal band is resampled to 28.5 × 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each pixel has a possible range of data values from 0 to 255.

Detectors record EMR in seven bands:

• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in detecting cultural features such as roads. These bands also show detail in water.
• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be used in land/water discrimination.
• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping (Jensen, 1996; Lillesand and Kiefer, 1987).

Band 1, Blue (0.45 to 0.52 μm): This band is useful for mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.
Band 2, Green (0.52 to 0.60 μm): This band corresponds to the green reflectance of healthy vegetation. It is also useful for cultural feature identification.
Band 3, Red (0.63 to 0.69 μm): This band is useful for discriminating between many plant species. It is also useful for determining soil boundary and geological boundary delineations as well as cultural features.
Band 4, NIR (0.76 to 0.90 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.
Band 5, MIR (1.55 to 1.75 μm): This band is sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. This is also one of the few bands that can be used to discriminate between clouds, snow, and ice.
Band 6, TIR (10.40 to 12.50 μm): This band is useful for vegetation and crop stress detection, heat intensity, and insecticide applications, and for locating thermal pollution. It can also be used to locate geothermal activity.
Band 7, MIR (2.08 to 2.35 μm): This band is important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.
Source: Center for Health Applications of Aerospace Related Technologies, 2000b.

Raster and Vector Data Sources 79 . the colors do not reflect the features in natural colors. water appears navy or black. The bands to be used are determined by the particular application. Bands 4. 4. (A thematic image is also a pseudo color image.Figure 24: Landsat MSS vs. • • Different color schemes can be used to bring out or enhance the features under study. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. 3.) In pseudo color. Bands 5. and Blue (RGB) color guns of the monitor. in an infrared image. and vegetation blue. Green. vegetation appears red. 2. True color means that objects look as they would to the naked eye—similar to a color photograph. 2 create a false color composite. These are by no means all of the useful combinations of these seven bands. The following combinations are commonly used to display images: NOTE: The order of the bands corresponds to the Red. For instance. water yellow. 1 create a true color composite. roads may be red. 2 create a pseudo color composite. • Bands 3. and so forth. Landsat TM MSS 4 bands 7 bands radiometric resolution 0-127 TM 1 pixel= 57x79m 1 pixel= 30x30m radiometric resolution 0-255 Band Combinations for Displaying TM Data Different combinations of the TM bands can be displayed to create different composite effects. For instance.

52 μm 0. South Dakota at the USGS EROS Data Center (EDC). are only able to receive data for the portion of the ETM+ ground track where the satellite can be seen by the receiving station. which is “descriptive information on the image. Landsat 7 Data Types One type of data available from Landsat 7 is browse data. launched in 1999.See "Image Display" on page 145 for more information on how images are displayed. ETM+ data is transmitted using X-band direct downlink at a rate of 150 Mbps.45 to 0. Landsat 7 The Landsat 7 satellite. and the receiving stations can obtain this data in real time using the X-band. quality and information content. is also available.” This information is available via the internet within 24 hours of being received by the primary ground station. The capabilities new to Landsat 7 include the following: • • • 15m spatial resolution panchromatic band 5% radiometric calibration with full aperture 60m spatial resolution thermal IR channel The primary receiving station for Landsat 7 data is located in Sioux Falls. This data has been corrected for scan direction and band alignment errors only.60 μm Resolution (m) 30 30 80 Raster and Vector Data Sources . Landsat 7 Specifications Information about the spectral range and ground resolution of the bands of the Landsat 7 satellite is provided in the following table: Band Number 1 2 Wavelength (microns) 0.52 to 0. Landsat 7 is capable of capturing scenes without cloud obstruction. which is corrected. Stations located around the globe. "Enhancement" on page 455 for more information on how images can be enhanced. Browse data is “a lower resolution image for determining image location. EDC processes the data to Level 0r.” The other type of data is metadata. however. and Ordering Raster Data on page 127 for information on types of Landsat data available. Moreover. uses Enhanced Thematic Mapper Plus (ETM+) to observe the Earth. Level 1G data.

TM.MSS.” (United States Geological Survey.76 to 0. and terrain corrected products. .MSS.90 μm Resolution (m) 30 30 30 60 30 15 Landsat 7 has a swath width of 185 kilometers. The repeat coverage interval is 16 days. and ETM+ data products. or 233 orbits.Landsat Missions web site.69 μm 0.90 μm 1.75 μm 10. The products generated by LPGS and NLAPS are mostly similar.63 to 0.d. Details of the differences are listed on the United States Geological Survey . The NLAPS system is able to “produce systematically-corrected. Raster and Vector Data Sources 81 . 2001 LPGS and NLAPS Processing Systems There are two processing systems used to generate Landsat MSS. TM and ETM+) Level 1P (systematically terrain corrected .). The National Landsat Archive Production System (NLAPS) is the Landsat processing system used for Landsat 1-5 MSS and Landsat 4 TM data. TM and ETM+) There are geometric differences.Band Number 3 4 5 6 7 Panchromatic (8) Wavelength (microns) 0. The satellite orbits the Earth at 705 kilometers. National Aeronautics and Space Administration.5 μm 2.TM and ETM+ only) Level 1T (terrain corrected . Source: National Aeronautics and Space Administration. .55 to 1.08 to 2.50 to 0. n. The Level 1 Product Generation System (LPGS) is for Landsat 7 ETM+ and Landsat 5 TM data. The levels of processing are: • • • • Level 1G (radiometrically and geometrically corrected . and data format differences between the LPGS and NLAPS processing systems. 1998.35 μm 0.TM and MSS only) Level 1Gt (systematically terrain corrected .4 to 12. radiometric differences. Source: United States Geological Survey (USGS) 2008. but there are considerable differences.

Landsat data received from satellites is generated into TM corrected data using the NLAPS by:

• correcting and validating the mirror scan and payload correction data
• providing for image framing by generating a series of scene center parameters
• synchronizing telemetry data with video data
• estimating linear motion deviation of scan mirror/scan line corrections
• generating benchmark correction matrices for specified map projections
• producing along- and across-scan high-frequency line matrices

According to the USGS, the products provided by NLAPS include the following:

• image data and the metadata describing the image processing procedure, which contains information describing the process by which the image data were produced
• DEM data and the metadata describing them (available only with terrain corrected products)

Source: United States Geological Survey, 2006a.

NOAA Polar Orbiter Data
NOAA has sponsored several polar orbiting satellites to collect data of the Earth. These satellites were originally designed for meteorological applications, but the data gathered have been used in many fields—from agronomy to oceanography (Needham, 1986). The first of these satellites to be launched was the TIROS-N in 1978. Since the TIROS-N, many additional NOAA satellites have been launched and some continue to gather data.

AVHRR
The Advanced Very High Resolution Radiometer (AVHRR) is an optical multispectral scanner flown aboard National Oceanic and Atmospheric Administration (NOAA) orbiting satellites. The AVHRR sensor provides pole to pole on-board collection of data. The swath width is 2399 km (1491 miles) and the satellites orbit the Earth 14 times each day at an altitude of 833 km (517 miles).

The AVHRR system allows for direct transmission in real-time of data called High Resolution Picture Transmission (HRPT). It also allows for about ten minutes of data to be recorded over any portion of the world on two recorders on board the satellite. These recorded data are called Local Area Coverage (LAC). LAC and HRPT have identical formats; the only difference is that HRPT are transmitted directly and LAC are recorded.

The basic formats for AVHRR data which can be imported into ERDAS IMAGINE are:

• LAC—(Local Area Coverage) data recorded on board the sensor with a spatial resolution of approximately 1.1 × 1.1 km
• HRPT—(High Resolution Picture Transmission) direct transmission of AVHRR data in real-time with the same resolution as LAC
• GAC—(Global Area Coverage) data produced from LAC data by using only 1 out of every 3 scan lines. GAC data have a spatial resolution of approximately 4 × 4 km

AVHRR data are available in 10-bit packed and 16-bit unpacked format. The term packed refers to the way in which the data are written to the tape. Packed data are compressed to fit more data on each tape (Kidwell, 1988).

The AVHRR data collection effort provides cloud mapping, land-water boundaries, snow and ice detection, temperatures of radiating surfaces, and sea surface temperatures. This data is also useful for vegetation studies, snow melt, land cover mapping, world maps, continental maps, and country maps. The USGS also provides a series of derived AVHRR Normalized Difference Vegetation Index (NDVI) Composites and Global Land Cover Characterization (GLCC) data.

Table 12: AVHRR Data Characteristics
Wavelengths in microns are given for NOAA 6/8/10, NOAA 7/9/11/12/14, and NOAA 15/16/17.

Band 1 (0.58 - 0.68): Daytime cloud/surface and vegetation mapping
Band 2 (0.725 - 1.10): Surface water, ice, snow melt, and vegetation mapping
Band 3A (1.58 - 1.64, NOAA 15/16/17 only): Snow and ice detection
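The NDVI used in those composites is a simple ratio of the red and near-infrared channels (AVHRR channels 1 and 2). A minimal sketch of the computation:

# NDVI from AVHRR channel 1 (visible red) and channel 2 (near infrared).
import numpy as np

def ndvi(red, nir):
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)  # small epsilon avoids 0/0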

Band 3B (3.55 - 3.93): Sea surface temperature, night-time cloud mapping
Band 4 (10.3 - 11.3): Sea surface temperature, day and night cloud mapping
Band 5 (11.5 - 12.5; band 4 repeated on NOAA 6/8/10): Sea surface temperature, day and night cloud mapping
Source: United States Geological Survey, 2006a.

AVHRR data have a radiometric resolution of 10-bits, meaning that each pixel has a possible data file value between 0 and 1023. AVHRR scenes may contain one band, a combination of bands, or all bands. All bands are referred to as a full set, and selected bands are referred to as an extract.

Use the Import/Export function to import AVHRR data. See Ordering Raster Data on page 127 for information on the types of NOAA data available.

OrbView-3
OrbView-3 was built for Orbital Imaging Corporation (now GeoEye) and was designed to provide high-resolution imagery. The OrbView-3 satellite provided both 1 meter panchromatic imagery and 4 meter multispectral imagery of the entire Earth. The satellite orbit was sun-synchronous at 470 km, inclined at 97 degrees, with a swath width of 8 km. The OrbView-3 mission began in 2003 with the satellite's launch and the mission is complete.

Orbital Imaging Corporation plans were that "One-meter imagery will enable the viewing of houses, automobiles and aircraft, and will make it possible to create highly precise digital maps and three-dimensional flythrough scenes. Four-meter multispectral imagery will provide color and infrared information to further characterize cities, rural areas and undeveloped land from space" (ORBIMAGE, 1999). Specific applications include telecommunications and utilities, and agriculture and forestry.

Band 1: 450 to 520 nm
Band 2: 520 to 600 nm
Band 3: 625 to 695 nm
Band 4: 760 to 900 nm
Panchromatic: 450 to 900 nm
Source: ORBIMAGE, 1999, 2000; Orbital Sciences Corporation, 2008.

QuickBird
The QuickBird satellite was launched in 2001 by DigitalGlobe offering imagery for map publishing, land and asset management, change detection, and insurance risk assessment. QuickBird produces sub-meter resolution panchromatic and multispectral imagery. The data collection nominal swath width is 16.5 km at nadir, and areas of interest sizes are 16.5 km x 16.5 km for a single area and 16.5 km x 115 km for a strip.
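For completeness, here is a sketch of how 10-bit packed values of the kind described above can be unpacked. It assumes the common NOAA Level 1b convention of three 10-bit samples per 32-bit word; the exact layout should always be checked against the product documentation.

# Unpacking 10-bit packed AVHRR samples (assumed three per 32-bit word).
import numpy as np

def unpack_10bit(words: np.ndarray) -> np.ndarray:
    words = words.astype(np.uint32)
    a = (words >> 20) & 0x3FF   # first sample in each word
    b = (words >> 10) & 0x3FF   # second sample
    c = words & 0x3FF           # third sample
    return np.stack([a, b, c], axis=-1).reshape(-1)  # values 0..1023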

Table 13: QuickBird Characteristics
Geometry of orbit: sun-synchronous
Orbit Altitude: 450 km
Orbit Inclination: 98 degrees
Swath Width: normal: 16.5 km at nadir; accessible ground: 544 km centered on the satellite ground track
Sensor Resolution (ground sample distance at nadir): panchromatic: 61 cm (2 feet); multispectral: 2.4 m (8 feet)
Spectral Bandwidth, Panchromatic: 445 to 900 nm
Spectral Bandwidth, Multispectral: 450 - 520 nm (blue); 520 - 600 nm (green); 630 - 690 nm (red); 760 - 900 nm (near infrared)

Source: DigitalGlobe, 2008a.

RapidEye
The German company RapidEye AG launched a constellation of five satellite sensors in 2008. All five satellites contain equivalent sensors, are calibrated equally to one another, and are located in the same orbital plane. This allows RapidEye to deliver multi-temporal data sets in high resolution in near real-time. The RapidEye satellite system collects imagery in five spectral bands and is the first commercial system to offer the Red-Edge band, which measures variances in vegetation, allowing for species separation and monitoring vegetation health.

RapidEye standard image products are offered at three processing levels:

• RapidEye Basic (Level 1B): geometrically uncorrected, radiometric and sensor corrected
• RapidEye Geo-corrected (Level 2A): geo-corrected with radiometric and geometric corrections and aligned to a map projection
• RapidEye Ortho (Level 3A): orthorectified with radiometric, geometric, and terrain corrections and aligned to a map projection

Table 14: RapidEye Characteristics
Number of Satellites: 5
Orbit Altitude: 630 km in sun-synchronous orbit
Equator Crossing Time: 11:00 am (approximately)
Sensor Type: Multi-spectral push broom imager
Spectral Bands: 440 - 510 nm (Blue); 520 - 590 nm (Green); 630 - 685 nm (Red); 690 - 730 nm (Red Edge); 760 - 850 nm (Near IR)
Ground Sampling Distance (nadir): 6.5 m
Pixel Size (orthorectified): 5 m
Swath Width: 77 km
Revisit Time: Daily (off-nadir) / 5.5 days (at nadir)
Image Capture Capacity: 4 million sq km per day
Dynamic Range: 12 bit

Source: RapidEye AG, 2008; RapidEye AG, 2009; Center for Health Applications of Aerospace Related Technologies, 2008.

SeaWiFS
The Sea-viewing Wide Field-of-View Sensor (SeaWiFS) instrument is on-board the SeaStar spacecraft, which was launched in 1997. The SeaStar spacecraft's orbit is circular, at an altitude of 705 km. The satellite uses an attitude control system (ACS), which maintains orbit, as well as performs solar and lunar calibration maneuvers. The ACS also provides attitude information within one SeaWiFS pixel.

The SeaWiFS instrument is made up of an optical scanner and an electronics module. The swath width is 2,801 km LAC/HRPT (58.3 degrees) and 1,502 km GAC (45 degrees). The spatial resolution is 1.1 km LAC and 4.5 km GAC. The revisit time is one day.

Band 1, Blue: 402 to 422 nm
Band 2, Blue: 433 to 453 nm
Band 3, Cyan: 480 to 500 nm
Band 4, Green: 500 to 520 nm
Band 5, Green: 545 to 565 nm
Band 6, Red: 660 to 680 nm
Band 7, NIR: 745 to 785 nm
Band 8, NIR: 845 to 885 nm
Source: National Aeronautics and Space Administration, 1999.

SPOT 1-3
SPOT 1 satellite was developed by the French Centre National d'Etudes Spatiales (CNES) and launched in early 1986. SPOT 2 satellite, launched in 1990, was the first in the series to carry the DORIS precision positioning instrument. SPOT 3, launched in 1993, also carried the DORIS instrument, plus the American passenger payload POAM II, used to measure atmospheric ozone at the poles. SPOT 3 was decommissioned in 1996 (Spot series, 2006).

This band is especially responsive to the amount of vegetation biomass present in a scene. Red Wavelength (microns) 0.61 to 0.59 μm 0. The width of the swath observed varies between 60 km for nadir viewing and 80 km for off-nadir viewing at a height of 832 km (Jensen. and contains 3 bands (Jensen. 1996).68 μm Comments This band corresponds to the green reflectance of healthy vegetation. The SPOT satellite can observe the same area on the globe once every 26 days. SPOT pushes 3000/6000 sensors along its orbit. 8-bit radiometric resolution. It has a radiometric resolution of 8 bits (Jensen. 1996). Panchromatic SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.50 to 0. and is quite useful for collecting data in a region not directly in the path of the scanner or in the event of a natural or man-made disaster. Band 1. contains 1 band—0. 1996). Using this off-nadir capability.73 μm—and is similar to a black and white photograph. where timeliness of data acquisition is crucial. but it does have off-nadir viewing capability. and scanning is accomplished by the forward motion of the scanner. It is also very useful in collecting stereo data from which elevation data can be extracted. This band is useful for discriminating between plant species. It is also useful for soil boundary and geological boundary delineations. 3.79 to 0. multispectral and panchromatic.The sensors operate in two modes. The SPOT scanner normally produces nadir views. Off-nadir refers to any point that is not directly beneath the detectors. or multispectral.89 μm 88 Raster and Vector Data Sources . SPOT is commonly referred to as a pushbroom scanner meaning that all scanning parts are fixed. one area on the Earth can be viewed as often as every 3 days. Green 2. XS SPOT XS. Reflective IR 0.51 to 0. This off-nadir viewing can be programmed from the ground control station. This is different from Landsat which scans with 16 detectors perpendicular to its orbit. has 20 × 20 m spatial resolution. but off to an angle.

SPOT XS Panc hrom atic 1 band XS 3 bands 1 pixel= 10x10m radiometric resolution 0-255 1 pixel= 20x20m See Ordering Raster Data on page 127 for information on the types of SPOT data available. or topographic and planimetric maps (Jensen.Figure 25: SPOT Panchromatic vs XS SPOT Panchromatic vs. Raster and Vector Data Sources 89 . Topographic maps indicate elevation. Stereoscopic Pairs Two observations can be made by the panchromatic scanner on successive days. This type of imagery can be used to produce a single image. See Topographic Data on page 121 and "Terrain Analysis" on page 645 for more information about topographic data and how SPOT stereopairs and aerial photographs can be used to create elevation data and orthographic images. SPOT 4 The SPOT 4 satellite was launched in 1998. Planimetric maps correctly represent horizontal distances between objects (Star and Estes. 1996). Stereoscopic imagery can also be achieved by using one vertical scene and one off-nadir scene. SPOT 4 carries High Resolution Visible Infrared (HR VIR) instruments that obtain information in the visible and near-infrared spectral bands. so that the two images are acquired at angles on either side of the vertical. 1990). resulting in stereoscopic imagery.

and a panchromatic sensor. Band 1. and generate orthorectified products. carries two new HRVIR viewing instruments which have a better resolution: 2.75 μm 0. This instrument is identical to the VEGETATION 2 instrument on SPOT 4.6 km 90 Raster and Vector Data Sources .89 μm 1. 1999.17.7 days and is capable of collecting up to 750.78 to 0. WorldView-1 The WorldView-1 satellite was launched in 2007 by DigitalGlobe offering imagery for map creation. (mid-IR) Panchromatic Wavelength 0.68 μm Source: SPOT Image. then the rearward-pointing camera covers the same strip 90 seconds later. The multispectral scanner has a pixel size of 20 × 20 m.6 km x up to 330 km Large area . The satellite has an average revisit time of 1. The panchromatic scanner has a pixel size of 10 × 10 m. SPOT Image. which offers a spatial resolution of one kilometer and a wide imaging swath. change detection and in-depth image analysis.000 square kilometers (290. SPOT 5 carries an HRS (High Resolution Stereoscopic) imaging instrument operating in panchromatic mode with multiple cameras. The SPOT 4 satellite has two sensors on board: a multispectral sensor. and a swath width of 60 km. WorldView-1 produces half-meter resolution panchromatic imagery.68 μm 0. Thus HRS is able to acquire stereopair images almost simultaneously to map relief. The data collection options include: • • • Long strip . SPOT 5 also carries VEGETATION 2 instrument.up to 17. and a swath width of 60 km. (near-IR) 4. Source: Spot series. 1998.61 to 0. 2000c.60 km x 110 km Multiple point targets .59 μm 0. 2006. SPOT 5 The SPOT 5 satellite. Center for Health Applications of Aerospace Related Technologies.The SPOT 4 satellite orbits the Earth at 822 km at the Equator. Red 3.000 square miles) per day of half-meter imagery.5 to 5 meters in panchromatic and infrared mode and 10 meters in multispectral mode.61 to 0.50 to 0. Green 2. launched in 2002.58 to 1. The forward-pointing camera acquires images of the ground. produce DEMs.

6 km at nadir 0. 2008b.50 meters GSD at nadir GSD = ground sample 0. The WorldView-2 collection scenarios are: long strip. red edge. change detection. Raster and Vector Data Sources 91 . including 4 new colors: coastal blue. multiple point targets.59 meters GSD at 25° off-nadir distance Spectral Bandwidth Panchromatic Source: DigitalGlobe. and in-depth remote sensing image analysis. large area collect. and stereo area collect. and near IR2. combined with a multispectral capability featuring two meter resolution imagery. Near Infrared 2 overlaps the Near IR1 band but is less affected by atmospheric influence and enables broader vegetation analysis. • • • • Coastal blue is useful for bathymetric studies. pan-sharpened imagery. Red edge measures plant health and is useful for vegetation classification. yellow.30 km x 110 km Table 15: WorldView-1 Characteristics Geometry of orbit Orbit Altitude Swath Width Sensor Resolution sun-synchronous 496 km 17. WorldView-2 Owned and operated by DigitalGlobe. WorldView-2 is a panchromatic imaging system featuring half-meter resolution imagery. Yellow detects the “yellowness” of vegetation on land and in water. WorldView-2 multispectral capability provides 8 spectral bands. WorldView-2 was launched in 2009 to provide highly detailed imagery for precise vector and terrain data creation.• Stereo area .

895 Near IR2: 860 .450 Blue: 450 . 2010 and Padwick.625 Red: 630 .46 meters GSD at nadir 0.800 nm Multispectral (nm): Coastal: 400 . and the backscattered radiation is detected by the radar system’s receiving antenna.84 meters GSD at nadir 2. The resultant radar data can be used to produce radar images.08 meters GSD at 20° off-nadir 16.4 km at nadir Source: DigitalGlobe.510 Green: 510 .690 Red Edge: 705 . which is tuned to the frequency of the transmitted waves. The SAR Metadata Editor in the IMAGINE Radar Mapping Suite can be used to attach SAR image metadata to SAR images including creating or editing the radar ephemeris.745 Near IR1: 770 . There are many sensor-specific importers and direct read capabilities within ERDAS IMAGINE for most types of radar data. 2010.Table 16: WorldView-2 Characteristics Sensor Bands Pan: 450 . 92 Raster and Vector Data Sources .1040 Sensor Resolution GSD = ground sample distance Swath Width Pan: 0. the waves reflect from the surfaces they strike.580 Yellow: 585 .52 meters GSD at 20° off-nadir Multi: 1. radar data are produced when: • • • a radar transmitter emits a beam of micro or millimeter waves. et al. Radar Satellite Data Simply put.

Advantages of Using Radar Data Radar data have several advantages over other types of remotely sensed imagery: • Radar microwaves can penetrate the atmosphere day or night under virtually all weather conditions. providing data even in the presence of haze. Raster and Vector Data Sources 93 . The sensor transmits and receives as it is moving. Surface eddies. spaceborne. or smoke. Although radar does not penetrate standing water. Researchers are finding that a combination of the characteristics of radar data and visible/infrared data is providing a more complete picture of the Earth.A radar system can be airborne. but in 1978. Airborne radar systems have typically been mounted on civilian and military aircraft. SAR sensors are mounted on satellites and the NASA Space Shuttle. In the last decade. and a careful study of surface action can provide accurate details about the bottom features. • • Radar Sensor Types Radar images are generated by two different types of sensors: • SLAR (Side-looking Airborne Radar)—uses an antenna which is fixed below an aircraft and pointed to the side to transmit and receive the radar signal. and other bodies of water. Figure 26 shows a representation of an airborne SLAR system. (See Figure 26. The signals received over a time interval are combined to create the image. radar can partially penetrate arid and hyperarid surfaces. light rain. the radar satellite Seasat-1 was launched. clouds. • Both SLAR and SAR systems use side-looking geometry. or ground-based. and waves are greatly affected by the bottom features of the water body. it can reflect the surface action of oceans. the importance and applications of radar have grown rapidly. revealing subsurface features of the Earth. swells. snow.) SAR—uses a side-looking. lakes. fixed antenna to create a synthetic aperture. The radar data from that mission and subsequent spaceborne radar systems have been a valuable addition to the data available for use in GIS processing. Under certain circumstances.

These data can be used to produce a radar image of the target area. 94 Raster and Vector Data Sources . unlike a passive microwave sensor which simply receives the low-level radiation naturally emitted by targets.Figure 26: SLAR Radar Range Direction Beam Width Sensor Height at Nadir Azimuth Direction Azimuth Resolution Previous Image Lines Source: Lillesand and Kiefer. A target is any object or feature that is the subject of the radar scan. Figure 27: Received Radar Signal Trees Valley Hill Hill Shadow Trees Strength (DN) Time Active and Passive Sensors An active radar sensor gives off a burst of coherent radiation that reflects from the target. 1987 Figure 27 shows a graph of the data received from the radiation transmitted in Figure 26. Notice how the data correspond to the terrain in Figure 26.

391 GHz Wavelength Range 5.6 cm 76. or single versus multiple bounce scattering.75 cm 3.77-2.20-10.9 cm Radar System USGS SLAR ERS-1.0-76. the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area. interfering with each other and producing speckle noise.225-0.B.39-1. Diffuse reflector Specular reflector Corner reflector Source: Lillesand and Kiefer.Like the coherent light from a laser. these waves are no longer in phase. This is due to the different distances they travel from different targets. these bands are commonly used for radar imaging systems: Table 17: Commonly Used Bands for Radar Imaging Band X C L P Frequency Range 5. Once reflected. 1987. Raster and Vector Data Sources 95 .55 GHz 0. Figure 28: Radar Reflection from Different Sources and Distances Radar waves are transmitted in phase.90 GHz 3. Currently. ALOS PALSAR AIRSAR More information about these radar systems is given later in this chapter. RADARSAT-2 SIR-A.9-6.2 GHz 0.8-7.9-19.3 cm 40. they are out of phase. RADARSAT-1.

96 Raster and Vector Data Sources . See "Enhancement" on page 455 and "Radar Concepts" on page 657 for more information on radar imagery enhancement. Wavelength ranges may vary slightly between sensors. NOTE: The C band overlaps the X band. the radar waves can interfere constructively or destructively to produce light and dark pixels known as speckle noise. has lead ERDAS to offer several speckle reduction algorithms. This is especially true when considering the removal of speckle noise. Speckle noise in radar data must be reduced before the data can be utilized. However. The IMAGINE Radar utilities allow you to: • • • • • import radar data into the GIS as a stand-alone source or as an additional layer with other imagery sources remove speckle noise enhance edges perform texture analysis perform radiometric correction IMAGINE OrthoRadar™ allows you to orthorectify radar imagery. combined with the fact that different applications and sensor outputs necessitate different speckle removal models. The letter designations have no special meaning. do not rectify or in any way resample the pixel values before removing speckle noise. A rotation using nearest neighbor might be permissible. Speckle Noise Once out of phase. When processing radar data. The IMAGINE InSAR™ module allows you to generate DEMs from SAR data using interferometric techniques. the order in which the image processing programs are implemented is crucial. This consideration. the radar image processing programs used to reduce speckle noise also produce changes to the image.Radar bands were named arbitrarily when radar was first developed by the military. The IMAGINE StereoSAR DEM™ module allows you to generate DEMs from SAR data using stereoscopic techniques. Since any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image.

Raster and Vector Data Sources 97 . • • • • • • • Radar Sensors Almaz-1 Almaz-1 was launched in 1991 and operated for 18 months before being deorbited in October 1992. Possible GIS applications for radar data include: • Geology—radar’s ability to partially penetrate land cover and sensitivity to micro relief makes radar data useful in geologic mapping.Applications for Radar Data Radar data can be used independently in GIS applications or combined with other satellite data. and monitoring ocean circulation.. The Almaz mission was largely kept secret. and polar oceans. which was attached to a spacecraft. 1997). Ship monitoring—the ability to provide day/night all-weather imaging. SPOT. Source: Russian Space Web. The SAR operated with a single frequency SAR. Classification—a radar scene can be merged with visible/infrared data as an additional layer(s) in vegetation classification for timber mapping. Pollution monitoring—radar can detect oil on the surface of water and can be used to track the spread of an oil spill. as well as detect ships and associated wakes. tides. and archaeology. Glaciology—the ability to provide imagery of ocean and ice phenomena makes radar an important tool for monitoring climatic change through polar ice variation. 2002. determining weather and sea conditions for drilling and installation operations. or AVHRR. It included a “single polarization SAR as well as a sounding radiometric scanner (RMS) system and several infrared bands” (Atlantis Scientific. such as Landsat. Oceanography—radar is used for wind and wave measurement. crop monitoring. Offshore oil activities—radar data are used to provide ice updates for offshore drilling rigs. Almaz-T was launched by the Soviet Union in 1987 and functioned for two years. sea-state and weather forecasting. Hydrology—radar data are proving useful for measuring soil moisture content and mapping snow distribution and water content. Almaz-1 provided S-band information. Inc. mineral exploration. Almaz-1 provided optically-processed data. and so forth. makes radar a tool that can be used for ship navigation through frozen ocean areas such as the Arctic or North Atlantic Passage. and detecting oil spills.

launched in 2006. It provides fine resolution mode and ScanSAR mode. The first two satellites were launched in 2007. the range resolution was 1530 m. The program supplies data for emergency management services. Table 18: PALSAR Sensor Characteristics Fine Mode Center Frequency Polarization Range Resolution Observation Swath Bit Length 1270 MHz (L-band) HH or VV HH+HV or VV+VH 7 to 44 m 14 to 88 m 40 to 70 km 5 bits ScanSAR Mode 1270 MHz (L-band) HH or VV 100m (multilook) 250 to 350 km 5 bits Source: Japan Aerospace Exploration Agency. 2003b. and the fourth satellite is under development. 1997. COSMO-SkyMed COSMO-SkyMed mission is a group of four satellites equipped with radar sensors for Earth observation for civil and defense use. Table 19: COSMO-SkyMed Imaging Characteristics Mode ScanSAR Hugeregion Swath 200 km x 200 km Resolution 100 m pixel 98 Raster and Vector Data Sources . ALOS PALSAR PALSAR. Atlantis Scientific. interferometric products and digital elevation models.. the third satellite launched in 2008. Phased Array type L-band Synthetic Aperture Radar. surveillance.The swath width of Almaz-1 was 20-45 km. environmental resources management. The sensors operate in various wide field and narrow field modes. earth topographic mapping. 1996. which allows a swath three to five times wider than conventional SAR images. The mission was developed by the Italian Space Agency (Agenzia Spaziale Italiana) and Telespazio. is an active microwave sensor on board the ALOS satellite mission. and the azimuth resolution was 15 m. natural resources monitoring. with multi-polarmetric and multi-temporal capabilities. maritime management. Inc. Source: National Aeronautics and Space Administration.

Radar Altimeter 2 determines the two-way delay of the radar echo from the Earth’s surface to a very high precision. ocean studies and ice studies.15 m pixel 15 m pixel Classified 1 m pixel Sources: Telespazio. MERIS . notably atmospheric chemistry. and ice. AATSR . medium-spectral resolution spectrometer operating in the 390 nm to 1040 nm spectral range. an advanced polar-orbiting Earth observation satellite which provides measurements of the atmosphere.Advanced Along Track Scanning Radiometer continues the collection of ATSR-1 and ATSR-2 sea surface temperature data sets. • • Raster and Vector Data Sources 99 . 2008 and e-GEOS. the European Space Agency launched Envisat (ENVIronmental SATellite). ocean. Envisat mission provides for continuity of the observations started with the ERS-1 and ERS-2 satellite missions. Also measures the power and shape of reflected radar pulses. Envisat is equipped with these instruments: • • • ASAR . 2008.Microwave radiometer measures the integrated atmospheric water vapor column and cloud liquid water content. Envisat flies in a sun-synchronous polar orbit at about 800 km altitude. as correction terms for the radar altimeter signal.Programmable. land.Advanced Synthetic Aperture Radar operating at C-band. with a repeat cycle of 35 days. Envisat In 2002. RA-2 .Table 19: COSMO-SkyMed Imaging Characteristics Mode ScanSAR Wideregion Stripmap HImage Stripmap Pingpong Spotlight 1 Spotlight2 Swath 100 km x 100 km 40 km x 40 km 30 km x 30 km Classified 10 km x 10 km Resolution 30 m pixel 3 . MWR .

SAR Wave Mode. was launched by ESA in July of 1991. and 926 nm to 952 nm. Some of the information obtained from the ERS-1 and ERS-2 missions include: • • • maps of the surface of the Earth through clouds physical ocean features and atmospheric phenomena maps and ice patterns of polar regions 100 Raster and Vector Data Sources . announced the end of the ERS-1 mission in March 2000. The ERS-1 was ESA’s first sun-synchronous polar-orbiting mission.Doppler Orbitography and Radio-positioning Integrated by Satellite instrument is a tracking system to determine the precise location of Envisat satellite.Medium resolution spectrometer measures atmospheric constituents in the spectral bands between 250 nm to 675 nm. SCIAMACHY . The instruments aboard ERS-1 include: SAR Image Mode. Wind Scatterometer. LRR . a radar satellite. It includes two photometers measuring in the spectral bands between 470 nm to 520 nm and 650 nm to 700 nm. These and other critical measurements are continued by the ERS-2 mission and Envisat. 756 nm to 773 nm.Michelson Interferometer for Passive Atmospheric Sounding is a Fourier transform spectrometer for measuring gaseous emission spectra in the near to mid infrared range.5 million Synthetic Aperture Radar scenes. 2008a. • • • • Source: European Space Agency.Laser Retro-Reflector tracks orbit determination and range measurement calibration. DORIS . acquiring more than 1. 1997). 2008b. One of its primary instruments was the Along-Track Scanning Radiometer (ATSR).Imaging spectrometer measures trace gases in the troposphere and stratosphere. Source: European Space Agency. ESA.• GOMOS . and Along Track Scanning Radiometer-1 (European Space Agency. The ATSR monitors changes in vegetation of the Earth’s surface. European Space Agency. ERS-1 ERS-1. The measurements of sea surface temperatures made by the ERS-1 AlongTrack Scanning Radiometer are the most accurate ever from space. Radar Altimeter. MIPAS .

• • database information for use in modeling surface elevation changes According to ESA. Data obtained from ERS-2 used in conjunction with that from ERS-1 enables you to perform interferometric tasks. 1995). ERS-2 provides many different types of information. 1995. Using the data from the two sensors. IMAGINE InSAR. which stands for Global Ozone Monitoring Experiment. Facilities that process and archive ERS-2 data are also located around the globe. ERS-2. Wind Scatterometer. The JERS-1 satellite obtained data from 1992 to 1998. Along Track Scanning Radiometer-2. One of the benefits of the ERS-2 satellite is that it can provide data from the exact same type of synthetic aperture radar (SAR). It has an instrument called GOME. JERS-1 JERS stands for Japanese Earth Resources Satellite. like ERS-1 makes use of the ATSR. Raster and Vector Data Sources 101 . which are of great importance for many industrial activities (European Space Agency. This instrument is designed to evaluate atmospheric chemistry. ERS-2 receiving stations are located all over the world. has allowed the development of time-critical applications particularly in weather. marine and ice forecasting. offering global data sets within three hours of observation. regardless of cloud coverage and sunlight conditions. . DEMs can be created. was launched by ESA in April 1995. 1995. a radar satellite. Source: European Space Agency. The instruments aboard ERS-2 include: SAR Image Mode. SAR Wave Mode. An operational near-real-time capability for data acquisition. ERS-2 ERS-2. see IMAGINE InSAR Theory on page 679. .ERS-1 provides both global and regional views of the Earth. Source: European Space Agency. and the Global Ozone Monitoring Experiment. . and has been superseded by the ALOS mission. processing and dissemination. Radar Altimeter. See ERS-1 on page 100 for some of the most common types. For information about ERDAS IMAGINE’s interferometric software.

See ALOS on page 68 for information about the Advanced Land Observing Satellite (ALOS). Source: Japan Aerospace Exploration Agency, 2007. The JERS-1 satellite was launched in February of 1992, with an SAR instrument and a 4-band optical sensor aboard. The SAR sensor’s ground resolution was 18 m, and the optical sensor’s ground resolution was roughly 18 m across-track and 24 m along-track. The revisit time of the satellite was every 44 days. The satellite travelled at an altitude of 568 km, at an inclination of 97.67°. Table 20: JERS-1 Bands and Wavelengths Band
1 2 3 41 5 6 7 8
1

Wavelength
0.52 to 0.60 μm 0.63 to 0.69 μm 0.76 to 0.86 μm 0.76 to 0.86 μm 1.60 to 1.71 μm 2.01 to 2.12 μm 2.13 to 2.25 μm 2.27 to 2.40 μm

Viewing 15.3° forward

Source: Earth Remote Sensing Data Analysis Center, 2000. JERS-1 data comes in two different formats: European and Worldwide. The European data format consists mainly of coverage for Europe and Antarctica. The Worldwide data format has images that were acquired from stations around the globe. According to NASA, “a reduction in transmitter power has limited the use of JERS-1 data” (National Aeronautics and Space Administration, 1996). Source: Eurimage, 1998; National Aeronautics and Space Administration, 1996. RADARSAT The RADARSAT satellite was developed by the Canadian Space Agency and launched in 1995. With the development of RADARSAT-2, the original RADARSAT is also known as RADARSAT-1.

102

Raster and Vector Data Sources

The RADARSAT satellite carries SARs, which are capable of transmitting signals that can be received through clouds and during nighttime hours. RADARSAT satellite has multiple imaging modes for collecting data, which include Fine, Standard, Wide, ScanSAR Narrow, ScanSAR Wide, Extended (H), and Extended (L). The resolution and swath width varies with each one of these modes, but in general, Fine offers the best resolution: 8 m. Table 21: RADARSAT Beam Mode Resolution Beam Mode
Fine Beam Mode Standard Beam Mode Wide Beam Mode ScanSAR Narrow Beam Mode ScanSAR Wide Beam Mode Extended High Beam Mode Low Beam Mode

Resolution
8m 25 m 30 m 50 m 100 m 25 m 35 m

The types of RADARSAT image products include: Single Data, Single Look Complex, Path Image, Path Image Plus, Map Image, Precision Map Image, and Orthorectified. You can obtain this data in forms ranging from CD-ROM to print. The RADARSAT satellite uses a single frequency, C-band. The altitude of the satellite is 496 miles, or 798 km. The satellite is able to image the entire Earth, and its path is repeated every 24 days. The swath width is 500 km. Daily coverage is available of the Arctic, and any area of Canada can be obtained within three days. Source: RADARSAT, 1999; Space Imaging, 1999c. RADARSAT-2 RADARSAT-2, launched in 2007, is a SAR satellite developed by the Canadian Space Agency and MacDonald, Dettwiler, and Associates, Ltd. (MDA). The satellite advancements include 3 meter high-resolution imaging, flexibility in polarization selection, left and right-looking imaging options, and superior data storage. In addition to RADARSAT-1 beam modes, RADARSAT-2 offers UltraFine, Multi-Look Fine, Fine Quad-Pol, and Standard Quad-Pol beam modes. Quadrature-polarization means that four images are acquired simultaneously; two co-polarized images (HH and VV) and two crosspolarized images (HV and VH).

Raster and Vector Data Sources

103

Table 22: RADARSAT-2 Characteristics
Geometry of orbit Orbit Altitude Orbit Inclination Orbit repeat cycle Frequency Band Channel Bandwidth Channel Polarization Spatial Resolution near-polar, sun-synchronous 798 km 98.6 degrees 24 days C-band (5.405 GHz) 11.6, 17.3, 30, 50, 100 MHz HH, HV, VH, VV 3 meters to 100 meters

Source: RADARSAT-2, 2008. SIR-A SIR stands for Spaceborne Imaging Radar. SIR-A was launched and collected data in 1981. The SIR-A mission built on the Seasat SAR mission that preceded it by increasing the incidence angle with which it captured images. The primary goal of the SIR-A mission was to collect geological information. This information did not have as pronounced a layover effect as previous imagery. An important achievement of SIR-A data is that it was capable of penetrating surfaces to obtain information. For example, NASA says that the L-band capability of SIR-A enabled the discovery of dry river beds in the Sahara Desert. SIR-1 used L-band, had a swath width of 50 km, a range resolution of 40 m, and an azimuth resolution of 40 m (Atlantis Scientific, Inc., 1997).

For information on the ERDAS IMAGINE software that reduces layover effect, IMAGINE OrthoRadar, see IMAGINE OrthoRadar Theory on page 657. Source: National Aeronautics and Space Administration, 1995a; National Aeronautics and Space Administration, 1996; Atlantis Scientific, Inc., 1997.

104

Raster and Vector Data Sources

SIR-B SIR-B was launched and collected data in 1984. SIR-B improved over SIR-A by using an articulating antenna. This antenna allowed the incidence angle to range between 15 and 60 degrees. This enabled the mapping of surface features using “multiple-incidence angle backscatter signatures” (National Aeronautics and Space Administration, 1996). SIR-B used L-band, has a swath width of 10-60 km, a range resolution of 60-10 m, and an azimuth resolution of 25 m (Atlantis Scientific, Inc., 1997). Source: National Aeronautics and Space Administration, 1995a, National Aeronautics and Space Administration, 1996; Atlantis Scientific, Inc., 1997. SIR-C SIR-C sensor was flown onboard two separate NASA Space Shuttle flights in 1994. Flight 1 was notable for a fully polarimetric spaceborne SAR, multi-frequency, X-band, and demonstrated ScanSAR for wide swath array. Flight 2 was notable for the first SAR to re-fly, targeted repeat-pass interferometry, and also demonstrated ScanSAR for wide swath array. Source: National Aeronautics and Space Administration, 2006. SIR-C is part of a radar system, SIR-C/X-SAR, which flew in 1994. The system is able to “. . .measure, from space, the radar signature of the surface at three different wavelengths, and to make measurements for different polarizations at two of those wavelengths” (National Aeronautics and Space Administration, 1997). Moreover, it can supply “. . .images of the magnitude of radar backscatter for four polarization combinations” (National Aeronautics and Space Administration, 1995a). The data provided by SIR-C/X-SAR allows measurement of the following: • • • • • vegetation type, extent, and deforestation soil moisture content ocean dynamics, wave and surface wind speeds and directions volcanism and tectonic activity soil erosion and desertification

Raster and Vector Data Sources

105

The antenna of the system is composed of three antennas: one at Lband, one at C-band, and one at X-band. The antenna was assembled by the Jet Propulsion Laboratory. The acquisition of data at three different wavelengths makes SIR-C/X-SAR data very useful. The SIRC and X-SAR do not have to be operated together: they can also be operated independent of one another. SIR-C/X-SAR data come in resolutions from 10 to 200 m. The swath width of the sensor varies from 15 to 90 km, which depends on the direction the antenna is pointing. The system orbited the Earth at 225 km above the surface. Table 23: SIR-C/X-SAR Bands and Frequencies Bands
L-Band C-Band X-Band

Wavelength
0.235 m 0.058 m 0.031 m

Source: National Aeronautics and Space Administration, 1995a, National Aeronautics and Space Administration, 1997. TerraSAR-X TerraSAR-X, launched in 2007, is a German satellite manufactured in a public private partnership between the German Aerospace Center (DLR), Astrium GmbH, and the German Ministry of Education and Science (BMBF). TerraSAR-X carries a high frequency X-band SAR instrument based on an active phased array antenna technology. The satellite orbit is sunsynchronous at 514 km altitude at 98 degrees inclination and 11 days repeat cycle. The satellite sensor operates in several modes; Spotlight, high Resolution Spotlight, Stripmap, and ScanSAR, at varying geometrical resolutions between 1 and 16 meters. It provides single or dual polarization data. Table 24: TerraSAR-X Imaging Characteristics Mode
Spotlight (SL) High Resolution Spotlight (HS) Stripmap (SM) ScanSAR (SC)

Swath
10 x 10 km scene 5 km x 10 km scene 30 km strip 100 km strip

Resolution
1 - 3 meters 1 - 2 meters 3 - 6 meters 16 meters

106

Raster and Vector Data Sources

Source: DLR (German Aerospace Center). 2008.

Image Data from Aircraft

Image data can also be acquired from multispectral scanners or radar sensors aboard aircraft, as well as satellites. This is useful if there is not time to wait for the next satellite to pass over a particular area, or if it is necessary to achieve a specific spatial or spectral resolution that cannot be attained with satellite sensors. For example, this type of data can be beneficial in the event of a natural or man-made disaster, because there is more control over when and where the data are gathered. Two common types of airborne image data are: • • Airborne Synthetic Aperture Radar (AIRSAR) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)

Aircraft Radar Imagery
AIRSAR AIRSAR was an experimental airborne radar sensor developed by JPL, Pasadena, California, under a contract with NASA. The AIRSAR mission extended from 1988 to 2004. AIRSAR was an imaging tool mounted aboard a modified NASA DC-8 aircraft. This sensor collected data at three frequencies: • • • C-band L-band P-band

Source: National Aeronautics and Space Administration, 2008. Because this sensor measured at three different wavelengths, different scales of surface roughness were obtained. The AIRSAR sensor had an IFOV of 10 m and a swath width of 12 km. AIRSAR data have been used in many applications such as measuring snow wetness, classifying vegetation, and estimating soil moisture. NOTE: These data are distributed in a compressed format. They must be decompressed before loading with an algorithm available from JPL. See Addresses to Contact on page 128 for contact information.

Raster and Vector Data Sources

107

Aircraft Optical Imagery
AVIRIS The AVIRIS was also developed by JPL under a contract with NASA. AVIRIS data have been available since 1992. This sensor produces multispectral data that have 224 narrow bands. These bands are 10 nm wide and cover the spectral range of .4 - 2.4 nm. The swath width is 11 km, and the spatial resolution is 20 m. This sensor is flown at an altitude of approximately 20 km. The data are recorded at 10-bit radiometric resolution. Daedalus TMS Daedalus is a thematic mapper simulator (TMS), which simulates the characteristics, such as spatial and radiometric, of the TM sensor on Landsat spacecraft. The Daedalus TMS orbits at 65,000 feet, and has a ground resolution of 25 meters. The total scan angle is 43 degrees, and the swath width is 15.6 km. Daedalus TMS is flown aboard the NASA ER-2 aircraft. The Daedalus TMS spectral bands are as follows: Table 25: Daedalus TMS Bands and Wavelengths Daedalus Channel
1 2 3 4 5 6 7 8 9 10 11 12

TM Band
A 1 2 B 3 C 4 D 5 7 6 6

Wavelength
0.42 to 0.45 μm 0.45 to 0.52 μm 0.52 to 0.60 μm 0.60 to 0.62 μm 0.63 to 0.69 μm 0.69 to 0.75 μm 0.76 to 0.90 μm 0.91 to 1.05 μm 1.55 to 1.75 μm 2.08 to 2.35 μm 8.5 to 14.0 μm low gain 8.5 to 14.0 μm high gain

Source: National Aeronautics and Space Administration, 1995b

108

Raster and Vector Data Sources

Image Data from Scanning

Hardcopy maps and photographs can be incorporated into the ERDAS IMAGINE environment through the use of a scanning device to transfer them into a digital (raster) format. In scanning, the map, photograph, transparency, or other object to be scanned is typically placed on a flat surface, and the scanner scans across the object to record the image. The image is then transferred from analog to digital data. There are many commonly used scanners for GIS and other desktop applications, such as Eikonix (Eikonix Corp., Huntsville, Alabama) or Vexcel (Vexcel Imaging Corp., Boulder, Colorado). Many scanners produce a Tagged Image File Format (TIFF) file, which can be used directly by ERDAS IMAGINE.

Use the Import/Export function to import scanned data.Eikonix data can be obtained in the ERDAS IMAGINE .img format using the XSCAN™ Tool by Ektron and then imported directly into ERDAS IMAGINE.

Photogrammetric Scanners

There are photogrammetric high quality scanners and desktop scanners. Photogrammetric quality scanners are special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar to traditional analog and analytical photogrammetric instruments. These scanners are necessary for digital photogrammetric applications that have high accuracy requirements. These units usually scan only film because film is superior to paper, both in terms of image detail and geometry. These units usually have a Root Mean Square Error (RMSE) positional accuracy of 4 microns or less, and are capable of scanning at a maximum resolution of 5 to 10 microns. The required pixel resolution varies depending on the application. Aerial triangulation and feature collection applications often scan in the 10 to 15 micron range. Orthophoto applications often use 15- to 30-micron pixels. Color film is less sharp than panchromatic, therefore color ortho applications often use 20- to 40-micron pixels.

Desktop Scanners

Desktop scanners are general purpose devices. They lack the image detail and geometric accuracy of photogrammetric quality units, but they are much less expensive. When using a desktop scanner, you should make sure that the active area is at least 9 × 9 inches (that is, A3-type scanners), enabling you to capture the entire photo frame.

Raster and Vector Data Sources

109

Desktop scanners are appropriate for less rigorous uses, such as digital photogrammetry in support of GIS or remote sensing applications. Calibrating these units improves geometric accuracy, but the results are still inferior to photogrammetric units. The image correlation techniques that are necessary for automatic tie point collection and elevation extraction are often sensitive to scan quality. Therefore, errors can be introduced into the photogrammetric solution that are attributable to scanning errors.

Aerial Photography

Aerial photographs, such as NAPP photos, are most widely used data sources in photogrammetry. They can not be utilized in softcopy or digital photogrammetric applications until scanned. The standard dimensions of the aerial photos are 9 × 9 inches or 230 × 230 mm. The ground area covered by the photo depends on the scale. The scanning resolution determines the digital image file size and pixel size. For example, for a 1:40,000 scale standard block of white aerial photos scanned at 25 microns (1016 dots per inch), the ground pixel size is 1 × 1 m2. The resulting file size is about 85 MB. It is not recommended to scan a photo with a scanning resolution less than 5 microns or larger than 5080 dpi.

DOQs

DOQ stands for digital orthophoto quadrangle. USGS defines a DOQ as a computer-generated image of an aerial photo, which has been orthorectified to give it map coordinates. DOQs can provide accurate map measurements. The format of the DOQ is a grayscale image that covers 3.75 minutes of latitude by 3.75 minutes of longitude. DOQs use the North American Datum of 1983, and the Universal Transverse Mercator projection. Each pixel of a DOQ represents a square meter. 3.75-minute quarter quadrangles have a 1:12,000 scale. 7.5-minute quadrangles have a 1:24,000 scale. Some DOQs are available in color-infrared, which is especially useful for vegetation monitoring. DOQs can be used in land use and planning, management of natural resources, environmental impact assessments, and watershed analysis, among other applications. A DOQ can also be used as “a cartographic base on which to overlay any number of associated thematic layers for displaying, generating, and modifying planimetric data or associated data files” (United States Geological Survey, 1999b). According to the USGS: DOQ production begins with an aerial photo and requires four elements: (1) at least three ground positions that can be identified within the photo; (2) camera calibration specifications, such as focal length; (3) a digital elevation model (DEM) of the area covered by the photo; (4) and a high-resolution digital image of the photo, produced by scanning. The photo is processed pixel by pixel to produce an image with features in true geographic positions (United States Geological Survey, 1999b).

110

Raster and Vector Data Sources

Source: United States Geological Survey, 1999b.

ADRG Data

ADRG (ARC Digitized Raster Graphic) data come from the National Imagery and Mapping Agency (NIMA), which was formerly known as the Defense Mapping Agency (DMA). ADRG data are primarily used for military purposes by defense contractors. The data are in 128 × 128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide large amounts of hardcopy graphic data without having to store and maintain the actual hardcopy graphics. ADRG data consist of digital copies of NIMA hardcopy graphics transformed into the ARC system and accompanied by ASCII encoded support files. These digital copies are produced by scanning each hardcopy graphic into three images: red, green, and blue. The data are scanned at a nominal collection interval of 100 microns (254 lines per inch). When these images are combined, they provide a 3-band digital representation of the original hardcopy graphic.

ARC System

The ARC system (Equal Arc-Second Raster Chart/Map) provides a rectangular coordinate and projection system at any scale for the Earth’s ellipsoid, based on the World Geodetic System 1984 (WGS 84). The ARC System divides the surface of the ellipsoid into 18 latitudinal bands called zones. Zones 1 - 9 cover the Northern hemisphere and zones 10 - 18 cover the Southern hemisphere. Zone 9 is the North Polar region. Zone 18 is the South Polar region. Distribution Rectangles For distribution, ADRG are divided into geographic data sets called Distribution Rectangles (DRs). A DR may include data from one or more source charts or maps. The boundary of a DR is a geographic rectangle that typically coincides with chart and map neatlines. Zone Distribution Rectangles Each DR is divided into Zone Distribution Rectangles (ZDRs). There is one ZDR for each ARC System zone covering any part of the DR. The ZDR contains all the DR data that fall within that zone’s limits. ZDRs typically overlap by 1,024 rows of pixels, which allows for easier mosaicking. Each ZDR is stored on the CD-ROM as a single raster image file (.IMG). Included in each image file are all raster data for a DR from a single ARC System zone, and padding pixels needed to fulfill format requirements. The padding pixels are black and have a zero value.

The padding pixels are not imported by ERDAS IMAGINE, nor are they counted when figuring the pixel height and width of each image.

Raster and Vector Data Sources

111

ADRG File Format

Each CD-ROM contains up to eight different file types which make up the ADRG format. ERDAS IMAGINE imports three types of ADRG data files: • • • .OVR (Overview) .IMG (Image) .Lxx (Legend or marginalia data)

NOTE: Compressed ADRG (CADRG) is a different format, which may be imported or read directly.

The ADRG .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr file formats.

.OVR (overview)

The overview file contains a 16:1 reduced resolution image of the whole DR. There is an overview file for each DR on a CD-ROM.

Importing ADRG Subsets Since DRs can be rather large, it may be beneficial to import a subset of the DR data for the application. ERDAS IMAGINE enables you to define a subset of the data from the preview image (see Figure 30).

You can import from only one ZDR at a time. If a subset covers multiple ZDRs, they must be imported separately and mosaicked with the Mosaic option. Figure 29: ADRG Overview File Displayed in a Viewer

112

Raster and Vector Data Sources

The white rectangle in Figure 30 represents the DR. The subset area in this illustration would have to be imported as three files: one for each zone in the DR. Notice how the ZDRs overlap. Therefore, the .IMG files for Zones 2 and 4 would also be included in the subset area. Figure 30: Subset Area with Overlapping ZDRs
Zone 4

overlap area

overlap area

Subset Area

Zone 3

Zone 2

.IMG (scanned image data)

The .IMG files are the data files containing the actual scanned hardcopy graphic(s). Each .IMG file contains one ZDR plus padding pixels. The Import function converts the .IMG data files on the CD-ROM to the ERDAS IMAGINE file format (.img). The image file can then be displayed in a Viewer. Legend files contain a variety of diagrams and accompanying information. This is information that typically appears in the margin or legend of the source graphic.

.Lxx (legend data)

This information can be imported into ERDAS IMAGINE and viewed. It can also be added to a map composition with the ERDAS IMAGINE Map Composer. Each legend file contains information based on one of these diagram types: • Index (IN)—shows the approximate geographical position of the graphic and its relationship to other graphics in the region.

Raster and Vector Data Sources

113

Elevation/Depth Tint (EL)—depicts the colors or tints using a multicolored graphic that represent different elevations or depth bands on the printed map or chart. Slope (SL)—represents the percent and degree of slope appearing in slope bands. Boundary (BN)—depicts the geopolitical boundaries included on the map or chart. Accuracy (HA, VA, AC)—depicts the horizontal and vertical accuracies of selected map or chart areas. AC represents a combined horizontal and vertical accuracy diagram. Geographic Reference (GE)—depicts the positioning information as referenced to the World Geographic Reference System. Grid Reference (GR)—depicts specific information needed for positional determination with reference to a particular grid system. Glossary (GL)—gives brief lists of foreign geographical names appearing on the map or chart with their English-language equivalents. Landmark Feature Symbols (LS)—depict navigationally-prominent entities.

• • •

• • •

ARC System Charts The ADRG data on each CD-ROM are based on one of these chart types from the ARC system: Table 26: ARC System Chart Types ARC System Chart Type
GNC (Global Navigation Chart) JNC-A (Jet Navigation Chart - Air) JNC (Jet Navigation Chart) ONC (Operational Navigation Chart) TPC (Tactical Pilot Chart) JOG-A (Joint Operations Graphic - Air) JOG-G (Joint Operations Graphic - Ground) JOG-C (Joint Operations Graphic - Combined) JOG-R (Joint Operations Graphic - Radar)

Scale
1:5,000,000 1:3,000,000 1:2,000,000 1:1,000,000 1:500,000 1:250,000 1:250,000 1:250,000 1:250,000

114

Raster and Vector Data Sources

Table 26: ARC System Chart Types ARC System Chart Type
ATC (Series 200 Air Target Chart) TLM (Topographic Line Map)

Scale
1:200,000 1:50,000

Each ARC System chart type has certain legend files associated with the image(s) on the CD-ROM. The legend files associated with each chart type are checked in Table 27. Table 27: Legend Files for the ARC System Chart Types ARC System Chart
GNC JNC / JNC-A ONC TPC JOG-A JOG-G / JOG-C JOG-R ATC TLM

IN

EL

SL

BN

VA

HA

AC

GE

GR

GL

LS

• • • • • • • • •


• • • • • • • • • • • • • • • • • •

• • • • • • • • • •

• • • • • •

• •

ADRG File Naming Convention

The ADRG file naming convention is based on a series of codes: ssccddzz • • • ss = the chart series code (see the table of ARC System charts) cc = the country code dd = the DR number on the CD-ROM (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south. zz = the zone rectangle number (01-18)

For example, in the ADRG filename JNUR0101.IMG: • JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.

Raster and Vector Data Sources

115

• • • •

UR = Europe. The data is coverage of a European continent. 01 = This is the first DR on the CD-ROM, providing coverage of the northwestern edge of the image area. 01 = This is the first zone rectangle of the DR. .IMG = This file contains the actual scanned image data for a ZDR.

You may change this name when the file is imported into ERDAS IMAGINE. If you do not specify a file name, ERDAS IMAGINE uses the ADRG file name for the image. Legend File Names Legend file names include a code to designate the type of diagram information contained in the file (see the previous legend file description). For example, the file JNUR01IN.L01 means: • • • • • JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart. UR = Europe. The data is coverage of a European continent. 01 = This is the first DR on the CD-ROM, providing coverage of the northwestern edge of the image area. IN = This indicates that this file is an index diagram from the original hardcopy graphic. .L01 = This legend file contains information for the source graphic 01. The source graphics in each DR are numbered beginning with 01 for the northwesternmost source graphic, increasing sequentially west to east, then north to south. Source directories and their files include this number code within their names.

For more detailed information on ADRG file naming conventions, see the National Imagery and Mapping Agency Product Specifications for ARC Digitized Raster Graphics (ADRG), published by the NIMA Aerospace Center.

ADRI Data

ADRI (ARC Digital Raster Imagery), like ADRG data, are also from the NIMA and are currently available only to Department of Defense contractors. The data are in 128 × 128 tiled, 8-bit format, stored on 8 mm tape in band sequential format.

116

Raster and Vector Data Sources

ADRI consists of SPOT panchromatic satellite imagery transformed into the ARC system and accompanied by ASCII encoded support files. Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR consists of all or part of one or more images mosaicked to meet the ARC bounding rectangle, which encloses a 1 degree by 1 degree geographic area. (See Figure 31.) Source images are orthorectified to mean sea level using NIMA Level I Digital Terrain Elevation Data (DTED) or equivalent data (Air Force Intelligence Support Agency, 1991).

See the previous section on ADRG data for more information on the ARC system. See DTED on page 123 for more information. Figure 31: Seamless Nine Image DR

Image 1

Image 2 3

Image 4

Image 5 Image 6

Image 8 7

Image 9

In ADRI data, each DR contains only one ZDR. Each ZDR is stored as a single raster image file, with no overlapping areas. There are six different file types that make up the ADRI format: two types of data files, three types of header files, and a color test patch file. ERDAS IMAGINE imports two types of ADRI data files: • • .OVR (Overview) .IMG (Image)

Raster and Vector Data Sources

117

The . Padding pixels are not imported.IMG and .OVR) contains a 16:1 reduced resolution image of the whole DR. Figure 32: ADRI Overview File Displayed in a Viewer . Each .OVR (overview) The overview file (. Padding pixels are black and have a zero data value. (See Figure 32.OVR file formats are different from the ERDAS IMAGINE . .img).ovr file formats. nor are they counted in image height or width. There is an overview file for each DR on a tape. The ERDAS IMAGINE Import function converts the .IMG data files to the ERDAS IMAGINE file format (.The ADRI .img and .) This does not appear on the ZDR image.OVR images show the mosaicking from the source images and the dates when the source images were collected.IMG files contain the actual mosaicked images.IMG (scanned image data) The . The image file can then be displayed in a Viewer. The ADRI file naming convention is based on a series of codes: ssccddzz • ss = the image source code: • SP (SPOT panchromatic) SX (SPOT multispectral) (not currently available) TM (Landsat Thematic Mapper) (not currently available) ADRI File Naming Convention cc = the country code 118 Raster and Vector Data Sources .IMG file contains one ZDR plus any padding pixels needed to fit the ARC boundaries.

Zone 9 is the North Polar region. data is in the Equirectangular projection. and measurement is possible. zz = the zone rectangle number (01-18) • For example.IMG = This file contains the actual scanned image data for a ZDR. with or without a pseudocolor lookup table. but may be easier to combine with other data in Geographic coordinates. Raster Product Format The Raster Product Format (RPF). Two military products are currently based upon the general RPF specification: Raster and Vector Data Sources 119 . The ARC System divides the surface of the ellipsoid into 18 latitudinal bands called zones. ERDAS IMAGINE includes the option to use either Equirectangular or Geographic coordinates for nonpolar RPF data.IMG: • • • • • SP = SPOT 10 m panchromatic image UR = Europe. based on the World Geodetic System 1984 (WGS 84). RPF Data are projected to the ARC system. RPF data are organized in 1536 × 1536 frames. ERDAS IMAGINE uses the ADRI file name for the image. with an internal tile size of 256 × 256 pixels. If you do not specify a file name. DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east. which is proportional to latitude and longitude. In nonpolar zones. The aspect ratio of projected RPF data is nearly 1. from NIMA. then north to south. The data is coverage of a European continent. Zones 1-9 cover the Northern hemisphere and zones A-J cover the Southern hemisphere. You may change this name when the file is imported into ERDAS IMAGINE. providing coverage of the northwestern edge of the image area 01 = This is the first zone rectangle of the Distribution Rectangle. is primarily used for military purposes by defense contractors. frames appear to be square.• dd = the DR number on the tape (01-99). on CD-ROM. Unprojected RPFs seldom have an aspect of ratio of 1. 01 = This is the first Distribution Rectangle on the CD-ROM. . RPF data are stored in an 8-bit format. Zone J is the South Polar region Polar data is projected to the Azimuthal Equidistant projection. in the ADRI filename SPUR0101.

This RPF directory is often referred to as the root of the product. VQ evaluates all of the vectors within the image. named A. The RPF directory contains a table-of-contents file. included in ERDAS IMAGINE. Loading RPF Data RPF frames may be imported or read directly. which describes the location of all of the frames in the product. ERDAS IMAGINE supplies four image types related to RPF: • RPF Product—combines the entire contents of an RPF CD. Most of the processing effort of VQ is incurred in the compression stage. RPF frame file names typically encode the map zone and location of the frame within the map series. and table-of-contents files are physically formatted within an NITF message. The direct read feature. overview images. • All RPF frames. Since only 4096 unique vector values are possible.OVx file extension. and reduces each vector into a single 12-bit lookup value. is generally preferable since multiple frames with the same resolution can be read as a single image. RPF Frame—reads a single frame file.OV1. RPF Overview—reads a single overview frame file.TOC. Overview images typically have an . ERDAS IMAGINE treats RPF and NITF as distinct formats. A vector is a 4 × 4 tile of 8-bit pixel values. RPF data are stored on CD-ROM. permitting fast decompression by the users of the data in the field. Since an RPF image is broken up into several NITF messages.• • Controlled Image Base (CIB) Compressed ADRG (CADRG) RPF employs Vector Quantization (VQ) to compress the frames. such as . Overview images illustrate the location of a set of frames with respect to political and geographic boundaries. Import may still be desirable if you wish to examine the metadata provided by a specific frame.OVR or . VQ is lossy. as a single image. provided all frames are within the same ARC map zone and resolution. and The RPF directory contains one or more subdirectories containing RPF frame files. • • 120 Raster and Vector Data Sources . Overview images may appear at various points in the directory tree. excluding overview images. with the following structure: • • • The root of the CD-ROM contains an RPF directory.The RPF directory at the root of the CD-ROM is the image to be loaded. but the space savings are substantial.

most available elevation data are created with stereo photography and topographic maps. Arc/second data are often referred to by the number of seconds in each pixel. which is physically formatted as a compressed RPF. The data are not rectangular. but follow the arc of the Earth’s latitudinal and longitudinal lines. CADRG data consist of digital copies of NIMA hardcopy graphics transformed into the ARC system. 3 arc/second data have pixels which are 3 × 3 seconds in size. Compressed Aeronautical Chart (CAC). Arc/second refers to data in the Latitude/Longitude (Lat/Lon) coordinate system. the first pixel of the record is the southernmost pixel. Each degree of latitude and longitude is made up of 60 minutes. CADRG Topographic Data Satellite data can also be used to create elevation. but can be produced from other sources of imagery. due to the coarser collection interval. Raster and Vector Data Sources 121 . Figure 33 illustrates a 1° × 1° area of the Earth. The profiles of DEM and DTED run south to north. and Compressed Raster Graphics (CRG). as discussed above under SPOT. For example. ERDAS IMAGINE software can load and use: • • USGS DEMs DTED Arc/second Format Most elevation data are in arc/second format. CADRG is a successor to ADRG. A row of data file values from a DEM or DTED file is called a profile. Radar sensor data can also be a source of topographic information. The data are scanned at a nominal collection interval of 150 microns. However. instead of 24-bit truecolor. VQ compression. that is.CIB CIB is grayscale imagery produced from rectified imagery and physically formatted as a compressed RPF. CADRG offers a compression ratio of 55:1 over ADRG. or topographic data through the use of stereoscopic pairs. CIB is often based upon SPOT panchromatic data or reformatted ADRI data. as discussed in "Terrain Analysis" on page 645. and the encoding as 8-bit pseudocolor. CIB offers a compression ratio of 8:1 over its predecessor. Each minute is made up of 60 seconds. The actual area represented by each pixel is a function of its latitude. The resulting image is 8-bit pseudocolor. ADRI.

Figure 33 illustrates a 1° × 1° area of the Earth.

Figure 33: Arc/second Format
[Figure: a 1° × 1° cell in the Lat/Lon coordinate system, with axes labeled Latitude and Longitude; each row of the cell contains 1201 pixels, and an extracted section shows the pixels changing size with latitude.]

In Figure 33, there are 1201 pixels in the first row and 1201 pixels in the last row, but the area represented by each pixel increases in size from the top of the file to the bottom of the file. The extracted section in the example above has been exaggerated to illustrate this point.

A row of data file values from a DEM or DTED file is called a profile. The profiles of DEM and DTED run south to north; that is, the first pixel of the record is the southernmost pixel.

Arc/second data used in conjunction with other image data, such as TM or SPOT, must be rectified or projected onto a planar coordinate system such as UTM.

DEM
DEMs are digital elevation model data. DEM was originally a term reserved for elevation data provided by the USGS, but it is now used to describe any digital elevation data. DEMs can be:

• purchased from USGS (for US areas only)
• created from stereopairs (derived from satellite data or aerial photographs)

See "Terrain Analysis" on page 645 for more information on using DEMs. See Ordering Raster Data on page 127 for information on ordering DEMs.

USGS DEMs
In 2006, the USGS began offering the National Elevation Dataset (NED). NED has been developed by merging the highest-resolution elevation data available across the United States into a seamless raster format. The dataset provides seamless coverage of the United States, Alaska, Hawaii, and the island territories.

Source: United States Geological Survey, 2006

There are two types of historic DEMs that are most commonly available from USGS:

• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM coordinate system. It has a spatial resolution of 30 × 30 m.
• 1:250,000 scale is available only in Arc/second format.

DEM data are stored in ASCII format. The data file values in ASCII format are stored as ASCII characters rather than as zeros and ones like the data file values in binary data.

DEM data files from USGS are initially oriented so that North is on the right side of the image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any ERDAS IMAGINE program are correct.

DTED
DTED data are produced by the National Imagery and Mapping Agency (NIMA) and are available only to US government agencies and their contractors. DTED data are distributed on 9-track tapes and on CD-ROM.

There are two types of DTED data available:

• DTED 1 — a 1° × 1° area of coverage
• DTED 2 — a 1° × 1° or less area of coverage

Both are in Arc/second format and are distributed in cells. A cell is a 1° × 1° area of coverage. Both types have a 16-bit range of elevation values, meaning each pixel can have a possible elevation of -32,768 to 32,767.

Like DEMs, DTED data files are also oriented so that North is on the right side of the image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any ERDAS IMAGINE program are correct.

Using Topographic Data
Topographic data have many uses in a GIS. For example, topographic data can be used in conjunction with other data to:

• calculate the shortest and most navigable path over a mountain range
• assess the visibility from various lookout points or along roads
• simulate travel through a landscape
• determine rates of snow melt
• orthocorrect satellite or airborne images
• create aspect and slope layers
• provide ancillary data for image classification

See "Terrain Analysis" on page 645 for more information about using topographic and elevation data.

GPS Data

Introduction
Global Positioning System (GPS) data has been in existence since the launch of the first satellite in the US Navigation System with Time and Ranging (NAVSTAR) system on February 22, 1978. Initially, the system was available to US military personnel only, but from 1993 onwards, helped by the availability of a full constellation of satellites since 1994, the system started to be used (in a degraded mode) by the general public. There is also a Russian GPS system called GLONASS with similar capabilities.

The US NAVSTAR GPS consists of a constellation of 24 satellites orbiting the Earth, broadcasting data that allows a GPS receiver to calculate its spatial position. The satellites orbit the Earth (at an altitude of 20,200 km) in such a manner that several are always visible at any location on the Earth's surface.

Satellite Position
Positions are determined through the traditional ranging technique. A GPS receiver with line of sight to a GPS satellite can determine how long the signal broadcast by the satellite has taken to reach its location, and therefore can determine the distance to the satellite. Thus, if the GPS receiver can see three or more satellites and determine the distance to each, it can calculate its own position based on the known positions of the satellites (that is, the intersection of the spheres of distance from the satellite locations). Theoretically, only three satellites should be required to find the 3D position of the receiver, but various inaccuracies (largely based on the quality of the clock within the GPS receiver that is used to time the arrival of the signal) mean that at least four satellites are generally required to determine a three-dimensional (3D) x, y, z position.
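The ranging idea can be sketched as a small least-squares problem. The following is a simplified illustration with made-up satellite positions and error-free ranges; a real receiver must also solve for its own clock error as a fourth unknown, which is one reason a fourth satellite is generally required.

    import numpy as np

    def trilaterate(sats, ranges, guess=(0.0, 0.0, 0.0), iterations=10):
        """Estimate a receiver position from satellite positions and ranges."""
        x = np.array(guess, dtype=float)
        sats = np.asarray(sats, dtype=float)
        ranges = np.asarray(ranges, dtype=float)
        for _ in range(iterations):
            diffs = x - sats                         # vectors from satellites to guess
            dists = np.linalg.norm(diffs, axis=1)    # ranges predicted by the guess
            jacobian = diffs / dists[:, None]        # unit line-of-sight vectors
            residuals = ranges - dists
            # Gauss-Newton step: nudge the guess to shrink the range residuals.
            x += np.linalg.lstsq(jacobian, residuals, rcond=None)[0]
        return x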

The explanation above is an over-simplification of the technique used, but does show the concept behind the use of the GPS system for determining position. The accuracy of that position is affected by several factors, including the number of satellites that can be seen by a receiver, but especially, for commercial users, by Selective Availability.

Each satellite actually sends two signals at different frequencies. One is for civilian use and one for military use. The signal used for commercial receivers has an error introduced to it called Selective Availability. Selective Availability introduces a positional inaccuracy of up to 100 m to commercial GPS receivers. This is mainly intended to limit the use of highly accurate GPS positioning to hostile users.

The errors introduced by Selective Availability can be ameliorated through various techniques, such as keeping the GPS receiver stationary, thereby allowing it to average out the errors, or through the more advanced techniques discussed in the following sections.

Differential Correction
Differential Correction (or Differential GPS, DGPS) can be used to remove the majority of the effects of Selective Availability. The technique works by using a second GPS unit (or base station) that is stationary at a precisely known position. As this GPS knows where it actually is, it can compare this location with the position it calculates from GPS satellites at any particular time and calculate an error vector for that time (that is, the distance and direction that the GPS reading is in error from the real position). A log of such error vectors can then be compared with GPS readings taken from the first, mobile unit (the field unit that is actually taking GPS location readings of features). Under the assumption that the field unit had line of sight to the same GPS satellites as the base station when acquiring its positions, each field-read position (with an appropriate time stamp) can be compared to the error vector for that time and the position corrected using the inverse of the vector. This is generally performed using specialist differential correction software.

Real Time Differential GPS (RDGPS) takes this technique one step further by having the base station communicate the error vector via radio to the field unit in real time. The field unit can then automatically update its own location in real time. The main disadvantage of this technique is that the range over which a GPS base station can broadcast is generally limited, thereby restricting how far the mobile unit can be used away from the base station. One of the biggest uses of this technique is for ocean navigation in coastal areas, where base stations have been set up along coastlines and around ports so that the GPS systems on board ships can get accurate real-time positional information to help in shallow-water navigation.
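The bookkeeping behind post-processed differential correction can be sketched as a lookup of base-station error vectors by time stamp. All coordinates, time stamps, and the two-dimensional simplification below are hypothetical; real differential correction software works with the full satellite geometry.

    # Base station: error vectors (reading minus true position) keyed by time stamp.
    base_error = {
        "10:00:00": (4.2, -1.5),
        "10:00:30": (3.9, -1.1),
    }

    def correct(field_xy, time_stamp):
        """Apply the inverse of the error vector logged at the same moment."""
        dx, dy = base_error[time_stamp]
        return field_xy[0] - dx, field_xy[1] - dy

    print(correct((482105.0, 3765220.0), "10:00:00"))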

Applications of GPS Data
GPS data finds many uses in remote sensing and GIS applications, such as:

• Collection of ground truth data, even spectral properties of real-world conditions at known geographic positions, for use in image classification and validation. The user in the field identifies a homogeneous area of identifiable land cover or use on the ground and records its location using the GPS receiver. These locations can then be plotted over an image to either train a supervised classifier or to test the validity of a classification.
• GPS receivers can be used for the collection of positional information for known point features on the ground. If these can be identified in an image, the positional data can be used as Ground Control Points (GCPs) for geocorrecting the imagery to a map projection system. If the imagery is of high resolution, this generally requires differential correction of the positional data.
• Moving map applications take the concept of relating the GPS positional information to your geographic data layers one step further by having the GPS position displayed in real time over the geographical data layers. Thus you take a computer out into the field and connect the GPS receiver to the computer, usually via the serial port. Remote sensing and GIS data layers are then displayed on the computer and the positional signal from the GPS receiver is plotted on top of them.
• Precision agriculture uses GPS extensively in conjunction with Variable Rate Technology (VRT). VRT relies on the use of a VRT controller box connected to a GPS and the pumping mechanism for a tank full of fertilizers, pesticides, seeds, water, and so forth. A digital polygon map (often derived from remotely sensed data) in the controller specifies a predefined amount to dispense for each polygonal region. As the tractor pulls the tank around the field, the GPS logs the position, which is compared to the map position in memory. The correct amount is then dispensed at that location. The aim of this process is to maximize yields without causing any environmental damage.
• DGPS data can be used to directly capture GIS data and survey data for direct use in a GIS or CAD system. In this regard the GPS receiver can be compared to using a digitizing tablet to collect data, but instead of pointing and clicking at features on a paper document, you are pointing and clicking on the real features to capture the information.

• GPS is often used in conjunction with airborne surveys. The aircraft, as well as carrying a camera or scanner, has on board one or more GPS receivers tied to an inertial navigation system. As each frame is exposed, precise information is captured (or calculated in post processing) on the x, y, z and roll, pitch, yaw of the aircraft. Each image in the aerial survey block thus has initial exterior orientation parameters, which minimizes the need for control in a block triangulation process.

Figure 34 shows some additional uses for GPS coordinates.

Figure 34: Common Uses of GPS Data
[Figure: a montage of common GPS uses, including navigation on land, on the seas, in the air, and in space; harbor and river navigation; navigation of recreational vehicles; high precision kinematic surveys on the ground; guidance of robots and other machines; cadastral surveying; worldwide geodetic network densification; high precision aircraft positioning; GPS photogrammetry without ground control; monitoring deformation; hydrographic surveys; and active control stations operating 24 hours per day.]

Source: Leick, 1990

Ordering Raster Data
Table 28 describes the different Landsat, SPOT, AVHRR, and DEM products that can be ordered. Information in this chart does not reflect all the products that are available, but only the most common types that can be imported into ERDAS IMAGINE.

Table 28: Common Raster Data Products

• Landsat ETM+: ground covered 170 × 183 km; pixel size 15 m (band 8), 30 m (bands 1-5 and 7), 60 m (bands 6H and 6L); 8 bands; GeoTIFF format
• Landsat TM: ground covered 170 × 183 km; pixel size 30 m or 28.5 m; 7 bands; NLAPS, GeoTIFF format
• Landsat MSS: ground covered 170 × 185 km; pixel size 79 × 56 m; 4 bands; NLAPS format
• SPOT: ground covered 60 × 60 km; pixel size 10 m and 20 m; 1-3 bands; BIL format

• NOAA AVHRR (Local Area Coverage): ground covered 2700 × 2700 km; pixel size 1.1 km; 1-5 bands; 10-bit packed or unpacked format
• NOAA AVHRR (Global Area Coverage): ground covered 4000 × 4000 km; pixel size 4 km; 1-5 bands; 10-bit packed or unpacked format
• Historic USGS DEM 1:24,000: ground covered 7.5' × 7.5'; pixel size 30 m; 1 band; ASCII format
• National Elevation Dataset (NED), assembled by USGS: pixel size 30 m (Source: http://ned.usgs.gov)

Addresses to Contact
For more information about these and related products, contact the following agencies:

• IKONOS, GeoEye-1, OrbView-2 data:
GeoEye
21700 Atlantic Boulevard
Dulles, VA 20166 USA
Telephone: 703-480-7500
Fax: 703-450-9570
Internet: www.geoeye.com

• SPOT data:
SPOT Image Corporation
14595 Avion Parkway, Suite 500
Chantilly, VA 20151 USA
Telephone: 703-715-3100
Fax: 703-715-3120
Internet: www.spot.com

• NOAA AVHRR data:
NOAA/National Environmental Satellite, Data, and Information Service (NESDIS)
NOAA Central Library
1315 East-West Highway
SSMC3, 2nd Floor
Silver Spring, MD 20910 USA
Internet: www.nesdis.noaa.gov

• AVHRR Dundee Format:
NERC Satellite Receiving Station
Space Technology Centre, University of Dundee
Dundee, Scotland, UK DD1 4HN
Telephone: +44 1382 38 4409
Fax: +44 1382 202 575
Internet: www.sat.dundee.ac.uk

• Cartographic data, including topographic maps, satellite images, aerial photos, DEMs, planimetric data, publications, and related information from federal, state, and private agencies:
National Mapping Division
U.S. Geological Survey, National Center
12201 Sunrise Valley Drive
Reston, VA 20192 USA
Telephone: 703-648-4000
Internet: www.usgs.gov/pubprod/index.html

• Landsat data:
U.S. Geological Survey
National Center for Earth Resource Observation & Science (EROS)
47914 252nd Street
Sioux Falls, SD 57198 USA
Telephone: 800-252-4547
Telephone: 605-594-6151
Internet: http://edc.usgs.gov

• ADRG/CADRG/ADRI data (available only to defense contractors):
NGA (National Geospatial-Intelligence Agency)
Defense Supply Center Richmond
Mapping Customer Operations (DSCR-FAN)
8000 Jefferson Davis Highway
Richmond, VA 23297-5339 USA
Telephone: 804-279-6500
Telephone: 800-826-0342
Internet: www.dscr.dla.mil/rmf/

• ERS-1, ERS-2, Envisat radar data:
MDA Geospatial Services International
13800 Commerce Parkway
Richmond, British Columbia, Canada V6V 2J3
Telephone: 604-244-0400
Telephone: 888-780-6444
Internet: www.mdacorporation.com

• RADARSAT data:
MDA Geospatial Services International
13800 Commerce Parkway
Richmond, British Columbia, Canada V6V 2J3
Telephone: 604-244-0400
Telephone: 888-780-6444
Internet: www.mdacorporation.com

• ALOS and JERS-1 (Fuyo 1) radar data:
Japan Aerospace Exploration Agency (JAXA)
Earth Observation Center
1401 Numaneoue, Ohashi
Hatoyama-machi, Hiki-gun, Saitama, Japan 350-0393
Telephone: +81-49-298-1200
Internet: www.jaxa.jp

• SIR-A, B, C radar data:
U.S. Geological Survey, National Center
12201 Sunrise Valley Drive
Reston, VA 20192 USA
Telephone: 703-648-4000
Internet: www.usgs.gov/pubprod/index.html

• Almaz radar data:
NPO Mashinostroenia
Scientific Engineering Center "Almaz"
33 Gagarin Street
Reutov, 143952, Russia
Telephone: 7/095-538-3018
Fax: 7/095-302-2001
E-mail: NPO@mashstroy.msk.su

• U.S. Government RADARSAT sales:
NOAA Satellite and Information Service
National Environmental Satellite, Data, and Information Service (NESDIS)
NOAA Central Library
1315 East-West Highway
SSMC3, 2nd Floor
Silver Spring, MD 20910 USA
Internet: http://www.class.ncdc.noaa.gov/release/data_available/sar/index.htm

Raster Data from Other Software Vendors
ERDAS IMAGINE also enables you to import data created by other software vendors. This way, if another type of digital data system is currently in use, or if data is received from another system, it easily converts to the ERDAS IMAGINE file format for use in ERDAS IMAGINE. Data from other vendors may come in that specific vendor's format, or in a standard format which can be used by several vendors.

The Import and/or Direct Read function handles these raster data types from other software systems:

• ERDAS Ver. 7.X
• GRID and GRID Stacks
• JFIF (JPEG)
• JPEG2000
• MrSID
• SDTS
• Sun Raster
• TIFF and GeoTIFF

Other data types might be imported using the Generic Binary import option.

Vector to Raster Conversion
Vector data can also be a source of raster data by converting it to raster format. Convert a vector layer to a raster layer, or vice versa, by using IMAGINE Vector.

ERDAS Ver. 7.X
The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two basic types of ERDAS Ver. 7.X data files are indicated by the file name extensions:

• .LAN—a multiband continuous image file (the name is derived from the Landsat satellite)

• .GIS—a single-band thematic data file in which pixels are divided into discrete categories (the name is derived from geographic information system)

.LAN and .GIS image files are stored in the same format. The image data are arranged in a BIL format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure includes:

• a header record at the beginning of the file
• the data file values
• a statistics or trailer file

When you import a .GIS file, it becomes an image file with one thematic raster layer. When you import a .LAN file, each band becomes a continuous raster layer within the image file.
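A hedged sketch of how band-interleaved-by-line (BIL) data can be unpacked follows, for the 8-bit case. The header length, band count, and dimensions below are illustrative assumptions only; a real reader must take these values from the file's header record rather than hard-coding them.

    import numpy as np

    HEADER_BYTES, BANDS, ROWS, COLS = 128, 4, 512, 512   # assumed, not parsed

    raw = np.fromfile("example.lan", dtype=np.uint8, offset=HEADER_BYTES)
    # In BIL order, each image row holds one line for each band in turn,
    # so the flat buffer reshapes to (row, band, column).
    image = raw.reshape(ROWS, BANDS, COLS)
    band_1 = image[:, 0, :]          # first continuous raster layer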

GRID and GRID Stacks
GRID is a raster geoprocessing program distributed by Environmental Systems Research Institute, Inc. (ESRI) in Redlands, California. GRID is designed to complement the vector data model system. ArcInfo is a well-known vector GIS that is also distributed by ESRI. The name GRID is taken from the raster data format of presenting information in a grid of cells. The data format for GRID is a compressed tiled raster data structure.

Like ArcInfo Coverages, a GRID is stored as a set of files in a directory, including files to keep the attributes of the GRID. Each GRID represents a single layer of continuous or thematic imagery, but it is also possible to combine GRIDs into a multilayer image. Starting with ArcInfo version 7.0, ESRI introduced the STK format, referred to in ERDAS software as GRID Stack 7.x, which contains multiple GRIDs. A GRID Stack (.stk) file names multiple GRIDs to be treated as a multilayer image. The GRID Stack 7.x format keeps attribute tables for the entire stack in a separate directory, in a manner similar to that of GRIDs and Coverages.

JFIF (JPEG)
JPEG is a set of compression techniques established by the Joint Photographic Experts Group (JPEG). While JPEG compression is used by other file formats, including TIFF, the JPEG File Interchange Format (JFIF) is a standard file format used to store JPEG-compressed imagery.

The most commonly used form of JPEG involves Discrete Cosine Transformation (DCT), thresholding, followed by Huffman encoding. Since the output image is not exactly the same as the input image, this form of JPEG is considered to be lossy. The integrity of the source image is preserved by taking advantage of the fact that the data being compressed is a visible image, and focusing the compression on aspects of the image that are less noticeable to the human eye. There is a lossless form of JPEG compression that uses DCT followed by nonlossy encoding, but it is not frequently used since it only yields an approximate compression ratio of 2:1. ERDAS IMAGINE only handles the lossy form of JPEG.

JPEG can compress monochrome imagery, but achieves compression ratios of 20:1 or higher with color (RGB) imagery. JPEG cannot be used on thematic imagery, due to the change in pixel values.

Additional information on the JPEG standard can be found at http://www.jpeg.org/jpeg/index.html.

JPEG2000
The JPEG2000 compression technique is a form of wavelet compression defined by the International Organization for Standardization (ISO). JPEG2000 provides both a lossy and a lossless encoding mode, with the lossless mode attaining only relatively low compression ratios but retaining the full accuracy of the input data. The lossy modes can attain very high compression ratios, but may possibly alter the data content. JPEG2000 is designed to retain the visual appearance of the input data as closely as possible, even with high compression ratio lossy processing.

Specify Output File Size
The concept of ECW and JPEG2000 format compression is that you are compressing to a specified quality level rather than a specified file size. The goal is visual similarity in quality amongst multiple files, not similar file size. Using a file-size based compressor, the resultant image quality of each compressed file will be different (depending on file size, image features, and so forth), and you will notice visible quality differences amongst the output images. Visible quality differences would not be the goal for an end product such as air photos over a common area. Choose a level of quality that meets your needs, and use the quality levels option to compress images within that quality range.

Currently there is no way of directly controlling the output file size when compressing images using the ECW or JPEG2000 file format and ERDAS products and SDKs. This is because the output size is affected not only by the compression algorithm used but also by the specific low-level character of the input data. Certain kinds of images are simply easier to compress than others.

Quality Levels
The JPEG and JPEG2000 quality value ranges from 1 (lowest quality, highest compression) to 100 (highest quality, lowest compression). Values between 50 and 95 are normally used. Specifying a quality value of 100 creates a much larger image with only a slight increase in quality compared to a quality value of 95.

Numerically Lossless Compression
When compressing to JPEG2000 format, which supports lossless compressed images, numerically lossless compression is specified by selecting a target compression ratio of 1:1. This does not correspond to the actual compression rate, which will generally be higher (between 2:1 and 2.5:1).

Additional information on the JPEG2000 standard can be found at http://www.jpeg.org/jpeg2000.
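The effect of quality levels is easy to see with any generic JPEG writer. The sketch below uses the Pillow library rather than an ERDAS tool, and the file names are invented; it simply writes the same image at several quality values so the size versus quality trade-off can be compared.

    from PIL import Image

    img = Image.open("photo.tif").convert("RGB")
    for quality in (50, 75, 95, 100):
        # 50-95 are typical working values; 100 costs far more space
        # than 95 for only a slight gain in quality.
        img.save(f"photo_q{quality}.jpg", "JPEG", quality=quality)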

MrSID
Multiresolution Seamless Image Database (MrSID, pronounced Mister Sid) is a wavelet transform-based compression algorithm designed by LizardTech, Inc. in Seattle, Washington. The novel developments in MrSID include a memory efficient implementation and automatic inclusion of pyramid layers in every data set, both of which make MrSID well-suited to provide efficient storage and retrieval of very large digital images.

The compression technique used in MrSID is lossy (that is, the compression-decompression process does not reproduce the source data pixel-for-pixel). Lossy compression is not appropriate for thematic imagery, but is essential for large continuous images, since it allows much higher compression ratios than lossless methods (for example, the Lempel-Ziv-Welch, LZW, algorithm used in the GIF and TIFF image formats). On typical remotely sensed imagery, lossless methods provide compression ratios of perhaps 2:1, whereas MrSID provides excellent image quality at compression ratios of 30:1 or more. At standard compression ratios, MrSID encoded imagery is visually lossless. The underlying wavelet-based compression methodology used in MrSID yields high compression ratios while satisfying stringent image quality requirements.

Additional information on the MrSID compression standard can be found at the LizardTech website at http://www.lizardtech.com.

SDTS
The Spatial Data Transfer Standard (SDTS) was developed by the USGS to promote and facilitate the transfer of georeferenced data and its associated metadata between dissimilar computer systems without loss of fidelity. To achieve these goals, SDTS uses a flexible, self-describing method of encoding data, which has enough structure to permit interoperability.

In addition to the standard metadata, the producer may supply detailed attribute data correlated to any image feature. For metadata, SDTS requires a number of statements regarding data accuracy.

SDTS Profiles
The SDTS standard is organized into profiles. Profiles identify a restricted subset of the standard needed to solve a certain problem domain. Two subsets of interest to ERDAS IMAGINE users are:

• Topological Vector Profile (TVP), which covers attributed vector data. This is imported via the SDTS (Vector) title.
• SDTS Raster Profile and Extensions (SRPE), which covers gridded raster data. This is imported as SDTS Raster.

For more information on SDTS, consult the SDTS web site at http://mcmcweb.er.usgs.gov/sdts.

SUN Raster
A SUN Raster file is an image captured from a monitor display. In addition to GIS, SUN Raster files can be used in desktop publishing applications or any application where a screen capture would be useful.

There are two basic ways to create a SUN Raster file on a SUN workstation:

• use the OpenWindows Snapshot application
• use the UNIX screendump command

Both methods read the contents of a frame buffer and write the display data to a user-specified file. Depending on the display hardware and options chosen, screendump can create any of the file types listed in Table 29.

Table 29: File Types Created by Screendump

• 1-bit black and white: compression None or RLE (run-length encoded)
• 8-bit color paletted (256 colors): compression None or RLE
• 24-bit RGB true color: compression None or RLE
• 32-bit RGB true color: compression None or RLE

The data are stored in BIP format.

TIFF
TIFF was developed by Aldus Corp. (Seattle, Washington) in 1986 in conjunction with major scanner vendors who needed an easily portable file format for raster image data. Today, the TIFF format is a widely supported format used in video, fax transmission, medical imaging, satellite imaging, document storage and retrieval, and desktop publishing applications. In addition, the GeoTIFF extensions permit TIFF files to be geocoded.

The TIFF format's main appeal is its flexibility. It handles black and white line images, as well as gray scale and color images, which can be easily transported between different operating systems and computers.

TIFF File Formats
TIFF's great flexibility can also cause occasional problems in compatibility. This is because TIFF is really a family of file formats that is comprised of a variety of elements within the format. Table 30 shows key Baseline TIFF format elements and the values for those elements supported by ERDAS IMAGINE. Any TIFF file that contains an unsupported value for one of these elements may not be compatible with ERDAS IMAGINE.

Table 30: Common TIFF Format Elements

• Byte Order: Intel (LSB/MSB); Motorola (MSB/LSB)
• Image Type: Black and white; Gray scale; Inverted gray scale; Color palette; RGB (3-band)

• Configuration: BIP; BSQ
• Bits Per Plane (a): 1 (b); 2 (b); 4; 8; 16 (c); 32 (c); 64 (c)
• Compression (d): None; CCITT G3 (B&W only); CCITT G4 (B&W only); Packbits; LZW; LZW with horizontal differencing

a. All bands must contain the same number of bits (that is, 4, 4, 4 or 8, 8, 8). Multiband data with bit depths differing per band cannot be imported into ERDAS IMAGINE.
b. Must be imported and exported as 4-bit data.
c. Direct read/write only.
d. Compression supported on import and direct read/write only.

Additional information on the TIFF specification can be found at the Adobe Systems Inc. website, http://partners.adobe.com/public/developer/tiff/index.html.

GeoTIFF
According to the GeoTIFF Format Specification, Revision 1.0, "The GeoTIFF spec defines a set of TIFF tags provided to describe all 'Cartographic' information associated with TIFF imagery that originates from satellite imaging systems, scanned aerial photography, scanned maps, digital elevation models, or as a result of geographic analysis" (Ritter and Ruth, 1995).

The GeoTIFF format separates cartographic information into two parts: georeferencing and geocoding.

Georeferencing
Georeferencing is the process of linking the raster space of an image to a model space (that is, a map system). Raster space defines how the coordinate system grid lines are placed relative to the centers of the pixels of the image. In ERDAS IMAGINE, the grid lines of the coordinate system always intersect at the center of a pixel. GeoTIFF allows the raster space to be defined either as having grid lines intersecting at the centers of the pixels (PixelIsPoint) or as having grid lines intersecting at the upper left corner of the pixels (PixelIsArea). ERDAS IMAGINE converts the georeferencing values for PixelIsArea images so that they conform to its raster space definition.

GeoTIFF allows georeferencing via a scale and an offset, a full affine transformation, or a set of tie points. ERDAS IMAGINE currently ignores GeoTIFF georeferencing in the form of multiple tie points.

Geocoding
Geocoding is the process of linking coordinates in model space to the Earth's surface. Geocoding allows for the specification of projection, datum, ellipsoid, and so forth. ERDAS IMAGINE interprets the GeoTIFF geocoding to determine the latitude and longitude of the map coordinates for GeoTIFF images. This interpretation also allows the GeoTIFF image to be reprojected.

In GeoTIFF, the units of the map coordinates are obtained from the geocoding, not from the georeferencing. GeoTIFF defines a set of standard projected coordinate systems. The use of a standard projected coordinate system in GeoTIFF constrains the units that can be used with that standard system. Therefore, if the units used with a projection in ERDAS IMAGINE are not equal to the implied units of an equivalent GeoTIFF geocoding, ERDAS IMAGINE transforms the georeferencing to conform to the implied units so that the standard projected coordinate system code can be used. The alternative (preserving the georeferencing as is and producing a nonstandard projected coordinate system) is regarded as less interoperable.

Additional information on the GeoTIFF specification can be found at http://www.remotesensing.org/geotiff/spec/geotiffhome.html.
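A minimal sketch of scale-and-offset georeferencing follows, including the half-pixel shift needed to move from a PixelIsArea corner origin to the pixel-center convention described above. The scale and origin values are illustrative, not taken from any particular file.

    SCALE_X, SCALE_Y = 30.0, 30.0             # map units per pixel (assumed)
    ORIGIN_X, ORIGIN_Y = 500000.0, 4200000.0  # upper-left corner (assumed)

    def pixel_to_map(col, row, pixel_is_area=True):
        # For PixelIsArea the origin is the upper-left corner of the first
        # pixel; shifting by half a pixel addresses the pixel's center.
        shift = 0.5 if pixel_is_area else 0.0
        x = ORIGIN_X + (col + shift) * SCALE_X
        y = ORIGIN_Y - (row + shift) * SCALE_Y   # rows increase downward
        return x, y

    print(pixel_to_map(0, 0))   # center of the upper-left pixel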

Vector Data from Other Software Vendors
It is possible to directly import several common vector formats into ERDAS IMAGINE. These files become vector layers when imported. These data can then be used for the analyses and, in most cases, exported back to their original format (if desired).

Although data can be converted from one type to another by importing a file into ERDAS IMAGINE and then exporting the ERDAS IMAGINE file into another format, the import and export routines were designed to work together. For example, if you have information in AutoCAD that you would like to use in the GIS, you can import a Drawing Interchange File (DXF) into ERDAS IMAGINE, do the analysis, and then export the data back to DXF format. These routines are based on ArcInfo data conversion routines. See the ArcInfo documentation for more information about these files.

In most cases, attribute data are also imported into ERDAS IMAGINE. Each of the following sections lists the types of attribute data that are imported.

Use Import/Export to import vector data from other software vendors into ERDAS IMAGINE vector layers. See "Vector Data" on page 41 for more information on ERDAS IMAGINE vector layers. See "Geographic Information Systems" on page 173 for more information about using vector data in a GIS.

ARCGEN
ARCGEN files are ASCII files created with the ArcInfo UNGENERATE command. The import ARCGEN program is used to import features to a new layer. Topology is not created or maintained, therefore the coverage must be built or cleaned after it is imported into ERDAS IMAGINE.

ARCGEN files must be properly prepared before they are imported into ERDAS IMAGINE. If there is a syntax error in the data file, the import process may not work. If this happens, you must kill the process, correct the data file, and then try importing again.

AutoCAD (DXF)
AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito, California). AutoCAD is a computer-aided design program that enables the user to draw two- and three-dimensional models. This software is frequently used in architecture, engineering, urban planning, and many other applications.

The AutoCAD program DXFOUT creates a DXF file that can be converted to an ERDAS IMAGINE vector layer. AutoCAD files can also be output to IGES format using the AutoCAD program IGESOUT. AutoCAD DXF is the standard interchange format used by most CAD systems.

See IGES on page 142 for more information about IGES files.

DXF files can be converted in the ASCII or binary format. The binary format is an optional format for AutoCAD Releases 10 and 11. It is structured just like the ASCII format, only the data are in binary format.

DXF files are composed of a series of related layers. Each layer contains one or more drawing elements or entities. An entity is a drawing element that can be placed into an AutoCAD drawing with a single command. When converted to an ERDAS IMAGINE vector layer, each entity becomes a single feature. Table 31 describes how various DXF entities are converted to ERDAS IMAGINE.

Table 31: Conversion of DXF Entities

• Line, 3DLine → Line: these entities become two point lines. The initial Z value of 3D entities is stored.
• Trace, Solid, 3DFace → Line: these entities become four or five point lines. The initial Z value of 3D entities is stored.
• Circle, Arc → Line: these entities form lines. Circles are composed of 361 points—one vertex for each degree. The first and last point is at the same location.
• Polyline → Line: these entities can be grouped to form a single line having many vertices.
• Point, Shape → Point: these entities become point features in a layer.

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported DXF file is exported back to DXF format, this information is also exported.

Refer to an AutoCAD manual for more information about the format of DXF files.
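The circle conversion in Table 31 is easy to illustrate: one vertex per degree, 361 points in all, with the first and last vertex coincident. A small sketch:

    import math

    def circle_to_vertices(cx, cy, radius):
        return [(cx + radius * math.cos(math.radians(d)),
                 cy + radius * math.sin(math.radians(d)))
                for d in range(361)]        # 0..360 inclusive = 361 vertices

    pts = circle_to_vertices(0.0, 0.0, 10.0)
    assert len(pts) == 361
    # The first and last vertex land on the same location (within rounding).
    assert math.isclose(pts[0][0], pts[-1][0], abs_tol=1e-9)
    assert math.isclose(pts[0][1], pts[-1][1], abs_tol=1e-9)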

DLG
DLGs are furnished by the U.S. Geological Survey and provide planimetric base map information, such as transportation, hydrography, contours, and public land survey boundaries. DLG files are available for the following USGS map series:

• 7.5- and 15-minute topographic quadrangles
• 1:100,000-scale quadrangles
• 1:2,000,000-scale national atlas maps

DLGs are topological files that contain nodes, lines, and areas (similar to the points, lines, and polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute information in the form of major and minor code pairs. Code pairs are encoded in two integer fields, each containing six digits. The major code describes the class of the feature (road, stream, and so forth) and the minor code stores more specific information about the feature.

Most DLGs are in the Universal Transverse Mercator (UTM) map projection. However, the 1:2,000,000 scale series is in geographic coordinates.

DLGs can be imported in standard format (144 bytes per record) and optional format (80 bytes per record). You can export to DLG-3 optional format. To maintain the topology of a vector layer created from a DLG file, you must Build or Clean it. See "Geographic Information Systems" on page 173 for information on this process.

The ERDAS IMAGINE import process also imports point, line, and polygon attribute data (if they exist) and creates an INFO directory with the appropriate ACODE, PCODE (polygon attributes), and XCODE files. If an imported DLG file is exported back to DLG format, this information is also exported.

ETAK
ETAK's MapBase is an ASCII digital street centerline map product available from ETAK, Inc. (Menlo Park, California). ETAK files are similar in content to the Dual Independent Map Encoding (DIME) format used by the U.S. Census Bureau. ETAK has also included road class designations and, in some areas, major landmark features.

The coordinates are stored in Lat/Lon decimal degrees. Each record represents a single linear feature with address and political, census, and ZIP code boundary information.

There are four possible types of ETAK features:

• DIME or D types—if the feature type is D, a line is created along with a corresponding ACODE (arc attribute) record.

• Alternate address or A types—each record contains an alternate address record for a line. These records are written to the attribute file, and are useful for building address coverages.
• Landmark or L types—if the feature type is L and you opt to output a landmark layer, then a point feature is created along with an associated PCODE record.
• Shape features or S types—shape records are used to add vertices to the lines. The coordinates for these features are in Lat/Lon decimal degrees.

ERDAS IMAGINE vector data cannot be exported to ETAK format.

IGES
IGES files are often used to transfer CAD data between systems. IGES Version 3.0 format, published by the U.S. Department of Commerce, is in uncompressed ASCII format only. IGES files can be produced in AutoCAD using the IGESOUT command. The following IGES entities can be converted:

Table 32: Conversion of IGES Entities

• IGES Entity 100 (Circular Arc Entities) → Lines
• IGES Entity 106 (Copious Data Entities) → Lines
• IGES Entity 110 (Line Entities) → Lines
• IGES Entity 116 (Point Entities) → Points

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE and XCODE files. If an imported IGES file is exported back to IGES format, this information is also exported.

TIGER
TIGER files are line network products of the U.S. Census Bureau. The Census Bureau is using the TIGER system to create and maintain a digital cartographic database that covers the United States, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust Territories of the Pacific.

TIGER/Line is the line network product of the TIGER system. The cartographic base is taken from Geographic Base File/Dual Independent Map Encoding (GBF/DIME), where available, and from the USGS 1:100,000-scale national map series, SPOT imagery, and a variety of other sources in all other areas, in order to have continuous coverage for the entire United States. In addition to line segments, TIGER files contain census geographic codes and, in metropolitan areas, address ranges for the left and right sides of each segment. TIGER files are available in ASCII format on both CD-ROM and tape media. All released versions after April 1989 are supported.

There is a great deal of attribute information provided with TIGER/Line files. Line and point attribute information can be converted into ERDAS IMAGINE format. The ERDAS IMAGINE import process creates an INFO directory with the appropriate ACODE and XCODE files. If an imported TIGER file is exported back to TIGER format, this information is also exported.

TIGER attributes include the following:

• Version numbers—TIGER/Line file version number.
• Permanent record numbers—each line segment is assigned a permanent record number that is maintained throughout all versions of TIGER/Line files.
• Source codes—each line and landmark point feature is assigned a code to specify the original source.
• Census feature class codes—line segments representing physical features are coded based on the USGS classification codes in DLG-3 files.
• Street attributes—includes street address information for selected urban areas.
• Legal and statistical area attributes—legal areas include states, counties, townships, towns, incorporated cities, Indian reservations, and national parks. Statistical areas are areas used during the census-taking, where legal areas are not adequate for reporting statistics.
• Political boundaries—the election precincts or voting districts may contain a variety of areas, including wards, legislative districts, and election districts.
• Landmarks—landmark area and point features include schools, hospitals, airports, military installations, campgrounds, mountain peaks, rivers, and lakes.

TIGER files for major metropolitan areas outside of the United States (for example, Puerto Rico, Guam) do not have address ranges.

Disk Space Requirements
TIGER/Line files are partitioned into counties ranging in size from less than a megabyte to almost 120 megabytes. The average size is approximately 10 megabytes. To determine the amount of disk space required to convert a set of TIGER/Line files, use this rule: the size of the converted layers is approximately the same size as the files used in the conversion. The amount of additional scratch space needed depends on the largest file and whether it needs to be sorted. The amount usually required is about double the size of the file being sorted.

The information presented in this section, Vector Data from Other Software Vendors on page 138, was obtained from the Data Conversion and the 6.0 ARC Command References manuals, both published by ESRI, Inc., 1992.
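The disk-space rule of thumb under Disk Space Requirements above can be written out as a quick estimate. The county file sizes here are invented for illustration:

    # Sizes of the TIGER/Line county files to be converted, in megabytes.
    tiger_mb = [12.5, 3.1, 118.0, 9.8]

    converted_mb = sum(tiger_mb)       # converted layers: about the input size
    scratch_mb = 2 * max(tiger_mb)     # about double the largest file, if sorted
    print(converted_mb, scratch_mb, converted_mb + scratch_mb)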

Image Display

Introduction
This section defines some important terms that are relevant to image display. Most of the terminology and definitions used in this chapter are based on the X Window System (Massachusetts Institute of Technology) terminology. This may differ from other systems, such as Microsoft Windows NT.

A seat is a combination of an X-server and a host workstation.

• A host workstation consists of a CPU, keyboard, mouse, and a display.
• A display may consist of multiple screens. These screens work together, making it possible to move the mouse from one screen to the next.
• The display hardware contains the memory that is used to produce the image. This hardware determines which types of displays are available (for example, true color or pseudo color) and the pixel depth (for example, 8-bit or 24-bit).

Figure 35: Example of One Seat with One Display and Two Screens
[Figure: one display with two side-by-side screens.]

Display Memory Size
The size of memory varies for different displays. It is expressed in terms of:

• display resolution, which is expressed as the horizontal and vertical dimensions of memory—the number of pixels that can be viewed on the display screen. Some typical display resolutions are 1152 × 900, 1280 × 1024, and 1024 × 780. For the PC, typical resolutions are 800 × 600, 1024 × 768, 1280 × 1024, and 1680 × 1050.
• the number of bits for each pixel, or pixel depth, as explained below.

Bits for Image Plane
A bit is a binary digit, meaning a number that can have two possible values—0 and 1, or off and on. A set of bits, however, can have many more values, depending upon the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. For example, the number of values that can be expressed by 3 bits is 8 (2³ = 8).

Displays are referred to in terms of a number of bits, such as 8-bit or 24-bit. These bits are used to determine the number of possible brightness values. For example, in a 24-bit display, 24 bits per pixel breaks down to eight bits for each of the three color guns per pixel. The number of possible values that can be expressed by eight bits is 2⁸, or 256. Therefore, on a 24-bit display, each color gun of a pixel can have any one of 256 possible brightness values, expressed by the range of values 0 to 255. The combination of the three color guns, each with 256 possible brightness values, yields 256³, or 16,777,216, possible colors for each pixel on a 24-bit display. If the display being used is not 24-bit, the example above calculates the number of possible brightness values and colors that can be displayed.
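The arithmetic above, written out as a few lines of Python:

    print(2 ** 3)          # 8 values from 3 bits
    print(2 ** 8)          # 256 brightness values per 8-bit color gun
    print((2 ** 8) ** 3)   # 16,777,216 colors on a 24-bit display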

Pixel
The term pixel is abbreviated from picture element. As an element, a pixel is the smallest part of a digital picture (image). Raster image data are divided by a grid, in which each cell of the grid is represented by a pixel. A pixel is also called a grid cell.

Pixel is a broad term that is used for both:

• the data file value(s) for one data unit in an image (file pixels), or
• one grid location on a display or printout (display pixels).

Usually, one pixel in a file corresponds to one pixel in a display or printout. However, an image can be magnified or reduced so that one file pixel no longer corresponds to one pixel in the display or printout. For example, if an image is displayed with a magnification factor of 2, then one file pixel takes up 4 (2 × 2) grid cells on the display screen.

To display an image, a file pixel that consists of one or more numbers must be transformed into a display pixel with properties that can be seen, such as brightness and color. Whereas the file pixel has values that are relevant to data (such as wavelength of reflected light), the displayed pixel must have a particular color or gray level that represents these data file values.

Colors
Human perception of color comes from the relative amounts of red, green, and blue light that are measured by the cones (sensors) in the eye. Red, green, and blue light can be added together to produce a wide variety of colors—a wider variety than can be formed from the combinations of any three other colors. Red, green, and blue are therefore the additive primary colors.

A nearly infinite number of shades can be produced when red, green, and blue light are combined. On a display, different colors (combinations of red, green, and blue) allow you to perceive changes across an image.

Color Guns
On a display, color guns direct electron beams that fall on red, green, and blue phosphors. The phosphors glow at certain frequencies to produce different colors. Color monitors are often called RGB monitors, referring to the primary colors.

The red, green, and blue phosphors on the picture tube appear as tiny colored dots on the display screen. The human eye integrates these dots together, and combinations of red, green, and blue are perceived. Each pixel is represented by an equal number of red, green, and blue phosphors.

Brightness Values
Brightness values (or intensity values) are the quantities of each primary color to be output to each displayed pixel. When an image is displayed, brightness values are calculated for all three color guns, for every pixel. All of the colors that can be output to a display can be expressed with three brightness values—one for each color gun.

Colormap and Colorcells
A color on the screen is created by a combination of red, green, and blue values, where each of these components is represented as an 8-bit value. Therefore, 24 bits are needed to represent a color. Each color has a possible 256 different values (2⁸). Color displays that are available currently yield 2²⁴, or 16,777,216, colors.

Since many systems have only an 8-bit display, a colormap is used to translate the 8-bit value into a color. A colormap is an ordered set of colorcells, which is used to perform a function on a set of input values. To display or print an image, the colormap translates data file values in memory into brightness values for each color gun. Colormaps are not limited to 8-bit displays.

Colormap vs. Lookup Table
The colormap is a function of the display hardware, whereas a lookup table is a function of ERDAS IMAGINE. When a contrast adjustment is performed on an image in ERDAS IMAGINE, lookup tables are used. However, if the auto-update function is being used to view the adjustments in near real-time, then the colormap is being used to map the image through the lookup table. This process allows the colors on the screen to be updated in near real-time. This chapter explains how the colormap is used to display imagery.

Colorcells
There is a colorcell in the colormap for each data file value. The red, green, and blue values assigned to the colorcell control the brightness of the color guns for the displayed pixel (Nye 1990). When an application requests a color, the server specifies which colorcell contains that color and returns the color.

The number of colorcells in a colormap is determined by the number of bits in the display (for example, 8-bit, 24-bit). There are 256 colorcells in a colormap with an 8-bit display. This means that 256 colors can be displayed simultaneously on the display. With a 24-bit display, there are 256 colorcells for each color: red, green, and blue. This offers 256 × 256 × 256, or 16,777,216, different colors.

In the colormap below (Table 33), if a pixel with a data file value of 40 was assigned a display value (colorcell value) of 24, then this pixel uses the brightness values for the 24th colorcell in the colormap. In this case, this pixel is displayed as blue.

Table 33: Colorcell Example

    Colorcell Index    Red    Green    Blue
    1                  255    0        0
    2                  0      170      90
    3                  0      0        255
    24                 0      0        255

The colormap is controlled by the X Windows system.
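The lookup in Table 33 can be sketched with a small array operation. The colormap contents below reproduce the table's four cells; everything else is zero-filled for illustration:

    import numpy as np

    colormap = np.zeros((256, 3), dtype=np.uint8)   # index -> (R, G, B)
    colormap[1] = (255, 0, 0)
    colormap[2] = (0, 170, 90)
    colormap[3] = (0, 0, 255)
    colormap[24] = (0, 0, 255)

    screen = np.array([[24, 1], [2, 3]], dtype=np.uint8)  # colorcell values
    rgb = colormap[screen]        # brightness values sent to the color guns
    print(rgb[0, 0])              # [0 0 255] -> the pixel is displayed blue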

Colorcells can be read-only or read/write.

Read-only Colorcells
The color assigned to a read-only colorcell can be shared by other application windows, but it cannot be changed once it is set. To change the color of a pixel on the display, the pixel value would have to be changed and the image redisplayed. For this reason, it is not possible to use auto-update operations in ERDAS IMAGINE with read-only colorcells.

Read/Write Colorcells
The color assigned to a read/write colorcell can be changed, but it cannot be shared by other application windows. An application can easily change the color of displayed pixels by changing the color for the colorcell that corresponds to the pixel value. This allows applications to use auto-update operations. However, since this colorcell cannot be shared by other application windows, all of the colorcells in the colormap could quickly be utilized.

Changeable Colormaps
Some colormaps can have both read-only and read/write colorcells. This type of colormap allows applications to utilize the type of colorcell that would be most preferred.

Display Types
The possible range of different colors is determined by the display type. ERDAS IMAGINE supports the following types of displays:

• 8-bit PseudoColor
• 15-bit HiColor (for Windows NT)
• 24-bit DirectColor
• 24-bit TrueColor

These display types are explained in more detail below. A display may offer more than one visual type and pixel depth. See the ERDAS IMAGINE Configuration Guide for more information on specific display hardware.

32-bit Displays
A 32-bit display is a combination of an 8-bit PseudoColor and a 24-bit DirectColor or TrueColor display. Whether it is DirectColor or TrueColor depends on the display hardware.

8-bit PseudoColor
An 8-bit PseudoColor display has a colormap with 256 colorcells. The data file values for the pixel are transformed into a colorcell value. The brightness values for the colorcell that is specified by this colorcell value are used to define the color to be displayed.

Figure 36: Transforming Data File Values to a Colorcell Value
[Figure: the data file values of the red, green, and blue bands are combined into a single colorcell value (4); colorcell 4 holds the brightness values 0, 0, 255, so the pixel is displayed as blue.]

In Figure 36, data file values for a pixel of three continuous raster layers (bands) are transformed to a colorcell value. Since the colorcell value is four, the pixel is displayed with the brightness values of the fourth colorcell (blue).

This display grants a small number of colors to ERDAS IMAGINE. It works well with thematic raster layers containing less than 200 colors and with gray scale continuous raster layers. For image files with three continuous raster layers (bands), the colors are severely limited because, under ideal conditions, 256 colors are available on an 8-bit display, while 8-bit, 3-band image files can contain over 16,000,000 different colors.

Auto Update
An 8-bit PseudoColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform near real-time color modifications using the Auto Update and Auto Apply options.

24-bit DirectColor
A 24-bit DirectColor display enables you to view up to three bands of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. The data file values for each band are transformed into colorcell values. The colorcell that is specified by these values is used to define the color to be displayed. Since this is a 24-bit display, it offers up to 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³).

Figure 37: Transforming Data File Values to a Colorcell Value
[Figure: each band's data file value is transformed into its own colorcell value (red band: 1, green band: 2, blue band: 6); the red, green, and blue colormaps return brightness values of 0, 90, and 200, so the pixel is displayed as blue-green (0, 90, 200 RGB).]

In Figure 37, data file values for a pixel of three continuous raster layers (bands) are transformed to separate colorcell values for each band. Since the colorcell value is 1 for the red band, 2 for the green band, and 6 for the blue band, the RGB brightness values are 0, 90, 200. This displays the pixel as a blue-green color.

Auto Update
A 24-bit DirectColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform real-time color modifications using the Auto Update and Auto Apply options.

24-bit TrueColor
A 24-bit TrueColor display enables you to view up to three continuous raster layers (bands) of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. The data file values for the pixels are transformed into screen values and the colors are based on these values. Therefore, the color for the pixel is calculated without querying the server and the colormap. The colormap for a 24-bit TrueColor display is not available for ERDAS IMAGINE applications. Once a color is assigned to a screen value, it cannot be changed, but the color can be shared by other applications. This type of display grants a very large number of colors to ERDAS IMAGINE and it works well with all types of data.

The screen values are used as the brightness values for the red, green, and blue color guns.

Figure 38: Transforming Data File Values to Screen Values
[Figure: the data file values are transformed directly into screen values of 0 (red band), 90 (green band), and 200 (blue band), displaying a blue-green pixel (0, 90, 200 RGB).]

In Figure 38, data file values for a pixel of three continuous raster layers (bands) are transformed to separate screen values for each band. Since the screen value is 0 for the red band, 90 for the green band, and 200 for the blue band, the RGB brightness values are 0, 90, and 200. This displays the pixel as a blue-green color.

Since this is a 24-bit display, it offers 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³).

Auto Update
The 24-bit TrueColor display does not use the colormap in ERDAS IMAGINE, and thus does not provide ERDAS IMAGINE with any real-time color changing capability. Each time a color is changed, the screen values must be calculated and the image must be redrawn.

Color Quality
The 24-bit TrueColor visual provides the best color quality possible with standard equipment. There is no color degradation under any circumstances with this display.

PC Displays
ERDAS IMAGINE for Microsoft Windows NT supports the following visual types and pixel depths:

• 8-bit PseudoColor
• 15-bit HiColor
• 24-bit TrueColor

8-bit PseudoColor
An 8-bit PseudoColor display for the PC uses the same type of colormap as the X Windows 8-bit PseudoColor display, except that each colorcell has a range of 0 to 63 on most video display adapters, instead of 0 to 255. Therefore, each colorcell has a red, green, and blue brightness value, giving 64 different combinations of red, green, and blue. The colormap, however, is the same as the X Windows 8-bit PseudoColor display. It has 256 colorcells allowing 256 different colors to be displayed simultaneously.

15-bit HiColor
A 15-bit HiColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display, except that it offers 32 shades of red, 32 shades of green, and 32 shades of blue, for a total of 32,768 possible color combinations. Some video display adapters allocate 6 bits to the green color gun, allowing 64,000 colors. These adapters use a 16-bit color scheme.

24-bit TrueColor
A 24-bit TrueColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display.

Displaying Raster Layers
Image files (.img) are raster files in the ERDAS IMAGINE format. There are two types of raster layers:
• continuous
• thematic
Thematic raster layers require a different display process than continuous raster layers. This section explains how each raster layer type is displayed.

Continuous Raster Layers
An image file (.img) can contain several continuous raster layers; therefore, each pixel can have multiple data file values. When displaying an image file with continuous raster layers, it is possible to assign which layers (bands) are to be displayed with each of the three color guns. The data file values in each layer are input to the assigned color gun. The most useful color assignments are those that allow for an easy interpretation of the displayed image. For example:
• A natural-color image approximates the colors that would appear to a human observer of the scene.
• A color-infrared image shows the scene as it would appear on color-infrared film, which is familiar to many analysts.

• • Contrast Table When an image is displayed. the contrast of the displayed image is poor. 1989): • Landsat TM—natural color: 3. 2. band 2 to green. and higher data file values are displayed with the highest brightness values. 1 means that band 4 is assigned to red. green. Since the data file values in a continuous raster layer often represent raw data (such as elevation or an amount of reflected light). The range of most displays is 0 to 255 for each color gun. and band 1 is blue and is assigned to the blue color gun. relative to other data file values in that layer. so that the contrast of the displayed image is higher—that is.B order. For example.Band assignments are often expressed in R. but they usually remain in the same order of lowest to highest. and blue brightness values for each band are stored in this table. the range of data file values is often not the same as the range of brightness values of the display. a contrast stretch is usually performed. 1 This is natural color because band 3 is red and is assigned to the red color gun. For example. 2. the assignment 4. 154 Image Display . Some meaningful relationships between the values are usually maintained.G. lower data file values are displayed with the lowest brightness values. 1 This is infrared because band 3 = infrared. and a high data file value in the layer assigned to red. 2. which stretches the range of the values to fit the range of the display. The brightness values often differ from the data file values. A contrast stretch simply stretches the range between the lower and higher data file values. the brightness values in the colormap are also quantitative and related. For example. Figure 39 shows a layer that has data file values from 30 to 40. a screen pixel that is bright red has a high brightness value in the red color gun. Therefore. Below are some widely used band to color gun assignments (Faust. 2 This is infrared because band 4 = infrared. and band 1 to blue. Contrast Stretch Different displays have different ranges of possible brightness values. Landsat TM—color-infrared: 4. band 2 is green and is assigned to the green color gun. ERDAS IMAGINE automatically creates a contrast table for continuous raster layers. 3. The screen pixels represent the relationships between the values of the file pixels by their colors. The red. When these values are used as brightness values. Since the data file values in continuous raster layers are quantitative and related. SPOT Multispectral—color-infrared: 3.

A contrast stretch based on Percentage LUT with a clip of 2. this stretch is a linear contrast stretch. Statistics Files To perform a contrast stretch.The colormap stretches the range of colorcell values from 30 to 40.0% from right end of the histogram is applied to stretch pixel values of all image files from 0 to 255 before they are displayed in the Viewer. Contrast stretching is performed the same way for display purposes as it is for permanent image enhancement.5% from left and 1. certain statistics are necessary. Use the Image Information utility to create and view statistics for a raster layer. such as the mean and the standard deviation of the data file values in each layer. to the range 0 to 255. (The numbers in Figure 39 are approximations and do not show an exact linear relationship.) Figure 39: Contrast Stretch and Colorcell Values 30→0 31→25 32→51 33→76 34→102 35→127 36→153 30 to 40 range 0 0 255 output brightness values 37→178 38→204 255 input colorcell values 39→229 40→255 See Enhancement for more information about contrast stretching. Image Display 155 . Because the output values are incremented at regular intervals. unless a saved contrast stretch exists (the file is not changed). This often improves the initial appearance of the data in the Viewer.

Usually, not all of the data file values are used in the contrast stretch calculations. The minimum and maximum data file values of each band are often too extreme to produce good results. When the minimum and maximum are extreme in relation to the rest of the data, then the majority of data file values are not stretched across a very wide range, and the displayed image has low contrast.

Figure 40: Stretching by Min/Max vs. Standard Deviation
(Figure: the original histogram, with most of the data lying between -2σ and +2σ of the mean, is shown stretched by the minimum and maximum values and by standard deviations. In the standard deviation stretch, values stretched less than 0 or over 255 are not displayed.)

The mean and standard deviation of the data file values for each band are used to locate the majority of the data file values. The number of standard deviations above and below the mean can be entered, which determines the range of data used in the stretch.

Use the Contrast Tools dialog, which is accessible from the Lookup Table Modification dialog, to enter the number of standard deviations to be used in the contrast stretch.

See "Math Topics" on page 697 for more information on mean and standard deviation.
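A standard deviation stretch like the one compared in Figure 40 can be sketched the same way. The two-standard-deviation default shown in the figure is assumed; the rest is illustrative, not the actual dialog behavior:

```python
import numpy as np

def std_dev_stretch(band, n_std=2.0):
    """Stretch using mean +/- n_std standard deviations as the input
    range, so extreme minimum and maximum values do not dominate."""
    mean, std = band.mean(), band.std()
    lo, hi = mean - n_std * std, mean + n_std * std
    stretched = (band.astype(np.float64) - lo) / (hi - lo) * 255.0
    # Values stretched below 0 or above 255 saturate at the ends of
    # the brightness range, as in Figure 40.
    return np.clip(stretched, 0, 255).astype(np.uint8)
```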

24-bit DirectColor and TrueColor Displays
Figure 41 illustrates the general process of displaying three continuous raster layers on a 24-bit DirectColor display. The process is similar on a TrueColor display except that the colormap is not used.

Figure 41: Continuous Raster Layer Display Process
(Figure: band-to-color gun assignments, with band 3 assigned to red, band 2 to green, and band 1 to blue; histograms of each band; the ranges of data file values to be displayed; the colormap mapping data file values in to brightness values out; and the brightness values sent to each color gun of the color display.)

8-bit PseudoColor Display
When displaying continuous raster layers on an 8-bit PseudoColor display, the data file values from the red, green, and blue bands are combined and transformed to a colorcell value in the colormap. This colorcell then provides the red, green, and blue brightness values. Since there are only 256 colors available, a continuous raster layer looks different when it is displayed in an 8-bit display than on a 24-bit display that offers 16 million different colors. However, the Viewer performs dithering with the available colors in the colormap to let a smaller set of colors appear to be a larger set of colors.

See Dithering on page 166 for more information.

Thematic Raster Layers
A thematic raster layer generally contains pixels that have been classified, or put into distinct categories. Each data file value is a class value, which is simply a number for a particular category. A thematic raster layer is stored in an image (.img) file. Only one data file value—the class value—is stored for each pixel.

Since these class values are not necessarily related, the gradations that are possible in true color mode are not usually useful in pseudo color. The class system gives the thematic layer a discrete look, in which each class can have its own color.

Color Table
When a thematic raster layer is displayed, ERDAS IMAGINE automatically creates a color table. The red, green, and blue brightness values for each class are stored in this table.

RGB Colors
Individual color schemes can be created by combining red, green, and blue in different combinations. Colors can be expressed numerically, as the brightness values for each color gun. Brightness values of a display generally range from 0 to 255; however, ERDAS IMAGINE translates the values from 0 to 1. The maximum brightness value for the display device is scaled to 1. The colors listed in Table 34 are based on the range that is used to assign brightness values in ERDAS IMAGINE.

Table 34 contains only a partial listing of commonly used colors. Over 16 million colors are possible on a 24-bit display.

Table 34: Commonly Used RGB Colors

Color          Red     Green   Blue
Red            1       0       0
Red-Orange     1       .392    0
Orange         1       .608    0
Yellow         1       1       0
Yellow-Green   .490    1       0
Green          0       1       0
Cyan           0       1       1
Blue           0       0       1
Blue-Violet    .471    0       1
Violet         .588    0       .588
Black          0       0       0
White          1       1       1
Gray           .498    .498    .498
Brown          .373    .227    0

NOTE: Black is the absence of all color (0, 0, 0) and white is created from the highest values of all three colors (1, 1, 1).

To lighten a color, increase all three brightness values. To darken a color, decrease all three brightness values.

Use the Raster Attribute Editor to create your own color scheme.
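Because ERDAS IMAGINE expresses brightness on a 0 to 1 scale while most displays use 0 to 255, converting a Table 34 entry to device brightness values is a simple scaling. A minimal sketch (the function name is hypothetical):

```python
def to_device(rgb_normalized, max_brightness=255):
    """Scale 0-1 brightness values to device brightness values; the
    display's maximum brightness corresponds to 1."""
    return tuple(round(v * max_brightness) for v in rgb_normalized)

print(to_device((.588, 0, .588)))     # Violet -> (150, 0, 150)
print(to_device((.498, .498, .498)))  # Gray   -> (127, 127, 127)
```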

24-bit DirectColor and TrueColor Displays
Figure 42 illustrates the general process of displaying thematic raster layers on a 24-bit DirectColor display. The process is similar on a TrueColor display except that the colormap is not used.

Figure 42: Thematic Raster Layer Display Process
(Figure: an original image with class values 1 through 5 is displayed through a color scheme—class 1 red (255, 0, 0), class 2 orange (255, 128, 0), class 3 yellow (255, 255, 0), class 4 violet (128, 0, 255), and class 5 green (0, 255, 0)—with the colormap mapping class values in to brightness values out for each color gun.)

Display a thematic raster layer from the Viewer.

8-bit PseudoColor Display
The colormap is a limited resource that is shared among all of the applications that are running concurrently. Because of the limited resources, ERDAS IMAGINE does not typically have access to the entire colormap.

Using the Viewer
The Viewer is a window for displaying raster, vector, and annotation layers. In the IMAGINE ribbon Workspace, the Viewer types are:

• 2D View displays raster, vector, and annotation data in a 2-dimensional view window.
• 3D View renders 3-dimensional DEMs, raster overlays, vector layers, and annotation feature layers.
• Map View is designed to create maps and presentation graphics.

You can open as many Viewer windows as your window manager supports.

NOTE: The more Viewers that are opened simultaneously, the more RAM memory is necessary.

The Viewer not only makes digital images visible quickly, but it can also be used as a tool for image processing and raster GIS modeling. The uses of the Viewer are listed briefly in this section, and described in greater detail in other chapters of the ERDAS Field Guide.

Colormap
ERDAS IMAGINE does not use the entire colormap because there are other applications that also need to use it, including the window manager, terminal windows, Arc View, or a clock. Therefore, there are some limitations to the number of colors that the Viewer can display simultaneously, and flickering may occur as well.

Color Flickering
If an application requests a new color that does not exist in the colormap, the server assigns that color to an empty colorcell. However, if there are not any available colorcells and the application requires a private colorcell, then a private colormap is created for the application window. Since this is a private colormap, when the cursor is moved out of the window, the server uses the main colormap and the brightness values assigned to the colorcells. In this case, the colors in the private colormap are not applied and the screen flickers. Once the cursor is moved into the application window, the correct colors are applied for that window.

Resampling
When a raster layer(s) is displayed, the file pixels may be resampled for display on the screen. Resampling is used to calculate pixel values when one raster grid must be fitted to another. In this case, the raster grid defined by the file must be fit to the grid of screen pixels in the Viewer.
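As a sketch of this grid fitting, using the nearest neighbor method listed under the resampling methods that follow, and assuming square, axis-aligned pixels (illustrative only, not the Viewer's implementation):

```python
import numpy as np

def nearest_neighbor_resample(file_grid, out_rows, out_cols):
    """Fit a file raster grid to a screen grid of a different size by
    taking the value of the closest file pixel for each screen pixel."""
    in_rows, in_cols = file_grid.shape
    r = (np.arange(out_rows) * in_rows / out_rows).astype(int)
    c = (np.arange(out_cols) * in_cols / out_cols).astype(int)
    return file_grid[np.ix_(r, c)]

layer = np.arange(16).reshape(4, 4)
print(nearest_neighbor_resample(layer, 2, 2))  # reduction to a 2 x 2 grid
print(nearest_neighbor_resample(layer, 8, 8))  # magnification to 8 x 8
```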

All Viewer operations are file-based. So, any time an image is resampled in the Viewer, the Viewer uses the file as its source. If the raster layer is magnified or reduced, the Viewer refits the file grid to the new screen grid.

The resampling methods available are:
• Nearest Neighbor—uses the value of the closest pixel to assign to the output pixel value.
• Bilinear Interpolation—uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.
• Cubic Convolution—uses the data file values of 16 pixels in a 4 × 4 window to calculate an output value with a cubic function.

The default resampling method is Nearest Neighbor. These methods are discussed in detail in "Rectification" on page 251.

Preference Editor
The Preference Editor enables you to set parameters for the Viewer that affect the way the Viewer operates.

See the ERDAS IMAGINE On-Line Help for the Preference Editor for information on how to set preferences for the Viewer.

Pyramid Layers
Sometimes a large image file may take a long time to display in the Viewer or to be resampled by an application. The Compute Pyramid Layers option enables you to display large images faster and allows certain applications to rapidly access the resampled data. Pyramid layers are image layers which are copies of the original layer successively reduced by the power of 2 and then resampled.

The Compute Pyramid Layers option in IMAGINE has three options for continuous image data (raster images): 2x2, 3x3, or 4x4 kernel size filtering methods; however, each uses different filters and different layer options. The 3x3 kernel size is recommended for LPS and Stereo Analyst photogrammetry functions. Both IMAGINE native pyramid layers and LPS/Stereo Analyst pyramid layers are generated with a reduction factor of 2.

LPS/Stereo Analyst pyramid layers and the 3x3 kernel are discussed in Image Pyramid on page 631.

A 2 x 2 kernel calculates the average of 4 pixels in a 2 x 2 pixel window of the higher resolution level and applies the average to one pixel for the current level of pyramid. The filter can be represented as:

  1/4 × | 1  1 |
        | 1  1 |

The computation is simple, resulting in fast pyramid layer processing time. This kernel is suitable for image visual observation; however, it can result in a high degree of smoothing or sharpening which is not necessarily desirable for digital photogrammetric processing. This method is not recommended for multiresolution image matching.

A 4 x 4 kernel uses 16 neighboring pixels of the higher resolution level to arrive at one pixel for the current pyramid level. The processing time for this method is much longer than the others, since the computation requires a greater number of pixel operations and is based on double precision arithmetic (Wang and Yang, 1997).

If the raster layer is thematic, then it is resampled using the Nearest Neighbor method.

When the Compute Pyramid Layer option is selected, ERDAS IMAGINE automatically creates successively reduced layers until the final pyramid layer is as small as a block size of 64 x 64 pixels. The default block size is 512 × 512 pixels.

See Block Size in ERDAS IMAGINE .img Files On-Line Help for information on block size.

The number of pyramid layers created depends on the size of the original image. A larger image produces more pyramid layers. Pyramid layers are added as additional layers in the image file; however, these layers cannot be accessed for display. The file size is increased by approximately one-third when pyramid layers are created. The actual increase in file size can be determined by multiplying the layer size by this formula:

   n
   ∑  (1/4)^i
  i=0

Where: n = number of pyramid layers

NOTE: This equation is applicable to all types of pyramid layers: internal and external.

Pyramid layers do not appear as layers which can be processed; they are for viewing purposes only. Therefore, they do not appear as layers in other parts of the ERDAS IMAGINE software (for example, the Arrange Layers dialog).

The Image Files (General) section of the Preference Editor contains a preference for the Initial Pyramid Layer Number. By default, the value is set to 1. This means that all reduced pyramid layers generated are retained.

Pyramid layers can be deleted through the Image Metadata dialog, if necessary. However, when pyramid layers are deleted, they are not deleted from the image file; they are deleted from viewing and resampling access only—that is, they can no longer be viewed or used in an application. Therefore, the image file size does not change, but ERDAS IMAGINE utilizes this file space.
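The file-size factor given by the summation above, and the number of reduce-by-2 levels needed to reach a 64 × 64 layer, can be checked numerically. A sketch, assuming simple halving at each level:

```python
import math

def pyramid_levels(rows, cols, floor=64):
    """Number of successive reduce-by-2 pyramid levels until the
    reduced layer is as small as a floor x floor block."""
    n = 0
    while max(rows, cols) > floor:
        rows, cols = math.ceil(rows / 2), math.ceil(cols / 2)
        n += 1
    return n

def size_factor(n):
    """Total size relative to the original layer: sum of (1/4)**i."""
    return sum(1 / 4**i for i in range(n + 1))

n = pyramid_levels(4096, 4096)   # a 4K x 4K image -> 6 levels
print(n, size_factor(n))         # the factor approaches 4/3 (~1.33),
                                 # the "approximately one-third" increase
```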

see "Raster Data" on page 1 and ERDAS IMAGINE .img format. an image named tm_image1. Pyramid layer (64 × 64) Pyramid layer (128 × 128) Pyramid layer (512 × 512) Pyramid layer (1K × 1K) Pyramid layer (2K × 2K) Original Image (4K × 4K) image file For example. The Compute Pyramid Layers option is available from the ImageInfo dialog and the Image Command Tool. If you choose external pyramid layers. For example. to 2K × 2K.rrd extension.Figure 43: Pyramid Layers ERDAS IMAGINE selects the pyramid layer that displays the fastest in the Viewer. they are stored with the same name in the same directory as the image with which they are associated. but with the . a file that is 4K × 4K pixels could take a long time to display when the image is fit to the Viewer. 512 × 512.rrd.img Files On-Line Help. For more information about the .img has external pyramid layers contained in a file named tm_image1. down to 64 × 64. ERDAS IMAGINE then selects the pyramid layer size most appropriate for display in the Viewer window when the image is displayed. External Pyramid Layers Pyramid layers can be either internal or external. Image Display 165 . The Compute Pyramid Layers option creates additional layers successively reduced from 4K × 4K. 1K × 1K. 128 × 128.

The extension .rrd stands for reduced resolution data set. Unlike internal pyramid layers, external pyramid layers do not affect the size of the associated image. You can delete the external pyramid layers associated with an image by accessing the Image Information dialog.

Some raster formats create internal pyramid layers by default and may not allow applications to control pyramid layers. In this case, your Pyramid Layer Preference settings are ignored.

Dithering
A display is capable of viewing only a limited number of colors simultaneously. For example, an 8-bit display has a colormap with 256 colorcells; therefore, a maximum of 256 colors can be displayed at the same time. If the desired display color is not available, a dithering algorithm mixes available colors to provide something that looks like the desired color.

Dithering lets a smaller set of colors appear to be a larger set of colors. If some colors are being used for auto update color adjustment while other colors are being used for other imagery, the color quality degrades. Dithering allows multiple images to be displayed in different Viewers without refreshing the currently displayed image(s) each time a new image is displayed.

For a simple example, assume the system can display only two colors, black and white, and you want to display gray. This can be accomplished by alternating the display of black and white pixels.

Figure 44: Example of Dithering
(Figure: a black region, a dithered gray region of alternating black and white pixels, and a white region.)

In Figure 44, dithering is used between a black pixel and a white pixel to obtain a gray pixel.

The colors that the Viewer dithers between are similar to each other, and are dithered on the pixel level. Using similar colors and dithering on the pixel level makes the image appear smooth.
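An ordered dither is one common way to implement the alternation shown in Figure 44; whether the Viewer uses this exact scheme is an assumption, but the 2 × 2 patches and quarter steps match the Color Patches description that follows:

```python
import numpy as np

# 2 x 2 ordered-dither thresholds; tiling this pattern yields patches
# whose mix of the two usable colors steps in quarters, as described
# under Color Patches below.
BAYER_2x2 = (np.array([[0, 2],
                       [3, 1]]) + 0.5) / 4.0

def dither_two_colors(desired, color_a=0, color_b=255, shape=(8, 8)):
    """Approximate an unavailable color lying between two usable
    colors by alternating pixels of each, as in Figure 44."""
    frac = (desired - color_a) / (color_b - color_a)
    thresholds = np.tile(BAYER_2x2, (shape[0] // 2, shape[1] // 2))
    return np.where(frac > thresholds, color_b, color_a)

patch = dither_two_colors(128)   # 50% gray from black and white only
print(patch.mean())              # 127.5: half black, half white pixels
```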

Color Patches
When the Viewer performs dithering, it uses patches of 2 × 2 pixels. If the desired color has an exact match, then all of the values in the patch match it. If the desired color is halfway between two of the usable colors, the patch contains two pixels of each of the surrounding usable colors. If it is 3/4 of the way between two usable colors, the patch contains 3 pixels of the color it is closest to, and 1 pixel of the color that is second closest. If the desired color is not an even multiple of 1/4 of the way between two allowable colors, it is rounded to the nearest 1/4. The Viewer separately dithers the red, green, and blue components of a desired color.

Figure 45 shows what the color patches would look like if the usable colors were black and white and the desired color was gray.

Figure 45: Example of Color Patches
(Figure: patches ranging from white through 25% gray, 50% gray, and 75% gray to black.)

Color Artifacts
Since the Viewer requires 2 × 2 pixel patches to represent a color, and actual images typically have a different color for each pixel, artifacts may appear in an image that has been dithered. Usually, the difference in color resolution is insignificant, because adjacent pixels are normally similar to each other. Similarity between adjacent pixels usually smooths out artifacts that appear.

Viewing Layers
The Viewer displays layers as one of the following types of view layers:
• annotation
• vector
• pseudo color
• gray scale
• true color

Annotation View Layer
When an annotation layer (xxx.ovr) is displayed in the Viewer, it is displayed as an annotation view layer.

Vector View Layer
A vector layer is displayed in the Viewer as a vector view layer.

Pseudo Color View Layer
When a raster layer is displayed as a pseudo color layer in the Viewer, the colormap uses the RGB brightness values for the one layer in the RGB table. This is most appropriate for thematic layers. If the layer is a continuous raster layer, the layer would initially appear gray, since there are not any values in the RGB table.

Gray Scale View Layer
When a raster layer is displayed as a gray scale layer in the Viewer, the colormap uses the brightness values in the contrast table for one layer. This layer is then displayed in all three color guns, producing a gray scale image. A continuous raster layer may be displayed as a gray scale view layer.

True Color View Layer
Continuous raster layers should be displayed as true color layers in the Viewer. The colormap uses the RGB brightness values for three layers in the contrast table: one for each color gun to display the set of layers.

Viewing Multiple Layers
It is possible to view as many layers of all types (with the exception of vector layers, which have a limit of 10) at one time in a single Viewer. To overlay multiple layers in one Viewer, they must all be referenced to the same map coordinate system. The layers are positioned geographically within the window, and resampled to the same scale as previously displayed layers. Therefore, raster layers in one Viewer can have different cell sizes. When multiple layers are magnified or reduced, raster layers are resampled from the file to fit to the new scale.

Display multiple layers from the Viewer. Be sure to turn off the Clear Display check box when you open subsequent layers.

Overlapping Layers
When layers overlap, the order in which the layers are opened is very important. The last layer that is opened always appears to be on top of the previously opened layers. In a raster layer, it is possible to make values of zero transparent in the Viewer, meaning that they have no opacity. Thus, if a raster layer with zeros is displayed over other layers, the areas with zero values allow the underlying layers to show through.
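Opacity, described next, controls how much of the underlying layers shows through an overlapping layer. A minimal alpha-compositing sketch of the effect (illustrative only, not the Viewer's implementation):

```python
import numpy as np

def composite(top_rgb, under_rgb, opacity_percent):
    """Blend a top layer over an underlying layer: 100% is completely
    opaque, 0% lets the underlying layer show completely."""
    a = opacity_percent / 100.0
    return (a * top_rgb + (1.0 - a) * under_rgb).astype(np.uint8)

top = np.array([255, 0, 0])       # solid red class color
under = np.array([0, 0, 255])     # underlying blue pixel
print(composite(top, under, 50))  # colored-fog effect: [127, 0, 127]
```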

Opacity is a measure of how opaque, or solid, a color is displayed in a raster layer. Opacity is a component of the color scheme of categorical data displayed in pseudo color.
• 100% opacity means that a color is completely opaque, and cannot be seen through.
• 50% opacity lets some color show, and lets some of the underlying layers show through. The effect is like looking at the underlying layers through a colored fog.
• 0% opacity allows underlying layers to show completely.
Opacity can be set at any value in the range of 0% to 100%. By manipulating opacity, you can compare two or more layers of raster data that are displayed in a Viewer.

Use the Contents Panel (Ribbon Workspace) or Arrange Layers dialog (Classic) to restack layers in a Viewer so that they overlap in a different order, if needed.

Non-Overlapping Layers
Multiple layers that are opened in the same Viewer do not have to overlap. Layers that cover distinct geographic areas can be opened in the same Viewer. The layers are automatically positioned in the Viewer window according to their map coordinates, and are positioned relative to one another geographically. The map coordinate systems for the layers must be the same.

Zoom and Roam
Zooming enlarges an image on the display. When an image is zoomed, it can be roamed (scrolled) so that the desired portion of the image appears on the display screen. Any image that does not fit entirely in the Viewer can be roamed and/or zoomed. Roaming and zooming have no effect on how the image is stored in the file.

The zoom ratio describes the size of the image on the screen in terms of the number of file pixels used to store the image. It is the ratio of the number of screen pixels in the X or Y dimension to the number that are used to display the corresponding file pixels. A zoom ratio greater than 1 is a magnification, which makes the image features appear larger in the Viewer. A zoom ratio less than 1 is a reduction, which makes the image features appear smaller in the Viewer.
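The zoom-ratio arithmetic is straightforward, as the specific ratios listed below show. A minimal sketch:

```python
def screen_size(file_pixels, zoom_ratio):
    """Number of screen pixels used to display file_pixels file
    pixels in one dimension at the given zoom ratio."""
    return int(file_pixels * zoom_ratio)

print(screen_size(512, 2))     # 1024: each file pixel -> 2 x 2 screen pixels
print(screen_size(512, 0.5))   # 256: each 2 x 2 file block -> 1 screen pixel
print(screen_size(512, 1.25))  # 640: continuous fractional zoom
```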

• A zoom ratio of 1 means that each file pixel is displayed with 1 screen pixel in the Viewer.
• A zoom ratio of 2 means that each file pixel is displayed with a block of 2 × 2 screen pixels. Effectively, the image is displayed at 200%.
• A zoom ratio of 0.5 means that each block of 2 × 2 file pixels is displayed with 1 screen pixel. Effectively, the image is displayed at 50%.

NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that images can be zoomed at virtually any scale (that is, continuous fractional zoom).

Zoom the data in the Viewer by scrolling the mouse wheel, or using zoom options in the Home tab or the Quick View right-button menu.

Resampling is necessary whenever an image is displayed with a new pixel grid. The resampling method used when an image is zoomed is the same one used when the image is displayed, as specified in the Open Raster Layer dialog. The default resampling method is Nearest Neighbor.

Geographic Information
To prepare to run many programs, it may be necessary to determine the data file coordinates, map coordinates, or data file values for a particular pixel or a group of pixels. By displaying the image in the Viewer and then selecting the pixel(s) of interest, important information about the pixel(s) can be viewed. The Quick View right-button menu gives you options to view information about a specific pixel. Use the Raster Attribute Editor to access information about classes in a thematic layer.

See "Geographic Information Systems" on page 173 for information about attribute data.

Enhancing Continuous Raster Layers
Working with the brightness values in the colormap is useful for image enhancement. Often, a trial and error approach is needed to produce an image that has the right contrast and highlights the right features. By using the tools in the Viewer, it is possible to quickly view the effects of different enhancement techniques, undo enhancements that are not helpful, and then save the best results to disk.

Use the Viewer to .Use the Raster options from the Viewer to enhance continuous raster layers. and incorporated into the same band as the image. The new image file contains three continuous raster layers (RGB).img) from the layer(s) displayed in the Viewer. overwriting the values of the pixels in the image plane. Image Display 171 . vector data can be gridded into an image. The Image Information utility must be used to create statistics for the new image file before the file is enhanced. regardless of how many layers are currently displayed.img function to create a new image file from the currently displayed raster layers. Or. Creating New Image Files It is easy to create a new image file (. See "Enhancement" on page 455 for more information on enhancing continuous raster layers. and written to an image file. Annotation layers can be converted to raster format.


Geographic Information Systems

Introduction
The dawning of GIS can legitimately be traced back to the beginning of the human race. The earliest known map dates back to 2500 B.C., but there were probably maps before that time. Since then, humans have been continually improving the methods of conveying spatial information. The mid-eighteenth century brought the use of map overlays to show troop movements in the Revolutionary War. This could be considered an early GIS.

During the 1800s, many different cartographers and scientists were all discovering the power of overlays to convey multiple levels of information about an area (Star and Estes, 1990). Frederick Law Olmstead has long been considered the father of Landscape Architecture for his pioneering work in the early 20th century. Many of the methods Olmstead used in Landscape Architecture also involved the use of hand-drawn overlays. This type of analysis was beginning to be used for a much wider range of applications, such as change detection, urban planning, and resource management (Rado, 1992).

The first British census in 1825 led to the science of demography, another application for GIS.

The first system to be called a GIS was the Canadian Geographic Information System, developed in 1962 by Roger Tomlinson of the Canada Land Inventory. Unlike earlier systems that were developed for a specific application, this system was designed to store digitized map data and land-based attributes in an easily accessible format for all of Canada. This system is still in operation today (Parent and Church, 1987).

In 1969, Ian McHarg's influential work, Design with Nature, was published. This work on land suitability/capability analysis (SCA), a system designed to analyze many data layers to produce a plan map, discussed the use of overlays of spatially referenced data layers for resource planning and management (Star and Estes, 1990).

The era of modern GIS really started in the 1970s, as analysts began to program computers to automate some of the manual processes. Software companies like ESRI and ERDAS developed software packages that could input, display, and manipulate geographic data to create new layers of information. The steady advances in features and power of the hardware over the last ten years, and the decrease in hardware costs, have made GIS technology accessible to a wide range of users. The growth rate of the GIS industry in the last several years has exceeded even the most optimistic projections.

hardware. store. and software (Walker and Miller. if you are looking for a suitable refuge for bald eagles. E757261 has a data file value 8. a true GIS includes knowledgeable staff. and analyze layers of geographic data to produce interpretable information. 1990). a training program. manipulate. A GIS should also be able to create reports and maps (Marble. Data Information. and floods? Information vs. data. zip code data is probably not needed.” is information. 174 Geographic Information Systems . 1990). hardcopy maps. The information you wish to derive determines the type of data that must be input. tornadoes. or any other data that is needed in a study. For example. Although the term GIS is commonly used to describe software packages. hurricanes. is independently meaningful. It is relevant to a particular problem or question: • • “The land cover at coordinate N875250. as opposed to data. from Landscape Architecture to natural resource management to transportation routing. budgets. a GIS is a unique system designed to input. The GIS database may include computer images. retrieve. statistical data. You can input data into a GIS and output information. marketing. while land cover data may be useful.” is data. such as earthquakes. “Land cover with a value of 8 are on slopes too steep for development. The central purpose of a GIS is to turn geographic data into useful information—the answers to real-life questions—questions such as: • • • • • How can we monitor the influence of global climatic changes on the Earth’s resources? How should political districts be redrawn in a growing metropolitan area? Where is the best place for a shopping center that is most convenient to shoppers and least harmful to the local ecology? What areas should be protected to ensure the survival of endangered species? How can communities be better prepared to face natural disasters. GIS technology can be used in almost any geography-related discipline.Today.

For this reason, the first step in any GIS project is usually an assessment of the scope and goals of the study. Once the project is defined, you can begin the process of building the database. Although software and data are commercially available, a custom database must be created for the particular project and study area. The database must be designed to meet the needs of the organization and objectives. ERDAS IMAGINE provides tools required to build and manipulate a GIS database.

Successful GIS implementation typically includes two major steps:
• data input
• analysis

Data input involves collecting the necessary data layers into a GIS database. In the analysis phase, these data layers are combined and manipulated in order to create new layers and to extract meaningful information from them. This chapter discusses these steps in detail.

Data Input
Acquiring the appropriate data for a project involves creating a database of layers that encompasses the study area. A database created with ERDAS IMAGINE can consist of:
• continuous layers (satellite imagery, aerial photographs, elevation data, etc.)
• thematic layers (land use, vegetation, hydrology, soils, slope, etc.)
• vector layers (streets, utility and communication lines, parcels, etc.)
• statistics (frequency of an occurrence, population demographics, etc.)
• attribute data (characteristics of roads, land, etc.)

The ERDAS IMAGINE software package employs a hierarchical, object-oriented architecture that utilizes both raster imagery and topological vector data. Raster images are stored in image files, and vector layers are coverages or shapefiles based on the ESRI ArcInfo and ArcView data models. The seamless integration of these two types of data enables you to reap the benefits of both data formats in one system.

Figure 46: Data Input
(Figure: raster data input—Landsat TM, SPOT panchromatic, aerial photograph, soils data, land cover—and vector data input—roads, census data, ownership parcels, political boundaries, landmarks—supplying raster attributes and vector attributes to the GIS analyst using ERDAS IMAGINE.)

Raster data might be more appropriate in the following applications:
• site selection
• natural resource management
• petroleum exploration
• mission planning
• change detection

On the other hand, vector data may be better suited for these applications:
• urban planning
• tax assessment and planning
• traffic engineering
• facilities management

The advantage of an integrated raster and vector system such as ERDAS IMAGINE is that one data structure does not have to be chosen over the other. Both data formats can be used and the functions of both types of systems can be accessed. Depending upon the goals of a project, only raster or vector data may be needed, but most applications benefit from using both.

Themes and Layers
A database usually consists of files with data of the same geographical area, with each file containing different types of information. For example, a database for the city recreation department might include files of all the parks in the area. These files might depict park boundaries, county and municipal boundaries, vegetation types, soil types, drainage basins, slope, roads, etc. Each of these files contains different information—each is a different theme. The full collection of data that describe a certain theme is called a layer.

The concept of themes has evolved from early GISs, in which transparent overlays were created for each theme and combined (overlaid) in different ways to derive new information. Much of GIS analysis is concerned with combining individual themes into one or more layers that answer the questions driving the analysis. This chapter explores these analysis techniques.

Depending upon the project, a single theme may require more than a simple raster or vector file to fully describe it. In addition to the image, there may be attribute data that describe the information, a color scheme, or meaningful annotation for the image.

Depending upon the goals of a project, it may be helpful to combine several themes into one layer. For example, if you want to propose a new park site, you might create one layer that shows roads, land ownership, land cover, slope, etc., and indicate through the use of colors and/or annotation which areas would be best for the new site. This one layer would then include many separate themes.

Continuous Layers
Continuous raster layers are quantitative (measuring a characteristic) and have related, continuous values. Continuous raster layers can be multiband (for example, Landsat TM) or single band (for example, SPOT panchromatic). Satellite images, aerial photographs, elevation data, scanned maps, and other continuous raster layers can be incorporated into a database and provide a wealth of information that is not available in thematic layers or vector layers. In fact, these layers often form the foundation of the database.

Extremely accurate base maps can be created from rectified satellite images or aerial photographs. Then, all other layers that are added to the database can be registered to this base map.

Once used only for image processing, continuous data are now being incorporated into GIS databases and used in combination with thematic data to influence processing algorithms or as backdrop imagery on which to display the results of analyses. Current satellite data and aerial photographs are also effective in updating outdated vector data. The vectors can be overlaid on the raster backdrop and updated dynamically to reflect new or changed features, such as roads, utility lines, or land use. This chapter explores the many uses of continuous data in a GIS.

See "Raster Data" on page 1 for more information on continuous data.

Thematic Layers
Thematic data are typically represented as single layers of information stored as image files and containing discrete classes. Classes are simply categories of pixels which represent the same condition. An example of a thematic layer is a vegetation classification with discrete classes representing coniferous forest, deciduous forest, wetlands, agriculture, urban, etc.

A thematic layer is sometimes called a variable, because it represents one of many characteristics about the study area. Since thematic layers usually have only one band, they are usually displayed in pseudo color mode, where particular colors are often assigned to help visualize the information. For example, blues are usually used for water features, greens for healthy vegetation, etc.

See "Image Display" on page 145 for information on pseudo color display.

Class Numbering Systems
As opposed to the data file values of continuous raster layers, which are generally multiband and statistically related, the data file values of thematic raster layers can have a nominal, ordinal, interval, or ratio relationship (Star and Estes, 1990).
• Nominal classes represent categories with no particular order. Usually, these are characteristics that are not associated with quantities (e.g., soil type or political area).
• Ordinal classes are those that have a sequence, such as poor, good, better, and best. An ordinal class numbering system is often created from a nominal system, in which classes have been ranked by some criteria. In the case of the recreation department database used in the previous example, the final layer may rank the proposed park sites according to their overall suitability.
• Interval classes also have a natural sequence, but the distance between each value is meaningful as well. This numbering system might be used for temperature data.
• Ratio classes differ from interval classes only in that ratio classes have a natural zero point, such as rainfall amounts.

The variable being analyzed, and the way that it contributes to the final product, determines the class numbering system used in the thematic layers. Layers that have one numbering system can easily be recoded to a new system. This is discussed in detail under Recoding on page 190.

Classification
Thematic layers can be generated from remotely sensed data (e.g., Landsat TM, SPOT) by using the ERDAS IMAGINE Image Interpreter, Classification, and Spatial Modeler tools. A frequent and popular application is the creation of land cover classification schemes through the use of both supervised (user-assisted) and unsupervised (automatic) pattern-recognition algorithms contained within ERDAS IMAGINE. The output is a single thematic layer that represents specific classes based on the approach selected.

See "Classification" on page 545 for more information.

Vector Data Converted to Raster Format
Vector layers can be converted to raster format if the raster format is more appropriate for an application. Typical vector layers, such as communication lines, streams, boundaries, and other linear features, can easily be converted to raster format within ERDAS IMAGINE for further analysis. Spatial Modeler automatically converts vector layers to raster for processing.

Use the Vector Utilities menu from the Vector icon in the ERDAS IMAGINE icon panel to convert vector layers to raster format, or use the vector layers directly in Spatial Modeler.

Other sources of raster data are discussed in "Raster and Vector Data Sources" on page 55.

Statistics
Both continuous and thematic layers include statistical information. Thematic layers contain the following information:
• a histogram of the data values, which is the total number of pixels in each class
• a list of class names that correspond to class values
• a list of class values
• a color table, stored as brightness values in red, green, and blue, which make up the colors of each class when the layer is displayed

For thematic data, these statistics are called attributes and may be accompanied by many other types of information, as described in Attributes on page 181.

Use the Image Information option on the Viewer's tool bar to generate or update statistics for image files.

See "Raster Data" on page 1 for more information about the statistics stored with continuous layers.

Vector attribute information is stored in either an INFO file. editing. or floating point numbers. You may define fields. which contain similar information for the other classes or features. but more fields can be added as needed to fully describe the data. so a separate section on each follows. Each record is like an index card. Attributes Text and numerical data that are associated with the classes of a thematic layer or the features in a vector layer are called attributes. Both are viewed in CellArrays. Figure 47: Raster Attributes for lnlandc. which allow you to display and manipulate the information. This information can take the form of character strings. dbf file. Attribute information for raster layers is stored in the image file. Attributes work much like the data that are handled by database management software. containing information about one class or feature in a file of many index cards.See "Vector Data" on page 41 for more information on the characteristics of vector data. or SDE database. and other operations. raster and vector attributes are handled slightly differently. or from the Raster Attribute Editor. A record is the set of all attribute data for one class. Figure 47 shows the attributes for a land cover classification layer. there are fields that are automatically generated by the software. However. raster attributes for image files are accessible from the Table tab > Show Attributes option. Raster Attributes In ERDAS IMAGINE. exporting. but also includes options for importing.img Most thematic layers contain the following attribute fields: • • Class Name Class Value Geographic Information Systems 181 . Both consist of a CellArray. which is similar to a table or spreadsheet that not only displays the information. which are categories of information about each class. copying. integer numbers. In both cases.

copy. to locate the class name associated with a particular area in a displayed image. processing may be further refined by comparing the attributes of several files. In some cases it is read-only and in other cases it is a fully functioning editor. The attribute information in a database depends on the goals of the project. allowing the information to be modified. Depending on the type of information associated with the layers of a database. and paste individual cells. you can select features in one using the other. Some of the attribute editing capabilities in ERDAS IMAGINE include: • • import/export ASCII information to and from other software packages. such as spreadsheets and word processors cut. green. When both the raster layer and its associated attribute information are displayed. Viewing Raster Attributes Simply viewing attribute information can be a valuable analysis tool. Attribute information is accessible in several places throughout ERDAS IMAGINE. rows. and blue values) Opacity percentage Histogram (number of pixels in the file that belong to the class) As many additional attribute fields as needed can be defined for each class. For example. simply click in that area with the mouse and the associated row is highlighted in the Raster Attribute Editor. or columns to and from the same Raster Attribute Editor or among several Raster Attribute Editors generate reports that include all or a subset of the information in the Raster Attribute Editor use formulas to populate cells directly edit cells by entering in new information • • • 182 Geographic Information Systems .• • • Color table (red. Manipulating Raster Attributes The applications for manipulating attributes are as varied as the applications for GIS. See "Classification" on page 545 for more information about the attribute information that is automatically generated when new thematic layers are created in the classification process.

For example.The Raster Attribute Editor in ERDAS IMAGINE also includes a color cell column. There is more information on GIS modeling in Graphical Modeling on page 195. You can simply view attributes or use them to: • • • select features in a vector layer for further processing determine how vectors are symbolized label features Figure 48 shows the attributes for a vector layer with polygon features. GIS analysis functions and algorithms are accessible through three main tools: • script models created with SML Geographic Information Systems 183 . Figure 48: Vector Attributes CellArray See "Vector Data" on page 41 for more information about vector attributes. some of the Image Interpreter functions calculate statistics that are automatically added to the Raster Attribute Editor. so that class (object) colors can be viewed or changed. Analysis ERDAS IMAGINE Analysis Tools In ERDAS IMAGINE. In addition to direct manipulation. See "Enhancement" on page 455 for more information on the Image Interpreter. Models that read and/or modify attribute information can also be written. attributes can be changed by other programs. Vector Attributes Vector attributes are stored in the Vector Attributes CellArrays.

However. and has been designed using natural language commands and simple syntax rules. Some applications may require a combination of these tools. Using these tools. Image Interpreter The Image Interpreter houses a set of common functions that were all created using either Model Maker or SML. a GIS that is completely customized to a specific application and its preferences can be created. Many of the functions described in the following sections can be accomplished using any of these tools. 184 Geographic Information Systems . SML is intended for more advanced analyses. They have been given a dialog interface to match the other processes in ERDAS IMAGINE. NOTE: References to the Spatial Modeler in this chapter mean that the named procedure can be accomplished using both Model Maker and SML. Model Maker is also easy to use and utilizes many of the same steps that would be performed when drawing a flow chart of an analysis. It is a modeling language that enables you to create script (text) models for a variety of applications. the actual models are also provided with the software to enable customized processing. or converted to script form and edited further. Graphical models can be run. they can be created with the IMAGINE Developers’ Toolkit™.• • graphical models created with Model Maker prepackaged functions in Image Interpreter Spatial Modeler Language SML is the basis for all ERDAS IMAGINE GIS functions. Models may be used to create custom algorithms that best suit your data and objectives. Model Maker Model Maker is essentially SML linked to a graphical interface. In most cases. If new capabilities are needed. Customizing ERDAS IMAGINE Tools ERDAS Macro Language (EML) enables you to create and add new and/or customized dialogs. using SML. these processes can be run from a single dialog. edited. See the ERDAS IMAGINE On-Line Help for more information about EML and the IMAGINE Developers’ Toolkit. This enables you to create graphical models using a palette of easy-to-use tools. saved in libraries.

However. You can also select a particular AOI that is defined in a separate file (AOI layer. an output layer created from modeling can represent the desired combination of themes from many input layers. density. thematic raster layer. Script modeling—offers all of the capabilities of graphical modeling with the ability to perform more complex functions. sum. such as boundary. or vector layer) or an AOI that is selected immediately preceding the operation by entering specific coordinates or by selecting the area in a Viewer. Geographic Information Systems 185 . Neighborhood analysis —any image processing technique that takes surrounding pixels into consideration. etc. Graphical modeling—enables you to combine data layers in an unlimited number of ways. Matrix analysis—outputs the coincidence values of the input layers. Several types of analyses can be performed. such as convolution filtering and scanning. For example. • • • • • • • • Using an Area of Interest Any of these functions can be performed on a single layer or multiple layers. the layers can be analyzed and new information extracted. Recoding—enables you to assign new class values to all or a subset of the classes in a layer. Overlaying—creates a new file with either the maximum or minimum value of the input layers. Some information can be extracted simply by looking at the layers and visually comparing them to other layers. Contiguity analysis—enables you to identify regions of pixels in the same class and to filter out small regions. This is similar to the convolution filtering performed on continuous data. Indexing—adds the values of the input layers.Analysis Procedures Once the database (layers and attribute data) is assembled. mean. such as conditional looping. new information can be retrieved by combining and comparing layers using the following procedures: • Proximity analysis—the process of categorizing and evaluating pixels based on their distances from other pixels in a specified class or classes.

Proximity Analysis
Many applications require some measurement of distance or proximity. For example, a real estate developer would be concerned with the distance between a potential site for a shopping center and an interchange to a major highway.

Proximity analysis determines which pixels of a layer are located at specified distances from pixels in a certain class or classes. A new thematic layer (image file) is created, which is categorized by the distance of each pixel from specified classes of the input layer. This new file then becomes a new layer of the database and provides a buffer zone around the specified class(es). In further analysis, it may be beneficial to weight other factors, based on whether they fall inside or outside the buffer zone.

Figure 49 shows a layer containing lakes and streams and the resulting layer after a proximity analysis is run to create a buffer zone around all of the water features.

Use the Search (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform a proximity analysis.

Figure 49: Proximity Analysis
(Figure: the original layer of lakes and streams, and the layer after the proximity analysis is performed, showing buffer zones around all of the water features.)
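A proximity analysis of the kind shown in Figure 49 can be sketched with a distance transform. Here scipy's Euclidean distance transform stands in for the Search function; that substitution is an assumption, not the ERDAS implementation:

```python
import numpy as np
from scipy import ndimage

def proximity_buffer(layer, source_classes, buffer_pixels):
    """Categorize pixels by distance from the specified classes:
    0 = source class, 1 = inside buffer zone, 2 = outside."""
    source = np.isin(layer, source_classes)
    # Distance from every pixel to the nearest source-class pixel
    dist = ndimage.distance_transform_edt(~source)
    out = np.full(layer.shape, 2, dtype=np.uint8)
    out[dist <= buffer_pixels] = 1
    out[source] = 0
    return out

# Example: a buffer zone of 5 pixels around water classes 3 and 4
# zones = proximity_buffer(landcover, source_classes=[3, 4], buffer_pixels=5)
```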

Contiguity Analysis
Contiguity analysis is concerned with the ways in which pixels of a class are grouped together. Groups of contiguous pixels in the same class, called raster regions, or clumps, can be identified by their sizes and manipulated. Contiguity analysis can be used to: 1) divide a large class into separate raster regions, or 2) eliminate raster regions that are too small to be considered for an application. One application of this tool would be an analysis for locating helicopter landing zones that require at least 250 contiguous pixels at 10-meter resolution.

Filtering Clumps
In cases where very small clumps are not useful, they can be filtered out according to their sizes. This is sometimes referred to as eliminating the salt and pepper effects, or sieving. In Figure 50, all of the small clumps in the original (clumped) layer are eliminated.

Figure 50: Contiguity Analysis
(Figure: a clumped layer and the sieved layer that results when the small clumps are filtered out.)

Use the Clump and Sieve (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform contiguity analysis.
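Clumping and sieving can be sketched with connected-component labeling; scipy's ndimage.label stands in for the Clump function (again an assumption, not the ERDAS implementation):

```python
import numpy as np
from scipy import ndimage

def clump_and_sieve(layer, class_value, min_pixels):
    """Identify contiguous raster regions (clumps) of one class and
    filter out clumps smaller than min_pixels, eliminating the
    salt and pepper effects."""
    clumps, n = ndimage.label(layer == class_value)
    sizes = np.bincount(clumps.ravel())
    keep = sizes >= min_pixels      # per-clump size test
    keep[0] = False                 # label 0 is the background
    return np.where(keep[clumps], layer, 0)

# Example: landing zones needing at least 250 contiguous pixels
# zones = clump_and_sieve(landcover, class_value=3, min_pixels=250)
```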

Neighborhood Analysis

With a process similar to the convolution filtering of continuous raster layers, thematic raster layers can also be filtered. The GIS filtering process is sometimes referred to as scanning, but is not to be confused with data capture via a digital camera. Neighborhood analysis is based on local or neighborhood characteristics of the data (Star and Estes, 1990). These operations are known as focal operations.

Every pixel is analyzed spatially, according to the pixels that surround it. The number and the location of the surrounding pixels is determined by a scanning window, which is defined by you. The scanning window used in Image Interpreter can be 3 × 3, 5 × 5, or 7 × 7. The scanning window in SML can be of any size. The scanning window in Model Maker is defined by you, can be up to 512 × 512, and has the following constraints:

• circular, with a maximum outer radius of 256
• rectangular, up to 512 × 512 pixels
• doughnut-shaped, with a maximum diameter of 512 pixels, with the option to mask-out certain pixels

Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform neighborhood analysis. Neighborhood analysis creates a new thematic layer.

Defining Scan Area

You may define the area of the file to be scanned. The scanning window moves only through this area as the analysis is performed. The output layer is the same size as the input layer or the selected rectangular portion. Define the area in one or all of the following ways:

• Specify a rectangular portion of the file to scan. The output layer contains only the specified area.
• Specify an area that is defined by an existing AOI layer, an annotation overlay, or a vector layer. The area(s) within the polygon are scanned, and the other areas remain the same.
• Specify a class or classes in another thematic layer to be used as a mask. The pixels in the scanned layer that correspond to the pixels of the selected class or classes in the mask layer are scanned, while the other pixels remain the same.

Figure 51: Using a Mask (mask layer and target layer grids)

In Figure 51, class 2 in the mask layer was selected for the mask. Only the corresponding (shaded) pixels in the target layer are scanned; the other values remain unchanged.
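The mask behavior in Figure 51 amounts to a conditional update, sketched here in Python. This is conceptual only; the small arrays are stand-ins for the figure's grids, and adding 10 stands in for whatever neighborhood operation is applied.

import numpy as np

mask = np.array([[8, 8, 2, 2],
                 [8, 8, 2, 2],
                 [6, 6, 2, 2]])
target = np.array([[3, 3, 4, 4],
                   [3, 3, 4, 4],
                   [5, 5, 5, 5]])

processed = target + 10   # stand-in for any neighborhood operation

# Only pixels where the mask layer is class 2 receive new values;
# all other pixels keep their original values
result = np.where(mask == 2, processed, target)
print(result)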

There are several types of analysis that can be performed upon each window of pixels, as described below:

• Boundary—detects boundaries between classes. The output layer contains only boundary pixels. This is useful for creating boundary or edge lines from classes, such as a land/water interface.
• Density—outputs the number of pixels that have the same class value as the center (analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the analyzed pixel. This is often useful in assessing vegetation crown closure.
• Diversity—outputs the number of class values that are present within the window. Diversity is also a measure of heterogeneity (difference).
• Majority—outputs the class value that represents the majority of the class values in the window. This option operates like a low-frequency filter to clean up a salt and pepper layer.
• Maximum—outputs the greatest class value within the window. This can be used to emphasize classes with the higher class values or to eliminate linear features or boundaries.
• Mean—averages the class values. If class values represent quantitative data, then this option can work like a convolution filter. This is mostly used on ordinal or interval data.
• Median—outputs the statistical median of the class values in the window. This option may be useful if class values represent quantitative data.
• Minimum—outputs the least or smallest class value within the window. This can be used to emphasize classes with the low class values.
• Minority—outputs the least common of the class values that are within the window. This option can be used to identify the least common classes. It can also be used to highlight disconnected linear features.
• Rank—outputs the number of pixels in the scan window whose value is less than the center pixel.
• Standard deviation—outputs the standard deviation of class values in the window.
• Sum—totals the class values. In a file where class values are ranked, totaling enables you to further rank pixels based on their proximity to high-ranking pixels.
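As a concrete case, the Majority option above can be sketched in Python. This is conceptual; SciPy's generic_filter supplies the 3 × 3 scanning window, which is not how ERDAS IMAGINE implements it, and the array values are hypothetical.

import numpy as np
from scipy import ndimage

def majority(window):
    # Most common class value within the scanning window
    values, counts = np.unique(window.astype(int), return_counts=True)
    return values[np.argmax(counts)]

thematic = np.array([[1, 1, 2, 2],
                     [1, 9, 2, 2],   # the 9 is salt-and-pepper noise
                     [1, 1, 2, 2]])

cleaned = ndimage.generic_filter(thematic, majority, size=3)
print(cleaned)   # the stray 9 is replaced by the local majority class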

Figure 52: Sum Option of Neighborhood Analysis (Image Interpreter) — output of one iteration of the sum operation: 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48

In Figure 52, the Sum option of Neighborhood (Image Interpreter) is applied to a 3 × 3 window of pixels in the input layer. In this example, only the pixel in the third column and third row of the file is summed. The analyzed pixel is always the center pixel of the scanning window. In the output layer, the analyzed pixel is given a value based on the total of all of the pixels in the window.

Recoding

Class values can be recoded to new values. Recoding involves the assignment of new values to one or more classes. Recoding is used to:

• reduce the number of classes
• combine classes
• assign different class values to existing classes

When an ordinal, ratio, or interval class numbering system is used, recoding can be used to assign classes to appropriate values. Recoding is often performed to make later steps easier. For example, in creating a model that outputs good, better, and best areas, it may be beneficial to recode the input layers so all of the best classes have the highest class values.

In the following example (Table 35), a land cover layer is recoded so that the most environmentally sensitive areas (Riparian and Wetlands) have higher class values.

Table 35: Example of a Recoded Land Cover Layer

Value   New Value   Class Name
0       0           Background

1       4           Riparian
2       1           Grassland and Scrub
3       1           Chaparral
4       4           Wetlands
5       1           Emergent Vegetation
6       1           Water

Use the Recode (GIS Analysis) function in Image Interpreter or Spatial Modeler to recode layers.
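Conceptually, a recode is a lookup-table operation on class values. A minimal Python sketch of the Table 35 recode follows; it is illustrative only, not the Recode function, and the land cover array is hypothetical.

import numpy as np

# Index = old class value (0-6), entry = new class value, per Table 35
lut = np.array([0, 4, 1, 1, 4, 1, 1])

land_cover = np.array([[1, 2, 2],
                       [4, 3, 6],
                       [0, 5, 1]])

recoded = lut[land_cover]   # fancy indexing applies the lookup per pixel
print(recoded)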

Overlaying

Thematic data layers can be overlaid to create a composite layer. The output layer contains either the minimum or the maximum class values of the input layers. For example, if an area was in class 5 in one layer, and in class 3 in another, and the maximum class value dominated, then the same area would be coded to class 5 in the output layer, as shown in Figure 53.

Figure 53: Overlay (basic overlay and application example; Land Use: 1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands; Original Slope: 1-5 = flat slopes, 6-9 = steep slopes; Recoded Slope: 0 = flat slopes, 9 = steep slopes; Composite: land use classes, with 9 = steep slopes where land use is masked)

The application example in Figure 53 shows the result of combining two layers—slope and land use. The slope layer is first recoded to combine all steep slopes into one value. When overlaid with the land use layer, the highest data file values (the steep slopes) dominate in the output layer.

Use the Overlay (GIS Analysis) function in Image Interpreter or Spatial Modeler to overlay layers.
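A maximum overlay reduces to a per-pixel maximum, as this Python sketch shows. It is conceptual only; the small arrays loosely follow the Figure 53 example, where 9 = steep slopes.

import numpy as np

recoded_slope = np.array([[0, 9, 9],
                          [0, 0, 9],
                          [0, 0, 0]])
land_use = np.array([[2, 3, 3],
                     [1, 2, 3],
                     [1, 1, 5]])

# Steep slopes (9) dominate wherever they occur; land use shows elsewhere
composite = np.maximum(recoded_slope, land_use)
print(composite)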

Indexing

Thematic layers can be indexed (added) to create a composite layer. The output layer contains the sums of the input layer values, as shown in Figure 54. For example, the intersection of class 3 in one layer and class 5 in another would result in class 8 in the output layer.

Figure 54: Indexing (basic index and application example; Access, Soils, and Slope layers, each with 9 = good, 5 = fair, 1 = poor; weighting importance ×1 for access and soils, ×2 for slope; output values calculated)

The application example in Figure 54 shows the result of indexing. In this example, you want to develop a new subdivision, and the most likely sites are where there is the best combination (highest value) of good soils, good slope, and good access. A weighting factor has the effect of multiplying all input values by some constant. Because good slope is a more critical factor to you than good soils or good access, a weighting factor is applied to the slope layer. In this example, slope is given a weight of 2.

Use the Index (GIS Analysis) function in the Image Interpreter or Spatial Modeler to index layers.
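Weighted indexing is a per-pixel weighted sum, as in this Python sketch. It is conceptual; the ×2 slope weight follows the Figure 54 example, and the arrays are hypothetical, with 9 = good, 5 = fair, 1 = poor.

import numpy as np

access = np.array([[9, 5], [1, 9]])
soils = np.array([[9, 9], [5, 1]])
slope = np.array([[9, 5], [9, 9]])

# Slope is the most critical factor, so it is weighted by 2
suitability = 1 * access + 1 * soils + 2 * slope
print(suitability)   # the highest values mark the most likely sites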

Matrix Analysis

Matrix analysis produces a thematic layer that contains a separate class for every coincidence of classes in two layers. The output is best described with a matrix diagram.

              input layer 2 data values (columns)
              0    1    2    3    4    5
input     0   0    0    0    0    0    0
layer 1   1   0    1    2    3    4    5
data      2   0    6    7    8    9    10
values    3   0    11   12   13   14   15
(rows)

In this diagram, the classes of the two input layers represent the rows and columns of the matrix. The output classes are assigned according to the coincidence of any two input classes. All combinations of 0 and any other class are coded to 0, because 0 is usually the background class, representing an area that is not being studied.

Unlike overlaying or indexing, the resulting class values of a matrix operation are unique for each coincidence of two input class values. In this example, the output class value at column 1, row 3 is 11, and the output class at column 3, row 1 is 3. If these files were indexed (summed) instead of matrixed, both combinations would be coded to class 4.

Use the Matrix (GIS Analysis) function in Image Interpreter or Spatial Modeler to matrix layers.
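The unique coincidence coding in the diagram can be reproduced arithmetically. Here is a Python sketch under the diagram's assumptions (layer 1 has classes 0-3, layer 2 has classes 0-5, and 0 is background); the input arrays are hypothetical.

import numpy as np

layer1 = np.array([[1, 2], [3, 0]])
layer2 = np.array([[3, 2], [1, 4]])

n_classes2 = 5   # number of nonzero classes in layer 2

# A unique output class for every coincidence; any 0 input stays 0
out = np.where((layer1 > 0) & (layer2 > 0),
               (layer1 - 1) * n_classes2 + layer2, 0)
print(out)   # layer1=1 with layer2=3 -> 3; layer1=3 with layer2=1 -> 11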

Modeling

Modeling is a powerful and flexible analysis tool. Modeling is the process of creating new layers from combining or operating upon existing layers. Modeling enables you to create a small set of layers—perhaps even a single layer—which, at a glance, contains many types of information about the study area.

For example, if you want to find the best areas for a bird sanctuary, taking into account vegetation, availability of water, climate, and distance from highly developed areas, you would create a thematic layer for each of these criteria. Then, each of these layers would be input to a model. The modeling process would create one thematic layer, showing only the best areas for the sanctuary.

The set of procedures that define the criteria is called a model. In ERDAS IMAGINE, models can be created graphically and resemble a flow chart of steps, or they can be created using a script language. Although these two types of models look different, they are essentially the same—input files are defined, functions and/or operators are specified, and outputs are defined. The model is run and a new output layer(s) is created. Models can utilize analysis functions that have been previously defined, or new functions can be created by you.

Use the Model Maker function in Spatial Modeler to create graphical models and SML to create script models.

Data Layers

In modeling, the concept of layers is especially important. Before computers were used for modeling, the most widely used approach was to overlay registered maps on paper or transparencies, with each map corresponding to a separate theme. Today, digital files replace these hardcopy layers and allow much more flexibility for recoloring, recoding, and reproducing geographical information (Steinitz et al., 1976). In a model, the corresponding pixels at the same coordinates in all input layers are addressed as if they were physically overlaid like hardcopy maps.

Graphical Modeling

Graphical modeling enables you to draw models using a palette of tools that defines inputs, functions, and outputs. This type of modeling is very similar to drawing flowcharts, in that you identify a logical flow of steps needed to perform the desired action. Through the extensive functions and operators available in the ERDAS IMAGINE graphical modeling program, you can analyze many layers of data in very few steps without creating intermediate files that occupy extra disk space. Modeling is performed using a graphical editor that eliminates the need to learn a programming language. Complex models can be developed easily and then quickly edited and re-run on another data set.

Use the Model Maker function in Spatial Modeler to create graphical models.

Image Processing and GIS

In ERDAS IMAGINE, the traditional GIS functions (e.g., neighborhood analysis, proximity analysis, recode, overlay, index, etc.) can be performed in models, as well as image processing functions. Both thematic and continuous layers can be input into models that accomplish many objectives at once.

For example, suppose there is a need to assess the environmental sensitivity of an area for development. An output layer can be created that ranks most to least sensitive regions based on several factors, such as slope, land cover, and floodplain. To visualize the location of these areas, the output thematic layer can be overlaid onto a high resolution, continuous raster layer (e.g., SPOT panchromatic) that has had a convolution filter applied. All of this can be accomplished in a single model (as shown in Figure 55).

Figure 55: Graphical Model for Sensitivity Analysis

See the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating the environmental sensitivity model in Figure 55. Descriptions of all of the graphical models delivered with ERDAS IMAGINE are available in the On-Line Help.

Model Structure

A model created with Model Maker is essentially a flow chart that defines:

• the input image(s), vector(s), matrix(ces), table(s), and scalar(s) to be analyzed
• calculations, functions, or operations to be performed on the input data
• the output image(s), matrix(ces), table(s), and scalar(s) to be created

The graphical models created in Model Maker all have the same basic structure: input, function, output. The number of inputs, functions, and outputs can vary, but the overall form remains constant. All components must be connected to one another before the model can be executed. The model on the left in Figure 56 is the most basic form. The model on the right is more complex, but it retains the same input/function/output flow.

Figure 56: Graphical Model Structure (basic model: input, function, output; complex model: multiple inputs and functions feeding multiple outputs)

Graphical models are stored in ASCII files with the .gmd extension. There are several sample graphical models delivered with ERDAS IMAGINE that can be used as is or edited for more customized processing. See the On-Line Help for instructions on editing existing models.

Model Maker Functions

The functions available in Model Maker are divided into the following categories:

Table 36: Model Maker Functions

Category — Description
Analysis — Includes convolution filtering, histogram matching, contrast stretch, principal components, and more.
Arithmetic — Perform basic arithmetic functions including addition, subtraction, multiplication, division, factorial, and modulus.
Bitwise — Use bitwise and, or, exclusive or, and not.
Boolean — Perform logical functions including and, or, and not.
Color — Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue, saturation).
Conditional — Run logical tests using conditional statements and either...if...or...otherwise.
Data Generation — Create raster layers from map coordinates, column numbers, or row numbers. Create a matrix or table from a list of scalars.
Descriptor — Read attribute information and map a raster through an attribute column.
Distance — Perform distance functions, including proximity analysis.
Exponential — Use exponential operators, including natural and common logarithmic, power, and square root.
Focal (Scan) — Perform neighborhood analysis functions, including boundary, density, diversity, majority, mean, minority, rank, standard deviation, sum, and others.
Focal Use Opts — Constraints on which pixel values to include in calculations for the Focal (Scan) function.
Focal Apply Opts — Constraints on which pixel values to apply the results of calculations for the Focal (Scan) function.
Global — Analyze an entire layer and output one value, such as diversity, maximum, mean, minimum, standard deviation, sum, and more.
Matrix — Multiply, divide, and transpose matrices, as well as convert a matrix to a table and vice versa.
Other — Includes over 20 miscellaneous functions for data type conversion, various tests, and other utilities.
Relational — Includes equality, inequality, greater than, less than, greater than or equal, less than or equal, and others.
Size — Measure cell X and Y size, layer width and height, number of rows and columns, etc.
Stack Statistics — Perform operations over a stack of layers including diversity, majority, max, mean, median, min, minority, standard deviation, and sum.

Table 36: Model Maker Functions (Continued)

Category — Description
Statistical — Includes density, diversity, majority, mean, rank, standard deviation, and more.
String — Manipulate character strings.
Surface — Calculate aspect and degree/percent slope and produce shaded relief.
Trigonometric — Use common trigonometric functions, including sine/arcsine, cosine/arccosine, tangent/arctangent, and their hyperbolic counterparts.
Zonal — Perform zonal operations including summary, diversity, majority, max, mean, min, range, and standard deviation.

These functions are also available for script modeling.

See the ERDAS IMAGINE Tour Guides and the On-Line SML manual for complete instructions on using Model Maker, and for more detailed information about the available functions and operators.

Objects

Within Model Maker, an object is an input to or output from a function. The five basic object types used in Model Maker are:

• raster
• vector
• matrix
• table
• scalar

Raster

A raster object is a single layer or multilayer array of pixel data. Rasters are typically used to specify and manipulate data from image files.

Vector

Vector data in either a vector coverage, shapefile, or annotation layer can be read directly into the Model Maker, converted from vector to raster, then processed similarly to raster data. Model Maker cannot write to coverages, shapefiles, or annotation layers.

calculated (e. For example.648 (signed 32-bit integer) Float—floating point data (double precision) 200 Geographic Information Systems . A table may consist of up to 32. or character strings.483. A table has one column and a fixed number of rows. They can also be used to store covariance matrices. or matrices of linear combination coefficients. Table A table object is a series of numeric values.147. eigenvector matrices. Information in the table can be attributes. A matrix has a fixed number of rows and columns. Matrices may be used to store convolution kernels or the neighborhood definition used in neighborhood functions. The graphics used in Model Maker to represent each of these objects are shown in Figure 57. or defined by you. Scalar A scalar object is a single numeric value.g. color. a table with four rows could be used to store the maximum value from each layer of a four layer image file. colors.648 to 2. Tables are typically used to store columns from the Raster Attribute Editor or a list of values that pertains to the individual layers of a set of layers. histograms).147. Scalars are often used as weighting factors.483.. or character string.Matrix A matrix object is a set of numbers arranged in a two-dimensional array.767 rows. Figure 57: Modeling Objects Matrix Scalar + + Vector Table Raster Data Types The five object types described above may be any of the following data types: • • • Binary—either 0 (false) or 1 (true) Integer—integer values from -2.

• String—a character string (for table objects only)

Input and output data types do not have to be the same. Using SML, you can change the data type of input files before they are processed.

Output Parameters

Since it is possible to have several inputs in one model, you can optionally define the working window and the pixel cell size of the output data, along with the output map projection.

Working Window

Raster layers of differing areas can be input into one model. However, the image area, or working window, must be specified in order to use it in the model calculations. Either of the following options can be selected:

• Union—the model operates on the union of all input rasters. (This is the default.)
• Intersection—the model uses only the area of the rasters that is common to all input rasters.

Pixel Cell Size

Input rasters may also be of differing resolution (pixel size), so you must select the output cell size as either:

• Minimum—the minimum cell size of the input layers is used (this is the default setting).
• Maximum—the maximum cell size of the input layers is used.
• Other—specify a new cell size.

Map Projection

The output map projection defaults to be the same as the first input, or the projection may be selected to be the same as a chosen input. The output projection may also be selected from a projection library.

Using Attributes in Models

With the criteria function in Model Maker, attribute data can be used to determine output values. The criteria function simplifies the process of creating a conditional statement, and can be used to build a table of conditions that must be satisfied to output a particular row value for an attribute (or cell value) associated with the selected raster.

The inputs to a criteria function are rasters or vectors. The columns of the criteria table represent either attributes associated with a raster layer or the layer itself, if the cell values are of direct interest. Criteria which must be met for each output column are entered in a cell in that column (e.g., >5). Multiple sets of criteria may be entered in multiple rows. The output raster contains the first row number of a set of criteria that were met for a raster cell.

Example

For example, consider the sample thematic layer, parks.img, that contains the following attribute information:

Class Name       Histogram   Acres    Path Condition   Turf Condition   Car Spaces
Grant Park       2456        403.45   Fair             Good             127
Piedmont Park    5167        547.88   Good             Fair             94
Candler Park     763         128.90   Excellent        Excellent        65
Springdale Park  548         46.33    None             Excellent        0

A simple model could create one output layer that shows only the parks in need of repairs. The following logic would therefore be coded into the model: "If Turf Condition is not Good or Excellent, and if Path Condition is not Good or Excellent, then the output class value is 1. Otherwise, the output class value is 2."

More than one input layer can also be used. For example, a model could be created, using the input layers parks.img and soils.img, that shows the soil types for parks with either fair or poor turf condition. Attributes can be used from every input file.

The following is a slightly more complex example: if you have a land cover file and you want to create a file of pine forests larger than 10 acres, the criteria function could be used to output values only for areas that satisfy the conditions of being both pine forest and larger than 10 acres. The output file would have two classes: pine forests larger than 10 acres and background. If you want the output file to show varying sizes of pine forest, you would simply add more conditions to the criteria table.

Comparisons of attributes can also be combined with mathematical and logical functions on the class values of the input file(s). With these capabilities, highly complex models can be created.

See the ERDAS IMAGINE Tour Guides or the On-Line Help for specific instructions on using the criteria function.
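The parks example can be mimicked in Python by mapping each pixel's class value to its attributes and testing the conditions. This is a conceptual sketch of the criteria logic, not the Model Maker criteria dialog; the attribute values follow the parks.img table above, and the pixel array is hypothetical.

import numpy as np

# Attribute rows indexed by class value; index 0 is the background class
turf = np.array(["", "Good", "Fair", "Excellent", "Excellent"])
path = np.array(["", "Fair", "Good", "Excellent", "None"])

parks = np.array([[1, 2], [3, 4]])   # pixel values are class numbers

ok = ("Good", "Excellent")
needs_repair = ~np.isin(turf[parks], ok) & ~np.isin(path[parks], ok)

# Class 1 = park in need of repairs, class 2 = otherwise
output = np.where(needs_repair, 1, 2)
print(output)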

Script Modeling

SML is a script language used internally by Model Maker to execute the operations specified in the graphical models that are created. SML can also be used to directly write models of your own. It includes all of the functions available in Model Maker, plus:

• conditional branching and looping
• the ability to use complex data types

Graphical models created with Model Maker can be output to a script file (text only) in SML. These scripts can then be edited with a text editor using SML syntax, and rerun or saved in a library. Script models can also be written from scratch in the text editor. They are stored in ASCII .mdl files.

The Text Editor is available from the Tools menu located on the ERDAS IMAGINE menu bar and from the Model Librarian (Spatial Modeler).

In Figure 58, both the graphical and script models are shown for a tasseled cap transformation. Notice how even the annotation on the graphical model is included in the automatically generated script model. Generating script models from graphical models may aid in learning SML.

Figure 58: Graphical and Script Models For Tasseled Cap Transformation

The graphical model is a flow chart (input raster and matrix feeding a function and an output raster); the automatically generated script model follows:

# TM Tasseled Cap Transformation
# of Lake Lanier, Georgia
#
# declarations
#
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";
#
# set cell size for the model
#
SET CELLSIZE MIN;
#
# set window for the model
#
SET WINDOW UNION;
#
# load matrix n2_Custom_Matrix
#
n2_Custom_Matrix = MATRIX(3, 7:
	0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
	-0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
	0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);
#
# function definitions
#
n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;
QUIT;

Convert graphical models to scripts using Model Maker. Open existing script models from the Model Librarian (Spatial Modeler).

Statements

A script model consists primarily of one or more statements. Each statement falls into one of the following categories:

• Declaration—defines objects to be manipulated within the model
• Assignment—assigns a value to an object
• Show and View—enables you to see and interpret results from the model
• Set—defines the scope of the model or establishes default values used by the Modeler
• Macro Definition—defines substitution text associated with a macro name
• Quit—ends execution of the model

SML also includes flow control structures, so that you can utilize conditional branching and looping in the models, and statement block structures, which cause a set of statements to be executed as a group.

Declaration Example

In the script model in Figure 58, the following lines form the declaration portion of the model:

INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";

Set Example

The following set statements are used:

SET CELLSIZE MIN;
SET WINDOW UNION;

Assignment Example

The following assignment statements are used:

n2_Custom_Matrix = MATRIX(3, 7:
	0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
	-0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
	0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);
n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;
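For readers more familiar with array languages than SML, the LINEARCOMB statement above is simply a per-pixel matrix-vector product: each output band is a linear combination of the seven input bands. A Python equivalent follows; it is conceptual only, with a random array standing in for the Landsat TM image and the matrix repeating the tasseled cap coefficients from Figure 58.

import numpy as np

coeffs = np.array([
    [ 0.33183,  0.33121,  0.55177,  0.42514,  0.48087, 0.0,  0.25252],
    [-0.24717, -0.16263, -0.40639,  0.85468,  0.05493, 0.0, -0.11749],
    [ 0.13929,  0.22490,  0.40359,  0.25178, -0.70133, 0.0, -0.45732],
])

tm = np.random.rand(7, 100, 100)   # stand-in for a 7-band TM image

# Contract the band axis: (3, 7) x (7, rows, cols) -> (3, rows, cols)
tassel = np.tensordot(coeffs, tm, axes=([1], [0]))
print(tassel.shape)   # (3, 100, 100): one layer per tasseled cap component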

Data Types

In addition to the data types utilized by Graphical Modeling, script model objects can store data in the following data types:

• Complex—complex data (double precision)
• Color—three floating point numbers in the range of 0.0 to 1.0, representing intensity of red, green, and blue

Variables

Variables are objects in the Modeler that have been associated with names using Declaration Statements. The declaration statement defines the data type and object type of the variable. The declaration may also associate a raster variable with certain layers of an image file or a table variable with an attribute table. Assignment Statements are used to set or change the value of a variable.

For script model syntax rules, descriptions of all available functions and operators, and sample models, see the On-Line SML manual.

Vector Analysis

Most of the operations discussed in the previous pages of this chapter focus on raster data. However, in a complete GIS database, both raster and vector layers are present. One of the most common applications involving the combination of raster and vector data is the updating of vector layers using current raster imagery as a backdrop for vector editing. For example, if a vector database is more than one or two years old, then there are probably errors due to changes in the area (new roads, moved roads, new development, etc.). When displaying existing vector layers over a raster layer, you can dynamically update the vector layer by digitizing new or changed features on the screen.

Vector layers can also be used to indicate an AOI for further processing. Assume you want to run a site suitability model on only areas designated for commercial development in the zoning ordinances. By selecting these zones in a vector polygon layer, you could restrict the model to only those areas in the raster input files.

Vector layers can also be used as inputs to models. Updated or new attributes may also be written to vector layers in models.

Editing Vector Layers

Editable features are polygons (as lines), lines, label points, and nodes. Editing operations and commands can be performed on multiple or single selections. There can be multiple features selected with a mixture of any and all feature types. In addition to the basic editing operations (e.g., cut, copy, paste, delete), you can also perform the following operations on the line features in multiple or single selections:

• spline—smooths or generalizes all currently selected lines using a specified grain tolerance
• generalize—weeds vertices from selected lines using a specified tolerance
• split/unsplit—makes two lines from one by adding a node, or joins two lines by removing a node
• densify—adds vertices to selected lines at a tolerance you specify
• reshape (for single lines only)—enables you to move the vertices of a line

Reshaping (adding, deleting, or moving a vertex or node) can be done on a single selected line. Table 37 details general editing operations and the feature types that support each of those operations.

Table 37: General Editing Operations and Supporting Feature Types

          Add   Delete   Move   Reshape
Points    yes   yes      yes    no
Lines     yes   yes      yes    yes
Polygons  yes   yes      yes    no
Nodes     yes   yes      yes    no

The Undo utility may be applied to any edits. The software stores all edits in sequential order, so that continually pressing Undo reverses the editing.

Constructing Topology

To create spatial relationships between features in a vector layer, it is necessary to create topology. When topology is constructed, each feature is assigned an internal number. These numbers are then used to determine line connectivity and polygon contiguity. Once calculated, these values are recorded and stored in that layer's associated attribute table. After a vector layer is edited, the topology must be constructed to maintain the topological relationships between features. You must also reconstruct the topology of vector layers imported into ERDAS IMAGINE. Either the Build or Clean option can be used to construct topology.

For more information on vectors, see "Raster and Vector Data Sources" on page 55.

When topology is constructed, feature attribute tables are created with several automatically created fields. Different fields are stored for the different types of layers. The automatically generated fields for a line layer are:

• FNODE#—the internal node number for the beginning of a line (from-node)
• TNODE#—the internal number for the end of a line (to-node)
• LPOLY#—the internal number for the polygon to the left of the line (zero for layers containing only lines and no polygons)
• RPOLY#—the internal number for the polygon to the right of the line (zero for layers containing only lines and no polygons)
• LENGTH—length of each line, measured in layer units
• Cover#—internal line number (values assigned by ERDAS IMAGINE)
• Cover-ID—user-ID (values modified by you)

The automatically generated fields for a point or polygon layer are:

• AREA—area of each polygon, measured in layer units (zero for layers containing only points and no polygons)
• PERIMETER—length of each polygon boundary, measured in layer units (zero for layers containing only points and no polygons)
• Cover#—internal polygon number (values assigned by ERDAS IMAGINE)
• Cover-ID—user-ID (values modified by you)

Building and Cleaning Coverages

The Build option processes points, lines, and polygons, but the Clean option processes only lines and polygons. Build recognizes only existing intersections (nodes), whereas Clean creates intersections (nodes) wherever lines cross one another. The differences in these two options are summarized in Table 38 (Environmental Systems Research Institute, 1990).

Table 38: Comparison of Building and Cleaning Coverages

Capabilities                      Build    Clean
Processes: Polygons               Yes      Yes
Processes: Lines                  Yes      Yes
Processes: Points                 Yes      No
Numbers features                  Yes      Yes
Calculates spatial measurements   Yes      Yes
Creates intersections             No       Yes
Processing speed                  Faster   Slower

Errors

Constructing topology also helps to identify errors in the layer. Some of the common errors found are:

• Lines with less than two nodes
• Polygons that are not closed
• Polygons that have no label point or too many label points
• User-IDs that are not unique

Until topology is constructed, no polygons exist, and lines that cross each other are not connected at a node, because there is no intersection. Constructing topology can identify the errors mentioned above. When topology is constructed, line intersections are created, the lines that make up each polygon are identified, and a label point is associated with each polygon.

You should not build or clean a layer that is displayed in a Viewer, nor should you try to display a layer that is being built or cleaned.

Construct topology using the Vector Utilities menu from the Vector icon in the ERDAS IMAGINE icon panel.

When the Build or Clean options are used to construct the topology of a vector layer, two kinds of potential node errors may be observed: pseudo nodes and dangling nodes. These are identified in the Viewer with special symbols. The default symbols used by IMAGINE are shown in Figure 59, but may be changed in the Vector Properties dialog.

Pseudo nodes occur where a single line connects with itself (an island) or where only two lines intersect. Pseudo nodes do not necessarily indicate an error or a problem. Acceptable pseudo nodes may represent an island (a spatial pseudo node) or the point where a road changes from pavement to gravel (an attribute pseudo node).

A dangling node refers to the unconstructed node of a dangling line. Every line begins and ends at a node point. So if a line does not close properly, or was digitized past an intersection, it registers as a dangling node. In some cases, a dangling node may be acceptable. For example, in a street centerline map, cul-de-sacs or dead-ends are often represented by dangling nodes.

In polygon layers there may be label errors—usually no label point for a polygon, or more than one label point for a polygon. In the latter case, two or more points may have been mistakenly digitized for a polygon. In the former case, it may be that a line does not intersect another line, resulting in an open polygon that cannot be assigned a label point.

Figure 59: Layer Errors (pseudo node on an island; dangling nodes; no label point in one polygon; two label points in one polygon, due to a dangling node)

Errors detected in a layer can be corrected by changing the tolerances set for that layer and building or cleaning again, or by editing the layer manually and then running Build or Clean.

Refer to the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on editing vector layers.

Cartography

Introduction

Maps and mapping are the subject of the art and science known as cartography—creating two-dimensional representations of our three-dimensional Earth. These representations were once hand-drawn with paper and pen. But today, map production is largely automated—and the final output is not always paper. The capabilities of a computer system are invaluable to map users, who often need to know much more about an area than can be reproduced on paper, no matter how large that piece of paper is or how small the annotation is. Maps stored on a computer can be queried, analyzed, and updated quickly.

In the past, map making was carried out by mapping agencies who took the analyst's (be they surveyors, photogrammetrists, or draftsmen) information and created a map to illustrate that information. But now, in many cases, the analyst is the cartographer and can design his maps to best suit the data and the end user.

As the veteran GIS and image processing authority Roger F. Tomlinson said: "Mapped and related statistical data do form the greatest storehouse of knowledge about the condition of the living space of mankind." With this thought in mind, it only makes sense that maps be created as accurately as possible and be as accessible as possible.

This chapter defines some basic cartographic terms and explains how maps are created within the ERDAS IMAGINE environment. It concentrates on the production of digital maps; in this manual, the maps discussed begin as digital files and may be printed later as desired.

Use the Map Composer to create hardcopy and softcopy maps and presentation graphics. See "Hardcopy Output" on page 289 for information about printing hardcopy maps.

Types of Maps

A map is a graphic representation of spatial relationships on the Earth or other planets. Maps can take on many forms and sizes, depending on the intended use of the map. Maps no longer refer only to hardcopy output. Some of the different types of maps are defined below.

Map — Purpose
Aspect — A map that shows the prevailing direction that a slope faces at each pixel. Aspect maps are often color-coded to show the eight major compass directions, or any of 360 degrees.
Base — A map portraying background reference information onto which other information is placed. Base maps usually show the location and extent of natural Earth surface features and permanent human-made objects. Raster imagery, orthophotos, and orthoimages are often used as base maps.
Bathymetric — A map portraying the shape of a water body or reservoir using isobaths (depth contours).
Cadastral — A map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation.
Choropleth — A map portraying properties of a surface using area symbols. Area symbols usually represent categorized classes of the mapped phenomenon.
Composite — A map on which the combined information from different thematic maps is presented.
Contour — A map in which lines are used to connect points of equal elevation. Lines are often spaced in increments of ten or twenty feet or meters.
Derivative — A map created by altering, combining, or analyzing other maps.
Index — A reference map that outlines the mapped area, identifies all of the component maps for the area if several map sheets are required, and identifies all adjacent map sheets.
Inset — A map that is an enlargement of some congested area of a smaller scale map, and that is usually placed on the same sheet with the smaller scale main map.
Isarithmic — A map that uses isorithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. Also called an isometric map.
Isopleth — A map on which isopleths (lines representing quantities that cannot exist at a point, such as population density) are used to represent some selected quantity.
Morphometric — A map representing morphological features of the Earth's surface.
Outline — A map showing the limits of a specific set of mapping entities, such as counties, NTS quads, etc. Outline maps usually contain a very small number of details over the desired boundaries with their descriptive codes.
Planimetric — A map showing only the horizontal position of geographic objects, without topographic features or elevation contours.
Relief — Any map that appears to be, or is, three-dimensional. Also called a shaded relief map.

Slope — A map that shows changes in elevation over distance. Slope maps are usually color-coded according to the steepness of the terrain at each pixel.
Thematic — A map illustrating the class characterizations of a particular spatial variable (e.g., soils, land cover, hydrology, etc.).
Topographic — A map depicting terrain relief.
Viewshed — A map showing only those areas visible (or invisible) from a specified point(s). Also called a line-of-sight map or a visibility map.

In ERDAS IMAGINE, maps are stored as a map file with a .map extension.

Thematic Maps

Thematic maps comprise a large portion of the maps that many organizations create. For this reason, this map type is explored in more detail.

Thematic maps may be subdivided into two groups:

• qualitative
• quantitative

A qualitative map shows the spatial distribution or location of a kind of nominal data. For example, a map showing corn fields in the United States would be a qualitative map. It would not show how much corn is produced in each location, or production relative to the other areas. A quantitative map displays the spatial aspects of numerical data. A map showing corn production (volume) in each area would be a quantitative map. Quantitative maps show ordinal (less than/greater than) and interval/ratio (difference) scale data (Dent, 1985).

You can create thematic data layers from continuous data (aerial photography and satellite images) using the ERDAS IMAGINE classification capabilities. See "Classification" on page 545 for more information.

Base Information

Thematic maps should include a base of information so that the reader can easily relate the thematic data to the real world. This base may range from something as simple as an outline of counties, states, or countries, to something more complex, such as an aerial photograph or satellite image. For example, in a thematic map showing flood plains in the Mississippi River valley, you could overlay the thematic data onto a line coverage of state borders or a satellite image of the area. The satellite image can provide more detail about the areas bordering the flood plains. This may be valuable information when planning emergency response and resource management efforts for the area. Satellite images can also provide very current information about an area, and can assist you in assessing the accuracy of a thematic image.

In ERDAS IMAGINE, you can include multiple layers in a single map composition. See Map Composition on page 246 for more information about creating maps.

Color Selection

The colors used in thematic maps may or may not have anything to do with the class or category of information shown. Cartographers usually try to use a color scheme that highlights the primary purpose of the map. The map reader's perception of colors also plays an important role. Most people are more sensitive to red, followed by green, yellow, blue, and purple. Although color selection is left entirely up to the map designer, some guidelines have been established (Robinson and Sale, 1969):

• When mapping interval or ordinal data, the higher ranks and greater amounts are generally represented by darker colors.
• Use blues for water.
• When mapping elevation data, start with blues for water and greens in the lowlands, ranging up through yellows and browns to reds in the higher elevations. This progression should not be used for series other than elevation.
• In temperature mapping, use red, orange, and yellow for warm temperatures and blue, green, and gray for cool temperatures.
• In land cover mapping, use yellows and tans for dryness and sparse vegetation, and greens for lush vegetation.

• Use browns for land forms.

Use the Raster Attributes option in the Viewer to select and modify class colors.

Annotation

A map is more than just an image(s) on a background. Since a map is a form of communication, it must convey information that may not be obvious by looking at the image. Therefore, maps usually contain several annotation elements to explain the map. Annotation is any explanatory material that accompanies a map to denote graphical features on the map. This annotation may take the form of:

• scale bars
• legends
• neatlines, tick marks, and grid lines
• symbols (north arrows, etc.)
• labels (rivers, mountains, cities, etc.)
• descriptive text (title, copyright, credits, production notes, etc.)

The annotation listed above is made up of single elements. The basic annotation elements in ERDAS IMAGINE include:

• rectangles (including squares)
• ellipses (including circles)
• polygons and polylines
• text

These elements can be used to create more complex annotation, such as legends, scale bars, etc. These annotation components are actually groups of the basic elements and can be ungrouped and edited like any other graphic. You can also create your own groups to form symbols that are not in the ERDAS IMAGINE symbol library. (Symbols are discussed in more detail under Symbols on page 222.)

Create annotation using the Annotation tool palette in the Viewer or in a map composition.

How Annotation is Stored

An annotation layer is a set of annotation elements that is drawn in a Viewer or Map Composer window and stored in a file. Annotation that is created in a Viewer window is stored in a separate file from the other data in the Viewer. These annotation files are called overlay files (.ovr extension). Map annotation that is created in a Map Composer window is also stored in an .ovr file, which is named after the map composition. For example, the annotation for a map composition called lanier.map would be lanier.map.ovr.

Scale

Map scale is a statement that relates distance on a map to distance on the Earth's surface. It is perhaps the most important information on a map, since the level of detail and map accuracy are both factors of the map scale. Scale is directly related to the map extent, or the area of the Earth's surface to be mapped. If a relatively small area is to be mapped, such as a neighborhood or subdivision, then the scale can be larger. If a large area is to be mapped, such as an entire continent, the scale must be smaller. Generally, the smaller the scale, the less detailed the map can be. As a rule, anything smaller than 1:250,000 is considered small-scale.

Scale can be reported in several ways, including:

• representative fraction
• verbal statement
• scale bar

Representative Fraction

Map scale is often noted as a simple ratio or fraction called a representative fraction. A map in which one inch on the map equals 24,000 inches on the ground could be described as having a scale of 1:24,000 or 1/24,000. The units on both sides of the ratio must be the same.

Verbal Statement

A verbal statement of scale describes the distance on the map relative to the distance on the ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to 16 miles. The units on the map and on the ground do not have to be the same in a verbal statement. One-inch and 6-inch maps of the British Ordnance Survey are often referred to by this method (1 inch to 1 mile, 6 inches to 1 mile) (Robinson and Sale, 1969).
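The arithmetic behind a verbal statement is a single division: at 1:1,000,000, one inch represents 1,000,000 inches, and 1,000,000 / 63,360 inches per mile is about 15.8 miles, hence "about 1 inch to 16 miles." A small Python helper makes the conversion explicit (illustrative only):

def verbal_statement(scale_denominator):
    """Ground miles represented by one inch on a 1:n map."""
    miles = scale_denominator / 63360.0   # 63,360 inches per mile
    return f"1 inch represents about {miles:.1f} miles"

print(verbal_statement(1000000))   # 1 inch represents about 15.8 miles
print(verbal_statement(24000))     # 1 inch represents about 0.4 miles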

Scale Bars

A scale bar is a graphic annotation element that describes map scale. It shows the distance on paper that represents a geographical distance on the map. Maps often include more than one scale bar to indicate various measurement systems, such as kilometers and miles.

Figure 60: Sample Scale Bars (one bar in kilometers, one in miles)

Use the Scale Bar tool in the Annotation tool palette to automatically create representative fractions and scale bars. Use the Text tool to create a verbal statement.

Common Map Scales

You can create maps with an unlimited number of scales; however, there are some commonly used scales. Table 39 lists these scales and their equivalents (Robinson and Sale, 1969).

Table 39: Common Map Scales

Map Scale     1/40 inch    1 centimeter   1 inch       1 kilometer is    1 mile is
              represents   represents     represents   represented by    represented by
1:2,000       1.39 yd      20 m           0.032 mi     50.00 cm          31.680 in
1:5,000       3.47 yd      50 m           0.079 mi     20.00 cm          12.672 in
1:10,000      6.94 yd      100 m          0.158 mi     10.00 cm          6.336 in
1:15,840      11.00 yd     0.158 km       0.250 mi     6.31 cm           4.000 in
1:20,000      13.89 yd     0.200 km       0.316 mi     5.00 cm           3.168 in
1:24,000      16.67 yd     0.240 km       0.379 mi     4.17 cm           2.640 in
1:25,000      17.36 yd     0.250 km       0.395 mi     4.00 cm           2.534 in
1:31,680      22.00 yd     0.317 km       0.500 mi     3.16 cm           2.000 in
1:50,000      34.72 yd     0.500 km       0.789 mi     2.00 cm           1.267 in
1:62,500      43.40 yd     0.625 km       0.986 mi     1.60 cm           1.014 in
1:63,360      44.00 yd     0.634 km       1.000 mi     1.58 cm           1.000 in
1:75,000      52.08 yd     0.750 km       1.184 mi     1.33 cm           0.845 in
1:80,000      55.56 yd     0.800 km       1.263 mi     1.25 cm           0.792 in
1:100,000     69.44 yd     1.000 km       1.578 mi     1.00 cm           0.634 in
1:125,000     0.049 mi     1.250 km       1.973 mi     8.00 mm           0.507 in
1:250,000     0.099 mi     2.500 km       3.946 mi     4.00 mm           0.253 in
1:500,000     0.197 mi     5.000 km       7.891 mi     2.00 mm           0.127 in
1:1,000,000   0.395 mi     10.000 km      15.783 mi    1.00 mm           0.063 in

Table 40 shows the number of pixels per inch for selected scales and pixel sizes.

Table 40: Pixels per Inch

                SCALE
Pixel     1"=100'   1"=200'   1"=500'   1"=1000'   1"=1500'   1"=2000'   1"=4167'   1"=1 mile
Size (m)  1:1200    1:2400    1:6000    1:12000    1:18000    1:24000    1:50000    1:63360
1         30.48     60.96     152.40    304.80     457.20     609.60     1270.00    1609.35
2         15.24     30.48     76.20     152.40     228.60     304.80     635.00     804.67
2.5       12.19     24.38     60.96     121.92     182.88     243.84     508.00     643.74
5         6.10      12.19     30.48     60.96      91.44      121.92     254.00     321.87
10        3.05      6.10      15.24     30.48      45.72      60.96      127.00     160.93
15        2.03      4.06      10.16     20.32      30.48      40.64      84.67      107.29
20        1.52      3.05      7.62      15.24      22.86      30.48      63.50      80.47
25        1.22      2.44      6.10      12.19      18.29      24.38      50.80      64.37
30        1.02      2.03      5.08      10.16      15.24      20.32      42.33      53.64
35        0.87      1.74      4.35      8.71       13.06      17.42      36.29      45.98
40        0.76      1.52      3.81      7.62       11.43      15.24      31.75      40.23
45        0.68      1.35      3.39      6.77       10.16      13.55      28.22      35.76
50        0.61      1.22      3.05      6.10       9.14       12.19      25.40      32.19
75        0.41      0.81      2.03      4.06       6.10       8.13       16.93      21.46
100       0.30      0.61      1.52      3.05       4.57       6.10       12.70      16.09
150       0.20      0.41      1.02      2.03       3.05       4.06       8.47       10.73
200       0.15      0.30      0.76      1.52       2.29       3.05       6.35       8.05
250       0.12      0.24      0.61      1.22       1.83       2.44       5.08       6.44
300       0.10      0.20      0.51      1.02       1.52       2.03       4.23       5.36
350       0.09      0.17      0.44      0.87       1.31       1.74       3.63       4.60
400       0.08      0.15      0.38      0.76       1.14       1.52       3.18       4.02
450       0.07      0.14      0.34      0.68       1.02       1.35       2.82       3.58
500       0.06      0.12      0.30      0.61       0.91       1.22       2.54       3.22
600       0.05      0.10      0.25      0.51       0.76       1.02       2.12       2.68
700       0.04      0.09      0.22      0.44       0.65       0.87       1.81       2.30
800       0.04      0.08      0.19      0.38       0.57       0.76       1.59       2.01
900       0.03      0.07      0.17      0.34       0.51       0.68       1.41       1.79
1000      0.03      0.06      0.15      0.30       0.46       0.61       1.27       1.61

Courtesy of D. Cunningham and D. Way, The Ohio State University

Table 41 lists the number of acres and hectares per pixel for various pixel sizes.

Table 41: Acres and Hectares per Pixel

Pixel Size (m)   Acres      Hectares
1                0.0002     0.0001
2                0.0010     0.0004
2.5              0.0015     0.0006
5                0.0062     0.0025
10               0.0247     0.0100
15               0.0556     0.0225
20               0.0988     0.0400
25               0.1544     0.0625
30               0.2224     0.0900
35               0.3027     0.1225
40               0.3954     0.1600
45               0.5004     0.2025
50               0.6178     0.2500
75               1.3900     0.5625
100              2.4710     1.0000
150              5.5598     2.2500
200              9.8842     4.0000
250              15.4440    6.2500
300              22.2394    9.0000
350              30.2703    12.2500
400              39.5367    16.0000
450              50.0386    20.2500
500              61.7761    25.0000
600              88.9576    36.0000
700              121.0812   49.0000
800              158.1468   64.0000
900              200.1546   81.0000
1000             247.1044   100.0000

Courtesy of D. Cunningham and D. Way, The Ohio State University
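Both tables follow directly from unit conversions: pixels per inch is the ground distance represented by one inch (scale denominator × 0.0254 m) divided by the pixel size, and acres per pixel is the pixel area divided by about 4,046.86 m² per acre (hectares: divided by 10,000 m²). A Python check (illustrative only):

def pixels_per_inch(scale_denominator, pixel_size_m):
    ground_m_per_inch = scale_denominator * 0.0254
    return ground_m_per_inch / pixel_size_m

def acres_per_pixel(pixel_size_m):
    return pixel_size_m ** 2 / 4046.86

def hectares_per_pixel(pixel_size_m):
    return pixel_size_m ** 2 / 10000.0

print(round(pixels_per_inch(24000, 30), 2))   # 20.32, as in Table 40
print(round(acres_per_pixel(30), 4))          # 0.2224, as in Table 41
print(round(hectares_per_pixel(30), 4))       # 0.09, as in Table 41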

Legends

A legend is a key to the colors, symbols, and line styles that are used in a map. Legends are especially useful for maps of categorical data displayed in pseudo color, where each color represents a different feature or category. A legend can also be created for a single layer of continuous data, displayed in gray scale. Legends are likewise used to describe all unknown or unique symbols utilized. Symbols in legends should appear exactly the same size and color as they appear on the map (Robinson and Sale, 1969).

Figure 61: Sample Legend (classes: pasture, forest, swamp, developed)

Use the Legend tool in the Annotation tool palette to automatically create color legends. Symbol legends are not created automatically, but can be created manually.

Neatlines, Tick Marks, and Grid Lines

Neatlines, tick marks, and grid lines serve to provide a georeferencing system for map detail, and are based on the map projection of the image shown.

• A neatline is a rectangular border around the image area of a map. It differs from the map border in that the border usually encloses the entire map, not just the image area.
• Tick marks are small lines along the edge of the image area or neatline that indicate regular intervals of distance.
• Grid lines are intersecting lines that indicate regular intervals of distance, based on a coordinate system. Usually, they are an extension of tick marks. Grid lines may also be referred to as a graticule.

It is often helpful to place grid lines over the image area of a map. This is becoming less common on thematic maps, but is really up to the map designer. If the grid lines help readers understand the content of the map, they should be used.

Figure 62: Sample Neatline, Tick Marks, and Grid Lines

Graticules are discussed in more detail in Projections on page 226.

Use the Grid/Tick tool in the Annotation tool palette to create neatlines, tick marks, and grid lines. Tick marks and grid lines can also be created over images displayed in a Viewer. See the On-Line Help for instructions.

Symbols

Since maps are a greatly reduced version of the real world, objects cannot be depicted in their true shape or size. Therefore, a set of symbols is devised to represent real-world objects. There are two major classes of symbols:

• replicative
• abstract

Replicative symbols are designed to look like their real-world counterparts; they represent tangible objects, such as coastlines, trees, railroads, and houses. Abstract symbols usually take the form of geometric shapes, such as circles, squares, and triangles. They are traditionally used to represent amounts that vary from place to place, such as population density, amount of rainfall, etc. (Dent, 1985).

Both replicative and abstract symbols are composed of one or more of the following annotation elements:

• point
• line
• area

Symbol Types

These basic elements can be combined to create three different types of replicative symbols:

• plan—formed after the basic outline of the object it represents. For example, the symbol for a house might be a square, because most houses are rectangular.
• profile—formed like the profile of an object. Profile symbols generally represent vertical objects, such as trees, windmills, and oil wells.
• function—formed after the activity that a symbol represents. For example, on a map of a state park, a symbol of a tent would indicate the location of a camping area.

Figure 63: Sample Symbols (plan, profile, and function examples)

or other explanatory material. this is a very subjective area and many organizations already have guidelines to use. For example. A specific color could be used to indicate county seats. if captions are provided outside of the image area (Dent. larger circles would be used to show areas with higher population. It focuses the reader’s attention on the primary purpose of the map. if you include data that you do not own in a map. Credits Map credits (or source information) can include the data source and acquisition date. For example. The title may be omitted. if a circle is used to show cities and towns. This section is intended as an introduction to the concepts involved and to convey traditional guidelines. Use the Symbol tool in the Annotation tool palette and the symbol library to place symbols in maps. you must give credit to the owner. copyright information. their placement is crucial to effective communication. and other details that are required or helpful to readers. color. Typography and Lettering The choice of type fonts and styles and how names are lettered can make the difference between a clear and attractive map and a jumble of imagery and text. Use the Text tool in the Annotation tool palette to add labels and descriptive text to maps. Descriptive text on a map can include the map title and subtitle. production notes. credits. As with many other aspects of map design. accuracy information. where available. Any features that help orient the reader or are important to the content of the map should be labeled. Since symbols are not drawn to scale. and patterns to indicate different meanings within a map. captions. Labels and Descriptive Text Place names and other labels convey important information to the reader about the features on the map.Symbols can have different sizes. Cartography 223 . The use of size. and pattern generally shows qualitative or quantitative differences among areas marked. however. 1985). Title The map title usually draws attention by virtue of its size. colors.

If your organization does not have a set of guidelines for the appearance of maps and you plan to produce many in the future, it would be beneficial to develop a style guide specifically for mapping. This ensures that all of the maps produced follow the same conventions, regardless of who actually makes the map.

ERDAS IMAGINE enables you to make map templates to facilitate the development of map standards within your organization.

Type Styles

Type style refers to the appearance of the text and may include font, size, and style (bold, italic, underline, etc.). Although the type styles used in maps are purely a matter of the designer's taste, the following techniques help to make maps more legible (Robinson and Sale, 1969; Dent, 1985):

• Do not use too many different typefaces in a single map. Generally, one or two styles are enough when also using the variations of those type faces (e.g., bold, italic, underline, etc.). When using two typefaces, use a serif and a sans serif, rather than two different serif fonts or two different sans serif fonts [e.g., Sans (sans serif) and Roman (serif) could be used together in one map].
• Avoid ornate text styles because they can be difficult to read.
• Exercise caution in using very thin letters that may not reproduce well. On the other hand, using letters that are too bold may obscure important information in the image.
• Use different sizes of type for showing varying levels of importance. For example, on a map with city and town labels, city names are usually in a larger type size than the town names. Use no more than four to six different type sizes.
• Put more important text in labels, titles, and names in all capital letters and lesser important text in lowercase with initial capitals. (Studies have found that capital letters are more difficult to read, therefore lowercase letters might improve the legibility of the map.) However, names in which the letters must be spread out across a large area are better in all capital letters.
• In the past, hydrology, landform, and other natural features were labeled in italic. However, this is not strictly adhered to by map makers today, although water features are still nearly always labeled in italic. This is a matter of personal preference.

Figure 64: Sample Sans Serif and Serif Typefaces with Various Styles Applied

Sans Serif: Sans 10 pt regular, Sans 10 pt italic, Sans 10 pt bold, Sans 10 pt bold italic, SANS 10 PT ALL CAPS
Serif: Roman 10 pt regular, Roman 10 pt italic, Roman 10 pt bold, Roman 10 pt bold italic, ROMAN 10 PT ALL CAPS

Use the Styles dialog to adjust the style of text.

Lettering

Lettering refers to the way in which place names and other labels are added to a map. Letter spacing, orientation, and position are the three most important factors in lettering. Here again, there are no set rules for how lettering is to appear. Much is determined by the purpose of the map and the end user. Many organizations have developed their own rules for lettering. Here is a list of guidelines that have been used by cartographers in the past (Robinson and Sale, 1969; Dent, 1985):

• Names should be either entirely on land or water—not overlapping both.
• Lettering should generally be oriented to match the orientation structure of the map. In large-scale maps this means parallel with the upper and lower edges, and in small-scale maps, this means in line with the parallels of latitude.
• Type should not be curved (i.e., different from the preceding bullet) unless it is necessary to do so.
• If lettering must be disoriented, it should never be set in a straight line, but should always have a slight curve.
• Names should be letter spaced (i.e., space between individual letters, or kerning) as little as necessary.
• Where the continuity of names and other map data, such as lines and tones, conflicts with the lettering, the data, but not the names, should be interrupted.
• Lettering should never be upside-down in any respect.

etc. or developable surface. A map projection is the manner in which the spherical surface of the Earth is represented on a flat (two-dimensional) surface. 226 Cartography . preferably above and to the right. The letters identifying linear features (roads. use the native language of the intended map user.• • Lettering that refers to point locations should be placed above or below the point. studies have shown that coding labels by color can improve a reader’s ability to find information (Dent. The word(s) should be repeated along the feature as often as necessary to facilitate identification. However. railroads. Projections This section is adapted from “Map Projections for Use with the Geographic Information System” by Lee and Walsh (Lee and Walsh. but all involve transfer of the distinctive global patterns of parallels of latitude and meridians of longitude onto an easily flattened surface. For geographical names. 1984). the name Germany should be used. These labels should be placed above the feature and river names should slant in the direction of the river flow (if the label is italic). Bad Lettering Good Bad • Atlanta Atlanta GEORGIA Savannah G e o r g i a Savannah Text Color Many cartographers argue that all lettering on a map should be black. For an English-speaking audience. This can be accomplished by direct geometric projection or by a mathematically derived transformation. In fact.) should not be spaced. rivers. rather than Deutscheland. 1985). There are many kinds of projections. the map may be well-served by incorporating color into its design. Figure 65: Good Lettering vs.

The three most common developable surfaces are the cylinder, cone, and plane (Figure 66 on page 229). A plane is already flat, while a cylinder or cone may be cut and laid out flat, without stretching. Thus, map projections may be classified into three general families: cylindrical, conical, and azimuthal or planar.

Map projections are selected in the Projection Chooser. For more information about the Projection Chooser, see the ERDAS IMAGINE On-Line Help.

Properties of Map Projections

Regardless of what type of projection is used, it is inevitable that some error or distortion occurs in transforming a spherical surface into a flat surface. Ideally, a distortion-free map has four valuable properties:

• conformality
• equivalence
• equidistance
• true direction

Each of these properties is explained below. No map projection can be true in all of these properties. Therefore, each projection is devised to be true in selected properties, or most often, a compromise among selected properties. Projections that compromise in this manner are known as compromise projections.

Conformality is the characteristic of true shape, wherein a projection preserves the shape of any small geographical area. This is accomplished by exact transformation of angles around points. One necessary condition is the perpendicular intersection of grid lines as on the globe. A conformal map or projection is one that has the property of true shape. The property of conformality is important in maps which are used for analyzing, guiding, or recording motion, as in navigation.

Equivalence is the characteristic of equal area, meaning that areas on one portion of a map are in scale with areas in any other portion. Preservation of equivalence involves inexact transformation of angles around points and thus, is mutually exclusive with conformality except along one or two selected lines. The property of equivalence is important in maps that are used for comparing density and distribution data, as in populations.

Equidistance is the characteristic of true distance measuring. The scale of distance is constant over the entire map. This property can be fulfilled on any given map from one, or at most two, points in any direction or along certain lines. Typically, reference lines such as the equator or a meridian are chosen to have equidistance and are termed standard parallels or standard meridians. Equidistance is important in maps that are used for analyzing measurements (i.e., road distances).

True direction is characterized by a direction line between two points that crosses reference lines (e.g., meridians) at a constant angle or azimuth. An azimuth is an angle measured clockwise from a meridian, going north to east. The line of constant or equal direction is termed a rhumb line. The property of constant direction makes it comparatively easy to chart a navigational course. However, on a spherical surface, the shortest surface distance between two points is not a rhumb line, but a great circle, being an arc of a circle whose center is the center of the Earth. Along a great circle, azimuths constantly change (unless the great circle is the equator or a meridian). Note that all meridians are great circles, but the only parallel that is a great circle is the equator. Thus, a more desirable property than true direction may be where great circles are represented by straight lines. This characteristic is most important in aviation.
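Because the azimuth varies continuously along a great circle, the bearing measured at the start of a route is generally not the bearing near its end. The following sketch evaluates the standard spherical-trigonometry bearing formula; it is offered as an illustration on a spherical Earth, not as an ERDAS IMAGINE routine, and the example coordinates are arbitrary.

```python
import math

def initial_azimuth(lat1, lon1, lat2, lon2):
    """Initial great-circle azimuth from point 1 to point 2,
    in degrees clockwise from north, on a spherical Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# The outbound azimuth is not simply the reverse azimuth plus 180 degrees,
# because direction changes along the great circle.
print(initial_azimuth(34.0, -84.0, 52.0, 5.0))   # leaving point 1
print(initial_azimuth(52.0, 5.0, 34.0, -84.0))   # leaving point 2
```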

Figure 66: Projection Types (regular cylindrical, regular conic, transverse cylindrical, polar azimuthal (planar), oblique azimuthal (planar), and oblique cylindrical)

Projection Types

Although a great number of projections have been devised, the majority of them are geometric or mathematical variants of the basic direct geometric projection families described below. Choice of the projection to be used depends upon the true property or combination of properties desired for effective cartographic analysis. Choice of the projection center determines the aspect, or orientation, of the projection surface.

Azimuthal Projections

Azimuthal projections, also called planar projections, are accomplished by drawing lines from a given perspective point through the globe onto a tangent plane. This is conceptually equivalent to tracing a shadow of a figure cast by a light source. A tangent plane intersects the global surface at only one point and is perpendicular to a line passing through the center of the sphere. Thus, these projections are symmetrical around a chosen center or central meridian.

Azimuthal projections may be centered:

• on the poles (polar aspect)
• at a point on the equator (equatorial aspect)
• at any other orientation (oblique aspect)

The origin of the projection lines—that is, the perspective point—may also assume various positions. For example, it may be:

• the center of the Earth (gnomonic)
• an infinite distance away (orthographic)
• on the Earth's surface, opposite the projection plane (stereographic)

Conical Projections

Conical projections are accomplished by intersecting, or touching, a cone with the global surface and mathematically projecting lines onto this developable surface. The conical aspect may be polar, equatorial, or oblique. Only polar conical projections are supported in ERDAS IMAGINE.

A tangent cone intersects the global surface to form a circle. Along this line of intersection, the map is error-free and possesses equidistance. Usually, this line is a parallel, termed the standard parallel.

Cones may also be secant, and intersect the global surface, forming two circles that possess equidistance. In this case, the cone slices underneath the global surface, between the standard parallels. Note that the use of the word secant, in this instance, is only conceptual and not geometrically accurate.

Figure 67: Tangent and Secant Cones (tangent: one standard parallel; secant: two standard parallels)

Cylindrical Projections

Cylindrical projections are accomplished by intersecting, or touching, a cylinder with the global surface. The surface is mathematically projected onto the cylinder, which is then cut and unrolled.

A tangent cylinder intersects the global surface on only one line to form a circle, as with a tangent cone. This central line of the projection is commonly the equator and possesses equidistance. If the cylinder is rotated 90 degrees from the vertical (i.e., the long axis becomes horizontal), then the aspect becomes transverse, wherein the central line of the projection becomes a chosen standard meridian as opposed to a standard parallel. A secant cylinder, one slightly less in diameter than the globe, has two lines possessing equidistance.

Figure 68: Tangent and Secant Cylinders (tangent: one standard parallel; secant: two standard parallels)

Perhaps the most famous cylindrical projection is the Mercator, which became the standard navigational map. Mercator possesses true direction and conformality.

Other Projections

The projections discussed so far are projections that are created by projecting from a sphere (the Earth) onto a plane, cone, or cylinder. Many other projections cannot be created so easily.

Modified projections are modified versions of another projection. For example, the Space Oblique Mercator projection is a modification of the Mercator projection. These modifications are made to reduce distortion, often by including additional standard lines or a different pattern of distortion.

Pseudo projections have only some of the characteristics of another class of projection. For example, the Sinusoidal is called a pseudocylindrical projection because all lines of latitude are straight and parallel, and all meridians are equally spaced. However, it cannot truly be a cylindrical projection, because all meridians except the central meridian are curved. This results in the Earth appearing oval instead of rectangular (Environmental Systems Research Institute, 1991).
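The cylindrical family can be made concrete with the forward equations of the spherical Mercator projection. This is the textbook spherical form, shown only as an illustration; it is not the ellipsoidal formulation a production system would use, and the reference-sphere radius is an assumption of the sketch.

```python
import math

R = 6370997.0  # assumed radius of a reference sphere, in meters

def mercator_forward(lat, lon, lon0=0.0):
    """Spherical Mercator: latitude/longitude in degrees to x, y in meters.
    Latitude must lie strictly between -90 and +90; the poles project to infinity."""
    x = R * math.radians(lon - lon0)
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0))
    return x, y

# Equally spaced parallels land farther and farther apart toward the poles:
# the price of conformality and true direction is severe area distortion.
for lat in (0, 30, 60, 80):
    print(lat, mercator_forward(lat, 10.0))
```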

Geographical and Planar Coordinates

Map projections require a point of reference on the Earth's surface. Most often this is the center, or origin, of the projection. This point is defined in two coordinate systems:

• geographical
• planar

Geographical

Geographical, or spherical, coordinates are based on the network of latitude and longitude (Lat/Lon) lines that make up the graticule of the Earth. Within the graticule, lines of longitude are called meridians, which run north/south, with the prime meridian at 0° (Greenwich, England). Meridians are designated as 0° to 180°, east or west of the prime meridian. The 180° meridian (opposite the prime meridian) is the International Dateline.

Lines of latitude are called parallels, which run east/west. Parallels are designated as 0° at the equator to 90° at the poles. The equator is the largest parallel. Lat/Lon coordinates are reported in degrees, minutes, and seconds.

Map projections are various arrangements of the Earth's latitude and longitude lines onto a plane.

Planar

Planar, or Cartesian, coordinates are defined by a column and row position on a planar grid (X,Y). The origin of a planar coordinate system is typically located south and west of the origin of the projection. The origin of the projection, being a false origin, is defined by values of false easting and false northing. In practice, this eliminates negative coordinate values and allows locations on a map projection to be defined by positive coordinate pairs. Coordinates increase from 0,0 going east and north. Values of false easting are read first and may be in meters or feet. Grid references always contain an even number of digits, and the first half refers to the easting and the second half the northing.

Available Map Projections

In ERDAS IMAGINE, map projection information appears in the Projection Chooser, which is used to georeference images and to convert map coordinates from one type of projection to another. The Projection Chooser provides the following projections:

USGS Projections

• Alaska Conformal
• Albers Conical Equal Area

• Azimuthal Equidistant
• Behrmann
• Bonne
• Cassini
• Eckert I
• Eckert II
• Eckert III
• Eckert IV
• Eckert V
• Eckert VI
• EOSAT SOM
• Equidistant Conic
• Equidistant Cylindrical
• Equirectangular (Plate Carrée)
• Gall Stereographic
• Gauss Kruger
• General Vertical Near-side Perspective
• Geographic (Lat/Lon)
• Gnomonic
• Hammer
• Interrupted Goode Homolosine
• Interrupted Mollweide
• Lambert Azimuthal Equal Area
• Lambert Conformal Conic
• Loximuthal

• Mercator
• Miller Cylindrical
• Modified Transverse Mercator
• Mollweide
• New Zealand Map Grid
• Oblated Equal Area
• Oblique Mercator (Hotine)
• Orthographic
• Plate Carrée
• Polar Stereographic
• Polyconic
• Quartic Authalic
• Robinson
• RSO
• Sinusoidal
• Space Oblique Mercator
• Space Oblique Mercator (Formats A & B)
• State Plane
• Stereographic
• Stereographic (Extended)
• Transverse Mercator
• Two Point Equidistant
• UTM
• Van der Grinten I
• Wagner IV

• Wagner VII
• Winkel I

External Projections

• Albers Equal Area (see Albers Conical Equal Area on page 303)
• Azimuthal Equidistant (see Azimuthal Equidistant on page 306)
• Bipolar Oblique Conic Conformal
• Cassini-Soldner
• Conic Equidistant (see Equidistant Conic on page 333)
• Laborde Oblique Mercator
• Lambert Azimuthal Equal Area (see Lambert Azimuthal Equal Area on page 353)
• Lambert Conformal Conic (see Lambert Conformal Conic on page 356)
• Mercator (see Mercator on page 363)
• Minimum Error Conformal
• Modified Polyconic
• Modified Stereographic
• Mollweide Equal Area (see Mollweide on page 372)
• Oblique Mercator (see Oblique Mercator (Hotine) on page 376)
• Orthographic (see Orthographic on page 379)
• Plate Carrée (see Equirectangular (Plate Carrée) on page 336)
• Rectified Skew Orthomorphic (see RSO on page 392)
• Regular Polyconic (see Polyconic on page 386)
• Robinson Pseudocylindrical (see Robinson on page 390)
• Sinusoidal (see Sinusoidal on page 393)
• Southern Orientated Gauss Conformal

• Stereographic (see Stereographic on page 408)
• Swiss Cylindrical
• Stereographic (Oblique) (see Stereographic on page 408)
• Transverse Mercator (see Transverse Mercator on page 412)
• Universal Transverse Mercator (see UTM on page 416)
• Van der Grinten (see Van der Grinten I on page 419)
• Winkel's Tripel

Choice of the projection to be used depends upon the desired major property and the region to be mapped (see Table 43).

For each map projection, several parameters are required for its definition (see Table 42). These parameters fall into three general classes: (1) definition of the spheroid, (2) definition of the surface viewing window, and (3) definition of scale. After choosing the desired map projection, a menu of spheroids displays, along with appropriate prompts that enable you to specify these parameters.

Units

Use the units of measure that are appropriate for the map projection type.

• Lat/Lon coordinates are expressed in decimal degrees. When prompted, you can use the DD function to convert coordinates in degrees, minutes, seconds format to decimal. For example, for 30°51’12’’:

  dd(30,51,12) = 30.85333
  -dd(30,51,12) = -30.85333
  or 30:51:12 = 30.85333

  You can also enter Lat/Lon coordinates in radians. Note also that values for longitude west of Greenwich, England, and values for latitude south of the equator are to be entered as negatives.

• State Plane coordinates are expressed in feet or meters.
• All other coordinates are expressed in meters.
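The dd conversion is plain arithmetic: degrees plus minutes divided by 60 plus seconds divided by 3600. A Python equivalent of the example above (illustrative only; the DD function itself is part of ERDAS IMAGINE):

```python
def dms_to_dd(degrees, minutes, seconds):
    """Convert degrees, minutes, seconds to decimal degrees."""
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

print(dms_to_dd(30, 51, 12))    # 30.85333...
print(-dms_to_dd(30, 51, 12))   # -30.85333..., for south latitude or west longitude
```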

Table 42: Map Projections

Type  Map projection                           Construction        Property                   Use
0     Geographic                               N/A                 N/A                        Data entry, spherical coordinates
1     Universal Transverse Mercator            Cylinder (see #9)   Conformal                  Data entry, plane coordinates
2     State Plane                              (see #4, 7, 9, 20)  Conformal                  Data entry, plane coordinates
3     Albers Conical Equal Area                Cone                Equivalent                 Middle latitudes, E-W expanses
4     Lambert Conformal Conic                  Cone                Conformal, True Direction  Middle latitudes, E-W expanses, flight (straight great circles)
5     Mercator                                 Cylinder            Conformal, True Direction  Nonpolar regions, navigation (straight rhumb lines)
6     Polar Stereographic                      Plane               Conformal                  Polar regions
7     Polyconic                                Cone                Compromise                 N-S expanses
8     Equidistant Conic                        Cone                Equidistant                Middle latitudes, E-W expanses
9     Transverse Mercator                      Cylinder            Conformal                  N-S expanses
10    Stereographic                            Plane               Conformal                  Hemispheres, continents
11    Lambert Azimuthal Equal Area             Plane               Equivalent                 Square or round expanses
12    Azimuthal Equidistant                    Plane               Equidistant                Polar regions, radio/seismic work (straight great circles)
13    Gnomonic                                 Plane               Compromise                 Navigation, seismic work (straight great circles)
14    Orthographic                             Plane               Compromise                 Globes, pictorial
15    General Vertical Near-Side Perspective   Plane               Compromise                 Hemispheres or less
16    Sinusoidal                               Pseudo-Cylinder     Equivalent                 N-S expanses or equatorial regions
17    Equirectangular                          Cylinder            Compromise                 City maps, computer plotting (simplistic)
18    Miller Cylindrical                       Cylinder            Compromise                 World maps
19    Van der Grinten I                        N/A                 Compromise                 World maps
20    Oblique Mercator                         Cylinder            Conformal                  Oblique expanses (e.g., Hawaiian islands), satellite tracking
21    Space Oblique Mercator                   Cylinder            Conformal                  Mapping of Landsat imagery

Table 42: Map Projections (continued)

Type  Map projection                 Construction   Property    Use
22    Modified Transverse Mercator   Cylinder       Conformal   Alaska

Table 43: Projection Parameters

Table 43 is a matrix that matches each projection type (types 3 through 22, numbered as in Table 42) against the parameters required for its definition. The parameters fall into three general classes:

Definition of Spheroid
• Spheroid selections

Definition of Surface Viewing Window
• False easting
• False northing
• Longitude of central meridian
• Latitude of origin of projection
• Longitude of center of projection
• Latitude of center of projection
• Latitude of first standard parallel
• Latitude of second standard parallel
• Latitude of true scale
• Longitude below pole

Definition of Scale
• Scale factor at central meridian
• Height of perspective point above sphere
• Scale factor at center of projection

1. Numbers are used for reference only and correspond to the numbers used in Table 42.
2. Parameters for definition of map projection types 0-2 are not applicable and are described in the text.

Additional parameters required for definition of the map projection are described in the text of "Map Projections" on page 297.

Choosing a Map Projection

Map Projection Uses in a GIS

Selecting a map projection for the GIS database enables you to (Maling, 1992):

• decide how to best display the area of interest or illustrate the results of analysis
• register all imagery to a single coordinate system for easier comparisons
• test the accuracy of the information and perform measurements on the data

Deciding Factors

Depending on your applications and the uses for the maps created, one or several map projections may be used. Many factors must be weighed when selecting a projection, including:

• type of map
• special properties that must be preserved
• types of data to be mapped
• map accuracy
• scale

If you are mapping a relatively small area, virtually any map projection is acceptable. In small areas, the amount of distortion in a particular projection is barely, if at all, noticeable. In mapping large areas (entire countries, continents, and the world), the choice of map projection becomes more critical. In large areas, there may be little or no distortion in the center of the map, but distortion increases outward toward the edges of the map.

Guidelines

Since the sixteenth century, there have been three fundamental rules regarding map projection use (Maling, 1992):

• if the country to be mapped lies in the tropics, use a cylindrical projection
• if the country to be mapped lies in the temperate latitudes, use a conical projection
• if the map is required to show one of the polar regions, use an azimuthal projection

These rules are no longer held so strongly. There are too many factors to consider in map projection selection for broad generalizations to be effective today. The purpose of a particular map and the merits of the individual projections must be examined before an educated choice can be made. However, there are some guidelines that may help you select a projection (Pearson, 1990):

• Statistical data should be displayed using an equal area projection to maintain proper proportions (although shape may be sacrificed).
• Equal area projections are well-suited to thematic data.
• Where shape is important, use a conformal projection.

Spheroids

The previous discussion of direct geometric map projections assumes that the Earth is a sphere, and for many maps this is satisfactory. However, due to rotation of the Earth around its axis, the planet bulges slightly at the equator. This flattening of the sphere makes it an oblate spheroid, which is an ellipse rotated around its shorter axis.

Figure 69: Ellipse (an ellipse is defined by its semi-major (long) and semi-minor (short) axes)

The amount of flattening of the Earth is expressed as the ratio:

f = (a - b) / a

Where:

a = the equatorial radius (semi-major axis)
b = the polar radius (semi-minor axis)

Most map projections use eccentricity (e²) rather than flattening. The relationship is:

e² = 2f - f²
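Both quantities are easy to evaluate for any spheroid. For example, using the published Clarke 1866 axes (the NAD27 spheroid mentioned below):

```python
def flattening(a, b):
    """f = (a - b) / a, from the semi-major axis a and semi-minor axis b."""
    return (a - b) / a

def eccentricity_squared(f):
    """e^2 = 2f - f^2."""
    return 2.0 * f - f * f

a, b = 6378206.4, 6356583.8    # Clarke 1866 axes, in meters
f = flattening(a, b)
print(f, 1.0 / f)              # about 0.00339, or roughly 1 part in 295
print(eccentricity_squared(f)) # about 0.00677
```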

The flattening of the Earth is about 1 part in 300, and becomes significant in map accuracy at a scale of 1:100,000 or larger.

Calculation of a map projection requires definition of the spheroid (or ellipsoid) in terms of the length of axes and eccentricity squared (or radius of the reference sphere). Several principal spheroids are in use by one or more countries. Differences are due primarily to calculation of the spheroid for a particular region of the Earth's surface. Only recently have satellite tracking data provided spheroid determinations for the entire Earth. However, these spheroids may not give the best fit for a particular region. If other regions are to be mapped, different projections should be used. In North America, the spheroid in use is the Clarke 1866 for NAD27 and GRS 1980 for NAD83 (State Plane).

Upon choosing a desired projection type, you have the option to choose from the following list of spheroids:

• Airy
• Australian National
• Bessel
• Clarke 1866
• Clarke 1880
• Everest
• GRS 1980
• Helmert
• Hough
• International 1909
• Krasovsky
• Mercury 1960
• Modified Airy
• Modified Everest
• Modified Mercury 1968
• New International 1967
• Southeast Asia

• Sphere of Nominal Radius of Earth
• Sphere of Radius 6370997m
• Walbeck
• WGS 66
• WGS 72
• WGS 84

The spheroids listed above are the most commonly used. There are many other spheroids available, and they are listed in the Projection Chooser. These additional spheroids are not documented in this manual.

You can use the IMAGINE Developers' Toolkit to add your own map projections and spheroids to ERDAS IMAGINE.

The semi-major and semi-minor axes of all supported spheroids are listed in Table 44, as well as the principal uses of these spheroids.

Table 44: Earth Spheroids for use with ERDAS IMAGINE (spheroid and principal use; each spheroid is defined in the table by its semi-major and semi-minor axes, in meters)

Spheroid                      Use
165                           Global
Airy (1940)                   England
Airy Modified (1849)          Ireland
Australian National (1965)    Australia
Bessel (1841)                 Central Europe, Chile, and Indonesia
Bessel (Namibia)              Namibia
Clarke 1858                   Global
Clarke 1866                   North America and the Philippines
Clarke 1880                   France and Africa
Clarke 1880 IGN               Global
Everest (1830)                India, Burma, and Pakistan
Everest (1956)                India, Nepal

Table 44: Earth Spheroids for use with ERDAS IMAGINE (continued)

Spheroid                                 Use
Everest (1969)                           Brunei, East Malaysia
Everest (Malaysia & Singapore)           As Everest above, more recent version
Everest (Pakistan)                       Pakistan
Everest (Sabah & Sarawak)                As Everest above, more recent calculation
Fischer (1960)                           Global
Fischer (1968)                           Global
GRS 1980 (Geodetic Reference System)     Adopted in North America for 1983 Earth-centered coordinate system (satellite)
Hayford                                  As International 1909 below
Helmert                                  Egypt
Hough                                    As International 1909, with modification of ellipse axes
IAU 1965                                 Global
Indonesian 1974                          Global
International 1909 (= Hayford)           Remaining parts of the world not listed here
IUGG 1967                                Hungary
Krasovsky (1940)                         Former Soviet Union and some East European countries
Mercury 1960                             Early satellite, rarely used
Modified Airy                            As Airy above, more recent version
Modified Everest                         As Everest above, more recent version
Modified Mercury 1968                    As Mercury 1960 above, more recent calculation
Modified Fischer (1960)                  Singapore
New International 1967                   As International 1909 above, more recent calculation
SGS 85 (Soviet Geodetic System 1985)     Soviet Union
South American (1969)                    South America

Table 44: Earth Spheroids for use with ERDAS IMAGINE (continued)

Spheroid                               Use
Southeast Asia                         As named
Sphere                                 Global
Sphere of Nominal Radius of Earth      A perfect sphere
Sphere of Radius 6370997 m             A perfect sphere with the same surface area as the Clarke 1866 spheroid
Walbeck (1819)                         Soviet Union, up to 1910
WGS 60 (World Geodetic System 1960)    Global
WGS 66 (World Geodetic System 1966)    As WGS 72, older version
WGS 72 (World Geodetic System 1972)    NASA (satellite)
WGS 84 (World Geodetic System 1984)    As WGS 72 above, more recent calculation

Non-Earth Spheroids

Spheroid models can be applied to planetary bodies other than the Earth, such as the Moon, Venus, Mars, various asteroids, and other planets in our Solar System. Spheroids for these planetary bodies have a defined semi-major axis and a semi-minor axis, measured in meters, corresponding to Earth spheroids. See Figure 69 on page 241 for an illustration of the axes defined in an ellipse. The semi-major and semi-minor axes of the supported extraterrestrial spheroids are listed in the following table.

Table 45: Non-Earth Spheroids for use with ERDAS IMAGINE

Spheroid   Semi-Major Axis   Semi-Minor Axis
Moon       1738100.0         1736000.0
Mercury    2439700.0         2439700.0
Venus      6051800.0         6051800.0
Mars       3396200.0         3376200.0
Jupiter    71492000.0        66854000.0
Saturn     60268000.0        54364000.0
Uranus     25559000.0        24973000.0
Neptune    24764000.0        24341000.0
Pluto      1195000.0         1195000.0

Map Composition

Learning Map Composition

Cartography and map composition may seem like an entirely new discipline to many GIS and image processing analysts—and that is partly true. But, by learning the basics of map design, the results of your analyses can be communicated much more effectively. Many GIS analysts may already know more about cartography than they realize, simply because they have access to map-making software. Perhaps the first maps you made were imitations of existing maps, but that is how we learn. Map composition is also much easier than in the past when maps were hand drawn.

This chapter is certainly not a textbook on cartography; it is merely an overview of some of the issues involved in creating cartographically-correct products.

Plan the Map

After your analysis is complete, you can begin map composition. The first step in creating a map is to plan its contents and layout. The following questions may aid in the planning process:

• How is this map going to be used?
• Will the map have a single theme or many?
• Is this a single map, or is it part of a series of similar maps?
• Who is the intended audience? What is the level of their knowledge about the subject matter?
• Will it remain in digital form and be viewed on the computer screen or will it be printed?
• If it is going to be printed, how big will it be? Will it be printed in color or black and white?
• Are there map guidelines already set up by your organization?

The answers to these questions can help to determine the type of information that must go into the composition and the layout of that information. For example, suppose you are going to do a series of maps about global deforestation for presentation to Congress, and you are going to print these maps in color on an inkjet printer. This scenario might lead to the following conclusions:

• A format (layout) should be developed for the series, so that all the maps produced have the same style. The type styles selected should be the same for all maps.
• The colors used should be chosen carefully, since the maps are printed in color.
• Select symbols that are widely recognized, and make sure they are all explained in a legend.
• Cultural features (roads, urban centers, etc.) may be added for locational reference.
• Political boundaries might need to be included, since they influence the types of actions that can be taken in each deforested area.
• Include a statement about the accuracy of each map, since these maps may be used in very high-level decisions.
• The typeface size and style to be used for titles, captions, and labels have to be larger than for maps printed on 8.5" × 11.0" sheets.

Once this information is in hand, you can actually begin sketching the look of the map on a sheet of paper. It is helpful for you to know how you want the map to look before starting the ERDAS IMAGINE Map Composer. Doing so ensures that all of the necessary data layers are available, and makes the composition phase go quickly.

See the tour guide about Map Composer in the ERDAS IMAGINE Tour Guides for step-by-step instructions on creating a map. Refer to the On-Line Help for details about how Map Composer works.

Map Accuracy

Maps are often used to influence legislation, promote a cause, or enlighten a particular group before decisions are made. In these cases, map accuracy is of the utmost importance. There are many factors that influence map accuracy: the projection used, scale, base data, generalization, etc. The analyst/cartographer must be aware of these factors before map production begins. The accuracy of the map, in a large part, determines its usefulness. It is usually up to individual organizations to perform accuracy assessment and decide how those findings are reflected in the products they produce. However, several agencies have established guidelines for map makers.

US National Map Accuracy Standard

The United States Bureau of the Budget has developed the US National Map Accuracy Standard in an effort to standardize accuracy reporting on maps. These guidelines are summarized below (Fisher, 1991):

• On scales smaller than 1:20,000, not more than 10 percent of points tested should be more than 1/50 inch in horizontal error, where points refer only to points that can be well-defined on the ground.
• On maps with scales larger than 1:20,000, the corresponding error term is 1/30 inch.
• At no more than 10 percent of the elevations tested can contours be in error by more than one half of the contour interval.
• Accuracy should be tested by comparison of actual map data with survey data of higher accuracy (not necessarily with ground truth).
• If maps have been tested and do meet these standards, a statement should be made to that effect in the legend.
• Maps that have been tested but fail to meet the requirements should omit all mention of the standards on the legend.
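These tolerances translate directly into ground distance: the allowable error in inches on the sheet times the scale denominator. A short sketch of the arithmetic (the scales chosen are only examples):

```python
def ground_error_feet(scale_denominator, map_error_inches):
    """Ground distance, in feet, equivalent to a horizontal error
    measured on the printed map sheet."""
    return map_error_inches * scale_denominator / 12.0

print(ground_error_feet(24000, 1 / 50))  # 1/50 inch at 1:24,000 -> 40.0 ft
print(ground_error_feet(12000, 1 / 30))  # 1/30 inch at 1:12,000 -> about 33.3 ft
```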

USGS Land Use and Land Cover Map Guidelines

The USGS has set standards of their own for land use and land cover maps (Fisher, 1991):

• The minimum level of accuracy in identifying land use and land cover categories is 85%.
• The several categories shown should have about the same accuracy.
• Accuracy should be maintained between interpreters and times of sensing.

USDA SCS Soils Maps Guidelines

The United States Department of Agriculture (USDA) has set standards for Soil Conservation Service (SCS) soils maps (Fisher, 1991):

• Up to 25% of the pedons may be of other soil types than those named if they do not present a major hindrance to land management.
• Up to only 10% of pedons may be of other soil types than those named if they do present a major hindrance to land management.
• No single included soil type may occupy more than 10% of the area of the map unit.

Digitized Hardcopy Maps

Another method of expanding the database is by digitizing existing hardcopy maps. Although this may seem like an easy way to gather more information, care must be taken in pursuing this avenue if it is necessary to maintain a particular level of accuracy. If the hardcopy maps that are digitized are outdated, or were not produced using the same accuracy standards that are currently in use, the digitized map may negatively influence the overall accuracy of the database.


Rectification

Introduction

Raw, remotely sensed image data gathered by a satellite or aircraft are representations of the irregular surface of the Earth. Even images of seemingly flat areas are distorted by both the curvature of the Earth and the sensor being used. This chapter covers the processes of geometrically correcting an image so that it can be represented on a planar surface, conform to other images, and have the integrity of a map.

A map projection system is any system designed to represent the surface of a sphere or spheroid (such as the Earth) on a plane. There are a number of different map projection methods. Since flattening a sphere to a plane causes distortions to the surface, each map projection system compromises accuracy between certain properties, such as conservation of distance, angle, or area. For example, in equal area map projections, a circle of a specified diameter drawn at any location on the map represents the same total area. This is useful for comparing land use area, density, and many other applications. However, to maintain equal area, the shapes, angles, and scale in parts of the map may be distorted (Jensen, 1996).

There are a number of map coordinate systems for determining location on an image. These coordinate systems conform to a grid, and are expressed as X,Y (column, row) pairs of numbers. Each map projection system is associated with a map coordinate system.

Rectification is the process of transforming the data from one grid system into another grid system using a geometric transformation. While polynomial transformation and triangle-based methods are described in this chapter, discussion about various rectification techniques can be found in Yang (Yang, 1997). Since the pixels of the new grid may not align with the pixels of the original grid, the pixels must be resampled. Resampling is the process of extrapolating data values for the pixels on the new grid from the values of the source pixels.

Registration

In many cases, images of one area that are collected from different sources must be used together. To be able to compare separate images pixel by pixel, the pixel grids of each image must conform to the other images in the data base. The tools for rectifying image data are used to transform disparate images to the same coordinate system. Registration is the process of making an image conform to another image. A map coordinate system is not necessarily involved. For example, if image A is not rectified and it is being used with image B, then image B must be registered to image A so that they conform to each other. In this example, image A is not rectified to a particular map projection, so there is no need to rectify image B to a map projection.

Georeferencing

Georeferencing refers to the process of assigning map coordinates to image data. The image data may already be projected onto the desired plane, but not yet referenced to the proper coordinate system. Rectification, by definition, involves georeferencing, since all map projection systems are associated with map coordinates. Image-to-image registration involves georeferencing only if the reference image is already georeferenced. Georeferencing, by itself, involves changing only the map coordinate information in the image file. The grid of the image does not change.

Geocoded data are images that have been rectified to a particular map projection and pixel size, and usually have had radiometric corrections applied. It is possible to purchase image data that is already geocoded. Geocoded data should be rectified only if they must conform to a different projection system or be registered to other rectified data.

Latitude/Longitude

Lat/Lon is a spherical coordinate system that is not associated with a map projection. Lat/Lon expresses locations in the terms of a spheroid, not a plane. Therefore, an image is not usually rectified to Lat/Lon, although it is possible to convert images to Lat/Lon, and some tips for doing so are included in this chapter.

Orthorectification

Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a DEM of the study area. It is based on collinearity equations, which can be derived by using 3D GCPs. In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended.

See "Photogrammetric Concepts" on page 595 for more information on orthocorrection.

The properties of map projections and of particular map projection systems are discussed in "Cartography" on page 211 and "Map Projections" on page 297.

You can view map projection information for a particular file using the Image Information utility. Image Information allows you to modify map information that is incorrect. However, you cannot rectify data using Image Information. You must use the Rectification tools described in this chapter.

When to Rectify

Rectification is necessary in cases where the pixel grid of the image must be changed to fit a map projection system or a reference image. There are several reasons for rectifying image data:

• comparing pixels scene to scene in applications, such as change detection or thermal inertia mapping (day and night comparison)
• developing GIS data bases for GIS modeling
• identifying training samples according to map coordinates prior to classification
• creating accurate scaled photomaps
• overlaying an image with vector data, such as ArcInfo
• comparing images that are originally at different scales
• extracting accurate distance and area measurements
• mosaicking images
• performing any other analyses requiring precise geographic locations

Before rectifying the data, you must determine the appropriate coordinate system for the data base. To select the optimum map projection and coordinate system, the primary use for the data base must be considered. If you are doing a government project, the projection may be predetermined. A commonly used projection in the United States government is State Plane. Use an equal area projection for thematic or distribution maps and conformal or equal area projections for presentation maps. Before selecting a map projection, consider the following:

• How large or small an area is mapped? Different projections are intended for different size areas.
• Where on the globe is the study area? Polar regions and equatorial regions require different projections for maximum accuracy.
• What is the extent of the study area? Circular, north-south, east-west, and oblique areas may all require different projection systems (Environmental Systems Research Institute, 1992).

When to Georeference Only

Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning and digitizing produce images that are planar, but do not contain any map coordinate information. These images need only to be georeferenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information. This involves redefining:

• the map coordinate of the upper left corner of the image
• the cell size (the area represented by each pixel)

This information is usually the same for each layer of an image file, although it could be different. For example, the cell size of band 6 of Landsat TM data is different than the cell size of the other bands.

Use the Image Information utility to modify image file header information that is incorrect.

Disadvantages of Rectification

During rectification, the data file values of rectified pixels must be resampled to fit into a new grid of pixel rows and columns. Although some of the algorithms for calculating these values are highly reliable, some spectral integrity of the data can be lost during rectification. An unrectified image is more spectrally correct than a rectified image. If map coordinates or map units are not needed in the application, then it may be wiser not to rectify the image.

Classification

Some analysts recommend classification before rectification, since the classification is then based on the original data values. Another benefit is that a thematic file has only one band to rectify instead of the multiple bands of a continuous file. On the other hand, it may be beneficial to rectify the data first, especially when using GPS data for the GCPs. Since these data are very accurate, the classification may be more accurate if the new coordinates help to locate better training samples.

Thematic Files

Nearest neighbor is the only appropriate resampling method for thematic files, which may be a drawback in some applications. The available resampling methods are discussed in detail later in this chapter.

Rectification Steps

NOTE: Registration and rectification involve similar sets of procedures. Throughout this documentation, many references to rectification also apply to image-to-image registration.

Usually, rectification is the conversion of data file coordinates to some other grid and coordinate system, called a reference system. Rectifying or registering image data on disk involves the following general steps, regardless of the application:

1. Locate GCPs.
2. Compute and test a transformation.
3. Create an output image file with the new coordinate information in the header. The pixels must be resampled to conform to the new grid.

Images can be rectified on the display (in a Viewer) or on the disk. Display rectification is temporary, but disk rectification is permanent, because a new file is created. Disk rectification involves:

• rearranging the pixels of the image onto a new grid, which conforms to a plane in the new map projection and coordinate system
• inserting new information to the header of the file, such as the upper left corner map coordinates and the area represented by each pixel

Ground Control Points

GCPs are specific pixels in an image for which the output map coordinates (or other output coordinates) are known. GCPs consist of two X,Y pairs of coordinates:

• source coordinates—usually data file coordinates in the image being rectified
• reference coordinates—the coordinates of the map or reference image to which the source image is being registered

The term map coordinates is sometimes used loosely to apply to reference coordinates and rectified coordinates. These coordinates are not limited to map coordinates. For example, in image-to-image registration, map coordinates are not necessary.

GCPs in ERDAS IMAGINE

Any ERDAS IMAGINE image can have one GCP set associated with it. The GCP set is stored in the image file along with the raster layers. If a GCP set exists for the top layer that is displayed in the Viewer, then those GCPs can be displayed when the Multipoint Geometric Correction tool (IMAGINE ribbon Workspace) or GCP Tool (Classic) is opened.

In the CellArray of GCP data that displays in the Multipoint Geometric Correction tool or GCP Tool, one column shows the point ID of each GCP. The point ID is a name given to GCPs in separate files that represent the same geographic location. Such GCPs are called corresponding GCPs. A default point ID string is provided (such as GCP #1), but you can enter your own unique ID strings to set up corresponding GCPs as needed. Even though only one set of GCPs is associated with an image file, one GCP set can include GCPs for a number of rectifications by changing the point IDs for different groups of corresponding GCPs.

Entering GCPs

Accurate GCPs are essential for an accurate rectification. From the GCPs, the rectified coordinates for all other points in the image are extrapolated. Select many GCPs throughout the scene. The more dispersed the GCPs are, the more reliable the rectification is. GCPs for large-scale imagery might include the intersection of two roads, airport runways, utility corridors, towers, or buildings. For small-scale imagery, larger features such as urban areas or geologic features may be used. Landmarks that can vary (for example, the edges of lakes or other water bodies, vegetation, and so forth) should not be used.

The source and reference coordinates of the GCPs can be entered in the following ways:

• They may be known a priori, and entered at the keyboard.
• Use the mouse to select a pixel from an image in the Viewer. With both the source and reference Viewers open, enter source coordinates and reference coordinates for image-to-image registration. The Multipoint Geometric Correction tool contains both the source and reference Viewers within the tool.
• Use a digitizing tablet to register an image to a hardcopy map.
• Use an existing Ground Control Coordinates file (.gcc file extension). This file contains the X and Y coordinates along with the GCP point ID.

Information on the use and setup of a digitizing tablet is discussed in "Vector Data" on page 41.

Digitizing Tablet Option

If GCPs are digitized from a hardcopy map and a digitizing tablet, accurate base maps must be collected. You should try to match the resolution of the imagery with the scale and projection of the source map. For example, 1:24,000 scale USGS quadrangles make good base maps for rectifying Landsat TM and SPOT imagery. Avoid using maps over 1:250,000, if possible. Coarser maps (that is, 1:250,000) are more suitable for imagery of lower resolution (that is, AVHRR) and finer base maps (that is, 1:24,000) are more suitable for imagery of finer resolution (that is, Landsat and SPOT).

Mouse Option

When entering GCPs with the mouse, you should try to match coarser resolution imagery to finer resolution imagery (that is, Landsat TM to SPOT), and avoid stretching resolution spans greater than a cubic convolution radius (a 4 × 4 area). In other words, you should not try to match Landsat MSS to SPOT or Landsat TM to an aerial photograph.

How GCPs are Stored

GCPs entered with the mouse are stored in the image file, and those entered at the keyboard or digitized using a digitizing tablet are stored in a separate file with the extension .gcc. GCPs entered with the mouse can also be saved as a separate *.gcc file.

GCP Prediction and Matching

Automated GCP prediction enables you to pick a GCP in either coordinate system and automatically locate that point in the other coordinate system based on the current transformation parameters. Automated GCP matching is a step beyond GCP prediction. For image-to-image rectification, a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the geometric transformation. GCP matching enables you to fine tune a rectification for highly accurate results.

Both of these methods require an existing transformation, which consists of a set of coefficients used to convert the coordinates from one system to another.
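The matching step can be pictured as a correlation search: a chip of pixels around the GCP in one image is slid across a search window in the other, and the offset with the strongest correlation wins. The sketch below uses plain normalized cross-correlation with NumPy; it is a simplified stand-in for the matcher in ERDAS IMAGINE, whose actual implementation is not documented here.

```python
import numpy as np

def normalized_correlation(window_a, window_b):
    """Normalized cross-correlation of two equally sized windows, -1.0 to +1.0."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(chip, search_area):
    """Slide chip over search_area; return (row, column, correlation) of the
    best position. A threshold test on the correlation would accept or
    discard the match, as described below."""
    ch, cw = chip.shape
    best = (0, 0, -1.0)
    for i in range(search_area.shape[0] - ch + 1):
        for j in range(search_area.shape[1] - cw + 1):
            r = normalized_correlation(chip, search_area[i:i + ch, j:j + cw])
            if r > best[2]:
                best = (i, j, r)
    return best
```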

GCP Prediction

GCP prediction is a useful technique to help determine if enough GCPs have been gathered. After selecting several GCPs, select a point in either the source or the destination image, then use GCP prediction to locate the corresponding GCP on the other image (map). This point is determined based on the current transformation derived from existing GCPs. Examine the automatically generated point and see how accurate it is. If it is within an acceptable range of accuracy, then there may be enough GCPs to perform an accurate rectification (depending upon how evenly dispersed the GCPs are). If the automatically generated point is not accurate, then more GCPs should be gathered before rectifying the image.

GCP prediction can also be used when applying an existing transformation to another image in a data set. This saves time in selecting another set of GCPs by hand.

GCP Matching

In GCP matching, you can select which layers from the source and destination images to use. Since the matching process is based on the reflectance values, select layers that have similar spectral wavelengths, such as two visible bands or two infrared bands. You can perform histogram matching to ensure that there is no offset between the images. You can also select the radius from the predicted GCP from which the matching operation searches for a spectrally similar pixel. The search window can be any odd size between 5 × 5 and 21 × 21.

Histogram matching is discussed in "Enhancement" on page 455.

A correlation threshold is used to accept or discard points. The correlation ranges from -1.000 to +1.000. The threshold is an absolute value threshold ranging from 0.000 to 1.000. A value of 0.000 indicates a bad match and a value of 1.000 indicates an exact match. Values above 0.8000 or 0.9000 are recommended. If a match cannot be made because the absolute value of the correlation is less than the threshold, you have the option to discard points. Once the GCPs are automatically selected, those that do not meet an acceptable level of error can be edited.

Polynomial Transformation

Polynomial equations are used to convert source file coordinates to rectified map coordinates. Depending upon the distortion in the imagery, the number of GCPs used, and their locations relative to one another, complex polynomial equations may be required to express the needed transformation. The degree of complexity of the polynomial is expressed as the order of the polynomial. The order is simply the highest exponent used in the polynomial.

The order of transformation is the order of the polynomial used in the transformation. Usually, 1st-order or 2nd-order transformations are used. ERDAS IMAGINE allows 1st- through nth-order transformations. You can specify the order of the transformation you want to use in the Transform Editor.

A discussion of polynomials and order is included in "Math Topics" on page 697.

Transformation Matrix

A transformation matrix is computed from the GCPs. The matrix consists of coefficients that are used in polynomial equations to convert the coordinates. The size of the matrix depends upon the order of transformation. The goal in calculating the coefficients of the transformation matrix is to derive the polynomial equations for which there is the least possible amount of error when they are used to transform the reference coordinates of the GCPs into the source coordinates. It is not always possible to derive coefficients that produce no error. For example, in Figure 70, GCPs are plotted on a graph and compared to the curve that is expressed by a polynomial.

Figure 70: Polynomial Curve vs. GCPs (Reference X coordinate plotted against Source X coordinate, showing the GCPs and the polynomial curve)

Every GCP influences the coefficients, even if there is not a perfect fit of each GCP to the polynomial that the coefficients represent. The distance between the GCP reference coordinate and the curve is called RMS error, which is discussed later in this chapter. The least squares regression method is used to calculate the transformation matrix from the GCPs. This common method is discussed in statistics textbooks.

Linear Transformations

A 1st-order transformation is a linear transformation. It can change:

• location in X and/or Y

• scale in X and/or Y
• skew in X and/or Y
• rotation

First-order transformations can be used to project raw imagery to a planar map projection, convert a planar map projection to another planar map projection, and when rectifying relatively small image areas. You can perform simple linear transformations to an image displayed in a Viewer or to the transformation matrix itself. Linear transformations may be required before collecting GCPs on the displayed image. You can reorient skewed Landsat TM data, rotate scanned quad sheets according to the angle of declination stated in the legend, and rotate descending data so that north is up.

A 1st-order transformation can also be used for data that are already projected onto a plane. For example, SPOT and Landsat Level 1B data are already transformed to a plane, but may not be rectified to the desired map projection. When doing this type of rectification, it is not advisable to increase the order of transformation if at first a high RMS error occurs. Examine other factors first, such as the GCP source and distribution, and look for systematic errors.

ERDAS IMAGINE provides the following options for 1st-order transformations:

• scale
• offset
• rotate
• reflect

Scale

Scale is the same as the zoom option in the Viewer, except that you can specify different scaling factors for X and Y. If you are scaling an image in the Viewer, the zoom option undoes any changes to the scale that you do, and vice versa.

Offset

Offset moves the image by a user-specified number of pixels in the X and Y directions.

Rotate

Rotation occurs around the center pixel of the image. For rotation, you can specify any positive or negative number of degrees for clockwise and counterclockwise rotation.

Reflection

Reflection options enable you to perform the following operations:

• left to right reflection
• top to bottom reflection
• left to right and top to bottom reflection (equal to a 180° rotation)

Linear adjustments are available from the Viewer or from the Transform Editor. You can perform linear transformations in the Viewer and then load that transformation to the Transform Editor, or you can perform the linear transformations directly on the transformation matrix. Figure 71 illustrates how the data are changed in linear transformations.

Figure 71: Linear Transformations
[panels: original image; change of scale in X; change of scale in Y; change of skew in X; change of skew in Y; rotation]

The transformation matrix for a 1st-order transformation consists of six coefficients—three for each coordinate (X and Y):

    a0   a1   a2
    b0   b1   b2

Coefficients are used in a 1st-order polynomial as follows:

    xo = a0 + a1x + a2y
    yo = b0 + b1x + b2y

Where:

    x and y are source coordinates (input)
    xo and yo are rectified coordinates (output)
    the coefficients of the transformation matrix are as above

The position of the coefficients in the matrix and the assignment of the coefficients in the polynomial is an ERDAS IMAGINE convention. Other representations of a 1st-order transformation matrix may take a different form.
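As a minimal sketch of how the six coefficients can be derived from GCPs by the least squares regression method mentioned above—the GCP coordinates here are invented for illustration:

    import numpy as np

    # Hypothetical GCPs: source (x, y) and reference (xo, yo) coordinates.
    src = np.array([[10.0, 12.0], [250.0, 30.0], [35.0, 240.0], [260.0, 255.0]])
    ref = np.array([[1002.5, 3508.0], [1243.1, 3489.6], [1030.2, 3279.7], [1255.8, 3263.0]])

    # One row [1, x, y] per GCP; least squares gives (a0, a1, a2) and (b0, b1, b2).
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    a = np.linalg.lstsq(A, ref[:, 0], rcond=None)[0]   # xo = a0 + a1*x + a2*y
    b = np.linalg.lstsq(A, ref[:, 1], rcond=None)[0]   # yo = b0 + b1*x + b2*y

    # Applying the transformation to all source GCPs at once.
    xo = A @ a
    yo = A @ b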

Nonlinear Transformations

Transformations of the 2nd-order or higher are nonlinear transformations. These transformations can correct nonlinear distortions. The process of correcting nonlinear distortions is also known as rubber sheeting. Figure 72 illustrates the effects of some nonlinear transformations.

Figure 72: Nonlinear Transformations
[panels: original image; some possible outputs]

Second-order transformations can be used to convert Lat/Lon data to a planar projection, for data covering a large area (to account for the Earth's curvature), and with distorted data (for example, due to camera lens distortion). Third-order transformations are used with distorted aerial photographs, on scans of warped maps, and with radar imagery. Fourth-order transformations can be used on very distorted aerial photographs.

The transformation matrix for a transformation of order t contains this number of coefficients:

    2 × Σ (i = 1 to t + 1) i

It is multiplied by two for the two sets of coefficients—one set for X, one for Y. An easier way to arrive at the same number is:

    (t + 1) × (t + 2)

Clearly, the size of the transformation matrix increases with the order of the transformation.

Higher Order Polynomials

The polynomial equations for a t-order transformation take this form:

    xo = Σ (i = 0 to t) Σ (j = 0 to i) ak × x^(i-j) × y^j
    yo = Σ (i = 0 to t) Σ (j = 0 to i) bk × x^(i-j) × y^j

Where:

    t is the order of the polynomial
    ak and bk are coefficients
    the subscript k is determined by:

    k = (i × i + i) / 2 + j

An example of 3rd-order transformation equations for X and Y, using numbers, is:

    xo = 5 + 4x - 6y + 10x^2 - 5xy + 1y^2 + 3x^3 + 7x^2y - 11xy^2 + 4y^3
    yo = 13 + 12x + 4y + 1x^2 - 21xy + 11y^2 - 1x^3 + 2x^2y + 5xy^2 + 12y^3

These equations use a total of 20 coefficients:

    (3 + 1) × (3 + 2)
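A direct transcription of these formulas into code may make the bookkeeping clearer. This sketch simply mirrors the definitions above; nothing in it is specific to ERDAS IMAGINE.

    def coefficient_count(t):
        # Total coefficients in an order-t transformation matrix: (t + 1)(t + 2).
        return (t + 1) * (t + 2)

    def term_index(i, j):
        # The subscript k for the coefficient of the x**(i-j) * y**j term.
        return (i * i + i) // 2 + j

    def evaluate(coeffs, t, x, y):
        # xo (or yo) = sum over i = 0..t, j = 0..i of coeffs[k] * x**(i-j) * y**j
        total = 0.0
        for i in range(t + 1):
            for j in range(i + 1):
                total += coeffs[term_index(i, j)] * x ** (i - j) * y ** j
        return total

    # A 3rd-order transformation uses (3 + 1) * (3 + 2) = 20 coefficients in all,
    # that is, 10 for the X polynomial and 10 for the Y polynomial.
    assert coefficient_count(3) == 20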

Effects of Order

The computation and output of a higher-order polynomial equation are more complex than that of a lower-order polynomial equation. Therefore, higher-order polynomials are used to perform more complicated image rectifications. To understand the effects of different orders of transformation in image rectification, it is helpful to see the output of various orders of polynomials.

The following example uses only one coordinate (X), instead of two (X,Y). This enables you to draw two-dimensional graphs that illustrate the way that higher orders of transformation affect the output image. Coefficients like those presented in this example would generally be calculated by the least squares regression method.

NOTE: Because only the X coordinate is used in these examples, the number of GCPs used is less than the number required to actually perform the different orders of transformation.

Suppose GCPs are entered with these X coordinates:

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              17
    2                              9
    3                              1

These GCPs allow a 1st-order transformation of the X coordinates, which is satisfied by this equation (the coefficients are in parentheses):

    xr = (25) + (-8)xi

Where:

    xr = the reference X coordinate
    xi = the source X coordinate

This equation takes on the same format as the equation of a line (y = mx + b). In mathematical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is also known as a linear transformation. This equation is graphed in Figure 73.

Figure 73: Transformation Example—1st-Order
[graph of xr = (25) + (-8)xi; axes: source X coordinate vs. reference X coordinate]

However, what if the second GCP were changed as follows?

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              17
    2                              7
    3                              1

These points are plotted against each other in Figure 74.

Figure 74: Transformation Example—2nd GCP Changed
[the three points plotted; axes: source X coordinate vs. reference X coordinate]

A line cannot connect these points, which illustrates that they cannot be expressed by a 1st-order polynomial, like the one above. In this case, a 2nd-order polynomial equation expresses these points:

    xr = (31) + (-16)xi + (2)xi^2

Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn in Figure 75.

Figure 75: Transformation Example—2nd-Order
[graph of xr = (31) + (-16)xi + (2)xi^2; axes: source X coordinate vs. reference X coordinate]

What if one more GCP were added to the list?

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              17
    2                              7
    3                              1
    4                              5

Figure 76: Transformation Example—4th GCP Added
[the 2nd-order curve xr = (31) + (-16)xi + (2)xi^2 with the new point (4, 5) lying off the curve]

As illustrated in Figure 76, this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. To ensure that all of the GCPs fit, the order of the transformation could be increased to 3rd-order. The equation and graph in Figure 77 could then result.

Figure 77: Transformation Example—3rd-Order
[graph of xr = (25) + (-5)xi + (-4)xi^2 + (1)xi^3; axes: source X coordinate vs. reference X coordinate]

Figure 77 illustrates a 3rd-order transformation. However, this equation may be unnecessarily complex. Performing a coordinate transformation with this equation may cause unwanted distortions in the output image for the sake of a perfect fit for all the GCPs. In this example, a 3rd-order transformation probably would be too high, because the output pixels would be arranged in a different order than the input pixels, in the X direction.

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              xo(1) = 17
    2                              xo(2) = 7
    3                              xo(3) = 1
    4                              xo(4) = 5

    xo(1) > xo(2) > xo(4) > xo(3)
    17 > 7 > 5 > 1

Figure 78: Transformation Example—Effect of a 3rd-Order Transformation
[input image X coordinates 1, 2, 3, 4 map to output image X coordinates 17, 7, 1, 5, so the output pixels are no longer in the input order]

In this case, a higher order of transformation would probably not produce the desired results.

Minimum Number of GCPs

Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation.

The minimum number of points required to perform a transformation of order t equals:

    ((t + 1)(t + 2)) / 2

For 1st- through 10th-order transformations, the minimum number of GCPs required to perform a transformation is listed in the following table:

    Order of Transformation    Minimum GCPs Required
    1                          3
    2                          6
    3                          10
    4                          15
    5                          21
    6                          28
    7                          36
    8                          45
    9                          55
    10                         66

For the best rectification results, you should always use more than the minimum number of GCPs, and they should be well-distributed. Use more than the minimum number of GCPs whenever possible. Although it is possible to get a perfect fit, it is rare, no matter how many GCPs are used.
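The minimum-GCP rule is easy to verify in code; this one-liner reproduces the table above.

    def min_gcps(t):
        # Minimum GCPs for an order-t transformation: ((t + 1)(t + 2)) / 2
        return (t + 1) * (t + 2) // 2

    assert [min_gcps(t) for t in range(1, 11)] == [3, 6, 10, 15, 21, 28, 36, 45, 55, 66]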

Rubber Sheeting

Triangle-Based Finite Element Analysis

The finite element analysis is a powerful tool for solving complicated computation problems which can be approached by small, simpler pieces. It has been widely used as a local interpolation technique in geographic applications. For image rectification, the known control points can be triangulated into many triangles. Then, the polynomial transformation can be used to establish mathematical relationships between source and destination systems for each triangle.

Because the transformation exactly passes through each control point and is not in a uniform manner, finite element analysis is also called rubber sheeting. It can also be called triangle-based rectification, because the transformation and resampling for image rectification are performed on a triangle-by-triangle basis.

This triangle-based technique should be used when other rectification methods, such as polynomial transformation and photogrammetric modeling, cannot produce acceptable results.

Triangulation

To perform the triangle-based rectification, it is necessary to triangulate the control points into a mesh of triangles. Watson (Watson, 1992) summarily listed four kinds of triangulation, including the arbitrary, optimal, Greedy, and Delaunay triangulation. Of the four kinds, the Delaunay triangulation is most widely used and is adopted because of the smaller angle variations of the resulting triangles.

The Delaunay triangulation can be constructed by the empty circumcircle criterion: the circumcircle formed from three points of any triangle does not have any other point inside. The triangles defined this way are the most equiangular possible. Each triangle has three control points as its vertices. Figure 79 shows an example of the triangle network formed by 13 control points.

Figure 79: Triangle Network
[a mesh of triangles whose vertices are the control points p0 through p12]
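Outside of ERDAS IMAGINE, a Delaunay triangulation that honors the empty circumcircle criterion can be built with standard libraries. The sketch below uses SciPy (an assumption—any Delaunay implementation would do); the control point coordinates are made up.

    import numpy as np
    from scipy.spatial import Delaunay

    # Hypothetical control points in the source coordinate system.
    points = np.array([[0, 0], [4, 1], [1, 5], [6, 4], [3, 8], [7, 9], [9, 2]], dtype=float)

    tri = Delaunay(points)
    # Each row of tri.simplices holds the indices of the three control points
    # that form one triangle's vertices; no control point falls inside any
    # triangle's circumcircle (the empty circumcircle criterion).
    for a, b, c in tri.simplices:
        print(points[a], points[b], points[c])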

Triangle-based rectification

Once the triangle mesh has been generated and the spatial order of the control points is available, the geometric rectification can be done on a triangle-by-triangle basis. This triangle-based method is appealing because it breaks the entire region into smaller subsets. If the geometric problem of the entire region is very complicated, the geometry of each subset can be much simpler and modeled through simple transformation.

For each triangle, the polynomials can be used as the general transformation form between source and destination systems.

Linear transformation

The easiest and fastest is the linear transformation with the first order polynomials:

    xo = a0 + a1x + a2y
    yo = b0 + b1x + b2y

There is no need for extra information because there are three known conditions in each triangle and three unknown coefficients for each polynomial.

Nonlinear transformation

Even though the linear transformation is easy and fast, it has one disadvantage: the transitions between triangles are not always smooth. This phenomenon is obvious when shaded relief or contour lines are derived from a DEM which is generated by the linear rubber sheeting. It is caused by incorporating the slope change of the control data at the triangle edges and vertices. In order to distribute the slope change smoothly across triangles, the nonlinear transformation with polynomial order larger than one is used by considering the gradient information.

The fifth order or quintic polynomial transformation is chosen here as the nonlinear rubber sheeting technique. It is a smooth function: the transformation function and its first order partial derivative are continuous. It is not difficult to construct (Akima, 1978). The formula is as follows:

They are shown for each GCP. which are used in the modeling process. Residuals are the distances between the source and retransformed coordinates in one direction. RMS Error RMS error is the distance between the input (source) location of a GCP and the retransformed location for the same GCP. For an exact modeling method like rubber sheeting. the ground control points. do not have much geometric residuals remaining.⎧ ⎪ xo = ⎪ ⎪ ⎨ ⎪ ⎪ yo = ⎪ ⎩ 5 i i–j ∑ ∑ ak ⋅ x i=0 j=0 5 i ⋅y j ∑ ∑ bk ⋅ x i=0 j=0 i–j ⋅y j Check Point Analysis It should be emphasized that the independent check point analysis is critical for determining the accuracy of rubber sheeting modeling. In other words. the accuracy assessment using independent check points is recommended. The Y residual is the distance between the source Y coordinate and the retransformed Y coordinate. then the RMS error is a distance in pixel widths. when the point is transformed with the geometric transformation. To evaluate the geometric transformation between source and destination coordinate systems. RMS error is calculated with a distance equation: RMS error = ( xr – xi ) + ( yr – yi ) 2 2 Where: xi and yi are the input source coordinates xr and yr are the retransformed coordinates RMS error is expressed as a distance in the source coordinate system. Residuals and RMS Error Per GCP The GCP Tool contains columns for the X and Y residuals. an RMS error of 2 means that the reference pixel is 2 pixels away from the retransformed pixel. Rectification 271 . If data file coordinates are the source coordinates. For example. The X residual is the distance between the source X coordinate and the retransformed X coordinate. it is the difference between the desired output coordinate for a GCP and the actual output coordinate for the same point.

This is calculated with a distance formula: Ri = Where: XR i2 + YR i2 Ri XRi YRi = = = the RMS error for GCPi the X residual for GCPi the Y residual for GCPi Figure 80 illustrates the relationship between the residuals and the RMS error per point. Figure 80: Residuals and RMS Error Per Point source GCP X residual RMS error Y residual retransformed GCP Total RMS Error From the residuals.∑ XR i2 n 1 -. the following calculations are made to determine the total RMS error. more points should be added in that direction. the X RMS error. This is a common problem in off-nadir data. RMS Error Per GCP The RMS error of each point is reported to help you evaluate the GCPs.If the GCPs are consistently off in either the X or the Y direction.∑ YR i2 n i=1 i=1 n T = 2 Rx + 2 Ry Ry = 1 or -XR i2 + YR i2 n∑ i=1 n 272 Rectification . and the Y RMS error: n Rx = 1 -.

Where:

    Rx = X RMS error
    Ry = Y RMS error
    T = total RMS error
    n = the number of GCPs
    i = GCP number
    XRi = the X residual for GCPi
    YRi = the Y residual for GCPi

Error Contribution by Point

A normalized value representing each point's RMS error in relation to the total RMS error is also reported. This value is listed in the Contribution column of the GCP Tool.

    Ei = Ri / T

Where:

    Ei = error contribution of GCPi
    Ri = the RMS error for GCPi
    T = total RMS error

Tolerance of RMS Error

In most cases, it is advantageous to tolerate a certain amount of error rather than take a more complex transformation. The amount of RMS error that is tolerated can be thought of as a window around each source coordinate, inside which a retransformed coordinate is considered to be correct (that is, close enough to use). For example, if the RMS error tolerance is 2, then the retransformed pixel can be 2 pixels away from the source pixel and still be considered accurate.

Figure 81: RMS Error Tolerance
[a source pixel surrounded by a 2-pixel RMS error tolerance radius; retransformed coordinates within this range are considered correct]
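All of the error measures defined above follow mechanically from the residuals. The following sketch computes them with NumPy for an arbitrary set of source and retransformed GCP coordinates; the arrays in the usage line are placeholders.

    import numpy as np

    def rms_report(src, retransformed):
        xr_res = retransformed[:, 0] - src[:, 0]      # X residuals, XRi
        yr_res = retransformed[:, 1] - src[:, 1]      # Y residuals, YRi
        per_gcp = np.sqrt(xr_res**2 + yr_res**2)      # Ri, RMS error per GCP
        rx = np.sqrt(np.mean(xr_res**2))              # X RMS error
        ry = np.sqrt(np.mean(yr_res**2))              # Y RMS error
        total = np.sqrt(rx**2 + ry**2)                # T, total RMS error
        contribution = per_gcp / total                # Ei = Ri / T
        return per_gcp, rx, ry, total, contribution

    per_gcp, rx, ry, total, contribution = rms_report(
        np.array([[10.0, 20.0], [30.0, 45.0]]),
        np.array([[10.6, 19.2], [29.5, 46.1]]))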

Acceptable RMS error is determined by the end use of the data base, the type of data being used, and the accuracy of the GCPs and ancillary data being used. For example, GCPs acquired from GPS should have an accuracy of about 10 m, but GCPs from 1:24,000-scale maps should have an accuracy of about 20 m.

It is important to remember that RMS error is reported in pixels. Therefore, if you are rectifying Landsat TM data and want the rectification to be accurate to within 30 meters, the RMS error should not exceed 1.00. Acceptable accuracy depends on the image area and the particular project.

Evaluating RMS Error

To determine the order of polynomial transformation, you can assess the relative distortion in going from image to map or map to map. One should start with a 1st-order transformation unless it is known that it does not work. It is possible to repeatedly compute transformation matrices until an acceptable RMS error is reached. Most rectifications are either 1st-order or 2nd-order. The danger of using higher order rectifications is that the more complicated the equation for the transformation, the less regular and predictable the results are. To fit all of the GCPs, there may be very high distortion in the image.

After each computation of a transformation and RMS error, there are four options:

• Throw out the GCP with the highest RMS error, assuming that this GCP is the least accurate. Another transformation can then be computed from the remaining GCPs. A closer fit should be possible. However, if this is the only GCP in a particular region of the image, it may cause greater error to remove it.
• Tolerate a higher amount of RMS error.
• Increase the complexity of transformation, creating more complex geometric alterations in the image. A transformation can then be computed that can accommodate the GCPs with less error.
• Select only the points for which you have the most confidence.

Resampling Methods

The next step in the rectification/registration process is to create the output file. Since the grid of pixels in the source image rarely matches the grid for the reference image, the pixels are resampled so that new data file values for the output file can be calculated.

Figure 82: Resampling

1. The input image with source GCPs.
2. The output grid, with reference GCPs shown.
3. To compare the two grids, the input image is laid over the output grid, so that the GCPs of the two grids fit together.
4. Using a resampling method, the pixel values of the input image are assigned to pixels in the output grid.

The following resampling methods are supported in ERDAS IMAGINE:

• Nearest Neighbor on page 276—uses the value of the closest pixel to assign to the output pixel value.
• Bilinear Interpolation on page 277—uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.
• Cubic Convolution on page 281—uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output value with a cubic function.
• Bicubic Spline Interpolation on page 284—fits a cubic spline surface through the current block of points.

In all methods, the number of rows and columns of pixels in the output is calculated from the dimensions of the output map, which is determined by the geometric transformation and the cell size. The output corners (upper left and lower right) of the output file can be specified. The default values are calculated so that the entire source file is resampled to the destination file.

If an image to image rectification is being performed, it may be beneficial to specify the output corners relative to the reference file system, so that the images are coregistered. In this case, the upper left X and upper left Y coordinate are 0,0 and not the defaults.

If the output units are pixels, then the origin of the image is the upper left corner. Otherwise, the origin is the lower left corner.

Rectifying to Lat/Lon

You can specify the nominal cell size if the output coordinate system is Lat/Lon. The output cell size for a geographic projection (that is, Lat/Lon) is always in angular units of decimal degrees. However, if you want the cell to be a specific size in meters, you can enter meters and calculate the equivalent size in decimal degrees. For example, if you want the output file cell size to be 30 × 30 meters, then the program would calculate what this size would be in decimal degrees and automatically update the output cell size. Since the transformation between angular (decimal degrees) and nominal (meters) measurements varies across the image, the transformation is based on the center of the output file. Enter the nominal cell size in the Nominal Cell Size dialog.

Nearest Neighbor

To determine an output pixel's nearest neighbor, the rectified coordinates (xo, yo) of the pixel are retransformed back to the source coordinate system using the inverse of the transformation. The retransformed coordinates (xr, yr) are used in bilinear interpolation and cubic convolution as well. The pixel that is closest to the retransformed coordinates (xr, yr) is the nearest neighbor. The data file value(s) for that pixel become the data file value(s) of the pixel in the output image.

Figure 83: Nearest Neighbor
[the input pixel whose center is nearest to the retransformed coordinate (xr, yr) supplies the output value]
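In code, nearest neighbor resampling reduces to rounding the retransformed coordinates. A minimal sketch, assuming (xr, yr) are already expressed in data file (pixel) coordinates:

    import numpy as np

    def nearest_neighbor(image, xr, yr):
        col = int(round(xr))                          # nearest column
        row = int(round(yr))                          # nearest row
        # Clamp to the image bounds so edge pixels remain valid.
        row = min(max(row, 0), image.shape[0] - 1)
        col = min(max(col, 0), image.shape[1] - 1)
        return image[row, col]                        # original value, unaveraged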

Advantages:

• Transfers original data values without averaging them as the other methods do; therefore, the extremes and subtleties of the data values are not lost. This is an important consideration when discriminating between vegetation types, locating an edge associated with a lineament, or determining different levels of turbidity or temperatures in a lake (Jensen, 1996).
• Suitable for use before classification.
• The easiest of the three methods to compute and the fastest to use.
• Appropriate for thematic files, which can have data file values based on a qualitative (nominal or ordinal) system or a quantitative (interval or ratio) system. The averaging that is performed with bilinear interpolation and cubic convolution is not suited to a qualitative class value system.

Disadvantages:

• When this method is used to resample from a larger to a smaller grid size, there is usually a stair stepped effect around diagonal lines and curves.
• Data values may be dropped, while other values may be duplicated.
• Using on linear thematic data (for example, roads, streams) may result in breaks or gaps in a network of linear data.

Bilinear Interpolation

In bilinear interpolation, the data file value of the rectified pixel is based upon the distances between the retransformed coordinate location (xr, yr) and the four closest pixels in the input (source) image (see Figure 84). In this example, the neighbor pixels are numbered 1, 2, 3, and 4. Given the data file values of these four pixels on a grid, the task is to calculate a data file value for r (Vr).

Figure 84: Bilinear Interpolation
[pixels 1, 2, 3, and 4 surround r, the location of the retransformed coordinate (xr, yr); m and n lie between the pixel pairs; dx and dy are offsets; D is the distance between pixels]

To calculate Vr, first Vm and Vn are considered. By interpolating Vm and Vn, you can perform linear interpolation, which is a simple process to illustrate. The data file value of m (Vm) is a function of the change in the data file value between pixels 3 and 1 (that is, V3 - V1). If the data file values are plotted in a graph relative to their distances from one another, then a visual linear interpolation is apparent.

Figure 85: Linear Interpolation
[calculating a data file value as a function of spatial distance between two pixels; the slope of the line is (V3 - V1) / D]

The equation for calculating Vm from V1 and V3 is:

    Vm = ((V3 - V1) / D) × dy + V1

Where:

    Yi = the Y coordinate for pixel i
    Vi = the data file value for pixel i
    dy = the distance between Y1 and Ym in the source coordinate system
    D = the distance between Y1 and Y3 in the source coordinate system

If one considers that (V3 - V1) / D is the slope of the line in the graph above, then this equation translates to the equation of a line in y = mx + b form.

Similarly, the equation for calculating the data file value for n (Vn) in the pixel grid is:

    Vn = ((V4 - V2) / D) × dy + V2

From Vn and Vm, the data file value for r, which is at the retransformed coordinate location (xr, yr), can be calculated in the same manner:

    Vr = ((Vn - Vm) / D) × dx + Vm

The following is attained by plugging in the equations for Vm and Vn to this final equation for Vr:

    Vr = ( (((V4 - V2) / D) × dy + V2) - (((V3 - V1) / D) × dy + V1) ) / D × dx
         + ((V3 - V1) / D) × dy + V1

In most cases D = 1, since data file coordinates are used as the source coordinates and data file coordinates increment by 1. This simplifies to:

    Vr = ( V1(D - dx)(D - dy) + V2(dx)(D - dy) + V3(D - dx)(dy) + V4(dx)(dy) ) / D^2

Some equations for bilinear interpolation express the output data file value as:

    Vr = Σ wi × Vi

Where:

    wi is a weighting factor

The equation above could be expressed in a similar format, in which the calculation of wi is apparent:

    Vr = Σ (i = 1 to 4) ( (D - Δxi)(D - Δyi) / D^2 ) × Vi

Where:

    Δxi = the change in the X direction between (xr, yr) and the data file coordinate of pixel i
    Δyi = the change in the Y direction between (xr, yr) and the data file coordinate of pixel i
    Vi = the data file value for pixel i
    D = the distance between pixels (in X or Y) in the source coordinate system

For each of the four pixels, the data file value is weighted more if the pixel is closer to (xr, yr).
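The final weighted form translates directly into code. A minimal sketch, with the pixels numbered as in Figure 84 (v1 and v2 on the upper row, v3 and v4 on the lower row):

    def bilinear(v1, v2, v3, v4, dx, dy, d=1.0):
        # Vr = (V1(D-dx)(D-dy) + V2(dx)(D-dy) + V3(D-dx)(dy) + V4(dx)(dy)) / D^2
        return (v1 * (d - dx) * (d - dy)
                + v2 * dx * (d - dy)
                + v3 * (d - dx) * dy
                + v4 * dx * dy) / (d * d)

    # At dx = dy = 0 the result is exactly v1; at the center it is the mean.
    assert bilinear(10, 20, 30, 40, 0.0, 0.0) == 10.0
    assert bilinear(10, 20, 30, 40, 0.5, 0.5) == 25.0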

Advantages:

• Results in output images that are smoother, without the stair-stepped effect that is possible with nearest neighbor.
• More spatially accurate than nearest neighbor.
• This method is often used when changing the cell size of the data, such as in SPOT/TM merges within the 2 × 2 resampling matrix limit.

Disadvantages:

• Since pixels are averaged, bilinear interpolation has the effect of a low-frequency convolution. Edges are smoothed, and some extremes of the data file values are lost.

See "Enhancement" on page 455 for more about convolution filtering.

Cubic Convolution

Cubic convolution is similar to bilinear interpolation, except that:

• a set of 16 pixels, in a 4 × 4 array, are averaged to determine the output data file value, and
• an approximation of a cubic function, rather than a linear function, is applied to those 16 input values.

To identify the 16 pixels in relation to the retransformed coordinate (xr, yr), the pixel (i, j) is used, such that:

    i = int(xr)
    j = int(yr)

This assumes that (xr, yr) is expressed in data file coordinates (pixels). The pixels around (i, j) make up a 4 × 4 grid of input pixels, as illustrated in Figure 86.

the pixels farther from (xr. serving to average and smooth the values. The general effect of the cubic convolution depends upon the data. Some convolutions may have more of the effect of a lowfrequency filter (like bilinear interpolation).Figure 86: Cubic Convolution (i. j + n – 2 ) × f ( d ( i – 1. j + n – 2 ) – 1 ) + 282 Rectification . yr). j + n – 2 ) ) + V ( i + 1. Several versions of the cubic convolution equation are used in the field. like a highfrequency filter. j + n – 2 ) × f ( d ( i + 1. function is used to weight the 16 input pixels.j) (Xr. Different equations have different effects upon the output data file values. yr) have exponentially less weight than those closer to (xr. The formula used in ERDAS IMAGINE is: 4 Vr = ∑ V ( i – 1. rather than a linear. The cubic convolution used in ERDAS IMAGINE is a compromise between low-frequency and high-frequency. j + n – 2 ) × f ( d ( i. j + n – 2 ) + 1 ) + n=1 V ( i. Others may tend to sharpen the image.Yr) Since a cubic.

Where:

    i = int(xr)
    j = int(yr)
    d(i, j) = the distance between a pixel with coordinates (i, j) and (xr, yr)
    V(i, j) = the data file value of pixel (i, j)
    Vr = the output data file value
    a = -1 (a constant)
    f(x) = the following function:

    f(x) = (a + 2)|x|^3 - (a + 3)|x|^2 + 1     if |x| < 1
           a|x|^3 - 5a|x|^2 + 8a|x| - 4a       if 1 < |x| < 2
           0                                   otherwise

Source: Atkinson, 1985

Advantages:

• Uses 4 × 4 resampling. In most cases, the mean and standard deviation of the output pixels match the mean and standard deviation of the input pixels more closely than any other resampling method.
• The effect of the cubic curve weighting can both sharpen the image and smooth out noise (Atkinson, 1985). The actual effects depend upon the data being used.
• This method is recommended when you are dramatically changing the cell size of the data, such as in TM/aerial photo merges (that is, matches the 4 × 4 window more closely than the 2 × 2 window).

Disadvantages:

• Data values may be altered.
• This method is extremely slow.
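The weighting function f(x) can be written out directly; with a = -1 it is a commonly used cubic convolution kernel.

    def cubic_kernel(x, a=-1.0):
        # f(x) as defined above (Atkinson, 1985).
        x = abs(x)
        if x < 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        elif x < 2:
            return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
        return 0.0

    # f(0) = 1, and f(1) = f(2) = 0, so a pixel exactly at the retransformed
    # coordinate receives full weight while whole-pixel neighbors receive none.
    assert cubic_kernel(0) == 1.0 and cubic_kernel(1) == 0.0 and cubic_kernel(2) == 0.0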

Bicubic Spline Interpolation

Bicubic Spline Interpolation is based on fitting a cubic spline surface through the current block of points. The output value is derived from the fitting surface that will retain the values of the known points. This algorithm is much slower than other methods of interpolation, but it has the advantage of giving a more exact fit to the curve without the oscillations that other interpolation methods can create. Bicubic Spline Interpolation is so similar to Bilinear Interpolation that unless you have the need to maximize surface smoothness, you should use Bilinear Interpolation.

Data Points

The known data points are an array of raster of m × n:

          x1       x2       …   xm-1       xm
    y1    V1,1     V2,1     …   Vm-1,1     Vm,1
    y2    V1,2     V2,2     …   Vm-1,2     Vm,2
    …
    yn-1  V1,n-1   V2,n-1   …   Vm-1,n-1   Vm,n-1
    yn    V1,n     V2,n     …   Vm-1,n     Vm,n

    Xi+1 = Xi + d
    Yj+1 = Yj + d

Where:

    1 ≤ i ≤ m
    1 ≤ j ≤ n
    d is the cell size of the raster
    Vi,j is the cell value in (xi, yj)

Equations

A bicubic polynomial function V(x, y) is constructed as follows: in each cell

    V(x, y) = Σ (p = 0 to 3) Σ (q = 0 to 3) a(i,j)pq × (x - xi)^p × (y - yj)^q

    Rij = { (x, y) | xi ≤ x ≤ xi+1, yj ≤ y ≤ yj+1 }    i = 1, 2, …, m;  j = 1, 2, …, n

The function satisfies these conditions:

• V(xi, yj) = Vi,j for i = 1, 2, …, m and j = 1, 2, …, n; that is, the spline must interpolate all data points.
• The functions and their first and second derivatives must be continuous across the interval and equal at the endpoints, and the fourth derivatives of the equations should be zero.
• Coefficients can be obtained by resolving the known points together with the selection of the boundary condition type.

IMAGINE uses the first type of boundary condition. Because in IMAGINE the input raster grid has been expanded two cells around the boundary, the boundary condition has no significant effects on the resampling. Please refer to Shikin and Plis (Shikin and Plis, 1995) for the boundary conditions and the mathematical details for solving the equations.

Calculate value for unknown point

The value for point (xr, yr) can be calculated by the following formula:

    V(xr, yr) = Σ (p = 0 to 3) Σ (q = 0 to 3) a(ir,jr)pq × (xr - xir)^p × (yr - yjr)^q

Some examples of when this is required are as follows (Environmental Systems Research Institute. Advantages Results in the smoothest output images.xir (ir. A change in the projection is a geometric change—distances. When the projection used for the files in the data base does not produce the desired properties of a map. the conversion process requires that pixels be resampled. Disadvantages The most computationally intensive resampling method. 1995 Map-to-Map Coordinate Conversions There are many instances when you may need to change a map that is already registered to a planar projection to another projection. and scale are represented differently. jr) y jr (xr . areas. Because the coefficients are resolved by using all other known points. Therefore.yr) d d The value is determined by 16 coefficients. This method is often used when upsampling. More spatially accurate than nearest neighbor. 286 Rectification . such as UTM or State Plane. Source: Shikin and Plis. all other points contribute to the value. and is therefore the slowest. When it is necessary to combine data from more than one zone of a projection. 1992): • • • When combining two maps with different projection characteristics. The nearer points contribute more whereas the farther points contribute less.

Vector Data Rectification 287 . So. If the original unrectified data are available. Conversion Process To convert the map coordinate system of any georeferenced image. Converting the map coordinates of vector data is much easier than converting raster data.Resampling causes some of the spectral integrity of the data to be lost (see the disadvantages of the resampling methods explained previously). In this procedure. The program calculates the reference coordinates for the GCPs with the appropriate conversion formula and a transformation that can be used in the regular rectification process. GCPs are generated automatically along the intersections of a grid that you specify. ERDAS IMAGINE provides a shortcut to the rectification process. it is not usually wise to resample data that have already been resampled if the accuracy of data file values is important to the application. each coordinate is simply converted using the appropriate conversion formula. it is usually wiser to rectify that data to a second map projection system than to lose a generation by converting rectified data and resampling it a second time. Since vector data are stored by the coordinates of nodes. There are no coordinates between nodes to extrapolate.


Hardcopy Output

Introduction

Hardcopy output refers to any output of image data to paper. These topics are covered in this chapter:

• printing maps
• the mechanics of printing

For additional information, see the chapter about "Windows Printing" on page 65 in the ERDAS IMAGINE Configuration Guide.

Printing Maps

ERDAS IMAGINE enables you to create and output a variety of types of hardcopy maps.

Scaled Maps

A scaled map is a georeferenced map that has been projected to a map projection, and is accurately laid-out and referenced to represent distances and locations. A scaled map usually has a legend that includes a scale, such as 1 inch = 1000 feet. The scale is often expressed as a ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on the ground.

See "Rectification" on page 251 for information on rectifying and georeferencing images and "Cartography" on page 211 for information on creating maps.

Printing Large Maps

Some scaled maps do not fit on the paper that is used by the printer. These methods are used to print and store large maps:

• A book map is laid out like the pages of a book, with several referencing features. Each page fits on the paper used by the printer. There is a border, but no tick marks on every page.
• A paneled map is designed to be spliced together into a large paper map; therefore, borders and tick marks appear on the outer edges of the large map.

Therefore.Figure 87: Layout for a Book Map and a Paneled Map + + neatline neatline + + map composition map composition tick marks Book Map ++ Paneled Map + + Scale and Resolution The following scales and resolutions are noticeable during the process of creating a map composition and sending the composition to a hardcopy device: • • • • • spatial resolution of the image display scale of the map composition map scale of the image(s) map composition to paper scale device resolution Spatial Resolution Spatial resolution is the area on the ground represented by each raw image data pixel. the scale could be set to 1:0. Display Scale Display scale is the distance on the screen as related to one unit on paper. it would not be possible to view the entire composition on the screen.25 so that the entire map composition would be in view. For example. 290 Hardcopy Output . if the map composition is 24 inches by 36 inches.

Map Scale

The map scale is the distance on a map as related to the true distance on the ground, or the area that one pixel represents measured in map units. The map scale is defined when you create an image area in the map composition. One map composition can have multiple image areas set at different scales. These areas may need to be shown at different scales for different applications.

Map Composition to Paper Scale

This scale is the original size of the map composition as related to the desired output size on paper.

Device Resolution

The number of dots that are printed per unit—for example, 300 dots per inch (DPI).

Use the ERDAS IMAGINE Map Composer to define the above scales and resolutions.

Map Scaling Examples

The ERDAS IMAGINE Map Composer enables you to define a map size, as well as the size and scale for the image area within the map composition. The examples in this section focus on the relationship between these factors and the output file created by Map Composer for the specific hardcopy device or file format. Figure 88 is the map composition that is used in the examples. This composition was originally created using the ERDAS IMAGINE Map Composer at a size of 22" × 34", and the hardcopy output must be in two different formats:

• It must be output to a PostScript printer on an 8.5" × 11" piece of paper.
• A TIFF file must be created and sent to a film recorder having a 1,000 dpi resolution.

Figure 88: Sample Map Composition

Output to PostScript Printer

Since the map was created at 22" × 34", the map composition to paper scale needs to be calculated so that the composition fits on an 8.5" × 11" piece of paper. To determine the map composition to paper scale factor, it is necessary to calculate the most limiting direction. Since the printable area for the printer is approximately 8.1" × 8.6", these numbers are used in the calculation:

• 8.1" / 22" = 0.36 (horizontal direction)
• 8.6" / 34" = 0.25 (vertical direction)

The vertical direction is the most limiting; therefore, the map composition to paper scale would be set for 0.25.

If the specified size of the map (width and height) is greater than the printable area for the printer, the output hardcopy map is paneled. If this scale is set for a one to one ratio, then the composition is paneled.

See the hardware manual of the hardcopy device for information about the printable area of the device.
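The limiting-direction test is simply the smaller of the two ratios. A sketch using the numbers from this example:

    # Printable area and composition size from the example above, in inches.
    printable_w, printable_h = 8.1, 8.6
    composition_w, composition_h = 22.0, 34.0

    # The smaller ratio is the most limiting direction.
    scale = min(printable_w / composition_w,    # 0.36 (horizontal)
                printable_h / composition_h)    # 0.25 (vertical)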

Use the Print Map Composition dialog to output a map composition to a PostScript printer.

Output to TIFF

The limiting factor in this example is not page size, but disk space (600 MB total). A three-band image file must be created in order to convert the map composition to a .tif file. Due to the three bands and the high resolution, the image file could be very large. The .tif file is output to a film recorder with a 1,000 DPI device resolution. To determine the number of megabytes for the map composition, the X and Y dimensions need to be calculated:

• X = 22 inches × 1,000 dots/inch = 22,000
• Y = 34 inches × 1,000 dots/inch = 34,000
• 22,000 × 34,000 × 3 = 2,244 MB (multiplied by 3 since there are 3 bands)

Although this appears to be an unmanageable file size, it is possible to reduce the file size with little image degradation. Because the total disk space is only 600 megabytes, the image file created from the map composition must be less than half of that to accommodate the .tif file. Dividing the map composition by three in both X and Y directions (2,244 MB / 3 / 3) results in approximately a 250 megabyte file. This division is accomplished by specifying a 1/3 or 0.333 map composition to paper scale when outputting the map composition to an image file. This file size is small enough to process and leaves enough room for the image to TIFF conversion.

Once the image file is created and exported to TIFF format, it can be sent to a film recorder that accepts .tif files. Remember, the file must be enlarged three times to compensate for the reduction during the image file creation.

Use the ERDAS IMAGINE Print Map Composition dialog to output a map composition to an image file. See the hardware manual of the hardcopy device for information about the DPI device resolution.
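The arithmetic above is easy to script when planning output sizes; this sketch assumes one byte per band per pixel.

    x_dots = 22 * 1000                     # 22 inches at 1,000 DPI
    y_dots = 34 * 1000                     # 34 inches at 1,000 DPI
    megabytes = x_dots * y_dots * 3 / 1e6  # 3 bands -> 2,244 MB

    # Scaling the composition by 1/3 in X and in Y divides the size by 9.
    reduced = megabytes / 9                # approximately 250 MB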

Mechanics of Printing

This section describes the mechanics of transferring an image or map composition from a data file to a hardcopy map.

Halftone Printing

Halftoning is the process of converting a continuous tone image into a pattern of dots. A newspaper photograph is a common example of halftoning. The dots for halftoning are a fixed density—either a dot is there or it is not there. By using different patterns of dots, colors can have different intensities. To make a color illustration, halftones in the primary colors (cyan, magenta, and yellow), plus black, are overlaid. The halftone dots of different colors, in close proximity, create the effect of blended colors in much the same way that phosphorescent dots on a color computer monitor combine red, green, and blue to create other colors.

For scaled maps, each output pixel may contain one or more dot patterns. If a very large image file is being printed onto a small piece of paper, data file pixels are skipped to accommodate the reduction.

Hardcopy Devices

The following hardcopy devices use halftoning to output an image or map composition:

• Tektronix Inkjet Printer
• Tektronix Phaser Printer

See the user's manual for the hardcopy device for more information about halftone printing.

Continuous Tone Printing

Continuous tone printing enables you to output color imagery using the four process colors (cyan, magenta, yellow, and black). By using varying percentages of these colors, it is possible to create a wide range of colors. The printer converts digital data from the host computer into a continuous tone image. The quality of the output picture is similar to a photograph. The output is smoother than halftoning because the dots for continuous tone printing can vary in density.

Example

There are different processes by which continuous tone printers generate a map. One example is a process called thermal dye transfer. The entire image or map composition is loaded into the printer's memory. While the paper moves through the printer, heat is used to transfer the dye from a ribbon, which has the dyes for all of the four process colors, to the paper. The amount of heat applied is determined by the brightness values of the input image. This allows the printer to control the amount of dye that is transferred to the paper to create a continuous tone image. The density of the dot depends on the amount of heat applied by the printer to transfer the dye.

Hardcopy Devices

The following hardcopy device uses continuous toning to output an image or map composition:

• Tektronix Phaser II SD

NOTE: The above printers do not necessarily use the thermal dye transfer process to generate a map.

See the user's manual for the hardcopy device for more information about continuous tone printing.

Contrast and Color Tables

ERDAS IMAGINE contrast and color tables are used for some printing processes, just as they are used in displaying an image. For continuous raster layers, they are loaded from the ERDAS IMAGINE contrast table. For thematic layers, they are loaded from the color table. The translation of data file values to brightness values is performed entirely by the software program.

RGB to CMY Conversion

Colors

The data file values that are sent to the printer and the contrast and color tables that accompany the data file are all in the RGB color scheme. Since a printer uses ink instead of light to create a visual image, the primary colors of pigment (cyan, magenta, and yellow) are used in printing, instead of the primary colors of light (red, green, and blue). Cyan, magenta, and yellow can be combined to make black through a subtractive process, whereas the primary colors of light are additive—red, green, and blue combine to make white (Gonzalez and Wintz, 1977).

The RGB primary colors are the opposites of the CMY colors—meaning, for example, that the presence of cyan in a color means an equal lack of red. To convert the values, each RGB brightness value is subtracted from the maximum brightness value to produce the brightness value for the opposite color. The following equation shows this relationship:

    C = MAX - R
    M = MAX - G
    Y = MAX - B

Where:

    MAX = the maximum brightness value
    R = red value from lookup table
    G = green value from lookup table
    B = blue value from lookup table
    C = calculated cyan value
    M = calculated magenta value
    Y = calculated yellow value

Black Ink

Although, theoretically, cyan, magenta, and yellow combine to create black ink, the color that results is often a dark, muddy brown. Many printers also use black ink for a truer black.

NOTE: Black ink may not be available on all printers. Consult the user's manual for your printer.

Images often appear darker when printed than they do when displayed on the display device. Therefore, it may be beneficial to improve the contrast and brightness of an image before it is printed.

Use the programs discussed in "Enhancement" on page 455 to brighten or enhance an image before it is printed.
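The conversion is a simple subtraction per band. A minimal sketch, assuming 8-bit lookup tables with a maximum brightness value of 255:

    def rgb_to_cmy(r, g, b, max_value=255):
        # Each RGB brightness is subtracted from the maximum brightness value.
        return max_value - r, max_value - g, max_value - b

    cyan, magenta, yellow = rgb_to_cmy(200, 120, 40)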

Map Projections

Introduction

This appendix is an alphabetical listing of the map projections supported in ERDAS IMAGINE. It is divided into two sections:

• USGS Projections
• External Projections

The external projections were implemented outside of ERDAS IMAGINE so that you could add to these using the IMAGINE Developers' Toolkit. The projections in each section are presented in alphabetical order.

The information in this appendix is adapted from:

• Map Projections for Use with the Geographic Information System (Lee and Walsh, 1984)
• Map Projections—A Working Manual (Snyder, 1987)
• ArcInfo HELP (Environmental Systems Research Institute, 1997)

Other sources are noted in the text.

For general information about map projection types, refer to "Cartography" on page 211.

Rectify an image to a particular map projection using the ERDAS IMAGINE Rectification tools. View or change projection information using the Image Information option.

NOTE: You cannot rectify to a new map projection using the Image Information option. Use the rectification tools to actually georeference an image to a new map projection system. You should change map projection information using Image Information only if you know the information to be incorrect.

USGS Projections

The following USGS map projections are supported in ERDAS IMAGINE and are described in this section:

• Alaska Conformal
• Albers Conical Equal Area
• Azimuthal Equidistant
• Behrmann
• Bonne
• Cassini
• Cylindrical Equal Area
• Double Stereographic
• Eckert I
• Eckert II
• Eckert III
• Eckert IV
• Eckert V
• Eckert VI
• EOSAT SOM
• EPSG Coordinate Systems
• Equidistant Conic
• Equidistant Cylindrical
• Equirectangular (Plate Carrée)
• Gall Stereographic
• Gauss Kruger
• General Vertical Near-side Perspective
• Geographic (Lat/Lon)

• Gnomonic
• Hammer
• Interrupted Goode Homolosine
• Interrupted Mollweide
• Krovak
• Lambert Azimuthal Equal Area
• Lambert Conformal Conic
• Lambert Conic Conformal (1 Standard Parallel)
• Loximuthal
• Mercator
• Miller Cylindrical
• Military Grid Reference System (MGRS)
• Modified Transverse Mercator
• Mollweide
• New Zealand Map Grid
• Oblated Equal Area
• Oblique Mercator (Hotine)
• Orthographic
• Plate Carrée
• Polar Stereographic
• Polyconic
• Quartic Authalic
• Robinson
• RSO
• Sinusoidal

• Space Oblique Mercator
• Space Oblique Mercator (Formats A & B)
• State Plane
• Stereographic
• Stereographic (Extended)
• Transverse Mercator
• Two Point Equidistant
• UTM
• Van der Grinten I
• Wagner IV
• Wagner VII
• Winkel I

Alaska Conformal

Construction: Modified planar
Property: Conformal
Meridians: N/A
Parallels: N/A
Graticule spacing: N/A
Linear scale: The minimum scale factor is 0.997 at roughly 62.5° N, 156° W. The scale factor for Alaska is from 0.997 to 1.003. That is one quarter the range for a corresponding conic projection (Snyder, 1987).
Uses: This projection is useful for mapping the complete state of Alaska on the Clarke 1866 spheroid or NAD27, but not with other datums and spheroids. Distortion increases as distance from Alaska increases.

Use of this projection results in a conformal map of Alaska. It has little scale distortion as compared to other conformal projections. The method of projection is "modified planar. [It is] a sixth-order-equation modification of an oblique Stereographic conformal projection on the Clarke 1866 spheroid. The origin is at 64° N, 152° W" (Environmental Systems Research Institute, 1997). Most of Alaska and the Aleutian Islands (with the exception of the panhandle) is bounded by a line of true scale. Scale increases outward from these coordinates.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once Alaska Conformal is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Albers Conical Equal Area

Construction: Cone
Property: Equal-area
Meridians: Meridians are straight lines converging on the polar axis, but not at the pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole.
Graticule spacing: Meridian spacing is equal on the standard parallels and decreases toward the poles. Parallel spacing decreases away from the standard parallels and increases between them. Meridians and parallels intersect each other at right angles. The graticule is symmetrical. The graticule spacing preserves the property of equivalence of area.
Linear scale: Linear scale is true on the standard parallels. Maximum scale error is 1.25% on a map of the United States (48 states) with standard parallels of 29.5°N and 45.5°N.
Uses: Used for thematic maps and for large countries with an east-west orientation. Maps based on the Albers Conical Equal Area for Alaska use standard parallels 55°N and 65°N; for Hawaii, the standard parallels are 8°N and 18°N.

The Albers Conical Equal Area projection is mathematically based on a cone that is conceptually secant on two parallels. There is no areal deformation. The North or South Pole is represented by an arc. It retains its properties at various scales, and individual sheets can be joined along their edges.

Albers Conical Equal Area is well-suited to countries or continents where north-south depth is about 3/5 the breadth of east-west. When this projection is used for the continental US, the two standard parallels are 29.5°N and 45.5°N.

This projection produces very accurate area and distance measurements in the middle latitudes (Figure 89). Thus, the National Atlas of the United States, the United States Base Map (48 states), and the Geologic map of the United States are based on the standard parallels of 29.5°N and 45.5°N.

This projection possesses the property of equal-area, and the standard parallels are correct in scale and in every direction. Thus, there is no angular distortion (i.e., meridians intersect parallels at right angles), and conformality exists along the standard parallels. Like other conics, Albers Conical Equal Area has concentric arcs for parallels and equally spaced radii for meridians. Parallels are not equally spaced, but are farthest apart between the standard parallels and closer together on the north and south edges.

Albers Conical Equal Area is the projection exclusively used by the USGS for sectional maps of all 50 states of the US in the National Atlas of 1970.

Prompts

The following prompts display in the Projection Chooser once Albers Conical Equal Area is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Latitude of 1st standard parallel
Latitude of 2nd standard parallel

Enter two values for the desired control lines of the projection (i.e., the standard parallels). Note that the first standard parallel is the southernmost.

Longitude of central meridian
Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of projection. Then, define the origin of the map projection in both spherical and rectangular coordinates.

False easting at central meridian
False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates from occurring within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Note the change in spacing of the parallels.Figure 89: Albers Conical Equal Area Projection In Figure 89. Map Projections 305 . the standard parallels are 20°N and 60°N.

Azimuthal Equidistant

Construction: Plane
Property: Equidistant
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are complex curves concave toward the point of tangency. Equatorial aspect: the meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are complex curves. Equatorial aspect: the parallels are complex curves concave toward the nearest pole; the Equator is straight.
Graticule spacing: Polar aspect: the meridian spacing is equal and increases away from the point of tangency. Parallel spacing is equidistant. Angular and area deformation increase away from the point of tangency.
Linear scale: Polar aspect: linear scale is true from the point of tangency along the meridians only. Oblique and equatorial aspects: linear scale is true from the point of tangency. In all aspects, the projection shows distances true to scale when measured between the point of tangency and any other point on the map.
Uses: The Azimuthal Equidistant projection is used for radio and seismic work, as every place in the world is shown at its true distance and direction from the point of tangency. The USGS uses the oblique aspect in the National Atlas and for large-scale mapping of Micronesia. The polar aspect is used as the emblem of the United Nations.

The Azimuthal Equidistant projection is mathematically based on a plane tangent to the Earth. It has true direction and true distance scaling from the point of tangency. The entire Earth can be represented, but generally less than one hemisphere is portrayed, though the other hemisphere can be portrayed, but is much distorted.

This projection is used mostly for polar projections because latitude rings divide meridians at equal intervals with a polar aspect (Figure 90). Meridians are equally spaced, but distances are not correct or true along parallels, and the projection is neither equal-area nor conformal. Linear scale distortion is moderate and increases toward the periphery. Also, straight lines radiating from the center of this projection represent great circles. This projection can also be used to center on any point on the Earth (e.g., a city); distance measurements are true from that central point, and all distances and directions are shown accurately from the central point.

Prompts
The following prompts display in the Projection Chooser if Azimuthal Equidistant is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 90: Polar Aspect of the Azimuthal Equidistant Projection

This projection is commonly used in atlases for polar maps.
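
The defining property, true distance from the point of tangency, is easy to verify from the spherical forward equations (Snyder, 1987, equations 25-1 through 25-4). This is a minimal sketch; the function name and sphere radius are illustrative:

    import math

    def aeqd_forward(lat, lon, lat0, lon0, R=6370997.0):
        """Forward Azimuthal Equidistant on a sphere
        (Snyder 1987, eqs. 25-1 to 25-4)."""
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        cos_c = (math.sin(phi0) * math.sin(phi) +
                 math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
        c = math.acos(max(-1.0, min(1.0, cos_c)))  # angular distance from center
        k = 1.0 if c == 0.0 else c / math.sin(c)   # radial scale factor
        x = R * k * math.cos(phi) * math.sin(lam - lam0)
        y = R * k * (math.cos(phi0) * math.sin(phi) -
                     math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
        return x, y

    # The map distance from the center equals R*c, the true great-circle
    # distance -- here checked for a polar-aspect example point.
    x, y = aeqd_forward(52.0, 4.9, lat0=90.0, lon0=0.0)
    assert math.isclose(math.hypot(x, y),
                        6370997.0 * math.radians(90.0 - 52.0))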

Behrmann

With the exception of compression in the horizontal direction and expansion in the vertical direction, the Behrmann projection is the same as the Lambert Cylindrical Equal-area projection. These changes prevent distortion at latitudes 30° N and S instead of at the Equator.

Construction: Cylindrical
Property: Equal-area
Meridians: Straight parallel lines that are equally spaced and 0.42 the length of the Equator.
Parallels: Straight lines that are unequally spaced and farthest apart near the Equator, perpendicular to meridians. Poles are straight lines the same length as the Equator.
Graticule spacing: See Meridians and Parallels. Symmetry is present about any meridian or the Equator.
Linear scale: Scale is true along latitudes 30° N and S.
Uses: Used for creating world maps.

Source: Snyder and Voxland, 1989

Prompts
The following prompts display in the Projection Chooser once Behrmann is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 91: Behrmann Cylindrical Equal-Area Projection

Source: Snyder and Voxland, 1989

Bonne

The Bonne projection is an equal-area projection. True scale is achievable along the central meridian and all parallels. Although it was used in the 1800s and early 1900s, Bonne was replaced by Lambert Azimuthal Equal Area by the mapping companies Rand McNally & Co. and Hammond, Inc. (see Lambert Azimuthal Equal Area on page 353).

Construction: Pseudocone
Property: Equal-area
Meridians: N/A
Parallels: Parallels are concentric arcs that are equally spaced.
Graticule spacing: The central meridian is a linear graticule.
Linear scale: Scale is true along the central meridian and all parallels. There is some distortion.
Uses: This projection is best used on maps of continents and small areas.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Bonne is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Latitude of standard parallel Longitude of central meridian
Enter values of the latitude of the standard parallel and the longitude of the central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 92: Bonne Projection

Source: Snyder and Voxland, 1989

Cassini

The Cassini projection is a transverse cylindrical projection and is neither equal-area nor conformal. It is best used for areas that are mostly in the north-south direction.

Construction: Cylinder
Property: Compromise
Meridians: N/A
Parallels: N/A
Graticule spacing: Linear graticules are located at the Equator, the central meridian, and meridians 90° from the central meridian.
Linear scale: Scale is true along the central meridian and lines perpendicular to the central meridian. With increasing distance from the central meridian, scale distortion increases.
Uses: Cassini is used for large maps of areas near the central meridian. The extent is 5 degrees to either side of the central meridian.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Cassini is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Scale Factor
Enter the scale factor.
Longitude of central meridian Latitude of origin of projection
Enter the values for longitude of central meridian and latitude of origin of projection.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 93: Cassini Projection

Source: Snyder and Voxland, 1989

Cylindrical Equal Area

The Cylindrical Equal Area projection is suited for equal-area mapping of regions that are predominately bordering the Equator. This projection was presented by Johann Heinrich Lambert in 1772; thus it is also known as the Lambert Cylindrical Equal Area projection.

Construction: Cylindrical
Property: Equal-area
Meridians: Straight parallel lines that are equally spaced and 0.32 the length of the Equator.
Parallels: Straight lines that are unequally spaced and farthest apart near the Equator, perpendicular to meridians. Poles are straight lines the same length as the Equator.
Graticule spacing: See Meridians and Parallels. Symmetry is present about any meridian or the Equator.
Linear scale: Scale is true along the Equator, with the same scale at the parallel of opposite sign. Equal area is maintained by scale increasing with distance from the Equator in the direction of parallels and scale decreasing in the direction of meridians. There is shape distortion but no area distortion in this projection, and shape distortion in the polar regions is extreme.
Uses: Equal-area mapping of regions predominately bordering the Equator.

Source: Snyder and Voxland, 1989 and Snyder, 1987

Prompts
The following prompts display in the Projection Chooser once Cylindrical Equal Area is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Latitude of standard parallel Longitude of central meridian
Enter the value for the latitude of the standard parallel and the value for the longitude of the central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 94: Cylindrical Equal-Area Projection

Source: Snyder and Voxland, 1989
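
Because the normal cylindrical equal-area family differs only in the choice of standard parallel, one short sketch covers both this projection and Behrmann (standard parallel 30° N and S). The spherical equations are Snyder, 1987, equations 10-1 and 10-2; names and defaults are illustrative:

    import math

    def cea_forward(lat, lon, lat_ts=0.0, lon0=0.0, R=6370997.0):
        """Forward cylindrical equal-area on a sphere
        (Snyder 1987, eqs. 10-1, 10-2). lat_ts is the standard parallel:
        0 gives Lambert's original form, 30 gives Behrmann."""
        k0 = math.cos(math.radians(lat_ts))
        x = R * math.radians(lon - lon0) * k0
        y = R * math.sin(math.radians(lat)) / k0
        return x, y

The east-west scale factor is k0 / cos(lat) and the north-south factor is cos(lat) / k0; their product is always 1, which is exactly the equal-area condition described above.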

Double Stereographic

The Double Stereographic projection is a variation of the Stereographic projection and is also used to represent polar areas. Points are projected from a position on the opposite side of the globe onto a plane tangent to the Earth. Double Stereographic is the term used in ESRI software for the Oblique Stereographic case (EPSG code 9809), which uses equations from the EPSG. In contrast, the Stereographic case uses the USGS equations of Snyder. The Double Stereographic projection is used for large-scale coordinate systems in the Netherlands and New Brunswick.

Construction: Plane
Property: Conformal
Meridians: All meridians are straight lines or circular arcs.
Parallels: All parallels are straight lines or circular arcs. Oblique aspect: the parallels are concave toward the poles, with one parallel being a straight line. Equatorial aspect: parallels curve in opposite directions on either side of the Equator.
Graticule spacing: Graticule intersections are 90 degrees.
Linear scale: Scale increases in movement from the center.

Prompts
The following prompts display in the Projection Chooser if Double Stereographic is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Scale Factor

Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.
Longitude of center of projection Latitude of center of projection
Enter the values for the longitude of the center of the projection and the latitude of the center of the projection.
False easting False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Source: Environmental Systems Research Institute, 2000 and Geotoolkit.org, 2009a
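
As a practical illustration, the Dutch RD New system (EPSG code 28992) is built on the Oblique Stereographic method mentioned above. Assuming the pyproj library is available, a transformation can be constructed from the EPSG code alone, so the individual prompts never need to be typed by hand:

    from pyproj import Transformer

    # WGS 84 geographic coordinates to Amersfoort / RD New (EPSG:28992),
    # the Dutch national system based on the Oblique Stereographic method.
    t = Transformer.from_crs("EPSG:4326", "EPSG:28992", always_xy=True)
    x, y = t.transform(5.387, 52.156)  # lon, lat near the projection center
    print(x, y)                         # easting/northing in meters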

Eckert I

A great amount of distortion at the Equator is due to the break at the Equator.

Construction: Pseudocylinder
Property: Neither conformal nor equal-area
Meridians: Meridians are converging straight lines that are equally spaced and broken at the Equator.
Parallels: Parallels are equally spaced straight parallel lines, perpendicular to the central meridian. Poles are lines one half the length of the Equator.
Graticule spacing: See Meridians and Parallels. Symmetry exists about the central meridian or the Equator.
Linear scale: Scale is true along latitudes 47° 10’ N and S. Scale is constant at any latitude (and latitude of opposite sign) and any meridian.
Uses: This projection is used as a novelty to show a straight-line graticule.

Source: Snyder and Voxland, 1989

Prompts
The following prompts display in the Projection Chooser once Eckert I is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 95: Eckert I Projection

Source: Snyder and Voxland, 1989

Eckert II

The break at the Equator creates a great amount of distortion there. Eckert II is similar to the Eckert I projection: the Eckert I projection has meridians positioned identically to Eckert II, but the Eckert I projection has equidistant parallels.

Construction: Pseudocylinder
Property: Equal-area
Meridians: Meridians are straight lines that are equally spaced and broken at the Equator. The central meridian is one half as long as the Equator.
Parallels: Parallels are straight parallel lines that are unequally spaced and perpendicular to the central meridian. The greatest separation is close to the Equator. Pole lines are half the length of the Equator.
Graticule spacing: See Meridians and Parallels. Symmetry exists at the central meridian or the Equator.
Linear scale: Scale is true along latitudes 55° 10’ N and S. Scale is constant along any latitude.
Uses: This projection is used as a novelty to show a straight-line equal-area graticule.

Source: Snyder and Voxland, 1989

Prompts
The following prompts display in the Projection Chooser once Eckert II is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 96: Eckert II Projection

Source: Snyder and Voxland, 1989

Eckert III

In the Eckert III projection, “no point is free of all scale distortion, but the Equator is free of angular distortion” (Snyder and Voxland, 1989).

Construction: Pseudocylinder
Property: Area is not preserved.
Meridians: Meridians are elliptical curves that are equally spaced. The meridians +/- 180° from the central meridian are semicircles. The poles and the central meridian are straight lines one half the length of the Equator.
Parallels: Parallels are equally spaced straight lines.
Graticule spacing: See Meridians and Parallels.
Linear scale: Scale is correct only along 37° 55’ N and S. Features close to the poles are compressed in the north-south direction.
Uses: Used for mapping the world.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Eckert III is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 97: Eckert III Projection

Source: Snyder and Voxland, 1989

Eckert IV

The Eckert IV projection is best used for thematic maps of the globe. An example of a thematic map is one depicting land cover.

Construction: Pseudocylinder
Property: Equal-area
Meridians: Meridians are elliptical arcs that are equally spaced. The poles and the central meridian are straight lines one half the length of the Equator.
Parallels: Parallels are straight lines that are unequally spaced and closer together at the poles.
Graticule spacing: See Meridians and Parallels.
Linear scale: “Scale is distorted north-south 40 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 40° 30’ N and S and at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction” (Environmental Systems Research Institute, 1997).
Uses: Use for world maps only.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Eckert IV is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 98: Eckert IV Projection

Source: Snyder and Voxland, 1989
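
Unlike simpler pseudocylindricals, Eckert IV has no closed-form forward solution: an auxiliary angle must be found iteratively. The sketch below follows the published spherical formulas (Snyder and Voxland, 1989); the function name, iteration limit, and sphere radius are illustrative:

    import math

    def eckert4_forward(lat, lon, lon0=0.0, R=6370997.0):
        """Forward Eckert IV on a sphere (Snyder and Voxland, 1989)."""
        phi = math.radians(lat)
        dlam = math.radians(lon - lon0)
        # Solve theta + sin(theta)*cos(theta) + 2*sin(theta)
        #       = (2 + pi/2) * sin(phi)  by Newton-Raphson.
        goal = (2.0 + math.pi / 2.0) * math.sin(phi)
        theta = phi / 2.0
        for _ in range(20):
            f = (theta + math.sin(theta) * math.cos(theta) +
                 2.0 * math.sin(theta) - goal)
            denom = 2.0 * math.cos(theta) * (1.0 + math.cos(theta))
            if abs(f) < 1e-12 or denom == 0.0:
                break
            theta -= f / denom
        x = (2.0 / math.sqrt(math.pi * (4.0 + math.pi))) * R * dlam \
            * (1.0 + math.cos(theta))
        y = 2.0 * math.sqrt(math.pi / (4.0 + math.pi)) * R * math.sin(theta)
        return x, y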

Eckert V

The Eckert V projection is only supported on a sphere. Like Eckert III, “no point is free of all scale distortion, but the Equator is free of angular distortion” (Snyder and Voxland, 1989).

Construction: Pseudocylinder
Property: Area is not preserved.
Meridians: Meridians are sinusoidal curves that are equally spaced. The poles and the central meridian are straight lines one half as long as the Equator.
Parallels: Parallels are straight lines that are equally spaced.
Graticule spacing: See Meridians and Parallels.
Linear scale: Scale is correct only along 37° 55’ N and S. Features near the poles are compressed in the north-south direction.
Uses: This projection is best used for thematic world maps.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Eckert V is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian

Enter a value of the longitude of the desired central meridian.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 99: Eckert V Projection

Source: Snyder and Voxland, 1989


Eckert VI

The Eckert VI projection is best used for thematic maps. An example of a thematic map is one depicting land cover.

Construction: Pseudocylinder
Property: Equal-area
Meridians: Meridians are sinusoidal curves that are equally spaced. The poles and the central meridian are straight lines one half the length of the Equator.
Parallels: Parallels are unequally spaced straight lines, closer together at the poles.
Graticule spacing: See Meridians and Parallels.
Linear scale: “Scale is distorted north-south 29 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 49° 16’ N and S at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction” (Environmental Systems Research Institute, 1997).
Uses: Use for world maps only.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Eckert VI is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian

Enter a value of the longitude of the desired central meridian.
False easting False northing


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 100: Eckert VI Projection

Source: Snyder and Voxland, 1989


EOSAT SOM

The EOSAT SOM projection is similar to the Space Oblique Mercator projection; the main difference is that the EOSAT SOM projection’s X and Y coordinates are switched.

For information, see Space Oblique Mercator on page 395.

Prompts
The following prompts display in the Projection Chooser once EOSAT SOM is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.
Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Orbital path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


EPSG Coordinate Systems

EPSG Coordinate Systems is a dataset of parameters for coordinate reference system and coordinate transformation descriptions. This dataset is known as the EPSG Geodetic Parameter Dataset, published by the OGP Surveying and Positioning Committee. This committee was formed in 2005 by the absorption into OGP of the now-defunct European Petroleum Survey Group (EPSG). The EPSG geodetic parameter dataset is a repository of the parameters required to identify coordinates such that position is described unambiguously through a coordinate reference system (CRS) definition, and to define the transformations and conversions that allow coordinates to be changed from one CRS to another CRS. The EPSG dataset uses numeric codes for the map projections. For example, map projection NAD83 / UTM zone 17N is defined as code EPSG::26917. The EPSG geodetic parameter dataset and documentation is maintained at www.epsg-registry.org. References: www.epsg.org

Prompts
The following prompt displays in the Projection Chooser once EPSG Coordinate Systems is selected. Respond to the prompt as described.
Projection

Select the projection to use.
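
Because the dataset is keyed by numeric codes, software can build a complete CRS definition from the code alone. A sketch using the pyproj library (an assumption; any EPSG-aware library behaves similarly) with the example code cited above:

    from pyproj import CRS, Transformer

    # Look up the example cited above: NAD83 / UTM zone 17N.
    crs = CRS.from_epsg(26917)
    print(crs.name)  # "NAD83 / UTM zone 17N"

    # Codes can also be used directly when building a transformation,
    # here from NAD83 geographic coordinates (EPSG:4269) to the zone.
    t = Transformer.from_crs("EPSG:4269", "EPSG:26917", always_xy=True)
    x, y = t.transform(-81.0, 35.0)  # lon, lat -> UTM meters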


Equidistant Conic

With Equidistant Conic (Simple Conic) projections, correct distance is achieved along the line(s) of contact with the cone, and parallels are equidistantly spaced. It can be used with either one (A) or two (B) standard parallels.

Construction: Cone
Property: Equidistant
Meridians: Meridians are straight lines converging on a polar axis but not at the pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole.
Graticule spacing: Meridian spacing is true on the standard parallels and decreases toward the pole. Parallels are placed at true scale along the meridians. Meridians and parallels intersect each other at right angles. The graticule is symmetrical.
Linear scale: Linear scale is true along all meridians and along the standard parallel or parallels.
Uses: The Equidistant Conic projection is used in atlases for portraying mid-latitude areas. It is good for representing regions with a few degrees of latitude lying on one side of the Equator. It was used in the former Soviet Union for mapping the entire country (Environmental Systems Research Institute, 1992).

This projection is neither conformal nor equal-area, but the north-south scale along meridians is correct. The North or South Pole is represented by an arc. Because scale distortion increases with increasing distance from the line(s) of contact, the Equidistant Conic is used mostly for mapping regions predominantly east-west in extent. The USGS uses the Equidistant Conic in an approximate form for a map of Alaska.

Prompts
The following prompts display in the Projection Chooser if Equidistant Conic is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.


Define the origin of the projection in both spherical and rectangular coordinates.
Longitude of central meridian Latitude of origin of projection

Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.
False easting False northing

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
One or two standard parallels? Latitude of standard parallel

Enter one or two values for the desired control line(s) of the projection, i.e., the standard parallel(s). Note that if two standard parallels are used, the first is the southernmost.

Figure 101: Equidistant Conic Projection

Source: Snyder and Voxland, 1989
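
The equidistant property is visible in the spherical forward equations (Snyder, 1987, equations 16-1 through 16-4): the radius is linear in latitude, so parallels are equally spaced along every meridian. Names and defaults in this sketch are illustrative:

    import math

    def eqdc_forward(lat, lon, lat1, lat2, lat0=0.0, lon0=0.0, R=6370997.0):
        """Forward Equidistant Conic on a sphere, two standard parallels
        (Snyder 1987, eqs. 16-1 to 16-4)."""
        phi, lam = math.radians(lat), math.radians(lon)
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        n = (math.cos(phi1) - math.cos(phi2)) / (phi2 - phi1)  # cone constant
        G = math.cos(phi1) / n + phi1
        rho = R * (G - phi)   # linear in phi: parallels are equally spaced
        rho0 = R * (G - phi0)
        theta = n * (lam - lam0)
        return rho * math.sin(theta), rho0 - rho * math.cos(theta)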


Equidistant Cylindrical

The Equidistant Cylindrical projection is similar to the Equirectangular projection.

For information, see Equirectangular (Plate Carrée) on page 336.

Prompts
The following prompts display in the Projection Chooser if Equidistant Cylindrical is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.
Longitude of standard parallel Latitude of true scale

Enter a value for longitude of the standard parallel and the latitude of true scale.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Equirectangular (Plate Carrée)

Also called Simple Cylindrical, Equirectangular is composed of equally spaced, parallel meridians and latitude lines that cross at right angles on a rectangular map. Each rectangle formed by the grid is equal in area, shape, and size.

Construction: Cylinder
Property: Compromise
Meridians: All meridians are straight lines.
Parallels: All parallels are straight lines.
Graticule spacing: Equally spaced parallel meridians and latitude lines cross at right angles.
Linear scale: The scale is correct along all meridians and along the standard parallels (Environmental Systems Research Institute, 1992).
Uses: Best used for city maps, or other small areas with map scales small enough to reduce the obvious distortion. Used for simple portrayals of the world or regions with minimal geographic data, such as index maps (Environmental Systems Research Institute, 1992).

Equirectangular is neither conformal nor equal-area, but it does contain less distortion than the Mercator in polar regions. Scale is true on all meridians and on the central parallel. Directions due north, south, east, and west are true, but all other directions are distorted. The Equator is the standard parallel, true to scale and free of distortion. However, this projection may be centered anywhere. This projection is valuable for its ease in computer plotting. It is useful for mapping small areas, such as city maps, because of its simplicity. The USGS uses Equirectangular for index maps of the conterminous US with insets of Alaska, Hawaii, and various islands. However, neither scale nor projection is marked, to avoid implying that the maps are suitable for normal geographic information.

Prompts
The following prompts display in the Projection Chooser if Equirectangular is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.


The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian Latitude of true scale

Enter a value for longitude of the desired central meridian to center the projection and the latitude of true scale.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 102: Equirectangular Projection

Source: Snyder and Voxland, 1989
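
The ease of computer plotting noted above is evident from the spherical forward equations (Snyder, 1987, equations 12-1 and 12-2). A minimal sketch; names and defaults are illustrative:

    import math

    def equirectangular_forward(lat, lon, lat_ts=0.0, lon0=0.0, R=6370997.0):
        """Forward Equirectangular on a sphere (Snyder 1987, eqs. 12-1, 12-2).
        With lat_ts = 0 (the Equator as standard parallel) this is the
        Plate Carree form."""
        x = R * math.radians(lon - lon0) * math.cos(math.radians(lat_ts))
        y = R * math.radians(lat)
        return x, y

Since x and y are simply scaled longitude and latitude, a raster in this projection can be indexed directly by geographic coordinates, which is why it is common for index maps and quick-look products.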


Gall Stereographic

The Gall Stereographic projection was created in 1855. The two standard parallels are located at 45° N and 45° S. This projection is used for world maps.

Construction: Cylinder
Property: Compromise
Meridians: Meridians are straight lines that are equally spaced.
Parallels: Parallels are straight lines whose spacing increases with distance from the Equator.
Graticule spacing: All meridians and parallels are linear.
Linear scale: “Scale is true in all directions along latitudes 45° N and S. Scale is constant along parallels and is symmetrical around the Equator. Distances are compressed between latitudes 45° N and S, and expanded beyond them” (Environmental Systems Research Institute, 1997).
Uses: Use for world maps only.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Gall Stereographic is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian

Enter a value for longitude of the desired central meridian.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Gauss Kruger

The Gauss Kruger projection is the same as the Transverse Mercator projection, with the exception that Gauss Kruger uses a fixed scale factor of 1. Gauss Kruger is available only in ellipsoidal form. Many countries, such as China and Germany, use Gauss Kruger in 3-degree zones instead of the 6-degree zones used for UTM.

For more information, see Transverse Mercator on page 412.

Prompts
The following prompts display in the Projection Chooser once Gauss Kruger is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.
Scale factor

Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.
Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.
Latitude of origin of projection

Enter the value for the latitude of origin of projection.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
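
As a sketch of how a Gauss Kruger zone differs from UTM only in its scale factor and zone width, a 3-degree zone can be declared with a PROJ definition string, assuming the pyproj library is available. The central meridian, ellipsoid, and false easting below are illustrative, not a particular national standard:

    from pyproj import Transformer

    # Transverse Mercator with scale factor 1 (the Gauss Kruger convention)
    # on an illustrative 3-degree zone centered on 12 E.
    gk = ("+proj=tmerc +k=1 +lat_0=0 +lon_0=12 "
          "+x_0=500000 +y_0=0 +ellps=GRS80 +units=m")
    t = Transformer.from_crs("EPSG:4326", gk, always_xy=True)
    x, y = t.transform(12.5, 48.0)  # lon, lat -> easting/northing in meters

A UTM zone would use the same +proj=tmerc machinery but with +k=0.9996 and a 6-degree zone width, which is exactly the difference the text describes.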


General Vertical Near-side Perspective

General Vertical Near-side Perspective presents a picture of the Earth as if a photograph were taken at some distance less than infinity. The map user simply identifies area of coverage, distance of view, and angle of view. It is a variation of the General Perspective projection in which the “camera” precisely faces the center of the Earth.

Construction: Plane
Property: Compromise
Meridians: The central meridian is a straight line in all aspects. In the polar aspect all meridians are straight. In the equatorial aspect the Equator is straight (Environmental Systems Research Institute, 1992).
Parallels: Parallels on vertical polar aspects are concentric circles. Nearly all other parallels are elliptical arcs, except that certain angles of tilt may cause some parallels to be shown as parabolas or hyperbolas.
Graticule spacing: Polar aspect: parallels are concentric circles that are not evenly spaced; meridians are evenly spaced and spacing increases from the center of the projection. Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly spaced; meridians are elliptical arcs that are not evenly spaced, except for the central meridian, which is a straight line.
Linear scale: Radial scale decreases from true scale at the center to zero on the projection edge. The scale perpendicular to the radii decreases, but not as rapidly (Environmental Systems Research Institute, 1992).
Uses: Often used to show the Earth or other planets and satellites as seen from space. Used as an aesthetic presentation, rather than for technical applications (Environmental Systems Research Institute, 1992).

Central meridian and a particular parallel (if shown) are straight lines. Other meridians and parallels are usually arcs of circles or ellipses, but some may be parabolas or hyperbolas. Like all perspective projections, General Vertical Near-side Perspective cannot illustrate the entire globe on one map; it can represent only part of one hemisphere.


Prompts
The following prompts display in the Projection Chooser if General Vertical Near-side Perspective is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.
Height of perspective point

Enter a value for the desired height of the perspective point above the sphere in the same units as the radius. Then, define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.
False easting False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Geographic (Lat/Lon)

The Geographic is a spherical coordinate system composed of parallels of latitude (Lat) and meridians of longitude (Lon) (Figure 103). Both divide the circumference of the Earth into 360 degrees. Degrees are further subdivided into minutes and seconds (60 sec = 1 minute, 60 min = 1 degree).

Because the Earth spins on an axis between the North and South Poles, concentric, parallel circles can be constructed, with a reference line exactly at the north-south center, termed the Equator. The series of circles north of the Equator is termed north latitudes and runs from 0° latitude (the Equator) to 90° North latitude (the North Pole), and similarly southward. Position in an east-west direction is determined from lines of longitude. These lines are not parallel, and they converge at the poles; however, they intersect lines of latitude perpendicularly.

Unlike the Equator in the latitude system, there is no natural zero meridian. In 1884, it was finally agreed that the meridian of the Royal Observatory in Greenwich, England, would be the prime meridian. Thus, the origin of the geographic coordinate system is the intersection of the Equator and the prime meridian. Note that the 180° meridian is the international date line.

If you choose Geographic from the Projection Chooser, the following prompts display:
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Note that in responding to prompts for other projections, values for longitude are negative west of Greenwich and values for latitude are negative south of the Equator.
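
A small sketch of this sign convention, converting degrees/minutes/seconds to the signed decimal degrees that the prompts expect (the function name is illustrative):

    def dms_to_decimal(degrees, minutes, seconds, direction):
        """Convert degrees/minutes/seconds to signed decimal degrees using
        the convention above: west and south are negative."""
        dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
        return -dd if direction in ("W", "S") else dd

    # Greenwich lies on the prime meridian (0 degrees); a point at
    # 84 degrees 30 minutes West becomes -84.5.
    assert dms_to_decimal(84, 30, 0, "W") == -84.5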


Figure 103: Geographic Projection

Figure 103 shows the graticule of meridians and parallels on the global surface.

Gnomonic

Gnomonic is a perspective projection that projects onto a tangent plane from a position in the center of the Earth. Because of the close perspective, this projection is limited to less than a hemisphere. However, it is the only projection which shows all great circles as straight lines.

Construction: Plane
Property: Compromise
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are straight lines.
Parallels: Polar aspect: the parallels are concentric circles. Oblique and equatorial aspects: parallels are ellipses, parabolas, or hyperbolas concave toward the poles (except for the Equator, which is straight).
Graticule spacing: Polar aspect: the meridian spacing is equal and increases away from the pole. The parallel spacing increases rapidly from the pole. Oblique and equatorial aspects: the graticule spacing increases very rapidly away from the center of the projection.
Linear scale: Linear scale and angular and areal deformation are extreme, rapidly increasing away from the center of the projection.
Uses: The Gnomonic projection is used in seismic work because seismic waves travel in approximately great circles. It is used with the Mercator projection for navigation.

With a polar aspect, the latitude intervals increase rapidly from the center outwards. With an equatorial or oblique aspect, the Equator is straight; meridians are straight and parallel, while intervals between parallels increase rapidly from the center and parallels are convex to the Equator. Because great circles are straight, this projection is useful for air and sea navigation. Rhumb lines are curved, which is the opposite of the Mercator projection.

Prompts
The following prompts display in the Projection Chooser if Gnomonic is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
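
The great-circle property follows directly from the spherical forward equations (Snyder, 1987, equations 22-1 through 22-5), sketched below with illustrative names. Note that only points less than 90° from the center can be projected:

    import math

    def gnomonic_forward(lat, lon, lat0, lon0, R=6370997.0):
        """Forward Gnomonic on a sphere (Snyder 1987, eqs. 22-1 to 22-5)."""
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        cos_c = (math.sin(phi0) * math.sin(phi) +
                 math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
        if cos_c <= 0.0:
            raise ValueError("point is 90 degrees or more from the center")
        k = 1.0 / cos_c  # radial scale factor; grows without bound at 90 deg
        x = R * k * math.cos(phi) * math.sin(lam - lam0)
        y = R * k * (math.cos(phi0) * math.sin(phi) -
                     math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
        return x, y

    # Points sampled along any great circle through the projectable region
    # fall on a straight line in (x, y), which is the navigational appeal.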

Hammer

The Hammer projection is useful for mapping the world. In particular, the Hammer projection is suited for thematic maps of the world, such as land cover.

Construction: Modified Azimuth
Property: Equal-area
Meridians: The central meridian is half as long as the Equator and a straight line. Others are curved, concave toward the central meridian, and unequally spaced.
Parallels: With the exception of the Equator, all parallels are complex curves that have a concave shape toward the nearest pole.
Graticule spacing: Only the Equator and central meridian are straight lines.
Linear scale: Scale decreases along the Equator and central meridian with distance from the origin.
Uses: Use for world maps only.

Source: Environmental Systems Research Institute, 1997

Prompts
The following prompts display in the Projection Chooser once Hammer is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian
Enter a value for longitude of the desired central meridian.
False easting False northing
Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 104: Hammer Projection

Source: Snyder and Voxland, 1989

Interrupted Goode Homolosine

The Interrupted Goode Homolosine projection is an equal-area pseudocylindrical projection developed by J.P. Goode in 1923. This projection is a combination of the Mollweide projection (also called Homolographic), used for higher latitudes, and the Sinusoidal projection (Goode 1925), used for lower latitudes. The two projections join at 40° 44’ 11.8” North and South, where the linear scale of the two projections matches. The projection is interrupted to reduce distortion of major land areas.

Construction: Pseudocylindrical
Property: Equal-area
Meridians: “In the interrupted form, there are six central meridians, each a straight line 0.22 as long as the Equator but not crossing the Equator. Other meridians are equally spaced sinusoidal curves between latitudes 40° 44’ N and S, and elliptical arcs elsewhere, all concave toward the central meridian. There is a slight bend in meridians at the 40° 44’ latitudes” (Snyder and Voxland, 1989).
Parallels: Parallels are straight parallel lines, which are perpendicular to the central meridians. Between latitudes 40° 44’ N and S, they are equally spaced; parallels gradually get closer together nearer the poles. Poles are points.
Graticule spacing: See Meridians and Parallels. Symmetry is nonexistent in the interrupted form.
Linear scale: Scale is true at each latitude between 40° 44’ N and S, and along the central meridian within the same latitude range. Scale varies with increased latitudes.
Uses: This projection is useful for world maps and is suitable for thematic or distribution mapping of the entire world. It has been chosen for two USGS projects: the Global Advanced Very High Resolution Radiometer (AVHRR) 1-km data set project and the AVHRR Pathfinder project.

Sources: Snyder and Voxland, 1989; United States Geological Survey (USGS), 2009

Prompts
The following prompts display in the Projection Chooser once Interrupted Goode Homolosine is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.

Figure 105: Interrupted Goode Homolosine Projection

Source: Snyder and Voxland, 1989

Interrupted Mollweide

The Interrupted Mollweide projection reduces the distortion of the Mollweide projection. It is interrupted into six regions with fixed parameters for each region.

For more information, see Mollweide on page 372.

Source: Snyder and Voxland, 1989

Prompts
The following prompts display in the Projection Chooser once Interrupted Mollweide is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.

Figure 106: Interrupted Mollweide Projection

Source: Snyder and Voxland, 1989

Krovak

The Krovak Oblique Conformal Conic projection (EPSG code 9819) is an oblique aspect of the Lambert Conformal Conic projection. The projection method is a conic projection based on one standard parallel, and the lines of contact are two pseudo-standard parallels. This projection is used in the Czech Republic and Slovakia under the name Krovak projection.

For a description of azimuth, see Projections.

Source: Environmental Systems Research Institute, 2000 and Geotoolkit.org, 2009b

Prompts
The following prompts display in the Projection Chooser once Krovak is selected. Enter the values for the following prompts:
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Longitude of center of projection Latitude of center of projection
Enter the values for the longitude of the center of the projection and the latitude of the center of the projection.
Azimuth
Enter the value for the azimuth of the center line passing through the center of the projection.
Scale factor
Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
Pseudo standard parallel 1
Enter the value for the latitude of the pseudo standard parallel.
XY plane rotation
Enter the value to define the orientation of the projection, along with the X scale and Y scale prompts.
X scale Y scale
Enter the values to orient the X axis and the Y axis.

Lambert Azimuthal Equal Area

The Lambert Azimuthal Equal Area projection is mathematically based on a plane tangent to the Earth. It is the only projection that can accurately represent both area and true direction from the center of the projection (Figure 107 on page 355). This central point can be located anywhere. This projection generally represents only one hemisphere.

Construction: Plane
Property: Equal-area
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels: Polar aspect: parallels are concentric circles. Oblique and equatorial aspects: the parallels are complex curves. The Equator on the equatorial aspect is a straight line.
Graticule spacing: Polar aspect: the meridian spacing is equal and increases, and the parallel spacing is unequal and decreases toward the periphery of the projection. The graticule spacing, in all aspects, retains the property of equivalence of area.
Linear scale: Linear scale is better than most azimuthals, but not as good as the equidistant. Angular deformation increases toward the periphery of the projection. Scale decreases radially toward the periphery of the map projection, and increases perpendicular to the radii toward the periphery.
Uses: The polar aspect is used by the USGS in the National Atlas. The polar, oblique, and equatorial aspects are used by the USGS for the Circum-Pacific Map. This projection is well-suited to square or round land masses.

In the polar aspect, latitude rings decrease their intervals from the center outwards. In the equatorial aspect, parallels are curves flattened in the middle. Meridians are also curved, except for the central meridian, and spacing decreases toward the edges. Concentric circles are closer together toward the edge of the map, and the scale distorts accordingly.

Prompts
The following prompts display in the Projection Chooser if Lambert Azimuthal Equal Area is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Calculation Method: Unspecified, Sphere Formulas, or Ellipsoid Formulas
Select a method by which the transformation is calculated:
• Unspecified - Model used is unknown or not specified.
• Sphere Formulas - Calculation is based on the model of the Earth as a sphere. This option is less accurate since the Earth is an ellipsoid, not a sphere; however, it may be satisfactory for some maps.
• Ellipsoid Formulas - Calculation is based on the model of the Earth as an ellipsoid. This option provides the more accurate reprojection within IMAGINE.
Refer to Spheroids for more information.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting False northing
Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 107: Lambert Azimuthal Equal Area Projection

In Figure 107 on page 355, three views of the Lambert Azimuthal Equal Area projection are shown: A) Polar aspect, showing one hemisphere; B) Equatorial aspect, frequently used in old atlases for maps of the eastern and western hemispheres; C) Oblique aspect, centered on 40°N.
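
The Sphere Formulas calculation method in the prompts corresponds to the spherical forward equations (Snyder, 1987, equations 24-2 through 24-4). A minimal sketch, with illustrative names and sphere radius:

    import math

    def laea_forward(lat, lon, lat0, lon0, R=6370997.0):
        """Forward Lambert Azimuthal Equal Area, sphere formulas
        (Snyder 1987, eqs. 24-2 to 24-4). Undefined at the antipode
        of the center, where the denominator below reaches zero."""
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        kp = math.sqrt(2.0 / (1.0 + math.sin(phi0) * math.sin(phi) +
                              math.cos(phi0) * math.cos(phi) *
                              math.cos(lam - lam0)))
        x = R * kp * math.cos(phi) * math.sin(lam - lam0)
        y = R * kp * (math.cos(phi0) * math.sin(phi) -
                      math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
        return x, y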

Lambert Conformal Conic

This projection is very similar to Albers Conical Equal Area, described previously. It is mathematically based on a cone that is tangent at one parallel or, more often, that is conceptually secant on two parallels (Figure 108 on page 358).

Construction: Cone
Property: Conformal
Meridians: Meridians are straight lines converging at a pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole and centered at a pole.
Graticule spacing: Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule is symmetrical. The graticule spacing retains the property of conformality.
Linear scale: Linear scale is true on standard parallels. Maximum scale error is 2.5% on a map of the United States (48 states) with standard parallels at 33°N and 45°N.
Uses: Used for large countries in the mid-latitudes having an east-west orientation. The United States (50 states) Base Map uses standard parallels at 37°N and 65°N. Some of the National Topographic Map Series 7.5-minute and 15-minute quadrangles, and the State Base Map series, are constructed on this projection; the latter series uses standard parallels of 33°N and 45°N. Aeronautical charts for Alaska use standard parallels at 55°N and 65°N, and the National Atlas of Canada uses standard parallels at 49°N and 77°N. The standard parallels for the US are 33° and 45°N.

The major property of this projection is its conformality. At all coordinates, meridians and parallels cross at right angles, and the correct angles produce correct shapes. Also, great circle lines are approximately straight. The conformal property of Lambert Conformal Conic, and the straightness of great circles, makes it valuable for landmark flying. The North or South Pole is represented by a point; the other pole cannot be shown. Areal distortion is minimal, but increases away from the standard parallels. This projection, like Albers, is most valuable in middle latitudes, especially in a country sprawling east to west like the US. It retains its properties at various scales, and sheets can be joined along their edges.
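
For the secant (two standard parallel) case just described, the spherical forward equations (Snyder, 1987, equations 15-1 through 15-5) can be sketched as follows; the function name, sphere radius, and default parallels are illustrative:

    import math

    def lcc_forward(lat, lon, lat1=33.0, lat2=45.0, lat0=23.0, lon0=-96.0,
                    R=6370997.0):
        """Forward Lambert Conformal Conic on a sphere, two standard
        parallels (Snyder 1987, eqs. 15-1 to 15-5). Defaults use the
        33/45 N parallels mentioned above."""
        def t(p):  # tan(pi/4 + phi/2), used in the cone constant and radius
            return math.tan(math.pi / 4.0 + p / 2.0)
        phi, lam = math.radians(lat), math.radians(lon)
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        n = (math.log(math.cos(phi1) / math.cos(phi2)) /
             math.log(t(phi2) / t(phi1)))            # cone constant
        F = math.cos(phi1) * t(phi1) ** n / n
        rho = R * F / t(phi) ** n
        rho0 = R * F / t(phi0) ** n
        theta = n * (lam - lam0)
        return rho * math.sin(theta), rho0 - rho * math.cos(theta)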

In comparison with Albers Conical Equal Area, Lambert Conformal Conic possesses true shape of small areas, whereas Albers possesses equal-area. Unlike Albers, parallels of Lambert Conformal Conic are spaced at increasing intervals the farther north or south they are from the standard parallels. Lambert Conformal Conic is the State Plane coordinate system projection for states of predominant east-west expanse, and since 1962, Lambert Conformal Conic has been used for the International Map of the World between 84°N and 80°S.

Prompts
The following prompts display in the Projection Chooser if Lambert Conformal Conic is selected. Respond to the prompts as described.
Spheroid Name Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 44 on page 243.
Latitude of 1st standard parallel Latitude of 2nd standard parallel
Enter two values for the desired control lines of the projection, that is, the standard parallels. Note that the first standard parallel is the southernmost. If you only have one standard parallel, you should enter that same value into all three latitude fields. Then, define the origin of the map projection in both spherical and rectangular coordinates.
Longitude of central meridian Latitude of origin of projection
Enter values for longitude of the desired central meridian and latitude of the origin of projection.
False easting at central meridian False northing at origin
Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough to ensure that there are no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 108: Lambert Conformal Conic Projection

In Figure 108, the standard parallels are 20°N and 60°N. Note the change in spacing of the parallels.

Lambert Conic Conformal (1 Standard Parallel)

This projection is very similar to Lambert Conformal Conic, described previously. Conical projections with one standard parallel are normally considered to maintain the nominal map scale along the parallel of latitude which is the line of contact between the imagined cone and the ellipsoid. For a one standard parallel Lambert, the natural origin of the projected coordinate system is the intersection of the standard parallel with the longitude of origin (central meridian): the latitude of natural origin is the standard parallel, and the longitude of natural origin is the central meridian. Where the central meridian cuts the one standard parallel will be the natural origin of the projected coordinate system. The parallels are arcs of concentric circles concave toward a pole and centered at a pole, while the meridians are all straight lines radiating from a point on the prolongation of the ellipsoid’s minor axis.

To maintain the conformal property, the spacing of the parallels is variable and increases with increasing distance from the standard parallel. Sometimes it is desirable to limit the maximum positive scale distortion by distributing it more evenly over the map area extent. This may be achieved by introducing a scale factor on the standard parallel of slightly less than unity, thus making it unity on two parallels either side of it. This is the same effect as choosing two specific standard parallels in the first place; the projection is then a Lambert Conformal Conic with two standard parallels, exemplified by the United States State Plane coordinate zones. For the one standard parallel case, one standard parallel and a scale factor are specified, rather than two standard parallels. The standard parallels are normally chosen in the one standard parallel case to approximately bisect the latitudinal extent of the country or area. Any number of Lambert projection zones may be formed according to which standard parallel or standard parallels are chosen. Some former French territories were mapped using this method.

Construction: Cone
Property: Conformal
Meridians: Meridians are straight lines converging at a pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole and centered at a pole.
Graticule spacing: Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule is symmetrical. The graticule spacing retains the property of conformality.

Linear scale: Linear scale is true on the standard parallel.
Uses: Used for large countries in the mid-latitudes having an east-west orientation.

Prompts

The following prompts display in the Projection Chooser if Lambert Conic Conformal (1SP) is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Latitude of natural origin

Enter the value for the desired control line of the projection, that is, the standard parallel.

Longitude of natural origin

Define the longitude of the natural origin, that is, the central meridian.

Scale factor at natural origin

Enter the scale factor at the natural origin on the standard parallel.

False easting
False northing

Enter constants of false easting and false northing corresponding to the intersection of the central meridian and the standard parallel. These values must be in meters. It is often convenient to make them large enough to ensure that there are no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Sources: Petrotechnical Open Software Corporation, Epicentre v2.2 Usage Guide (see www.posc.org/Epicentre.2_2); http://www.remotesensing.org/geotiff/geotiff.html
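The effect of the reduced scale factor described above can be checked numerically. The sketch below uses a spherical simplification of the one-standard-parallel scale function, with n = sin(lat1) (after Snyder); the standard parallel of 45° and k0 = 0.9995 are illustrative values only, not Projection Chooser defaults.

    # Scale along a one-standard-parallel Lambert with k0 slightly below 1.
    import math

    def k(lat_deg, lat1_deg=45.0, k0=0.9995):
        p4 = math.pi / 4.0
        phi, phi1 = math.radians(lat_deg), math.radians(lat1_deg)
        n = math.sin(phi1)                        # cone constant, one parallel
        F = math.cos(phi1) * math.tan(p4 + phi1 / 2) ** n / n
        return k0 * n * F / (math.cos(phi) * math.tan(p4 + phi / 2) ** n)

    for lat in range(42, 49):
        print(lat, round(k(lat), 6))
    # k bottoms out at 0.9995 on the standard parallel and rises through
    # 1.0 on a parallel to either side, as described above.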

Loximuthal

Construction: Pseudocylindrical
Property: Neither conformal nor equal-area
Meridians: The “central meridian is a straight line generally over half as long as the Equator, depending on the central latitude. If the central latitude is the Equator, the ratio is 0.5; if it is 40° N or S, the ratio is 0.65. Other meridians are equally spaced complex curves intersecting at the poles and concave toward the central meridian” (Snyder and Voxland, 1989).
Parallels: Parallels are straight parallel lines that are equally spaced. They are perpendicular to the central meridian. Poles are points.
Graticule spacing: See Meridians and Parallels. Symmetry exists about the central meridian. Symmetry also exists at the Equator if it is designated as the central latitude.
Linear scale: Scale is true along the central meridian. Scale is also constant along any given latitude, but different for the latitude of opposite sign.
Uses: Used for world maps where loxodromes (rhumb lines) are emphasized.

The distortion of the Loximuthal projection is average to pronounced. Distortion is not present at the central latitude on the central meridian. What is most noteworthy about the Loximuthal projection is the loxodromes that are “straight, true to scale, and correct in azimuth from the center” (Snyder and Voxland, 1989).

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser if Loximuthal is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian
Latitude of central parallel

Enter a value for the longitude of the desired central meridian to center the projection, and a value for the latitude of the central parallel.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 109: Loximuthal Projection

Source: Snyder and Voxland, 1989
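A spherical sketch of the Loximuthal forward equations, as published in Snyder and Voxland (1989), follows; treat the exact form as an assumption to verify against that source. The central latitude, central meridian, and R are illustrative.

    # Spherical Loximuthal, forward sketch (after Snyder and Voxland, 1989).
    import math

    def loximuthal(lat, lon, lat1=40.0, lon0=0.0, R=6370997.0):
        phi, lam = math.radians(lat), math.radians(lon - lon0)
        phi1 = math.radians(lat1)
        y = R * (phi - phi1)
        if abs(phi - phi1) < 1e-12:
            x = R * lam * math.cos(phi1)      # limit on the central parallel
        else:
            x = (R * lam * (phi - phi1) /
                 (math.log(math.tan(math.pi / 4 + phi / 2)) -
                  math.log(math.tan(math.pi / 4 + phi1 / 2))))
        return x, y

    print(loximuthal(60.0, 30.0))

The denominator is the Mercator ordinate difference, which is what makes loxodromes from the center plot as straight lines at true scale.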

Mercator

This famous cylindrical projection was originally designed by Flemish map maker Gerhardus Mercator in 1569 to aid navigation (Figure 110 on page 365).

Construction: Cylinder
Property: Conformal
Meridians: Meridians are straight and parallel.
Parallels: Parallels are straight and parallel.
Graticule spacing: Meridian spacing is equal and the parallel spacing increases away from the Equator. The graticule is symmetrical. Meridians intersect parallels at right angles.
Linear scale: Linear scale is true along the Equator only (line of tangency), or along two parallels equidistant from the Equator (the secant form). Scale can be determined by measuring one degree of latitude, which equals 60 nautical miles, 69 statute miles, or 111 kilometers.
Uses: An excellent projection for equatorial regions. Otherwise, the Mercator is a special-purpose map projection best suited for navigation. Secant constructions are used for large-scale coastal charts. The use of the Mercator map projection as the base for nautical charts is universal. Examples are the charts published by the National Ocean Survey, US Dept. of Commerce.

This projection can be thought of as being mathematically based on a cylinder tangent at the Equator. Any straight line is a constant-azimuth (rhumb) line. Rhumb lines, which show constant direction, are straight; for this reason, a Mercator map was very valuable to sea navigators. However, rhumb lines are not the shortest path; great circles are the shortest path, and most great circles appear as long arcs when drawn on a Mercator map.

To preserve conformality, parallels are placed increasingly farther apart with increasing distance from the Equator. Shape is true only within any small area, and angular relationships are preserved. Areal enlargement is extreme away from the Equator, and poles cannot be represented. The projection is reasonably accurate within a 15° band along the line of tangency. Due to extreme scale distortion in high latitudes, the projection is rarely extended beyond 80°N or S unless the latitude of true scale is other than the Equator. Distance scales are usually furnished for several latitudes.
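The spacing rule is easy to state in code. A minimal spherical sketch of the forward equations, x = R * dlon and y = R * ln tan(pi/4 + lat/2) (after Snyder), follows; R and the central meridian are illustrative, and this is not the ERDAS IMAGINE implementation.

    # Spherical Mercator, forward equations.
    import math

    def mercator(lat, lon, lon0=0.0, R=6370997.0):
        if abs(lat) >= 90.0:
            raise ValueError("poles cannot be represented")
        x = R * math.radians(lon - lon0)
        y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
        return x, y

    # Parallel spacing grows toward the poles to keep the map conformal:
    for lat in (0, 30, 60, 80):
        print(lat, round(mercator(lat, 0.0)[1] / 1000.0))   # km along y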

Prompts

The following prompts display in the Projection Chooser if Mercator is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian
Latitude of true scale

Enter values for the longitude of the desired central meridian and the latitude at which true scale is desired. These define the origin of the map projection in both spherical and rectangular coordinates. Selection of a parameter other than the Equator can be useful for making maps in extreme north or south latitudes.

False easting at central meridian
False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of true scale. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 110: Mercator Projection

In Figure 110 on page 365, all angles are shown correctly; therefore, small shapes are true (that is, the map is conformal). Rhumb lines are straight, which makes the projection useful for navigation.

Miller Cylindrical

Miller Cylindrical is a modification of the Mercator projection (Figure 111 on page 367). It is similar to the Mercator from the Equator to 45°, but beyond 45° the latitude line intervals are modified so that the distance between them increases less rapidly. Thus, Miller Cylindrical lessens the extreme exaggeration of the Mercator. Miller Cylindrical also includes the poles as straight lines, whereas the Mercator does not.

Construction: Cylinder
Property: Compromise
Meridians: All meridians are straight lines.
Parallels: All parallels are straight lines.
Graticule spacing: Meridians are parallel and equally spaced, while parallels are spaced farther apart the farther they are from the Equator. Meridians and parallels intersect at right angles (Environmental Systems Research Institute, 1992).
Linear scale: While the standard parallels, or lines that are true to scale and free of distortion, are at latitudes 45°N and S, only the Equator is standard.
Uses: Useful for world maps.

Miller Cylindrical is not equal-area. Meridians are equidistant, and the distance between parallels increases toward the poles. Both poles are represented as straight lines. Miller Cylindrical is used for world maps and in several atlases.
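The modified latitude spacing can be sketched directly. Miller's spherical ordinate is y = R * ln tan(pi/4 + 0.4 * lat) / 0.8 (after Snyder); comparing it with the Mercator ordinate shows the reduced polar stretching. Values below are illustrative.

    # Miller Cylindrical versus Mercator ordinates on a sphere.
    import math

    def miller_y(lat, R=6370997.0):
        return R * math.log(math.tan(math.pi / 4 + 0.4 * math.radians(lat))) / 0.8

    def mercator_y(lat, R=6370997.0):
        return R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))

    for lat in (45, 60, 75):
        print(lat, round(miller_y(lat) / 1000.0), round(mercator_y(lat) / 1000.0))
    print(90, round(miller_y(90.0) / 1000.0))
    # miller_y(90) is finite, which is why the poles can be drawn as
    # straight lines; the Mercator ordinate diverges at the pole.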

Prompts

The following prompts display in the Projection Chooser if Miller Cylindrical is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 111: Miller Cylindrical Projection

This projection resembles the Mercator, but has less distortion in polar regions. Miller Cylindrical is neither conformal nor equal-area.

MGRS

The United States Military Grid Reference System (MGRS) is designed for use with the UTM (Universal Transverse Mercator) and UPS (Universal Polar Stereographic) grid coordinate systems. The MGRS is an alphanumeric version of a numerical UTM grid coordinate. For that portion of the world where the UTM grid is specified (80° South to 84° North), the UTM grid zone number is the first element of a Military Grid reference.

The world is generally divided into 6° by 8° geographic areas, each of which is given a unique identification, called the Grid Zone Designation. These areas are covered by a pattern of 100,000-meter squares. The grid zones are divided into a pattern of 100,000-meter grid squares forming a matrix of rows and columns. Each row and each column is sequentially lettered such that two letters provide a unique identification within approximately 9° for each 100,000-meter grid square; these two letters are called the 100,000-meter square identification.

A reference keyed to a gridded map of any scale is made by giving the 100,000-meter square identification together with the numerical location. There are additional designations that express additional refinements.

Source: National Geospatial-Intelligence Agency (NGA), 2010a.

Figure 112: MGRS Grid Zones

An example of an MGRS designation is: 15SWC8081751205

The components of the MGRS values are:

Grid Zone Designation

• The first two characters represent the 6° wide UTM zone.
• The third character is a letter designating a band of latitude. The letters are C through X, omitting I and O. The 20 bands begin at 80° South and proceed northward.

100,000-Meter Grid Squares

• The fourth and fifth characters designate one of the 100,000-meter grid squares within the grid zone.

Easting and Northing Values

The remaining characters represent the easting and northing values within the 100,000-meter grid square. The number of characters determines the amount of precision:

• 1 character = 10 km precision
• 2 characters = 1 km precision
• 3 characters = 100 meters precision
• 4 characters = 10 meters precision
• 5 characters = 1 meter precision

Source: National Geospatial-Intelligence Agency (NGA), 2010b.

For a complete description, see the National Geospatial-Intelligence Agency website http://earth-info.nga.mil/GandG/publications. The publication is named DMA Technical Manual 8358.1.
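The layout described above can be split apart mechanically. The helper below is a sketch (the function name and the even-digit assumption are ours, not an ERDAS or NGA API) that decomposes the example reference into its parts.

    # Split an MGRS reference such as 15SWC8081751205 into its components.
    import re

    def split_mgrs(ref):
        m = re.match(r"^(\d{1,2})([C-X])([A-Z]{2})(\d*)$", ref)
        if not m or m.group(2) in "IO":
            raise ValueError("not a recognizable MGRS reference")
        zone, band, square, digits = m.groups()
        if len(digits) % 2:
            raise ValueError("easting/northing digits must split evenly")
        half = len(digits) // 2
        return {"utm_zone": int(zone),            # 6-degree UTM zone
                "latitude_band": band,            # 8-degree band, C-X minus I, O
                "square_100km": square,           # 100,000-meter square letters
                "easting": digits[:half],
                "northing": digits[half:],
                "precision_m": 10 ** (5 - half)}  # 5 digits each -> 1 meter

    print(split_mgrs("15SWC8081751205"))
    # {'utm_zone': 15, 'latitude_band': 'S', 'square_100km': 'WC',
    #  'easting': '80817', 'northing': '51205', 'precision_m': 1}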

Modified Transverse Mercator

In 1972, the USGS devised a projection specifically for the revision of a 1954 map of Alaska which, like its predecessors, was based on the Polyconic projection. This projection was drawn to a scale of 1:2,000,000 and published at 1:2,500,000 (map “E”) and 1:1,584,000 (map “B”). Graphically prepared by adapting coordinates for the UTM projection, it is identified as the Modified Transverse Mercator projection. It resembles the Transverse Mercator in a very limited manner and cannot be considered a cylindrical projection. It resembles the Equidistant Conic projection for the ellipsoid in actual construction. The projection was also used in 1974 for a base map of the Aleutian-Bering Sea Region published at 1:2,500,000 scale.

Construction: Cone
Property: Equidistant
Meridians: On pre-1973 editions of the Alaska Map E, meridians are curved concave toward the center of the projection. On post-1973 editions, the meridians are straight.
Parallels: Parallels are arcs concave to the pole.
Graticule spacing: Meridian spacing is approximately equal and decreases toward the pole. Parallels are approximately equally spaced. The graticule is symmetrical on post-1973 editions of the Alaska Map E.
Linear scale: Linear scale is more nearly correct along the meridians than along the parallels.
Uses: USGS’s Alaska Map E at the scale of 1:2,500,000.

It is found to be most closely equivalent to the Equidistant Conic for the Clarke 1866 ellipsoid, with the scale along the meridians reduced to 0.9992 of true scale and the standard parallels at latitude 66.09° and 53.50°N. The Bathymetric Maps Eastern Continental Margin U.S.A., published by the American Association of Petroleum Geologists, uses the straight meridians on its Modified Transverse Mercator and is more equivalent to the Equidistant Conic map projection.

Prompts

The following prompts display in the Projection Chooser if Modified Transverse Mercator is selected. Respond to the prompts as described.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Mollweide

Carl B. Mollweide designed the projection in 1805. It is an equal-area pseudocylindrical projection. The Mollweide projection is used primarily for thematic maps of the world.

Construction: Pseudocylinder
Property: Equal-area
Meridians: Meridians are elliptical arcs that are equally spaced. The exception is the central meridian, which is a straight line.
Parallels: All parallels are straight lines.
Graticule spacing: The Equator and the central meridian are linear graticules.
Linear scale: Scale is accurate along latitudes 40° 44’ N and S at the central meridian. Distortion becomes more pronounced farther from these lines, and is severe at the extremes of the projection.
Uses: Use for world maps only.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once Mollweide is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 113: Mollweide Projection

Source: Snyder and Voxland, 1989
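The equal-area construction requires solving a small transcendental equation for each latitude. A spherical sketch (after Snyder: solve 2t + sin 2t = pi * sin(lat) by Newton-Raphson, then x = (2*sqrt(2)/pi)*R*dlon*cos t and y = sqrt(2)*R*sin t) follows; R, the tolerance, and the iteration cap are illustrative.

    # Spherical Mollweide, forward equations with Newton-Raphson iteration.
    import math

    def mollweide(lat, lon, lon0=0.0, R=6370997.0):
        phi, lam = math.radians(lat), math.radians(lon - lon0)
        theta = phi                                # iterate on theta = 2t
        for _ in range(100):
            d = -((theta + math.sin(theta) - math.pi * math.sin(phi)) /
                  (1.0 + math.cos(theta)))
            theta += d
            if abs(d) < 1e-12:
                break
        t = theta / 2.0
        x = (2.0 * math.sqrt(2.0) / math.pi) * R * lam * math.cos(t)
        y = math.sqrt(2.0) * R * math.sin(t)
        return x, y

    print(mollweide(45.0, 90.0))   # convergence slows near the poles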

New Zealand Map Grid

This projection is used only for mapping New Zealand.

Construction: Modified cylindrical
Property: Conformal
Meridians: N/A
Parallels: N/A
Graticule spacing: None
Linear scale: Scale is within 0.02 percent of actual scale for the country of New Zealand.
Uses: This projection is useful only for maps of New Zealand.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once New Zealand Map Grid is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

The Spheroid Name defaults to International 1924. The Datum Name defaults to Geodetic Datum 1949. These fields are not editable.

You can define this projection either by specifying absolute easting and northing values or by specifying shifts relative to the easting and northing normally used for this projection.

If using the absolute method, the following prompts are displayed (the values shown below reflect the definition of EPSG Code 27200):

Specify absolute or relative easting/northing: Absolute
Longitude of origin of projection: 173:00:00.000000E
Latitude of origin of projection: 41:00:00.000000S
Easting: 2510000.000000 meters
Northing: 6023150.000000 meters

If using the relative method, the following prompts are displayed (under most circumstances, no shift would be specified, that is, the values would remain 0):

Specify absolute or relative easting/northing: Relative
Easting Shift (from 2510000.0): 0.000000 meters
Northing Shift (from 6023150.0): 0.000000 meters

Oblated Equal Area

Prompts

The following prompts display in the Projection Chooser once Oblated Equal Area is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use.

Longitude of center of projection
Latitude of center of projection

Enter the longitude and the latitude of the center of the projection.

Parameter M
Parameter N

Enter the oblated equal area oval shape parameters M and N.

Rotation angle

Enter the oblated equal area oval rotation angle.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Oblique Mercator (Hotine)

Oblique Mercator is a cylindrical, conformal projection that intersects the global surface along a great circle. It is equivalent to a Mercator projection that has been altered by rotating the cylinder so that the central line of the projection is a great circle path instead of the Equator. Shape is true only within any small area. Areal enlargement increases away from the line of tangency. The projection is reasonably accurate within a 15° band along the line of tangency.

The Hotine version is based on a study of conformal projections published by British geodesist Martin Hotine in 1946-47. Prior to the implementation of the Space Oblique Mercator, the Hotine version was used for mapping Landsat satellite imagery.

Construction: Cylinder
Property: Conformal
Meridians: Meridians are complex curves concave toward the line of tangency, except each 180th meridian is straight.
Parallels: Parallels are complex curves concave toward the nearest pole.
Graticule spacing: Graticule spacing increases away from the line of tangency and retains the property of conformality.
Linear scale: Linear scale is true along the line of tangency, or along two lines equidistant from and parallel to the line of tangency.
Uses: Useful for plotting linear configurations that are situated along a line oblique to the Earth’s Equator. Examples are: NASA Surveyor Satellite tracking charts, ERTS flight indexes, strip charts for navigation, and the National Geographic Society’s maps “West Indies,” “Countries of the Caribbean,” “Hawaii,” and “New Zealand.”

The USGS uses the Hotine version of Oblique Mercator.

Prompts

The following prompts display in the Projection Chooser if Oblique Mercator (Hotine) is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Scale factor at center

Designate the desired scale factor along the central line of the projection. This parameter may be used to modify scale distortion away from this central line. A value of 1.0 indicates true scale only along the central line. A value of less than, but close to, one is often used to lessen scale distortion away from the central line.

Latitude of point of origin
False easting
False northing

The center of the projection is defined by rectangular coordinates of false easting and false northing. The origin of rectangular coordinates on this projection occurs at the nearest intersection of the central line with the Earth’s Equator. To shift the origin to the intersection of the latitude of the origin entered above and the central line of the projection, compute coordinates of the latter point with zero false eastings and northings, reverse the signs of the coordinates obtained, and use these for the false easting and false northing. These values must be in meters. It is often convenient to add additional values so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Do you want to enter either:
A) Azimuth East of North for central line and the longitude of the point of origin
B) The latitude and longitude of the first and second points defining the central line

These formats differ slightly in definition of the central line of the projection.

Format A

For Format A, the additional prompts are:

Azimuth east of north for central line
Longitude of point of origin

Format A defines the central line of the projection by the angle east of north to the desired great circle path and by the latitude and longitude of the point along the great circle path from which the angle is measured. Appropriate values should be entered.

Format B

For Format B, the additional prompts are:

Longitude of 1st point
Latitude of 1st point
Longitude of 2nd point
Latitude of 2nd point

Format B defines the central line of the projection by the latitude of a point on the central line which has the desired scale factor entered previously, and by the longitude and latitude of two points along the desired great circle path. Appropriate values should be entered.

Figure 114: Oblique Mercator Projection

Source: Snyder and Voxland, 1989

Orthographic

The Orthographic projection is geometrically based on a plane tangent to the Earth, and the point of projection is at infinity (Figure 115 on page 381). The Earth appears as it would from outer space. Light rays that cast the projection are parallel and intersect the tangent plane at right angles. This projection is a truly graphic representation of the Earth, and is a projection in which distortion becomes a visual aid. It is the most familiar of the azimuthal map projections.

Construction: Plane
Property: Compromise
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are ellipses, concave toward the center of the projection. Equatorial aspect: the meridians are ellipses concave toward the straight central meridian.
Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are ellipses concave toward the poles. Equatorial aspect: the parallels are straight and parallel.
Graticule spacing: Polar aspect: meridian spacing is equal and increases, and the parallel spacing decreases from the point of tangency. Oblique and equatorial aspects: the graticule spacing decreases away from the center of the projection.
Linear scale: Scale is true on the parallels in the polar aspect and on all circles centered at the pole of the projection in all aspects. Scale decreases along lines radiating from the center of the projection.
Uses: USGS uses the Orthographic map projection in the National Atlas.

This projection is limited to one hemisphere and shrinks those areas toward the periphery. In the polar aspect, latitude ring intervals decrease from the center outwards at a much greater rate than with Lambert Azimuthal, with spaces closing up toward the outer edge. In the equatorial aspect, the central meridian and parallels are straight. Directions from the center of the projection are true.

The Orthographic projection seldom appears in atlases; its utility is more pictorial than technical. Orthographic has been used as a basis for maps by Rand McNally and the USGS.

Prompts

The following prompts display in the Projection Chooser if Orthographic is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of center of projection
Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection. These define the center of the map projection in both spherical and rectangular coordinates.

False easting
False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Three views of the Orthographic projection are shown in Figure 115 on page 381: A) Polar aspect; B) Equatorial aspect; C) Oblique aspect, centered at 40°N and showing the classic globe-like view.

Figure 115: Orthographic Projection (panels A, B, and C)
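The oblique aspect shown in view C can be computed from the spherical forward equations (after Snyder, eqs. 20-3 and 20-4). The sketch below uses an illustrative center and radius and rejects points on the far hemisphere.

    # Spherical Orthographic, oblique aspect, forward equations.
    import math

    def orthographic(lat, lon, lat1=40.0, lon0=-100.0, R=6370997.0):
        phi, lam = math.radians(lat), math.radians(lon - lon0)
        phi1 = math.radians(lat1)
        cos_c = (math.sin(phi1) * math.sin(phi) +
                 math.cos(phi1) * math.cos(phi) * math.cos(lam))
        if cos_c < 0.0:
            raise ValueError("point is on the hemisphere facing away")
        x = R * math.cos(phi) * math.sin(lam)
        y = R * (math.cos(phi1) * math.sin(phi) -
                 math.sin(phi1) * math.cos(phi) * math.cos(lam))
        return x, y

    print(orthographic(35.0, -90.0))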

Plate Carrée

The parameters for the Plate Carrée projection are identical to those of the Equirectangular projection. For more information, see Equirectangular (Plate Carrée) on page 336.

Prompts

The following prompts display in the Projection Chooser if Plate Carrée is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian
Latitude of true scale

Enter a value for the longitude of the desired central meridian to center the projection, and a value for the latitude of true scale.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 116: Plate Carrée Projection

Source: Snyder and Voxland, 1989
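The underlying equations are the simplest in this chapter: on a sphere, x = R * dlon * cos(lat_ts) and y = R * lat, which reduces to the classic Plate Carrée when the latitude of true scale is the Equator. A sketch with illustrative values:

    # Spherical Equirectangular / Plate Carree, forward equations.
    import math

    def equirectangular(lat, lon, lat_ts=0.0, lon0=0.0, R=6370997.0):
        x = R * math.radians(lon - lon0) * math.cos(math.radians(lat_ts))
        y = R * math.radians(lat)
        return x, y

    print(equirectangular(30.0, 45.0))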

Polar Stereographic

The Polar Stereographic projection may be used to accommodate all regions not included in the UTM coordinate system, that is, regions north of 84°N and south of 80°S. This projection produces a circular map with one of the poles at the center; the central point is either the North Pole or the South Pole. Polar Stereographic is an azimuthal projection obtained by projecting from the opposite pole (Figure 117 on page 385). Of all the polar aspect planar projections, this is the only one that is conformal. The projection is equivalent to the polar aspect of the Stereographic projection on a spheroid.

Construction: Plane
Property: Conformal
Meridians: Meridians are straight and radiating.
Parallels: Parallels are concentric circles.
Graticule spacing: The distance between parallels increases with distance from the central pole.
Linear scale: The scale increases with distance from the center. If a standard parallel is chosen rather than one of the poles, this latitude represents the true scale, and the scale nearer the pole is reduced.
Uses: Polar regions (conformal).

The point of tangency is a single point, either the North Pole or the South Pole. If the plane is secant instead of tangent, the point of global contact is a line of latitude (Environmental Systems Research Institute, 1992). All of either the northern or southern hemisphere can be shown, but not both. Even though scale and area are not constant with Polar Stereographic, this projection, like all stereographic projections, possesses the property of conformality.

Polar Stereographic stretches areas toward the periphery, and scale increases for areas farther from the central pole. In the Universal Polar Stereographic (UPS) system, used for regions north of 84°N and south of 80°S, the scale factor at the pole is made 0.994, thus making the standard parallel (latitude of true scale) approximately 81°07’N or S.

The Astrogeology Center of the Geological Survey at Flagstaff, Arizona, has been using the Polar Stereographic projection for the mapping of polar areas of every planet and satellite for which there is sufficient information.
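The quoted UPS standard parallel can be recovered from the scale factor. On a sphere the polar stereographic scale is k = 2 * k0 / (1 + sin lat) (after Snyder), so true scale (k = 1) occurs at lat = asin(2 * k0 - 1); the ellipsoidal UPS value differs only slightly. A one-line check:

    # Latitude of true scale for a polar stereographic with k0 at the pole.
    import math

    k0 = 0.994
    print(round(math.degrees(math.asin(2.0 * k0 - 1.0)), 2))
    # ~81.11 degrees, close to the 81 deg 07 min quoted for UPS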

Prompts

The following prompts display in the Projection Chooser if Polar Stereographic is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243. Ellipsoid projections of the polar regions normally use the International 1909 spheroid (Environmental Systems Research Institute, 1992).

Longitude below pole of map
Latitude of true scale

Define the origin of the map projection in both spherical and rectangular coordinates.

Enter a value for the longitude directed straight down below the pole for a north polar aspect, or straight up from the pole for a south polar aspect. This is equivalent to centering the map with a desired meridian.

Enter a value for the latitude at which true scale is desired. For tangential projections, specify the latitude of true scale as the North Pole, 90 00 00, or the South Pole, -90 00 00. For secant projections, specify the latitude of true scale as any line of latitude other than 90°N or S (Environmental Systems Research Institute, 1992).

False easting
False northing

Enter values of false easting and false northing corresponding to the pole. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

This projection is conformal and is the most scientific projection for polar regions.

Figure 117: Polar Stereographic Projection and its Geometric Construction

Polyconic

Polyconic was developed in 1820 by Ferdinand Hassler specifically for mapping the eastern coast of the US (Figure 118 on page 387). Polyconic projections are made up of an infinite number of conic projections tangent to an infinite number of parallels. These conic projections are placed in relation to a central meridian. Polyconic projections compromise properties such as equal-area and conformality, although the central meridian is held true to scale.

Construction: Cone
Property: Compromise
Meridians: The central meridian is a straight line, but all other meridians are complex curves.
Parallels: Parallels (except the Equator) are nonconcentric circular arcs. The Equator is a straight line.
Graticule spacing: All parallels are arcs of circles, but not concentric. All meridians, except the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.
Linear scale: The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (Environmental Systems Research Institute, 1992).
Uses: Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (Environmental Systems Research Institute, 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.

This projection is used mostly for north-south oriented maps. Distortion increases greatly the farther east and west an area is from the central meridian.

Prompts

The following prompts display in the Projection Chooser if Polyconic is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian
Latitude of origin of projection

Enter values for the longitude of the desired central meridian and the latitude of the origin of projection. These define the origin of the map projection in both spherical and rectangular coordinates.

False easting at central meridian
False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

This projection is used by the USGS for topographic quadrangle maps.

Figure 118: Polyconic Projection of North America

In Figure 118, the central meridian is 100°W.
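Each parallel of the Polyconic is drawn from its own tangent cone, which is visible in the spherical forward equations (after Snyder, eqs. 18-1 to 18-3). The origin values below are illustrative.

    # Spherical Polyconic, forward equations.
    import math

    def polyconic(lat, lon, lat0=30.0, lon0=-100.0, R=6370997.0):
        phi, lam = math.radians(lat), math.radians(lon - lon0)
        phi0 = math.radians(lat0)
        if abs(phi) < 1e-12:                 # the Equator is a straight line
            return R * lam, -R * phi0
        E = lam * math.sin(phi)
        cot = 1.0 / math.tan(phi)            # cot(lat): radius of each cone arc
        x = R * cot * math.sin(E)
        y = R * (phi - phi0 + cot * (1.0 - math.cos(E)))
        return x, y

    print(polyconic(40.0, -90.0))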

Quartic Authalic

Construction: Pseudocylindrical
Property: Equal-area
Meridians: The central meridian is a straight line 0.45 as long as the Equator. The other meridians are curves that are equally spaced. They fit a “fourth-order (quartic) equation and concave toward the central meridian” (Snyder and Voxland, 1989).
Parallels: Parallels are straight parallel lines that are unequally spaced, perpendicular to the central meridian. The parallels have the greatest distance between them in proximity to the Equator, and parallel spacing changes slowly. Poles are points.
Graticule spacing: See Meridians and Parallels. Symmetry exists about the central meridian or the Equator.
Linear scale: Scale is accurate along the Equator. Scale is constant along each latitude, and is the same for the latitude of opposite sign.
Uses: Used for world maps.

Outer meridians at high latitudes have great distortion. If the Quartic Authalic projection is interrupted, distortion can be reduced. The McBryde-Thomas Flat-Polar Quartic projection uses Quartic Authalic as its base (Snyder and Voxland, 1989).

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser once Quartic Authalic is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter the value of the longitude of the central meridian.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 119: Quartic Authalic Projection

Source: Snyder and Voxland, 1989
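The spherical forward equations reported by Snyder and Voxland (1989) are compact enough to quote: x = R * dlon * cos(lat) / cos(lat/2) and y = 2 * R * sin(lat/2). The 0.45 ratio of central meridian to Equator quoted above falls out of them, since the full central meridian is 2*sqrt(2)*R against an Equator of 2*pi*R. A sketch with illustrative values:

    # Spherical Quartic Authalic, forward equations.
    import math

    def quartic_authalic(lat, lon, lon0=0.0, R=6370997.0):
        phi, lam = math.radians(lat), math.radians(lon - lon0)
        x = R * lam * math.cos(phi) / math.cos(phi / 2.0)
        y = 2.0 * R * math.sin(phi / 2.0)
        return x, y

    print(round(2.0 * math.sqrt(2.0) / (2.0 * math.pi), 2))   # 0.45
    print(quartic_authalic(50.0, 20.0))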

Robinson

According to ESRI, the Robinson “central meridian is a straight line 0.51 times the length of the Equator. The projection is based upon tabular coordinates instead of mathematical formulas” (Environmental Systems Research Institute, 1997).

Construction: Pseudocylinder
Property: Neither conformal nor equal-area
Meridians: Meridians are equally spaced and concave toward the central meridian, and look like elliptical arcs (Environmental Systems Research Institute, 1997).
Parallels: Parallels are equally spaced straight lines between 38° N and S; spacing decreases beyond these limits. The poles are 0.53 times the length of the Equator.
Graticule spacing: The central meridian and all parallels are linear.
Linear scale: Scale is true along latitudes 38° N and S. Scale is constant along any specific latitude, and for the latitude of opposite sign.
Uses: Useful for thematic and common world maps. This projection has been used both by Rand McNally and the National Geographic Society.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once Robinson is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter the value of the longitude of the central meridian.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 120: Robinson Projection

Source: Snyder and Voxland, 1989

RSO

The acronym RSO stands for Rectified Skewed Orthomorphic. This projection is used to map areas of Brunei and Malaysia, and is each country’s national projection.

Construction: Cylinder
Property: Conformal
Meridians: Two meridians are 180 degrees apart.
Parallels: N/A
Graticule spacing: Graticules are two meridians 180 degrees apart.
Linear scale: “A line of true scale is drawn at an angle to the central meridian” (Environmental Systems Research Institute, 1997).
Uses: This projection should be used to map areas of Brunei and Malaysia.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once RSO is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

RSO Type

Select the RSO Type. You can choose from Borneo or Malaysia.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Sinusoidal

Sometimes called the Sanson-Flamsteed, Sinusoidal is a projection with some characteristics of a cylindrical projection; it is often called a pseudocylindrical type. The central meridian is the only straight meridian; all others become sinusoidal curves. Sinusoidal maps achieve the property of equal-area but not conformality.

Construction: Pseudocylinder
Property: Equal-area
Meridians: Meridians are sinusoidal curves, curved toward a straight central meridian.
Parallels: All parallels are straight, parallel lines.
Graticule spacing: Meridian spacing is equal and decreases toward the poles. Parallel spacing is equal. The graticule spacing retains the property of equivalence of area.
Linear scale: Linear scale is true on the parallels and the central meridian.
Uses: Used as a world equal-area projection in atlases to show distribution patterns. Used by the USGS as the base for maps showing prospective hydrocarbon provinces and sedimentary basins of the world.

All parallels are straight and the correct length; for a complete world map, the Equator is twice as long as the central meridian. Parallels are also the correct distance from the Equator. The Equator and central meridian are distortion free, but distortion becomes pronounced near the outer meridians, especially in polar regions.

Interrupting a Sinusoidal world or hemisphere map can lessen distortion, because each interrupted area can be constructed to contain a separate central meridian. Central meridians may be different for the northern and southern hemispheres and may be selected to minimize distortion of continents or oceans. Sinusoidal is particularly suited for areas smaller than the whole world, such as South America or Africa, especially those bordering the Equator, and is used as an equal-area projection to portray areas that have a maximum extent in a north-south direction.
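The equal-area property comes directly from the forward equations: on a sphere, x = R * dlon * cos(lat) and y = R * lat. A sketch with illustrative values:

    # Spherical Sinusoidal, forward equations.
    import math

    def sinusoidal(lat, lon, lon0=0.0, R=6370997.0):
        phi = math.radians(lat)
        return R * math.radians(lon - lon0) * math.cos(phi), R * phi

    # Parallels keep their true length, so on a complete world map the
    # Equator is twice as long as the central meridian:
    print(sinusoidal(0.0, 180.0)[0] / sinusoidal(90.0, 0.0)[1])   # 2.0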

Prompts The following prompts display in the Projection Chooser if Sinusoidal is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.
Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 121: Sinusoidal Projection

Source: Snyder and Voxland, 1989


Space Oblique Mercator

The Space Oblique Mercator (SOM) projection is nearly conformal and has little scale distortion within the sensing range of an orbiting mapping satellite such as Landsat. It is the first projection to incorporate the Earth’s rotation with respect to the orbiting satellite.

Construction: Cylinder
Property: Conformal
Meridians: All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.
Parallels: All parallels are curved lines.
Graticule spacing: There are no graticules.
Linear scale: Scale is true along the groundtrack, and varies approximately 0.01% within sensing range (Environmental Systems Research Institute, 1992).
Uses: Used for georectification of, and continuous mapping from, satellite imagery. Standard format for data from Landsats 4 and 5 (Environmental Systems Research Institute, 1992).

The method of projection used is the modified cylindrical, for which the central line is curved and defined by the groundtrack of the orbit of the satellite. The line of tangency is conceptual and there are no graticules.

The SOM projection is defined by USGS. According to USGS, the X axis passes through the descending node for each daytime scene. The Y axis is perpendicular to the X axis, to form a Cartesian coordinate system. The direction of the X axis in a daytime Landsat scene is in the direction of the satellite motion, south, while the Y axis is directed east. For SOM projections used by EOSAT, the axes are switched: the X axis is directed east and the Y axis is directed south.

The SOM projection is specifically designed to minimize distortion within sensing range of a mapping satellite as it orbits the Earth. It can be used for the rectification of, and continuous mapping from, satellite imagery. It is the standard format for data from Landsats 4 and 5. Plots for adjacent paths do not match without transformation (Environmental Systems Research Institute, 1991).


Prompts The following prompts display in the Projection Chooser if Space Oblique Mercator is selected. Respond to the prompts as described.
Spheroid Name
Datum Name

Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Orbital path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 122: Space Oblique Mercator Projection

Source: Snyder and Voxland, 1989


Space Oblique Mercator (Formats A & B)
The Space Oblique Mercator (Formats A & B) projection is similar to the Space Oblique Mercator projection.

For more information, see Space Oblique Mercator on page 395.

Prompts

The following prompts display in the Projection Chooser once Space Oblique Mercator (Formats A & B) is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.
False easting False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
Format A (Generic Satellite)

Inclination of orbit at ascending node
Period of satellite revolution in minutes
Longitude of ascending orbit at equator
Landsat path flag

If you select Format A of the Space Oblique Mercator projection, you need to supply the information listed above.
Format B (Landsat)

Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.


State Plane

The State Plane is an X,Y coordinate system (not a map projection); its zones divide the US into over 130 sections, each with its own projection surface and grid network (Figure 123 on page 399). With the exception of very narrow states, such as Delaware, New Jersey, and New Hampshire, most states are divided into multiple (2 - 10) zones. The Lambert Conformal projection is used for zones extending mostly in an east-west direction. The Transverse Mercator projection is used for zones extending mostly in a north-south direction. Alaska, Florida, and New York use either Transverse Mercator or Lambert Conformal for different areas. The Aleutian panhandle of Alaska is prepared on the Oblique Mercator projection. Zone boundaries follow state and county lines, and, because each zone is small, distortion is less than one in 10,000. Each zone has a centrally located origin and a central meridian that passes through this origin. Two zone numbering systems are currently in use—the USGS code system and the National Ocean Service (NOS) code system (Table 46, NAD27 State Plane Coordinate System for the United States, on page 399 and Table 47, NAD83 State Plane Coordinate System for the United States, on page 404), but other numbering systems exist.

Prompts The following prompts appear in the Projection Chooser if State Plane is selected. Respond to the prompts as described.
State Plane Zone

Enter either the USGS zone code number as a positive value, or the NOS zone code number as a negative value.
NAD27 or NAD83 or HARN

Either North America Datum 1927 (NAD27), North America Datum 1983 (NAD83), or High Accuracy Reference Network (HARN) may be used to perform the State Plane calculations.

• NAD27 is based on the Clarke 1866 spheroid.
• NAD83 and HARN are based on the GRS 1980 spheroid. Some zone numbers have been changed or deleted from NAD27.

Tables for both NAD27 and NAD83 zone numbers follow (Table 46, NAD27 State Plane Coordinate System for the United States, on page 399 and Table 47, NAD83 State Plane Coordinate System for the United States, on page 404). These tables include both USGS and NOS code systems.
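The sign convention in the State Plane Zone prompt can be illustrated with a few rows from Table 46. The lookup helper below is hypothetical, written only to show that USGS codes are entered as positive numbers and NOS codes as negative numbers.

    # Illustrative State Plane zone-code lookup (NAD27 values from Table 46).
    SPCS27 = {
        ("Alabama", "East"): (3101, -101),     # (USGS code, NOS code)
        ("Alabama", "West"): (3126, -102),
        ("Georgia", "East"): (3651, -1001),
    }

    def zone_entry(state, zone, system="USGS"):
        usgs, nos = SPCS27[(state, zone)]
        return usgs if system == "USGS" else nos

    print(zone_entry("Alabama", "East"))            # 3101 (positive: USGS)
    print(zone_entry("Alabama", "East", "NOS"))     # -101 (negative: NOS)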


Figure 123: Zones of the State Plane Coordinate System

The following abbreviations are used in Table 46, NAD27 State Plane Coordinate System for the United States, on page 399 and Table 47, NAD83 State Plane Coordinate System for the United States, on page 404:

Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic

Table 46: NAD27 State Plane Coordinate System for the United States

State                  Zone Name      Type      USGS Code  NOS Code

Alabama                East           Tr Merc   3101       -101
Alabama                West           Tr Merc   3126       -102
Alaska                 1              Oblique   6101       -5001
Alaska                 2              Tr Merc   6126       -5002
Alaska                 3              Tr Merc   6151       -5003
Alaska                 4              Tr Merc   6176       -5004
Alaska                 5              Tr Merc   6201       -5005
Alaska                 6              Tr Merc   6226       -5006
Alaska                 7              Tr Merc   6251       -5007
Alaska                 8              Tr Merc   6276       -5008
Alaska                 9              Tr Merc   6301       -5009
Alaska                 10             Lambert   6326       -5010
American Samoa         -------        Lambert   ------     -5302

Arizona                East           Tr Merc   3151       -201
Arizona                Central        Tr Merc   3176       -202
Arizona                West           Tr Merc   3201       -203
Arkansas               North          Lambert   3226       -301
Arkansas               South          Lambert   3251       -302
California             I              Lambert   3276       -401
California             II             Lambert   3301       -402
California             III            Lambert   3326       -403
California             IV             Lambert   3351       -404
California             V              Lambert   3376       -405
California             VI             Lambert   3401       -406
California             VII            Lambert   3426       -407
Colorado               North          Lambert   3451       -501
Colorado               Central        Lambert   3476       -502
Colorado               South          Lambert   3501       -503
Connecticut            -------        Lambert   3526       -600
Delaware               -------        Tr Merc   3551       -700
District of Columbia   Use Maryland or Virginia
Florida                East           Tr Merc   3601       -901
Florida                West           Tr Merc   3626       -902
Florida                North          Lambert   3576       -903
Georgia                East           Tr Merc   3651       -1001
Georgia                West           Tr Merc   3676       -1002
Guam                   -------        Polycon   ------     -5400
Hawaii                 1              Tr Merc   5876       -5101
Hawaii                 2              Tr Merc   5901       -5102
Hawaii                 3              Tr Merc   5926       -5103
Hawaii                 4              Tr Merc   5951       -5104
Hawaii                 5              Tr Merc   5976       -5105
Idaho                  East           Tr Merc   3701       -1101
Idaho                  Central        Tr Merc   3726       -1102
Idaho                  West           Tr Merc   3751       -1103


Illinois               East           Tr Merc   3776       -1201
Illinois               West           Tr Merc   3801       -1202
Indiana                East           Tr Merc   3826       -1301
Indiana                West           Tr Merc   3851       -1302
Iowa                   North          Lambert   3876       -1401
Iowa                   South          Lambert   3901       -1402
Kansas                 North          Lambert   3926       -1501
Kansas                 South          Lambert   3951       -1502
Kentucky               North          Lambert   3976       -1601
Kentucky               South          Lambert   4001       -1602
Louisiana              North          Lambert   4026       -1701
Louisiana              South          Lambert   4051       -1702
Louisiana              Offshore       Lambert   6426       -1703
Maine                  East           Tr Merc   4076       -1801
Maine                  West           Tr Merc   4101       -1802
Maryland               -------        Lambert   4126       -1900
Massachusetts          Mainland       Lambert   4151       -2001
Massachusetts          Island         Lambert   4176       -2002
Michigan (Tr Merc)     East           Tr Merc   4201       -2101
Michigan (Tr Merc)     Central        Tr Merc   4226       -2102
Michigan (Tr Merc)     West           Tr Merc   4251       -2103
Michigan (Lambert)     North          Lambert   6351       -2111
Michigan (Lambert)     Central        Lambert   6376       -2112
Michigan (Lambert)     South          Lambert   6401       -2113
Minnesota              North          Lambert   4276       -2201
Minnesota              Central        Lambert   4301       -2202
Minnesota              South          Lambert   4326       -2203
Mississippi            East           Tr Merc   4351       -2301
Mississippi            West           Tr Merc   4376       -2302
Missouri               East           Tr Merc   4401       -2401
Missouri               Central        Tr Merc   4426       -2402
Missouri               West           Tr Merc   4451       -2403


Montana                North          Lambert   4476       -2501
Montana                Central        Lambert   4501       -2502
Montana                South          Lambert   4526       -2503
Nebraska               North          Lambert   4551       -2601
Nebraska               South          Lambert   4576       -2602
Nevada                 East           Tr Merc   4601       -2701
Nevada                 Central        Tr Merc   4626       -2702
Nevada                 West           Tr Merc   4651       -2703
New Hampshire          -------        Tr Merc   4676       -2800
New Jersey             -------        Tr Merc   4701       -2900
New Mexico             East           Tr Merc   4726       -3001
New Mexico             Central        Tr Merc   4751       -3002
New Mexico             West           Tr Merc   4776       -3003
New York               East           Tr Merc   4801       -3101
New York               Central        Tr Merc   4826       -3102
New York               West           Tr Merc   4851       -3103
New York               Long Island    Lambert   4876       -3104
North Carolina         -------        Lambert   4901       -3200
North Dakota           North          Lambert   4926       -3301
North Dakota           South          Lambert   4951       -3302
Ohio                   North          Lambert   4976       -3401
Ohio                   South          Lambert   5001       -3402
Oklahoma               North          Lambert   5026       -3501
Oklahoma               South          Lambert   5051       -3502
Oregon                 North          Lambert   5076       -3601
Oregon                 South          Lambert   5101       -3602
Pennsylvania           North          Lambert   5126       -3701
Pennsylvania           South          Lambert   5151       -3702
Puerto Rico            -------        Lambert   6001       -5201
Rhode Island           -------        Tr Merc   5176       -3800
South Carolina         North          Lambert   5201       -3901
South Carolina         South          Lambert   5226       -3902


South Dakota           North          Lambert   5251       -4001
South Dakota           South          Lambert   5276       -4002
St. Croix              -------        Lambert   6051       -5202
Tennessee              -------        Lambert   5301       -4100
Texas                  North          Lambert   5326       -4201
Texas                  North Central  Lambert   5351       -4202
Texas                  Central        Lambert   5376       -4203
Texas                  South Central  Lambert   5401       -4204
Texas                  South          Lambert   5426       -4205
Utah                   North          Lambert   5451       -4301
Utah                   Central        Lambert   5476       -4302
Utah                   South          Lambert   5501       -4303
Vermont                -------        Tr Merc   5526       -4400
Virginia               North          Lambert   5551       -4501
Virginia               South          Lambert   5576       -4502
Virgin Islands         -------        Lambert   6026       -5201
Washington             North          Lambert   5601       -4601
Washington             South          Lambert   5626       -4602
West Virginia          North          Lambert   5651       -4701
West Virginia          South          Lambert   5676       -4702
Wisconsin              North          Lambert   5701       -4801
Wisconsin              Central        Lambert   5726       -4802
Wisconsin              South          Lambert   5751       -4803
Wyoming                East           Tr Merc   5776       -4901
Wyoming                East Central   Tr Merc   5801       -4902
Wyoming                West Central   Tr Merc   5826       -4903
Wyoming                West           Tr Merc   5851       -4904


Table 47: NAD83 State Plane Coordinate System for the United States

State                  Zone Name      Type      USGS Code  NOS Code

Alabama                East           Tr Merc   3101       -101
Alabama                West           Tr Merc   3126       -102
Alaska                 1              Oblique   6101       -5001
Alaska                 2              Tr Merc   6126       -5002
Alaska                 3              Tr Merc   6151       -5003
Alaska                 4              Tr Merc   6176       -5004
Alaska                 5              Tr Merc   6201       -5005
Alaska                 6              Tr Merc   6226       -5006
Alaska                 7              Tr Merc   6251       -5007
Alaska                 8              Tr Merc   6276       -5008
Alaska                 9              Tr Merc   6301       -5009
Alaska                 10             Lambert   6326       -5010
Arizona                East           Tr Merc   3151       -201
Arizona                Central        Tr Merc   3176       -202
Arizona                West           Tr Merc   3201       -203
Arkansas               North          Lambert   3226       -301
Arkansas               South          Lambert   3251       -302
California             I              Lambert   3276       -401
California             II             Lambert   3301       -402
California             III            Lambert   3326       -403
California             IV             Lambert   3351       -404
California             V              Lambert   3376       -405
California             VI             Lambert   3401       -406
Colorado               North          Lambert   3451       -501
Colorado               Central        Lambert   3476       -502
Colorado               South          Lambert   3501       -503
Connecticut            -------        Lambert   3526       -600
Delaware               -------        Tr Merc   3551       -700
District of Columbia   Use Maryland or Virginia
Florida                East           Tr Merc   3601       -901
Florida                West           Tr Merc   3626       -902
Florida                North          Lambert   3576       -903


Georgia                East           Tr Merc   3651       -1001
Georgia                West           Tr Merc   3676       -1002
Hawaii                 1              Tr Merc   5876       -5101
Hawaii                 2              Tr Merc   5901       -5102
Hawaii                 3              Tr Merc   5926       -5103
Hawaii                 4              Tr Merc   5951       -5104
Hawaii                 5              Tr Merc   5976       -5105
Idaho                  East           Tr Merc   3701       -1101
Idaho                  Central        Tr Merc   3726       -1102
Idaho                  West           Tr Merc   3751       -1103
Illinois               East           Tr Merc   3776       -1201
Illinois               West           Tr Merc   3801       -1202
Indiana                East           Tr Merc   3826       -1301
Indiana                West           Tr Merc   3851       -1302
Iowa                   North          Lambert   3876       -1401
Iowa                   South          Lambert   3901       -1402
Kansas                 North          Lambert   3926       -1501
Kansas                 South          Lambert   3951       -1502
Kentucky               North          Lambert   3976       -1601
Kentucky               South          Lambert   4001       -1602
Louisiana              North          Lambert   4026       -1701
Louisiana              South          Lambert   4051       -1702
Louisiana              Offshore       Lambert   6426       -1703
Maine                  East           Tr Merc   4076       -1801
Maine                  West           Tr Merc   4101       -1802
Maryland               -------        Lambert   4126       -1900
Massachusetts          Mainland       Lambert   4151       -2001
Massachusetts          Island         Lambert   4176       -2002
Michigan               North          Lambert   6351       -2111
Michigan               Central        Lambert   6376       -2112
Michigan               South          Lambert   6401       -2113


Minnesota              North          Lambert   4276       -2201
Minnesota              Central        Lambert   4301       -2202
Minnesota              South          Lambert   4326       -2203
Mississippi            East           Tr Merc   4351       -2301
Mississippi            West           Tr Merc   4376       -2302
Missouri               East           Tr Merc   4401       -2401
Missouri               Central        Tr Merc   4426       -2402
Missouri               West           Tr Merc   4451       -2403
Montana                -------        Lambert   4476       -2500
Nebraska               -------        Lambert   4551       -2600
Nevada                 East           Tr Merc   4601       -2701
Nevada                 Central        Tr Merc   4626       -2702
Nevada                 West           Tr Merc   4651       -2703
New Hampshire          -------        Tr Merc   4676       -2800
New Jersey             -------        Tr Merc   4701       -2900
New Mexico             East           Tr Merc   4726       -3001
New Mexico             Central        Tr Merc   4751       -3002
New Mexico             West           Tr Merc   4776       -3003
New York               East           Tr Merc   4801       -3101
New York               Central        Tr Merc   4826       -3102
New York               West           Tr Merc   4851       -3103
New York               Long Island    Lambert   4876       -3104
North Carolina         -------        Lambert   4901       -3200
North Dakota           North          Lambert   4926       -3301
North Dakota           South          Lambert   4951       -3302
Ohio                   North          Lambert   4976       -3401
Ohio                   South          Lambert   5001       -3402
Oklahoma               North          Lambert   5026       -3501
Oklahoma               South          Lambert   5051       -3502
Oregon                 North          Lambert   5076       -3601
Oregon                 South          Lambert   5101       -3602

406

Map Projections

Table 47: NAD83 State Plane Coordinate System for the United States Code Number State
Pennsylvania Puerto Rico Rhode Island South Carolina South Dakota Tennessee Texas

Zone Name
North South --------------------------------South --------North North Central Central South Central South

Type
Lambert Lambert Lambert Tr Merc Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Tr Merc Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Lambert Tr Merc Tr Merc Tr Merc Tr Merc

USGS
5126 5151 6001 5176 5201 5251 5276 5301 5326 5351 5376 5401 5426 5451 5476 5501 5526 5551 5576 6026 5601 5626 5651 5676 5701 5726 5751 5776 5801 5826 5851

NOS
-3701 -3702 -5201 -3800 -3900 -4001 -4002 -4100 -4201 -4202 -4203 -4204 -4205 -4301 -4302 -4303 -4400 -4501 -4502 -5201 -4601 -4602 -4701 -4702 -4801 -4802 -4803 -4901 -4902 -4903 -4904

Utah

North Central South

Vermont Virginia Virgin Islands Washington

--------North South --------North South

West Virginia Wisconsin

North South North Central South

Wyoming

East East Central West Central West

Map Projections

407

Stereographic

Stereographic is a perspective projection in which points are projected from a position on the opposite side of the globe onto a plane tangent to the Earth (Figure 124 on page 410). All of one hemisphere can easily be shown, but it is impossible to show both hemispheres in their entirety from one center. It is the only azimuthal projection that preserves truth of angles and local shape. Scale increases and parallels become more widely spaced farther from the center.

Construction: Plane

Property: Conformal

Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are arcs of circles concave toward a straight central meridian. In the equatorial aspect, the outer meridian of the hemisphere is a circle centered at the projection center.

Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are nonconcentric arcs of circles concave toward one of the poles, with one parallel being a straight line. Equatorial aspect: parallels are nonconcentric arcs of circles concave toward the poles; the Equator is straight.

Graticule spacing: The graticule spacing increases away from the center of the projection in all aspects, and it retains the property of conformality. In the equatorial aspect, all parallels except the Equator are circular arcs. In the polar aspect, latitude rings are spaced farther apart with increasing distance from the pole.

Linear scale: Scale increases toward the periphery of the projection.

Uses: The Stereographic projection is the most widely used azimuthal projection, mainly used for portraying large, continent-sized areas of similar extent in all directions. It is used in geophysics for solving problems in spherical geometry. The polar aspect is used for topographic maps and navigational charts. The American Geographical Society uses this projection as the basis for its “Map of the Arctic.” The USGS uses it as the basis for maps of Antarctica.


Prompts

The following prompts display in the Projection Chooser if Stereographic is selected. Respond to the prompts as described.
Spheroid Name Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243. Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.
False easting False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west. The Stereographic is the only azimuthal projection which is conformal. Figure 124 on page 410 shows two views: A) Equatorial aspect, often used in the 16th and 17th centuries for maps of hemispheres; and B) Oblique aspect, centered on 40°N.
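The polar, oblique, and equatorial behavior described above falls out of the spherical forward equations (Snyder, 1987). The following is a minimal sketch of that textbook spherical form, not the ellipsoidal implementation ERDAS IMAGINE applies with the selected spheroid and datum; the function and parameter names are ours, for illustration only.

```python
import math

def stereographic_forward(lon, lat, center_lon, center_lat,
                          radius=6370997.0, false_easting=0.0, false_northing=0.0):
    """Forward spherical Stereographic projection (Snyder, 1987, eqs. 21-2 to 21-4).

    (center_lon, center_lat) is the center of projection; angles in degrees.
    Returns (easting, northing) in meters. The antipode of the center cannot
    be projected (k is unbounded there), mirroring the fact that both
    hemispheres cannot be shown from one center.
    """
    lam = math.radians(lon - center_lon)
    phi = math.radians(lat)
    phi1 = math.radians(center_lat)
    k = 2.0 / (1.0 + math.sin(phi1) * math.sin(phi)
               + math.cos(phi1) * math.cos(phi) * math.cos(lam))
    x = radius * k * math.cos(phi) * math.sin(lam)
    y = radius * k * (math.cos(phi1) * math.sin(phi)
                      - math.sin(phi1) * math.cos(phi) * math.cos(lam))
    return false_easting + x, false_northing + y
```

Setting center_lat to 90° reproduces the polar aspect, 0° the equatorial aspect (as in Figure 124A), and 40° the oblique aspect of Figure 124B.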


Figure 124: Stereographic Projection (A: equatorial aspect; B: oblique aspect)

Stereographic (Extended)

The Stereographic (Extended) projection has the same attributes as the Stereographic projection, with the exception of the ability to define scale factors. For details about the Stereographic projection, see Stereographic on page 408.

Prompts

The following prompts display in the Projection Chooser once Stereographic (Extended) is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of origin of projection
Latitude of origin of projection

Enter the values for longitude of origin of projection and latitude of origin of projection.

Scale factor

Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Transverse Mercator

Transverse Mercator is similar to the Mercator projection except that the axis of the projection cylinder is rotated 90° from the vertical (polar) axis. The contact line is then a chosen meridian instead of the Equator, and this central meridian runs from pole to pole. It loses the properties of straight meridians and straight parallels of the standard Mercator projection (except for the central meridian, the two meridians 90° away, and the Equator).

Transverse Mercator also loses the straight rhumb lines of the Mercator map, but it is a conformal projection. Scale is true along the central meridian or along two straight lines equidistant from, and parallel to, the central meridian. It cannot be edge-joined in an east-west direction if each sheet has its own central meridian.

Construction: Cylinder

Property: Conformal

Meridians: Meridians are complex curves concave toward a straight central meridian that is tangent to the globe. The straight central meridian intersects the Equator and one meridian at a 90° angle.

Parallels: Parallels are complex curves concave toward the nearest pole; the Equator is straight.

Graticule spacing: Parallels are spaced at their true distances on the straight central meridian. Graticule spacing increases away from the tangent meridian. The graticule retains the property of conformality.

Linear scale: Linear scale is true along the line of tangency, or along two lines equidistant from, and parallel to, the line of tangency.

Uses: Used where the north-south dimension is greater than the east-west dimension. Used as the base for the USGS 1:250,000-scale series, and for some of the 7.5-minute and 15-minute quadrangles of the National Topographic Map Series. In the United States, Transverse Mercator is the projection used in the State Plane coordinate system for states with predominant north-south extent. The entire Earth from 84°N to 80°S is mapped with a system of projections called the Universal Transverse Mercator.

Prompts

The following prompts display in the Projection Chooser if Transverse Mercator is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Scale factor at central meridian

Designate the desired scale factor at the central meridian. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used. Finally, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian
Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of projection.

False easting
False northing

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough so that there are no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
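The effect of the scale factor prompt can be seen in the spherical form of the forward Transverse Mercator equations (Snyder, 1987). This sketch uses the textbook spherical approximation rather than the ellipsoidal formulas production software applies with a selected spheroid; the names are illustrative.

```python
import math

def transverse_mercator_forward(lon, lat, central_meridian, origin_lat=0.0,
                                scale_factor=1.0, radius=6370997.0,
                                false_easting=0.0, false_northing=0.0):
    """Forward spherical Transverse Mercator (Snyder, 1987, eqs. 8-1 and 8-3).

    scale_factor is the scale factor at the central meridian: 1.0 gives true
    scale only along the central meridian; a value slightly below 1.0 gives
    true scale along two lines parallel to and equidistant from it.
    """
    lam = math.radians(lon - central_meridian)
    phi = math.radians(lat)
    b = math.cos(phi) * math.sin(lam)   # undefined 90 degrees from the CM
    x = 0.5 * radius * scale_factor * math.log((1.0 + b) / (1.0 - b))
    y = radius * scale_factor * (math.atan2(math.tan(phi), math.cos(lam))
                                 - math.radians(origin_lat))
    return false_easting + x, false_northing + y
```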

Two Point Equidistant

The Two Point Equidistant projection is used to show the distance from “either of two chosen points to any other point on a map” (Environmental Systems Research Institute, 1997).

Construction: Modified planar

Property: Compromise

Meridians: N/A

Parallels: N/A

Graticule spacing: N/A

Linear scale: N/A

Uses: The Two Point Equidistant projection “does not represent great circle paths” (Environmental Systems Research Institute, 1997). There is little distortion if the two chosen points are within 45 degrees of each other. This projection has been used by the National Geographic Society to map areas of Asia.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once Two Point Equidistant is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Longitude of 1st point
Latitude of 1st point

Enter the longitude and latitude values of the first point. Note that the first point has to be west of the second point.

Longitude of 2nd point
Latitude of 2nd point

Enter the longitude and latitude values of the second point.

Figure 125: Two Point Equidistant Projection
Source: Snyder and Voxland, 1989

UTM

Universal Transverse Mercator (UTM) is an international plane (rectangular) coordinate system developed by the US Army that extends around the world from 84°N to 80°S. The world is divided into 60 zones, each covering six degrees of longitude. Each zone extends three degrees eastward and three degrees westward from its central meridian. Zones are numbered consecutively west to east from the 180° meridian (Figure 126, Table 48 on page 417).

Transverse Mercator is a transverse form of the Mercator cylindrical projection. The projection cylinder is rotated 90° from the vertical (polar) axis and can then be placed to intersect at a chosen central meridian. The UTM system specifies the central meridian of each zone, and the Transverse Mercator projection is then applied to each UTM zone. With a separate projection for each UTM zone, a high degree of accuracy is possible (one part in 1000 maximum distortion within each zone). If the map to be projected extends beyond the border of the UTM zone, the entire map may be projected for any UTM zone specified by you. See Transverse Mercator on page 412 for more information.

Prompts

The following prompts display in the Projection Chooser if UTM is chosen.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

UTM Zone
North or South

Figure 126: Zones of the Universal Transverse Mercator Grid in the United States

All values in Table 48 are in full degrees east (E) or west (W) of the Greenwich prime meridian (0).

Table 48: UTM Zones, Central Meridians, and Longitude Ranges

Zone   Central Meridian   Range        Zone   Central Meridian   Range
1      177W               180W-174W    31     3E                 0-6E
2      171W               174W-168W    32     9E                 6E-12E
3      165W               168W-162W    33     15E                12E-18E
4      159W               162W-156W    34     21E                18E-24E
5      153W               156W-150W    35     27E                24E-30E
6      147W               150W-144W    36     33E                30E-36E
7      141W               144W-138W    37     39E                36E-42E
8      135W               138W-132W    38     45E                42E-48E
9      129W               132W-126W    39     51E                48E-54E
10     123W               126W-120W    40     57E                54E-60E
11     117W               120W-114W    41     63E                60E-66E
12     111W               114W-108W    42     69E                66E-72E
13     105W               108W-102W    43     75E                72E-78E
14     99W                102W-96W     44     81E                78E-84E
15     93W                96W-90W      45     87E                84E-90E
16     87W                90W-84W      46     93E                90E-96E
17     81W                84W-78W      47     99E                96E-102E
18     75W                78W-72W      48     105E               102E-108E
19     69W                72W-66W      49     111E               108E-114E
20     63W                66W-60W      50     117E               114E-120E
21     57W                60W-54W      51     123E               120E-126E
22     51W                54W-48W      52     129E               126E-132E
23     45W                48W-42W      53     135E               132E-138E
24     39W                42W-36W      54     141E               138E-144E
25     33W                36W-30W      55     147E               144E-150E
26     27W                30W-24W      56     153E               150E-156E
27     21W                24W-18W      57     159E               156E-162E
28     15W                18W-12W      58     165E               162E-168E
29     9W                 12W-6W       59     171E               168E-174E
30     3W                 6W-0         60     177E               174E-180E
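Because the zone layout in Table 48 is completely regular, the zone number and central meridian can be computed rather than looked up. A small sketch (the function names are ours, not part of ERDAS IMAGINE):

```python
def utm_zone(lon):
    """UTM zone number (1-60) for a longitude in signed degrees (west negative)."""
    if lon == 180.0:                      # the 180th meridian closes zone 60
        return 60
    return int((lon + 180.0) // 6) + 1

def utm_central_meridian(zone):
    """Central meridian of a UTM zone in signed degrees, as listed in Table 48."""
    return 6 * zone - 183

# Spot checks against Table 48:
assert utm_zone(-177.0) == 1 and utm_central_meridian(1) == -177    # 177W
assert utm_zone(3.0) == 31 and utm_central_meridian(31) == 3        # 3E
assert utm_zone(-122.3) == 10 and utm_central_meridian(10) == -123  # 123W
```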

Van der Grinten I

The Van der Grinten I projection produces a map that is neither conformal nor equal-area (Figure 127 on page 420). It compromises all properties, and represents the Earth within a circle. All lines are curved except the central meridian and the Equator. Van der Grinten I avoids the excessive stretching of the Mercator and the shape distortion of many of the equal-area projections. Scale is true along the Equator, but increases rapidly toward the poles, which are usually not represented. It has been used to show distribution of mineral resources on the ocean floor.

Construction: Miscellaneous

Property: Compromise

Meridians: Meridians are circular arcs concave toward a straight central meridian.

Parallels: Parallels are circular arcs concave toward the poles, except for a straight Equator. The parallels are spaced farther apart toward the poles. The poles are commonly not represented.

Graticule spacing: Meridian spacing is equal at the Equator. The central meridian and Equator are straight lines. The graticule spacing results in a compromise of all properties.

Linear scale: Linear scale is true along the Equator. Scale increases rapidly toward the poles.

Uses: The Van der Grinten projection is used by the National Geographic Society for world maps. Used by the USGS to show distribution of mineral resources on the sea floor.

Prompts

The following prompts display in the Projection Chooser if Van der Grinten I is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting
False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 127: Van der Grinten I Projection

The Van der Grinten I projection resembles the Mercator, but it is not conformal.

Wagner IV

Construction: Pseudocylinder

Property: Equal-area

Meridians: The central meridian is a straight line one half as long as the Equator. The other meridians are portions of ellipses that are equally spaced. They are concave towards the central meridian. The meridians at 103°55’ E and W of the central meridian are circular arcs.

Parallels: Parallels are unequally spaced, with the widest space between them at the Equator, and are perpendicular to the central meridian.

Graticule spacing: See Meridians and Parallels. Poles are lines one half as long as the Equator. Symmetry exists around the central meridian or the Equator.

Linear scale: Scale is accurate along latitudes 42°59’ N and S. Scale is constant along any specific latitude as well as the latitude of opposite sign.

Uses: Useful for world maps. The Wagner IV projection has distortion primarily in the polar regions.

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser if Wagner IV is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting
False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 128: Wagner IV Projection
Source: Snyder and Voxland, 1989

Wagner VII

The Wagner VII projection is modified based on the Hammer projection. “The poles correspond to the 65th parallels on the Hammer [projection], and meridians are repositioned” (Snyder and Voxland, 1989).

Construction: Modified azimuthal

Property: Equal-area

Meridians: Central meridian is straight and half the Equator’s length. Other meridians are unequally spaced curves, concave toward the central meridian.

Parallels: The Equator is straight; the other parallels are unequally spaced curves, which are concave toward the closest pole. Poles are curved lines.

Graticule spacing: See Meridians and Parallels. Symmetry exists about the central meridian or the Equator.

Linear scale: Scale decreases along the central meridian and the Equator relative to distance from the center of the Wagner VII projection.

Uses: Used for world maps. Distortion is prevalent in polar areas.

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser if Wagner VII is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting
False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 129: Wagner VII Projection
Source: Snyder and Voxland, 1989

Winkel I

Construction: Pseudocylinder

Property: Neither conformal nor equal-area

Meridians: Central meridian is a straight line 0.61 the length of the Equator. The other meridians are sinusoidal curves that are equally spaced and concave toward the central meridian.

Parallels: Parallels are equally spaced.

Graticule spacing: See Meridians and Parallels. Pole lines are 0.61 the length of the Equator. Symmetry exists about the central meridian or the Equator.

Linear scale: Scale is true along latitudes 50°28’ N and S. Scale is constant along any given latitude as well as the latitude of the opposite sign.

Uses: Used for world maps. The Winkel I projection is “not free of distortion at any point” (Snyder and Voxland, 1989).

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser once Winkel I is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use. The list of available spheroids is located in Table 44 on page 243.

Latitude of standard parallel
Longitude of central meridian

Enter values of the latitude of standard parallel and the longitude of central meridian.

False easting
False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 130: Winkel I Projection
Source: Snyder and Voxland, 1989

External Projections

The following external projections are supported in ERDAS IMAGINE and are described in this section. Some of these projections were discussed in the previous section. Those descriptions are not repeated here; simply refer to the page number in parentheses for more information.

NOTE: ERDAS IMAGINE does not support datum shifts for these external projections.

• Albers Equal Area (see Albers Conical Equal Area on page 303)
• Azimuthal Equidistant (see Azimuthal Equidistant on page 306)
• Bipolar Oblique Conic Conformal
• Cassini-Soldner
• Conic Equidistant (see Equidistant Conic on page 333)
• Laborde Oblique Mercator
• Lambert Azimuthal Equal Area (see Lambert Azimuthal Equal Area on page 353)
• Lambert Conformal Conic (see Lambert Conformal Conic on page 356)
• Mercator (see Mercator on page 363)
• Minimum Error Conformal
• Modified Polyconic
• Modified Stereographic
• Mollweide Equal Area (see Mollweide on page 372)
• Oblique Mercator (see Oblique Mercator (Hotine) on page 376)
• Orthographic (see Orthographic on page 379)
• Plate Carrée (see Equirectangular (Plate Carrée) on page 336)
• Rectified Skew Orthomorphic (see RSO on page 392)
• Regular Polyconic (see Polyconic on page 386)
• Robinson Pseudocylindrical (see Robinson on page 390)

• Sinusoidal (see Sinusoidal on page 393)
• Southern Orientated Gauss Conformal
• Stereographic (see Stereographic on page 408)
• Swiss Cylindrical
• Stereographic (Oblique) (see Stereographic on page 408)
• Transverse Mercator (see Transverse Mercator on page 412)
• Universal Transverse Mercator (see UTM on page 416)
• Van der Grinten (see Van der Grinten I on page 419)
• Winkel’s Tripel

Bipolar Oblique Conic Conformal

The Bipolar Oblique Conic Conformal projection was developed by O.M. Miller and William A. Briesemeister in 1941 specifically for mapping North and South America, and maintains conformality for these regions. It is based upon the Lambert Conformal Conic, using two oblique conic projections side-by-side. The two oblique conics are joined with the poles 104° apart. A great circle arc 104° long begins at 20°S and 110°W, cuts through Central America, and terminates at 45°N and approximately 19°59’36”W. The scale of the map is then increased by approximately 3.5%. The origin of the coordinates is made 17°15’N, 73°02’W.

Construction: Cone

Property: Conformal

Meridians: Meridians are complex curves concave toward the center of the projection.

Parallels: Parallels are complex curves concave toward the nearest pole.

Graticule spacing: Graticule spacing increases away from the lines of true scale and retains the property of conformality.

Linear scale: Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.

Uses: Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.

Refer to Lambert Conformal Conic on page 356 for more information.

Prompts

The following prompts display in the Projection Chooser if Bipolar Oblique Conic Conformal is selected.

Projection Name
Spheroid Type
Datum Name

Cassini-Soldner

The Cassini projection was devised by C. F. Cassini de Thury in 1745 for the survey of France. Mathematical analysis by J. G. von Soldner in the early 19th century led to more accurate ellipsoidal formulas. It was one of the major topographic mapping projections until the early 20th century. Today, it has largely been replaced by the Transverse Mercator projection, although it is still in limited use outside of the United States.

Construction: Cylinder

Property: Compromise

Meridians: Central meridian, each meridian 90° from the central meridian, and the Equator are straight lines. Other meridians are complex curves.

Parallels: Parallels are complex curves.

Graticule spacing: Complex curves for all meridians and parallels, except for the Equator, the central meridian, and each meridian 90° away from the central meridian, all of which are straight.

Linear scale: Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form, and nearly so for the ellipsoid.

Uses: Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.

The spherical form of the projection bears the same relation to the Equidistant Cylindrical, or Plate Carrée, projection that the spherical Transverse Mercator bears to the regular Mercator. Instead of having the straight meridians and parallels of the Equidistant Cylindrical, the Cassini has complex curves for each, except for the Equator, the central meridian, and each meridian 90° away from the central meridian, all of which are straight.

There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If it is given a reduced scale factor, the lines of true scale are two straight lines on the map, parallel to and equidistant from the central meridian; there is no distortion along them instead.

The scale is correct along the central meridian, and also along any straight line perpendicular to the central meridian. It gradually increases in a direction parallel to the central meridian as the distance from that meridian increases, but the scale is constant along any straight line on the map that is parallel to the central meridian. Therefore, Cassini-Soldner is more suitable for regions that are predominantly north-south in extent, such as Great Britain, than regions extending in other directions. The projection is neither equal-area nor conformal, but it has a compromise of both features.

The Cassini-Soldner projection was adopted by the Ordnance Survey for the official survey of Great Britain during the second half of the 19th century. A system equivalent to the oblique Cassini-Soldner projection was used in early coordinate transformations for ERTS (now Landsat) satellite imagery, but it was changed to Oblique Mercator (Hotine) in 1978, and to the Space Oblique Mercator in 1982.

Prompts

The following prompts display in the Projection Chooser if Cassini-Soldner is selected.

Projection Name
Spheroid Type
Datum Name

Laborde Oblique Mercator

In 1928, Laborde combined a conformal sphere with a complex-algebra transformation of the Oblique Mercator projection for the topographic mapping of Madagascar. This variation is now known as the Laborde Oblique Mercator. The central line is a great circle arc. See Oblique Mercator (Hotine) on page 376 for more information.

Prompts

The following prompts display in the Projection Chooser if Laborde Oblique Mercator is selected.

Projection Name
Spheroid Type
Datum Name

Minimum Error Conformal

The Minimum Error Conformal projection is the same as the New Zealand Map Grid projection. For more information, see New Zealand Map Grid on page 374.

Modified Polyconic

The Modified Polyconic projection was devised by Lallemand of France, and in 1909 it was adopted by the International Map Committee (IMC) in London as the basis for the 1:1,000,000-scale International Map of the World (IMW) series.

Construction: Cone

Property: Compromise

Meridians: All meridians are straight.

Parallels: Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.

Graticule spacing: The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.

Linear scale: Scale is true along each parallel and along two meridians, but no parallel is standard.

Uses: Used for the International Map of the World (IMW) series until 1962.

The projection differs from the ordinary Polyconic in two principal features: all meridians are straight, and there are two meridians that are made true to scale. Adjacent sheets fit together exactly, not only north to south, but east to west. There is still a gap when mosaicking in all directions, in that there is a gap between each diagonal sheet and either one or the other adjacent sheet.

In 1962, a U.N. conference on the IMW adopted the Lambert Conformal Conic and the Polar Stereographic projections to replace the Modified Polyconic. See Polyconic on page 386 for more information.

Prompts

The following prompts display in the Projection Chooser if Modified Polyconic is selected.

Projection Name
Spheroid Type
Datum Name

Modified Stereographic

The meridians and parallels of the Modified Stereographic projection are generally curved, and there is usually no symmetry about any point or line. There are limitations to these transformations. Most of them can only be used within a limited range. As the distance from the projection center increases, the meridians, parallels, and shorelines begin to exhibit loops, overlapping, and other undesirable curves. A world map using the GS50 (50-State) projection is almost illegible, with meridians and parallels intertwined like wild vines.

Construction: Plane

Property: Conformal

Meridians: All meridians are normally complex curves, although some may be straight under certain conditions.

Parallels: All parallels are complex curves, although some may be straight under certain conditions.

Graticule spacing: The graticule is normally not symmetrical about any axis or point.

Linear scale: Scale is true along irregular lines, but the map is usually designed to minimize scale variation throughout a selected region.

Uses: Used for maps of continents in the Eastern Hemisphere, for the Pacific Ocean, and for maps of Alaska and the 50 United States.

Prompts

The following prompts display in the Projection Chooser if Modified Stereographic is selected.

Projection Name
Spheroid Type
Datum Name

Mollweide Equal Area

The second oldest pseudocylindrical projection that is still in use (after the Sinusoidal) was presented by Carl B. Mollweide (1774-1825) of Halle, Germany, in 1805. It is an equal-area projection of the Earth within an ellipse. It has had a profound effect on world map projections in the 20th century, especially as an inspiration for other important projections, such as the Van der Grinten.

Construction: Pseudocylinder

Property: Equal-area

Meridians: All of the meridians are ellipses. The central meridian is a straight line, and 90° meridians are circular arcs (Pearson, 1990).

Parallels: The Equator and parallels are straight lines perpendicular to the central meridian, but they are not equally spaced.

Graticule spacing: Linear graticules include the central meridian and the Equator (Environmental Systems Research Institute, 1992). Meridians are equally spaced along the Equator and along all other parallels. The parallels are straight parallel lines, but they are not equally spaced. The poles are points.

Linear scale: Scale is true along latitudes 40°44’N and S. Distortion increases with distance from these lines and becomes severe at the edges of the projection (Environmental Systems Research Institute, 1992).

Uses: Often used for world maps (Pearson, 1990). Suitable for thematic or distribution mapping of the entire world, frequently in interrupted form (Environmental Systems Research Institute, 1992).

The world is shown in an ellipse with the Equator, its major axis, twice as long as the central meridian, its minor axis. All other meridians are elliptical arcs which, with their opposite numbers on the other side of the central meridian, form complete ellipses that meet at the poles. The meridians 90° east and west of the central meridian form a complete circle.

The Mollweide is normally used for world maps and occasionally for a very large region, such as the Pacific Ocean. This is because only two points on the Mollweide are completely free of distortion unless the projection is interrupted. These are the points at latitudes 40°44’12”N and S on the central meridian(s).

Prompts

The following prompts display in the Projection Chooser if Mollweide Equal Area is selected.

Projection Name
Spheroid Type
Datum Name
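The elliptical graticule described above follows from the standard spherical Mollweide equations (Snyder, 1987): an auxiliary angle θ satisfying 2θ + sin 2θ = π sin φ gives x = (2√2/π) R (λ − λ0) cos θ and y = √2 R sin θ. Below is a sketch using Newton iteration; the names and tolerances are illustrative only, not ERDAS IMAGINE's implementation.

```python
import math

def mollweide_forward(lon, lat, central_meridian=0.0, radius=6370997.0):
    """Forward spherical Mollweide projection (equal-area)."""
    lam = math.radians(lon - central_meridian)
    phi = math.radians(lat)
    if abs(phi) >= math.pi / 2 - 1e-10:        # poles map to theta = +/- 90 deg
        theta = math.copysign(math.pi / 2, phi)
    else:
        theta = phi
        for _ in range(25):                     # Newton's method
            f = 2 * theta + math.sin(2 * theta) - math.pi * math.sin(phi)
            theta -= f / (2 + 2 * math.cos(2 * theta))
            if abs(f) < 1e-12:
                break
    x = (2 * math.sqrt(2) / math.pi) * radius * lam * math.cos(theta)
    y = math.sqrt(2) * radius * math.sin(theta)
    return x, y
```

The Equator spans 4√2 R while the central meridian spans 2√2 R, reproducing the 2:1 ellipse described above.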

Rectified Skew Orthomorphic

Martin Hotine (1898-1968) called the Oblique Mercator projection the Rectified Skew Orthomorphic projection. See Oblique Mercator (Hotine) on page 376 for more information.

Prompts

The following prompts display in the Projection Chooser if Rectified Skew Orthomorphic is selected.

Projection Name
Spheroid Type
Datum Name

The central meridian is a straight line 0. The continents appear as units and are in relatively correct size and location.53 times the length of the Equator. Developed for use in general and thematic world maps.51 times the length of the Equator. 1992). 1992). and concave toward the central meridian. 1992). Prompts The following prompts display in the Projection Chooser if Robinson Pseudocylindrical is selected. The poles are 0. 1990). Generally. Parallels are equally spaced straight lines between 38°N and S. The individual parallels are evenly divided by the meridians (Pearson. Parallels are straight lines. Poles are represented as lines. equally spaced.Robinson Pseudocylindrical The Robinson Pseudocylindrical projection provides a means of showing the entire Earth in an uninterrupted form. scale is made true along latitudes 38°N and S. Construction Property Meridians Parallels Graticule spacing Pseudocylinder Compromise Meridians are elliptical arcs. and for the latitude of opposite sign (Environmental Systems Research Institute. and then the spacing decreases beyond these limits. Parallels are straight lines and are parallel. The projection is based upon tabular coordinates instead of mathematical formulas (Environmental Systems Research Institute. Scale is constant along any given latitude. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (Environmental Systems Research Institute. concave toward the central meridian. Projection Name Spheroid Type Datum Name Map Projections 439 . Linear scale Uses Meridians are equally spaced and resemble elliptical arcs.

Southern Orientated Gauss Conformal

Southern Orientated Gauss Conformal is another name for the Transverse Mercator projection, after mathematician Friedrich Gauss (1777-1855). It is also called the Gauss-Krüger projection. See Transverse Mercator on page 412 for more information.

Prompts

The following prompts display in the Projection Chooser if Southern Orientated Gauss Conformal is selected.

Projection Name
Spheroid Type
Datum Name

Swiss Cylindrical

The Swiss Cylindrical projection is a cylindrical projection used by the Swiss Landestopographie, which is a form of the Oblique Mercator projection. For more information, see Oblique Mercator (Hotine) on page 376.

Winkel’s Tripel

Winkel’s Tripel was formulated in 1921 by Oswald Winkel of Germany. It is a combined projection that is the arithmetic mean of the Plate Carrée and Aitoff’s projection (Maling, 1992).

Construction: Modified azimuthal

Property: Neither conformal nor equal-area

Meridians: Central meridian is straight. Other meridians are curved, are equally spaced along the Equator, and are concave toward the central meridian.

Parallels: Equator and the poles are straight. Other parallels are curved and concave toward the nearest pole.

Graticule spacing: Equidistant spacing of parallels. Symmetry is maintained along the central meridian or the Equator.

Linear scale: Scale is true along the central meridian and constant along the Equator.

Uses: Used for world maps.

Prompts

The following prompts display in the Projection Chooser if Winkel’s Tripel is selected.

Projection Name
Spheroid Type
Datum Name

Figure 131: Winkel’s Tripel Projection
Source: Snyder and Voxland, 1989
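Since the section defines Winkel's Tripel as the arithmetic mean of the Plate Carrée and Aitoff projections, the forward equations can be sketched directly from that definition. This is the textbook spherical form, assuming Winkel's choice of standard parallel arccos(2/π); the names are illustrative only.

```python
import math

def winkel_tripel_forward(lon, lat, radius=6370997.0):
    """Winkel's Tripel: average of Equirectangular (Plate Carree) and Aitoff."""
    lam, phi = math.radians(lon), math.radians(lat)
    phi1 = math.acos(2.0 / math.pi)              # Winkel's standard parallel
    alpha = math.acos(math.cos(phi) * math.cos(lam / 2.0))
    sinc = math.sin(alpha) / alpha if alpha else 1.0   # unnormalized sinc
    x_eqc, y_eqc = lam * math.cos(phi1), phi           # Plate Carree component
    x_ait = 2.0 * math.cos(phi) * math.sin(lam / 2.0) / sinc   # Aitoff component
    y_ait = math.sin(phi) / sinc
    return radius * (x_eqc + x_ait) / 2.0, radius * (y_eqc + y_ait) / 2.0
```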

Mosaic

Introduction

The mosaic process offers you the capability to stitch images together so one large, cohesive image of an area can be created. It is necessary for the images to contain map and projection information, but they do not need to be in the same projection or have the same cell sizes. The input images must have the same number of layers.

There are a number of features included with MosaicPro to aid you in creating a better mosaicked image from many separate images. You can smooth these images before mosaicking them together, as well as color balance them or adjust the histograms of each image, in order to present a better large picture.

In addition to MosaicPro, Mosaic Express is a feature designed to make the mosaic process easier for you. The Mosaic Express will take you through the steps of creating a Mosaic project; it is designed to simplify the mosaic process by gathering important information regarding the mosaic project from you and then building the project without a lot of pre-processing by you. Because of its different features, MosaicPro still offers you the most options and allows the most input from you.

In this chapter, the following features will be discussed as part of MosaicPro input image options, followed by an overview of Mosaic Express.

In Input Image Mode for MosaicPro:

• Exclude Areas
• Image Dodging
• Illumination Equalization
• Color Balancing
• Histogram Matching

You can choose from the following when using Intersection Mode:

• Set Overlap Function
• Weighted Seamline Generation
• Geometry-based Seamline Generation
• Different options for choosing a seamline source

These options are available as part of the Output Image Mode:

• Output Image Options

• Preview the mosaic
• Run the mosaic process to disk

Input Image Mode

Exclude Areas

When you decide to mosaic images together, you will probably want to use Image Dodging, Color Balancing, or Histogram Matching to give the finished mosaicked image a smoother look, devoid of bright patches or shadowy areas that can appear on images. Many of the color differentiations are caused by camera angle or cloud cover. Before applying any of those features, you can use the Exclude Areas feature to mark any types of areas you do not want to be taken into account during a Color Balancing, Illumination Equalization, or Histogram Matching process. Areas like dark water or bright urban areas can be excluded so as not to throw off the process.

The Exclude Areas function works on the principle of defining an AOI (area of interest) in a particular image, and excluding that area if you wish. The feature makes it very easy to pinpoint and draw a polygon around specific areas by featuring two viewers: one with a shot of the entire image, and one zoomed to the AOI you have selected with the Link cursor.

If you right-click your mouse while your cursor is in the viewer, you will notice several options offered to help you better view your images by fitting the image to the viewer window, rotating the image, changing the Link cursor color, changing band combinations, zooming in or out, and so on. At the bottom of the Set Exclude Areas dialog, there is a tool bar with options for creating a polygon for your AOI, displaying AOI styles, selecting multiple AOIs, using the Region Growing tool for your AOI, and finding and removing similar areas to your chosen AOI.

Image Dodging

Use Image Dodging to radiometrically balance images before you mosaic them. Image Dodging uses an algorithm to correct radiometric irregularities (such as hotspots and vignetting) in an image or group of images. Image Dodging and Color Balancing are similar: Color Balancing applies the correction to an image by modeling the shape of the problem (plane, conic, and so forth), while Image Dodging uses grids to localize the problem areas within an image.

Image Dodging corrects brightness and contrast imbalances due to several image inconsistencies:

• Dark corners caused by lens vignetting
• Different film types in the group of images

Adjust color images only for brightness and contrast Deselect the Band Independent parameter (Image Dodging dialog). run dodging on the images that have similar hotspots. Enhance Shadowed Areas To pull detail out of shadow areas or low-contrast areas. You can also set the Max. • Change the Max. and if you need to make a more subtle change. This way you can control the brightness and contrast across all color bands while preserving the individual color ratios of red to green to blue. decrease the contrast by 0. set the Grid Size (Image Dodging dialog) to a larger number so that the process uses many small grids. For example. You may see a major change when you change the Max Grey Shift.1 to 0. Reduce contrast or increase contrast If your dodged images have too much contrast or not enough contrast try the following. Start at 50 and reduce slightly if you see clipping effects at black or white. if you have images from multiple flight lines that have hot spots in different areas and/or of different intensity.3. Change the Max Contrast (Set Dodging Correction Parameters dialog). Grey Shift (Set Dodging Correction Parameters dialog) to a larger number such as 50 (default = 35).• • • Different scanners used to collect the images (or different scanner settings) Varying sun positions among the images Hot spots caused by the sun position in relation to the imaging lens The following tips show when to use the various parameters to correct specific inconsistencies. If you have a large mosaic project and the images have different inconsistencies. Another method would be to set the Max. Contrast parameter (Set Dodging Correction Parameters dialog) to a higher number such as 6. Coastal Areas Mosaic 445 . then group the images with the same inconsistency and dodge that group with the parameters specific to that inconsistency. Grey Shift (Set Dodging Correction Parameters dialog). • Balance groups of images with different inconsistencies.

In Options For All Images. In this dialog you are able to change and reset the brightness and contrast and the constraints of the image.If you have images with dominant water bodies and coastline. you can first choose whether the image should be dodged by each band or as one. Color Balancing When you click Use Color Balancing. you can check the Don’t do dodging on this image box and skip to the next image you want to mosaic. you will get a prompt to Compute Settings first. This smaller grid size works well because of the difference between bright land and dark water. Also uncheck Band Independent (Image Dodging dialog) since a large area of dark water can corrupt the color interpretation of the bright land area. you can change those bands to whatever combination you wish. go ahead and compute the settings you have stipulated in the dialog. choose Manual Color Manipulation in the Set Color Balancing dialog. This will reduce vignetting and hotspots without adversely affecting the appearance of the water. Options for Current Image. 446 Mosaic . preview the dodged image in the dialog viewer so you will know if you need to do anything further to it before mosaicking. you will see a dialog titled Set Dodging Correction Parameters. After the settings are computed. and Display Setting are all above the viewer area showing the image and a place for previewing the dodged image. If using an RGB image. If you want to skip dodging for a certain image. This is helpful if you have a set of images that all look smooth except for one that may show a shadow or bright spot in it. Use Display Setting to choose either a RGB image or a Single Band image. After you compute the settings a final time. use a Grid Size (Image Dodging dialog) of 6 or 8. the method will be chosen for you. and Skip Factor Y. you can click that button so you don’t have to reenter the information with each new image. If you want to. If you want a specific number to apply to all of your images. If you click Edit Correction Settings. Skip Factor X. If you want to manually choose the surface method and display options. you are given the option of Automatic Color Balancing. You then decide if you want the dodging performed across all of the images you intend to mosaic or just one image. If you choose this option. When you bring up the Image Dodging dialog you have several different sections. In the area titled Statistics Collection. you can change the Grid Size. Options for All Images.

Color Balancing

When you choose to use Color Balancing in the Color Corrections dialog, you will be asked if you want to color balance your images automatically or manually. If you choose Automatic Color Balancing, the method will be chosen for you. For more control over how the images are color balanced, you should choose the manual option: choose Manual Color Manipulation in the Set Color Balancing dialog. Once you choose this option, you will have access to the Mosaic Color Balancing tool, where you can choose different surface methods, display options, and surface settings for color balancing your images. Mosaic Color Balancing gives you several options to balance any color disparities in your images before mosaicking them together into one large image.

Surface Methods

When choosing a surface method you should concentrate on how the light abnormality in your image is dispersed. Depending on the shape of the bright or shadowed area you want to correct, you should choose one of the following:

• Parabolic - The color difference is elliptical and does not darken at an equal rate on all sides.
• Conic - The color difference will peak in brightness in the center and darken at an equal rate on all sides.
• Exponential - The color difference is very bright in the center and slowly, but not always evenly, darkens on all sides.
• Linear - The color difference is graduated across the image.

It may be necessary to experiment a bit when trying to decide which surface method to use; it can sometimes be particularly difficult to tell the difference right away between parabolic, conic, and exponential. Conic is usually best for hot spots found in aerial photography, although linear may be necessary in those situations due to the correction of flight line variations. The linear method is also useful for images with a large fall off in illumination along the look direction, especially with SAR images, and also with off-nadir viewing sensors.

In the same area, you will see a check box for Common center for all layers. If you check this option, all layers in the current image will have their center points set to that of the current layer. Whenever the selector is moved, the text box updated, or the reset button clicked, all of the layers will be updated. If you move the center point and you wish to bring it back to the middle of the image, you can click Reset Center Point in the Surface Method area.

Display Setting

The Display Setting area of the Mosaic Color Balancing tool lets you choose between RGB images and Single Band images. You can also alter which layer in an RGB image is the red, green, or blue.

Surface Settings

When you choose a Surface Method, the Surface Settings become the parameters used in that method's formula. The parameters define the surface, and the surface will then be used to flatten the brightness variation throughout the image. You can change the following Surface Settings:

• Offset
• Scale
• Center X
• Center Y
• Axis Ratio

As you change the settings, you can see the Image Profile graph change as well. If you want to preview the color balanced image before accepting it, you can click Preview at the bottom of the Mosaic Color Balancing tool. This is helpful because you can change any disparities that still exist in the image.

Histogram Matching

Histogram Matching is used in other facets of IMAGINE, but it is particularly useful to the mosaicking process. You should use the Histogram Matching option to match data of the same or adjacent scenes that was captured on different days, or data that is slightly different because of sun or atmospheric effects.

By choosing Histogram Matching through the Color Corrections dialog in MosaicPro, you have the options of choosing the Matching Method, the Histogram Type, and whether or not to use an external reference file. When choosing a Matching Method, decide if you want your images to be matched according to all the other images you want to mosaic, or just matched to the overlapping areas between the images. For Histogram Type you can choose to match images band by band or by the intensity (RGB) of the images. If you check Use external reference, you will get the choice of using an image file or parameters as your Histogram Source. If you have an image that contains the characteristics you would like to see in the image you are running through Histogram Matching, then you should use it.
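Histogram matching itself is a standard technique: build both cumulative distributions, then map each source value to the reference value at the same cumulative frequency. The sketch below shows the generic algorithm, not necessarily IMAGINE's exact implementation; the names are illustrative.

```python
import numpy as np

def match_histogram(source, reference):
    """Map source pixel values so their distribution matches the reference."""
    src_vals, src_counts = np.unique(source, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Reference value at the same cumulative frequency as each source value.
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(source, src_vals, matched_vals)
```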

Intersection Mode

When you mosaic images, you will have overlapping areas. For those overlapping areas, you can specify a cutline so that the pixels on one side of a particular cutline take the value of one overlapping image, while the pixels on the other side of the cutline take the value of another overlapping image. The cutlines can be generated manually or automatically. When you choose the Set Mode for Intersection button on the MosaicPro toolbar, you have several different options for handling the overlapping of your images.

The features for dealing with image overlap include:

• Loading cutlines from a vector file (a shapefile or arc coverage file)
• Editing cutlines as vectors in the viewer
• Automatic clipping, extending, and merging of cutlines that cross multiple image intersections
• Loading images and calibration information from triangulated block files, as well as setting the elevation source
• Selecting mosaic output areas with ASCII files containing corner coordinates of sheets that may be rotated; the ASCII import tool is used to try to parse ASCII files that do not conform to a predetermined format
• Allowing users to save cutlines and intersections to a pair of shapefiles
• Loading clip boundary output regions from AOI or vector files; pixels outside the clip boundary will be set to the background color, and this boundary applies to all output regions

Set Overlap Function

When you are using more than one image, you need to define how the images should overlap. Set Overlap Function gives you the options of how to handle the overlap of images if no cutline exists, and, if a cutline does exist, what smoothing or feathering options to use concerning the cutline.

No Cutline Exists

When no cutline exists between overlapping images, you will need to choose how to handle the overlap. You are given the following choices:

• Overlay
• Average
• Minimum

• Maximum
• Feather

Cutline Exists

When a cutline does exist between images, you will need to decide on smoothing and feathering options to cover the overlap area in the vicinity of the cutline. The Smoothing Options area allows you to choose both the Distance and the Smoothing Filter. The Feathering Options given are No Feathering, Feathering, and Feathering by Distance. If you choose Feathering by Distance, you will be able to enter a specific distance.
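Feathering by distance amounts to a distance-weighted average across the overlap. Below is a minimal sketch of that idea, assuming per-pixel distances into each image (which could be computed with scipy.ndimage.distance_transform_edt); it is not IMAGINE's exact algorithm, and the names are ours.

```python
import numpy as np

def feather_blend(img_a, img_b, dist_a, dist_b):
    """Blend two overlapping images with distance-based weights.

    dist_a and dist_b hold, for each overlap pixel, the distance into
    image A and image B respectively; pixels deep inside an image get
    most of their value from that image, so the seam fades out gradually.
    """
    weight_a = dist_a / np.maximum(dist_a + dist_b, 1e-9)
    return weight_a * img_a + (1.0 - weight_a) * img_b
```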

Automatically Generate Cutlines For Intersection

The current implementation of Automatic Cutline Generation is geometry-based. The method uses the centerlines of the overlapping polygons as cutlines. While this is a very straightforward approach, it is not recommended for images containing buildings, rivers, bridges, roads, and so on, because of the possibility the method would make the mosaicked images look obviously inaccurate near the cutline area. For example, if the cutline crosses a bridge, the bridge may look broken at the point where the cutline crosses it. This is an especially important consideration for dense urban areas with many buildings.

Weighted Cutline Generation

When your overlapping images contain buildings, bridges, rivers, roads, or anything else where it is very important that the cutline not break, you should use the Weighted Cutline Generation option. The Weighted Cutline Generation option generates the most nadir cutline first. The most nadir cutline is divided into sections, where a section is a collection of continuous cutline segments shared by the two images. The starting and ending points of these sections are called nodes. Between nodes, a cutline is refined based on a cost function: the point with the smallest cost will be picked as a new cutline vertex.

Cutline Refining Parameters

The Weighted Cutline Generation dialog's first section is Cutline Refining Parameters. In this section you can choose the Segment Length, which specifies the segment length of the refined cutline. The smaller the segment length, the smaller the search area for the next vertex will be, and the chances are reduced of the cutline cutting through features such as roads or bridges. Smaller segment lengths will usually slow down the finding of edges, but at least the chances are small of cutting through important features.

The Bounding Width specifies the constraint to the new vertices in the vertical direction of the segment between two nodes. More specifically, the distance from a new vertex to the segment between the two nodes must be no bigger than half of the value specified by the Bounding Width field.

Cost Function Weighting Factors

The Cost Function used in cutline refinement is a weighted combination of direction, standard deviation, and difference in gray value. The weighting is in favor of high standard deviation, a low difference in gray value, and direction that is closest to the direction between the two nodes. The default value is one for all three weighting factors; when left at the default value, all three components play the same role. When you increment one weighting factor, that factor will play a larger role. If you set a weighting factor to zero, the corresponding component will not play a role at all. If you set all three weighting factors to zero, the cutline refinement will not be done, and the weighted cutline will be reduced to the most nadir cutline.
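The weighted combination can be sketched as below. The exact terms and normalization MosaicPro uses are not published, so this only illustrates the stated behavior: candidates are favored for direction close to the node-to-node direction, high standard deviation, and low gray-value difference. The names are ours.

```python
def candidate_cost(direction_dev, std_dev, gray_diff,
                   w_direction=1.0, w_std=1.0, w_gray=1.0):
    """Lower is better; the candidate vertex with the smallest cost wins.

    direction_dev: deviation from the direction between the two nodes
    std_dev:       local standard deviation (entered negatively, since
                   high values are favored)
    gray_diff:     gray-value difference between the overlapping images
    """
    return w_direction * direction_dev - w_std * std_dev + w_gray * gray_diff
```

With all three weights at zero the cost is constant for every candidate, consistent with the documented behavior that refinement is then skipped and the most nadir cutline is used.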

For instance, if you select User-defined AOI, then you are given the choice of outputting multiple AOI objects to either multiple files or a single file. If you choose Map Series File, you will be able to enter the filename you want to use and choose whether to treat the map extent as pixel centers or pixel edges.

ASCII Sheet File Format

There are two file formats for ASCII sheet files. One lists the coordinates on separate lines with hard-coded identifiers: “UL”, “UR”, “LL”, “LR”. This format can contain multiple sheet files separated by the code “-99”. The sheet name is optional.

some_sheet_name(north-up orthoimage)
UL 0 0
LR 10 10
-99
second_sheet_name(rotated orthoimage)
UL 0 0
UR 5 5
LL 3 3
LR 10 10
-99

The second format lists the coordinates on a single line with a blank space as a separator. The sheet name is optional.

third_sheet_name 0 0 10 10(north-up orthoimage)
fourth_sheet_name 0 0 3 0 3 3 0 3(rotated orthoimage)

Image Boundaries Type

There are two types of image boundaries. If you enter two image coordinates, the boundary is treated as a North-up orthoimage. If you enter four image coordinates, it is treated as a Rotated orthoimage.

Also part of Output Image Options is the choice of choosing a Clip Boundary. If you choose Clip Boundary, any area outside of the Clip Boundary will be designated as background value in your output image. This differs from the User-defined AOI because Clip Boundary applies to all output images.

Different choices yield different options to further modify the output image. You are also given the options of changing the Output Cell Size from the default of 8.0, and you can choose a particular Output Data Type from a dropdown list instead of the default Unsigned 8 bit. You can also click Change Output Map Projection to bring up the Projection Chooser. The Projection Chooser lets you choose a particular projection to use from categories and projections around the world. If you want to choose a customized map projection, you can do that as well. When you are done selecting Output Image Options, you can preview the mosaicked image before saving it as a file.
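Because the two ASCII sheet formats above are plain text, they are easy to generate or check with a short script. The following is a minimal sketch, not part of MosaicPro; the function name and the choice to return corner dictionaries are illustrative assumptions.

    def parse_ascii_sheet_file(path):
        # Reads the first ASCII sheet format: an optional sheet name line,
        # corner lines tagged UL/UR/LL/LR, and sheets separated by "-99".
        # Two corners imply a north-up orthoimage; four imply a rotated one.
        sheets, name, corners = [], None, {}
        with open(path) as f:
            for line in f:
                tokens = line.split()
                if not tokens:
                    continue
                if tokens[0] == "-99":            # end-of-sheet separator
                    sheets.append((name, corners))
                    name, corners = None, {}
                elif tokens[0] in ("UL", "UR", "LL", "LR"):
                    corners[tokens[0]] = (float(tokens[1]), float(tokens[2]))
                else:
                    name = tokens[0]              # sheet name is optional
        if corners:                               # file may omit a final -99
            sheets.append((name, corners))
        return sheets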

Run Mosaic To Disc

When you are ready to process the mosaicked image to disc, you can click this icon and open the Output File Name dialog. From this dialog, browse to the directory where you want to store your mosaicked image, and enter the file name for the image.

There are several options on the Output Options tab, such as Output to a Common Look Up Table, Ignore Input Values, Output Background Value, and Create Output in Batch mode. You can choose from any of these according to your desired outcome.


Enhancement

Introduction

Image enhancement is the process of making an image more interpretable for a particular application (Faust, 1989). Enhancement makes important features of raw, remotely sensed data more interpretable to the human eye. Enhancement techniques are often used instead of classification techniques for feature extraction—studying and locating areas and objects on the ground and deriving useful information from images.

The techniques to be used in image enhancement depend upon:

• Your data—the different bands of Landsat, SPOT, and other imaging sensors are selected to detect certain features. You must know the parameters of the bands being used before performing any enhancement. (See "Raster Data" on page 1 for more details.)
• Your objective—for example, sharpening an image to identify features that can be used for training samples requires a different set of enhancement techniques than reducing the number of bands in the study. You must have a clear idea of the final product desired before enhancement is performed.
• Your expectations—what you think you are going to find.
• Your background—your experience in performing enhancement.

This chapter discusses these enhancement techniques available with ERDAS IMAGINE:

• Correcting Data Anomalies—radiometric and geometric correction
• Radiometric Enhancement—enhancing images based on the values of individual pixels
• Spatial Enhancement—enhancing images based on the values of individual and neighboring pixels
• Wavelet Resolution Merge—fusing information from several sensors into one composite image
• Spectral Enhancement—enhancing images by transforming the values of each pixel on a multiband basis
• Hyperspectral image processing—this section is superseded by the IMAGINE Spectral Analysis™ User's Guide

• Independent Component Analysis—a high order feature extraction technique that exploits the statistical characteristics of multispectral and hyperspectral imagery
• Fourier Analysis—techniques for eliminating periodic noise in imagery
• Radar Imagery Enhancement—techniques specifically designed for enhancing radar imagery

See "Bibliography" on page 777 to find current literature that provides a more detailed discussion of image processing enhancement techniques.

Display vs. File Enhancement

With ERDAS IMAGINE, image enhancement may be performed:

• temporarily, upon the image that is displayed in the Viewer (by manipulating the function and display memories), or
• permanently, upon the image data in the data file.

Enhancing a displayed image is much faster than enhancing an image on disk. If one is looking for certain visual effects, it may be beneficial to perform some trial and error enhancement techniques on the display. Then, when the desired results are obtained, the values that are stored in the display device memory can be used to make the same changes to the data file.

For more information about displayed images and the memory of the display device, see "Image Display" on page 145.

Spatial Modeling Enhancements

Two types of models for enhancement can be created in ERDAS IMAGINE:

• Graphical models—use Model Maker (Spatial Modeler) to easily, and with great flexibility, construct models that can be used to enhance the data.
• Script models—for even greater flexibility, use the Spatial Modeler Language (SML) to construct models in script form. SML enables you to write scripts which can be written, edited, and run from the Spatial Modeler component or directly from the command line. You can edit models created with Model Maker using SML or Model Maker.

Although a graphical model and a script model look different, they produce the same results when applied.

See "Geographic Information Systems" on page 173 for more information on Raster Modeling.

Image Interpreter

ERDAS IMAGINE supplies many algorithms constructed as models, which are ready to be applied with user-input parameters at the touch of a button. These graphical models, created with Model Maker, are listed as menu functions in the Image Interpreter. These functions are mentioned throughout this chapter. Just remember, these are modeling functions which can be edited and adapted as needed with Model Maker or the SML.

The modeling functions available for enhancement in Image Interpreter are briefly described in Table 49.

Table 49: Description of Modeling Functions Available for Enhancement

SPATIAL ENHANCEMENT — These functions enhance the image using the values of individual and surrounding pixels.

Convolution — Uses a matrix to average small sets of pixels across an image.
Non-directional Edge — Averages the results from two orthogonal 1st derivative edge detectors.
Focal Analysis — Enables you to perform one of several analyses on class values in an image file using a process similar to convolution filtering.
Texture — Defines texture as a quantitative characteristic in an image.
Adaptive Filter — Varies the contrast stretch for each pixel depending upon the DN values in the surrounding moving window.
Statistical Filter — Produces the pixel output DN by averaging pixels within a moving window that fall within a statistically defined range.
Resolution Merge — Merges imagery of differing spatial resolutions.

Table 49: Description of Modeling Functions Available for Enhancement (continued)

Crisp — Sharpens the overall scene luminance without distorting the thematic content of the image.

RADIOMETRIC ENHANCEMENT — These functions enhance the image using the values of individual pixels within each band.

LUT (Lookup Table) Stretch — Creates an output image that contains the data values as modified by a lookup table.
Histogram Equalization — Redistributes pixel values with a nonlinear contrast stretch so that there are approximately the same number of pixels with each value within a range.
Histogram Match — Mathematically determines a lookup table that converts the histogram of one image to resemble the histogram of another.
Brightness Inversion — Allows both linear and nonlinear reversal of the image intensity range.
Haze Reduction* — Dehazes Landsat 4 and 5 TM data and panchromatic data.
Noise Reduction* — Removes noise using an adaptive filter.
Destripe TM Data — Removes striping from a raw TM4 or TM5 data file.

SPECTRAL ENHANCEMENT — These functions enhance the image by transforming the values of each pixel on a multiband basis.

Principal Components — Compresses redundant data values into fewer bands, which are often more interpretable than the source data.
Inverse Principal Components — Performs an inverse principal components analysis.
Decorrelation Stretch — Applies a contrast stretch to the principal components of an image.
Tasseled Cap — Rotates the data structure axes to optimize data viewing for vegetation studies.
RGB to IHS — Transforms red, green, blue values to intensity, hue, saturation values.

Table 49: Description of Modeling Functions Available for Enhancement (continued)

IHS to RGB — Transforms intensity, hue, saturation values to red, green, blue values.
Indices — Performs band ratios that are commonly used in mineral and vegetation studies.
Natural Color — Simulates natural color for TM data.

FOURIER ANALYSIS — These functions enhance the image by applying a Fourier Transform to the data.

Fourier Transform* — Enables you to utilize a highly efficient version of the Discrete Fourier Transform (DFT).
Fourier Transform Editor* — Enables you to edit Fourier images using many interactive tools and filters.
Inverse Fourier Transform* — Computes the inverse two-dimensional Fast Fourier Transform (FFT) of the spectrum stored.
Fourier Magnitude* — Converts the Fourier Transform image into the more familiar Fourier Magnitude image.
Periodic Noise Removal* — Automatically removes striping and other periodic noise from images.
Homomorphic Filter* — Enhances imagery using an illumination/reflectance model.

* Indicates functions that are not graphical models.

NOTE: There are other Image Interpreter functions that do not necessarily apply to image enhancement.

Correcting Data Anomalies

Each generation of sensors shows improved data acquisition and image quality over previous generations. However, some anomalies still exist that are inherent to certain sensors and that can be corrected by applying mathematical formulas derived from the distortions (Lillesand and Kiefer, 1987). In addition, the natural distortion that results from the curvature and rotation of the Earth in relation to the sensor platform produces distortions in the image data, which can also be corrected.

There are two types of data correction: radiometric and geometric.

Radiometric Correction

Radiometric correction addresses variations in the pixel intensities (DNs) that are not caused by the object or scene being scanned. These variations include:

• differing sensitivities or malfunctioning of the detectors
• topographic effects
• atmospheric effects

Geometric Correction

Geometric correction addresses errors in the relative positions of pixels. These errors are induced by:

• sensor viewing geometry
• terrain variations

Because of the differences in radiometric and geometric correction between traditional visible/infrared imagery and radar imagery, the two are discussed separately.

Radiometric Correction: Visible/Infrared Imagery

Striping

Striping or banding occurs if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover. Some Landsat 1, 2, and 3 data have striping every sixth line, because of improper calibration of some of the 24 detectors that were used by the MSS. The stripes are not constant data values, nor is there a constant error factor or bias; the differing response of the errant detector is a complex function of the data value sensed. This problem has been largely eliminated in the newer sensors. Various algorithms have been advanced in current literature to help correct this problem in the older data. Among these algorithms are simple along-line convolution, high-pass filtering, and forward and reverse principal component transformations (Crippen, 1989a).

The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data. The IMAGINE Radar Interpreter Adjust Brightness function also corrects some of these problems.

Data from airborne multispectral or hyperspectral imaging scanners also shows a pronounced striping pattern due to varying offsets in the multielement detectors. These artifacts can be minimized by correcting each scan line to a scene-derived average (Kruse, 1988).

Use the Image Interpreter or the Spatial Modeler to implement algorithms to eliminate striping.

Line Dropout

Another common remote sensing device error is line dropout. Line dropout occurs when a detector either completely fails to function, or becomes temporarily saturated during a scan (like the effect of a camera flash on the retina), creating a horizontal streak until the detector(s) recovers, if it recovers. The result is a line or partial line of data with higher data file values. Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, which is based on the lines above and below it.

Atmospheric Effects

The effects of the atmosphere upon remotely-sensed data are not considered errors, since they are part of the signal received by the sensing device (Bernstein, 1983). However, it is often important to remove atmospheric effects, especially for scene matching and change detection analysis. Over the past 30 years, a number of algorithms have been developed to correct for variations in atmospheric transmission. Four categories are mentioned here:

• dark pixel subtraction
• radiance to reflectance conversion
• linear regressions
• atmospheric modeling

Use the Spatial Modeler to construct the algorithms for these operations.
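As an illustration of the first category, which is described in the next section, here is a minimal numpy sketch of dark pixel subtraction. It is a deliberately simplified illustration, not the Spatial Modeler implementation; it assumes the per-band minimum DN is purely an additive atmospheric offset.

    import numpy as np

    def dark_pixel_subtraction(image):
        # image: 3-D array of shape (bands, rows, columns).
        # Treats the lowest DN in each band as an atmosphere-induced
        # additive error and subtracts it from that band.
        out = image.astype(np.float32)
        for b in range(out.shape[0]):
            out[b] -= out[b].min()
        return out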

Dark Pixel Subtraction

The dark pixel subtraction technique assumes that the pixel of lowest DN in each band should really be zero, and hence its radiometric value (DN) is the result of atmosphere-induced additive errors (Crane, 1971; Chavez et al, 1977). Offsets from this represent the additive extraneous components, due to atmosphere effects (Crippen, 1987).

Radiance to Reflectance Conversion

Radiance to reflectance conversion requires knowledge of the true ground reflectance of at least two targets in the image. These can come from either at-site reflectance measurements, or they can be taken from a reflectance table for standard materials. The latter approach involves assumptions about the targets in the image.

Linear Regressions

A number of methods using linear regressions have been tried. These techniques use bispectral plots and assume that the position of any pixel along that plot is strictly a result of illumination. At an illumination of zero, the regression plots should pass through the bispectral origin. The slope then equals the relative reflectivities for the two spectral bands. These assumptions are very tenuous, and recent work indicates that this method may actually degrade rather than improve the data (Crippen, 1987).

Atmospheric Modeling

Atmospheric modeling is computationally complex and requires either assumptions or inputs concerning the atmosphere at the time of imaging. The atmospheric model used to define the computations is frequently Lowtran or Modtran (Kneizys et al, 1988). This model requires inputs such as atmospheric profile (for example, pressure, temperature, water vapor, ozone), aerosol type, elevation, solar zenith angle, and sensor viewing angle. Accurate atmospheric modeling is essential in preprocessing hyperspectral data sets where bandwidths are typically 10 nm or less. These narrow bandwidth corrections can then be combined to simulate the much wider bandwidths of Landsat or SPOT sensors (Richter, 1990).

Geometric Correction

As previously noted, geometric correction is applied to raw sensor data to correct errors of perspective due to the Earth's curvature and sensor motion. Today, some of these errors are commonly removed at the sensor's data processing center. In the past, some data from Landsat MSS 1, 2, and 3 were not corrected before distribution.

Many visible/infrared sensors are not nadir-viewing: they look to the side. For some applications, such as stereo viewing or DEM generation, this is an advantage. For other applications, it is a complicating factor.

In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir. Other pixels, especially those on the view periphery, are viewed off-nadir. For scenes covering very large geographic areas (such as AVHRR), this can be a significant problem.

This and other factors, such as Earth curvature, result in geometric imperfections in the sensor image. Terrain variations have the same distorting effect, but on a smaller (pixel-by-pixel) scale.

These factors can be addressed by rectifying the image to a map. A more rigorous geometric correction utilizes a DEM and sensor position information to correct these distortions. This is orthocorrection.

See "Rectification" on page 251 for more information on geometric correction using rectification and "Photogrammetric Concepts" on page 595 for more information on orthocorrection.

Radiometric Enhancement

Radiometric enhancement deals with the individual values of the pixels in the image. It differs from spatial enhancement (discussed in Spatial Enhancement on page 474), which takes into account the values of neighboring pixels.

Depending on the points and the bands in which they appear, radiometric enhancements that are applied to one band may not be appropriate for other bands. Therefore, the radiometric enhancement of a multiband image can usually be considered as a series of independent, single-band enhancements (Faust, 1989).

Radiometric enhancement usually does not bring out the contrast of every pixel in an image. Contrast can be lost between some pixels, while gained on others.

Figure 132: Histograms of Radiometrically Enhanced Data (j and k are reference points; the panels show frequency plotted against data values 0 to 255 for the original data and the enhanced data)

In Figure 132, the range between j and k in the histogram of the original data is about one third of the total range of the data. When the same data are radiometrically enhanced, the range between j and k can be widened. Therefore, the pixels between j and k gain contrast—it is easier to distinguish different brightness values in these pixels. However, the pixels outside the range between j and k are more grouped together than in the original histogram to compensate for the stretch between j and k. Contrast among these pixels is lost.

Contrast Stretching

When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table. For example, Figure 133 shows the graph of a lookup table that increases the contrast of data file values in the middle range of the input data (the range within the brackets). Note that the input range within the bracket is narrow, but the output brightness values for the same pixels are stretched over a wider range. This process is called contrast stretching.

Figure 133: Graph of a Lookup Table (output brightness values 0 to 255 plotted against input data file values 0 to 255)

Notice that the graph line with the steepest (highest) slope brings out the most contrast by stretching output values farther apart.

Linear and Nonlinear

The terms linear and nonlinear, when describing types of spectral enhancement, refer to the function that is applied to the data to perform the enhancement. A piecewise linear stretch uses a polyline function to increase contrast to varying degrees over different ranges of the data, as in Figure 134.

Figure 134: Enhancement with Lookup Tables (graphs of linear, nonlinear, and piecewise linear functions mapping input data file values 0 to 255 to output brightness values 0 to 255)

Linear Contrast Stretch

A linear contrast stretch is a simple way to improve the visible contrast of an image. It is often necessary to contrast-stretch raw image data, so that they can be seen on the display. In most raw data, the data file values fall within a narrow range—usually a range much narrower than the display device is capable of displaying. That range can be expanded to utilize the total range of the display device (usually 0 to 255). A Percentage LUT linear contrast stretch, with a clip of 2.5% from the left end and 1.0% from the right end of the histogram, is automatically applied to images displayed in the Viewer.
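The percentage-clip stretch just described can be sketched in a few lines of numpy. This is an illustrative helper, not the Viewer's code; the default clip points simply follow the 2.5% / 1.0% figures given above.

    import numpy as np

    def percent_clip_stretch(band, low_pct=2.5, high_pct=1.0):
        # Linear contrast stretch: clip the given percentages from each
        # end of the histogram, then map the remaining input range
        # linearly to the full 0-255 brightness range of the display.
        lo = np.percentile(band, low_pct)
        hi = np.percentile(band, 100.0 - high_pct)
        out = (band.astype(np.float32) - lo) / (hi - lo) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)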

Nonlinear Contrast Stretch

A nonlinear spectral enhancement can be used to gradually increase or decrease contrast over a range, instead of applying the same amount of contrast (slope) across the entire image. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges. The graph of the function in Figure 135 shows one example.

Figure 135: Nonlinear Radiometric Enhancement (graph of a nonlinear function mapping input data file values 0 to 255 to output brightness values 0 to 255)

Piecewise Linear Contrast Stretch

A piecewise linear contrast stretch allows for the enhancement of a specific portion of data by dividing the lookup table into three sections: low, middle, and high. It enables you to create a number of straight line segments that can simulate a curve, as shown in Figure 136. You can enhance the contrast or brightness of any section in a single color gun at a time. This technique is very useful for enhancing image areas in shadow or other areas of low contrast. You can manipulate the percentage of pixels in a particular range, but you cannot eliminate a range of data file values.

A piecewise linear contrast stretch normally follows two rules:

1) The data values are continuous; there can be no break in the values between High, Middle, and Low. Range specifications adjust in relation to any changes to maintain the data value range.

2) The data values specified can go only in an upward, increasing direction.

In ERDAS IMAGINE, the Piecewise Linear Contrast function is set up so that there are always pixels in each data file value from 0 to 255.
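A piecewise linear lookup table of this kind is easy to prototype with numpy's interpolation. The breakpoints below are hypothetical; any continuous, increasing set (satisfying rules 1 and 2) will do.

    import numpy as np

    def piecewise_linear_lut(breakpoints):
        # Build a 256-entry lookup table from (input, output) breakpoints.
        xs, ys = zip(*breakpoints)
        return np.interp(np.arange(256), xs, ys).astype(np.uint8)

    # This example stretches the middle range at the expense of the
    # low and high ranges; for an 8-bit band: output = lut[band]
    lut = piecewise_linear_lut([(0, 0), (60, 20), (120, 200), (255, 255)])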

Figure 136: Piecewise Linear Contrast Stretch (LUT value plotted against the data value range, divided into Low, Middle, and High sections)

The contrast value for each range represents the percent of the available output range that particular range occupies. The brightness value for each range represents the middle of the total range of brightness values occupied by that range. Since rules 1 and 2 above are enforced, as the contrast and brightness values are changed, they may affect the contrast and brightness of other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease.

Contrast Stretch on the Display

Usually, a contrast stretch is performed on the display device only, so that the data file values are not changed. Lookup tables are created that convert the range of data file values to the maximum range of the display device. In ERDAS IMAGINE, you can permanently change the data file values to the lookup table values. You can then edit and save the contrast stretch values and lookup tables as part of the raster data image file. These values are loaded into the Viewer as the default display values the next time the image is displayed.

Use the Image Interpreter LUT Stretch function to create an .img output file with the same data values as the displayed contrast stretched image.

See "Raster Data" on page 1 for more information on the data contained in image files.

The statistics in the image file contain the mean, standard deviation, and other statistics on each band of data. The mean and standard deviation are used to determine the range of data file values to be translated into brightness values or new data file values. You can specify the number of standard deviations from the mean that are to be used in the contrast stretch. Usually the data file values that are two standard deviations above and below the mean are used. If the data have a normal distribution, then this range represents approximately 95 percent of the data.

The mean and standard deviation are used instead of the minimum and maximum data file values because the minimum and maximum data file values are usually not representative of most of the data. A notable exception occurs when the feature being sought is in shadow. The shadow pixels are usually at the low extreme of the data file values, outside the range of two standard deviations from the mean.

The use of these statistics in contrast stretching is discussed and illustrated in "Image Display" on page 145. Statistical terms are discussed in "Math Topics" on page 697.
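A standard-deviation stretch of the kind just described can be sketched as follows. The helper is illustrative and assumes values beyond the chosen number of standard deviations should simply saturate.

    import numpy as np

    def stddev_stretch(band, n_std=2.0):
        # Map mean - n_std*sigma .. mean + n_std*sigma linearly to 0-255.
        # With n_std = 2 this covers about 95 percent of normally
        # distributed data, as noted above.
        mean, sigma = float(band.mean()), float(band.std())
        lo, hi = mean - n_std * sigma, mean + n_std * sigma
        out = (band.astype(np.float32) - lo) / (hi - lo) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)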

Varying the Contrast Stretch

There are variations of the contrast stretch that can be used to change the contrast of values over a specific range, or by a specific amount. By manipulating the lookup tables as in Figure 137, the maximum contrast in the features of an image can be brought out. Figure 137 shows how the contrast stretch manipulates the histogram of the data, increasing contrast in some areas and decreasing it in others. This is also a good example of a piecewise linear contrast stretch, which is created by adding breakpoints to the histogram.

Figure 137: Contrast Stretch Using Lookup Tables, and Effect on Histogram (each panel plots output brightness values against input data file values, with the input histogram and the resulting output histogram)

1. Linear stretch. Values are clipped at 255.
2. The breakpoint at the top of the function is moved so that values are not clipped.
3. A breakpoint is added to the linear function, redistributing the contrast.
4. Another breakpoint added. Contrast at the peak of the histogram continues to increase.

Histogram Equalization

Histogram equalization is a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the peaks of the histogram and lessened at the tails.

Histogram equalization can also separate pixels into distinct groups if there are few output values over a wide range. This can have the visual effect of a crude classification.

as shown in the following equation: T A = --N Where: N T A = the number of bins = the total number of pixels in the image = the equalized number of pixels per bin The pixels of each input value are assigned to bins. the pixel values of an image (either data file values or brightness values) are reassigned to a certain number of bins. based upon the bins to which they are assigned. The range of the output values is from 0 to M. If there are many bins or many pixels with the same value(s). • The total number of pixels is divided by the number of bins. Consider Figure 139: 470 Enhancement .the maximum of the range of the output values.Figure 138: Histogram Equalization Original Histogram peak After Equalization tail pixels at peak are spread apart . The following parameters are entered: • N .the number of bins to which pixel values can be assigned. The pixels are then given new values.contrast is gained pixels at tail are grouped contrast is lost To perform a histogram equalization. M . so that the number of pixels in each bin is as close to A as possible. which are simply numbered sets of pixels. equaling the number of pixels per bin. some bins may be empty.

The output histogram of this equalized image looks like Figure 140: Enhancement 471 . M = 9. the following equation is used: ⎛ i–1 ⎞ H i ⎜ H ⎟ + ----⎜ ∑ k⎟ 2 B i = int ⎝ k = 1 ⎠ ---------------------------------A Where: A Hi int Bi = equalized number of pixels per bin (see above) = the number of values with the value i (histogram) = integer function (truncating real numbers to integer) = bin number for pixels with value i Source: Modified from Gonzalez and Wintz. there would be: 240 pixels / 10 bins = 24 pixels per bin = A To assign pixels to bins.Figure 139: Histogram Equalization Example 60 60 number of pixels 40 30 A = 24 15 10 5 5 10 5 0 1 2 3 4 5 6 7 8 9 data file values There are 240 pixels represented by this histogram. In this example. To equalize this histogram to 10 bins. 1977 The 10 bins are rescaled to the range 0 to M. because the input values ranged from 0 to 9. so that the equalized histogram can be compared to the original.

To perform a true color level slice. which usually make up the darkest and brightest regions of the input image. The lookup table is then stair-stepped so that there is an equal number of input pixels in each of the output levels. A level slice on a true color display creates a stair-stepped lookup table. data values at the tails of the original histogram are grouped together. For example. Sets of pixels with the same value are never split up to form equal bins. The resulting histogram is not exactly flat. contrast among the tail pixels. Level Slice A level slice is similar to a histogram equalization in that it divides the data into equal amounts. the input range of 3 to 7 is stretched to the range 1 to 8. since the pixels can rarely be grouped together into bins with an equal number of pixels. each with one output brightness value. So. is lost. 472 Enhancement . The effect on the data is that input file values are grouped together at regular intervals into a discrete number of levels. you can see that the enhanced image gains contrast in the peaks of the original histogram. Input values 0 through 2 all have the output value of 0.Figure 140: Equalized Histogram numbers inside bars are input data file values 60 60 number of pixels 40 30 4 20 2 1 0 15 3 0 0 0 5 A = 24 6 7 8 9 15 0 1 2 3 4 5 6 7 8 9 output data file values Effect on Contrast By comparing the original histogram of the example data with the one above. However. you must specify a range for the output brightness values and a number of output levels.

Histogram Matching

Histogram matching is the process of determining a lookup table that converts the histogram of one image to resemble the histogram of another. Histogram matching is useful for matching data of the same or adjacent scenes that were scanned on separate days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.

To achieve good results with histogram matching, the two input images should have similar characteristics:

• The general shape of the histogram curves should be similar.
• Relative dark and light features in the image should be the same.
• For some applications, the spatial resolution of the data should be the same.
• The relative distributions of land covers should be about the same, even when matching scenes that are not of the same area. If one image has clouds and the other does not, then the clouds should be removed before matching the histograms. This can be done using the AOI function. The AOI function is available from the Viewer menu bar.

In ERDAS IMAGINE, histogram matching is performed band-to-band (for example, band 2 of one image is matched to band 2 of the other image).

To match the histograms, a lookup table is mathematically derived, which serves as a function for converting one histogram to the other, as illustrated in Figure 141.

Figure 141: Histogram Matching (source histogram (a), mapped through the lookup table (b), approximates model histogram (c))
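One common way to derive such a lookup table is through cumulative histograms: for each source value, pick the reference value whose cumulative frequency is closest. The sketch below assumes 8-bit input bands and is not the exact ERDAS IMAGINE implementation.

    import numpy as np

    def match_histogram(source, reference):
        # Build a lookup table so that the source band's histogram
        # resembles the reference band's, then apply it to the source.
        src_cdf = np.cumsum(np.bincount(source.ravel(), minlength=256)) / source.size
        ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=256)) / reference.size
        lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
        return lut[source]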

Brightness Inversion

The brightness inversion functions produce images that have the opposite contrast of the original image. Dark detail becomes light, and light detail becomes dark. This can also be used to invert a negative image that has been scanned to produce a positive image.

Brightness inversion has two options: inverse and reverse. Both options convert the input data range (commonly 0 - 255) to 0 - 1.0. A min-max remapping is used to simultaneously stretch the image and handle any input bit format. The output image is in floating point format, so a min-max stretch is used to convert the output image into 8-bit format.

Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of the low DN pixels. This function applies the following algorithm:

DNout = 1.0 if 0.0 < DNin < 0.1
DNout = 0.1 / DNin if 0.1 < DNin < 1

Reverse is a linear function that simply reverses the DN values:

DNout = 1.0 - DNin

Source: Pratt, 1991
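Both options translate directly into numpy. The thresholds follow the algorithm above; the min-max remapping to 0.0 - 1.0 is included because the text notes the input range is converted before inversion. This is an illustrative sketch, not the Image Interpreter model itself.

    import numpy as np

    def brightness_inversion(band, mode="inverse"):
        b = band.astype(np.float32)
        dn = (b - b.min()) / (b.max() - b.min())       # remap to 0.0 - 1.0
        if mode == "inverse":
            # 1.0 below the 0.1 threshold, 0.1 / DN above it
            out = np.where(dn < 0.1, 1.0, 0.1 / np.maximum(dn, 0.1))
        else:                                          # "reverse"
            out = 1.0 - dn
        return out                                     # floating point output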

Enhancement 475 . 1996). These numbers are often called coefficients. Convolution Filtering Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is one method of spatial filtering. Some texts may use the terms synonymously. because they are used as such in the mathematical equations. A convolution kernel is a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. 1996). which refers to the altering of spatial or spectral features for image enhancement (Jensen.Figure 142: Spatial Frequencies zero spatial frequency low spatial frequency high spatial frequency This section contains a brief description of the following: • • Convolution. Crisp. and Adaptive filtering Resolution merging See Radar Imagery Enhancement on page 525 for a discussion of Edge Detection and Texture Analysis. you can apply convolution filtering to an image using any of these methods: • • • • Filtering dialog in the respective multispectral or panchromatic image type option Convolution function in Spatial Resolution option Spatial Resolution Non-directional Edge enhancement function Convolve function in Model Maker Filtering is a broad term. In ERDAS IMAGINE. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen. The numbers in the matrix serve to weight this average toward particular pixels. These spatial enhancement techniques can be applied to any type of data.

Convolution Example

To understand how one pixel is convolved, imagine that the convolution kernel is overlaid on the data file values of the image (in one band), so that the pixel to be convolved is in the center of the window.

Figure 143: Applying a Convolution Kernel

Input Data:      Kernel:
2 8 6 6 6        -1 -1 -1
2 8 6 6 6        -1 16 -1
2 2 8 6 6        -1 -1 -1
2 2 2 8 6
2 2 2 2 8

Figure 143 shows a 3 × 3 convolution kernel being applied to the pixel in the third column, third row of the sample data (the pixel that corresponds to the center of the kernel).

To compute the output value for this pixel, each value in the convolution kernel is multiplied by the image pixel value that corresponds to it. These products are summed, and the total is divided by the sum of the values in the kernel, as shown here:

integer [((-1 × 8) + (-1 × 6) + (-1 × 6) + (-1 × 2) + (16 × 8) + (-1 × 6) + (-1 × 2) + (-1 × 2) + (-1 × 8)) ÷ ((-1) + (-1) + (-1) + (-1) + 16 + (-1) + (-1) + (-1) + (-1))]
= int [(128 - 40) / (16 - 8)]
= int (88 / 8) = int (11) = 11

In order to convolve the pixels at the edges of an image, pseudo data must be generated in order to provide values on which the kernel can operate. In the example below, the pseudo data are derived by reflection. This means the top row is duplicated above the first data row and the left column is duplicated left of the first data column. If a second row or column is needed (for a 5 × 5 kernel, for example), the second data row or column is copied above or left of the first copy, and so on. An alternative to reflection is to create background value (usually zero) pseudo data; this is called Fill.

When the pixels in this example image are convolved, output values cannot be calculated for the last row and column; here we have used ?s to show the unknown values. In practice, the last row and column of an image are either reflected or filled just like the first row and column.

Enhancement 477 . F is set to 1 if the sum is zero. V is clipped to 0.j (in the kernel) = the data value of the pixel that corresponds to fij = the dimension of the kernel. thus increasing the spatial frequency of the image. Source: Modified from Jensen. so that the output values are in relatively the same range as the input values. 1996. It is important to note that the relatively lower values become lower. and the higher values become higher. 1983 The sum of the coefficients (F) is used as the denominator of the equation above. Convolution Formula The following formula is used to derive an output data file value for the pixel being convolved (in the center): ⎛ q ⎞ ⎜ f ij d ij⎟ ∑⎜∑ ⎟ V = i = 1 ⎝j = 1 ⎠ ---------------------------------F Where: q fij dij q F V = the coefficient of a convolution kernel at position i. assuming a square kernel (if q = 3. or 1 if the sum of coefficients is 0 = the output pixel value In cases where V is less than 0.Figure 144: Output Values for Convolution Kernel pseudo data (shaded) 2 2 2 2 2 2 2 2 2 2 2 2 8 8 8 2 2 2 6 6 6 8 2 2 6 6 6 6 8 2 6 6 6 6 6 8 0 1 1 2 ? 11 11 0 1 ? 5 5 11 0 ? 6 5 6 11 ? ? ? ? ? ? Input Data Output Data The kernel used in this example is a high frequency kernel. Schowengerdt. as explained below. the kernel is 3 × 3) = either the sum of the coefficients of the kernel. Since F cannot equal zero (division by zero is not defined).

which usually smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high. Zero-sum kernels can be biased to detect edges in a particular direction. High-frequency kernels serve as edge enhancers. this 3 × 3 kernel is biased to the south (Jensen. no division is performed (F = 1). has the effect of increasing spatial frequency. then the sum of the coefficients is not used in the convolution equation. When a zero-sum kernel is used. low values become much lower) Therefore. -1 1 1 -1 -2 1 -1 1 1 See the section on Edge Detection on page 532 for more detailed information. Unlike edge detectors (such as zero-sum kernels). they highlight edges and do not necessarily eliminate other features. -1 -1 -1 -1 16 -1 -1 -1 -1 478 Enhancement . For example. which is at the edges between homogeneous (homogeneity is low spatial frequency) groups of pixels. 1996). a zero-sum kernel is an edge detector. since division by zero is not defined. High-Frequency Kernels A high-frequency kernel. or high-pass kernel. This generally causes the output values to be: • • • zero in areas where all input values are equal (no edges) low in areas of low spatial frequency extreme in areas of high spatial frequency (high values become much higher.Zero-Sum Kernels Zero-sum kernels are kernels in which the sum of all coefficients in the kernel equals zero. In this case. since they bring out the edges between homogeneous groups of pixels. The resulting image often consists of only edges and zeros. as above.

. causing them to be more homogeneous. The algorithm used for this function is: 1) Calculate principal components of multiband input image.. see "Geographic Information Systems" on page 173.. spatial frequency is increased by this kernel. Enhancement 479 . or low-pass kernel.the high value becomes higher. or a broad point spread function of the sensor...the low value gets lower. Inversely...When this kernel is used on a set of pixels in which a relatively low value is surrounded by higher values. which decreases spatial frequency. like this. 2) Convolve PC-1 with summary filter. For information on applying filters to thematic layers. Low-Frequency Kernels Below is an example of a low-frequency kernel. This is a useful enhancement if the image is blurred due to atmospheric haze. BEFORE 64 61 58 60 125 60 57 69 70 AFTER 64 61 58 60 187 60 57 69 70 . rapid sensor motion. when the kernel is used on a set of pixels in which a relatively high value is surrounded by lower values. The resulting image looks either more smooth or more blurred. In either case. BEFORE 204 201 198 200 106 200 197 209 210 AFTER 204 201 198 200 9 200 197 209 210 . 1 1 1 1 1 1 1 1 1 This kernel simply averages the values of the pixels.. Crisp The Crisp filter sharpens the overall scene luminance without distorting the interband variance content of the image.

3) Retransform to RGB space.

The logic of the algorithm is that the first principal component (PC-1) of an image is assumed to contain the overall scene luminance. The other PCs represent intra-scene variance. Thus, you can sharpen only PC-1 and then reverse the principal components calculation to reconstruct the original image. Luminance is sharpened, but variance is retained.

Resolution Merge

The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal resolution.

See "Raster Data" on page 1 for a full description of resolution types.

Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT panchromatic has one broad band with very good spatial resolution—10 m. Combining these two images to yield a seven-band data set with 10 m resolution provides the best characteristics of both sensors.

A number of models have been suggested to achieve this image merge. Welch and Ehlers (Welch and Ehlers, 1987) used forward-reverse RGB to IHS transforms, replacing I (from transformed TM data) with the SPOT panchromatic image. However, this technique is limited to three bands (R, G, B).

Another technique (Schowengerdt, 1980) combines a high frequency image derived from the high spatial resolution data (that is, SPOT panchromatic) additively with the high spectral resolution Landsat TM image. Chavez (Chavez et al, 1991), among others, uses the forward-reverse principal components transforms with the SPOT image.

In the above two techniques, it is assumed that the intensity component (PC-1 or I) is spectrally equivalent to the SPOT panchromatic image, and that all the spectral information is contained in the other PCs or in H and S. Since SPOT data do not cover the full spectral range that TM data do, this assumption does not strictly hold. It is unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchromatic) image.

The Resolution Merge function has two different options for resampling low spatial resolution data to a higher spatial resolution while retaining spectral information:

• forward-reverse principal components transform
• multiplicative

Principal Components Merge

Because a major goal of this merge is to retain the spectral information of the six TM bands (1 - 5, 7), this algorithm is mathematically rigorous. It is assumed that:

• PC-1 contains only overall scene luminance; all interband variation is contained in the other 5 PCs, and
• scene luminance in the SWIR bands is identical to visible scene luminance.

With the above assumptions, the forward transform into PCs is made. PC-1 is removed and its numerical range (min to max) is determined. The high spatial resolution image is then remapped so that its histogram shape is kept constant, but it is in the same numerical range as PC-1. It is then substituted for PC-1 and the reverse transform is applied. This remapping is done so that the mathematics of the reverse transform do not distort the thematic information (Welch and Ehlers, 1987).

Multiplicative

The second technique in the Image Interpreter uses a simple multiplicative algorithm:

(DNTM1) (DNSPOT) = DNnew TM1

The algorithm is derived from the four component technique of Crippen (Crippen, 1989a). In this paper, it is argued that of the four possible arithmetic methods to incorporate an intensity image into a chromatic image (addition, subtraction, division, and multiplication), only multiplication is unlikely to distort the color. However, in his study Crippen first removed the intensity component via band ratios, spectral indices, or image dodging. The algorithm shown above operates on the original image. The result is an increased presence of the intensity component. For many applications, this is desirable. People involved in urban or suburban studies, city planning, and utilities routing often want roads and cultural features (which tend toward high reflection) to be pronounced in the image.

Brovey Transform

In the Brovey Transform method, three bands are used according to the following formula:

[DNB1 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB1_new
[DNB2 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB2_new
[DNB3 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB3_new

Where:
B(n) = band (number)
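A direct transcription of the Brovey formula follows; it assumes the three multispectral bands have already been resampled to the pixel size of the high resolution image.

    import numpy as np

    def brovey_merge(b1, b2, b3, pan):
        # Each band is divided by the three-band sum and scaled by the
        # high resolution image, as in the formula above.
        total = b1.astype(np.float32) + b2 + b3
        total = np.where(total == 0, 1, total)       # avoid division by zero
        pan = pan.astype(np.float32)
        return b1 / total * pan, b2 / total * pan, b3 / total * pan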

1982. However. and digitized photographs. these are the scenes one would prefer to obtain from imagery sources such as Space Imaging or SPOT. Since the Brovey Transform is intended to produce RGB images. to provide contrast in shadows. Landsat. However. There are many circumstances where this is not the optimum approach. 2. only three bands at a time should be merged from the input multispectral scene. coastal studies where much of the water detail is spread through a very low DN range and the land detail is spread through a much higher DN range would be such a circumstance. These scenes need an increase in both contrast and overall scene luminance. even adjustable stretches like the piecewise linear stretch act on the scene globally. Peli and Lim. it is good for producing RGB images with a higher degree of contrast in the low and high ends of the image histogram and for producing visually appealing images. The Image Enhancement function in IMAGINE Radar Interpreter is better for degraded or difficult images. Low luminance—these scenes have an overall or regional less than optimum intensity. • 482 Enhancement . 3. 2. An underexposed photograph (scanned) or shadowed areas would be in this category. Consequently. water and high reflectance areas such as urban features). such as bands 3. Scenes to be adaptively filtered can be divided into three broad and overlapping categories: • Undegraded—these scenes have good and uniform illumination overall. 3 to RGB. The resulting merged image should then be displayed with bands 1. Schwartz and Soha. Adaptive Filter Contrast enhancement (image stretching) is a widely applicable standard image processing technique. In these cases. ERDAS IMAGINE supplies two adaptive filters with user-adjustable parameters. 1 from a SPOT or Landsat TM image or 4. such as SPOT. The Adaptive Filter function in Image Interpreter can be applied to undegraded images. Given a choice.The Brovey Transform was developed to visually increase contrast in the low and high ends of an image’s histogram (that is. Adaptive filters attempt to achieve this (Fahnestock and Schowengerdt. the Brovey Transform should not be used if preserving the original scene radiometry is important. a filter that adapts the stretch to the region of interest (the area within the moving window) would produce a better enhancement. For example. 2 from a Landsat TM image. 1983. 1977).

• High luminance—these scenes are characterized by overall excessively high DN values. Examples of such circumstances would be an over-exposed (scanned) photograph or a scene with a light cloud cover or haze. These scenes need a decrease in luminance and an increase in contrast.

No single filter with fixed parameters can address this wide variety of conditions. In addition, multiband images may require different parameters for each band. Without the use of adaptive filters, the different bands would have to be separated into one-band files, enhanced, and then recombined.

For this function, the image is separated into high and low frequency component images. The low frequency image is considered to be overall scene luminance. These two component parts are then recombined in various relative amounts using multipliers derived from LUTs. These LUTs are driven by the overall scene luminance:

DNout = K(DNHi) + DNLL

Where:
K = user-selected contrast multiplier
Hi = high luminance (derives from the LUT)
LL = local luminance (derives from the LUT)

Figure 145: Local Luminance Intercept (local luminance plotted against low frequency image DN, each from 0 to 255, showing the intercept I)

Figure 145 shows the local luminance intercept, which is the output luminance value that an input luminance value of 0 would be assigned.

Wavelet Resolution Merge

The ERDAS IMAGINE Wavelet Resolution Merge allows multispectral images of relatively low spatial resolution to be sharpened using a coregistered panchromatic image of relatively higher resolution. A primary intended target dataset is Landsat 7 ETM+. Increasing the spatial resolution of multispectral imagery in this fashion is, in fact, the rationale behind the Landsat 7 sensor design.

The ERDAS IMAGINE algorithm is a modification of the work of King and Wang (King et al, 2001) with extensive input from Lemeshewsky (Lemeshewsky, 1999; Lemeshewsky, 2002a; Lemeshewsky, 2002b). Fusing information from several sensors into one composite image can take place on four levels: signal, pixel, feature, and symbolic. This algorithm works at the pixel level. The results of pixel-level fusion are primarily for presentation to a human observer/analyst (Rockinger and Fechner, 1998). However, it must be considered that computer-based analysis (for example, supervised classification) could be a logical follow-on. Thus, in the case of pan/multispectral image sharpening, it is vital that the algorithm preserve the spectral fidelity of the input dataset. Aside from traditional Pan-Multispectral image sharpening, this algorithm can be used to merge any two images, for example, radar with SPOT Pan.

Wavelet Theory

Wavelet-based image reduction is similar to Fourier transform analysis. In the Fourier transform, long continuous (sine and cosine) waves are used as the basis. The wavelet transform uses short, discrete "wavelets" instead of a long wave. Thus the new transform is much more local (Strang et al, 1997). In image processing terms, the wavelet can be parameterized as a finite size moving window.

A key element of using wavelets is selection of the base waveform to be used, the "mother wavelet" or "basis". The "basis" is the basic waveform to be used to represent the image. The input signal (image) is broken down into successively smaller multiples of this basis. Wavelets are derived waveforms that have a lot of mathematically useful characteristics that make them preferable to simple sine or cosine functions; that is, they have a finite length, as opposed to sine waves, which are continuous and infinite in length.

Once the basis waveform is mathematically defined, a family of multiples can be created with incrementally increasing frequency. For example, related wavelets can be created of twice the frequency, three times the frequency, four times the frequency, and so forth. Once the waveform family is defined, the image can be decomposed by applying coefficients to each of the waveforms. Given a sufficient number of waveforms in the family, all the detail in the image can be defined by coefficient multiples of the ever-finer waveforms.

In image processing, we do not want to get deeply involved in mathematical waveform decomposition; we want relatively rapid processing kernels (moving windows). In practice, we use the above theory to derive moving window, high-pass kernels which approximate the waveform decomposition. The wavelets are rarely even calculated (Shensa, 1992). Thus, the coefficients of the discrete high-pass filter are of more interest than the wavelets themselves.

For image processing, orthogonal and biorthogonal transforms are of interest. They are mathematically constrained so that no information is lost. With orthogonal transforms, the new axes are mutually perpendicular and the output signal has the same length as the input signal. The matrices are unitary and the transform is lossless. The same filters are used for analysis and reconstruction. For biorthogonal transforms, the new axes are not necessarily perpendicular; the lengths of and angles between the new axes may change. Perfect reconstruction is possible and the matrices are invertible. The analysis and reconstruction filters are not required to be the same. In general, biorthogonal (and symmetrical) wavelets are more appropriate than orthogonal wavelets for image processing applications (Strang et al, 1997, p. 362-363).

Biorthogonal wavelets are ideal for image processing applications because of their symmetry and perfect reconstruction properties. Each biorthogonal wavelet has a reconstruction order and a decomposition order associated with it. For example, biorthogonal 3.3 denotes a biorthogonal wavelet with reconstruction order 3 and decomposition order 3. Although biorthogonal wavelets are phase linear, they are shift variant due to the decimation process, which saves only even-numbered averages and differences. This means that the resultant subimage changes if the starting point is shifted (translated) by one pixel; a shift of the input image can produce large changes in the values of the wavelet decomposition coefficients. One way to overcome this is to use an average of each average and difference pair.

The signal processing properties of the Discrete Wavelet Transform (DWT) are strongly determined by the choice of high-pass (bandpass) filter (Shensa, 1992). Once selected, based on the mother wavelet, the wavelets are applied to the input image recursively via a pyramid algorithm or filter bank. This is commonly implemented as a cascading series of highpass and lowpass filters, following the commonly used, fast (Mallat, 1989) discrete wavelet decomposition algorithm. After filtering at any level, the low-pass image (commonly termed the "approximation" image) is passed to the next finer filtering in the filter bank. The high-pass images (termed "horizontal", "vertical", and "diagonal") are retained for later image reconstruction. In practice, three or four recursions are sufficient.

2-D Discrete Wavelet Transform

A 2-D Discrete Wavelet Transform of an image yields four components:

• approximation coefficients Wϕ
• horizontal coefficients WψH—variations along the columns

• vertical coefficients WψV—variations along the rows
• diagonal coefficients WψD—variations along the diagonals

(Gonzalez and Woods, 2001)

Figure 146: Schematic Diagram of the Discrete Wavelet Transform - DWT (the input image is passed through the low-pass filter hϕ and high-pass filter hψ with row decimation, and each subimage is then filtered and decimated along columns, yielding the four subimages Wϕ, WψH, WψV, and WψD)

Symbols hϕ and hψ are, respectively, the low-pass and high-pass wavelet filters used for decomposition. The rows of the image are convolved with the low-pass and high-pass filters and the result is downsampled along the columns. This yields two subimages whose horizontal resolutions are reduced by a factor of 2. The high-pass or detailed coefficients characterize the image's high frequency information with vertical orientation, while the low-pass component contains its low frequency, vertical information. Both subimages are again filtered columnwise with the same low-pass and high-pass filters and downsampled along rows. Thus, for each input image, we have four subimages, each reduced by a factor of 4 compared to the original image: Wϕ, WψH, WψV, and WψD.
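The decomposition is easy to experiment with using the PyWavelets package (an outside library, used here only for illustration; it is not part of ERDAS IMAGINE). One level of the 2-D DWT with the biorthogonal 3.3 wavelet mentioned earlier:

    import numpy as np
    import pywt

    image = np.random.rand(256, 256).astype(np.float32)
    # dwt2 returns the approximation plus the (horizontal, vertical,
    # diagonal) detail coefficients, each roughly half size in both
    # directions.
    approx, (horiz, vert, diag) = pywt.dwt2(image, "bior3.3")
    # Perfect reconstruction property of biorthogonal wavelets:
    restored = pywt.idwt2((approx, (horiz, vert, diag)), "bior3.3")
    print(np.allclose(image, restored[:256, :256], atol=1e-4))   # True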

2-D Inverse Discrete Wavelet Transform

The reduced components of the input images are passed as input to the low-pass and high-pass reconstruction filters h̃ϕ and h̃ψ (different from the ones used for decomposition), as shown in Figure 147.

Figure 147: Inverse Discrete Wavelet Transform - DWT⁻¹
[The four subimages Wϕ, WψH, WψV, and WψD are passed through the reconstruction filters h̃ϕ and h̃ψ with row padding, then column padding, to yield the output image.]

The sequence of steps is the opposite of that in the DWT: the subimages are upsampled along rows (since the last step in the DWT was downsampling along rows) and convolved with the low-pass and high-pass filters columnwise (in the DWT we filtered along the columns last). These intermediate outputs are concatenated, upsampled along columns, filtered rowwise, and finally concatenated to yield the original image.

Algorithm Theory

The basic theory of the decomposition is that an image can be separated into high-frequency and low-frequency components. For example, a low-pass filter can be used to create a low-frequency image. Subtracting this low-frequency image from the original image would create the corresponding high-frequency image. These two images contain all of the information in the original image: if they were added together, the result would be the original image. The same could be done by high-pass filtering an image to derive the corresponding low-frequency image; again, adding the two together would yield the original image. Thus, any image can be broken into various high- and low-frequency components using various high- and low-pass filters. The wavelet family can be thought of as a high-pass filter, so wavelet-based high- and low-frequency images can be created from any input image. By definition, the low-frequency image is of lower resolution and the high-frequency image contains the detail of the image.

This process can be repeated recursively: the created low-frequency image could be again processed with the kernels to create new images with even lower resolution. For example, starting with a 5-meter image, a 10-meter low-pass image and the corresponding high-pass image could be created. A second iteration would create a 20-meter low-pass and corresponding high-pass images. A third recursion would create a 40-meter low-pass and corresponding high-frequency images, and so forth.
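The separation just described is easy to verify numerically. A minimal sketch, assuming NumPy and SciPy are available, with a 3 × 3 moving average standing in for the low-pass filter:

    import numpy as np
    from scipy.ndimage import uniform_filter

    band = np.random.rand(256, 256) * 255.0  # stand-in image band

    low = uniform_filter(band, size=3)       # low-pass: 3x3 moving average
    high = band - low                        # corresponding high-pass image

    # The two components together contain all of the original information.
    print(np.allclose(band, low + high))     # True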

Consider two images taken on the same day of the same area: one a 5-meter panchromatic, the other 40-meter multispectral. The 5-meter image has better spatial resolution, but the 40-meter image has better spectral resolution. It would be desirable to take the high-pass information from the 5-meter image and combine it with the 40-meter multispectral image, yielding a 5-meter multispectral image.

Using wavelets, one can decompose the 5-meter image through several iterations until a 40-meter low-pass image is generated, plus all the corresponding high-pass images derived during the recursive decomposition. This 40-meter low-pass image, derived from the original 5-meter pan image, can be replaced with the 40-meter multispectral image and the whole wavelet decomposition process reversed, using the high-pass images derived during the decomposition, to reconstruct a 5-meter resolution multispectral image. If all of the above calculations are done in a mathematically rigorous way (histomatch and resample before substitution, and so forth), one can derive a multispectral image that has the high-pass (high-frequency) details from the 5-meter image.

In the above scenario, it should be noted that the high-resolution image (panchromatic, perhaps) is a single band, and so the substitution image, from the multispectral image, must also be a single band. There are tools available to compress the multispectral image into a single band for substitution using the IHS transform or PC transform. Alternately, single bands can be processed sequentially. The approximation component of the high spectral resolution image and the horizontal, vertical, and diagonal components of the high spatial resolution image are fused into a new output image.

Figure 148: Wavelet Resolution Merge
[The high spectral resolution image is resampled and histogram matched; the high spatial resolution image is decomposed by the DWT into a, h, v, and d components; the matched image is substituted for the approximation (a) and the DWT⁻¹ yields the fused image.]
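A highly simplified sketch of the substitution in Figure 148, again assuming PyWavelets and NumPy. This is an illustration only, not the ERDAS IMAGINE implementation, which also handles histogram matching, resampling, and edge effects. Here the pan pixel size is half that of the multispectral band, so one decomposition level suffices:

    import numpy as np
    import pywt

    pan = np.random.rand(512, 512)      # high spatial resolution band
    ms_band = np.random.rand(256, 256)  # one multispectral band, 2x pixel size

    # Decompose the pan band one level; its approximation now has the
    # same pixel size (256 x 256) as the multispectral band.
    approx, details = pywt.dwt2(pan, 'bior3.3', mode='periodization')

    # In a rigorous merge, ms_band would first be histogram matched to
    # approx. Here it is substituted directly for the approximation and
    # the decomposition is reversed with the pan detail images.
    fused = pywt.idwt2((ms_band, details), 'bior3.3', mode='periodization')

For a 4x pixel size ratio, the decomposition would be applied twice and the substitution made at the second level.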

Prerequisites and Limitations

Precise Coregistration

A first prerequisite is that the two images be precisely co-registered. For some sensors (for example, Landsat 7 ETM+) this co-registration is inherent in the dataset. If this is not the case, a greatly over-defined 2nd order polynomial transform should be used to coregister one image to the other. By over-defining the transform (that is, by having far more than the minimum number of tie points), it is possible to reduce the random RMS error to the subpixel level. This may require 30-60 tie points for a typical Landsat TM—SPOT Pan co-registration. Evaluation of the X- and Y-Residual and the RMS Error columns in the ERDAS IMAGINE GCP Tool will indicate the accuracy of registration. In practice, well-distributed tie points are collected until the predicted point consistently falls exactly where it should; this is easily accomplished by using the Point Prediction option in the GCP Tool. At that time, the transform must be correct. After creating the coregistered images, they should be codisplayed in an ERDAS IMAGINE Viewer. Then the Fade, Flicker, and Swipe Tools can be used to visually evaluate the precision of the coregistration.

When doing the coregistration, it is generally preferable to register the lower resolution image to the higher resolution image; that is, the high resolution image is used as the Reference Image. This will allow the greatest accuracy of registration. However, if the lowest resolution image has georeferencing that is to be retained, it may be desirable to use it as the Reference Image. A larger number of tie points and more attention to precise work would then be required to attain the same registration accuracy.

It is preferable to store the high and low resolution images as separate image files rather than Layerstacking them into a single image file. In ERDAS IMAGINE, stacked image layers are resampled to a common pixel size. Since the Wavelet Resolution Merge algorithm does the pixel resampling at an optimal stage in the calculation, keeping the files separate avoids multiple resamplings.

Identical Spectral Range

Secondly, an underlying assumption of resolution merge algorithms is that the two images are spectrally identical. Thus, while a SPOT Panchromatic image can be used to sharpen TM bands 1-4, it would be questionable to use it for TM bands 5 and 7 and totally inappropriate for TM band 6 (thermal emission). If the datasets are not spectrally identical, the spectral fidelity of the MS dataset will be lost. It has been noted (Lemeshewsky, 2002b) that there can be spectrally-induced contrast reversals between visible and NIR bands at, for example, soil-vegetation boundaries. This can produce degraded edge definition or artifacts.

Temporal Considerations

A trivial corollary is that the two images must have no temporally-induced differences. If a crop has been harvested, trees have dropped their foliage, lakes have grown or shrunk, and so forth, then merging of the two images in that area is inappropriate. If the areas of change are small, the merge can proceed and those areas removed from evaluation. If, however, the areas of change are large, the histogram matching step may introduce data distortions.

Theoretical Limitations

As described in the discussion of the discrete wavelet transform, the algorithm downsamples the high spatial resolution input image by a factor of two with each iteration. This produces approximation (a) images with pixel sizes reduced by a factor of two with each iteration. The low (spatial) resolution image will substitute exactly for the "a" image only if the input images have relative pixel sizes differing by a power of 2. For the most common scenarios (Landsat ETM+, IKONOS, and QuickBird, whose panchromatic and multispectral pixel sizes differ by a factor of 2 or 4), this is not a problem. Any other pixel size ratio will require resampling of the low (spatial) resolution image prior to substitution. Although the mathematics of the algorithm are precise for any pixel size ratio, certain ratios can result in a degradation of the substitution image that may not be fully overcome by the subsequent wavelet sharpening. This will result in a less than optimal enhancement.

In practice, all images are degraded due to atmospheric refraction and scattering of the returning signal. This is termed "point spread". Thus, to some (unknown) extent, both images in a resolution merge operation have already been degraded; both have, to an unknown extent, been "smeared". It is not reasonable to assume that each multispectral pixel can be precisely devolved into nine or more subpixels. Thus, a resolution increase of greater than two or three becomes theoretically questionable.

Spectral Transform

Three merge scenarios are possible. The simplest is when the input low (spatial) resolution image is only one band, for example, a single band of a multispectral image. In this case, the only option is to select which band to use. If the low resolution image to be processed is a multispectral image, two methods will be offered for creating the grayscale representation of the multispectral image intensity: IHS and PC.

The IHS method accepts only 3 input bands. It has been suggested that this technique produces an output image that is the best for visual interpretation. Thus, this technique would be appropriate when producing a final output product for map production. Since a visual product is likely to be only an R, G, B image, the 3-band limitation on this method is not a distinct limitation. Clearly, if one wished to sharpen more data layers, the bands could be done as separate groups of 3 and then the whole dataset layerstacked back together.

Lemeshewsky (Lemeshewsky, 2002b) discusses some theoretical limitations on IHS sharpening that suggest that sharpening of the bands individually (as discussed above) may be preferable. Yocky (Yocky, 1995) demonstrates that the IHS transform can distort colors, particularly red, and discusses theoretical explanations.

The PC Method will accept any number of input data layers. It has been suggested (Lemeshewsky, 2002a) that this technique produces an output image that better preserves the spectral integrity of the input dataset. Thus, this method would be most appropriate if further processing of the data is intended, for example, if the next step was a classification operation. Note, however, that Zhang (Zhang, 1999) has found equivocal results with the PC versus IHS approaches.

The wavelet, IHS, and PC calculations produce single precision floating point output. Thus, the resultant image must undergo a data compression to get it back to 8 bit format.

Spectral Enhancement

The enhancement techniques that follow require more than one band of data. They can be used to:

• compress bands of data that are similar
• extract new bands of data that are more interpretable to the eye
• apply mathematical transforms and algorithms
• display a wider variety of information in the three available color guns (R, G, B)

Some of these enhancements can be used to prepare data for classification. However, this is a risky practice unless you are very familiar with your data and the changes that you are making to it. Anytime you alter values, you risk losing some information.

In this documentation, some examples are illustrated with two-dimensional graphs. However, you are not limited to two-dimensional (two-band) data. ERDAS IMAGINE programs allow an unlimited number of bands to be used; the principles outlined below apply to any number of bands. Keep in mind that processing such data sets can require a large amount of computer swap space.

Principal Components Analysis

Principal components analysis (PCA) is often used as a method of data compression. It allows redundant data to be compacted into fewer bands—that is, the dimensionality of the data is reduced. The bands of PCA data are noncorrelated and independent, and are often more interpretable than the source data (Jensen, 1996; Faust, 1989).

The process is easily explained graphically with an example of data in two bands. Below is an example of a two-band scatterplot, which shows the relationships of data file values in two bands. The values of one band are plotted against those of the other. If both bands have normal distributions, an ellipse shape results.

Scatterplots and normal distributions are discussed in "Math Topics" on page 697.

Figure 149: Two Band Scatterplot
[Band A data file values (0 to 255) plotted against Band B data file values (0 to 255), with the histogram of each band along its axis; the point cloud forms an ellipse.]

Ellipse Diagram

In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid (3 dimensions), or hyperellipsoid (more than 3 dimensions) is formed if the distributions of each input band are normal or near normal. (The term ellipse is used for general purposes here.)

To perform PCA, the axes of the spectral space are rotated, changing the coordinates of each pixel in spectral space, as well as the data file values. The new axes are parallel to the axes of the ellipse.

First Principal Component

The length and direction of the widest transect of the ellipse are calculated using matrix algebra in a process explained below. The transect, which corresponds to the major (longest) axis of the ellipse, is called the first principal component of the data. The direction of the first principal component is the first eigenvector, and its length is the first eigenvalue (Taylor, 1977).

A new axis of the spectral space is defined by this first principal component. The points in the scatterplot are now given new coordinates, which correspond to this new axis. Since, in spectral space, the coordinates of the points are the data file values, new data file values are derived from this process. These values are stored in the first principal component band of a new data file.

Figure 150: First Principal Component
[The principal component as a new axis running through the long dimension of the ellipse.]

The first principal component shows the direction and length of the widest transect of the ellipse. Therefore, as an axis in spectral space, it measures the highest variation within the data. In Figure 151 it is easy to see that the first eigenvalue is always greater than the ranges of the input bands, just as the hypotenuse of a right triangle must always be longer than the legs.

Figure 151: Range of First Principal Component
[The range of PC 1 along the ellipse's major axis, compared with the smaller ranges of Band A and Band B along the original axes.]

Successive Principal Components

The second principal component is the widest transect of the ellipse that is orthogonal (perpendicular) to the first principal component. As such, the second principal component describes the largest amount of variance in the data that is not already described by the first principal component (Taylor, 1977). In a two-dimensional analysis, the second principal component corresponds to the minor axis of the ellipse.

Figure 152: Second Principal Component
[PC 2 drawn at a 90° angle (orthogonal) to PC 1.]

In n dimensions, there are n principal components. Each successive principal component:

• is the widest transect of the ellipse that is orthogonal to the previous components in the n-dimensional space of the scatterplot (Faust, 1989)
• accounts for a decreasing amount of the variation in the data which is not already accounted for by previous principal components (Taylor, 1977)

Although there are n output bands in a PCA, the first few bands account for a high proportion of the variance in the data—in some cases, almost 100%. Therefore, PCA is useful for compressing data into fewer bands. In other applications, useful information can be gathered from the principal component bands with the least variance. These bands can show subtle details in the image that were obscured by higher contrast in the original image. These bands may also show regular noise in the data (for example, the striping in old MSS data) (Faust, 1989).

Computing Principal Components

To compute a principal components transformation, a linear transformation is performed on the data. This means that the coordinates of each pixel in spectral space (the original data file values) are recomputed using a linear equation. The result of the transformation is that the axes in n-dimensional spectral space are shifted and rotated to be relative to the axes of the ellipse.

To perform the linear transformation, the eigenvectors and eigenvalues of the n principal components must be mathematically derived from the covariance matrix, as shown in the following equation:

    E \, Cov \, E^T = V

    V = \begin{pmatrix} v_1 & 0 & \cdots & 0 \\ 0 & v_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & v_n \end{pmatrix}

Where:
Cov = the covariance matrix
E = the matrix of eigenvectors
T = the transposition function
V = a diagonal matrix of eigenvalues, in which all nondiagonal elements are zeros

V is computed so that its nonzero elements are ordered from greatest to least, so that v1 > v2 > v3 ... > vn.

Source: Faust, 1989

Because the eigenvalues are ordered from v1 to vn, the first eigenvalue is the largest and represents the most variance in the data. The matrix V is the covariance matrix of the output principal component file. The zeros represent the covariance between bands (there is none), and the eigenvalues are the variance values for each band. A full explanation of this computation can be found in Gonzalez and Wintz, 1977.
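The eigendecomposition above maps directly onto standard linear algebra routines. A minimal sketch, assuming NumPy, with the band data arranged one column per band (here eigvecs.T plays the role of E, whose rows are the unit-length eigenvectors):

    import numpy as np

    pixels = np.random.rand(10000, 6) * 255.0   # rows: pixels, columns: bands

    cov = np.cov(pixels, rowvar=False)          # Cov, an n x n matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh, since Cov is symmetric

    # Order the eigenvalues (and matching eigenvectors) from greatest
    # to least, so that v1 > v2 > ... > vn as required.
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Verify E Cov E^T = V, a diagonal matrix of eigenvalues.
    v = eigvecs.T @ cov @ eigvecs
    print(np.allclose(v, np.diag(eigvals)))     # True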

Each column of the resulting eigenvector matrix, E, describes a unit-length vector in spectral space, which shows the direction of the principal component (the ellipse axis). The numbers are used as coefficients in the following equation, to transform the original data file values into the principal component values:

    P_e = \sum_{k=1}^{n} d_k E_{ke}

Where:
e = the number of the principal component (first, second, and so on)
Pe = the output principal component value for principal component number e
k = a particular input band
n = the total number of bands
dk = an input data file value in band k
Eke = the eigenvector matrix element at row k, column e

Source: Modified from Gonzalez and Wintz, 1977

Decorrelation Stretch

The purpose of a contrast stretch is to:

• alter the distribution of the image DN values within the 0 - 255 range of the display device, and
• utilize the full range of values in a linear fashion.

A principal components transform converts a multiband image into a set of mutually orthogonal images portraying inter-band variance. Depending on the DN ranges and the variance of the individual input bands, these new images (PCs) occupy only a portion of the possible 0 - 255 data range. Each PC is separately stretched to fully utilize the data range. The new stretched PC composite image is then retransformed to the original data areas. The decorrelation stretch stretches the principal components of an image, not the original image.

Either the original PCs or the stretched PCs may be saved as a permanent image file for viewing after the stretch.

NOTE: Storage of PCs as floating point, single precision is probably appropriate in this case.
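A minimal sketch of the decorrelation stretch, assuming NumPy: project onto the principal components as in the equation for Pe, min-max stretch each PC to 0 - 255, and rotate back to the original data space. The function name is illustrative, not an ERDAS IMAGINE call:

    import numpy as np

    def decorrelation_stretch(pixels):
        """pixels: rows are pixels, columns are bands."""
        mean = pixels.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
        pcs = (pixels - mean) @ eigvecs          # P_e = sum of d_k * E_ke
        # Stretch each principal component separately to fill 0 - 255.
        lo, hi = pcs.min(axis=0), pcs.max(axis=0)
        stretched = (pcs - lo) / (hi - lo) * 255.0
        # Retransform the stretched PCs back to the original data space.
        return stretched @ eigvecs.T + mean

    out = decorrelation_stretch(np.random.rand(10000, 3) * 255.0)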

Tasseled Cap

The different bands in a multispectral image can be visualized as defining an N-dimensional space where N is the number of bands. Each pixel, positioned according to its DN value in each band, lies within the N-dimensional space. This pixel distribution is determined by the absorption/reflection spectra of the imaged material. This clustering of the pixels is termed the data structure (Crist and Kauth, 1986).

See "Raster Data" on page 1 for more information on absorption/reflection spectra.

The data structure can be considered a multidimensional hyperellipsoid. The principal axes of this data structure are not necessarily aligned with the axes of the data space (defined as the bands of the input image). They are more directly related to the absorption spectra. For viewing purposes, it is advantageous to rotate the N-dimensional space such that one or two of the data structure axes are aligned with the Viewer X and Y axes. In particular, you could view the axes that are largest for the data structure produced by the absorption peaks of special interest for the application.

For example, a geologist and a botanist are interested in different absorption features. They would want to view different data structures and therefore, different data structure axes. Both would benefit from viewing the data in a way that would maximize visibility of the data structure of interest.

See the discussion on Principal Components Analysis on page 492.

The Tasseled Cap transformation offers a way to optimize data viewing for vegetation studies. Research has produced three data structure axes that define the vegetation information content (Crist et al, 1986; Crist and Kauth, 1986):

• Brightness—a weighted sum of all bands, defined in the direction of the principal variation in soil reflectance.
• Greenness—orthogonal to brightness, a contrast between the near-infrared and visible bands. Strongly related to the amount of green vegetation in the scene.
• Wetness—relates to canopy and soil moisture (Lillesand and Kiefer, 1987).

A simple calculation (linear combination) then rotates the data space to present any of these axes to you. These rotations are sensor-dependent, but once defined for a particular sensor (say Landsat 4 TM), the same rotation works for any scene taken by that sensor. The increased dimensionality (number of bands) of TM vs. MSS allowed Crist et al (Crist et al, 1986) to define three additional axes, termed Haze, Fifth, and Sixth. Lavreau (Lavreau, 1991) has used this haze parameter to devise an algorithm to dehaze Landsat imagery.

The Tasseled Cap algorithm implemented in the Image Interpreter provides the correct coefficient for MSS, TM4, and TM5 imagery. For TM4, the calculations are:

    Brightness = .3037 (TM1) + .2793 (TM2) + .4743 (TM3) + .5585 (TM4) + .5082 (TM5) + .1863 (TM7)
    Greenness = -.2848 (TM1) - .2435 (TM2) - .5436 (TM3) + .7243 (TM4) + .0840 (TM5) - .1800 (TM7)
    Wetness = .1509 (TM1) + .1973 (TM2) + .3279 (TM3) + .3406 (TM4) - .7112 (TM5) - .4572 (TM7)
    Haze = .8832 (TM1) - .0819 (TM2) - .4580 (TM3) - .0032 (TM4) - .0563 (TM5) + .0130 (TM7)

Source: Modified from Crist et al, 1986; Jensen, 1996
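The TM4 rotation above is a single matrix multiplication. A sketch assuming NumPy, with the six reflective TM bands stacked as columns in the order TM1, TM2, TM3, TM4, TM5, TM7:

    import numpy as np

    # Rows: Brightness, Greenness, Wetness, Haze (the TM4 coefficients).
    coeffs = np.array([
        [ .3037,  .2793,  .4743,  .5585,  .5082,  .1863],
        [-.2848, -.2435, -.5436,  .7243,  .0840, -.1800],
        [ .1509,  .1973,  .3279,  .3406, -.7112, -.4572],
        [ .8832, -.0819, -.4580, -.0032, -.0563,  .0130],
    ])

    pixels = np.random.rand(10000, 6) * 255.0   # stand-in TM pixel values
    tc = pixels @ coeffs.T                      # columns: B, G, W, Haze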

RGB to IHS

The color monitors used for image display on image processing systems have three color guns. These correspond to red, green, and blue (R, G, B), the additive primary colors. When displaying three bands of a multiband data set, the viewed image is said to be in R, G, B space.

However, it is possible to define an alternate color space that uses intensity (I), hue (H), and saturation (S) as the three positioned parameters (in lieu of R, G, and B). This system is advantageous in that it presents colors more nearly as perceived by the human eye.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).
• Saturation represents the purity of color and also varies linearly from 0 to 1.
• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension (see Figure 153). In Figure 153, 0 to 255 is the selected range; it could be defined as any data range. However, hue must vary from 0 to 360 to define the entire sphere (Buchanan, 1979).

Figure 153: Intensity, Hue, and Saturation Color Coordinate System
[Intensity runs along the vertical axis from 0 (black) to 255 (white); saturation increases from 0 on the axis to 255 at the surface; hue is the circular dimension, from 0 at red through green and blue and back to red at 360. Source: Buchanan, 1979]

To use the RGB to IHS transform, use the RGB to IHS function from Image Interpreter.

The algorithm used in the Image Interpreter RGB to IHS transform is (Conrac Corporation, 1980):

    r = (M - R) / (M - m)
    g = (M - G) / (M - m)
    b = (M - B) / (M - m)

Where:
R, G, B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B

NOTE: At least one of the R, G, or B values is 0, corresponding to the color with the least value, and at least one of the R, G, or B values is 1, corresponding to the color with the largest value.

The equation for calculating intensity in the range of 0 to 1.0 is:

    I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

    If M = m, S = 0
    If I ≤ 0.5, S = (M - m) / (M + m)
    If I > 0.5, S = (M - m) / (2 - M - m)

The equations for calculating hue in the range of 0 to 360 are:

    If M = m, H = 0
    If R = M, H = 60 (2 + b - g)
    If G = M, H = 60 (4 + r - b)
    If B = M, H = 60 (6 + g - r)

Where:
R, G, B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B
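A direct transcription of these equations into Python, operating on one pixel at a time for clarity (the function name is illustrative, not an ERDAS IMAGINE call):

    def rgb_to_ihs(R, G, B):
        """R, G, B in 0..1; returns I in 0..1, H in 0..360, S in 0..1."""
        M, m = max(R, G, B), min(R, G, B)
        I = (M + m) / 2.0
        if M == m:                  # gray pixel: no hue or saturation
            return I, 0.0, 0.0
        S = (M - m) / (M + m) if I <= 0.5 else (M - m) / (2.0 - M - m)
        r, g, b = (M - R) / (M - m), (M - G) / (M - m), (M - B) / (M - m)
        if R == M:
            H = 60.0 * (2.0 + b - g)
        elif G == M:
            H = 60.0 * (4.0 + r - b)
        else:                       # B == M
            H = 60.0 * (6.0 + g - r)
        return I, H % 360.0, S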

IHS to RGB

The family of IHS to RGB is intended as a complement to the standard RGB to IHS transform. In the IHS to RGB algorithm, a min-max stretch is applied to either intensity (I), saturation (S), or both, so that they more fully utilize the 0 to 1 value range. The values for hue (H), a circular dimension, are 0 to 360. However, depending on the dynamic range of the DN values of the input image, it is possible that I or S or both occupy only a part of the 0 to 1 range. In this model, a min-max stretch is applied to either I, S, or both, so that they more fully utilize the 0 to 1 value range. After stretching, the full IHS image is retransformed back to the original RGB space. As the parameter Hue is not modified, it largely defines what we perceive as color, and the resultant image looks very much like the input image.

It is not essential that the input parameters (IHS) to this transform be derived from an RGB to IHS transform. You could define I and/or S as other parameters, set Hue at 0 to 360, and then transform to RGB space. This is a method of color coding other data sets. In another approach (Daily, 1983), H and I are replaced by low- and high-frequency radar imagery. You can also replace I with radar intensity before the IHS to RGB transform (Croft (Holcomb), 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge Landsat TM with SPOT panchromatic imagery (Chavez et al, 1991).

NOTE: Use the Spatial Modeler for this analysis.

See the previous section on RGB to IHS transform for more information.

The algorithm used by ERDAS IMAGINE for the IHS to RGB function is (Conrac Corporation, 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0.

    If I ≤ 0.5, M = I (1 + S)
    If I > 0.5, M = I + S - I (S)
    m = 2 * I - M

The equations for calculating R in the range of 0 to 1.0 are:

    If H < 60, R = m + (M - m)(H ÷ 60)
    If 60 ≤ H < 180, R = M
    If 180 ≤ H < 240, R = m + (M - m)((240 - H) ÷ 60)
    If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

    If H < 120, G = m
    If 120 ≤ H < 180, G = m + (M - m)((H - 120) ÷ 60)
    If 180 ≤ H < 300, G = M
    If 300 ≤ H ≤ 360, G = m + (M - m)((360 - H) ÷ 60)

The equations for calculating B in the range of 0 to 1.0 are:

    If H < 60, B = M
    If 60 ≤ H < 120, B = m + (M - m)((120 - H) ÷ 60)
    If 120 ≤ H < 240, B = m
    If 240 ≤ H < 300, B = m + (M - m)((H - 240) ÷ 60)
    If 300 ≤ H ≤ 360, B = M
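The inverse transform, transcribed the same way (again an illustrative sketch, one pixel at a time):

    def ihs_to_rgb(I, H, S):
        """I, S in 0..1; H in 0..360; returns R, G, B in 0..1."""
        M = I * (1.0 + S) if I <= 0.5 else I + S - I * S
        m = 2.0 * I - M

        def ramp(h):
            # Shared piecewise shape: an m-to-M ramp over a 60 degree span.
            return m + (M - m) * (h / 60.0)

        if H < 60:     R = ramp(H)
        elif H < 180:  R = M
        elif H < 240:  R = ramp(240.0 - H)
        else:          R = m

        if H < 120:    G = m
        elif H < 180:  G = ramp(H - 120.0)
        elif H < 300:  G = M
        else:          G = ramp(360.0 - H)

        if H < 60:     B = M
        elif H < 120:  B = ramp(120.0 - H)
        elif H < 240:  B = m
        elif H < 300:  B = ramp(H - 240.0)
        else:          B = M
        return R, G, B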

Indices

Indices are used to create output images by mathematically combining the DN values of different bands. These may be simplistic:

    (Band X - Band Y)

or more complex:

    (Band X - Band Y) / (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

    Band X / Band Y

These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.

See "Raster Data" on page 1 for more information on the absorption/reflection spectra.

Applications

• Indices are used extensively in mineral exploration and vegetation analysis to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences that cannot be observed in the display of the original color bands.
• Indices can also be used to minimize shadow effects in satellite and aircraft multispectral images. Black and white images of individual indices or a color combination of three ratios may be generated.
• Certain combinations of TM ratios are routinely used by geologists for interpretation of Landsat imagery for mineral type. For example: Red 5/7, Green 5/4, Blue 3/1.

Integer Scaling Considerations

The output images obtained by applying indices are generally created in floating point to preserve all numerical precision. If there are two bands, A and B, then:

    ratio = A / B

If A >> B (much greater than), then a normal integer scaling would be sufficient. If A > B and A is never much greater than B, scaling might be a problem in that the data range might only go from 1 to 2 or from 1 to 3; integer scaling would give very little contrast. For cases in which A < B or A << B, integer scaling would always truncate to 0, and all fractional data would be lost. A multiplication constant factor would also not be very effective in seeing the data contrast between 0 and 1, which may very well be a substantial part of the data image.

One approach to handling the entire ratio range is to actually process the function:

    ratio = atan(A / B)

This would give a better representation for A/B < 1 as well as for A/B > 1.

Index Examples

The following are examples of indices that have been preprogrammed in the Image Interpreter in ERDAS IMAGINE:

• IR/R (infrared/red)
• SQRT (IR/R)
• Vegetation Index = IR - R
• Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)
• Transformed NDVI (TNDVI) = \sqrt{\frac{IR - R}{IR + R} + 0.5}
• Iron Oxide = TM 3/1
• Clay Minerals = TM 5/7
• Ferrous Minerals = TM 5/4
• Mineral Composite = TM 5/7, 5/4, 3/1
• Hydrothermal Composite = TM 5/7, 3/1, 4/3

Source: Modified from Sabins, 1987; Jensen, 1996; Tucker, 1979

The following table shows the infrared (IR) and red (R) band for some common sensors (Tucker, 1979; Jensen, 1996):

    Sensor        IR Band    R Band
    Landsat MSS   7          5
    SPOT XS       3          2
    Landsat TM    4          3
    NOAA AVHRR    2          1

Image Algebra

Image algebra is a general term used to describe operations that combine the pixels of two or more raster layers in mathematical combinations. For example, the calculation:

    (infrared band) - (red band)
    DNir - DNred

yields a simple, yet very useful, measure of the presence of vegetation. NDVI is a combination of addition, subtraction, and division:

    NDVI = (IR - R) / (IR + R)
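As a concrete example of image algebra, NDVI for Landsat TM (IR = band 4, R = band 3, per the table above), sketched with NumPy:

    import numpy as np

    tm3 = np.random.rand(512, 512) * 255.0   # red band (stand-in)
    tm4 = np.random.rand(512, 512) * 255.0   # infrared band (stand-in)

    # A small epsilon guards against division by zero over dark pixels.
    ndvi = (tm4 - tm3) / (tm4 + tm3 + 1e-9)

    # Rescale the -1..1 floating point result to 8-bit for display.
    ndvi_8bit = ((ndvi + 1.0) / 2.0 * 255.0).astype(np.uint8)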

At the other extreme is the Tasseled Cap calculation (described earlier in this section), which uses a more complicated mathematical combination of as many as six bands to define vegetation. Band ratios, such as:

    TM5 / TM7 = clay minerals

are also commonly used. These are derived from the absorption spectra of the material of interest. The numerator is a baseline of background absorption and the denominator is an absorption peak.

See "Raster Data" on page 1 for more information on absorption/reflection spectra.

Hyperspectral Image Processing

For a complete treatment of hyperspectral image processing, please see the IMAGINE Spectral Analysis™ User's Guide.

Independent Component Analysis

Any given remote sensing image can be decomposed into several features. The term "feature" here refers to remote sensing scene objects (for example, vegetation types or urban materials) with similar spectral characteristics. Therefore, the main objective of a feature extraction technique is to accurately retrieve these features. The extracted features can be subsequently utilized for improving the performance of various remote sensing applications (for example, classification, unmixing, target detection, and so forth).

Independent component analysis (ICA) is a high order feature extraction technique. ICA exploits the higher order statistical characteristics of multispectral and hyperspectral imagery, such as skewness and kurtosis. Skewness is a measure of asymmetry in an image histogram. Kurtosis is a measure of peakedness or flatness of an image histogram (that is, departure from a Normal Distribution). Principal Components Analysis (PCA) and Minimum Noise Fraction (MNF) model the data using a Gaussian distribution; however, most remote sensing data do not follow a Gaussian distribution.

ICA performs a linear transformation of the spectral bands such that the resulting components are decorrelated and independent. Additionally, it is worth noting that these axes are not restricted to being orthogonal, and thus lead to transformed data that is not only uncorrelated (second order statistics) but also independent (higher order statistics). Each independent component (IC) will contain information corresponding to a specific feature in the original image. For a detailed description of the mathematical formulation of ICA, refer to Shah, 2003 and Common, 1994.

Component Ordering

Unlike the Principal Component axes obtained by PCA, which exhibit a precedence in their order (that is, the direction of the first principal axis denotes a representation with maximum data variance, the second principal axis the second highest data variance, and so forth), the labeling of Independent Component axes does not imply any order; as a result, the ICs appear in an arbitrary order.

It is always desirable to have the number of components greater than the number of features in order to ensure that all the features are recovered as ICs. In cases where the number of features happens to be smaller than the number of components, the additional components will contain very little feature information and will resemble a noisy image. Noisy components may occur at any position in the image stack. In ERDAS IMAGINE, you are provided with the option of ordering the ICs so that the noisy images will be the last few components and can be easily eliminated from further analysis.

The available options for component ordering are:

• None
• Correlation Coefficient
• Skewness
• Kurtosis
• Combinations of the above
• Entropy
• Negentropy

None: No ordering is applied to the components. Three basic statistical measures provided for component ordering are as follows.

Correlation Coefficient: The correlation coefficient (RXY) between two images X and Y is defined as:

    R_{XY} = \frac{\sum_{i=1}^{P} (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{P} (X_i - \bar{X})^2} \, \sqrt{\sum_{i=1}^{P} (Y_i - \bar{Y})^2}},  0 ≤ R_{XY} ≤ 1

Where: P is the total number of image pixels.

The correlation coefficient is a measure of similarity between two images; the higher the correlation, the greater the similarity between their pixel values. The ICs are ordered based on their correlation with the spectral bands. ICs with low correlation correspond to noisy images and will therefore be lower in the image stack (that is, higher band numbers).

Skewness: Skewness is a measure of asymmetry in an image histogram. The skewness of image X is defined as:

    skewness_X = \frac{1}{(P-1)\,\sigma_X^3} \sum_{i=1}^{P} (X_i - \bar{X})^3,  0 ≤ skewness_X < ∞

Output bands are ordered by increasing asymmetry.

Kurtosis: Kurtosis is a measure of peakedness or flatness of an image histogram (that is, departure from a Normal Distribution). The kurtosis of image X is defined as:

    kurtosis_X = \frac{1}{(P-1)\,\sigma_X^4} \sum_{i=1}^{P} (X_i - \bar{X})^4 - 3,  -3 ≤ kurtosis_X < ∞

An image with a normally distributed histogram has zero skewness and kurtosis.

Combinations: Two combinations of the above three measures are also options for component ordering in ERDAS IMAGINE:

    | skewness_X × kurtosis_X |
    | R_{XY} × skewness_X × kurtosis_X |

Entropy: Entropy is a measure of image information content. Entropy of image X is defined as:

    entropy_X = -\sum_{k=0}^{G-1} P(k) \log_2 P(k)

Where:
G is the number of gray levels
k is the gray level
P(k) is the probability of gray level k
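These ordering statistics are available in standard numerical libraries. A sketch assuming NumPy and SciPy, computing them for one component image (scipy.stats.kurtosis returns the excess kurtosis, matching the "- 3" in the definition above):

    import numpy as np
    from scipy import stats

    ic = np.random.rand(256, 256)      # one independent component image
    x = ic.ravel()

    skewness = stats.skew(x)           # asymmetry of the histogram
    kurt = stats.kurtosis(x)           # peakedness; 0 for a normal histogram

    # Entropy from the gray level probabilities P(k), using G = 256 bins.
    hist, _ = np.histogram(x, bins=256)
    p = hist[hist > 0] / x.size
    entropy = -np.sum(p * np.log2(p))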

Negentropy: Negentropy is a measure of how far the distribution of a particular image departs from a normal distribution. Negentropy of image X is defined as:

    negentropy_X = \frac{1}{12} (skewness_X)^2 + \frac{1}{48} (kurtosis_X)^2

and is proportional to its skewness and kurtosis. Lower values of skewness/kurtosis/negentropy correspond to ICs resembling noisy images.

Band Generation for Multispectral Imagery

It must be noted that the number of desired components cannot exceed the number of spectral bands in the imagery. This should not be of concern when processing hyperspectral imagery, where the number of spectral bands is significantly higher compared to the number of features in the scene. However, feature extraction from multispectral imagery necessitates the generation of additional spectral bands, as explained below.

A linear combination of the original spectral bands will not lead to the additional information required for ICA feature extraction from multispectral imagery. Therefore, you should generate additional spectral bands through non-linear operations such as log(Xi), Xi · Xj (where Xi is a multispectral band and i ≠ j), Xi / Xj, and Xi².

Remote Sensing Applications for ICs

More information and examples of Spectral Unmixing, Shadow Removal, and Classification may be found in Shah et al, 2007.

• Visual analysis: ICs can be used to improve the visual interpretability through component color coding. A similar approach can be used for enhanced visual interpretability of hyperspectral imagery employing ICs.

• Resolution merge: Improved integration of imagery at different spatial resolution can be attained by substituting a high spatial resolution image for an IC followed by an inverse transformation.

• Spectral unmixing:

ICA can be employed for linear spectral unmixing when you have no prior information regarding the spectral response of features present in the scene. ICA extracted features can be further analyzed for identifying the proportion of each feature in a pixel. The formulation of each band in the case of three features can be expressed as:

    spectral band(λ) = A1(λ) · feature1 + A2(λ) · feature2 + A3(λ) · feature3

where Ai(λ) is the sensor response to feature i at wavelength λ. Ai(λ) and the ith feature estimated by ICA correspond to the spectral response and abundance, respectively, of the ith feature in the spectral band at wavelength λ.

• Anomaly/Target detection: In cases where there is no prior information regarding the material of the target features present in the scene, spectra from libraries cannot be used for detecting them. ICA, when employed for such applications, will separate out the anomalous features (that is, features with spectral response significantly different from other features present in the scene). Those anomalous features are contained in the independent components. These components can be further analyzed for improved anomaly/target detection.

• Shadow detection: High spatial resolution multispectral images necessitate shadow detection to facilitate improved feature analysis. By employing band combinations (for example, band ratio) for band generation, the spectral difference between the shadow and the shadow-occluded features can be enhanced. ICs obtained from these bands would recover the shadow in one of the components.

• Land use/land cover classification: ICs can be further analyzed based on their spectral, textural, and contextual information in order to obtain an improved thematic map.

• Multi temporal data analysis: Since feature based change detection techniques necessitate extraction of features with high accuracy, ICs are well suited to the analysis of multi temporal data.

Tips and Tricks

• Band generation: Use of meaningful band combinations for non-linear band generation in the case of multispectral imagery improves the performance of ICA in extracting features. For example, employing a Normalized Difference Vegetation Index (NDVI) as an additional band would certainly enhance the performance of ICA in extracting vegetation features.

• Desired number of components: A typical scene imaged by a hyperspectral sensor would not contain more than 10 to 15 features. Hence, the number of desired components should be restricted to less than 20 in order to ensure accurate and efficient results. Also, the number of desired components impacts the result of ICA. Consider the following two scenarios: first, ICA is performed on an image with the desired number of components set to 3; next, ICA is performed on the same image, this time with the desired number of components set to 4. The 3 ICs obtained in the first case would not be identical to any of the 4 ICs in the second case.

• Image background with zero pixel values: In cases where the images have a background with zero values, use a subset of the image that eliminates the background pixels for ICA feature extraction.

• Visual inspection of ICs: In addition to using any of the component ordering techniques, the ICs may be visually inspected to ensure that they do not resemble noisy images.
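Outside ERDAS IMAGINE, this workflow can be prototyped with scikit-learn's FastICA (an assumption; it is not the IMAGINE implementation), including the non-linear band generation recommended above for multispectral input:

    import numpy as np
    from sklearn.decomposition import FastICA

    bands = np.random.rand(4, 256, 256) + 0.1   # 4-band multispectral cube

    # Non-linear band generation: logs and band ratios supply the higher
    # order information ICA needs from multispectral data.
    generated = list(bands)
    generated += [np.log(b) for b in bands]
    generated += [bands[i] / bands[j]
                  for i in range(4) for j in range(4) if i != j]

    x = np.stack([g.ravel() for g in generated], axis=1)
    ics = FastICA(n_components=6, random_state=0).fit_transform(x)
    ic_images = ics.T.reshape(6, 256, 256)      # six independent components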

Fourier Analysis

Image enhancement techniques can be divided into two basic categories: point and neighborhood. Point techniques enhance the pixel based only on its value, with no concern for the values of neighboring pixels. These techniques include contrast stretches (nonadaptive), classification, and level slices. Neighborhood techniques enhance a pixel based on the values of surrounding pixels. As a result, these techniques require the processing of a possibly large number of pixels for each output pixel. The most common way of implementing these enhancements is via a moving window convolution. However, as the size of the moving window increases, the number of requisite calculations becomes enormous. An enhancement that requires a convolution operation in the spatial domain can be implemented as a simple multiplication in frequency space—a much faster calculation.

In ERDAS IMAGINE, the FFT is used to convert a raster image from the spatial (normal) domain into a frequency domain image. The FFT calculation converts the image into a series of two-dimensional sine waves of various frequencies. The Fourier image itself cannot be easily viewed, but the magnitude of the image can be calculated, which can then be displayed either in the Viewer or in the FFT Editor.

Analysts can edit the Fourier image to reduce noise or remove periodic features, such as striping. Once the Fourier image is edited, it is then transformed back into the spatial domain by using an IFFT. The result is an enhanced version of the original image.

This section focuses on the Fourier editing techniques available in the FFT Editor. Some rules and guidelines for using these tools are presented in this document. Also included are some examples of techniques that generally work for specific applications.

NOTE: You may also want to refer to the works cited at the end of this section for more information.

The basic premise behind a Fourier transform is that any one-dimensional function, f(x) (which might be a row of pixels), can be represented by a Fourier series consisting of some combination of sine and cosine terms and their associated coefficients. For example, a line of pixels with a high spatial frequency gray scale pattern might be represented in terms of a single coefficient multiplied by a sin(x) function. High spatial frequencies are those that represent frequent gray scale changes in a short pixel distance. Low spatial frequencies represent infrequent gray scale changes that occur gradually over a relatively large number of pixel distances. A more complicated function, f(x), might have to be represented by many sine and cosine terms with their associated coefficients.

Figure 154: One-Dimensional Fourier Analysis
[The original function, a square wave f(x) on 0 to 2π; the first terms of its Fourier series, sin x, (1/3) sin 3x, and (1/5) sin 5x; and the sums of the first 3 and first 9 terms of the series.]

Figure 154 shows how a function f(x) can be represented as a linear combination of sine and cosine. In this example the function is a square wave, whose cosine coefficients are zero, leaving only sine terms. The first three terms of the Fourier series are plotted in the upper right graph, and the plot of the sum is shown below it. After nine iterations, the Fourier series is approaching the original function.

A Fourier transform is a linear transformation that allows calculation of the coefficients necessary for the sine and cosine terms to adequately represent the image. This theory is used extensively in electronics and signal processing, where electrical signals are continuous and not discrete. Therefore, the DFT (Discrete Fourier Transform) has been developed. Because of the computational load in calculating the values for all the sine and cosine terms along with the coefficient multiplications, a highly efficient version of the DFT was developed and called the FFT (Fast Fourier Transform).

To handle images, which consist of many one-dimensional rows of pixels, a two-dimensional FFT has been devised that incrementally uses one-dimensional FFTs in each direction and then combines the result. These images are symmetrical about the origin.

Applications

Fourier transformations are typically used for the removal of noise such as striping, spots, or vibration in imagery by identifying periodicities (areas of high spatial frequency). Fourier editing can be used to remove regular errors in data such as those caused by sensor anomalies (for example, striping). This analysis technique can also be used across bands as another form of pattern/feature recognition.
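The FFT, edit, IFFT workflow can be prototyped with NumPy's FFT routines (an assumption; ERDAS IMAGINE uses its own .fft file format). The log-magnitude scaling here anticipates the Fourier Magnitude formula given below:

    import numpy as np

    band = np.random.rand(512, 512) * 255.0     # spatial domain image

    fourier = np.fft.fft2(band)                 # complex frequency image

    # Shift the origin (u, v) = (0, 0) to the center for display, and
    # log-scale the magnitude to fit a display device's dynamic range.
    mag = np.abs(np.fft.fftshift(fourier))
    display = 255.0 * np.log(mag / mag.max() * (np.e - 1.0) + 1.0)

    # Editing would happen on `fourier` here; the inverse transform
    # returns the (here unedited) original image.
    restored = np.fft.ifft2(fourier).real
    print(np.allclose(band, restored))          # True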

FFT

The FFT calculation is:

    F(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \, e^{-j 2\pi u x / M} \, e^{-j 2\pi v y / N}

Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u, v = spatial frequency variables
e = 2.71828, the natural logarithm base
j = the imaginary component of a complex number

Source: Modified from Oppenheim and Schafer, 1975; Press et al, 1988

The number of pixels horizontally and vertically must each be a power of two. If the dimensions of the input image are not a power of two, they are padded up to the next highest power of two. There is more information about this later in this section.

Images computed by this algorithm are saved with an .fft file extension.

You should run a Fourier Magnitude transform on an .fft file before viewing it in the Viewer. The FFT Editor automatically displays the magnitude without further processing.

Fourier Magnitude

The raster image generated by the FFT calculation is not an optimum image for viewing or editing. Each pixel of a Fourier image is a complex number (that is, it has two components: real and imaginary). For display as a single image, these components are combined in a root-sum of squares operation. Also, since the dynamic range of Fourier spectra vastly exceeds the range of a typical display device, the Fourier Magnitude calculation involves a logarithmic function.

Finally, a Fourier image is symmetric about the origin (u, v = 0, 0). If the origin is plotted at the upper left corner, the symmetry is more difficult to see than if the origin is at the center of the image. Therefore, in the Fourier magnitude image, the origin is shifted to the center of the raster array.

In this transformation, each .fft layer is processed twice. First, the maximum magnitude, |X|max, is computed. Then, the following computation is performed for each FFT element magnitude x:

    y(x) = 255.0 \, \ln\!\left( \frac{x}{|x|_{max}} (e - 1) + 1 \right)

Where:
x = input FFT element
y = the normalized log magnitude of the FFT element
|x|max = the maximum magnitude
e = 2.71828, the natural logarithm base
| | = the magnitude operator

This function was chosen so that y would be proportional to the logarithm of a linear function of x, with y(0) = 0 and y(|x|max) = 255.

In Figure 155, Image A is one band of a badly striped Landsat TM scene, and Image B is the Fourier Magnitude image derived from the Landsat image.

Figure 155: Example of Fourier Magnitude
[Image A, the striped Landsat TM band, with its origin at the upper left; Image B, the derived Fourier Magnitude image, with its origin at the center.]

Note that, although Image A has been transformed into Image B, these raster images are very different symmetrically. The origin of Image A is at (x, y) = (0, 0) in the upper left corner. In Image B, the origin (u, v) = (0, 0) is in the center of the raster. The low frequencies are plotted near this origin while the higher frequencies are plotted further out. Generally, the majority of the information in an image is in the low frequencies. This is indicated by the bright area at the center (origin) of the Fourier image.

It is important to realize that a position in a Fourier image, designated as (u, v), does not always represent the same frequency, because it depends on the size of the input raster image. A large spatial domain image contains components of lower frequency than a small spatial domain image. As mentioned, these lower frequencies are plotted nearer to the center (u, v = 0, 0) of the Fourier image. Note that the units of spatial frequency are inverse length, m⁻¹.

The sampling increments in the spatial and frequency domain are related by:

    Δu = 1 / (M Δx)
    Δv = 1 / (N Δy)

Where:
M = horizontal image size in pixels
N = vertical image size in pixels
Δx = pixel size
Δy = pixel size

For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5 m) into a Fourier image:

    Δu = Δv = 1 / (512 × 28.5) = 6.85 × 10⁻⁵ m⁻¹

    u or v    Frequency
    0         0
    1         6.85 × 10⁻⁵ m⁻¹
    2         13.7 × 10⁻⁵ m⁻¹

If the Landsat TM image was 1024 × 1024:

    Δu = Δv = 1 / (1024 × 28.5) = 3.42 × 10⁻⁵ m⁻¹

    u or v    Frequency
    0         0
    1         3.42 × 10⁻⁵ m⁻¹
    2         6.85 × 10⁻⁵ m⁻¹

So, the frequency represented by a (u, v) position depends on the size of the input image. For the above calculation, the sample images are 512 × 512 and 1024 × 1024 (powers of two). These were selected because the FFT calculation requires that the height and width of the input image be a power of two (although the image need not be square). In practice, input images usually do not meet this criterion. Three possible solutions are available in ERDAS IMAGINE:

• Subset the image.
• Pad the image—the input raster is increased in size to the next power of two by imbedding it in a field of the mean value of the entire input image.
• Resample the image so that its height and width are powers of two.

Figure 156: The Padding Technique
[A 300 × 400 input image embedded in a 512 × 512 field of the image's mean value.]

The padding technique is automatically performed by the FFT program. It produces a minimum of artifacts in the output Fourier image. If the image is subset using a power of two (that is, 64 × 64, 128 × 128, 64 × 128), no padding is used.

IFFT

The IFFT computes the inverse two-dimensional FFT of the spectrum stored.

• The input file must be in the compressed .fft format described earlier (that is, output from the FFT or FFT Editor).
• If the original image was padded by the FFT program, the padding is automatically removed by IFFT.

• This program creates (and deletes, upon normal termination) a temporary file large enough to contain one entire band of .fft data.

The specific expression calculated by this program is:

    f(x,y) = \frac{1}{M N} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v) \, e^{\,j 2\pi u x / M + j 2\pi v y / N},  0 ≤ x ≤ M-1, 0 ≤ y ≤ N-1

Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u, v = spatial frequency variables
e = 2.71828, the natural logarithm base

Source: Modified from Oppenheim and Schafer, 1975 and Press et al, 1988

Images computed by this algorithm are saved with an .ifft.img file extension by default.

Filtering

Operations performed in the frequency (Fourier) domain can be visualized in the context of the familiar convolution function. The mathematical basis of this interrelationship is the convolution theorem, which states that a convolution operation in the spatial domain is equivalent to a multiplication operation in the frequency domain:

    g(x,y) = h(x,y) * f(x,y)   is equivalent to   G(u,v) = H(u,v) × F(u,v)

Where:
f(x,y) = input image
h(x,y) = position invariant operation (convolution kernel)
g(x,y) = output image
G, F, H = Fourier transforms of g, f, h

The names high-pass, low-pass, high-frequency indicate that these convolution functions derive from the frequency domain.
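The convolution theorem can be verified directly with NumPy (assumed available). Multiplying FFTs corresponds to a circular (wrap-around) convolution, so the spatial side is written with explicit wrap-around shifts:

    import numpy as np

    f = np.random.rand(64, 64)                   # input image f(x, y)

    h = np.zeros((64, 64))                       # kernel h(x, y): a 3x3
    h[:3, :3] = 1.0 / 9.0                        # low-pass, zero padded

    # Circular convolution in the spatial domain: g = h * f.
    g_spatial = np.zeros_like(f)
    for i in range(3):
        for j in range(3):
            g_spatial += h[i, j] * np.roll(f, (i, j), axis=(0, 1))

    # Multiplication in the frequency domain: G = H x F.
    g_frequency = np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)).real

    print(np.allclose(g_spatial, g_frequency))   # True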

Low-Pass Filtering

The simplest example of this relationship is the low-pass kernel. The name, low-pass kernel, is derived from a filter that would pass low frequencies and block (filter out) high frequencies. In practice, this is easily achieved in the spatial domain by the M = N = 3 kernel:

    1 1 1
    1 1 1
    1 1 1

Obviously, as the size of the image and, particularly, the size of the low-pass kernel increases, the calculation becomes more time-consuming. Depending on the size of the input image and the size of the kernel, it can be faster to generate a low-pass image via Fourier processing. Figure 157 compares Direct and Fourier domain processing for finite area convolution.

Figure 157: Comparison of Direct and Fourier Domain Processing
[Size of neighborhood for calculation plotted against size of input image: Fourier processing is more efficient for large neighborhoods, direct processing for small ones. Source: Pratt, 1991]

In the Fourier domain, the low-pass operation is implemented by attenuating the pixels' frequencies that satisfy:

    u² + v² > D0²

D0 is often called the cutoff frequency.

As was pointed out earlier, the frequency represented by a particular u, v (or r) position depends on the size of the input image, and the low-pass information is concentrated toward the origin of the Fourier image. Thus, a smaller radius (r) has the same effect as a larger N (where N is the size of a kernel) in a spatial domain low-pass convolution. Depending on the size of the input image, a low-pass operation of r = 20 is equivalent to a spatial low-pass of various kernel sizes. For example:

    Image Size    Fourier Low-Pass r =    Convolution Low-Pass N =
    64 × 64       50                      3
                  30                      3.5
                  20                      5
                  10                      9
                  5                       14
    128 × 128     20                      13
                  10                      22
    256 × 256     20                      25
                  10                      42

This table shows that using a window on a 64 × 64 Fourier image with a radius of 50 as the cutoff is the same as using a 3 × 3 low-pass kernel on a 64 × 64 spatial domain image.

High-Pass Filtering

Just as images can be smoothed (blurred) by attenuating the high-frequency components of an image using low-pass filters, images can be sharpened and edge-enhanced by attenuating the low-frequency components using high-pass filters. In the Fourier domain, the high-pass operation is implemented by attenuating the pixels' frequencies that satisfy:

    u² + v² < D0²

Windows

The attenuation discussed above can be done in many different ways. In ERDAS IMAGINE Fourier processing, five window functions are provided to achieve different types of attenuation:

• Ideal
• Bartlett (triangular)
• Butterworth
• Gaussian
• Hanning (cosine)

Each of these windows must be defined when a frequency domain process is used. This application is perhaps easiest understood in the context of the high-pass and low-pass filter operations. Each window is discussed in more detail below.

Ideal

The simplest low-pass filtering is accomplished using the ideal window, so named because its cutoff point is absolute:

    H(u,v) = 1 if D(u,v) ≤ D0
    H(u,v) = 0 if D(u,v) > D0

All frequencies inside a circle of a radius D0 are retained completely (passed), and all frequencies outside the radius are completely attenuated. The point D0 is termed the cutoff frequency.

Figure 158: An Ideal Cross Section
[The gain H(u,v) is 1 from zero frequency out to the cutoff frequency D0, then drops abruptly to 0.]

Note that in Figure 158 the cross section is ideal. High-pass filtering using the ideal window looks like the following illustration:

    H(u,v) = 0 if D(u,v) ≤ D0
    H(u,v) = 1 if D(u,v) > D0

All frequencies inside a circle of a radius D0 are completely attenuated, and all frequencies outside the radius are retained completely (passed).

Figure 159: High-Pass Filtering Using the Ideal Window
[The gain is 0 from zero frequency out to the cutoff frequency D0, then jumps abruptly to 1.]

A major disadvantage of the ideal filter is that it can cause ringing artifacts, particularly if the radius (r) is small. The smoother functions (for example, Butterworth and Hanning) minimize this effect.

Bartlett

Filtering using the Bartlett window is a triangular function, as shown in the following low- and high-pass cross sections:

Figure 160: Filtering Using the Bartlett Window
[Low-pass: the gain falls linearly from 1 at zero frequency to 0 at D0. High-pass: the gain rises linearly from 0 at zero frequency to 1 at D0.]

Butterworth, Gaussian, and Hanning

The Butterworth, Gaussian, and Hanning windows are all smooth and greatly reduce the effect of ringing. The differences between them are minor and are of interest mainly to experts. For most normal types of Fourier image enhancement, they are essentially interchangeable. The Butterworth window reduces the ringing effect because it does not contain abrupt changes in value or slope. The following low- and high-pass cross sections illustrate this:

Figure 161: Filtering Using the Butterworth Window
[Low-pass: the gain falls smoothly from 1, passing through 0.5 at D0. High-pass: the gain rises smoothly, passing through 0.5 at D0. Frequency axis shown in multiples of D0.]

The equation for the low-pass Butterworth window is:

H(u, v) = 1 / (1 + [D(u, v) / D0]^2n)

NOTE: The Butterworth window approaches its window center gain asymptotically.

The equation for the Gaussian low-pass window is:

H(u, v) = e^(-(x / D0)²)

The equation for the Hanning low-pass window is:

H(u, v) = (1/2)(1 + cos(πx / 2D0))    for 0 ≤ x ≤ 2D0
H(u, v) = 0    otherwise
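These cross sections are simple functions of the distance x = D(u, v) from the Fourier origin, and translate directly into code. The fragment below is a minimal transcription of the three low-pass equations (an illustration, not the ERDAS IMAGINE implementation; n is the Butterworth order):

    import numpy as np

    def butterworth(x, d0, n=1):
        # Gain passes through 0.5 at the cutoff; no abrupt change in value or slope.
        return 1.0 / (1.0 + (x / d0) ** (2 * n))

    def gaussian(x, d0):
        return np.exp(-((x / d0) ** 2))

    def hanning(x, d0):
        # Cosine taper; reaches zero at 2 * d0 and stays there.
        return np.where(x <= 2 * d0,
                        0.5 * (1.0 + np.cos(np.pi * x / (2.0 * d0))),
                        0.0)

Any of these can replace the hard mask in the ideal filter sketch above to suppress ringing.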

Fourier Noise Removal
Occasionally, images are corrupted by noise that is periodic in nature. An example of this is the scan lines that are present in some TM images. When these images are transformed into Fourier space, the periodic line pattern becomes a radial line. The Fourier Analysis functions provide two main tools for reducing noise in images:

• editing
• automatic removal of periodic noise

Editing
In practice, it has been found that radial lines centered at the Fourier origin (u, v = 0, 0) are best removed using back-to-back wedges centered at (0, 0). It is possible to remove these lines using very narrow wedges with the Ideal window. However, the sudden transitions resulting from zeroing-out sections of a Fourier image cause a ringing of the image when it is transformed back into the spatial domain. This effect can be lessened by using a less abrupt window, such as Butterworth.

Other types of noise can produce artifacts, such as lines not centered at u, v = 0, 0 or circular spots in the Fourier image. As these artifacts are always symmetrical in the Fourier magnitude image, they can be removed using the tools provided in the FFT Editor, which enable you to attenuate a circular or rectangular region anywhere on the image.

Automatic Periodic Noise Removal
The use of the FFT Editor, as described above, enables you to selectively and accurately remove periodic noise from any image. However, operator interaction and a bit of trial and error are required. The automatic periodic noise removal algorithm has been devised to address images degraded uniformly by striping or other periodic anomalies. Use of this algorithm requires a minimum of input from you.

The image is first divided into 128 × 128 pixel blocks. The Fourier Transform of each block is calculated, and the log-magnitudes of each FFT block are averaged. The averaging removes all frequency domain quantities except those that are present in each block (that is, some sort of periodic interference). The average power spectrum is then used as a filter to adjust the FFT of the entire image. When the IFFT is performed, the result is an image that should have any periodic noise eliminated or significantly reduced. This method is partially based on the algorithms outlined in Cannon (Cannon, 1983) and Srinivasan et al (Srinivasan et al, 1988).

Select the Periodic Noise Removal option from Image Interpreter to use this function.

Homomorphic Filtering
Homomorphic filtering is based upon the principle that an image may be modeled as the product of illumination and reflectance components:

I(x, y) = i(x, y) × r(x, y)

Where:
I(x, y) = image intensity (DN) at pixel x, y
i(x, y) = illumination of pixel x, y
r(x, y) = reflectance at pixel x, y

The illumination image is a function of lighting conditions and shadows. The reflectance image is a function of the object being imaged. A log function can be used to separate the two components (i and r) of the image:

ln I(x, y) = ln i(x, y) + ln r(x, y)

This transforms the image from multiplicative to additive superposition. With the two component images separated, any linear operation can be performed. In this application, the image is now transformed into Fourier space. Because the illumination component usually dominates the low frequencies, while the reflectance component dominates the higher frequencies, the image may be effectively manipulated in the Fourier domain. By using a filter on the Fourier image that increases the high-frequency components, the reflectance image (related to the target material) may be enhanced, while the illumination image (related to the scene illumination) is de-emphasized. By applying an IFFT followed by an exponential function, the enhanced image is returned to the normal spatial domain. The flow chart in Figure 162 summarizes the homomorphic filtering process in ERDAS IMAGINE.

Figure 162: Homomorphic Filtering Process
[Input Image (i × r) → Log → Log Image (ln i + ln r) → FFT → Fourier Image (i = low freq., r = high freq.) → Butterworth Filter → Filtered Fourier Image (i decreased, r increased) → IFFT → Exponential → Enhanced Image]

Select the Homomorphic Filter option from Image Interpreter to use this function.
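The flow chart translates almost line for line into code. The sketch below is illustrative only (it is not the Image Interpreter implementation, and the gains and cutoff are hypothetical parameters): a Butterworth-shaped curve blends a low gain for the illumination frequencies into a high gain for the reflectance frequencies, and the offset of 1 guards against ln(0).

    import numpy as np

    def homomorphic(image, d0=30.0, low_gain=0.5, high_gain=2.0, n=1):
        """Log -> FFT -> filter -> IFFT -> exponential, per Figure 162."""
        rows, cols = image.shape
        u = np.arange(rows) - rows // 2
        v = np.arange(cols) - cols // 2
        dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
        butter = 1.0 / (1.0 + (dist / d0) ** (2 * n))      # low-pass shape
        gain = low_gain * butter + high_gain * (1.0 - butter)
        log_img = np.log(image.astype(float) + 1.0)        # ln i + ln r
        spectrum = np.fft.fftshift(np.fft.fft2(log_img))
        filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * gain))
        return np.exp(np.real(filtered)) - 1.0             # back to DN space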

As mentioned earlier, if an input image is not a power of two, the ERDAS IMAGINE Fourier analysis software automatically pads the image to the next largest size to make it a power of two. For manual editing, this causes no problems. However, in automatic processing, such as the homomorphic filter, the artifacts induced by the padding may have a deleterious effect on the output image. For this reason, it is recommended that images that are not a power of two be subset before being used in an automatic process.

A detailed description of the theory behind Fourier series and Fourier transforms is given in Gonzalez and Wintz (Gonzalez and Wintz, 1977). See also Oppenheim (Oppenheim and Schafer, 1975) and Press (Press et al, 1988).

Radar Imagery Enhancement
The nature of the surface phenomena involved in radar imaging is inherently different from that of visible/infrared (VIS/IR) images. When VIS/IR radiation strikes a surface it is either absorbed, reflected, or transmitted. The absorption is based on the molecular bonds in the (surface) material. Thus, this imagery provides information on the chemical composition of the target.

When radar microwaves strike a surface, they are reflected according to the physical and electrical properties of the surface, rather than the chemical composition. The strength of radar return is affected by slope, roughness, and vegetation cover. The conductivity of a target area is related to the porosity of the soil and its water content. Consequently, radar and VIS/IR data are complementary; they provide different information about the target area. An image in which these two data types are intelligently combined can present much more information than either image by itself.

See "Raster Data" on page 1 and "Raster and Vector Data Sources" on page 55 for more information on radar data.

This section describes enhancement techniques that are particularly useful for radar imagery. While these techniques can be applied to other types of image data, this discussion focuses on the special requirements of radar imagery enhancement. The ERDAS IMAGINE Radar Interpreter provides a sophisticated set of image processing tools designed specifically for use with radar imagery. This section describes the functions of the ERDAS IMAGINE Radar Interpreter.

For information on the Radar Image Enhancement function, see the section on Radiometric Enhancement on page 463.

Speckle Noise
Speckle noise is commonly observed in radar (microwave or millimeter wave) sensing systems, although it may appear in any type of remotely sensed image utilizing coherent radiation. An active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a passive microwave sensor that simply receives the low-level radiation naturally emitted by targets.

Like the light from a laser, the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area, these waves are no longer in phase. This is because of the different distances they travel from targets, or single versus multiple bounce scattering. Once out of phase, radar waves can interact to produce light and dark pixels known as speckle noise. Speckle noise must be reduced before the data can be effectively utilized. However, the image processing programs used to reduce speckle noise produce changes in the image.

Because any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image, you should not rectify, correct to ground range, or in any way resample, enhance, or classify the pixel values before removing speckle noise. Functions using Nearest Neighbor are technically permissible, but not advisable.

Since different applications and different sensors necessitate different speckle removal models, ERDAS IMAGINE Radar Interpreter includes several speckle reduction algorithms:

• Mean filter
• Median filter
• Lee-Sigma filter
• Local Region filter
• Lee filter
• Frost filter
• Gamma-MAP filter

NOTE: Speckle noise in radar images cannot be completely removed. However, it can be reduced significantly.

These filters are described in the following sections:

Mean Filter
The Mean filter is a simple calculation. The pixel of interest (center of window) is replaced by the arithmetic average of all values within the window. This filter does not remove the aberrant (speckle) value; it averages it into the data.

In theory, a bright and a dark pixel within the same window would cancel each other out. This consideration would argue in favor of a large window size (for example, 7 × 7). However, averaging results in a loss of detail, which argues for a small window size. In general, this is the least satisfactory method of speckle reduction. It is useful for applications where loss of resolution is not a problem.

Median Filter
A better way to reduce speckle, but still simplistic, is the Median filter. This filter operates by arranging all DN values in sequential order within the window that you define. The pixel of interest is replaced by the value in the center of this distribution. A Median filter is useful for removing pulse or spike noise. Pulse functions of less than one-half of the moving window width are suppressed or eliminated. In addition, step functions or ramp functions are retained.

The effect of Mean and Median filters on various signals is shown (for one dimension) in Figure 163.

Figure 163: Effects of Mean and Median Filters
[For a step, a ramp, a single pulse, and a double pulse, the original signal is shown beside its mean filtered and median filtered versions.]
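The one-dimensional behavior in Figure 163 can be reproduced in a few lines. In this illustrative fragment, a single-sample pulse survives a 3-sample mean filter in diluted form but is eliminated outright by the median:

    import numpy as np
    from scipy.ndimage import uniform_filter1d, median_filter

    signal = np.array([10, 10, 10, 80, 10, 10, 10], dtype=float)  # single pulse
    print(uniform_filter1d(signal, size=3))  # pulse is averaged into its neighbors
    print(median_filter(signal, size=3))     # pulse is suppressed entirely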

The Median filter is useful for noise suppression in any image. It does not affect step or ramp functions; it is an edge preserving filter (Pratt, 1991). It is also applicable in removing pulse function noise, which results from the inherent pulsing of microwaves. An example of the application of the Median filter is the removal of dead-detector striping, as found in Landsat 4 TM data (Crippen, 1989a).

Local Region Filter
The Local Region filter divides the moving window into eight regions based on angular position (North, South, East, West, NW, NE, SW, and SE). Figure 164 shows a 5 × 5 moving window and the regions of the Local Region filter.

Figure 164: Regions of Local Region Filter
[A 5 × 5 moving window centered on the pixel of interest, with the North, NE, and SW regions highlighted as examples of the eight angular regions.]

For each region, the variance is calculated as follows:

Variance = Σ (DNx,y - Mean)² / (n - 1)

Source: Nagao and Matsuyama, 1978

The algorithm compares the variance values of the regions surrounding the pixel of interest. The pixel of interest is replaced by the mean of all DN values within the region with the lowest variance (that is, the most uniform region). A region with low variance is assumed to have pixels minimally affected by wave interference, yet very similar to the pixel of interest. A region of low variance is probably such for several surrounding pixels.
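The region-selection logic can be sketched compactly, though the sketch below simplifies: it uses the four overlapping quadrants of the 5 × 5 window (closer to a Kuwahara-style filter) rather than the eight angular regions described above, and np.var divides by n rather than n - 1, which does not change the ranking of the regions.

    import numpy as np

    def local_region_filter(img):
        """Replace each pixel with the mean of its most uniform quadrant."""
        out = img.astype(float).copy()
        r = 2  # half-width of the 5 x 5 moving window
        for i in range(r, img.shape[0] - r):
            for j in range(r, img.shape[1] - r):
                quads = [img[i - r:i + 1, j - r:j + 1],   # NW quadrant
                         img[i - r:i + 1, j:j + r + 1],   # NE quadrant
                         img[i:i + r + 1, j - r:j + 1],   # SW quadrant
                         img[i:i + r + 1, j:j + r + 1]]   # SE quadrant
                best = min(quads, key=lambda q: q.var())  # lowest variance wins
                out[i, j] = best.mean()
        return out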

The result is that the output image is composed of numerous uniform areas, the size of which is determined by the moving window size. In practice, this filter can be utilized sequentially 2 or 3 times, increasing the window size. The resultant output image is an appropriate input to a classification application.

Sigma and Lee Filters
The Sigma and Lee filters utilize the statistical distribution of the DN values within the moving window to estimate what the pixel of interest should be.

Speckle in imaging radar can be mathematically modeled as multiplicative noise with a mean of 1. The standard deviation of the noise can be mathematically defined as:

Standard Deviation = sigma (σ) = √VARIANCE / MEAN = Coefficient of Variation

The coefficient of variation, as a scene-derived parameter, is used as an input parameter in the Sigma and Lee filters. It is also useful in evaluating and modifying VIS/IR data for input to a 4-band composite image, or in preparing a 3-band ratio color composite (Crippen, 1989a).

It can be assumed that imaging radar data noise follows a Gaussian distribution. This would yield a theoretical value for Standard Deviation (SD) of .52 for 1-look radar data and SD = .26 for 4-look radar data. Table 50 gives theoretical coefficient of variation values for various look-average radar scenes:

Table 50: Theoretical Coefficient of Variation Values

# of Looks (scenes)    Coef. of Variation Value
1                      .52
2                      .37
3                      .30
4                      .26
6                      .21
8                      .18

The Lee filters are based on the assumption that the mean and variance of the pixel of interest are equal to the local mean and variance of all pixels within the moving window you select. The actual calculation used for the Lee filter is:

DNout = [Mean] + K[DNin - Mean]

Where:
Mean = average of pixels in a moving window

or 0. As with the Statistics filter. The statistical filters (Sigma and Statistics) are logically applicable to any data set for preprocessing.Var ( x ) K = --------------------------------------------------2 2 [ Mean ] σ + Var ( x ) The variance of x [Var (x)] is defined as: ⎛ [ Variance within window ] + [ Mean within window ] 2⎞ 2 Var ( x ) = ⎜ -----------------------------------------------------------------------------------------------------------------------------. three passes of the Sigma filter with the following parameters are very effective when used with any type of data: Pass 1 2 3 Sigma Value 0. most natural scenes are found to follow a normal distribution of DN values. For example. Finally.26 0. resulting in a few erratic pixels. 1. 1981 The Sigma filter is based on the probability of a Gaussian distribution. These speckle filters can be used iteratively. This is particularly true of experimental sensor systems that frequently have significant noise problems. The following sequence is useful prior to a classification: 530 Enhancement .26 0. This noise suppression filter replaces the pixel of interest with the average of all DN values within the moving window that fall within the designated range.5% of random samples are within a 2 standard deviation (2 sigma) range. and then decide if another pass is appropriate and what parameters to use on the next pass.⎟ – [ Mean within window ] 2 ⎝ ⎠ [ Sigma ] + 1 Source: Lee. a coefficient of variation specific to the data set must be input.5) to define the accepted range.26 Sigma Multiplier 0. there is no reason why successive passes must be of the same filter. It is assumed that 95. you must specify a moving window size. you must specify how many standard deviations to use (2. You must view and evaluate the resultant image after each pass (the data histogram is useful for this). In VIS/IR imagery. Any sensor system has various sources of noise. As with all the radar speckle filters.5 1 2 Window Size 3×3 5×5 7×7 Similarly. thus filtering at 2 standard deviations should remove this noise. The center pixel of the moving window is the pixel of interest.

Similarly, the following sequence is useful prior to a classification:

Filter          Pass    Sigma Value    Sigma Multiplier    Window Size
Lee             1       0.26           NA                  3 × 3
Lee             2       0.26           NA                  5 × 5
Local Region    3       NA             NA                  5 × 5 or 7 × 7

With all speckle reduction filters there is a playoff between noise reduction and loss of resolution. Each data set and each application have a different acceptable balance between these two factors. The ERDAS IMAGINE filters have been designed to be versatile and gentle in reducing noise (and resolution).

Frost Filter
The Frost filter is a minimum mean square error algorithm that adapts to the local statistics of the image. The local statistics serve as weighting parameters for the impulse response of the filter (moving window). This algorithm assumes that noise is multiplicative with stationary statistics. The formula used is:

DN = Σ (over the n × n window) Kα e^(-α|t|)

Where:

α = (4 / (nσ̄²)) (σ² / Ī²)

and:
K = normalization constant
Ī = local mean
σ = local variance
σ̄ = image coefficient of variation value
|t| = |X - X0| + |Y - Y0|
n = moving window size

Source: Lopes et al, 1990
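Read literally, the Frost formula is a locally adaptive weighted average. The sketch below is an unoptimized illustration, not a definitive implementation: it interprets σ² as the local variance and σ̄ as the image coefficient of variation (one reading of the variable list above), and chooses K so the weights sum to 1.

    import numpy as np

    def frost_filter(img, size=5, cv=0.26):
        """Frost filter sketch; cv is the image coefficient of variation."""
        img = img.astype(float)
        r = size // 2
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        t = np.abs(x) + np.abs(y)                  # |t| = |X - X0| + |Y - Y0|
        out = img.copy()
        for i in range(r, img.shape[0] - r):
            for j in range(r, img.shape[1] - r):
                win = img[i - r:i + r + 1, j - r:j + r + 1]
                m = win.mean()
                v = win.var()
                alpha = (4.0 / (size * cv ** 2)) * (v / (m ** 2 + 1e-12))
                w = np.exp(-alpha * t)             # impulse response weights
                out[i, j] = (w * win).sum() / w.sum()
        return out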

Gamma-MAP Filter
The Maximum A Posteriori (MAP) filter attempts to estimate the original pixel DN, which is assumed to lie between the local average and the degraded (actual) pixel DN. MAP logic maximizes the a posteriori probability density function with respect to the original image.

Many speckle reduction filters (for example, Lee, Lee-Sigma, Frost) assume a Gaussian distribution for the speckle noise. Recent work has shown this to be an invalid assumption. Natural vegetated areas have been shown to be more properly modeled as having a Gamma distributed cross section. The Gamma-MAP algorithm incorporates this assumption. The exact formula used is the cubic equation:

Î³ - Ī Î² + σ (Î - DN) = 0

Where:
Î = sought value
Ī = local mean
DN = input value
σ = original image variance

Source: Frost et al, 1982

Edge Detection
Edge and line detection are important operations in digital image processing. For example, geologists are often interested in mapping lineaments, which may be fault lines or bedding structures. For this purpose, edge and line detection are major enhancement techniques.

In selecting an algorithm, it is first necessary to understand the nature of what is being enhanced. Edge detection could imply amplifying an edge, a line, or a spot (see Figure 165).

Figure 165: One-dimensional, Continuous Edge, and Line Models
[Cross sections: a ramp edge (characterized by its DN change, slope, and slope midpoint), a step edge (DN change with a slope angle of 90 degrees), a line (DN change over a finite width), and a roof edge (width near 0).]

• Ramp edge: an edge modeled as a ramp, increasing in DN value from a low to a high level, or vice versa. Distinguished by DN change, slope, and slope midpoint.
• Step edge: a ramp edge with a slope angle of 90 degrees.
• Line: a region bounded on each end by an edge; its width must be less than the moving window size.
• Roof edge: a line with a width near zero.

The models in Figure 165 represent ideal theoretical edges. However, real data values vary to produce a more distorted edge, due to sensor noise or vibration (see Figure 166). There are no perfect edges in raster data, hence the need for edge detection algorithms.

Figure 166: A Noisy Edge Superimposed on an Ideal Edge
[Actual data values scatter about an ideal model step edge in an intensity profile.]

Edge detection algorithms can be broken down into 1st-order derivative and 2nd-order derivative operations. Figure 167 shows ideal one-dimensional edge and line intensity curves with the associated 1st-order and 2nd-order derivatives.

that is. Southeast. To avoid positional shift. West. (Gradient kernels with zero weighting. South.= 2 ∂y –1 –1 –1 2 2 2 –1 –1 –1 The ERDAS IMAGINE Radar Interpreter utilizes sets of template matching operators. Northwest. all operating windows are odd number arrays. For example. Southwest).= ∂x 1 1 1 0 0 0 –1 –1 –1 and 1 0 –1 ∂---. it may be advantageous to extend the 3-level (Prewitt. the sum of the kernel coefficient is zero. East. with the center pixel being the pixel of interest. The compass names indicate the slope direction creating maximum response.Figure 167: Edge and Line Derivatives Ramp Edge Line Original Feature g(x) x g(x) x 1st Derivative ∂g ----∂x x ∂g ----∂x x 2nd Derivative ∂g ------2 ∂x 2 ∂g ------2 ∂x x 2 x The 1st-order derivative kernel(s) derives from the simple Prewitt kernel: ∂---.= 1 0 – 1 ∂y 1 0 –1 The 2nd-order derivative kernel(s) derives from Laplacian operators: –1 2 –1 ∂ 2------.= – 1 2 – 1 2 ∂x –1 2 –1 1st-Order Derivatives (Prewitt) and ∂ 2------.) The detected edge is orthogonal to the gradient direction. Northeast. have no output in uniform regions. Extension of the 3 × 3 impulse response arrays to a larger size is not clear cut—different authors suggest different lines of rationale. These operators approximate to the eight possible compass orientations (North. 1970) to: 534 Enhancement .

 1  1  1  1  1
 1  1  1  1  1
 0  0  0  0  0
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1

or the following might be beneficial:

 2  2  2  2  2
 1  1  1  1  1
 0  0  0  0  0
-1 -1 -1 -1 -1
-2 -2 -2 -2 -2

or:

 4  4  4  4  4
 2  2  2  2  2
 0  0  0  0  0
-2 -2 -2 -2 -2
-4 -4 -4 -4 -4

Larger template arrays provide a greater noise immunity, but are computationally more demanding.

Zero-Sum Filters
A common type of edge detection kernel is a zero-sum filter. For this type of filter, the coefficients are designed to add up to zero. Following are examples of two zero-sum filters:

Sobel:
horizontal       vertical
-1 -2 -1          1  0 -1
 0  0  0          2  0 -2
 1  2  1          1  0 -1

Prewitt:
horizontal       vertical
-1 -1 -1          1  0 -1
 0  0  0          1  0 -1
 1  1  1          1  0 -1

Prior to edge enhancement, you should reduce speckle noise by using the ERDAS IMAGINE Radar Interpreter Speckle Suppression function.
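The zero-sum property is easy to verify in code: convolving a uniform region with either Sobel kernel yields zero. This illustrative fragment combines the horizontal and vertical responses into a gradient magnitude image:

    import numpy as np
    from scipy.ndimage import convolve

    sobel_h = np.array([[-1, -2, -1],
                        [ 0,  0,  0],
                        [ 1,  2,  1]], dtype=float)
    sobel_v = sobel_h.T   # sign differs from the kernel above; the magnitude does not

    def sobel_magnitude(img):
        gh = convolve(img.astype(float), sobel_h)
        gv = convolve(img.astype(float), sobel_v)
        return np.hypot(gh, gv)   # zero in uniform regions, large across edges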

2nd-Order Derivatives (Laplacian Operators)
The second category of edge enhancers is 2nd-order derivative or Laplacian operators. These are best for line (or spot) detection as distinct from ramp edges. ERDAS IMAGINE Radar Interpreter offers two such arrays:

Unweighted line:    Weighted line:
-1  2 -1            -1  2 -1
-1  2 -1            -2  4 -2
-1  2 -1            -1  2 -1

Source: Pratt, 1991

Some researchers have found that a combination of 1st- and 2nd-order derivative images produces the best output. See Eberlein and Weszka (Eberlein and Weszka, 1975) for information about subtracting the 2nd-order derivative (Laplacian) image from the 1st-order derivative image (gradient).

Texture
According to Pratt (Pratt, 1991), "Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments."

As an enhancement, texture is particularly applicable to radar data, although it may be applied to any type of data with varying results. For example, it has been shown (Blom and Daily, 1982) that a three-layer variance image using 15 × 15, 31 × 31, and 61 × 61 windows can be combined into a three-color RGB (red, green, blue) image that is useful for geologic discrimination. The same could apply to a vegetation classification. You could also prepare a three-color image using three different functions operating through the same (or different) size moving window(s). However, each data set and application would need different moving window sizes and/or texture measures to maximize the discrimination.

Radar Texture Analysis
While texture analysis has been useful in the enhancement of VIS/IR image data, it is showing even greater applicability to radar imagery. In part, this stems from the nature of the imaging process itself.

In VIS/IR imaging, the phenomena involved is absorption at the molecular level. The interaction of the radar waves with the surface of interest is dominated by reflection involving the surface roughness at the wavelength scale. Also, as we know from array-type antennae, radar is especially sensitive to regularity that is a multiple of its wavelength. This provides for a more precise method for quantifying the character of texture in a radar return. The ability to use radar data to detect texture and provide topographic information about an image is a major advantage over other types of imagery where texture is not a quantitative characteristic.

The texture transforms can be used in several ways to enhance the use of radar imagery. Adding the radar intensity image as an additional layer in a (vegetation) classification is fairly straightforward and may be useful. However, the proper texture image (function and window size) can greatly increase the discrimination. Using known test sites, one can experiment to discern which texture image best aids the classification. For example, the texture image could then be added as an additional layer to the TM bands.

Texture Analysis Algorithms
While texture has typically been a qualitative measure, it can be enhanced with mathematical algorithms. Many algorithms appear in literature for specific applications (Haralick, 1979; Iron and Petersen, 1981). The algorithms incorporated into ERDAS IMAGINE are those which are applicable in a wide variety of situations and are not computationally over-demanding. This latter point becomes critical as the moving window size increases. Research has shown that very large moving windows are often needed for proper enhancement; for example, Blom (Blom and Daily, 1982) uses up to a 61 × 61 window. In practice, you interactively decide which algorithm and window size is best for your data and application.

As radar data come into wider use, other mathematical texture definitions may prove useful and will be added to the ERDAS IMAGINE Radar Interpreter.

Four algorithms are currently utilized for texture enhancement in ERDAS IMAGINE:

• mean Euclidean distance (1st-order)

• variance (2nd-order)
• skewness (3rd-order)
• kurtosis (4th-order)

These algorithms are shown below (Iron and Petersen, 1981):

Mean Euclidean Distance

Mean Euclidean Distance = Σ [ Σλ (xcλ - xijλ)² ]^(1/2) / (n - 1)

Where:
xijλ = DN value for spectral band λ and pixel (i, j) of a multispectral image
xcλ = DN value for spectral band λ of a window's center pixel
n = number of pixels in a window

Variance

Variance = Σ (xij - M)² / (n - 1)

Where:
xij = DN value of pixel (i, j)
n = number of pixels in a window
M = Mean of the moving window, where:
Mean = Σ xij / n

Skewness

Skew = Σ (xij - M)³ / ((n - 1) V^(3/2))

Where:
xij = DN value of pixel (i, j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)

Kurtosis

Kurtosis = Σ (xij - M)⁴ / ((n - 1) V²)

Where:
xij = DN value of pixel (i, j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)
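The three moment-based measures share the same moving-window machinery. The following unoptimized sketch (single band; the mean Euclidean distance case, which needs multiple bands, is omitted) computes them exactly as defined above:

    import numpy as np

    def texture_moments(img, size=15):
        """Variance, skewness, and kurtosis images over a size x size window."""
        img = img.astype(float)
        r = size // 2
        n = size * size
        var = np.zeros_like(img)
        skew = np.zeros_like(img)
        kurt = np.zeros_like(img)
        for i in range(r, img.shape[0] - r):
            for j in range(r, img.shape[1] - r):
                win = img[i - r:i + r + 1, j - r:j + r + 1]
                m = win.mean()
                v = ((win - m) ** 2).sum() / (n - 1)
                var[i, j] = v
                skew[i, j] = ((win - m) ** 3).sum() / ((n - 1) * v ** 1.5 + 1e-12)
                kurt[i, j] = ((win - m) ** 4).sum() / ((n - 1) * v ** 2 + 1e-12)
        return var, skew, kurt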

Texture analysis is available from the Texture function in Image Interpreter and from the ERDAS IMAGINE Radar Interpreter Texture Analysis function.

Radiometric Correction: Radar Imagery
The raw radar image frequently contains radiometric errors due to:

• imperfections in the transmit and receive pattern of the radar antenna
• errors due to the coherent pulse (that is, speckle)
• the inherently stronger signal from a near range (closest to the sensor flight path) than a far range (farthest from the sensor flight path) target

Many imaging radar systems use a single antenna that transmits the coherent radar burst and receives the return echo. However, no antenna is perfect; it may have various lobes, dead spots, and imperfections. This causes the received signal to be slightly distorted radiometrically. In addition, range fall-off causes far range targets to be darker (less return signal).

These two problems can be addressed by adjusting the average brightness of each range line to a constant, usually the average overall scene brightness (Chavez and Berlin, 1986). This approach is generic; it is not specific to any particular radar sensor.

The Adjust Brightness function in ERDAS IMAGINE works by correcting each range line average. For this to be a valid approach, the number of data values must be large enough to provide good average values. This requires that each line of constant range be long enough to reasonably approximate the overall scene brightness (see Figure 168). Be careful not to use too small an image; this depends upon the character of the scene itself.

Figure 168: Adjust Brightness Function
[Rows of data, where a = the average data value of each row. Add the averages of all data rows: (a1 + a2 + a3 + ... + ax) / x = overall average; ax / overall average = calibration coefficient of line x. A small subset would not give an accurate average for correcting the entire scene.]

Range Lines / Lines of Constant Range
Lines of constant range are not the same thing as range lines:

• Range lines: lines that are perpendicular to the flight of the sensor
• Lines of constant range: lines that are parallel to the flight of the sensor
• Range direction: same as range lines

Because radiometric errors are a function of the imaging geometry, the image must be correctly oriented during the correction process. For the algorithm to correctly address the data set, you must tell ERDAS IMAGINE whether the lines of constant range are in columns or rows in the displayed image.
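The correction itself reduces to one ratio per line. A minimal sketch, assuming the lines of constant range run along either the rows or the columns of the array (this is an illustration of the idea, not the Adjust Brightness implementation):

    import numpy as np

    def adjust_brightness(img, constant_range_in_rows=True):
        """Scale each line of constant range to the overall scene average."""
        img = img.astype(float)
        axis = 1 if constant_range_in_rows else 0
        line_avg = img.mean(axis=axis, keepdims=True)   # a1, a2, ... per line
        coeff = line_avg / img.mean()                   # calibration coefficients
        return img / coeff                              # equalize line brightness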

Figure 169 shows the lines of constant range in columns, parallel to the sides of the display screen:

Figure 169: Range Lines vs. Lines of Constant Range
[A display screen on which the lines of constant range run vertically, in columns; the range direction runs across them, and the range lines are perpendicular to the flight (azimuth) direction.]

Merging Radar with VIS/IR Imagery
As aforementioned, the phenomena involved in radar imaging is quite different from that in VIS/IR imaging. Because these two sensor types give different information about the same target (chemical vs. physical), they are complementary data sets. If the two images are correctly combined, the resultant image conveys both chemical and physical information and could prove more useful than either image alone.

The methods for merging radar and VIS/IR data are still experimental and open for exploration. The following methods are suggested for experimentation:

• Codisplaying in a View
• RGB to IHS transforms
• Principal components transform
• Multiplicative

The ultimate goal of enhancement is not mathematical or logical purity; it is feature extraction. There are currently no rules to suggest which options yield the best results for a particular application; you must experiment. The option that proves to be most useful depends upon the data sets (both radar and VIS/IR), your experience, and your final objective.

Codisplaying
The simplest and most frequently used method of combining radar with VIS/IR imagery is codisplaying on an RGB color monitor. In this technique, the radar image is displayed with one (typically the red) gun, while the green and blue guns display VIS/IR bands or band ratios. This technique follows from no logical model and does not truly merge the two data sets.

Use the Viewer with the Clear Display option disabled for this type of merge. Select the color guns to display the different layers.

RGB to IHS Transforms
Another common technique uses the RGB to IHS transforms. In this technique, an RGB color composite of bands (or band derivatives, such as ratios) is transformed into IHS color space. The intensity component is replaced by the radar image, and the scene is reverse transformed. This technique integrally merges the two data types.

For more information, see RGB to IHS on page 498.

Principal Components Transform
A similar image merge involves utilizing the PC transformation of the VIS/IR image. With this transform, more than three components can be used. These are converted to a series of principal components. The first PC, PC-1, is generally accepted to correlate with overall scene brightness. This value is replaced by the radar image and the reverse transform is applied.

For more information, see Principal Components Analysis on page 492.

Multiplicative
A final method to consider is the multiplicative technique. This requires several chromatic components and a multiplicative component, which is assigned to the image intensity. In practice, the chromatic components are usually band ratios or PCs, and the radar image is input multiplicatively as intensity (Croft (Holcomb), 1993).
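As a sketch of the transform-based merges described above (illustrative only: it substitutes scikit-image's HSV transform, which differs in detail from the IHS transform in ERDAS IMAGINE), the radar image replaces the intensity channel before the reverse transform:

    import numpy as np
    from skimage.color import rgb2hsv, hsv2rgb

    def ihs_style_merge(visir_rgb, radar):
        """Replace the intensity of a VIS/IR composite with a radar image.

        visir_rgb: (rows, cols, 3) float array scaled to 0..1
        radar:     (rows, cols) float array scaled to 0..1
        """
        hsv = rgb2hsv(visir_rgb)
        hsv[..., 2] = radar        # substitute radar for value (intensity)
        return hsv2rgb(hsv)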

1993). 5/4. TM5/TM4. If the target area is accompanied by silicification. then the radar return could be weaker than the surrounding rock.The two sensor merge models using transforms to integrate the two data sets (PC and RGB to IHS) are based on the assumption that the radar intensity correlates with the intensity that the transform derives from the data inputs. radar would not correlate with high 5/7. this should be the case. which results in an area of dense angular rock. the VIS/IR intensity. For example. TM3/TM1. The acceptability of this assumption depends on the specific case. or equivalent to. However. Enhancement 543 . It cannot be assumed that the radar intensity is a surrogate for. However. the logic being that if all three ratios are high. the logic of mathematically merging radar with VIS/IR data sets is inherently different from the logic of the SPOT/TM merges (as discussed in Resolution Merge on page 480). the sites suited for mineral exploration are bright overall. A common display for this purpose is RGB = TM5/TM7. if the alteration zone is basaltic rock to kaolinite/alunite. Landsat TM imagery is often used to aid in mineral exploration. In this case. 3/1 intensity and the substitution would not produce the desired results (Croft (Holcomb).


Classification

Introduction
Multispectral classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their data file values. If a pixel satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to that criteria. This process is also referred to as image segmentation.

Depending on the type of information you want to extract from the original data, classes may be associated with known features on the ground or may simply represent areas that look different to the computer. An example of a classified image is a land cover map, showing vegetation, bare land, pasture, urban, etc.

The Classification Process

Pattern Recognition
Pattern recognition is the science (and art) of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye; the human brain automatically sorts certain textures and colors into categories.

In a computer system, spectral pattern recognition can be more scientific. Statistics are derived from the spectral characteristics of all pixels in an image. Then, the pixels are sorted based on mathematical criteria. The classification process breaks down into two parts: training and classifying (using a decision rule).

Training
First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord, 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.

Supervised Training
Supervised training is closely controlled by the analyst. In this process, you select pixels that represent patterns or land cover features that you recognize, or that you can identify with help from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data, and of the classes desired, is required before classification.

By identifying patterns, you can instruct the computer system to identify pixels with similar characteristics. If the classification is accurate, the resulting classes represent the categories within the data that you originally identified.

Unsupervised Training
Unsupervised training is more computer-automated. It enables you to specify some parameters that the computer uses to uncover statistical patterns that are inherent in the data. These patterns do not necessarily correspond to directly meaningful characteristics of the scene, such as contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters of pixels with similar spectral characteristics. In some cases, it may be more important to identify groups of pixels with similar spectral characteristics than it is to sort pixels into recognizable categories.

Unsupervised training is dependent upon the data itself for the definition of classes. This method is usually used when less is known about the data before classification. It is then the analyst's responsibility, after classification, to attach meaning to the resulting classes (Jensen, 1996). Unsupervised classification is useful only if the classes can be appropriately interpreted.

Signatures
The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file to a class. Signatures in ERDAS IMAGINE can be parametric or nonparametric.

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. Supervised and unsupervised training can generate parametric signatures. A set of parametric signatures can be used to train a statistically-based classifier (e.g., maximum likelihood) to define the classes.

A nonparametric signature is not based on statistics, but on discrete objects (polygons or rectangles) in a feature space image. These feature space objects are used to define the boundaries for the classes. A nonparametric classifier uses a set of nonparametric signatures to assign pixels to a class based on their location either inside or outside the area in the feature space image. Supervised training is used to generate nonparametric signatures (Kloer, 1994).

ERDAS IMAGINE enables you to generate statistics for a nonparametric signature. This function allows a feature space object to be used to create a parametric signature from the image being classified. However, since a parametric classifier requires a normal distribution of data, the only feature space object for which this would be mathematically valid would be an ellipse (Kloer, 1994).

When both parametric and nonparametric signatures are used to classify an image, you are more able to analyze and visualize the class definitions than either type of signature provides independently (Kloer, 1994).

See "Math Topics" on page 697 for information on feature space images and how they are created.

Decision Rule
After the signatures are defined, the pixels of the image are sorted into classes based on the signatures by use of a classification decision rule. The decision rule is a mathematical algorithm that, using data contained in the signature, performs the actual sorting of pixels into distinct class values.

Parametric Decision Rule
A parametric decision rule is trained by the parametric signatures. These signatures are defined by the mean vector and covariance matrix for the data file values of the pixels in the signatures. When a parametric decision rule is used, every pixel is assigned to a class since the parametric decision space is continuous (Kloer, 1994).

Nonparametric Decision Rule
A nonparametric decision rule is not based on statistics; therefore, it is independent of the properties of the data. Basically, a nonparametric decision rule determines whether or not the pixel is located inside of a nonparametric signature boundary. If a pixel is located within the boundary of a nonparametric signature, then this decision rule assigns the pixel to the signature's class.

Output File
When classifying an image file, the output file is an image file with a thematic raster layer. This file automatically contains the following data:

• class values
• class names
• color table
• statistics
• histogram

The image file also contains any signature attributes that were selected in the ERDAS IMAGINE Supervised Classification utility. The class names, values, and colors can be set with the Signature Editor or the Raster Attribute Editor.

Classification Tips

Classification Scheme
Usually, classification is performed with a set of target classes in mind. Such a set is called a classification scheme (or classification system). The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data (Jensen et al, 1983). The proper classification scheme includes classes that are both important to the study and discernible from the data on hand. Most schemes have a hierarchical structure, which can describe a study area in several levels of detail.

A number of classification schemes have been developed by specialists who have inventoried a geographic region. Some references for professionally-developed schemes are listed below:

• Anderson, J.R., et al. 1976. "A Land Use and Land Cover Classification System for Use with Remote Sensor Data." U.S. Geological Survey Professional Paper 964.
• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of the United States. Washington, D.C.: U.S. Fish and Wildlife Service.
• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use, Cover and Forms Classification System. Florida Department of Transportation, Procedure No. 550-010-001-a.
• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land Use.

Other states or government agencies may also have specialized land use/cover studies.

It is recommended that the classification process is begun by defining a classification scheme for the application, using previously developed schemes, like those above, as a general framework.

Iterative Classification
A process is iterative when it repeats an action. The objective of the ERDAS IMAGINE system is to enable you to iteratively create and refine signatures and classified image files to arrive at a desired final classification. The ERDAS IMAGINE classification utilities are tools to be used as needed, not a numbered list of steps that must always be followed in order. The total classification can be achieved with either the supervised or unsupervised methods, or a combination of both. Some examples are below:

• Signatures created from both supervised and unsupervised training can be merged and appended together.

• Signature evaluation tools can be used to indicate which signatures are spectrally similar. This helps to determine which signatures should be merged or deleted. These tools also help define optimum band combinations for classification. Using the optimum band combination may reduce the time required to run a classification process.
• Since classifications (supervised or unsupervised) can be based on a particular area of interest (either defined in a raster layer or an .aoi layer), signatures and classifications can be generated from previous classification results.

Supervised vs. Unsupervised Training
In supervised training, it is important to have a set of desired classes in mind, and then create the appropriate signatures from the data. You must also have some way of recognizing pixels that represent the classes that you want to extract.

Supervised classification is usually appropriate when you want to identify relatively few classes, when you have selected training sites that can be verified with ground truth data, or when you can identify distinct, homogeneous regions that represent each class.

On the other hand, if you want the classes to be determined by spectral distinctions that are inherent in the data so that you can define the classes later, then the application is better suited to unsupervised training. Unsupervised training enables you to define many classes easily, and identify classes that are not in contiguous, easily recognized regions.

NOTE: Supervised classification also includes using a set of classes that is generated from an unsupervised classification. Using a combination of supervised and unsupervised classification may yield optimum results, especially with large data sets (e.g., multiple Landsat scenes). For example, unsupervised classification may be useful for generating a basic set of classes, then supervised classification can be used for further definition of the classes.

Classifying Enhanced Data
For many specialized applications, classifying data that have been merged, spectrally merged, or enhanced (with principal components, image algebra, or other transformations) can produce very specific and meaningful results. However, without understanding the data and the enhancements used, it is recommended that only the original, remotely-sensed data be classified.

Dimensionality
Dimensionality refers to the number of layers being classified. For example, a data file with 3 layers is said to be 3-dimensional, since 3-dimensional feature space is plotted to analyze the data.

Feature space and dimensionality are discussed in "Math Topics" on page 697.

Adding Dimensions
Using programs in ERDAS IMAGINE, you can add layers to existing image files. Therefore, you can incorporate data (called ancillary data) other than remotely-sensed data into the classification. Using ancillary data enables you to incorporate variables into the classification from, for example, vector layers, previously classified data, or elevation data. The data file values of the ancillary data become an additional feature of each pixel, thus influencing the classification (Jensen, 1996).

Limiting Dimensions
Although ERDAS IMAGINE allows an unlimited number of layers of data to be used for one classification, it is usually wise to reduce the dimensionality of the data as much as possible. Often, certain layers of data are redundant or extraneous to the task at hand. Unnecessary data take up valuable disk space, and cause the computer system to perform more arduous calculations, which slows down processing.

Use the Image Interpreter functions to merge or subset layers. Use the Image Information tool (on the Viewer's tool bar) to delete a layer(s). Use the Signature Editor to evaluate separability to calculate the best subset of layer combinations.

Supervised Training
Supervised training requires a priori (already known) information about the data, such as:

• What type of classes need to be extracted? Soil type? Land use? Vegetation?
• What classes are most likely to be present in the data? That is, which types of land cover, soil, or vegetation (or whatever) are represented by the data?

In supervised training, you rely on your own pattern recognition skills and a priori knowledge of the data to help the system determine the statistical criteria (signatures) for data classification. To select reliable samples, you should know some information, either spatial or spectral, about the pixels that you want to classify.

The location of a specific characteristic, such as a land cover type, may be known through ground truthing. Ground truthing refers to the acquisition of knowledge about the study area from field work, analysis of aerial photography, personal experience, etc. Ground truth data are considered to be the most accurate (true) data available about the area of study. They should be collected at the same time as the remotely sensed data, so that the data correspond as much as possible (Star and Estes, 1990). However, some ground data may not be very accurate due to a number of errors and inaccuracies.

Training Samples and Feature Space Objects
Training samples (also called samples) are sets of pixels that represent what is recognized as a discernible pattern, or potential class. The system calculates statistics from the sample pixels to create a parametric signature for the class.

The following terms are sometimes used interchangeably in reference to training samples. For clarity, they are used in this documentation as follows:

• Training sample, or sample, is a set of pixels selected to represent a potential class. The data file values for these pixels are used to generate a parametric signature.
• Training field, or training site, is the geographical AOI in the image represented by the pixels in a sample. Usually, it is previously identified with the use of ground truth data.
• Feature space objects are user-defined AOIs in a feature space image. The feature space signature is based on these objects.

Selecting Training Samples
It is important that training samples be representative of the class that you are trying to identify. This does not necessarily mean that they must contain a large number of pixels or be dispersed across a wide region of the data. The selection of training samples depends largely upon your knowledge of the data, of the study area, and of the classes that you want to extract.

ERDAS IMAGINE enables you to identify training samples using one or more of the following methods:

• using a vector layer
• defining a polygon in the image
• identifying a training sample of contiguous pixels with similar spectral characteristics
• identifying a training sample of contiguous pixels within a certain area, with or without similar spectral characteristics

• using a class from a thematic raster layer from an image file of the same area (i.e., the result of an unsupervised classification)

Digitized Polygon
Training samples can be identified by their geographical location (training sites, using maps, ground truth data). The locations of the training sites can be digitized from maps with the ERDAS IMAGINE Vector or AOI tools. Polygons representing these areas are then stored as vector layers. The vector layers can then be used as input to the AOI tools and used as training samples to create signatures.

Use the Vector and AOI tools to digitize training samples from a map. Use the Signature Editor to create signatures from training samples that are identified with the polygons.

User-defined Polygon
Using your pattern recognition skills (with or without supplemental ground truth information), you can identify samples by examining a displayed image of the data and drawing a polygon around the training site(s) of interest. For example, if it is known that oak trees reflect certain frequencies of green and infrared light according to ground truth data, you may be able to base your sample selections on the data (taking atmospheric conditions, sun angle, time, date, and other variations into account). The area within the polygon(s) would be used to create a signature.

Use the AOI tools to define the polygon(s) to be used as the training sample. Use the Signature Editor to create signatures from training samples that are identified with digitized polygons.

Identify Seed Pixel
With the Seed Properties dialog and AOI tools, the cursor (crosshair) can be used to identify a single pixel (seed pixel) that is representative of the training sample. This seed pixel is used as a model pixel, against which the pixels that are contiguous to it are compared based on parameters specified by you. When one or more of the contiguous pixels is accepted, the mean of the sample is calculated from the accepted pixels. Then, the pixels contiguous to the sample are compared in the same way. This process repeats until no pixels that are contiguous to the sample satisfy the spectral parameters. In effect, the sample grows outward from the model pixel with each iteration. These homogenous pixels are converted from individual raster pixels to a polygon and used as an AOI layer.
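The growth loop can be sketched as a breadth-first search from the seed. This simplified single-band illustration (the actual Seed Properties dialog exposes richer spectral and spatial parameters) accepts 4-connected neighbors whose DN lies within a threshold of the running sample mean:

    import numpy as np
    from collections import deque

    def grow_from_seed(img, seed, threshold):
        """Grow a training sample outward from a seed pixel."""
        accepted = np.zeros(img.shape, dtype=bool)
        accepted[seed] = True
        total, count = float(img[seed]), 1
        frontier = deque([seed])
        while frontier:
            i, j = frontier.popleft()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                        and not accepted[ni, nj]
                        and abs(float(img[ni, nj]) - total / count) <= threshold):
                    accepted[ni, nj] = True           # pixel joins the sample
                    total += float(img[ni, nj])       # mean is recalculated
                    count += 1
                    frontier.append((ni, nj))
        return accepted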

Select the Seed Properties option in the Viewer to identify training samples with a seed pixel.

Seed Pixel Method with Spatial Limits
The training sample identified with the seed pixel method can be limited to a particular region by defining the geographic distance and area. Vector layers (polygons or lines) can be displayed as the top layer in the Viewer, and the boundaries can then be used as an AOI for training samples defined under Seed Properties.

Thematic Raster Layer
A training sample can be defined by using class values from a thematic raster layer (see Table 51). The data file values in the training sample are used to create a signature. The training sample can be defined by as many class values as desired.

NOTE: The thematic raster layer must have the same coordinate system as the image file being classified.

Table 51: Training Sample Comparison

Method                  Advantages                      Disadvantages
Digitized Polygon       precise map coordinates,        may overestimate class variance,
                        known ground information        time-consuming
User-defined Polygon    high degree of user control     may overestimate class variance,
                                                        time-consuming
Seed Pixel              auto-assisted, less time        may underestimate class variance
Thematic Raster Layer   allows iterative classifying    must have previously defined
                                                        thematic layer

Evaluating Training Samples
Selecting training samples is often an iterative process. To generate signatures that accurately represent the classes to be identified, you may have to repeatedly select training samples, evaluate the signatures that are generated from the samples, and then either take new samples or manipulate the signatures as necessary. Signature manipulation may involve merging, deleting, or appending from one file to another. It is also possible to perform a classification using the known signatures, then mask out areas that are not classified to use in gathering more signatures.

See Evaluating Signatures on page 565 for methods of determining the accuracy of the signatures created from your training samples.

Selecting Feature Space Objects
The ERDAS IMAGINE Feature Space tools enable you to interactively define feature space objects (AOIs) in the feature space image(s). A feature space image is simply a graph of the data file values of one band of data against the values of another band (often called a scatterplot). In ERDAS IMAGINE, a feature space image has the same data structure as a raster image; therefore, feature space images can be used with other ERDAS IMAGINE utilities, including zoom, color level slicing, virtual roam, Spatial Modeler, and Map Composer.

Figure 170: Example of a Feature Space Image
[Data file values of band 1 plotted against data file values of band 2.]

The transformation of a multilayer raster image into a feature space image is done by mapping the input pixel values to a position in the feature space image. This transformation defines only the pixel position in the feature space image; it does not define the pixel's value. The pixel values in the feature space image can be the accumulated frequency, which is calculated when the feature space image is defined. The pixel values can also be provided by a thematic raster layer of the same geometry as the source multilayer image. Mapping a thematic layer into a feature space image can be useful for evaluating the validity of the parametric and nonparametric decision boundaries of a classification (Kloer, 1994).
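Conceptually, the accumulated-frequency version of a feature space image is just a two-dimensional histogram of two bands. A minimal sketch for 8-bit data (illustrative; the Feature Space tools handle the actual .fsp.img creation):

    import numpy as np

    def feature_space_image(band1, band2, bins=256):
        """Accumulate pixel frequencies on a band1 vs. band2 grid."""
        fs, _, _ = np.histogram2d(band1.ravel(), band2.ravel(),
                                  bins=bins, range=[[0, 255], [0, 255]])
        return fs   # fs[i, j] = count of pixels with that pair of DN values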

When you display a feature space image file (.fsp.img) in a Viewer, the colors reflect the density of points for both bands. The bright tones represent a high density and the dark tones represent a low density.

Create Nonparametric Signature
You can define a feature space object (AOI) in the feature space image and use it directly as a nonparametric signature. Since the Viewers for the feature space image and the image being classified are both linked to the ERDAS IMAGINE Signature Editor, it is possible to mask AOIs from the image being classified to the feature space image, and vice versa. You can also directly link a cursor in the image Viewer to the feature space Viewer. These functions help determine a location for the AOI in the feature space image. A single feature space image, but multiple AOIs, can be used to define the signature.

One fundamental difference between using the feature space image to define a training sample and the other traditional methods is that it is a nonparametric signature. The decisions made in the classification process have no dependency on the statistics of the pixels. This helps improve classification accuracies for specific nonnormal classes, such as urban and exposed rock (Faust et al, 1991). This signature is taken within the feature space image, not the image being classified. The pixels in the image that correspond to the data file values in the signature (i.e., feature space object) are assigned to that class.

See Feature Space Images on page 708 for more information.

The polygons in the feature space image can be easily modified and/or masked until the desired regions of the image have been identified. Use the AOI tools to draw polygons.

Figure 171: Process for Defining a Feature Space Object
1. Display the image file to be classified in a Viewer (layers 3, 2, 1).
2. Create a feature space image from the image file being classified (layer 1 vs. layer 2).
3. Draw an AOI (feature space object) around the desired area in the feature space image. Once you have a desired AOI, it can be used as a signature.
4. A decision rule is used to analyze each pixel in the image file being classified, and the pixels with the corresponding data file values are assigned to the feature space class.

Evaluate Feature Space Signatures

Using the Feature Space tools, it is also possible to use a feature space signature to generate a mask. Once it is defined as a mask, the pixels under the mask are identified in the image file and highlighted in the Viewer. The image displayed in the Viewer must be the image from which the feature space image was created. This process helps you to visually analyze the correlations between various spectral bands to determine which combination of bands brings out the desired features in the image.

Use the Feature Space tools in the Signature Editor to create a feature space image and mask the signature.

You can have as many feature space images with different band combinations as desired. Any polygon or rectangle in these feature space images can be used as a nonparametric signature. However, only one feature space image can be used per signature.

Advantages:
• Provides an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable in a feature space image.
• The classification decision process is fast.

Disadvantages:
• The classification decision process allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.

See Feature Space on page 708 for more information.

Unsupervised Training

Unsupervised training requires only minimal initial input from you. However, you have the task of interpreting the classes that are created by the unsupervised training algorithm. Unsupervised training is also called clustering, because it is based on the natural groupings of pixels in image data when they are plotted in feature space.

Clusters

Clusters are defined with a clustering algorithm, which often uses all or many of the pixels in the input data file for its analysis. The clustering algorithm has no regard for the contiguity of the pixels that define each cluster. According to the specified parameters, these groups can later be merged, disregarded, or used as the basis of a signature.

• The Iterative Self-Organizing Data Analysis Technique (ISODATA) (Tou and Gonzalez, 1974) clustering method uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge.
• RGB clustering plots pixels in three-dimensional feature space, and divides that space into sections that are used to define clusters. The RGB clustering method is more specialized than the ISODATA method. It applies to three-band, 8-bit data.

Each of these methods is explained below, along with its advantages and disadvantages.

ISODATA Clustering

ISODATA is iterative in that it repeatedly performs an entire classification (outputting a thematic raster layer) and recalculates statistics. Self-Organizing refers to the way in which it locates clusters with minimum user input. The ISODATA method uses minimum spectral distance to assign a cluster for each candidate pixel. The process begins with a specified number of arbitrary cluster means or the means of existing signatures, and then it processes repetitively, so that those means shift to the means of the clusters in the data. Because the ISODATA method is iterative, it is not biased to the top of the data file, as are the one-pass clustering algorithms.

Some of the statistics terms used in this section are explained in "Math Topics" on page 697.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering.

ISODATA Clustering Parameters

To perform ISODATA clustering, you specify:
• N - the maximum number of clusters to be considered. Since each cluster is the basis for a class, this number becomes the maximum number of classes to be formed. The ISODATA process begins by determining N arbitrary cluster means. Some clusters with too few pixels can be eliminated, leaving less than N clusters.
• T - a convergence threshold, which is the maximum percentage of pixels whose class values are allowed to be unchanged between iterations.
• M - the maximum number of iterations to be performed.

Initial Cluster Means

On the first iteration of the ISODATA algorithm, the means of N clusters can be arbitrarily determined. After each iteration, a new mean for each cluster is calculated, based on the actual spectral locations of the pixels in the cluster, instead of the initial arbitrary calculation. Then, these new means are used for defining clusters in the next iteration. The process continues until there is little change between iterations (Swain, 1973).

The initial cluster means are distributed in feature space along a vector that runs between the point at spectral coordinates (μ1-σ1, μ2-σ2, μ3-σ3, ... μn-σn)

and the coordinates (μ1+σ1, μ2+σ2, μ3+σ3, ... μn+σn). Such a vector in two dimensions is illustrated in Figure 172. The initial cluster means are evenly distributed between (μA-σA, μB-σB) and (μA+σA, μB+σB).

Figure 172: ISODATA Arbitrary Clusters (five arbitrary cluster means distributed along the diagonal of two-dimensional spectral space, between μA-σA and μA+σA on the Band A axis and between μB-σB and μB+σB on the Band B axis)

Pixel Analysis

Pixels are analyzed beginning with the upper left corner of the image and going left to right, block by block. The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel is assigned to the cluster whose mean is the closest. The ISODATA function creates an output image file with a thematic raster layer and/or a signature file (.sig) as a result of the clustering. At the end of each iteration, an image file exists that shows the assignments of the pixels to the clusters.

Considering the regular, arbitrary assignment of the initial cluster means, the first iteration of the ISODATA algorithm always gives results similar to those in Figure 173.

Figure 173: ISODATA First Pass (clusters 1 through 5 in Band A/Band B feature space)

For the second iteration, the means of all clusters are recalculated, causing them to shift in feature space. The entire process is repeated—each candidate pixel is compared to the new cluster means and assigned to the closest cluster mean.

Figure 174: ISODATA Second Pass (the shifted clusters in Band A/Band B feature space)

Percentage Unchanged

After each iteration, the normalized percentage of pixels whose assignments are unchanged since the last iteration is displayed in the dialog. When this number reaches T (the convergence threshold), the program terminates.

It is possible for the percentage of unchanged pixels to never converge or reach T (the convergence threshold). Therefore, it may be beneficial to monitor the percentage, or specify a reasonable maximum number of iterations, M, so that the program does not run indefinitely.
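The loop described above can be sketched in a few lines of NumPy. This is a simplification for illustration only (it omits the elimination of small clusters, and the function name and array layout are assumptions), not the IMAGINE implementation:

    import numpy as np

    def isodata_sketch(pixels, n_clusters=5, threshold=0.95, max_iter=20):
        """Minimal ISODATA-style clustering: assign by minimum spectral
        distance, recalculate means, and repeat until the fraction of
        unchanged pixels reaches the convergence threshold T or the
        maximum number of iterations M is hit."""
        mu = pixels.mean(axis=0)
        sigma = pixels.std(axis=0)
        # Initial means evenly spaced along the vector from (mu - sigma)
        # to (mu + sigma), as described above
        steps = np.linspace(0.0, 1.0, n_clusters)[:, None]
        means = (mu - sigma) + steps * (2.0 * sigma)

        labels = np.full(len(pixels), -1)
        for _ in range(max_iter):
            # Spectral (Euclidean) distance from every pixel to every mean
            dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            new_labels = dist.argmin(axis=1)
            unchanged = np.mean(new_labels == labels)   # "percentage unchanged"
            labels = new_labels
            if unchanged >= threshold:
                break
            for k in range(n_clusters):                 # shift means to the clusters
                members = pixels[labels == k]
                if len(members) > 0:
                    means[k] = members.mean(axis=0)
        return labels, means

    # e.g. 10,000 pixels of 4-band data
    rng = np.random.default_rng(0)
    data = rng.normal(100.0, 30.0, size=(10_000, 4))
    labels, means = isodata_sketch(data)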

The image file created by ISODATA is the same as the image file that is created by a minimum distance classification, except for the nonconvergent pixels (100-T% of the pixels).

Advantages:
• Because it is iterative, clustering is not geographically biased to the top or bottom pixels of the data file.
• This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.
• A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:
• The clustering process is time-consuming, because it can repeat many times.
• Does not account for pixel spatial homogeneity.

Recommended Decision Rule

Although the ISODATA algorithm is the most similar to the minimum distance decision rule, the signatures can produce good results with any type of classification. Therefore, no particular decision rule is recommended over others. In most cases, the signatures created by ISODATA are merged, deleted, or appended to other signature sets.

Principal Component Method

Whereas clustering creates signatures depending on pixels' spectral reflectance by adding pixels together, the principal component method actually subtracts pixels. Principal Components Analysis (PCA) is a method of data compression. With it, you can eliminate data that is redundant by compacting it into fewer bands. The resulting bands are noncorrelated and independent. You may find these bands more interpretable than the source data. PCA can be performed on up to 256 bands with ERDAS IMAGINE. As a type of spectral enhancement, you are required to specify the number of components you want output from the original data.

Use the Merge and Delete options in the Signature Editor to manipulate signatures. Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering, generate signatures, and classify the resulting signatures.

RGB Clustering

The RGB Clustering and Advanced RGB Clustering functions in Image Interpreter create a thematic raster layer. However, no signature file is created and no other classification decision rule is used. In practice, RGB Clustering differs greatly from the other clustering methods, but it does employ a clustering algorithm.

RGB clustering is a simple classification and data compression technique for three bands of data. It is a fast and simple algorithm that quickly compresses a three-band image into a single band pseudocolor image, without necessarily classifying any particular features. The algorithm plots all pixels in 3-dimensional feature space and then partitions this space into clusters on a grid. In the more simplistic version of this function, each of these clusters becomes a class in the output thematic raster layer.

The advanced version requires that a minimum threshold on the clusters be set, so that only clusters at least as large as the threshold become output classes. This allows for more color variation in the output file. Pixels that do not fall into any of the remaining clusters are assigned to the cluster with the smallest city-block distance from the pixel. In this case, the city-block distance is calculated as the sum of the distances in the red, green, and blue directions in 3-dimensional space.

Along each axis of the three-dimensional scatterplot, each input histogram is scaled so that the partitions divide the histograms between specified limits—either a specified number of standard deviations above and below the mean, or between the minimum and maximum data values for each band.

The default number of divisions per band is listed below (a sketch of the partitioning follows this list):
• Red is divided into 7 sections (32 for advanced version)
• Green is divided into 6 sections (32 for advanced version)
• Blue is divided into 6 sections (32 for advanced version)
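For illustration only, the following NumPy sketch partitions three 8-bit bands into equal-width sections and fuses the section indices into a single class value. It is a simplification of the method described above: the actual functions scale each partition to the band histograms (standard-deviation or min/max limits), and the function name and equal-width sections here are assumptions:

    import numpy as np

    def rgb_cluster_sketch(red, green, blue, r_secs=7, g_secs=6, b_secs=6):
        """Map each pixel's (R, G, B) section triple to one output class.

        red, green, blue : integer arrays of 8-bit data file values (0-255)
        """
        r_bin = np.minimum(red * r_secs // 256, r_secs - 1)
        g_bin = np.minimum(green * g_secs // 256, g_secs - 1)
        b_bin = np.minimum(blue * b_secs // 256, b_secs - 1)
        # Fuse the three section indices into a single pseudocolor class value
        return (r_bin * g_secs + g_bin) * b_secs + b_bin

With the default 7, 6, and 6 sections, the output can take at most 7 × 6 × 6 = 252 distinct class values, matching the class count discussed in the Tips below.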

Figure 175: RGB Clustering (input histograms for the R, G, and B bands with their partitions; the highlighted cluster contains pixels between 16 and 34 in red, between 35 and 55 in green, and between 0 and 16 in blue)

Partitioning Parameters

It is necessary to specify the number of R, G, and B sections in each dimension of the 3-dimensional scatterplot. The number of sections should vary according to the histograms of each band. Broad histograms should be divided into more sections, and narrow histograms should be divided into fewer sections (see Figure 175).

It is possible to interactively change these parameters in the RGB Clustering function in the Image Interpreter. The number of classes is calculated based on the current parameters, and it displays on the command screen.

Advantages:
• The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.
• Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.

Disadvantages:
• Exactly three bands must be input, which is not suitable for all applications.
• Does not always create thematic classes that can be analyzed for informational purposes.

A further advantage of the Advanced version is that it is a highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

Tips

Some starting values that usually produce good results with the simple RGB clustering are R = 7, G = 6, B = 6, which results in 7 × 6 × 6 = 252 classes. To decrease the number of output colors/classes or to darken the output, decrease these values. For the Advanced RGB clustering function, start with higher values for R, G, and B. Adjust by raising the threshold parameter and/or decreasing the R, G, and B parameter values until the desired number of output classes is obtained.

Signature Files

A signature is a set of data that defines a training sample, feature space object (AOI), or cluster. The signature is used in a classification process. Each classification decision rule (algorithm) requires some signature attributes as input—these are stored in the signature file (.sig). Signatures in ERDAS IMAGINE can be parametric or nonparametric.

The following attributes are standard for all signatures (parametric and nonparametric):
• name—identifies the signature and is used as the class name in the output thematic raster layer. The default signature name is Class <number>.
• color—the color for the signature and the color for the class in the output thematic raster layer. This color is also used with other signature visualization functions, such as alarms, masking, ellipses, etc.
• value—the output class value for the signature. The output class value does not necessarily need to be the class number of the signature. This value should be a positive integer.

• order—the order to process the signatures for order-dependent processes, such as signature alarms and parallelepiped classifications.
• parallelepiped limits—the limits used in the parallelepiped classification.

Parametric Signature

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. A parametric signature includes the following attributes in addition to the standard attributes for signatures:
• the number of bands in the input image (as processed by the training program)
• the minimum and maximum data file value in each band for each sample or cluster (minimum vector and maximum vector)
• the mean data file value in each band for each sample or cluster (mean vector)
• the covariance matrix for each sample or cluster
• the number of pixels in the sample or cluster

Information on these statistics can be found in "Math Topics" on page 697.

Nonparametric Signature

A nonparametric signature is based on an AOI that you define in the feature space image for the image file being classified. A nonparametric classifier uses a set of nonparametric signatures to assign pixels to a class based on their location, either inside or outside the area in the feature space image.

The format of the .sig file is described in the On-Line Help.

Evaluating Signatures

Once signatures are created, they can be evaluated, deleted, renamed, and merged with signatures from other files. Merging signatures enables you to perform complex classifications with signatures that are derived from more than one training method (supervised and/or unsupervised, parametric and/or nonparametric).

Use the Signature Editor to view the contents of each signature, manipulate signatures, and perform your own mathematical tests on the statistics.

Using Signature Data

There are tests to perform that can help determine whether the signature data are a true representation of the pixels to be classified for each class. You can evaluate signatures that were created either from supervised or unsupervised training. The evaluation methods in ERDAS IMAGINE include:
• Alarm—using your own pattern recognition ability, you view the estimated classified area for a signature (using the parallelepiped decision rule) against a display of the original image.
• Ellipse—view ellipse diagrams and scatterplots of data file values for every pair of bands.
• Contingency matrix—do a quick classification of the pixels in a set of training samples to see what percentage of the sample pixels are actually classified as expected. These percentages are presented in a contingency matrix. This method is for supervised training only, for which polygons of the training samples exist.
• Divergence—measure the divergence (statistical distance) between signatures and determine band subsets that maximize the classification.
• Statistics and histograms—analyze statistics and histograms of the signatures to make evaluations and comparisons.

NOTE: If the signature is nonparametric (i.e., a feature space signature), you can use only the alarm evaluation method.

After analyzing the signatures, it would be beneficial to merge or delete them, eliminate redundant bands from the data, add new bands of data, or perform any other operations to improve the classification.

Alarm

The alarm evaluation enables you to compare an estimated classification of one or more signatures against the original data, as it appears in the Viewer. According to the parallelepiped decision rule, the pixels that fit the classification criteria are highlighted in the displayed image. You also have the option to indicate an overlap by having it appear in a different color. With this test, you can use your own pattern recognition skills, or some ground truth data, to determine the accuracy of a signature.

Use the Signature Alarm utility in the Signature Editor to perform n-dimensional alarms on the image in the Viewer, using the parallelepiped decision rule. The alarm utility creates a functional layer, and the Viewer allows you to toggle between the image layer and the functional layer.

Ellipse

In this evaluation, ellipses of concentration are calculated with the means and standard deviations stored in the signature file. It is also possible to generate parallelepiped rectangles, means, and labels. In this evaluation, the mean and the standard deviation of every signature are used to represent the ellipse in 2-dimensional feature space. The ellipse is displayed in a feature space image.

Ellipses are explained and illustrated in Feature Space Images on page 708 under the discussion of Scatterplots.

When the ellipses in the feature space image show extensive overlap, then the spectral characteristics of the pixels represented by the signatures cannot be distinguished in the two bands that are graphed. In the best case, there is no overlap. Some overlap, however, is expected.

Figure 176 shows how ellipses are plotted and how they can overlap. The first graph shows how the ellipses are plotted based on the range of 2 standard deviations from the mean. This range can be altered, changing the ellipse plots. Analyzing the plots with differing numbers of standard deviations is useful for determining the limits of a parallelepiped classification.

Then. However. then a high percentage of each sample’s pixels is classified as expected. you can determine which signatures and which bands provide accurate classification results. etc. In this evaluation. Each sample pixel only weights the statistics that determine the classes. μ A2 μ A2 +2s Band C data file values Use the Signature Editor to create a feature space image and to view an ellipse(s) of signature data. a contingency matrix is presented. μB2 = mean in Band B for signature 2. if the signature statistics for each sample are distinct from those of the other samples. which contains the number and percentages of pixels that are classified as expected. a quick classification of the sample pixels is performed using the minimum distance. μ A1= mean in Band A for signature 1. μ A2= mean in Band A for signature 2. Use the Signature Editor to perform the contingency matrix evaluation.Figure 176: Ellipse Evaluation of Signatures Signature Overlap Band D data file values signature 1 Distinct Signatures Band B data file values μμB2+2 s B2 +2s μB2 μB2 μ μB2-2s B2 -2 s signature 2 μD1 μ D2 signature 1 signature 2 μA2 Band A data file values μA2+2s μA2-2s μC2 C2 μC1 μC1 μ A2 -2 s By analyzing the ellipse graphs for all band pairs. etc. 568 Classification . maximum likelihood. The pixels of each training sample are not always so homogeneous that every pixel in a sample is actually classified to its corresponding class. or Mahalanobis distance decision rule. μA2 = mean in Band A for signature 2. Contingency Matrix NOTE: This evaluation classifies all of the pixels in the selected AOIs and compares the results to the pixels of a training sample.

All of these formulas take into account the covariances of the signatures in the bands being compared. Refer to "Math Topics" on page 697 for information on the mean vector and covariance matrix. The spectral distance is also the basis of the minimum distance classification (as explained below). Separability can be calculated for any combination of bands that is used in the classification. then they may not be distinct enough to produce a successful classification. There are three options for calculating the separability. as well as the mean vectors of the signatures. computing the distances between signatures helps you predict the results of a minimum distance classification. Therefore. For the distance (Euclidean) evaluation. 1978 Classification 569 .Separability Signature separability is a statistical measure of distance between two signatures. If the spectral distance between two samples is not significant for any pair of bands. Therefore. evaluating signature separability helps you predict the results of a maximum likelihood classification. Use the Signature Editor to compute signature separability and distance and automatically generate the report. The maximum likelihood decision rule is explained below.tr ( ( C i – C j ) ( μ i – μ j ) ( μ i – μ j ) ) 2 2 Where: i and j = Ci = μi = tr = T = the two signatures (classes) being compared the covariance matrix of signature i the mean vector of signature i the trace function (matrix algebra) the transposition function Source: Swain and Davis. the spectral distance between the mean vectors of each pair of signatures is computed. Divergence The formula for computing Divergence (Dij) is as follows: 1 1 T –1 –1 –1 –1 D ij = -.tr ( ( C i – C j ) ( C i – C j ) ) + -. enabling you to rule out any bands that are not useful in the results of the classification. The formulas used to calculate separability are related to the maximum likelihood decision rule.

Transformed Divergence

The formula for computing Transformed Divergence (TD) is as follows:

    Dij = (1/2) tr((Ci - Cj)(Cj^-1 - Ci^-1)) + (1/2) tr((Ci^-1 + Cj^-1)(μi - μj)(μi - μj)^T)

    TDij = 2000 (1 - exp(-Dij / 8))

Where:
    i and j = the two signatures (classes) being compared
    Ci = the covariance matrix of signature i
    μi = the mean vector of signature i
    tr = the trace function (matrix algebra)
    T = the transposition function

Source: Swain and Davis, 1978

According to Jensen, the transformed divergence "gives an exponentially decreasing weight to increasing distances between the classes." The scale of the divergence values can range from 0 to 2,000. Interpreting your results after applying transformed divergence requires you to analyze those numerical divergence values. As a general rule, if the result is greater than 1,900, then the classes can be separated. Between 1,700 and 1,900, the separation is fairly good. Below 1,700, the separation is poor (Jensen, 1996).

Jeffries-Matusita Distance

The formula for computing Jeffries-Matusita Distance (JM) is as follows:

    α = (1/8) (μi - μj)^T ((Ci + Cj)/2)^-1 (μi - μj) + (1/2) ln( |(Ci + Cj)/2| / sqrt(|Ci| × |Cj|) )

    JMij = sqrt(2 (1 - e^-α))

Where:
    i and j = the two signatures (classes) being compared
    Ci = the covariance matrix of signature i
    μi = the mean vector of signature i
    ln = the natural logarithm function
    |Ci| = the determinant of Ci (matrix algebra)

Source: Swain and Davis, 1978
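Both measures are small extensions of the divergence sketch above; the following continues it (same assumptions apply, and divergence() is the function defined in the preceding sketch):

    import numpy as np
    # assumes the divergence() function from the preceding sketch

    def transformed_divergence(mu_i, cov_i, mu_j, cov_j):
        """TD saturates at 2000; values above roughly 1900 suggest separable classes."""
        d_ij = divergence(mu_i, cov_i, mu_j, cov_j)
        return 2000.0 * (1.0 - np.exp(-d_ij / 8.0))

    def jeffries_matusita(mu_i, cov_i, mu_j, cov_j):
        """JM distance; multiply by 1000 to match the values IMAGINE reports."""
        d = (mu_i - mu_j)[:, None]
        c_avg = (cov_i + cov_j) / 2.0
        alpha = (0.125 * (d.T @ np.linalg.inv(c_avg) @ d).item()
                 + 0.5 * np.log(np.linalg.det(c_avg)
                                / np.sqrt(np.linalg.det(cov_i) * np.linalg.det(cov_j))))
        return np.sqrt(2.0 * (1.0 - np.exp(-alpha)))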

According to Jensen, "The JM distance has a saturating behavior with increasing class separation like transformed divergence. However, it is not as computationally efficient as transformed divergence" (Jensen, 1996).

Separability

Both transformed divergence and Jeffries-Matusita distance have upper and lower bounds. If the calculated divergence is equal to the appropriate upper bound, then the signatures can be said to be totally separable in the bands being studied. A calculated divergence of zero means that the signatures are inseparable.
• TD is between 0 and 2000.
• JM is between 0 and 1414. That is, the JM values that IMAGINE reports are those resulting from multiplying the values in the formula times 1000.

A separability listing is a report of the computed divergence for every class pair and one band combination. The listing contains every divergence value for the bands studied for every possible pair of signatures. The separability listing also contains the average divergence and the minimum divergence for the band set. These numbers can be compared to other separability listings (for other band combinations), to determine which set of bands is the most useful for classification.

Weight Factors

As with the Bayesian classifier (explained below with maximum likelihood), weight factors may be specified for each signature. These weight factors are based on a priori probabilities that any given pixel is assigned to each class. For example, if you know that twice as many pixels should be assigned to Class A as to Class B, then Class A should receive a weight factor that is twice that of Class B.

NOTE: The weight factors do not influence the divergence equations (for TD or JM), but they do influence the report of the best average and best minimum separability.

The weight factors for each signature are used to compute a weighted divergence with the following calculation:

    Wij = [ Σ(i=1 to c-1) Σ(j=i+1 to c) fi fj Uij ] / [ (1/2) ( (Σ(i=1 to c) fi)² - Σ(i=1 to c) fi² ) ]

Where:
    i and j = the two signatures (classes) being compared
    Uij = the unweighted divergence between i and j
    Wij = the weighted divergence between i and j
    c = the number of signatures (classes)
    fi = the weight factor for signature i

Probability of Error

The Jeffries-Matusita distance is related to the pairwise probability of error, which is the probability that a pixel assigned to class i is actually in class j. Within a range, this probability can be estimated according to the expression below:

    (1/16) (2 - JMij²)² ≤ Pe ≤ 1 - (1/2) (1 + (1/2) JMij²)

Where:
    i and j = the signatures (classes) being compared
    JMij = the Jeffries-Matusita distance between i and j
    Pe = the probability that a pixel is misclassified from i to j

Source: Swain and Davis, 1978

Signature Manipulation

In many cases, training must be repeated several times before the desired signatures are produced. Signatures can be gathered from different sources—different training samples, feature space images, and different clustering programs—all using different techniques. After each signature file is evaluated, you may merge, delete, or create new signatures. The desired signatures can finally be moved to one signature file to be used in the classification.

The following operations upon signatures and signature files are possible with ERDAS IMAGINE:
• View the contents of the signature statistics
• View histograms of the samples or clusters that were used to derive the signatures
• Delete unwanted signatures
• Merge signatures together, so that they form one larger class when classified
• Append signatures from other files. You can combine signatures that are derived from different training methods for use in one classification.

Use the Signature Editor to view statistics and histogram listings and to delete, merge, append, and rename signatures within a signature file.

Classification Decision Rules

Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that pass the criteria that are established by the decision rule are then assigned to the class for that signature. ERDAS IMAGINE enables you to classify the data both parametrically with statistical representation, and nonparametrically as objects in feature space. Figure 177 on page 575 shows the flow of an image pixel through the classification decision making process in ERDAS IMAGINE (Kloer, 1994).

If a nonparametric rule is not set, then the pixel is classified using only the parametric rule. All of the parametric signatures are tested. If a nonparametric rule is set, the pixel is tested against all of the signatures with nonparametric definitions. This rule results in the following conditions:
• If the nonparametric test results in one unique class, the pixel is assigned to that class.
• If the nonparametric test results in zero classes (i.e., the pixel lies outside all the nonparametric decision boundaries), then the unclassified rule is applied. With this rule, the pixel is either classified by the parametric rule or left unclassified.

• If the pixel falls into more than one class as a result of the nonparametric test, the overlap rule is applied. With this rule, the pixel is either classified by the parametric rule, classified by the order of the signatures in the processing order, or left unclassified.

Nonparametric Rules

ERDAS IMAGINE provides these decision rules for nonparametric signatures:
• parallelepiped
• feature space

Unclassified Options

ERDAS IMAGINE provides these options if the pixel is not classified by the nonparametric rule:
• parametric rule
• unclassified

Overlap Options

ERDAS IMAGINE provides these options if the pixel falls into more than one feature space object:
• parametric rule
• by order
• unclassified

Parametric Rules

ERDAS IMAGINE provides these commonly-used decision rules for parametric signatures:
• minimum distance
• Mahalanobis distance
• maximum likelihood (with Bayesian variation)

Classification 575 . These limits can be either: • • • the minimum and maximum data file values of each band in the signature. the data file values of the candidate pixel are compared to upper and lower limits. the mean of each band. or any limits that you specify. plus and minus a number of standard deviations. based on your knowledge of the data and signatures. This knowledge may come from the signature evaluation techniques discussed above.Figure 177: Classification Flow Diagram Candidate Pixel No Nonparametric Rule Yes Resulting Number of Classes 1 0 >1 Unclassified Options Parametric Unclassified Overlap Options By Order Parametric Unclassified Parametric Rule Unclassified Assignment Class Assignment Parallelepiped In the parallelepiped decision rule.

Overlap Region In cases where a pixel may fall into the overlap region of two or more parallelepipeds. If one of the signatures is first and the other signature is fourth. There are high and low limits for every signature in every band.These limits can be set using the Parallelepiped Limits utility in the Signature Editor. 576 μA2-2s μA2 Classification . you must define how the pixel can be classified. Figure 178: Parallelepiped Classification With Two Standard Deviations as Limits l ? 3 class 3 ? ? 3 3 3 3 ? ? 3 3 ? 3 3 3 3 ? ? ? 3 3 ? ? 3 3 3 ? ? 2 2 2 2 3 3 3 ? 2 ? ? ? ? ? 2 2 2 ? ? 2 ? ? ? 2 2 2 ? ? ? 2 2 1 2 1 1 1 1 2 ? 2 1 class 1 1 2 2 ? ? ? 2 ? ? 3 ? ? 2 3 ? Band B data file values μB2+2s μB2 μB2-2s = pixels in class 1 = pixels in class 2 = pixels in class 3 = unclassified pixels μA2 = mean of Band A. When a pixel’s data file values are between the limits for every band in a signature. class 2 μB2 = mean of Band B. This order can be set in the Signature Editor. class 2 class 2 μA2+2s Band A data file values The large rectangles in Figure 178 are called parallelepipeds. the pixel is assigned to the first signature’s class. They are the regions within the limits for each signature. Figure 178 is a two-dimensional example of a parallelepiped classification. • The pixel can be classified by the order of the signatures. then the pixel is assigned to that signature’s class.

• The pixel can be classified by the defined parametric decision rule. The pixel is tested against the overlapping signatures only. If only one of the signatures is parametric, then the pixel is automatically assigned to that signature's class. If neither of these signatures is parametric, then the pixel is left unclassified.
• The pixel can be left unclassified.

Regions Outside of the Boundaries

If the pixel does not fall into one of the parallelepipeds, then you must define how the pixel can be classified.
• The pixel can be classified by the defined parametric decision rule. The pixel is tested against all of the parametric signatures. If none of the signatures is parametric, then the pixel is left unclassified.
• The pixel can be left unclassified.

Use the Supervised Classification utility in the Signature Editor to perform a parallelepiped classification.

Advantages:
• Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.
• Often useful for a first-pass, broad classification, this decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations (e.g., minimum distance, Mahalanobis distance, or maximum likelihood) are made, thus cutting processing time.
• Not dependent on normal distributions.

Disadvantages:
• Since parallelepipeds have corners, pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 179.
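The rule itself reduces to a per-band range test. A minimal NumPy sketch follows; the function name and array layout are illustrative assumptions:

    import numpy as np

    def parallelepiped_hits(pixel, low, high):
        """Indices of signatures whose band limits all contain the pixel.

        pixel     : (n_bands,) candidate measurement vector
        low, high : (n_classes, n_bands) per-signature lower and upper limits,
                    e.g. mean minus/plus two standard deviations per band
        An empty result invokes the unclassified options; more than one
        hit invokes the overlap options.
        """
        inside = np.all((pixel >= low) & (pixel <= high), axis=1)
        return np.flatnonzero(inside)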

Figure 179: Parallelepiped Corners Compared to the Signature Ellipse (a candidate pixel in the corner of the parallelepiped boundary falls inside the class limits even though it lies well outside the signature ellipse)

Feature Space

The feature space decision rule determines whether or not a candidate pixel lies within the nonparametric signature in the feature space image. When a pixel's data file values are in the feature space signature, then the pixel is assigned to that signature's class. Figure 180 is a two-dimensional example of a feature space classification. The polygons in this figure are AOIs used to define the feature space signatures.

Figure 180: Feature Space Classification (pixels in classes 1, 2, and 3, and unclassified pixels, plotted against Band A and Band B data file values; the class polygons are the AOIs)

Overlap Region

In cases where a pixel may fall into the overlap region of two or more AOIs, you must define how the pixel can be classified.

• The pixel can be classified by the order of the feature space signatures. If one of the signatures is first and the other signature is fourth, the pixel is assigned to the first signature's class. This order can be set in the Signature Editor.
• The pixel can be classified by the defined parametric decision rule. The pixel is tested against the overlapping signatures only. If neither of these feature space signatures is parametric, then the pixel is left unclassified.
• The pixel can be left unclassified.

Regions Outside of the AOIs

If the pixel does not fall into one of the AOIs for the feature space signatures, then you must define how the pixel can be classified.
• The pixel can be classified by the defined parametric decision rule. The pixel is tested against all of the parametric signatures. If only one of the signatures is parametric, then the pixel is assigned automatically to that signature's class. If none of the signatures is parametric, then the pixel is left unclassified.
• The pixel can be left unclassified.

Use the Decision Rules utility in the Signature Editor to perform a feature space classification.

Advantages:
• Often useful for a first-pass, broad classification.
• Provides an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.
• The feature space method is fast.

Disadvantages:
• The feature space decision rule allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.

Minimum Distance

The minimum distance decision rule (also called spectral distance) calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature.

Figure 181: Minimum Spectral Distance (the spectral distances from a candidate pixel to the means μ1, μ2, and μ3 of three signatures in Band A/Band B feature space)

In Figure 181 on page 580, spectral distance is illustrated by the lines from the candidate pixel to the means of the three signatures. The candidate pixel is assigned to the class with the closest mean.

The equation for classifying by spectral distance is based on the equation for Euclidean distance:

    SDxyc = sqrt( Σ(i=1 to n) (μci - Xxyi)² )

Where:
    n = number of bands (dimensions)
    i = a particular band
    c = a particular class
    Xxyi = data file value of pixel x,y in band i
    μci = mean of data file values in band i for the sample for class c
    SDxyc = spectral distance from pixel x,y to the mean of class c

Source: Swain and Davis, 1978
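A minimal NumPy sketch of the rule (function name and array layout are illustrative assumptions):

    import numpy as np

    def minimum_distance(pixel, means):
        """Assign the candidate pixel to the class with the closest mean vector.

        pixel : (n_bands,) measurement vector of pixel x,y
        means : (n_classes, n_bands) signature mean vectors
        Returns the class index and the spectral distance SD to its mean.
        """
        sd = np.sqrt(((means - pixel) ** 2).sum(axis=1))   # Euclidean distance per class
        c = int(sd.argmin())
        return c, sd[c]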

When spectral distance is computed for all possible values of c (all possible classes), the class of the candidate pixel is assigned to the class for which SD is the lowest.

Advantages:
• Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.
• The fastest decision rule to compute, except for parallelepiped.

Disadvantages:
• Pixels that should be unclassified (i.e., they are not spectrally close to the mean of any sample, within limits that are reasonable to you) become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion on Thresholding on page 589.)
• Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.

Mahalanobis Distance

The Mahalanobis distance algorithm assumes that the histograms of the bands have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied lead to similarly varied classes, and vice versa. For example, when classifying urban areas—typically a class whose pixels vary widely—correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis, 1978).

The equation for the Mahalanobis distance classifier is as follows:

    D = (X - Mc)^T (Covc^-1) (X - Mc)

Where:
    D = Mahalanobis distance
    c = a particular class
    X = the measurement vector of the candidate pixel
    Mc = the mean vector of the signature of class c
    Covc = the covariance matrix of the pixels in the signature of class c
    Covc^-1 = inverse of Covc
    T = transposition function

The pixel is assigned to the class, c, for which D is the lowest.

Advantages:
• Takes the variability of classes into account, unlike minimum distance or parallelepiped.
• May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account, but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed.

Disadvantages:
• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.
• Slower to compute than parallelepiped or minimum distance.
• Mahalanobis distance is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
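A minimal NumPy sketch of the rule (function name and array layout are illustrative assumptions):

    import numpy as np

    def mahalanobis_rule(pixel, means, covs):
        """Assign the pixel to the class with the lowest Mahalanobis distance D.

        pixel : (n_bands,) measurement vector X
        means : (n_classes, n_bands) mean vectors Mc
        covs  : (n_classes, n_bands, n_bands) covariance matrices Covc
        """
        d = np.empty(len(means))
        for c in range(len(means)):
            diff = pixel - means[c]
            d[c] = diff @ np.linalg.inv(covs[c]) @ diff   # (X-Mc)^T Covc^-1 (X-Mc)
        return int(d.argmin()), d.min()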

Maximum Likelihood/Bayesian

The maximum likelihood algorithm assumes that the histograms of the bands of data have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

The maximum likelihood decision rule is based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

Bayesian Classifier

If you have a priori knowledge that the probabilities are not equal for all classes, you can specify weight factors for particular classes. This variation of the maximum likelihood decision rule is known as the Bayesian decision rule (Hord, 1982). Unless you have a priori knowledge of the probabilities, it is recommended that they not be specified. In this case, these weights default to 1.0 in the equation.

The equation for the maximum likelihood/Bayesian classifier is as follows:

    D = ln(ac) - [0.5 ln(|Covc|)] - [0.5 (X - Mc)^T (Covc^-1) (X - Mc)]

Where:
    D = weighted distance (likelihood)
    c = a particular class
    X = the measurement vector of the candidate pixel
    Mc = the mean vector of the sample of class c
    ac = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori knowledge)
    Covc = the covariance matrix of the pixels in the sample of class c
    |Covc| = determinant of Covc (matrix algebra)
    Covc^-1 = inverse of Covc (matrix algebra)
    ln = natural logarithm function
    T = transposition function (matrix algebra)

The pixel is assigned to the class, c, for which D is the highest (as written, D is a log likelihood rather than a distance, so the largest value marks the most probable class).

The inverse and determinant of a matrix, along with the difference and transposition of vectors, would be explained in a textbook of matrix algebra.

Advantages:
• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.
• Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.

Disadvantages:
• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.
• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.
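A minimal NumPy sketch of the rule (function name, array layout, and the priors argument standing in for ac are illustrative assumptions):

    import numpy as np

    def maximum_likelihood_rule(pixel, means, covs, priors=None):
        """Evaluate D for every class and return the most likely class.

        priors corresponds to a_c; it defaults to 1.0 for every class,
        which reduces the Bayesian form to plain maximum likelihood.
        """
        n_classes = len(means)
        if priors is None:
            priors = np.ones(n_classes)
        d = np.empty(n_classes)
        for c in range(n_classes):
            diff = pixel - means[c]
            d[c] = (np.log(priors[c])
                    - 0.5 * np.log(np.linalg.det(covs[c]))
                    - 0.5 * diff @ np.linalg.inv(covs[c]) @ diff)
        return int(d.argmax())      # largest D = most probable class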

Disadvantages Maximum likelihood is parametric. “Clearly. Fuzzy Convolution The Fuzzy Convolution operation creates a single classification layer by calculating the total weighted inverse distance of all the classes in a window of pixels. meaning that it relies heavily on a normal distribution of the data in each input band. a pixel cannot be definitively assigned to one category. that is. 584 Classification . Fuzzy classification works using a membership function. Like traditional classification. Fuzzy classification is designed to help you work with data that may not fall into exactly one category or another. Using the multilayer classification and distance file. fuzzy classification still uses training. If there is a large dispersion of the pixels in a cluster or training sample. the training sites do not have to have pixels that are exactly the same. Jensen notes that. and each pixel can belong to several different classes (Jensen. In the fuzzy method. Then. 1996). 1996). but the biggest difference is that “it is also possible to obtain information on the various constituent classes found in a mixed pixel. it assigns the center pixel in the class with the largest total inverse distance summed over the entire set of fuzzy classification layers.Advantages Takes the variability of classes into account by using the covariance matrix. . wherein a pixel’s value is determined by whether it is closer to one class than another. the fuzzy convolution utility allows you to perform a moving window convolution on a fuzzy classification with multiple output class assignments.” (Jensen. then the covariance matrix of that signature contains large values. there needs to be a way to make the classification algorithms more sensitive to the imprecise (fuzzy) nature of the real world” (Jensen. . A fuzzy classification does not have definite boundaries. Fuzzy Methodology Fuzzy Classification The Fuzzy Classification method takes into account that there are pixels of mixed make-up. Jensen goes on to explain that the process of collecting training sites in a fuzzy classification is not as strict as a traditional classification. Tends to overclassify signatures with relatively large values in the covariance matrix. 1996). as does Mahalanobis distance. the convolution creates a new single class output file by computing a total weighted distance for all classes in the window. Once you have a fuzzy classification.

This has the effect of creating a context-based classification to reduce the speckle or salt and pepper in the classification. Classes with a very small distance value remain unchanged, while classes with higher distance values may change to a neighboring value if there is a sufficient number of neighboring pixels with class values and small corresponding distance values.

The following equation is used in the calculation:

    T[k] = Σ(i=0 to s) Σ(j=0 to s) Σ(l=0 to n) wij / Dijl[k]

Where:
    i = row index of window
    j = column index of window
    s = size of window (3, 5, or 7)
    l = layer index of fuzzy set
    n = number of fuzzy layers used
    W = weight table for window
    k = class value
    D[k] = distance file value for class k
    T[k] = total weighted distance of window for class k

The center pixel is assigned the class with the maximum T[k].
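The following sketch evaluates the equation at a single window position; the function name, array layout, and the small floor guarding against zero distances are illustrative assumptions:

    import numpy as np

    def fuzzy_convolution_window(classes, distances, weights):
        """Total weighted inverse distance T[k] for one window position.

        classes   : (n_layers, s, s) class values of the fuzzy layers in the window
        distances : (n_layers, s, s) matching distance file values D_ijl[k]
        weights   : (s, s) weight table w_ij for the window
        Returns the class with the maximum T[k] for the center pixel.
        """
        totals = {}
        n_layers, s, _ = classes.shape
        for l in range(n_layers):
            for i in range(s):
                for j in range(s):
                    k = int(classes[l, i, j])
                    # accumulate w_ij / D_ijl[k]; tiny floor avoids division by zero
                    totals[k] = totals.get(k, 0.0) + weights[i, j] / max(distances[l, i, j], 1e-6)
        return max(totals, key=totals.get)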

Expert Classification

Expert classification can be performed using the IMAGINE Expert Classifier™. The expert classification software provides a rules-based approach to multispectral image classification, post-classification refinement, and GIS modeling. In essence, an expert classification system is a hierarchy of rules, or a decision tree, that describes the conditions under which a set of low level constituent information gets abstracted into a set of high level informational classes. The constituent information consists of user-defined variables and includes raster imagery, vector coverages, spatial models, external programs, and simple scalars.

A rule is a conditional statement, or list of conditional statements, about the variable's data values and/or attributes that determine an informational component or hypothesis. Multiple rules and hypotheses can be linked together into a hierarchy that ultimately describes a final set of target informational classes or terminal hypotheses. Confidence values associated with each condition are also combined to provide a confidence image corresponding to the final output classified image.

The IMAGINE Expert Classifier is composed of two parts: the Knowledge Engineer and the Knowledge Classifier. The Knowledge Engineer provides the interface for an expert with first-hand knowledge of the data and the application to identify the variables, rules, and output classes of interest and create the hierarchical decision tree. The Knowledge Classifier provides an interface for a nonexpert to apply the knowledge base and create the output classification.

Knowledge Engineer

With the Knowledge Engineer, you can open knowledge bases, which are presented as decision trees in editing windows.

Figure 182: Knowledge Engineer Editing Window

In Figure 182, the upper left corner of the editing window is an overview of the entire decision tree, with a green box indicating the position within the knowledge base of the currently displayed portion of the decision tree. This box can be dragged to change the view of the decision tree graphic in the display window on the right. The branch containing the currently selected hypothesis, rule, or condition is highlighted in the overview.

The decision tree grows in depth when the hypothesis of one rule is referred to by a condition of another rule. The terminal hypotheses of the decision tree represent the final classes of interest. Intermediate hypotheses may also be flagged as being a class of interest. This may occur when there is an association between classes.

Figure 183: Example of a Decision Tree Branch (the hypothesis Good Location is determined by the rule Gentle Southern Slope, whose conditions are Aspect > 135, Aspect <= 225, Slope > 0, and Slope < 12)

In this example, the rule, which is Gentle Southern Slope, determines the hypothesis, Good Location. The rule has four conditions depicted on the right side, all of which must be satisfied for the rule to be true. However, the rule may be split if either Southern or Gentle slope defines the Good Location hypothesis. While both conditions must still be true to fire a rule, only one rule must be true to satisfy the hypothesis.

Figure 184: Split Rule Decision Tree Branch (the Good Location hypothesis is now satisfied by either of two rules, Southern Slope with conditions Aspect > 135 and Aspect <= 225, or Gentle Slope with conditions Slope > 0 and Slope < 12)

Variable Editor

The Knowledge Engineer also makes use of a Variable Editor when classifying images. The Variable editor provides for the definition of the variable objects to be used in the rules conditions.

The two types of variables are raster and scalar. Raster variables may be defined by imagery, feature layers (including vector layers), graphic spatial models, or by running other programs. Scalar variables may be defined with an explicit value, or defined as the output from a model or external program.

Evaluating the Output of the Knowledge Engineer

The task of creating a useful, well-constructed knowledge base requires numerous iterations of trial, evaluation, and refinement. To facilitate this process, two options are provided. First, you can use the Test Classification to produce a test classification using the current knowledge base. Second, you can use the Classification Pathway Cursor to evaluate the results. This tool allows you to move a crosshair over the image in a Viewer to establish a confidence level for areas in the image.

Knowledge Classifier

The Knowledge Classifier is composed of two parts: an application with a user interface, and a command line executable. The user interface application allows you to input a limited set of parameters to control the use of the knowledge base. It is designed as a wizard to lead you through pages of input parameters. After selecting a knowledge base, you are prompted to select classes. The following is an example classes dialog:

Figure 185: Knowledge Classifier Classes of Interest

After you select classes, the classification output options (output files, output area, output cell size, and output map projection), and the input data for classification, the Knowledge Classifier process can begin. An inference engine then evaluates all hypotheses at each location (calculating variable values, if required), and assigns the hypothesis with the highest confidence. The output of the Knowledge Classifier is a thematic image, and optionally, a confidence image.

Evaluating Classification

After a classification is performed, these methods are available for testing the accuracy of the classification:
• Thresholding—Use a probability image file to screen out misclassified pixels.
• Accuracy Assessment—Compare the classification to ground truth or other data.

Thresholding

Thresholding is the process of identifying the pixels in a classified image that are the most likely to be classified incorrectly. These pixels are put into another class (usually class 0). These pixels are identified statistically, based upon the distance measures that were used in the classification decision rule.

Distance File

When a minimum distance, Mahalanobis distance, or maximum likelihood classification is performed, a distance image file can be produced in addition to the output thematic raster layer. A distance image file is a one-band, 32-bit offset continuous raster layer in which each data file value represents the result of a spectral distance equation, depending upon the decision rule used.
• In a minimum distance classification, each distance value is the Euclidean spectral distance between the measurement vector of the pixel and the mean vector of the pixel's class.
• In a Mahalanobis distance or maximum likelihood classification, the distance value is the Mahalanobis distance between the measurement vector of the pixel and the mean vector of the pixel's class.

The brighter pixels (with the higher distance file values) are spectrally farther from the signature means for the classes to which they are assigned. They are more likely to be misclassified. The darker pixels are spectrally nearer, and more likely to be classified correctly. If supervised training was used, the darkest pixels are usually the training samples.
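To make the two cases above concrete, the following NumPy sketch produces a per-pixel distance value alongside the class assignment, reusing the Euclidean and Mahalanobis forms given earlier; function names and array layouts are illustrative assumptions:

    import numpy as np

    def classify_with_distance(pixels, means, covs=None):
        """Return (class image, distance image) for flattened pixels.

        pixels : (num_pixels, n_bands); means : (n_classes, n_bands)
        covs   : optional (n_classes, n_bands, n_bands); if given, the
                 distance image holds Mahalanobis distances, otherwise
                 Euclidean spectral distances.
        """
        n_classes = len(means)
        dist = np.empty((len(pixels), n_classes))
        for c in range(n_classes):
            diff = pixels - means[c]
            if covs is None:
                dist[:, c] = np.sqrt((diff ** 2).sum(axis=1))           # Euclidean
            else:
                inv = np.linalg.inv(covs[c])
                dist[:, c] = np.einsum('pi,ij,pj->p', diff, inv, diff)  # Mahalanobis
        labels = dist.argmin(axis=1)
        return labels, dist[np.arange(len(pixels)), labels]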

590 Classification . This distribution is called a chi-square distribution.Figure 186: Histogram of a Distance Image number of pixels 0 0 distance value Figure 186 shows how the histogram of the distance image usually appears. This option enables you to select a chi-square value by selecting the cutoff value in the distance histogram. which is a symmetrical bell curve. thresholding has the effect of cutting the tail off of the histogram of the distance image file. as opposed to a normal distribution. so that the threshold can be calculated statistically. Threshold The pixels that are the most likely to be misclassified have the higher distance file values at the tail of this histogram. At some point that you define—either mathematically or visually—the tail of this histogram is cut off. To determine the threshold: • interactively change the threshold with the mouse. or input a chi-square parameter or distance measurement. when a distance histogram is displayed while using the threshold function. The cutoff point is the threshold. • In both cases. representing the pixels with the highest distance values.

Figure 187: Interactive Thresholding Tips

Figure 187 shows some example distance histograms. With each example is an explanation of what the curve might mean, and how to threshold it:
• Smooth chi-square shape—try to find the breakpoint where the curve becomes more horizontal, and cut off the tail.
• Minor mode(s) (peaks) in the curve—these probably indicate that the class picked up other features that were not represented in the signature. You probably want to threshold these features out.
• Peak of the curve is shifted from 0—indicates that the signature mean is off-center from the pixels it represents. You may need to take a new signature and reclassify.
• Not a good class—the signature for this class probably represented a polymodal (multipeaked) distribution.

Chi-square Statistics

If the minimum distance classifier is used, then the threshold is simply a certain spectral distance. However, if Mahalanobis or maximum likelihood are used, then chi-square statistics are used to compare probabilities (Swain and Davis, 1978).

When statistics are used to calculate the threshold, the threshold is more clearly defined as follows: T is the distance value at which C% of the pixels in a class have a distance value greater than or equal to T.

Where:
    T = the threshold for a class
    C% = the percentage of pixels that are believed to be misclassified, known as the confidence level
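As explained in the next paragraphs, T is obtained from chi-square statistics, with the number of bands as the degrees of freedom. A sketch using SciPy follows; SciPy is an assumption for illustration (IMAGINE has the chi-square table built into the threshold application):

    from scipy.stats import chi2

    def chi_square_threshold(n_bands, confidence):
        """Distance threshold T for a given confidence level C.

        n_bands    : degrees of freedom (number of bands used to classify)
        confidence : C, the fraction of pixels believed to be misclassified
        Pixels whose distance value is >= T are moved to class 0.
        """
        # C% of a chi-square distribution lies at or above this value
        return chi2.ppf(1.0 - confidence, df=n_bands)

    # e.g. 4 bands, 5% of pixels assumed misclassified
    print(chi_square_threshold(n_bands=4, confidence=0.05))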

A further discussion of chi-square statistics can be found in a statistics text.

Use the Classification Threshold utility to perform the thresholding.

Accuracy Assessment

Accuracy assessment is a general term for comparing the classification to geographical data that are assumed to be true, in order to determine the accuracy of the classification process. Usually, the assumed-true data are derived from ground truth data.

It is usually not practical to ground truth or otherwise test every pixel of a classified image. Therefore, a set of reference pixels is usually used. Reference pixels are points on the classified image for which actual data are (or will be) known. The reference pixels are randomly selected (Congalton, 1991).

NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility to perform an accuracy assessment for any thematic layer. This layer does not have to be classified by ERDAS IMAGINE (e.g., you can run an accuracy assessment on a thematic layer that was classified in ERDAS Version 7.5 and imported into ERDAS IMAGINE).

Random Reference Pixels

When reference pixels are selected by the analyst, it is often tempting to select the same pixels for testing the classification that were used in the training samples. This biases the test, since the training samples are the basis of the classification. By allowing the reference pixels to be selected at random, the possibility of bias is lessened or eliminated (Congalton, 1991).

The number of reference pixels is an important factor in determining the accuracy of the classification. It has been shown that more than 250 reference pixels are needed to estimate the mean accuracy of a class to within plus or minus five percent (Congalton, 1991).

ERDAS IMAGINE uses a square window to select the reference pixels. The size of the window can be defined by you. Three different types of distribution are offered for selecting the random pixels (each is sketched in the example following this list):

• random—no rules are used
• stratified random—the number of points is stratified to the distribution of thematic layer classes
• equalized random—each class has an equal number of random points

Use the Accuracy Assessment utility to generate random reference points.
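The following Python sketch illustrates the three schemes. It is an illustration only, not the Accuracy Assessment utility itself; it assumes the classified thematic layer is held as a two-dimensional NumPy array of class values, and the function name is made up for the example.

import numpy as np

def reference_points(classified, n_points, scheme="random", seed=None):
    # Returns (row, col) coordinates of randomly selected reference pixels.
    rng = np.random.default_rng(seed)
    rows, cols = classified.shape
    if scheme == "random":
        # random--no rules are used
        idx = rng.choice(rows * cols, size=n_points, replace=False)
        return np.column_stack(np.unravel_index(idx, (rows, cols)))
    classes = np.unique(classified)
    points = []
    for c in classes:
        locs = np.argwhere(classified == c)
        if scheme == "stratified":
            # stratified random--points proportional to each class's
            # share of the thematic layer
            k = max(1, round(n_points * len(locs) / classified.size))
        elif scheme == "equalized":
            # equalized random--the same number of points per class
            k = n_points // len(classes)
        else:
            raise ValueError(scheme)
        k = min(k, len(locs))
        points.append(locs[rng.choice(len(locs), size=k, replace=False)])
    return np.vstack(points)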

Accuracy Assessment CellArray

An Accuracy Assessment CellArray is created to compare the classified image with reference data. This CellArray is simply a list of class values for the pixels in the classified image file and the class values for the corresponding reference pixels. The class values for the reference pixels are input by you. The CellArray data reside in an image file.

Use the Accuracy Assessment CellArray to enter reference pixels for the class values.

Error Reports

From the Accuracy Assessment CellArray, two kinds of reports can be derived:

• The error matrix simply compares the reference points to the classified points in a c × c matrix, where c is the number of classes (including class 0).
• The accuracy report calculates statistics of the percentages of accuracy, based upon the results of the error matrix.

When interpreting the reports, it is important to observe the percentage of correctly classified pixels and to determine the nature of the errors of the producer and the user.

Use the Accuracy Assessment utility to generate the error matrix and accuracy reports.

Kappa Coefficient

The Kappa coefficient expresses the proportionate reduction in error generated by a classification process compared with the error of a completely random classification. For example, a value of 0.82 implies that the classification process is avoiding 82 percent of the errors that a completely random classification generates (Congalton, 1991).
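Both reports, and the Kappa coefficient, follow directly from the paired class values that the CellArray holds. In the standard formulation, Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of correctly classified points and p_e is the agreement expected by chance from the row and column totals of the error matrix. The following Python sketch is an illustration only; the function names and sample values are assumptions made for the example.

import numpy as np

def error_matrix(reference, classified, n_classes):
    # c x c matrix: rows are reference classes, columns are classified
    # classes; diagonal entries are correctly classified points.
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference, classified):
        m[r, c] += 1
    return m

def kappa(matrix):
    # Kappa = (observed accuracy - chance accuracy) / (1 - chance accuracy)
    n = matrix.sum()
    p_observed = np.trace(matrix) / n
    p_chance = np.sum(matrix.sum(axis=0) * matrix.sum(axis=1)) / n**2
    return (p_observed - p_chance) / (1.0 - p_chance)

reference = np.array([0, 1, 1, 2, 2, 2, 0, 1])
classified = np.array([0, 1, 2, 2, 2, 1, 0, 1])
m = error_matrix(reference, classified, n_classes=3)
print(m)
print(round(kappa(m), 2))  # 0.62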

Photogrammetric Concepts

Introduction

What is Photogrammetry?

Photogrammetry is the "art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena" (American Society of Photogrammetry, 1980).

Photogrammetry was invented in 1851 by Laussedat, and has continued to develop over the last 140 years. Over time, the development of photogrammetry has passed through the phases of plane table photogrammetry, analog photogrammetry, and analytical photogrammetry, and has now entered the phase of digital photogrammetry (Konecny, 1994).

Prior to the invention of the airplane, photographs taken on the ground were used to extract the relationships between objects using geometric principles. This was during the phase of plane table photogrammetry.

In analog photogrammetry, starting with stereomeasurement in 1901, optical or mechanical instruments were used to reconstruct three-dimensional geometry from two overlapping photographs. The main product during this phase was topographic maps.

In analytical photogrammetry, the computer replaces some expensive optical and mechanical components. The resulting devices were analog/digital hybrids. Analytical aerotriangulation, analytical plotters, and orthophoto projectors were the main developments during this phase. Outputs of analytical photogrammetry can be topographic maps, but can also be digital products, such as digital maps and DEMs.

Digital photogrammetry is photogrammetry as applied to digital images that are stored and processed on a computer. Digital images can be scanned from photographs or can be directly captured by digital cameras. Many photogrammetric tasks can be highly automated in digital photogrammetry (e.g., automatic DEM extraction and digital orthophoto generation). Digital photogrammetry is sometimes called softcopy photogrammetry. The output products are in digital form, such as digital maps, DEMs, and digital orthophotos saved on computer storage media. Therefore, they can be easily stored, managed, and applied by you. With the development of digital photogrammetry, photogrammetric techniques are more closely integrated into remote sensing and GIS.

The traditional, and largest, application of photogrammetry is to extract topographic information (e.g., topographic maps) from aerial images. However, photogrammetric techniques have also been applied to process satellite images and close range images in order to acquire topographic or nontopographic information of photographed objects.

Digital photogrammetric systems employ sophisticated software to automate the tasks associated with conventional photogrammetry, such as creating DEMs.

Types of Photographs and Images

The types of photographs and images that can be processed within IMAGINE LPS Project Manager include aerial, terrestrial, and oblique. Aerial photographs and images are commonly captured from an aircraft or satellite. Terrestrial photographs and images are taken from the ground and are used for close range applications (architecture, etc.). Oblique photographs and images are similar, except the camera axis is intentionally inclined at an angle with the vertical. Oblique photographs and images are commonly used for reconnaissance and corridor mapping applications.

Digital imagery can be obtained from various sources. These include:

• Digi