Geographic Imaging by Leica Geosystems GIS & Mapping

Using Image Analysis for ArcGIS
Julie Booth-Lamirand
Using the Image Analysis Extension for ArcGIS
Copyright © 2003 Leica Geosystems GIS & Mapping, LLC
All rights reserved.
Printed in the United States of America.
The information contained in this document is the exclusive property of Leica Geosystems GIS & Mapping, LLC. This work is protected under United States copyright law
and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying and recording, or by any information storage or retrieval system, except as expressly permitted in writing by Leica Geosystems GIS & Mapping, LLC. All
requests should be sent to Attention: Manager of Technical Documentation, Leica Geosystems GIS & Mapping, LLC, 2801 Buford Highway NE, Suite 400, Atlanta, GA,
30329-2137, USA.
The information contained in this document is subject to change without notice.
CONTRIBUTORS
Contributors to this book and the On-line Help for Image Analysis for ArcGIS include: Christine Beaudoin, Jay Pongonis, Kris Curry, Lori Zastrow, Mladen Stojic, and
Cheryl Brantley of Leica Geosystems GIS & Mapping, LLC.
U. S. GOVERNMENT RESTRICTED/LIMITED RIGHTS
Any software, documentation, and/or data delivered hereunder is subject to the terms of the License Agreement. In no event shall the U.S. Government acquire greater than
RESTRICTED/LIMITED RIGHTS. At minimum, use, duplication, or disclosure by the U.S. Government is subject to restrictions set forth in FAR §52.227-14 Alternates I,
II, and III (JUN 1987); FAR §52.227-19 (JUN 1987), and/or FAR §12.211/12.212 (Commercial Technical Data/Computer Software); and DFARS §252.227-7015 (NOV
1995) (Technical Data) and/or DFARS §227.7202 (Computer Software), as applicable. Contractor/Manufacturer is Leica Geosystems GIS & Mapping, LLC, 2801 Buford
Highway NE, Suite 400, Atlanta, GA, 30329-2137, USA.
ERDAS, ERDAS IMAGINE, and IMAGINE OrthoBASE are registered trademarks. Image Analysis for ArcGIS is a trademark.
ERDAS® is a wholly owned subsidiary of Leica Geosystems GIS & Mapping, LLC.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective trademark owners.
Contents

Foreword

Getting started
1 Introducing Image Analysis for ArcGIS
    Learning about Image Analysis for ArcGIS
2 Quick-start tutorial
    Exercise 1: Starting Image Analysis for ArcGIS
    Exercise 2: Adding images and applying Histogram Stretch
    Exercise 3: Identifying similar areas in an image
    Exercise 4: Finding areas of change
    Exercise 5: Mosaicking images
    Exercise 6: Orthorectification of camera imagery
    What’s Next?
3 Applying data tools
    Using Seed Tool Properties
    Image Info
    Options

Working with features
4 Using Data Preparation
    Create New Image
    Subset Image
    Mosaic Images
    Reproject Image
5 Performing Spatial Enhancement
    Convolution
    Non-Directional Edge
    Focal Analysis
    Resolution Merge
6 Using Radiometric Enhancement
    LUT Stretch
    Histogram Equalization
    Histogram Matching
    Brightness Inversion
7 Applying Spectral Enhancement
    RGB to IHS
    IHS to RGB
    Vegetative Indices
    Color IR to Natural Color
8 Performing GIS Analysis
    Information versus data
    Neighborhood Analysis
    Thematic Change
    Recode
    Summarize Areas
9 Using Utilities
    Image Difference
    Layer Stack
10 Understanding Classification
    The Classification Process
    Classification tips
    Unsupervised Classification/Categorize Image
    Supervised Classification
    Classification decision rules
11 Using Conversion
    Conversion
    Converting raster to features
    Converting features to raster
12 Applying Geocorrection Tools
    When to rectify
    Geocorrection property dialogs
    SPOT
    The Spot Properties dialog
    Polynomial transformation
    The Polynomial Properties dialog
    Rubber Sheeting
    Camera Properties
    IKONOS, QuickBird, and RPC Properties
    Landsat

Glossary
References
Index
Foreword
An image of the earth’s surface is a wealth of information. Images capture a
permanent record of buildings, roads, rivers, trees, schools, mountains, and other
features located on the earth’s surface. But images go beyond simply recording
features. Images also record relationships and processes as they occur in the real
world. Images are snapshots of geography, but they are also snapshots of reality.
Images chronicle our earth and everything associated with it; they record a specific
place at a specific point in time. They are snapshots of our changing cities, rivers,
and mountains. Images are snapshots of life on earth.
The data in a GIS needs to reflect reality, and snapshots of reality need to be
incorporated and accurately transformed into instantaneously ready, easy-to-use
information. From snapshots to digital reality, images are pivotal in creating and
maintaining the information infrastructure used by today’s society. Today’s
geographic information systems have been carefully created with features,
attributed behavior, analyzed relationships, and modeled processes.
There are five essential questions that any GIS needs to answer: Where, What,
When, Why, and How. Why, When, and How are all uncovered within the
GIS; images allow you to extract the Where and What. Precisely where is that
building? What is that parcel of land used for? What type of tree is that? The new
extensions developed by Leica Geosystems GIS and Mapping, LLC use imagery
to allow you to accurately address the questions Where and What, so you can then
derive answers for the other three.
But our earth is changing! Urban growth, suburban sprawl, industrial usage and
natural phenomena continually alter our geography. As our geography changes, so
does the information we need to understand it. Because an
image is a permanent record of features, behavior,
relationships, and processes captured at a specific moment in
time, using a series of images of the same area taken over
time allows you to more accurately model and analyze the
relationships and processes that are important to our earth.
The new extensions by Leica Geosystems are technological
breakthroughs which allow you to transform a snapshot of
geography into information that digitally represents reality in
the context of a GIS. Image Analysis™ for ArcGIS and
Stereo Analyst® for ArcGIS are tools built on top of a GIS to
maintain that GIS with up-to-date information. The
extensions provided by Leica Geosystems reliably transform
imagery directly into your GIS for analyzing, mapping,
visualizing, and understanding our world.
On behalf of the Image Analysis for ArcGIS and Stereo
Analyst for ArcGIS product teams, I wish you all the best in
working with these new products and hope you are
successful in your GIS and mapping endeavors.
Sincerely,
Mladen Stojic

Product Manager
Leica Geosystems GIS & Mapping, LLC
Getting started
Section 1
1 Introducing Image Analysis for ArcGIS
Image Analysis for ArcGIS™ is primarily designed for natural resource and
infrastructure management. The extension is very useful in the fields of forestry,
agriculture, environmental assessment, engineering, and infrastructure projects
such as facility siting and corridor monitoring, and general geographic database
update and maintenance.
Today, imagery of the earth’s surface is an integral part of desktop mapping and
GIS, and it’s more important than ever to have the ability to provide realistic
backdrops to geographic databases and to be able to quickly update details
involving street use or land use data.
Image Analysis for ArcGIS gives you the ability to perform many tasks:
• Import and incorporate raster imagery into ArcGIS.
• Categorize images into classes corresponding to land cover types such as
vegetation.
• Evaluate images captured at different times to identify areas of change.
• Identify and automatically map a land cover type with a single click.
• Find areas of dense and thriving vegetation in an image.
• Enhance the appearance of an image by adjusting contrast and brightness or by
applying histogram stretches.
• Align an image to a map coordinate system for precise area location.
• Rectify satellite images through Geocorrection Models.
IN THIS CHAPTER
• Updating a database
• Categorizing land cover and
characterizing sites
• Identifying and summarizing
natural hazard damage
• Identifying and monitoring urban
growth and changes
• Extracting features automatically
• Assessing vegetation stress
Updating databases
There are many kinds of imagery to choose from, in a wide range of scales, spatial and spectral resolutions, and map accuracies. Aerial
photography is often the choice for map updating because of its high precision. With Image Analysis for ArcGIS you are able to use imagery
to identify changes and make revisions and corrections to your geographic database.
Airphoto with shapefile of streets
Categorizing land cover and characterizing sites
Transmission towers for radio-based telecommunications must all be visible from each other, must be within a certain range of elevations,
and must avoid fragile areas like wetlands. With Image Analysis for ArcGIS, you can categorize images into land cover classes to help
identify suitable locations. You can use imagery and analysis techniques to identify wetlands and other environmentally sensitive areas.
The Classification features enable you to divide an image into many different classes, and then highlight them as you wish. In this case the
areas not suitable for tower placement are highlighted, and the placement for the towers can be sited appropriately.
Classified image for radio towers
Identifying and summarizing natural hazard damage
When viewing a forest hit by a hurricane, you can use the mapping tools of Image Analysis for ArcGIS to show where the damage occurred.
With other ArcGIS tools, you can show the condition of the vegetation, how much stress it suffers, and how much damage it sustained in
the hurricane.
Below, Landsat images taken before and after the hurricane, in conjunction with a shapefile that identifies the forest boundary, are used for
comparison. Within the shapefile, you can see detailed tree stand inventory and management information.

The upper two pictures show the area in 1987 and in 1989 after Hurricane Hugo. The lower image features the shapefile.
Identifying and monitoring urban growth and changes
Cities grow over time, and images give a good sense of how they grow, and how remaining land can be preserved by managing that growth.
You can use Image Analysis for ArcGIS to reveal patterns of urban growth over time.
Here, Landsat data spanning 21 years was analyzed for urban growth. The final view shows the differences in extent of urban land use and
land cover between 1973 and 1994. Those differences are represented as classes. The yellow urban areas from 1994 represent how much
the city has grown beyond the red urban areas from 1973.
The top two images represent urban areas in red, first in 1973 and then in 1994. The bottom image shows the actual growth.
Extracting features automatically
Suppose you are responsible for mapping the extent of an oil spill as part of a rapid response effort. You can use synthetic aperture radar
(SAR) data and Image Analysis for ArcGIS tools to identify and map the extent of such environmental hazards.
The following images show an oil spill off the northern coast of Spain. The first image shows the spill, and the second image gives you an
example of how you can isolate the exact extent of a particular pattern using Image Analysis for ArcGIS.
Images depicting an oil spill off the coast of Spain and a polygon grown in the spill using Seed Tool.
Assessing vegetation stress
Crops experience different stresses throughout the growing season. You can use multispectral imagery and analysis tools to identify and
monitor a crop’s health.
In these images, the Vegetative Indices function is used to see crop stress. The stressed areas are then automatically digitized and saved as
a shapefile. This kind of information can be used to help identify sources of variability in growth patterns. Then, you can quickly update
crop management plans.
Crop stress shown through Vegetative Indices
Learning about Image Analysis for ArcGIS
If you are just learning about geographic information systems
(GISs), you may want to read the books about ArcCatalog and
ArcMap: Using ArcCatalog and Using ArcMap. Knowing about
these applications will make your use of Image Analysis for
ArcGIS much easier.
If you’re ready to learn about how Image Analysis for ArcGIS
works, see the Quick-start tutorial. In the Quick-start tutorial, you’ll
learn how to adjust the appearance of an image, how to identify
similar areas of an image, how to align an image to a feature theme,
as well as how to find areas of change and mosaic images.
Finding answers to questions
This book describes the typical workflow involved in creating and
updating GIS data for mapping projects. The chapters are set up so
that you first learn the theory behind certain applications, then you
are introduced to the typical workflow you’d apply to get the results
you want. A glossary is provided to help you understand any terms
you haven’t seen before.
Getting help on your computer
You can get a lot of information about the features of Image
Analysis for ArcGIS by accessing the online help. To browse the
online help contents for Image Analysis for ArcGIS, click Help
near the bottom of the Image Analysis menu. From this point you
can use the Table of contents, index, or search feature to locate the
information you need. If you need online help for ArcGIS, click
Help on the ArcMap toolbar and choose ArcGIS Desktop Help.
Contacting Leica Geosystems GIS & Mapping
If you need to contact Leica Geosystems for technical support, see
the product registration and support card you received with Image
Analysis for ArcGIS. You can also contact Customer Support at
404/248-9777. Visit Leica Geosystems on the Web at
www.gis.leica-geosystems.com.
Contacting ESRI
If you need to contact ESRI for technical support refer to “Getting
technical support” in the Help system’s “Getting more help”
section. The telephone number for Technical Support is 909-793-
3744. You can also visit ESRI on the Web at www.esri.com.
Leica Geosystems GIS & Mapping Education Solutions
Leica Geosystems GIS & Mapping Division offers instructor-based
training about Image Analysis for ArcGIS. For more information,
go to the training Web site located at www.gis.leica-
geosystems.com. You can follow the training link to Training
Centers, Course Schedules, and Course Registration.
ESRI education solutions
ESRI provides educational opportunities related to GISs, GIS
applications, and technology. You can choose among instructor-led
courses, Web-based courses, and self-study workbooks to find
educational solutions that fit your learning style and pocketbook.
For more information, visit the Web site www.esri.com/education.
2 Quick-start tutorial
Now that you know a little bit about the Image Analysis for ArcGIS extension and
its potential applications, the following exercises give you hands-on experience in
using many of the extension’s tools. By working through the exercises, you are
going to use the most important components of the Image Analysis for ArcGIS
extension and learn about the types of problems it can solve.
In Image Analysis for ArcGIS, you can quickly identify areas with similar
characteristics. This is useful for identification in cases such as environmental
disasters, burn areas or oil spills. Once an area has been defined, it can also be
quickly saved into a shapefile. This avoids the need for manual digitizing. This
tutorial will show you how to use some Image Analysis for ArcGIS tools and give
you a good introduction to using Image Analysis for ArcGIS for your own GIS
needs.
IN THIS CHAPTER
• Starting Image Analysis for
ArcGIS
• Adjusting the appearance of an
image
• Identifying similar areas in an
image
• Finding areas of change
• Mosaicking images
• Orthorectifying an image
Exercise 1: Starting Image Analysis for ArcGIS
In the following exercises, we’ve assumed that you are using
a single monitor or dual monitor workstation that is
configured for use with ArcMap and Image Analysis for
ArcGIS. That being the case, you will be led through a
series of tutorials in this chapter to help acquaint you with
Image Analysis for ArcGIS and further show you some of the
abilities of Image Analysis for ArcGIS.
In this exercise, you’ll learn how to start Image Analysis for
ArcGIS and activate the toolbar associated with it. You will
be able to gain access to all the important Image Analysis for
ArcGIS features through its toolbar and menu list. After
completing this exercise, you’ll be able to locate any Image
Analysis for ArcGIS tool you need for preparation,
enhancement, analysis, or geocorrection.
This exercise assumes you have already successfully
completed installation of Image Analysis for ArcGIS on your
computer. If you have not installed Image Analysis for
ArcGIS, refer to the installation guide packaged with the
Image Analysis for ArcGIS CD, and install now.
Starting Image Analysis for ArcGIS
1. Click the Start button on your desktop, then click
Programs, and point to ArcGIS.
2. Click ArcMap to start the application.
Adding the Image Analysis for ArcGIS extension
1. If the ArcMap dialog opens, keep the option to create a
new empty map, then click OK.
2. In the ArcMap window, click the Tools menu, then click
Extensions.
3. In the Extensions dialog, click the check box for Image
Analysis Extension to add the extension to ArcMap.
Once the Image Analysis Extension check box has been
selected, the extension is activated.
4. Click Close in the Extensions dialog.
Adding toolbars
1. Click the View menu, then point to Toolbars, and click
Image Analysis to add that toolbar to the ArcMap
window.
The Image Analysis toolbar is your gateway to many of the
tools and features you can use with the extension. From the
Image Analysis toolbar you can choose many different
analysis types from the menu, choose a geocorrection type,
and set links in an image.
Exercise 2: Adding images and applying Histogram Stretch
Image data, displayed without any contrast manipulation,
may appear either too light or too dark, making it difficult to
begin your analysis. Image Analysis for ArcGIS allows you
to display the same data in many different ways. For
example, changing the distribution of pixels allows you to
alter the brightness and contrast of the image. This is called
histogram stretching. Histogram stretching enables you to
manipulate the display of data to make your image easier to
visually interpret and evaluate.
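Image Analysis for ArcGIS applies these stretches for you through the dialogs used in this exercise. As a rough illustration of the idea only, the following Python/NumPy sketch applies a two-standard-deviation linear stretch to a single band; the array here is random stand-in data, not part of the tutorial dataset.

    import numpy as np

    def stddev_stretch(band, n_std=2.0):
        # Map [mean - n_std*sd, mean + n_std*sd] linearly onto the 0-255 display range.
        mean, sd = band.mean(), band.std()
        low, high = mean - n_std * sd, mean + n_std * sd
        scaled = (band.astype(float) - low) / (high - low)
        return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

    band = np.random.randint(40, 90, size=(512, 512))   # stand-in for one image band
    display = stddev_stretch(band)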
Add an Image Analysis for ArcGIS theme of Moscow
1. Open a new view. If you are starting this exercise
immediately after Exercise 1, you should have a new,
empty view ready.
2. Click the Add Data button.
3. In the Add Data dialog, navigate to the example data
directory (ArcGIS\ArcTutor\ImageAnalysis) and select
moscow_spot.tif.
4. Click Add to display the image in the view.
The image Moscow_spot.tif appears in the view.
Apply a Histogram Equalization
Standard Deviations is the default histogram stretch applied
to images by Image Analysis for ArcGIS. You can apply
histogram equalization to redistribute the data so that each
display value has roughly the same number of data points.
More information about histogram equalization can be found
in chapter 6 “Using Radiometric Enhancement”.
1. Select moscow_spot.tif in the Table of contents, right
click your mouse, and select Properties to bring up
Layer Properties.
2. Click the Symbology tab and under Show, select RGB
Composite.
3. Check the Bands order and click the dropdown arrows
to change any of the Bands.
You can also change the order of the bands in your current
image by clicking on the color bar beside each band in the
Table of contents. If you want bands to appear in a certain
order for each image that you draw in the view, go to
Tools\Options\Raster in ArcMap, and change the Default
RGB Band Combinations.
4. Click the dropdown arrow and select Histogram
Equalize as the Stretch Type.
5. Click Apply and OK.
6. Click the Image Analysis menu dropdown arrow, point
to Radiometric Enhancement, and click Histogram
Equalization.
7. In the Histogram Equalization dialog, make sure
moscow_spot.tif is in the Input Image box.
8. The Number of Bins will default to 256. For this
exercise, leave the number at 256, but in the future, you
can change it to suit your needs.
9. Navigate to the directory where you want your output
images stored, type a name for your image, and click
Save. The path will appear in Output Image.
You can go to the Options dialog, accessible from the Image
Analysis toolbar, and enter the working directory you want to
use on the General tab of the dialog. This step will save you
time by automatically bringing up your working directory
whenever you click the browse button to navigate to it in
order to store an output image.
10. Click OK.
The equalized image will appear in your Table of
contents and in your view.
This is the histogram equalized image of Moscow.
Apply an Invert Stretch to the image of Moscow
In this example, you apply the Invert Stretch to the image to
redisplay it with its brightness values reversed. Areas that
originally appeared bright are now dark, and dark areas are
bright.
1. Select the equalized file in the Table of contents, and
right-click your mouse. Click Properties and go to the
Symbology tab.
2. If you want to see the histograms for the image, click the
Histograms button located in the Stretch box.
3. Check the Invert box.
4. Click Apply and OK.
This is an inverted image of Moscow_spot.tif.
You can apply different types of stretches to your image to
emphasize different parts of the data. Depending on the
original distribution of the data in the image, one stretch may
make the image appear better than another. Image Analysis
for ArcGIS allows you to rapidly make those comparisons.
The Layer Properties Symbology tab can be a learning tool to
see the effect of stretches on the input and output histograms.
You’ll learn more about these stretches in chapter 6 “Using
Radiometric Enhancement”.
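For readers who like to see the arithmetic, the sketch below equalizes a single band into 256 bins, which matches the Number of Bins default in the Histogram Equalization dialog used earlier in this exercise. It is only a conceptual illustration with random stand-in data, not the extension's actual implementation.

    import numpy as np

    def histogram_equalize(band, n_bins=256):
        # Build the cumulative histogram and use it to spread values so that
        # each display level holds roughly the same number of pixels.
        hist, edges = np.histogram(band, bins=n_bins)
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
        levels = np.interp(band.ravel(), edges[:-1], cdf * (n_bins - 1))
        return levels.reshape(band.shape).astype(np.uint8)

    band = np.random.normal(100, 20, size=(512, 512))    # stand-in band
    equalized = histogram_equalize(band)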
Exercise 3: Identifying similar areas in an image
With Image Analysis for ArcGIS you can quickly identify
areas with similar characteristics. This is useful for
identification of environmental disasters or burn areas. Once
an area has been defined, it can also be quickly saved into a
shapefile. This action lets you avoid the need for manual
digitizing. To define the area, you use the Seed Tool to point
to an area of interest such as a dark area on an image
depicting an oil spill. The Seed Tool returns a graphic
polygon outlining areas with similar characteristics.
Add and draw an Image Analysis for ArcGIS theme depicting an oil spill
1. If you are starting immediately after the previous
exercise, clear your view by clicking the New Map File
button on your ArcMap tool bar. You do not need to
save the image. If you are beginning here, start ArcMap
and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. In the Add Data dialog, select radar_oilspill.img, and
click Add to draw it in the view.
This is a radar image showing an oil spill off the
northern coast of Spain.
Create a shapefile
In this exercise, you use the Seed Tool (also called the
Region Growing Tool). The Seed Tool grows a polygon
graphic in the image that encompasses all similar and
contiguous areas. In order to use the Seed Tool, you first
need to create a shapefile in ArcCatalog and start editing
to enable the tool. After going through these
steps, you can point and click inside the area you want to
highlight, in this case an oil spill, and create a polygon. The
polygon enables you to see how much of an area the oil spill
covers.
1. Click the Zoom In tool, and drag a rectangle around the
black area to see the spill more clearly.
2. Click the ArcCatalog button. You can store the shapefile
you’re going to create in the example data directory or
navigate to a different directory if you wish.
3. Select the directory in the Table of contents and right
click or click File, point to New, and click Shapefile.
4. In the Create New Shapefile dialog, name the new
shapefile oilspill, and click the Feature Type dropdown
arrow and select Polygon.
5. Check Show Details.
6. Click Edit.
7. In the Spatial Reference Properties dialog, click Import.
In the Browse for Dataset dialog that opens to the
example data directory, select radar_oilspill.img and
click Add.
8. Click Apply and OK.
9. Click OK in the Create New Shapefile dialog.
10. Select the oilspill shapefile, and drag and drop it in the
ArcMap window. Oilspill will appear in the Table of
contents.
11. Close ArcCatalog.
Draw the polygon with the Seed Tool
1. Click the Image Analysis dropdown arrow, and click
Seed Tool Properties.
2. Type a Seed Radius of 10 pixels in the Seed Radius text
box.
3. Uncheck the Include Island Polygons box.
The Seed Radius is the number of pixels surrounding the
target pixel. The range of values of those surrounding
pixels is considered when the Seed Tool grows the
polygon.
4. Click OK.
5. Click the Editor toolbar button on the ArcMap toolbar to
display the Editor toolbar.
6. Click Editor on the Editor toolbar in ArcMap, and select
Start Editing.
7. Click the Seed Tool and click a point in the center of the
oil spill. The Seed Tool will take a few moments to
produce the polygon.
This is a polygon of an oil spill grown by the Seed Tool.
If you don’t automatically see the formed polygon in the
image displayed in the view, click the refresh button at the
bottom of the view screen in ArcMap.
You can see how the tool identifies the extent of the spill. An
emergency team could be informed of the extent of this
disaster in order to effectively plan a clean up of the oil.
Exercise 4: Finding areas of change
The Image Analysis for ArcGIS extension allows you to see
changes over time. You can perform this type of analysis on
either continuous data using Image Difference or thematic
data using Thematic Change. In this exercise, you’ll learn
how to use Image Difference and Thematic Change. Image
Difference is useful for analyzing images of the same area to
identify land cover features that may have changed over time.
Image Difference performs a subtraction of one theme from
another. This change is highlighted in green and red masks
depicting increasing and decreasing values.
Find changed areas
In the following example, you are going to work with two
continuous data images of the north metropolitan Atlanta,
Georgia, area—one from 1987 and one from 1992.
Continuous data images are those obtained from remote
sensors like Landsat and SPOT. This kind of data measures
reflectance characteristics of the earth’s surface, analogous to
exposed film capturing an image. You will use Image
Difference to identify areas that have been cleared of
vegetation for the purpose of constructing a large regional
shopping mall.
Add and draw the images of Atlanta
1. If you are starting immediately after the previous
exercise, clear your view by clicking the New Map File
button on your ArcMap tool bar. You do not need to
save the image. If you are beginning here, start ArcMap
and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. Press the Shift or Ctrl key, and click on
atl_spotp_87.img and atl_spotp_92.img in the Add Data
dialog.
4. Click OK.
With images active in the view, you can calculate the
difference between them.
Compute the difference due to development
1. Click the Image Analysis dropdown arrow, click
Utilities, and click Image Difference.
2. In the Image Difference dialog, click the Before Theme
dropdown arrow, and select Atl_spotp_87.img.
3. Click the After Theme dropdown arrow, and select
Atl_spotp_92.img.
4. Choose As Percent in the Highlight Changes box.
5. Click the arrows to 15 in the Increases more than box.
6. Click the arrows to 15 in the Decreases more than box.
7. Navigate to the directory where you want to store your
Image Difference file, type the name of the file, and
click Save.
8. Navigate to the directory where you want to store your
Highlight Change file, type the name of the file, and
click Save.
9. Click OK in the Image Difference dialog.
The Highlight Change and Image Difference files
appear in the Table of contents and the view.
Highlight Change shows the difference in red and green
areas.
10. In the Table of contents, click the check box to turn off
Highlight Change, and check Image Difference to
display it in the view.
The Image Difference image shows the results of the
subtraction of the Before Theme from the After Theme.
Image Difference calculates the difference in pixel values.
With the 15 percent parameter you set, Image Difference
finds areas whose values have increased by at least 15 percent
(designating clearing) and highlights them in green. Image
Difference also finds areas whose values have decreased by at
least 15 percent (designating increased vegetation, or an area
that was once dry but is now wet) and highlights them in red.
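Conceptually, the calculation is a per-pixel subtraction followed by two percentage thresholds. The sketch below shows that idea in NumPy; the percentage is taken relative to the Before value, which is an assumption about how the As Percent option is interpreted, and the arrays are random stand-ins.

    import numpy as np

    def highlight_change(before, after, pct=15.0):
        # Difference two co-registered bands and flag +/- percent changes.
        before = before.astype(float)
        after = after.astype(float)
        diff = after - before
        pct_change = 100.0 * diff / np.maximum(before, 1e-6)   # avoid divide by zero
        increase = pct_change >= pct      # shown in green (e.g., clearing)
        decrease = pct_change <= -pct     # shown in red (e.g., new vegetation or water)
        return diff, increase, decrease

    before = np.random.randint(20, 200, size=(400, 400))
    after = np.random.randint(20, 200, size=(400, 400))
    diff, inc, dec = highlight_change(before, after)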
Close the view
You can now clear the view and either go to the next portion
of this exercise, Thematic Change, or end the session by
closing ArcMap. If you want to shut down ArcMap with
Image Analysis for ArcGIS, click the File menu, and click
Exit. Click No when asked to save changes.
Using Thematic Change
Image Analysis for ArcGIS provides the Thematic Change
feature to make comparisons between thematic data images.
Thematic Change creates a theme that shows all possible
combinations of change and how an area’s land cover class
changed over time. Thematic Change is similar to Image
Difference in that it computes changes between the same area
at different points in time. However, Thematic Change can
only be used with thematic data (data that is classified into
distinct categories). An example of thematic data is a
vegetation class map.
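A minimal way to picture what Thematic Change produces is to give every (before, after) class pair its own output code and then count the pixels in each pair. The encoding below is illustrative only; the class values and output codes the extension actually writes may differ.

    import numpy as np

    def thematic_change(before, after, n_classes):
        # Encode each (before, after) class pair as its own output class.
        return before * n_classes + after

    before = np.array([[1, 2], [2, 3]])          # 1=Water, 2=Forest, 3=Bare Soil
    after = np.array([[1, 3], [3, 3]])
    combo = thematic_change(before, after, n_classes=4)
    counts = np.bincount(combo.ravel())          # pixels per change combination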
This next example uses two images of an area near Hagan
Landing, South Carolina. The images were taken in 1987 and
1989, before and after Hurricane Hugo. Suppose you are the
forest manager for a paper company that owns a parcel of
land in the hurricane’s path. With Image Analysis for
ArcGIS, you can see exactly how much of your forested land
has been destroyed by the storm.
Add the images of an area damaged by Hurricane Hugo
1. If you are starting immediately after the previous
exercise, clear your view by clicking the New Map File
button on your ArcMap toolbar. You do not need to save
the image. If you are beginning here, start ArcMap and
load the Image Analysis for ArcGIS extension.
2. Open a new view and click Add Data.
3. Press either the Shift key or Ctrl key, and select both
tm_oct87.img and tm_oct89.img in the Add Data
dialog. Click Add.
This view shows an area damaged by Hurricane Hugo.
Create three classes of land cover
Before you calculate Thematic Change, you must first
categorize the Before and After Themes. You can access
Categorize through Unsupervised Classification, which is an
option available from the Image Analysis dropdown menu.
You’ll use the thematic themes created from those
classifications to complete the Thematic Change calculation.
1. Click the dropdown arrow in the Layers section of the
Image Analysis toolbar to make sure tm_oct87.img is
active.
2. Click the Image Analysis dropdown arrow, point to
Classification, and click Unsupervised/Categorize.
3. Click the Input Image dropdown arrow to make sure
tm_oct87.img is in the text box.
4. Click the arrows to 3 or type 3 in the Desired Number of
Classes box.
5. Navigate to the directory where you want to store the
output image, type the file name (use
unsupervised_class_87 for this example), and click
Save.
6. Click OK in the Unsupervised Classification dialog.
Using Unsupervised Classification to categorize continuous
images into thematic classes is particularly useful when you
are unfamiliar with the data that makes up your image. You
simply designate the number of classes you would like the
data divided into, and Image Analysis for ArcGIS performs a
calculation assigning pixels to classes depending on their
values (a conceptual clustering sketch follows this procedure).
By using Unsupervised Classification, you may be better able
to quantify areas of different land cover in your image. You
can then assign the classes names like water, forest, and bare soil.
7. Click the check box of tm_oct87.img so the original
theme is not drawn in the view. This step makes the
remaining themes draw faster in the view.
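The conceptual clustering sketch mentioned above is given here. Image Analysis for ArcGIS uses its own clustering routine; this basic k-means loop over random stand-in pixels is only meant to show how pixels can be assigned to a requested number of classes by their values.

    import numpy as np

    def kmeans_classify(pixels, n_classes=3, n_iter=20, seed=0):
        # Assign each pixel vector to the nearest of n_classes cluster centers,
        # then update the centers, and repeat.
        rng = np.random.default_rng(seed)
        centers = pixels[rng.choice(len(pixels), n_classes, replace=False)]
        for _ in range(n_iter):
            dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            for k in range(n_classes):
                if np.any(labels == k):
                    centers[k] = pixels[labels == k].mean(axis=0)
        return labels

    bands = np.random.rand(100, 100, 4)                   # stand-in 4-band image
    classes = kmeans_classify(bands.reshape(-1, 4)).reshape(100, 100)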
Give the classes names and assign colors to represent them
1. Double-click the title unsupervised_class_87.img to
access the Layer Properties dialog.
2. Click the Symbology tab.
3. Verify that Class_names is selected in the Value Field.
4. Select Class 001, and double-click Class 001 under
Class_names. Type the name Water.
5. Double-click the color bar under Symbol for Class 001,
and choose blue from the color palette.
6. Select Class 002, and double-click Class 002 under
Class_names. Type the name Forest.
7. Double-click the color bar under Symbol for Class 002,
and choose green.
8. Select Class 003, and double-click Class 003 under
Class_names. Type the name Bare Soil.
9. Double-click the color bar under Symbol for Class 003,
and choose a tan or light brown color.
10. Click Apply and OK.
Categorize and name the areas in the post-hurricane image
1. Follow the steps provided for the theme tm_oct87.img
under “Create three classes of land cover” and “Give the
classes names and assign colors to represent them” to
categorize the classes of the tm_oct89.img theme.
2. Click the box of the tm_oct89.img theme so that it does
not draw in the view.
Recode to permanently write class names and colors to a file
After you have classified both of your images, you need to do
a recode in order to permanently save the colors and class
names you have assigned to the images. Recode creates a new
file from each classified image so that those class names and
colors become a permanent part of the data.
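Under the hood, a recode is a per-pixel lookup from old class values to new ones. The short sketch below shows that remapping on stand-in data; the table and values are hypothetical and do not come from the tutorial.

    import numpy as np

    recode_table = {0: 0, 1: 1, 2: 2, 3: 2}      # e.g., merge class 3 into class 2
    lut = np.arange(256)                         # identity lookup table for 8-bit classes
    for old, new in recode_table.items():
        lut[old] = new

    classified = np.random.randint(0, 4, size=(300, 300))
    recoded = lut[classified]                    # apply the lookup to every pixel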
1. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Recode.
2. Click the Input Image dropdown arrow to select one of
the classified images.
3. The Map Pixel Value through Field will read <From
view>. Leave this as is.
4. Click the browse button to bring up your working
directory, and name the Output Image.
5. Click OK.
Now perform a recode on the other classified image of the
Hugo area in the same way. Both of the images will then have
your class names and colors permanently saved.
Use Thematic Change to see how land cover changed because of Hugo
1. Make sure both recoded images are checked in the Table
of contents so both will be active in the view.
2. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Thematic Change.
3. Click the Before Theme dropdown arrow and select the
87 classification image.
4. Click the After Theme dropdown arrow, and select the
89 classification image.
5. Navigate to the directory where you want to store the
Output Image, type the file name, and click Save.
6. Click OK.
7. Click the check box of Thematic Change to draw it in
the view.
8. Double-click the Thematic Change title to access Layer
Properties.
9. In the Symbology tab, double-click the symbol for was:
Class 002, is now: Class 003 (was Forest, is now Bare
Soil) to access the color palette.
10. Click the color red in the color palette, and click Apply.
You don’t have to choose red; you can use any color you
like.
11. Click OK.
You can see the amount of destruction in red. The red
shows what was forest and is now bare soil.
Add a feature theme that shows the property boundary
Using Thematic Change, the overall damage caused by the
hurricane is clear. Next, you will want to see how much
damage actually occurred on the paper company’s land.
1. Click Add Data.
2. Select property.shp, and click Add.
Thematic Change image with the property shapefile
Make the property transparent
1. Double-click on the property theme to access Layer
Properties.
2. Click the Symbology tab, and double-click the color
symbol.
3. In the Symbol Selector, click the Hollow symbol.
4. Click the Outline Width arrows, or type the number 3 in
the box.
5. Click the Outline Color dropdown arrow, and choose a
color that will easily stand out to show your property
line.
6. Click OK.
7. Click Apply and OK on the Symbology tab.
The yellow outline clearly shows the devastation within
the paper company’s property boundaries.
Exercise 5: Mosaicking images
Image Analysis for ArcGIS allows you to mosaic multiple
images. When you mosaic images, you join them together to
form one single image that covers the entire area. To mosaic
images, simply display them in the view, ensure that they
have the same number of bands, then select Mosaic.
In the following exercise, you are going to mosaic two
airphotos with the same resolution.
Add and draw the images
1. If you are starting immediately after the previous
exercise, clear your view by clicking the New Map File
button on your ArcMap tool bar. You do not need to
save the image. If you are beginning here, start ArcMap
and load the Image Analysis for ArcGIS extension with
a new map.
2. Click the Add Data button.
3. Press the Shift key and select Airphoto1.img and
Airphoto2.img in the Add Data dialog. Click Add.
4. Click Airphoto1.img and drag it so that it is at the top of
the Table of contents.
The two airphotos display in the view. The Mosaic tool
joins them as they appear in the view: whichever is on
top is also on top in the mosaicked image.
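The overlap rule can be pictured with a small sketch: whichever image is on top simply wins wherever the two overlap. Real mosaicking in Image Analysis for ArcGIS aligns the inputs by their map coordinates and can color balance them; the offsets and arrays below are made-up stand-ins.

    import numpy as np

    def simple_mosaic(top, bottom, offset):
        # Paste the top image over the bottom one at a (row, column) offset.
        out = bottom.copy()
        r, c = offset
        out[r:r + top.shape[0], c:c + top.shape[1]] = top
        return out

    airphoto2 = np.zeros((600, 600), dtype=np.uint8)          # bottom of the display order
    airphoto1 = np.full((300, 300), 255, dtype=np.uint8)      # top of the display order
    mosaicked = simple_mosaic(airphoto1, airphoto2, offset=(250, 250))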
Zoom in to see image details
1. Select Airphoto1.img, and right-click your mouse.
2. Click Zoom to raster resolution.
The two images are displayed at a 1:1 resolution. You
can now use Pan to see how they overlap.
3. Click the Pan button, then maneuver the images in the
view.
This illustration shows where the two images overlap.
4. Click the Full Extent button so that both images display
their entirety in the view.
Use Mosaic to join the images
1. If you want to use some other extent than Union of
Inputs for your mosaic, you must first go to the Extent
tab in the Options dialog and change the Extent before
opening Mosaic Images. After opening the Mosaic
Images dialog, you cannot access the Options dialog.
However, it is recommended that you keep the default of
Union of Inputs for mosaicking.
2. Click the Image Analysis dropdown arrow, point to Data
Preparation, and click Mosaic Images.
3. Click the Handle Images overlaps dropdown arrow and
choose Use Order Displayed.
4. If you want to automatically crop your images, check
the box, and use the arrows or type the percentage by
which to crop the images.
5. Choose Brightness/Contrast as the Color Balancing
option.
6. If you have changed the extent to something other than
Union of Inputs, check this box, but for this exercise you
will need to leave the extent set at Union of Inputs and
the box unchecked.
7. Navigate to the directory where you want to save your
files, type the file name, and click Save.
8. Click OK.
The Mosaic function joins the two images as they
appear in the view. In this case Airphoto1 is mosaicked
over Airphoto2.
Exercise 6: Orthorectification of camera imagery
The Image Analysis for ArcGIS extension has a
feature called Geocorrection Properties. The function of this
feature is to rectify imagery. One of the tools that makes up
Geocorrection Properties is the Camera model.
In this exercise you will orthorectify images using the
Camera model in Geocorrection Properties.
Add raster and feature datasets
1. If you are starting immediately after the previous
exercise, clear your view by clicking the New Map File
button on your ArcMap tool bar. You do not need to
save the image. If you are beginning here, start ArcMap
and load the Image Analysis for ArcGIS extension with
a new map.
2. Click the Add Data button.
3. Hold the Shift key down and select both ps_napp.img
and ps_streets.shp in the Add Data dialog. Click Add.
4. Right click on ps_napp.img and click Zoom to Layer.
The images are drawn in the view. You can see the
fiducial markings around the edges and at the top.
Select the coordinate system for the image
This procedure defines the coordinate system for the data
frame in Image Analysis for ArcGIS.
1. Either select Layers in the Table of contents and right
click, or move your cursor into the view and right click.
2. Select Properties at the bottom of the menu to bring up
the Data Frame Properties dialog.
3. Click the Coordinate System tab.
4. In the box labeled Select a coordinate system, click
Predefined.
5. Click Projected Coordinate Systems, and then click
Utm.
6. Click NAD 1927, then click NAD 1927 UTM Zone
11N.
7. Click Apply, and click OK.
Orthorectifying your image using Geocorrection Properties
1. Click the Model Types dropdown arrow, and click
Camera.
2. Click the Geocorrection Properties button on the toolbar
to open the Camera dialog.
3. Click the Elevation tab, and select File to use as the
Elevation Source.
4. Navigate to the ArcGIS ArcTutor directory, and choose
ps_dem.img as the Elevation File.
5. Click the Elevation Units dropdown arrow and select
Meters.
6. Check Account for Earth’s curvature.
7. Click the Camera tab.
8. Click the Camera Name dropdown arrow, and select
Default Wild.
9. In the Principal Point box, enter -0.004 for X and 0.000
for Y.
10. Enter a Focal Length of 152.804.
11. Click the arrows, or type 4 for the number of Fiducials.
12. Click in the Film X and Film Y box where the number
of Fiducials will reduce to 4.
13. Type the following coordinates in the corresponding
fiducial spaces. Use the Tab key to move from space to
space.
1. -106.000 106.000
2. 105.999 105.994
3. 105.998 -105.999
4. -106.008 -105.999
14. Name the camera in the Camera Name box.
15. Click Save to save the camera information with the
Camera Name.
16. Click Apply and move to the next section.
Fiducial placement
1. Click the Fiducials tab, and make sure the first fiducial
orientation is selected.
2. Click the Green fiducial, and the software will take you
to the approximate location of the first fiducial
placement. Your cursor has become a crosshair.
3. Click the Fixed Zoom In tool, and zoom in until you can
see the actual fiducial, and click the crosshair there. The
software will take you to each of the four points where
you can click the crosshair in the fiducial marker.
When you are done placing fiducials, make sure to click
Apply then OK to close. You can then right click on the
image in the Table of contents, and click Zoom to Layer. You
will notice that both the image and the shape file are now
displayed in the view. To look at the root mean square error
(RMSE) on the fiducials tab, you can reopen the Camera
Properties dialog. The RMSE should be less than 1.0. Now,
it is time to rectify the images.
After placing fiducials, both the image and the shapefile
are shown in the view for rectification.
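The RMSE reported on the Fiducials tab summarizes how well the clicked fiducial locations fit the transformation between film coordinates and image pixels. The Camera model's interior orientation is more involved than this, but the sketch below shows the same kind of measurement for a plain least-squares affine fit; only the film coordinates come from the steps above, and the pixel coordinates are made-up placeholders.

    import numpy as np

    film = np.array([[-106.000,  106.000],        # fiducial film coordinates (mm)
                     [ 105.999,  105.994],
                     [ 105.998, -105.999],
                     [-106.008, -105.999]])
    pixels = np.array([[ 110.0,  120.0],          # hypothetical clicked pixel locations
                       [7950.0,  130.0],
                       [7945.0, 7890.0],
                       [ 115.0, 7885.0]])

    # Fit an affine transform film -> pixel by least squares, then measure the misfit.
    design = np.hstack([film, np.ones((4, 1))])
    coeffs, _, _, _ = np.linalg.lstsq(design, pixels, rcond=None)
    residuals = design @ coeffs - pixels
    rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())   # root mean square error, in pixels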
Placing links
1. Click the Add Links button.
2. Looking closely at the image and shapefile in the view,
and using the next image as a guide, line up where you
should place the first link. Follow the markers in the
next image to place the first three links. You will need to
click the crosshair on the point in the image first and
then drag the cursor over to the point in the shapefile
where you want to click.
Your first link should look approximately like this:
3. Place links 2 and 3.
After placing the third link, your image should look
something like this:
4. Zoom to the upper left portion of the image, and place a
link according to this next image.
5. Zoom to the lower left portion of the image, and place a
link according to the previous image.
Your image should warp and become aligned with the streets
shapefile. You can use the Zoom tool to draw a rectangle
around the aligned area and zoom in to see it more clearly.
Now take a look at the RMS Error on the Links tab of Camera
Properties. You can go to Save As on the Image Analysis
menu and save the image if you wish.
What’s Next?
This tutorial has introduced you to some features and basic
functions of Image Analysis for ArcGIS. The following
chapters go into greater detail about the different tools and
elements of Image Analysis for ArcGIS, and include
instructions on how to use them to your advantage.
3 Applying data tools
You will notice when you look at the Image Analysis menu that there are three
choices called Seed Tool Properties, Image Info, and Options. All three aid you in
manipulating, analyzing, and altering your data so you can produce results that are
easier to interpret than they would be with no data tool input.
• Seed Tool Properties automatically generates feature layer polygons of similar
spectral value.
• Image Info gives you the ability to apply a NoData Value and recalculate
statistics.
• Options lets you change extent, cell size, preferences, and more.
IN THIS CHAPTER
• Seed Tool Properties
• Image Info
• Options
Using Seed Tool Properties
As stated in the opening of the chapter, the main function of Seed
Tool Properties is to automatically generate feature layer polygons
of similar spectral value. After creating a shapefile in ArcCatalog,
you can either click in an image on a single point, or you can click
and drag a rectangle in a portion of the image that interests you.
You can decide which method you wish to use before clicking the
tool on the toolbar, or you can experiment with which method looks
best with your data.
In order to use the Seed Tool, you must first create the shapefile for
the image you are using in ArcCatalog. You will need to open
ArcCatalog, create a new shapefile in the directory you want to use,
name it, choose polygon as the type of shapefile, and then use Start
Editing on the Editor toolbar in ArcMap to activate the Seed Tool.
Once you are finished and you have grown the polygon, you can go
back to the Editor toolbar and select Stop Editing.
The band or bands used in growing the polygon are controlled by
the current visible bands as set in Layer Properties. If you only have
one band displayed, such as the red band, when you are interested
in vegetation analysis, then the Seed Tool only looks at the statistics
of that band to create the polygon. If you have all the bands (red,
green, and blue) displayed, then the Seed Tool evaluates the
statistics in each band of data before creating the polygon.
When a polygon shapefile is being edited, a polygon defined using
the Seed Tool is added to the shapefile. Like other ArcGIS
graphics, you can change the appearance of the polygon produced
by the Seed Tool using the Graphics tools.
Controlling the Seed Tool
You can use the Seed Tool simply by choosing it from the Image
Analysis toolbar and clicking on an image after generating a
shapefile. The defaults usually produce a good result. However, if
you want more control over the parameters of the Seed Tool, you
can open Seed Tool Properties from the Image Analysis menu.
Seed Tool dialog
Seed Radius
When you use the simple click method, the Seed Tool is controlled
by the Seed Radius. You can change the number of pixels of the
Seed Radius by opening the dialog from the Image Analysis menu.
From this dialog, you select your Seed Radius in pixels. The Image
Analysis for ArcGIS default Seed Radius is 5 pixels.
The Seed Radius determines how selective the Seed Tool is when
selecting contiguous pixels. A larger Seed Radius includes more
pixels to calculate the range of pixel values used to grow the
polygon, and typically produces a larger polygon. A smaller Seed
Radius uses fewer pixels to determine the range. Setting the Seed
Radius to 0.5 or less restricts the polygon to growing only over pixels
with the exact same value as the pixel you click in the image. This can
be useful for thematic images in which a contiguous area might
have a single pixel value, instead of a range of values like
continuous data.
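In other words, the Seed Radius defines the neighborhood whose value range drives the region growing. The single-band sketch below illustrates that behavior on random stand-in data; the real Seed Tool also weighs multiple displayed bands, handles island polygons, and writes a polygon feature rather than a mask.

    from collections import deque
    import numpy as np

    def grow_region(image, seed, radius=5):
        # Take the value range from a square neighborhood around the seed,
        # then flood fill outward over contiguous pixels inside that range.
        r0, c0 = seed
        window = image[max(0, r0 - radius):r0 + radius + 1,
                       max(0, c0 - radius):c0 + radius + 1]
        lo, hi = window.min(), window.max()
        mask = np.zeros(image.shape, dtype=bool)
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            if mask[r, c] or not (lo <= image[r, c] <= hi):
                continue
            mask[r, c] = True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                    queue.append((nr, nc))
        return mask

    image = np.random.randint(0, 255, size=(200, 200))
    region = grow_region(image, seed=(100, 100), radius=10)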
Island Polygons
The other option on the Seed Tool Properties dialog is Include
Island Polygons. You should leave this option checked for use with
Find Like Areas. For single feature mapping where you want to see
a more refined boundary, you may want to turn it off.
Preparing to use the Seed Tool
Go through the following steps to activate the Seed Tool and
generate a polygon in your image.
1. Open ArcCatalog and make sure your working directory
appears in ArcCatalog, or navigate to it.
2. Click File, point to New, and click Shapefile.
3. Rename the New_Shapefile.
4. Click the dropdown arrow and select Polygon.
5. Check Show Details.
6. Click Edit.
7. Click Select, Import, or New to input the coordinate
system the new shapefile will use. Clicking Import will
allow you to import the coordinate system of the image you are
creating the shapefile for.
8. Click Apply and OK in the Spatial Reference Properties
dialog.
9. Click OK in the Create New Shapefile dialog.
10. Close ArcCatalog and click the dropdown arrow on the
Editor toolbar.
11. Select Start Editing.
Using the Seed Tool
These processes will take you through steps to change the
Seed Radius and include Island Polygons. For an in-depth
tutorial on using the Seed Tool and generating a polygon, see
chapter 2 “Quick-start tutorial”.
Changing the Seed Radius
1. Click the Image Analysis dropdown arrow, and click Seed
Tool Properties.
2. Type a new value in the Seed Radius text box.
3. If you need to enable Include Island Polygons, check the
box.
4. Click OK.
After growing the polygon in the image with the Seed Tool, go
back to the Editor toolbar, click the dropdown arrow, and click
Stop Editing.
Image Info
When analyzing images, you often have pixel values you need to
alter or manipulate in order to perceive different parts of the image
better. The Image Info feature of Image Analysis for ArcGIS lets
you choose a NoData Value and recalculate the statistics for your
image so that a pixel value that is unimportant in your image can be
designated as such.
You can apply NoData to a single layer of your image instead of to
the entire image if you want or need to do so. When you choose to
apply NoData to single layers, it is important that you click Apply
on the dialog before moving to the next layer. You can also
recalculate statistics (Recalc Stats) for single bands by choosing
Current Band in the Statistics box on the Image Info dialog. It is
important to remember that if you click Recalc Stats while Current
Band is selected, Image Info will only recalculate the statistics for
that band. If you want to set NoData for a single band, but
recalculate statistics for all bands, you can choose All Bands after
setting NoData in the single bands, and recalculate for all.
The Image Info dialog is found on the Image Analysis menu. When
you choose it, the images in your view will be displayed on a
dropdown menu under Layer Selection. You can then type the pixel
value that you wish to give the NoData pixels in your image. The
Statistics portion of the dialog also features a dropdown menu so
you can designate the layer for which to calculate NoData. This
area of the dialog also names the Pixel Type and the Minimum and
Maximum values. When you click Recalc Stats, the statistics for the
image are recalculated using the NoData Value, and you can close
the image in the view, then reopen it to see the NoData Value
applied. The Representation Type area of the dialog will
automatically choose Continuous or Thematic depending on what
kind of image you have in your view. If you find that a file you need
to be continuous is listed as thematic, you can change it here.
NoData Value
The NoDataValue section of the Image Info dialog gives you the
opportunity to label certain areas of your image as NoData. In order
to do this, you assign a certain value that no other pixel in the image
has to the pixels you want to classify as NoData. You will want to
do this when the pixel values in that particular area of the image are
not important to your statistics or image. You have to assign some
type of value to those pixels to hold their place, so you need to come
up with a value that's not being used for any of the other pixels you
want to include. Using 0 does not work because 0 is itself a
meaningful pixel value. Look at the Minimum value and the
Maximum value under Statistics on the Image Info dialog and
choose your NoData value to be any number between the
Minimum and Maximum. Sometimes the pixel value you choose
as NoData is already in use, so the NoData areas match some
other part of your image. This problem becomes evident when
the image is displayed in the view: there are black spots or
triangles where it should be clear, or clear spots where it should
be black. Also remember that
you can type N/A or leave the area blank so that you have no
NoData assigned if you don't want to use this option.
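The effect of Recalc Stats can be imitated with a masked array: the NoData value is simply left out when the minimum, maximum, and mean are recomputed. The value 255 below is a hypothetical choice for stand-in data that only uses 1 through 254.

    import numpy as np

    band = np.random.randint(1, 255, size=(400, 400)).astype(np.uint8)   # data uses 1-254
    band[:40, :40] = 255                        # corner filled with the chosen NoData value

    valid = np.ma.masked_equal(band, 255)       # ignore NoData when recalculating
    print(valid.min(), valid.max(), valid.mean())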
Using the Image Info dialog
1. Click the Image Analysis dropdown arrow, and click Image
Info.
2. Click the Layer Selection dropdown arrow to make sure the
correct image is displayed.
3. Click the Statistics dropdown arrow to make sure the layer
you want to recalculate is selected.
4. Choose All Bands or Current Band.
5. Type the NoDataValue in the box.
6. Make sure the correct Representation Type is chosen for
your image.
7. Click Recalc Stats.
8. Click Apply and OK.
9. Close the image and re-open to view the results visually.
Options
You can access the Options dialog through the Image Analysis
menu. Through this dialog, you can set an analysis mask as well as
set the extent, cell size, and preferences for future operations or
a single operation. It’s usually best to leave the options set at what
they are, but there may be times you want or need to change them.
When you’re mosaicking images, you can go to the Extent tab on
the Options dialog in order to set the extent at something other than
Union of Inputs, which it automatically defaults to when
mosaicking. The default extent is usually Intersection of Inputs. It
is recommended that you leave the default Union of Inputs when
mosaicking, but you can change it. If you do so, you will need to
check the Use Extent from Analysis Options box on the Mosaic
Image dialog. You can use the Options dialog with any Image
Analysis feature, but you may find it particularly useful with the
Data Preparation features that will be covered in the next chapter.
The Options dialog has four tabs on it for General, Extent, Cell
Size, and Preferences. On the General tab, your output directory is
displayed, and the Analysis mask will default to none, but if you
click the dropdown arrow, you can set it to any raster dataset. If you
want to store your output images and shapefiles in one working
directory, you can navigate to that directory or type the directory
name in the Working directory box. This will allow your working
directory to automatically come up every time you click the browse
button for an output image. The Analysis Coordinate System lets
you choose which coordinate system you would like the image to
be saved with—the one for the input or the one for the active data
frame. Finally, you can select whether or not a warning message is
displayed if raster inputs have to be projected during an analysis
operation.
The Image Analysis Options dialog
Extent
The Extent tab lets you control how much of a theme you want to
use during processing. You do this by setting the Analysis extent.
The rest of the tab becomes active when Same as Display, As
Specified Below, or Same as Layer "......" (whatever layer is active
in the view) is chosen. Same as Display refers to the area currently
displayed in the view. If the view has been zoomed in on a portion
of a theme, then the functions only operate on that portion of the
theme. When you choose Same as Layer, all of the information in
the Table of contents for that layer is considered regardless of
whether or not it is displayed in the view.
lets you fill in the information for the extent. You can also click the
open file button on the Extent tab to choose a dataset to use as the
Analysis extent. If you click this button, you can navigate to the
directory where your data is stored and select a file that has extents
falling within the selected project area.
The other options on the Analysis extent dropdown list are
Intersection of Inputs and Union of Inputs. When you choose
Intersection (which is the default extent for all functions except
Mosaic), Image Analysis for ArcGIS performs functions on the
area of overlap common to the input images to the function.
Portions of the images outside the area of overlap are discounted
from analysis. Union is the default setting of Analysis extent for
mosaicking. When the extent is set to Union of Inputs, Image
Analysis for ArcGIS uses the union of every input theme. It is
highly recommended that you keep this default setting when
mosaicking images.
When you choose an extent that activates the rest of the Extent tab,
the fields are Top, Right, Bottom, and Left. If you are familiar with
the data and want to enter exact coordinates, you can do so in these
fields. Same as Display and As Specified Below activate the Snap
extent to field where you can choose an image to snap the Analysis
mask to.
The Extent tab on the Options dialog
Cell Size
The third tab on the Options dialog is Cell Size. This is for the cell
size of images you produce using Image Analysis for ArcGIS. The
first field on the tab is a dropdown list for Analysis cell size. You
can choose Maximum of Inputs, Minimum of Inputs, As Specified
below, or Same as Layer ".....". Choosing Maximum of Inputs
yields an output that has the maximum resolution of the input files.
For example, if you use Image Difference on a 10 meter image and
a 20 meter image, the output is a 20 meter image.
The Minimum of Inputs option produces an output that has the
minimum resolution of the input files. For example, if you use
Image Difference on a 10 meter image and a 20 meter image, the
output is a 10 meter image.
When you choose As Specified below, you can enter whatever cell
size you wish to use, and Image Analysis for ArcGIS will adjust the
output accordingly.
If you choose Same as Layer "....", indicating a layer in the view,
the cell size reflects the current cell size of that layer.
The Cell Size field will display in either meters or feet. To choose
one, click View in ArcMap, click Data Frame Properties, and on the
General Tab, click the dropdown arrow for Map Units and choose
either Feet or Meters.
The Number of Rows and Number of Columns fields should not be
updated manually as they will update as analysis properties are
changed.
The Cell Size tab on the Options dialog
Preferences
It is recommended that you leave the preference choice to the
default of Bilinear Interpolation, but you can change it to Nearest
Neighbor or Cubic Convolution if your data requires one of those
choices. Bilinear Interpolation is a resampling method that uses the
data file values of four pixels in a 2 × 2 window to calculate an
output data file value by computing a weighted average of the input
data file values with a bilinear function.
The Nearest Neighbor option is a resampling method in which the
output data file value is equal to the input pixel that has coordinates
closest to the retransformed coordinates of the output pixel.
The Cubic Convolution option is a resampling method that uses the
data file values of sixteen pixels in a 4 × 4 window to calculate an
output data file value with a cubic function.
The Preferences tab on the Options dialog
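The difference between the resampling choices can be illustrated for a single output pixel. This Python sketch is an illustration only; the 2 × 2 window values and the fractional position are hypothetical:

```python
import numpy as np

# A hypothetical 2 x 2 window of input data file values.
window = np.array([[10.0, 20.0],
                   [30.0, 40.0]])

# Fractional position of the retransformed output pixel inside the window:
# (0, 0) is the upper-left input pixel, (1, 1) the lower-right.
dx, dy = 0.25, 0.75

# Nearest Neighbor: take the input pixel whose coordinates are closest.
nearest = window[int(round(dy)), int(round(dx))]           # 30.0

# Bilinear Interpolation: weighted average of the four surrounding pixels.
top = window[0, 0] * (1 - dx) + window[0, 1] * dx
bottom = window[1, 0] * (1 - dx) + window[1, 1] * dx
bilinear = top * (1 - dy) + bottom * dy                     # 27.5

print(nearest, bilinear)
```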
Using the Options dialog
The following procedures walk you through the settings you can
change on the Options dialog.
The General Tab
1. Click the Image Analysis dropdown arrow, and click
Options.
2. Navigate to the Working directory if it’s not displayed in
the box.
3. Click the dropdown arrow and select the Analysis mask if
you want one, or navigate to the directory where it is
stored.
4. Choose the Analysis Coordinate System.
5. Check or uncheck the Display warning box according to
your needs.
6. Click the Extent tab to change Extents or OK to finish.
The Extent Tab
1. Click the dropdown arrow for Analysis extent, and
choose an extent, or navigate to a directory to choose a
dataset for the extent.
2. If the coordinate boxes are on, you can type in
coordinates if you know the exact ones to use.
3. If activated, click the dropdown arrow, and choose an
image to Snap extent to, or navigate to the directory
where it is stored.
4. Click the Cell Size tab, or OK.
Cell Size tab
1. Click the dropdown arrow, and choose the cell size, or
navigate to the directory where it is stored.
2. If activated, type the cell size you want to use.
3. Type the number of rows.
4. Type the number of columns.
5. Click the Preferences tab or OK.
The Preferences tab has only the one option of clicking the
dropdown arrow and choosing to resample using either
Nearest Neighbor, Bilinear Interpolation, or Cubic
Convolution.
Working with features
Section 2
Using Data Preparation
When using the Image Analysis for ArcGIS extension, it is sometimes necessary
to prepare your data first. It is important to understand how to prepare your data
before moving on to the different ways Image Analysis for ArcGIS gives you to
manipulate your data. You are given several options for preparing data in Image
Analysis for ArcGIS.
In this chapter you will learn how to:
• Create a new image
• Subset an image
• Mosaic images
• Reproject an image
IN THIS CHAPTER
• Create New Image
• Subset Image
• Mosaic Images
• Reproject Image
Create New Image
The Create New Image function makes it easy to create a new
image file. It also allows you to define the size and content of the
file as well as choose whether the new image will be thematic or
continuous.
Choose thematic for raster layers that contain qualitative and
categorical information about an area. Thematic layers lend
themselves to applications in which categories or themes are used.
They are used to represent data measured on a nominal or ordinal
scale, such as soils, land use, land cover, and roads.
Continuous data is represented in raster layers that contain
quantitative (measuring a characteristic on an interval or ratio
scale) and related, continuous values. Continuous raster layers can
be multiband or single band such as Landsat, SPOT, digitized
(scanned) aerial photograph, DEM, slope, and temperature.
With this feature, you also choose the number of columns and rows
(the default for each is 512, but you can change that) as well as the
data type. The data type determines the type of numbers and the
range of values that can be stored in a raster layer.
The Number of Layers allows you to select how many layers to
create in the new file.
The Initial Value lets you choose the number to initialize the new
file. Every cell is given this value.
When you are finished entering your information into the fields,
you can click OK to create the image, or Cancel to close the dialog.
Data Type          Minimum Value    Maximum Value
Unsigned 1 bit     0                1
Unsigned 2 bit     0                3
Unsigned 4 bit     0                15
Unsigned 8 bit     0                255
Signed 8 bit       -128             127
Unsigned 16 bit    0                65,535
Signed 16 bit      -32,768          32,767
Unsigned 32 bit
Signed 32 bit      -2 billion       2 billion
Float Single
Creating a new image
1. Click the Image Analysis dropdown arrow, point to Data
Preparation, and click Create New Image.
2. Navigate to the directory where the Output Image should
be stored.
3. Choose Thematic or Continuous as the Output Image
Type.
4. Type or click the arrows to enter how many Columns or
Rows if different from the default number of 512.
5. Click the dropdown arrow to choose the Data Type.
6. Type or click the arrows to enter Number of Layers.
7. Type or click the arrows to enter the Initial Value.
8. Click OK.
Subset Image
This function allows you to copy a portion (a subset) of an input
data file into an output data file. This may be necessary if you have
an image file that is much larger than the particular area you need
to study. Subset Image not only eliminates extraneous data, it also
speeds up processing, which can be important when dealing with
multiband data.
The Subset Image function works on multiband continuous data to
separate that data into bands. For example, if you are working with
a TM image that has seven bands of data, you may wish to make a
subset of bands 2, 3, and 4, and discard the rest.
The Subset Image function can be used to subset an image either
spatially or spectrally. You will probably spatially subset more
frequently than spectrally. To subset spatially, you first bring up the
Options dialog, which allows you to apply a mask or extent or set
the cell size. These options are used for all Image Analysis for
ArcGIS functions including Subset Image. Spatial subsets are
particularly useful if you have a large image and you only want to
subset part of it for analysis. You can use the Zoom In tool to draw
a rectangle around the specific area you wish to subset and go from
there. If you wish to subset an image spectrally, you do it directly
in the Subset Image dialog by entering the desired band numbers to
extract from the image.
Following are illustrations of a TM image of the Amazon as it
undergoes a spectral subset.
This feature is also accessible from the Utilities menu.
The Amazon TM image before subsetting
Amazon TM after a spectral subset
The next illustrations reflect images using the spatial subsetting
option.
The image of the Pentagon before spatial subsetting
In order to specify the particular area to subset, you click the Zoom
In tool, draw a rectangle over the area, open the options dialog, and
select Same As Display on the Extent tab. The rectangle is defined
by Top, Left, Bottom, and Right coordinates. Top and Bottom are
measured as the locations on the Y-axis and the Left and Right
coordinates are measured on the X-axis. You can then save the
subset image and work from there on your analysis.
The Options dialog
The Pentagon subset image after setting the Analysis Extent in Options
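Conceptually, a spectral subset keeps selected bands and a spatial subset keeps a rectangle of rows and columns. A minimal Python/NumPy sketch of both ideas, with hypothetical band numbers and coordinates (not the product's implementation):

```python
import numpy as np

# A hypothetical 7-band image stored as (bands, rows, columns).
image = np.random.randint(0, 256, size=(7, 400, 400), dtype=np.uint8)

# Spectral subset: keep only bands 2, 3, and 4 (1-based, as typed in the dialog).
spectral_subset = image[[1, 2, 3], :, :]            # shape (3, 400, 400)

# Spatial subset: keep the rectangle defined by Top/Bottom rows and
# Left/Right columns, the way Same As Display limits the analysis extent.
top, bottom, left, right = 50, 250, 100, 300
spatial_subset = image[:, top:bottom, left:right]   # shape (7, 200, 200)

print(spectral_subset.shape, spatial_subset.shape)
```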
Subsetting an image spectrally
1. Click Add Data to add the image to the view.
2. Double-click the image name in the Table of contents to
open Layer Properties.
3. Click the Symbology tab in Layer Properties.
4. Click Stretched in the Show panel.
5. Click the Band dropdown arrow, and select the layer you
want to subset.
6. Click Apply and OK.
7. Click the Image Analysis dropdown arrow, point to Data
Preparation, and click Subset Image.
8. Click the Input Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
9. Using a comma for separation, type the band numbers
you want to subset in the text box.
10. Type the file name of the Output Image, or navigate to
the directory where it should be stored.
11. Click OK.
Subsetting an image spatially
1. Click the Add Data button to add your image.
2. Click the Zoom In tool, and draw a rectangle over the
area you want to subset.
3. Click the Image Analysis menu, and click Options.
4. Click the Extent tab.
5. Click the Analysis extent dropdown arrow, and select
Same As Display.
6. Click Apply and OK.
7. Click the Image Analysis dropdown arrow and click Save
As, and save the image in the appropriate directory.
Mosaic Images
Mosaicking is the process of joining georeferenced images together
to form a larger image. The input images must all contain map and
projection information, although they need not be in the same
projection or have the same cell sizes. Calibrated input images are
also supported. All input images must have the same number of
layers. You can mosaic single or multiband continuous data, or
thematic data.
It is extremely important when mosaicking to arrange your images
in the view as you want the output theme to appear before you
mosaic them. Image Analysis for ArcGIS mosaics images strictly
based on their appearance in the view. This allows you to mosaic a
large number of images without having to make them all active.
It is also important that the images you plan to mosaic contain the
same number of bands. You cannot mosaic a seven band TM image
with a six band TM image. You can, however, use Subset Image to
subset bands from an existing image and then mosaic regardless of
the number of bands they originally contained.
You can mosaic images with different cell sizes or resolutions.
When this happens you can consult the settings in the Image
Analysis Options dialog for Cell Size. The Cell Size is initially set
to the maximum cell size so if you mosaic two images, one with a
4-meter resolution and one with a 5-meter resolution, the output
mosaicked image has a 5-meter resolution. You can set the Cell
Size in the Options dialog to whatever cell size you like so that the
output mosaicked image has the cell size you selected.
The Extent tab on the Options dialog will default to Union of Inputs
for mosaicking images. If, for some reason, you want to use a
different extent, you can change it in the Options dialog and check
the Use Extent from Analysis Options box on the Mosaic Images
dialog. It is recommended that you leave it at the default of Union
of Inputs.
Another Options feature to take note of is the Preferences tab. For
mosaicking images, you should resample using Nearest Neighbor.
This will ensure that the mosaicked pixels do not differ in their
appearance from the original image. Other resampling methods use
averages to compute pixel values and can produce an edge effect.
When you apply Mosaic, the images are processed using whatever
stretch you’ve specified in the Layer Properties dialog. During
processing, each image is fed through its own lookup table, and the
output mosaicked image has the stretch built in, and should be
viewed with no stretch. This allows you to adjust the stretch of each
image independently to achieve the desired overall color balance.
With the Mosaic tool you are also given a choice of how to handle
image overlaps by using the order displayed, maximum value,
minimum value, or average value.
Choose:
Order Displayed — replaces each pixel in the overlap area with the
pixel value of the image that is on top in the view.
Maximum Value — replaces each pixel in the overlap area with the
greater value of the corresponding pixels in the overlapping
images.
Minimum Value — replaces each pixel in the overlap area with the
lesser value of the corresponding pixels in the overlapping images.
Average Value — replaces each pixel in the overlap area with the
average of the values of the corresponding pixels in the overlapping
images.
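The four overlap options can be sketched on a pair of hypothetical overlapping cells. The NumPy illustration below assumes NaN marks cells an image does not cover and is not the product's implementation:

```python
import numpy as np

# Two hypothetical overlapping blocks already georeferenced to the same
# cells; NaN marks cells that an image does not cover.
top_image = np.array([[10.0, 12.0], [np.nan, 14.0]])
bottom_image = np.array([[11.0, np.nan], [13.0, 20.0]])
stack = np.stack([top_image, bottom_image])

order_displayed = np.where(np.isnan(top_image), bottom_image, top_image)
maximum_value = np.nanmax(stack, axis=0)
minimum_value = np.nanmin(stack, axis=0)
average_value = np.nanmean(stack, axis=0)

print(order_displayed)
print(maximum_value, minimum_value, average_value, sep="\n")
```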
The color balancing options let you choose between balancing by
brightness/contrast, histogram matching, or none. If you choose
brightness/contrast, the mosaicked image will be balanced by
utilizing the adjustments you have made in Layer Properties/
Symbology. If you choose Histogram Matching, the input images
are adjusted to have histograms similar to that of the top image in
the view. Select None if you don't want the pixel values adjusted.
How to Mosaic Images
1. Add the images you want to mosaic to the view.
2. Arrange images in the view in the order that you want
them in the mosaic.
3. Click the Image Analysis dropdown arrow, point to Data
Preparation, and click Mosaic Images.
4. Click the Handle Image Overlaps by dropdown arrow,
and click the method you want to use.
5. If you want the images automatically cropped, check the
box, and enter the Percent by which to crop the images.
6. Choose the Color Balance method.
7. Check the box if you want to use the extent you set in
Analysis Options.
8. Navigate to the directory where the Output Image should
be stored.
9. Click OK.
For more information on mosaicking images, see Chapter 2,
"Quick-start tutorial."
Reproject Image
Reproject Image gives you the ability to reproject raster image data
from one map projection to another. Reproject Image, like all
Image Analysis for ArcGIS functions, observes the settings in the
Options dialog so don’t forget to use Options to set Extent, Cell
Size, and so on if so desired.
ArcMap has the capability to reproject images on the fly: choose
View/Data Frame Properties, select the Coordinate System tab, and
select the desired projection. After you select the coordinate
system, you apply it and go to Reproject Image in Image Analysis
for ArcGIS.
At times you may need to produce an image in a specific projection.
By having the desired output projection specified in the Data Frame
Properties, the only things you need to specify in Reproject Image
are the input and output images.
Before Reproject Image
Here is the reprojected image after changing the Coordinate System
to Mercator (world):
After Reproject Image
How to Reproject an Image
1. Click Add Data, and add the image you want to reproject
to the view.
2. Right-click in the view, and click on Properties to bring up
the Data Frame Properties dialog.
3. Click on the Coordinate System tab.
4. Click Predefined and choose whatever coordinate
system you want to use to reproject the image.
5. Click Apply and OK.
6. Click the Image Analysis dropdown arrow, point to Data
Preparation, and click Reproject Image.
7. Click the Input Image dropdown arrow and click the file
you want to use, or navigate to the directory where it is
stored.
8. Navigate to the directory where the Output Image should
be stored.
9. Click OK.
Performing Spatial Enhancement
Spatial Enhancement is a function that enhances an image using the values of
individual and surrounding pixels. Spatial Enhancement deals largely with spatial
frequency, which is the difference between the highest and lowest values of a
contiguous set of pixels. Jensen (1986) defines spatial frequency as “the number of
changes in brightness value per unit distance for any part of an image.”
There are three types of spatial frequency:
• zero spatial frequency — a flat image, in which every pixel has the same value
• low spatial frequency — an image consisting of a smoothly varying gray scale
• high spatial frequency — an image consisting of drastically changing pixel
values such as a checkerboard of black and white pixels
The Spatial Enhancement feature lets you use convolution, non-directional edge,
focal analysis, and resolution merge to enhance your images. Depending on what
you need to do to your image, you will select one feature from the Spatial
Enhancement menu. This chapter will focus on the explanation of these features as
well as how to apply them to your data.
This chapter is organized according to the order in which the Spatial Enhancement
tools appear. You may want to skip ahead if the information you are seeking is
about one of the tools near the end of the menu list.
IN THIS CHAPTER
• Convolution
• Non-Directional Edge
• Focal Analysis
• Resolution Merge
Convolution
Convolution filtering is the process of averaging small sets of pixels
across an image. Convolution filtering is used to change the spatial
frequency characteristics of an image (Jensen 1996).
A convolution kernel is a matrix of numbers that is used to average
the value of each pixel with the values of surrounding pixels. The
numbers in the matrix serve to weight this average toward
particular pixels. These numbers are often called coefficients,
because they are used as such in the mathematical equations.
Applying convolution filtering
Apply Convolution filtering by clicking the Image Analysis
dropdown arrow, and choosing Convolution from the Spatial
Enhancement menu. The word filtering is a broad term, which
refers to the altering of spatial or spectral features for image
enhancement (Jensen 1996). Convolution filtering is one method of
spatial filtering. Some texts use the terms synonymously.
Convolution example
To understand how one pixel is convolved, imagine that the
convolution kernel is overlaid on the data file values of the image
(in one band) so that the pixel to be convolved is in the center of the
window. To compute the output value for this pixel, each value in the
convolution kernel is multiplied by the image pixel value that
corresponds to it. These products are summed, and the total is
divided by the sum of the values in the kernel, as shown in this
equation:
Data (one band of the image):
2 8 6 6 6
2 8 6 6 6
2 2 8 6 6
2 2 2 8 6
2 2 2 2 8
Kernel:
-1 -1 -1
-1 16 -1
-1 -1 -1
integer [((-1 × 8) + (-1 × 6) + (-1 × 6) +
(-1 × 2) + (16 × 8) + (-1 × 6) +
(-1 × 2) + (-1 × 2) + (-1 × 8)) /
(-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1)]
= int [(128 - 40) / (16 - 8)]
= int (88 / 8) = int (11) = 11
When the 2 × 2 set of pixels near the center of this 5 × 5 image is
convolved, the output values are:
    1   2   3   4   5
1   -   -   -   -   -
2   -  11   5   -   -
3   -   0  11   -   -
4   -   -   -   -   -
5   -   -   -   -   -
The kernel used in this example is a high frequency kernel. The
relatively lower values become lower, and the higher values
become higher, thus increasing the spatial frequency of the image.
Convolution formula
The following formula is used to derive an output data file value for
the pixel being convolved (in the center):
V = [ Σ (i = 1 to q) Σ (j = 1 to q) ( fij × dij ) ] / F
Where:
fij = the coefficient of a convolution kernel at position i,j (in the
kernel)
dij = the data value of the pixel that corresponds to fij
q = the dimension of the kernel, assuming a square kernel (if q = 3,
the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or 1 if the sum
of coefficients is zero
V = the output pixel value
Source: Modified from Jensen 1996; Schowengerdt 1983
The sum of the coefficients (F) is used as the denominator of the
equation above, so that the output values are in relatively the same
range as the input values. Since F cannot equal zero (division by
zero is not defined), F is set to 1 if the sum is zero.
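As an illustration of this formula (not the product's implementation), the following Python/NumPy sketch reproduces the worked example above:

```python
import numpy as np

data = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]], dtype=float)

kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]], dtype=float)

F = kernel.sum() if kernel.sum() != 0 else 1.0   # 8 for this kernel

def convolve_pixel(image, kern, row, col):
    """Convolve one interior pixel: sum of products divided by F."""
    window = image[row - 1:row + 2, col - 1:col + 2]
    return int((window * kern).sum() / F)

print(convolve_pixel(data, kernel, 2, 2))   # 11, matching the worked example
```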
Zero sum kernels
Zero sum kernels are kernels in which the sum of all coefficients in
the kernel equals zero. When a zero sum kernel is used, then the
sum of the coefficients is not used in the convolution equation, as
above. In this case, no division is performed (F = 1), since division
by zero is not defined.
This generally causes the output values to be:
• zero in areas where all input values are equal (no edges)
• low in areas of low spatial frequency
• extreme in areas of high spatial frequency (high values become
much higher, low values become much lower)
Therefore, a zero sum kernel is an edge detector, which usually
smooths out or zeros out areas of low spatial frequency and creates
a sharp contrast where spatial frequency is high, which is at the
edges between homogeneous (homogeneity is low spatial
frequency) groups of pixels. The resulting image often consists of
only edges and zeros.
Zero sum kernels can be biased to detect edges in a particular
direction. For example, this 3 × 3 kernel is biased to the south
(Jensen 1996).
-1 -1 -1
1 -2 1
1 1 1
High frequency kernels
A high frequency kernel, or high pass kernel, has the effect of
increasing spatial frequency.
High frequency kernels serve as edge enhancers, since they bring
out the edges between homogeneous groups of pixels. Unlike edge
detectors (such as zero sum kernels), they highlight edges and do
not necessarily eliminate other features.
When a high frequency kernel, such as...
-1 -1 -1
-1 16 -1
-1 -1 -1
...is used on a set of pixels in which a relatively low value is
surrounded by higher values, like this...
BEFORE            AFTER
204 200 197       -    -   -
201 106 209       -   10   -
198 200 210       -    -   -
...the low value gets lower. Inversely, when the high frequency
kernel is used on a set of pixels in which a relatively high value is
surrounded by lower values...
BEFORE            AFTER
64  60  57        -    -    -
61 125  69        -  188    -
58  60  70        -    -    -
...the high value becomes higher. In either case, spatial frequency is
increased by this kernel.
Low frequency kernels
Below is an example of a low frequency kernel, or low pass kernel,
which decreases spatial frequency:
1 1 1
1 1 1
1 1 1
This kernel simply averages the values of the pixels, causing them
to be more homogeneous. The resulting image looks either more
smooth or more blurred.
Convolution with High Pass
Apply Convolution
1. Click the Image Analysis dropdown arrow, point to
Spatial Enhancement, and click Convolution.
2. Click the Input Image dropdown arrow, and click a file, or
navigate to the directory where the file is stored.
3. Click the Kernel dropdown arrow, and click the kernel you
want to use.
4. Choose Reflection or Background Fill.
5. Navigate to the directory where the Output Image should
be stored.
6. Click OK.
Applying Convolution
Reflection fills in the area beyond the edge of the image with a
reflection of the values at the edge. Background fill uses zeros to
fill in the kernel area beyond the edge of the image.
Convolution allows you to perform image enhancement
operations such as averaging and high pass or low pass filtering.
Each data file value of the new output file is calculated by
centering the kernel over a pixel and multiplying the original
values of the center pixel and the appropriate surrounding pixels
by the corresponding coefficients from the matrix. To make
sure the output values are within the general range of the input
values, these numbers are summed and then divided by the sum
of the coefficients. If the sum is zero, the division is not
performed.
Non-Directional Edge
The Non-Directional Edge function averages the results of two
orthogonal first derivative edge detectors. The filters used are the
Sobel and Prewitt filters. Both of these filters are based on a
calculation of the 1st derivative, or slope, in both the x and y
directions. Both use orthogonal kernels convolved separately with
the original image, and then combined.
The Non-Directional Edge is based on the Sobel zero-sum
convolution kernel. Most of the standard image processing filters
are implemented as a single pass moving window (kernel)
convolution. Examples include low pass, edge enhance, edge
detection, and summary filters.
For this model, a Sobel filter has been selected. To convert this
model to the Prewitt filter calculation, the kernels must be changed
according to the example below.
Sobel:
vertical           horizontal
-1  0  1           -1  -2  -1
-2  0  2            0   0   0
-1  0  1            1   2   1
Prewitt:
vertical           horizontal
-1  0  1           -1  -1  -1
-1  0  1            0   0   0
-1  0  1            1   1   1
Image of Seattle before applying Non-Directional Edge
After Non-Directional Edge
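A sketch of the idea in Python with SciPy: each orthogonal Sobel kernel is convolved with a hypothetical image, and the two first-derivative results are then combined. Here they are combined as a gradient magnitude, which is one common convention; the exact combination Image Analysis for ArcGIS uses is not spelled out here:

```python
import numpy as np
from scipy.ndimage import convolve

# A hypothetical single-band image with a vertical edge down the middle.
image = np.array([[10, 10, 10, 80, 80],
                  [10, 10, 10, 80, 80],
                  [10, 10, 10, 80, 80],
                  [10, 10, 10, 80, 80]], dtype=float)

sobel_vertical = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)
sobel_horizontal = np.array([[-1, -2, -1],
                             [ 0,  0,  0],
                             [ 1,  2,  1]], dtype=float)

# Convolve with each orthogonal kernel separately ("reflect" mirrors the
# Reflection fill option), then combine the two responses into one edge image.
gx = convolve(image, sobel_vertical, mode="reflect")
gy = convolve(image, sobel_horizontal, mode="reflect")
edges = np.hypot(gx, gy)   # gradient magnitude

print(edges.round(1))
```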
Using Non-Directional Edge
1. Click the Image Analysis dropdown arrow, point to
Spatial Enhancement, and click Non-Directional Edge.
2. Click the Input Image dropdown arrow, and click a file, or
navigate to the directory where the file is stored.
3. Choose Sobel or Prewitt.
4. Choose Reflection or Background Fill.
5. Type the file name of the Output Image, or navigate to
the directory where it should be stored.
6. Click OK.
Using Non-Directional Edge
In step 4, reflection fills in the area beyond the edge of the
image with a reflection of the values at the edge. Background
fill uses zeros to fill in the kernel area beyond the edge of the
image.
Focal Analysis
The Focal Analysis function enables you to perform one of several
types of analysis on class values in an image file using a process
similar to convolution filtering.
This model (Median Filter) is useful for reducing noise such as
random spikes in data sets, dead sensor striping, and other impulse
imperfections in any type of image. It is also useful for enhancing
thematic images.
Focal Analysis evaluates the region surrounding the pixel of
interest (center pixel). The operations that can be performed on the
pixel of interest include:
• Standard Deviation — measure of texture
• Sum
• Mean — good for despeckling radar data
• Median — despeckle radar
• Min
• Max
These functions allow you to select the size of the surrounding
region to evaluate by selecting the window size.
An image before Focal Analysis
After Focal Analysis is performed
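The Median option can be sketched for a single pixel of interest. This Python/NumPy illustration uses a hypothetical image with one noise spike and is not the product's implementation:

```python
import numpy as np

# A hypothetical image with one noisy "spike" pixel near the center.
image = np.array([[12, 13, 12, 14],
                  [13, 250, 12, 13],
                  [12, 13, 14, 12],
                  [13, 12, 13, 14]], dtype=float)

def focal_median(img, row, col, size=3):
    """Median of the size x size neighborhood around one interior pixel."""
    half = size // 2
    window = img[row - half:row + half + 1, col - half:col + half + 1]
    return np.median(window)

# The spike at (1, 1) is replaced by the median of its 3 x 3 neighborhood.
print(focal_median(image, 1, 1))   # 13.0
```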
Applying Focal Analysis
1. Click the Image Analysis dropdown arrow, point to
Spatial Enhancement, and click Focal.
2. Click the Input Image dropdown arrow, and click a file, or
navigate to the directory where the file is stored.
3. Click the Focal Function dropdown arrow, and click the
function you want to use.
4. Click the Neighborhood Shape dropdown arrow, and click
the shape you want to use.
5. Click the Neighborhood Definition dropdown arrow, and
click the Matrix size you want to use.
6. Type the file name of the Output Image, or navigate to
the directory where it should be stored.
7. Click OK.
Focal Analysis Results
Focal Analysis is similar to Convolution in the process that it
uses. With Focal Analysis, you are able to perform several
different types of analysis on the pixel values in an image file.
Resolution Merge
The resolution of a specific sensor can refer to radiometric, spatial,
spectral, or temporal resolution. This function merges imagery of
differing spatial resolutions.
Landsat TM sensors have seven bands with a spatial resolution of
28.5 m. SPOT panchromatic has one broad band with very good
spatial resolution—10 m. Combining these two images to yield a
seven-band data set with 10 m resolution provides the best
characteristics of both sensors.
A number of models have been suggested to achieve this image
merge. Welch and Ehlers (1987) used forward-reverse RGB to IHS
transforms, replacing I (from transformed TM data) with the SPOT
panchromatic image. However, this technique is limited to three
bands (R,G,B).
Chavez (1991), among others, uses the forward-reverse principal
components transforms with the SPOT image, replacing PC-1.
In the above two techniques, it is assumed that the intensity
component (PC-1 or I) is spectrally equivalent to the SPOT
panchromatic image, and that all the spectral information is
contained in the other PCs or in H and S. Since SPOT data does not
cover the full spectral range that TM data does, this assumption
does not strictly hold. It is unacceptable to resample the thermal
band (TM6) based on the visible (SPOT panchromatic) image.
Another technique (Schowengerdt 1980) additively combines a
high frequency image derived from the high spatial resolution data
(i.e., SPOT panchromatic) with the high spectral resolution Landsat
TM image.
The Resolution Merge function uses the Brovey Transform method
of resampling low spatial resolution data to a higher spatial
resolution while retaining spectral information:
Brovey Transform
In the Brovey Transform, three bands are used according to the
following formula:
DNB1_new = [DNB1 / (DNB1 + DNB2 + DNB3)] × [DN high res. image]
DNB2_new = [DNB2 / (DNB1 + DNB2 + DNB3)] × [DN high res. image]
DNB3_new = [DNB3 / (DNB1 + DNB2 + DNB3)] × [DN high res. image]
Where:
B = band
The Brovey Transform was developed to visually increase contrast
in the low and high ends of an image’s histogram (i.e., to provide
contrast in shadows, water and high reflectance areas such as urban
features). Brovey Transform is good for producing RGB images
with a higher degree of contrast in the low and high ends of the
image histogram and for producing visually appealing images.
Since the Brovey Transform is intended to produce RGB images,
only three bands at a time should be merged from the input
multispectral scene, such as bands 3, 2, 1 from a SPOT or Landsat
TM image or 4, 3, 2 from a Landsat TM image. The resulting
merged image should then be displayed with bands 1, 2, 3 to RGB.
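A minimal NumPy sketch of the three-band Brovey formula, using hypothetical, already co-registered arrays (an illustration only):

```python
import numpy as np

# Hypothetical co-registered inputs: three multispectral bands already
# resampled to the panchromatic cell size, plus the high resolution band.
b1 = np.array([[40.0, 50.0]])
b2 = np.array([[60.0, 50.0]])
b3 = np.array([[100.0, 100.0]])
pan = np.array([[180.0, 120.0]])

# Each band's share of the multispectral total, rescaled by the pan DN.
total = b1 + b2 + b3
b1_new = (b1 / total) * pan
b2_new = (b2 / total) * pan
b3_new = (b3 / total) * pan

print(b1_new, b2_new, b3_new)
```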
Resolution Merge
1. Click the Image Analysis dropdown arrow, point to
Spatial Enhancement, and click Resolution Merge.
2. Click the High Resolution Image dropdown arrow, and
click a file, or navigate to the directory where the file is
stored.
3. Click the Multi-Spectral Image dropdown arrow, and click
a file, or navigate to the directory where the file is stored.
4. Navigate to the directory where the Output Image should
be stored.
5. Click OK.
Using Resolution Merge
Use Resolution Merge to integrate imagery of different spatial
resolutions (pixel size).
The following images display the Resolution Merge function:
High Resolution Image
Multi-Spectral Image
Resolution Merge
Using Radiometric Enhancement
Radiometric enhancement deals with the individual values of the pixels in an
image. It differs from Spatial Enhancement, which takes into account the values of
neighboring pixels.
Radiometric Enhancement consists of functions to enhance your image by using
the values of individual pixels within each band. Depending on the points and the
bands in which they appear, radiometric enhancements that are applied to one band
may not be appropriate for other bands. Therefore, the radiometric enhancement of
a multiband image can usually be considered as a series of independent, single-
band enhancements (Faust 1989).
IN THIS CHAPTER
• LUT (Lookup Table) Stretch
• Histogram Equalization
• Histogram Matching
• Brightness Inversion
LUT Stretch
LUT Stretch creates an output image that contains the data values
as modified by a lookup table. The output is 3 bands.
Contrast stretch
When radiometric enhancements are performed on the display
device, the transformation of data file values into brightness values
is illustrated by the graph of a lookup table.
Contrast stretching involves taking a narrow input range and
stretching the output brightness values for those same pixels over a
wider range. This process is done in Layer Properties in Image
Analysis for ArcGIS.
Linear and nonlinear
The terms linear and nonlinear, when describing types of spectral
enhancement, refer to the function that is applied to the data to
perform the enhancement. A piecewise linear stretch uses a
polyline function to increase contrast to varying degrees over
different ranges of the data.
Linear contrast stretch
A linear contrast stretch is a simple way to improve the visible
contrast of an image. It is often necessary to contrast-stretch raw
image data, so that they can be seen on the display.
In most raw data, the data file values fall within a narrow range—
usually a range much narrower than the display device is capable of
displaying. That range can be expanded to utilize the total range of
the display device (usually 0 to 255).
Nonlinear contrast stretch
A nonlinear spectral enhancement can be used to gradually increase
or decrease contrast over a range, instead of applying the same
amount of contrast (slope) across the entire image. Usually,
nonlinear enhancements bring out the contrast in one range while
decreasing the contrast in other ranges.
Piecewise linear contrast stretch
A piecewise linear contrast stretch allows for the enhancement of a
specific portion of data by dividing the lookup table into three
sections: low, middle, and high. It enables you to create a number
of straight line segments that can simulate a curve. You can
enhance the contrast or brightness of any section in a single color
gun at a time. This technique is very useful for enhancing image
areas in shadow or other areas of low contrast.
A piecewise linear contrast stretch normally follows two rules:
1. The data values are continuous; there can be no break in the
values between High, Middle, and Low. Range specifications
adjust in relation to any changes to maintain the data value
range.
2. The data values specified can go only in an upward,
increasing direction.
The contrast value for each range represents a percentage of the
available output range that particular range occupies. Since rules 1
and 2 above are enforced, as the contrast and brightness values are
changed, they may affect the contrast and brightness of other
ranges. For example, if the contrast of the low range increases, it
forces the contrast of the middle to decrease.
Contrast stretch on the display
Usually, a contrast stretch is performed on the display device only,
so that the data file values are not changed. Lookup tables are
created that convert the range of data file values to the maximum
range of the display device. You can then edit and save the contrast
stretch values and lookup tables as part of the raster data image file.
These values are loaded into the view as the default display values
the next time the image is displayed.
The statistics in the image file contain the mean, standard deviation,
and other statistics on each band of data. The mean and standard
deviation are used to determine the range of data file values to be
translated into brightness values or new data file values. You can
specify the number of standard deviations from the mean that are to
be used in the contrast stretch. Usually the data file values that are
two standard deviations above and below the mean are used. If the
data has a normal distribution, then this range represents
approximately 95 percent of the data.
The mean and standard deviation are used instead of the minimum
and maximum data file values because the minimum and maximum
data file values are usually not representative of most of the data. A
notable exception occurs when the feature being sought is in
shadow. The shadow pixels are usually at the low extreme of the
data file values, outside the range of two standard deviations from
the mean.
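The two-standard-deviation rule can be sketched as a simple linear mapping. Note that Image Analysis for ArcGIS normally applies such a stretch through lookup tables on the display rather than by rewriting the data file; the band values below are hypothetical:

```python
import numpy as np

# A hypothetical band of data file values clustered in a narrow range.
band = np.random.normal(loc=80.0, scale=10.0, size=(100, 100))

mean, std = band.mean(), band.std()
low, high = mean - 2 * std, mean + 2 * std   # about 95% of a normal band

# Linearly map [mean - 2*std, mean + 2*std] onto the display range 0-255.
stretched = (band - low) / (high - low) * 255.0
stretched = np.clip(stretched, 0, 255).astype(np.uint8)

print(stretched.min(), stretched.max())
```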
Varying the contrast stretch
There are variations of the contrast stretch that can be used to
change the contrast of values over a specific range, or by a specific
amount. By manipulating the lookup tables as in the following
illustration, the maximum contrast in the features of an image can
be brought out.
This figure shows how the contrast stretch manipulates the
histogram of the data, increasing contrast in some areas and
decreasing it in others. This is also a good example of a piecewise
linear contrast stretch, which is created by adding breakpoints to the
histogram.
Apply LUT Stretch Class
1. Click the Image Analysis dropdown arrow, point to
Radiometric Enhancement, and click LUT Stretch.
2. Click the Input Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
3. Navigate to the directory where the Output Image should
be stored. Set the output type to TIFF.
4. Click OK.
LUT Stretch Class
LUT Stretch Class provides a means of producing an output
image that has the stretch built into the pixel values to use with
packages that have no stretching capabilities.
Histogram Equalization
Histogram Equalization is a nonlinear stretch that redistributes
pixel values so that there is approximately the same number of
pixels with each value within a range. The result approximates a flat
histogram. Therefore, contrast is increased at the peaks of the
histogram and lessened at the tails.
Histogram Equalization can also separate pixels into distinct
groups if there are few output values over a wide range. This can
have the visual effect of a crude classification.
The original histogram and the histogram after equalization: pixels
at the peak are spread apart (contrast is gained), while pixels at the
tails are grouped together (contrast is lost).
To perform a Histogram Equalization, the pixel values of an image
(either data file values or brightness values) are reassigned to a
certain number of bins, which are simply numbered sets of pixels.
The pixels are then given new values, based upon the bins to which
they are assigned.
The total number of pixels is divided by the number of bins,
equaling the number of pixels per bin, as shown in the following
equation:
A = T / N
Where:
N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin
The pixels of each input value are assigned to bins, so that the
number of pixels in each bin is as close to A as possible. Consider
a histogram in which the data file values 0 through 9 have 5, 5, 10,
15, 60, 60, 40, 30, 10, and 5 pixels, respectively. There are 240
pixels represented by this histogram. To equalize this histogram to
10 bins, there would be:
240 pixels / 10 bins = 24 pixels per bin = A
To assign pixels to bins, the following equation is used:
Bi = int [ ( Σ (k = 1 to i - 1) Hk + Hi / 2 ) / A ]
Where:
A = equalized number of pixels per bin (see above)
Hi = the number of values with the value i (histogram)
int = integer function (truncating real numbers to integer)
Bi = bin number for pixels with value i
Source: Modified from Gonzalez and Wintz 1977
The 10 bins are rescaled to the range 0 to M. In this example, M =
9, because the input values ranged from 0 to 9, so that the equalized
histogram can be compared to the original. In the output histogram
of this equalized image (A = 24; the numbers inside the bars are the
input data file values), pixels from the peaks are spread across more
output values, while the input values at the tails are grouped
together.
Effect on contrast
By comparing the original histogram of the example data with the
one above, you can see that the enhanced image gains contrast in
the peaks of the original histogram. For example, the input range of
3 to 7 is stretched to the range 1 to 8. However, data values at the
tails of the original histogram are grouped together. Input values 0
through 2 all have the output value of 0. So, contrast among the tail
pixels, which usually make up the darkest and brightest regions of
the input image, is lost.
The resulting histogram is not exactly flat, since the pixels can
rarely be grouped together into bins with an equal number of pixels.
Sets of pixels with the same value are never split up to form equal
bins.
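The bin-assignment equation can be checked against the worked example above with a few lines of Python/NumPy (an illustration, not the product's implementation): input values 0 through 2 all land in bin 0, while inputs 3 through 7 spread over bins 1 through 8, as described.

```python
import numpy as np

# The example histogram from the text: counts for data file values 0-9.
hist = np.array([5, 5, 10, 15, 60, 60, 40, 30, 10, 5])
n_bins = 10
A = hist.sum() / n_bins          # 240 / 10 = 24 pixels per bin

# B_i = int[ (sum of H_k for k < i  +  H_i / 2) / A ]
cumulative_before = np.concatenate(([0], np.cumsum(hist)[:-1]))
bins = ((cumulative_before + hist / 2) / A).astype(int)

print(bins)   # bin (output value) assigned to each input value 0-9
```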
Performing Histogram Equalization
1. Click the Image Analysis dropdown arrow, point to
Radiometric Enhancement, and click Histogram
Equalization.
2. Click the Input Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
3. Type or click the arrows to enter the Number of Bins.
4. Navigate to the directory where the Output Image should
be stored.
5. Click OK.
Histogram Equalization
Perform Histogram Equalization when you need to redistribute
pixels to approximate a flat histogram.
The Histogram Equalization process works by redistributing
pixel values so that there are approximately the same number of
pixels with each value within a range.
Histogram Equalization can also separate pixels into distinct
groups if there are few output values over a wide range. This
process can have the effect of a crude classification.
Histogram Matching
Histogram Matching is the process of determining a lookup table
that converts the histogram of one image so that it resembles the
histogram of another. Histogram Matching is useful for matching
data of the same or adjacent scenes that were collected on separate
days, or are slightly different because of sun angle or atmospheric
effects. This is especially useful for mosaicking or change
detection.
To achieve good results with Histogram Matching, the two input
images should have similar characteristics:
• The general shape of the histogram curves should be similar.
• Relative dark and light features in the image should be the
same.
• For some applications, the spatial resolution of the data should
be the same.
• The relative distributions of land covers should be about the
same, even when matching scenes that are not of the same area.
To match the histograms, a lookup table is mathematically derived,
which serves as a function for converting one histogram to the
other, as illustrated here.
Source histogram (a), mapped through the lookup table (b),
approximates model histogram (c).
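One standard way to derive such a lookup table is to align the cumulative distributions of the two images. The NumPy sketch below works on hypothetical 8-bit bands and is not necessarily the exact procedure Image Analysis for ArcGIS uses:

```python
import numpy as np

# Hypothetical 8-bit source and reference (model) bands.
source = np.random.randint(0, 200, size=(100, 100), dtype=np.uint8)
reference = np.random.randint(30, 256, size=(100, 100), dtype=np.uint8)

# Build the cumulative distributions of both images...
src_hist = np.bincount(source.ravel(), minlength=256)
ref_hist = np.bincount(reference.ravel(), minlength=256)
src_cdf = np.cumsum(src_hist) / source.size
ref_cdf = np.cumsum(ref_hist) / reference.size

# ...and derive a lookup table that sends each source value to the reference
# value whose cumulative frequency first reaches the same level.
lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
matched = lut[source]

print(matched.min(), matched.max())
```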
Performing Histogram Matching
1. Click the Image Analysis dropdown arrow, point to
Radiometric Enhancement, and click Histogram Match.
2. Click the Input Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
3. Click the Match Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
4. Navigate to the directory where the Output Image should
be stored.
5. Click OK.
Histogram Matching
Perform Histogram Matching when matching data of the same or
adjacent scenes that were gathered on different days and have
differences due to the angle of the sun or atmospheric effects.
Histogram Matching mathematically determines a lookup table
that will convert the histogram of one image to resemble the
histogram of another, and is particularly useful for mosaicking
images or change detection.
Brightness Inversion
The Brightness Inversion functions produce images that have the
opposite contrast of the original image. Dark detail becomes light,
and light detail becomes dark. This can also be used to invert a
negative image that has been scanned to produce a positive image.
Inverse is useful for emphasizing detail that would otherwise be lost
in the darkness of the low DN pixels. This function applies the
following algorithm:
DNout = 1.0 if 0.0 < DNin < 0.1
DNout = 0.1 / DNin if 0.1 < DNin < 1
An image before Brightness Inversion
The same image after Brightness Inversion
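Applied to a band already scaled to 0.0-1.0, the algorithm can be sketched in a few lines of NumPy (hypothetical values; an illustration only):

```python
import numpy as np

# A hypothetical band already scaled to the 0.0-1.0 range.
dn_in = np.array([[0.05, 0.10, 0.25],
                  [0.50, 0.80, 1.00]])

# DNout = 1.0 where DNin < 0.1, otherwise DNout = 0.1 / DNin
dn_out = np.where(dn_in < 0.1, 1.0, 0.1 / dn_in)

print(dn_out)   # dark detail becomes light, light detail becomes dark
```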
Applying Brightness Inversion
1. Click the Image Analysis dropdown arrow, point to
Radiometric Enhancement, and click Brightness
Inversion.
2. Click the Input Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
3. Navigate to the directory where the Output Image should
be stored.
4. Click OK.
Brightness Inversion
This function allows both linear and nonlinear reversal of the
image intensity range. Images can be produced that have the
opposite contrast of the original image. Dark detail becomes light,
and light detail becomes dark.
Applying Spectral Enhancement
Spectral Enhancement enhances images by transforming the values of each pixel
on a multiband basis. The techniques in this chapter all require more than one band
of data. They can be used to:
• extract new bands of data that are more interpretable to the eye
• apply mathematical transforms and algorithms
• display a wider variety of information in the three available color guns (R, G, B)
You can use the features of Spectral Enhancement to study such patterns as might
occur with deforestation or crop rotation and to see images in a more natural state
or view images in different ways, such as changing the bands in an image from red,
green, and blue to intensity, hue, and saturation.
IN THIS CHAPTER
• RGB to IHS
• IHS to RGB
• Vegetative Indices
• Color IR to Natural Color
RGB to IHS
The color monitors used for image display on image processing
systems have three color guns. These correspond to red, green, and
blue (R,G,B), the additive primary colors. When displaying three
bands of a multiband data set, the viewed image is said to be in
R,G,B space.
However, it is possible to define an alternate color space that uses
intensity (I), hue (H), and saturation (S) as the three positioned
parameters (in lieu of R, G, and B). This system is advantageous in
that it presents colors more nearly as perceived by the human eye.
• Intensity is the overall brightness of the scene (like PC-1) and
varies from 0 (black) to 1 (white).
• Saturation represents the purity of color and also varies linearly
from 0 to 1.
• Hue is representative of the color or dominant wavelength of
the pixel. It varies from 0 at the red midpoint through green and
blue back to the red midpoint at 360. It is a circular dimension.
In the following image, 0 to 255 is the selected range; it could
be defined as any data range. However, hue must vary from 0
to 360 to define the entire sphere (Buchanan 1979).
The variance of intensity and hue in RGB to IHS
The algorithm used in the Image Analysis for ArcGIS RGB to IHS
transform (Conrac 1980) is:
R = (M - r) / (M - m)
G = (M - g) / (M - m)
B = (M - b) / (M - m)
Where:
R, G, B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, r, g, or b
m = least value, r, g, or b
At least one of the R, G, or B values is 0, corresponding to the color
with the largest value, and at least one of the R, G, or B values is 1,
corresponding to the color with the least value.
The equation for calculating intensity in the range of 0 to 1.0 is:
I = (M + m) / 2
The equations for calculating saturation in the range of 0 to 1.0 are:
If M = m, S = 0
If I ≤ 0.5, S = (M - m) / (M + m)
If I > 0.5, S = (M - m) / (2 - M - m)
The equations for calculating hue in the range of 0 to 360 are:
If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)
Where:
R, G, B are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B
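As a minimal sketch of the intensity and saturation equations above (the hue case analysis can be added in the same style), here is a Python illustration for a single pixel whose r, g, b values are already scaled to 0.0-1.0; it is not the product's implementation:

```python
def intensity_saturation(r, g, b):
    """Intensity and saturation for one pixel, per the equations above."""
    M, m = max(r, g, b), min(r, g, b)
    i = (M + m) / 2.0
    if M == m:                       # a gray pixel has no saturation
        return i, 0.0
    if i <= 0.5:
        s = (M - m) / (M + m)
    else:
        s = (M - m) / (2.0 - M - m)
    return i, s

print(intensity_saturation(0.2, 0.4, 0.8))   # (0.5, 0.6) for this hypothetical pixel
```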
RGB to IHS
1. Click the Image Analysis dropdown arrow, point to
Spectral Enhancement, and click RGB to IHS.
2. Click the Input Image dropdown arrow, and click the
image you want to use, or navigate to the directory where
it is stored.
3. Navigate to the directory where the Output Image should
be stored.
4. Click OK.
RGB to IHS
Using RGB to IHS applies an algorithm that transforms red,
green, and blue (RGB) values to the intensity, hue, and
saturation (IHS) values.
IHS to RGB
IHS to RGB is intended as a complement to the standard RGB to
IHS transform. The values for hue (H), a circular dimension, are 0
to 360. However, depending on the dynamic range of the DN
values of the input image, it is possible that intensity (I) or
saturation (S) or both occupy only a part of the 0 to 1 range. In this
model, a min-max stretch is applied to either I, S, or both, so that
they more fully utilize the 0 to 1 value range. After stretching, the
full IHS image is retransformed back to the original RGB space. As
the parameter Hue is not modified and largely defines what we
perceive as color, the resultant image looks very much like the
input image.
It is not essential that the input parameters (IHS) to this transform
be derived from an RGB to IHS transform. You could define I and/
or S as other parameters, set Hue at 0 to 360, and then transform to
RGB space. This is a method of color coding other data sets.
In another approach (Daily 1983), H and I are replaced by low- and
high-frequency radar imagery. You can also replace I with radar
intensity before the IHS to RGB transform (Holcomb 1993).
Chavez evaluates the use of the IHS to RGB transform to resolution
merge Landsat TM with SPOT panchromatic imagery (Chavez
1991).
The algorithm used by Image Analysis for ArcGIS for the IHS to
RGB function is (Conrac 1980):
Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0
If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - (I × S)
m = 2 × I - M
The equations for calculating R in the range of 0 to 1.0 are:
If H < 60, R = m + (M - m) (H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m) ((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m
The equations for calculating G in the range of 0 to 1.0 are:
If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m) ((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m) ((360 - H) / 60)
The equations for calculating B in the range of 0 to 1.0 are:
If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m) ((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m) ((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
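Read literally, the piecewise equations above translate into a short function. The Python sketch below handles one pixel and is an illustration only:

```python
def ihs_to_rgb(i, h, s):
    """One pixel: H in 0 to 360, I and S in 0.0 to 1.0, per the equations above."""
    M = i * (1.0 + s) if i <= 0.5 else i + s - (i * s)
    m = 2.0 * i - M

    # R
    if h < 60:
        r = m + (M - m) * (h / 60.0)
    elif h < 180:
        r = M
    elif h < 240:
        r = m + (M - m) * ((240.0 - h) / 60.0)
    else:
        r = m

    # G
    if h < 120:
        g = m
    elif h < 180:
        g = m + (M - m) * ((h - 120.0) / 60.0)
    elif h < 300:
        g = M
    else:
        g = m + (M - m) * ((360.0 - h) / 60.0)

    # B
    if h < 60:
        b = M
    elif h < 120:
        b = m + (M - m) * ((120.0 - h) / 60.0)
    elif h < 240:
        b = m
    elif h < 300:
        b = m + (M - m) * ((h - 240.0) / 60.0)
    else:
        b = M

    return r, g, b

print(ihs_to_rgb(0.5, 340.0, 0.6))   # approximately (0.2, 0.4, 0.8) for this hypothetical pixel
```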
Converting IHS to RGB
1. Click the Image Analysis dropdown arrow, point to
Spectral Enhancement, and click IHS to RGB.
2. Click the Input Image dropdown arrow, and click the
image you want to use, or navigate to the directory where
it is stored.
3. Navigate to the directory where the Output Image should
be stored.
4. Click OK.
IHS to RGB
Using IHS to RGB applies an algorithm that transforms
intensity, hue, and saturation (IHS) values to red, green, and
blue (RGB) values.
Vegetative Indices
Mapping vegetation is a common application of remotely sensed
imagery. To help you find vegetation quickly and easily, Image
Analysis for ArcGIS includes a Vegetative Indices feature.
Indices are used to create output images by mathematically
combining the DN values of different bands. These may be
simplistic:
(Band X - Band Y)
or more complex:
(Band X - Band Y) / (Band X + Band Y)
In many instances, these indices are ratios of band DN values:
Band X / Band Y
These ratio images are derived from the absorption/reflection
spectra of the material of interest. The absorption is based on the
molecular bonds in the (surface) material. Thus, the ratio often
gives information on the chemical composition of the target.
Applications
• Indices are used extensively in mineral exploration and
vegetation analysis to bring out small differences between
various rock types and vegetation classes. In many cases,
judiciously chosen indices can highlight and enhance
differences that cannot be observed in the display of the
original color bands.
• Indices can also be used to minimize shadow effects in satellite
and aircraft multispectral images. Black and white images of
individual indices, or a color combination of three ratios, may
be generated.
• Certain combinations of TM ratios are routinely used by
geologists for interpretation of Landsat imagery for mineral
type. For example: Red 5/7, Green 5/4, Blue 3/1.
Index examples
The following are examples of indices that have been
preprogrammed in Image Analysis for ArcGIS:
• IR/R (infrared/red)
• SQRT (IR/R)
• Vegetation Index = IR-R
• Normalized Difference Vegetation Index (NDVI) = (IR − R) / (IR + R)
• Transformed NDVI (TNDVI) = √((IR − R) / (IR + R) + 0.5)
Source: Modified from Sabins 1987; Jensen 1996; Tucker 1979
The following table shows the infrared (IR) and red (R) band for
some common sensors (Tucker 1979, Jensen 1996):

Sensor         IR Band   R Band
Landsat MSS    4         2
SPOT XS        3         2
Landsat TM     4         3
NOAA AVHRR     2         1

Image algebra
Image algebra is a general term used to describe operations that
combine the pixels of two or more raster layers in mathematical
combinations. For example, the calculation:
(infrared band) − (red band)
DNir − DNred
yields a simple, yet very useful, measure of the presence of
vegetation.
Band ratios are also commonly used. These are derived from the
absorption spectra of the material of interest. The numerator is a
baseline of background absorption and the denominator is an
absorption peak.
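As a quick illustration of the image algebra above, the following NumPy sketch computes NDVI and TNDVI from infrared and red band arrays (for Landsat TM these would be bands 4 and 3, per the table above). The array names and the small epsilon used to avoid division by zero are assumptions made for the example, not part of the product.

    import numpy as np

    def vegetation_indices(ir, red, eps=1e-6):
        """Compute NDVI and TNDVI from infrared and red band arrays."""
        ir = ir.astype(float)
        red = red.astype(float)
        ndvi = (ir - red) / (ir + red + eps)            # Normalized Difference Vegetation Index
        tndvi = np.sqrt(np.clip(ndvi + 0.5, 0, None))   # Transformed NDVI; clip guards negatives
        return ndvi, tndvi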
Using Vegetative Indices
1. Click the Image Analysis dropdown arrow, point to
Spectral Enhancement, and click Vegetative Indices.
2. Navigate to the directory where the image is stored.
3. Click the dropdown list to add the Near Infrared Band
number.
4. Click the dropdown list to add the Visible Red Band
number.
5. Choose the Desired Index from the dropdown list.
6. Navigate to the directory where the Output Image should
be stored.
7. Click OK.
Color IR to Natural Color
This function lets you simulate natural colors from other types of
data so that the output image is a fair approximation of the natural
colors from an infrared image. If you are not familiar with the bands
designated to reflect infrared and natural color for a particular type
of imagery, Image Analysis for ArcGIS can help you apply either
scheme through the Color IR to Natural Color choice in Spectral
Enhancement. You cannot apply this feature to images having only
one band of data (i.e. grayscale images).
When an image is displayed in natural color, the bands are arranged
to approximate the most natural representation of the image in the
real world. Vegetation becomes green in color, and water becomes
dark in color. To create natural color, certain bands of data need to
be assigned to red, green, and blue. You will need to assign bands
to color depending on how many bands are in the image you want
to change to natural color.
The infrared image of a golf course.
After using Color IR to Natural Color, the image appears in natural colors.
Using Color IR to Natural Color
1. Click the Image Analysis dropdown arrow, point to
Spectral Enhancement, and click Color IR to
Natural Color.
2. Click the dropdown arrow or navigate to the
directory to select the Input Image.
3. Click the Near Infrared Band dropdown arrow, and
select the appropriate band.
4. Click the Visible Red Band dropdown arrow, and
select the appropriate band.
5. Click the Visible Green Band dropdown arrow, and
select the appropriate band.
6. Navigate to the directory where the Output Image
should be stored.
7. Click OK.
8 Performing GIS Analysis
A GIS is a unique system designed to input, store, retrieve, manipulate, and
analyze layers of geographic data to produce interpretable information. A GIS
should also be able to create reports and maps (Marble 1990). The GIS database
may include computer images, hardcopy maps, statistical data, or any other data
that is needed in a study. Although the term GIS is commonly used to describe
software packages, a true GIS includes knowledgeable staff, a training program,
budgets, marketing, hardware, data, and software (Walker and Miller 1990). GIS
technology can be used in almost any geography-related discipline, from
Landscape Architecture to natural resource management to transportation routing.
The central purpose of a GIS is to turn geographic data into useful information—
the answers to real-life questions—questions such as:
• How should political districts be redrawn in a growing metropolitan area?
• How can we monitor the influence of global climatic changes on the earth’s
resources?
• What areas should be protected to ensure the survival of endangered species?
This chapter is about using the different analysis functions in Image Analysis for
ArcGIS to better use the images, data, maps, and so on located in a GIS. You can
use GIS technology in any geography related discipline. The tools contained in
GIS Analysis will help you turn geographic data into useful information.
IN THIS CHAPTER
• Performing Neighborhood
Analysis
• Performing Thematic Change
• Using Recode
• Using Summarize Areas
Information versus data
Information, as opposed to data, is independently meaningful. It is
relevant to a particular problem or question:
• “The land cover at coordinate N875250, E757261 has a data
file value 8,” is data.
• “Land cover with a value of 8 is on slopes too steep for
development,” is information.
You can input data into a GIS and output information. The
information you wish to derive determines the type of data that
must be input. For example, if you are looking for a suitable refuge
for bald eagles, zip code data is probably not needed, while land
cover data may be useful.
For this reason, the first step in any GIS project is usually an
assessment of the scope and goals of the study. Once the project is
defined, you can begin the process of building the database.
Although software and data are commercially available, a custom
database must be created for the particular project and study area.
The database must be designed to meet the needs and objectives of
the organization.
A major step in successful GIS implementation is analysis. In the
analysis phase, data layers are combined and manipulated in order
to create new layers and to extract meaningful information from
them.
Once the database (layers and attribute data) is assembled, the
layers can be analyzed and new information extracted. Some
information can be extracted simply by looking at the layers and
visually comparing them to other layers. However, new
information can be retrieved by combining and comparing layers
using the following procedures.
Neighborhood Analysis
Neighborhood Analysis applies to any image processing technique
that takes surrounding pixels into consideration, such as
convolution filtering and scanning. This is similar to the
convolution filtering performed on continuous data. Several types
of analyses can be performed, such as boundary, density, mean,
sum, and so on.
With a process similar to the convolution filtering of continuous
raster layers, thematic raster layers can also be filtered. The GIS
filtering process is sometimes referred to as scanning, but is not to
be confused with data capture via a digital camera. Neighborhood
analysis is based on local or neighborhood characteristics of the
data (Star and Estes 1990).
Every pixel is analyzed spatially, according to the pixels that
surround it. The number and the location of the surrounding pixels
are determined by a scanning window, which is defined by you.
These operations are known as focal operations.
Neighborhood analysis creates a new thematic layer. There are
several types of analysis that can be performed upon each window
of pixels, as described below:
• Density—outputs the number of pixels that have the same class
value as the center (analyzed) pixel. This is also a measure of
homogeneity (sameness), based upon the analyzed pixel. This
is often useful in assessing vegetation crown closure.
• Diversity—outputs the number of class values that are present
within the window. Diversity is also a measure of
heterogeneity (difference).
• Majority—outputs the class value that represents the majority
of the class values in the window. This option operates like a
low-frequency filter to clean up a salt and pepper layer.
• Maximum—outputs the greatest class value within the
window. This can be used to emphasize classes with the higher
class values or to eliminate linear features or boundaries.
• Minimum—outputs the least or smallest class value within the
window. This can be used to emphasize classes with the low
class values.
• Minority—outputs the least common of the class values that
are within the window. This option can be used to identify the
least common classes. It can also be used to highlight
disconnected linear features.
• Rank—outputs the number of pixels in the scan window whose
value is less than the center pixel.
• Sum—totals the class values. In a file where class values are
ranked, totaling enables you to further rank pixels based on
their proximity to high-ranking pixels.
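To make the focal idea concrete, here is a small sketch of one of the operations listed above, Density, applied with a square scanning window in plain NumPy. It is only an illustration of the concept, not the implementation used by Image Analysis for ArcGIS; the 3 x 3 window and the sample array are arbitrary.

    import numpy as np

    def focal_density(classes, size=3):
        """For each pixel, count the window pixels sharing the center pixel's class value."""
        pad = size // 2
        padded = np.pad(classes, pad, mode="edge")      # extend edges so every pixel has a full window
        out = np.zeros_like(classes)
        rows, cols = classes.shape
        for r in range(rows):
            for c in range(cols):
                window = padded[r:r + size, c:c + size]
                out[r, c] = np.count_nonzero(window == classes[r, c])
        return out

    thematic = np.array([[1, 1, 2],
                         [1, 2, 2],
                         [3, 2, 2]])
    print(focal_density(thematic))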
Performing Neighborhood Analysis
1. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Neighborhood.
2. Click the Input Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
3. Click the Neighborhood Function dropdown arrow, and
choose the function you want to use.
4. Click the Neighborhood Shape dropdown arrow, and
choose the shape you want to use.
5. Click the Matrix size dropdown arrow, and choose the
size you want to use.
6. Navigate to the directory where the Output Image should
be stored.
7. Click OK.
Neighborhood Analysis
Neighborhood Analysis applies to any analysis function that
takes neighboring pixels into account. This function creates a
new thematic layer.
The Neighborhood Analysis process is similar to convolution
filtering. Every pixel is spatially analyzed according to the
pixels surrounding it.
The different types of analysis that can be performed on each
window of pixels are listed in the dropdown menu for
Neighborhood Function.
Thematic Change
Thematic Change identifies areas that undergo change over time. Typically, you use Thematic Change after you perform categorizations
of your data. By using the categorizations of Before Theme and After Theme in the dialog, you can quantify both the amount and the type
of changes that take place over time. Image Analysis for ArcGIS produces a thematic image that has all the possible combinations of
change.
Thematic Change creates an output image from two input raster files. The class values of the two input files are organized into a matrix.
The first input file specifies the columns of the matrix, and the second one specifies the rows. Zero is not treated specially in any way. The
number of classes in the output file is the product of the number of classes from the two input files.
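The matrix idea can be sketched in a few lines. In the sketch below, each (before, after) pair of class values maps to one output value, which is why the output has (before classes × after classes) categories; the particular numbering used here is an assumption for illustration and may differ from the values Image Analysis for ArcGIS writes.

    import numpy as np

    def thematic_change(before, after, n_before_classes):
        """Combine two thematic rasters into one raster of change classes.

        With B before classes and A after classes, the output has B * A
        possible classes, one per (before, after) combination.
        """
        return (after - 1) * n_before_classes + before   # assumes class values start at 1

    before = np.array([[1, 1], [2, 3]])
    after = np.array([[1, 2], [2, 3]])
    print(thematic_change(before, after, n_before_classes=3))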
Both before and after images prior to performing Thematic Change.
Performing Thematic Change
1. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Thematic Change.
2. Click the Before Theme dropdown arrow, and click the
file you want to use, or navigate to the directory where it
is stored.
3. Click the After Theme dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
4. Navigate to the directory where the Output Image should
be stored.
5. Click OK.
Thematic Change
Use Thematic Change to identify areas that have undergone
change over time.
The following illustration is an example of the previous image after undergoing Thematic Change. In the Table of contents you see the
combination of classes from the Before and After images.
Note the areas of classification that show the changes between 1973 and 1994.
Recode
By using Recode, class values can be recoded to new values.
Recoding involves the assignment of new values to one or more
classes of an existing file. Recoding is used to:
• reduce the number of classes
• combine classes
• assign different class values to existing classes
• write class name and color changes to the Attribute table
When an ordinal, ratio, or interval class numbering system is used,
recoding can be used to assign classes to appropriate values.
Recoding is often performed to make later steps easier. For
example, in creating a model that outputs good, better, and best
areas, it may be beneficial to recode the input layers so all of the
best classes have the highest class values.
You can also use Recode to save any changes made to the color
scheme or class names of a classified image to the Attribute Table
for later use. Just saving an image will not record these changes.
Recoding an image involves two major steps. First, you must group
the discrete classes together into common groups. Second, you
perform the actual recoding process, which rewrites the Attribute
table using the information from your grouping process.
The three recoding methods described below are more accurately
described as three methods of grouping the classified image to get
it ready for the recode process. These methods are recoding by class
name, recoding by symbology, and recoding a previously grouped
image. The following exercises will take you through each of the
three recoding methods.
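Underneath all three grouping methods, the recode step itself is just a lookup from old class values to new ones. A minimal sketch of that idea, with a hypothetical grouping table, is shown below.

    import numpy as np

    # Hypothetical grouping: classes 1-3 become class 1, classes 4-5 become class 2.
    lookup = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2}

    def recode(classes, lookup):
        """Apply a recode table to a thematic array of class values."""
        out = classes.copy()
        for old, new in lookup.items():
            out[classes == old] = new    # masks use the original array, so order does not matter
        return out

    original = np.array([[1, 4], [3, 5]])
    print(recode(original, lookup))      # [[1 2]
                                         #  [1 2]]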
Thematic Image of South Carolina soil types before Recode by class name.
South Carolina soils after the recode. Notice the changed and grouped class
names in the Table of contents.
Performing Recode by class name
You will group the classified image in the ArcMap Table of
contents, and then perform the recode.
1. Click Add Data to open a classified image.
2. Identify the classes you want to group together in the
Table of contents.
3. Triple-click each class you wish to rename, and rename
it.
4. Click the color of each class, and change it to the color
scheme you want to use.
5. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Recode.
6. Navigate to the directory where the Output Image should
be stored.
7. Click OK.
Performing Recode by symbology
This process will show you how to recode by symbology. You
will see similarities with recoding by class name, but you
should be aware of some different procedures. You will notice
that steps 1-3 and 11-13 are the same as the previous
Recode exercise.
1. Click Add Data to open a classified image.
2. Identify the classes you want to group together.
3. Click the colors of the classes to change to your desired
color scheme.
4. Double-click the image name in the Table of contents.
5. Click the Symbology tab in the Layer Properties dialog.
6. Press the Ctrl key while clicking on the first set of classes
you want to group together.
7. Right click on the selected classes, and click Group
Values.
8. Click in the Label column and type the new name for the
class.
9. Follow steps 6-8 to group the rest of your classes.
10. Click Apply and OK.
11. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Recode.
12. Navigate to the directory where the Output Image should
be stored.
13. Click OK.
Recoding with previously grouped image
You may need to open an image that has been classified
and grouped in another program such as ERDAS
IMAGINE®. These images may have more than one valid
attribute column that can be used to perform the recode.
1. Click Add Data and add the grouped image.
2. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Recode.
3. Click the Map Pixel Value through Field dropdown
arrow, and select the attribute you want to use to
recode the image.
4. Navigate to the directory where the Output Image
should be stored.
5. Click OK.
The following images depict soil data that was previously grouped
in ERDAS IMAGINE.
Previously grouped before Recode
After Recode in Image Analysis for ArcGIS
Summarize Areas
Image Analysis for ArcGIS also provides Summarize Areas as a
method of assessing change in thematic data. Once you complete
the Thematic Change analysis, you can use Summarize Areas to
limit the analysis to include only a portion of the entire image.
Summarize Areas works by using a feature theme or an Image
Analysis for ArcGIS theme to compile information about that area
in tabular format. Summarize Areas produces cross-tabulation
statistics that compare class value areas between two thematic files,
including number of points in common, number of acres (or
hectares or square miles) in common, and percentages.
Summarize Areas might be used to assist a regional planning office
in preparing a study of urban change for certain counties within the
jurisdiction or even within one county or city. A file containing the
area to be inventoried can be summarized by a file for the same
geographical area containing the land cover categories. The
summary report could indicate the amount of urban change in a
particular area of a larger thematic change.
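The cross-tabulation that Summarize Areas reports can be sketched with a few NumPy calls: count the pixels for each (zone, class) pair and convert the counts to area with the cell size. The arrays, the 30-meter cell size, and the reporting format below are assumptions for illustration.

    import numpy as np

    def summarize_areas(zones, classes, cell_area_m2):
        """Cross-tabulate zone and class rasters: pixel counts, area, and percentages."""
        table = {}
        total = zones.size
        for z in np.unique(zones).tolist():
            for c in np.unique(classes).tolist():
                count = int(np.count_nonzero((zones == z) & (classes == c)))
                if count:
                    table[(z, c)] = {
                        "pixels": count,
                        "hectares": count * cell_area_m2 / 10_000.0,
                        "percent": 100.0 * count / total,
                    }
        return table

    zones = np.array([[1, 1], [2, 2]])
    landcover = np.array([[3, 4], [3, 3]])
    print(summarize_areas(zones, landcover, cell_area_m2=900.0))   # e.g. 30 m cells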
Using Summarize Areas
1. Click the Image Analysis dropdown arrow, point to GIS
Analysis, and click Summarize Areas.
2. Click the Zone theme dropdown arrow, and click on the
theme you want to use, or navigate to the directory where
it is stored.
3. Click on the dropdown arrow for the Zone Attribute, and
click on the condition for each value of the attribute.
4. Click on the dropdown arrow for the Class Theme, and
click on the class theme, or navigate to the directory
where it is stored.
5. Click OK.
Summarize Areas
Use Summarize Areas to produce cross-tabulation statistics for
comparison of class value areas between two thematic files, or
one thematic and one shapefile, including number of points in
common, number of acres (or hectares or square miles) in
common, and percentages.
9 Using Utilities
The core of Image Analysis for ArcGIS is the ability it gives you to interpret and
manipulate your data. The Utilities part of Image Analysis for ArcGIS provides a
number of features for you to use in this capacity. The different procedures offered
in the Utilities menu allow you to alter your images in order to see differences, set
new parameters, create images, or subset images. The information about Subset
Image, Create New Image, and Reproject Image can be found in chapter 4 “Using
Data Preparation” since the options are also accessible through that menu.
This chapter explains the following functions and shows you how to use them:
• Image Difference
• Layer Stack
IN THIS CHAPTER
• Image Difference
• Layer Stack
Image Difference
The Image Difference function gives you the ability to
conveniently perform change detection on aspects of an area by
comparing two images of the same place from different times.
The Image Difference tool is particularly useful in plotting
environmental changes such as urban sprawl and deforestation or
the destruction caused by a wildfire or tree disease. It is also a
handy tool to use in determining crop rotation or the best new place
to develop a neighborhood.
Image Difference is used for change analysis with imagery that
depicts the same area at different points in time. With Image
Difference, you can highlight specific areas of change in whatever
amount you choose. Two images are generated from this image-to-
image comparison; one is a grayscale continuous image, and the
other is a five-class thematic image.
The first image generated from Image Difference is the Difference
image. The Difference image is a grayscale image composed of
single band continuous data. This image is created by subtracting
the Before Image from the After Image. Since Image Difference
calculates change in brightness values over time, the Difference
image simply reflects that change using a grayscale image. Brighter
areas have increased in reflectance. This may mean clearing of
forested areas. Dark areas have decreased in reflectance. This may
mean an area has become more vegetated, or the area was dry and
is now wet.
The second image is the Highlight Difference image. This thematic
image divides the changes into five categories. The five categories
are Decreased, Some Decrease, Unchanged, Some Increase, and
Increased.
The Decreased class represents areas of negative (darker) change
greater than the threshold for change and is red in color. The
Increased class shows areas of positive (brighter) change greater
than the threshold and is green in color. Other areas of positive and
negative change less than the thresholds and areas of no change are
transparent. For your application, you may edit the colors to select
any color desired for your study.
Algorithm
Subtract the two images on a pixel-by-pixel basis, as sketched below:
1. Subtract the Before Image from the After Image.
2. Convert the decrease percentage to a value.
3. Convert the increase percentage to a value.
4. If the difference is less than the decrease value, then assign
the pixel to Class 1 (Decreased).
5. If the difference is greater than the increase value, then assign
the pixel to Class 5 (Increased).
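A bare-bones sketch of these steps is shown below. How the increase and decrease percentages are converted to brightness values is not specified here, so the sketch simply scales them by the mean brightness of the Before image; that choice, the array names, and the omission of the two intermediate classes (Some Decrease, Some Increase) are simplifying assumptions.

    import numpy as np

    def image_difference(before, after, decrease_pct=10.0, increase_pct=10.0):
        """Return the grayscale difference image and a simplified highlight image."""
        before = before.astype(float)
        after = after.astype(float)
        diff = after - before                         # the Difference image (After - Before)

        # Illustrative percent-to-value conversion using the Before image's mean brightness.
        dec_val = -abs(decrease_pct) / 100.0 * before.mean()
        inc_val = abs(increase_pct) / 100.0 * before.mean()

        highlight = np.full(diff.shape, 3, dtype=int)  # 3 = Unchanged / below threshold
        highlight[diff < dec_val] = 1                  # 1 = Decreased (darker change)
        highlight[diff > inc_val] = 5                  # 5 = Increased (brighter change)
        # Classes 2 and 4 (Some Decrease / Some Increase) are omitted in this sketch.
        return diff, highlight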
Using Image Difference
1. Click the Image Analysis dropdown arrow, point to
Utilities, and click Image Difference.
2. Click the Before Theme dropdown arrow, and click the
file you want to use, or navigate to the directory where it
is stored.
3. Click the After Theme dropdown arrow and click the file
you want to use, or navigate to the directory where it is
stored.
4. Choose As Percent or As Value for the Highlight
Changes.
5. Enter the Increases and Decreases values.
6. Click the color bar to choose the color you want to
represent the increases and decreases.
7. Type the Image Difference file name, or navigate to the
directory where it should be stored.
8. Type the Highlight Change file name, or navigate to the
directory where it should be stored.
9. Click OK.
The Image Difference Output file showing highlight change.
Layer Stack
Layer Stack lets you stack layers from different images in any order
to form a single theme. It is useful for combining different types of
imagery for analysis such as multispectral and radar data. For
example, if you stack three single-band grayscale images, you
finish with one three-band image. In general, you will find that
stacking images is most useful for combining grayscale single-band
images into multiband images.
Stacking works based on the order in the Table of contents. Before
you initiate stacking, you should first ensure that the images are in
the order that you want. This order represents the order in which the
bands will be arranged in the output file.
There are several applications of this feature such as change
visualization, combining and viewing multiple resolution data, and
viewing disparate data types. Layer Stack is particularly useful if
you have received a multispectral dataset with each of the
individual bands in separate files. You can also use Layer Stack to
analyze datasets taken during different seasons when different sets
show different stages for vegetation in an area.
An example of a multispectral dataset with individual bands in
separate files would be Landsat TM data. Layer stack quickly
consolidates the bands of data into one file.
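Conceptually, stacking is just arranging the band arrays in the order you want, as in this short NumPy sketch (the random arrays stand in for three co-registered single-band images):

    import numpy as np

    # Three single-band grayscale images of the same area and dimensions (hypothetical).
    band_red = np.random.randint(0, 255, (100, 100))
    band_green = np.random.randint(0, 255, (100, 100))
    band_blue = np.random.randint(0, 255, (100, 100))

    # The order of the list becomes the band order of the output file.
    stacked = np.stack([band_red, band_green, band_blue], axis=0)
    print(stacked.shape)   # (3, 100, 100): one three-band image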
The image on this page is an example of a Layer Stack output. The
files used are from the Amazon, and the red and blue bands were
chosen from one image, while the green band was chosen from the
other.
A stacked image with bands 1 and 3 taken from the Amazon LBAND image and
the rest of the layers taken from the Amazon TM image.
Using Layer Stack
1. Click the Image Analysis dropdown arrow, point to
Utilities, and click Layer Stack.
2. Select a currently open layer, and click Add to include it in
the layer stack.
3. Click the browse button to navigate to a file containing
layers you want to add to the layer stack.
4. Select any files you want to remove from the layer stack
and click Remove.
5. Navigate to the directory where the Output Image should
be stored.
6. Click OK.
10 Understanding Classification
Multispectral classification is the process of sorting pixels into a finite number of
individual classes, or categories of data, based on their data file values. If a pixel
satisfies a certain set of criteria, the pixel is assigned to the class that corresponds
to those criteria.
Depending on the type of information you want to extract from the original data,
classes may be associated with known features on the ground or may simply
represent areas that look different to the computer. An example of a classified
image is a land cover map that shows vegetation, bare land, pasture, urban, and so
on.
This chapter covers the two ways to classify pixels into different categories:
• Unsupervised Classification
• Supervised Classification
The difference between the two is basically what their titles suggest: Supervised
Classification is more closely controlled by you than Unsupervised Classification.
IN THIS CHAPTER
• The Classification Process
• Classification Tips
• Unsupervised Classification
• Supervised Classification
• Classification Decision Rules
The Classification Process
Pattern recognition
Pattern recognition is the science—and art—of finding meaningful
patterns in data, which can be extracted through classification. By
spatially and spectrally enhancing an image, pattern recognition
can be performed with the human eye; the human brain
automatically sorts certain textures and colors into categories.
In a computer system, spectral pattern recognition can be more
scientific. Statistics are derived from the spectral characteristics of
all pixels in an image. However, in Supervised Classification, the
statistics are derived from the training samples, and not the entire
image. After the statistics are derived, pixels are sorted based on
mathematical criteria. The classification process breaks down into
two parts: training and classifying (using a decision rule).
Training
First, the computer system must be trained to recognize patterns in
the data. Training is the process of defining the criteria by which
these patterns are recognized (Hord 1982). Training can be
performed with either a supervised or an unsupervised method, as
explained below.
Supervised training
Supervised training is closely controlled by the analyst. In this
process, you select pixels that represent patterns or land cover
features that you recognize, or that you can identify with help from
other sources, such as aerial photos, ground truth data, or maps.
Knowledge of the data, and of the classes desired, is required before
classification.
By identifying patterns, you can instruct the computer system to
identify pixels with similar characteristics. If the classification is
accurate, the resulting classes represent the categories within the
data that you originally identified.
Unsupervised training
Unsupervised training is more computer-automated. It enables you
to specify some parameters that the computer uses to uncover
statistical patterns that are inherent in the data. These patterns do
not necessarily correspond to directly meaningful characteristics of
the scene, such as contiguous, easily recognized areas of a
particular soil type or land use. They are simply clusters of pixels
with similar spectral characteristics. In some cases, it may be more
important to identify groups of pixels with similar spectral
characteristics than it is to sort pixels into recognizable categories.
Unsupervised training is dependent upon the data itself for the
definition of classes. This method is usually used when less is
known about the data before classification. It is then the analyst’s
responsibility, after classification, to attach meaning to the
resulting classes (Jensen 1996). Unsupervised classification is
useful only if the classes can be appropriately interpreted.
Signatures
The result of training is a set of signatures that defines a training
sample or cluster. Each signature corresponds to a class, and is used
with a decision rule (explained below) to assign the pixels in the
image file to a class. Signatures contain both parametric class
definitions (mean and covariance) and non-parametric class
definitions (parallelepiped boundaries that are the per band minima
and maxima).
A parametric signature is based on statistical parameters (e.g., mean
and covariance matrix) of the pixels that are in the training sample
or cluster. Supervised and unsupervised training can generate
parametric signatures. A set of parametric signatures can be used to
train a statistically-based classifier (e.g., maximum likelihood) to
define the classes.
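A parametric signature of the kind described above can be written down in a few lines: the mean vector and covariance matrix of the training pixels, plus the per-band minima and maxima used by the parallelepiped rule. The sample values below are hypothetical.

    import numpy as np

    # Training sample: one row per pixel, one column per band (hypothetical values).
    sample = np.array([[52.0, 61.0, 40.0],
                       [49.0, 63.0, 42.0],
                       [55.0, 60.0, 39.0],
                       [51.0, 64.0, 41.0]])

    signature = {
        "mean": sample.mean(axis=0),                 # parametric definition
        "covariance": np.cov(sample, rowvar=False),  # parametric definition
        "min": sample.min(axis=0),                   # non-parametric (parallelepiped) limits
        "max": sample.max(axis=0),
    }
    print(signature["mean"], signature["covariance"].shape)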
Decision rule
After the signatures are defined, the pixels of the image are sorted
into classes based on the signatures by use of a classification
decision rule. The decision rule is a mathematical algorithm that,
using data contained in the signature, performs the actual sorting of
pixels into distinct class values.
Parametric decision rule
A parametric decision rule is trained by the parametric signatures.
These signatures are defined by the mean vector and covariance
matrix for the data file values of the pixels in the signatures. When
a parametric decision rule is used, every pixel is assigned to a class
since the parametric decision space is continuous (Kloer 1994).
There are three parametric decision rules offered:
• Minimum distance
• Mahalanobis distance
• Maximum likelihood
Nonparametric decision rule
When a nonparametric rule is set, the pixel is tested against all of
the signatures with nonparametric definitions. This rule results in
the following conditions:
• If the nonparametric test results in one unique class, the pixel
is assigned to that class.
• If the nonparametric test results in zero classes (for example,
the pixel lies outside all the nonparametric decision
boundaries), then the pixel is assigned to a class called
unclassified.
Parallelepiped is the only nonparametric decision rule in Image
Analysis for ArcGIS.
Classification tips
Classification scheme
Usually, classification is performed with a set of target classes in
mind. Such a set is called a classification scheme (or classification
system). The purpose of such a scheme is to provide a framework
for organizing and categorizing the information that can be
extracted from the data (Jensen 1983). The proper classification
scheme includes classes that are both important to the study and
discernible from the data on hand. Most schemes have a
hierarchical structure, which can describe a study area in several
levels of detail.
A number of classification schemes have been developed by
specialists who have inventoried a geographic region. Some
references for professionally-developed schemes are listed below:
• Anderson, J. R., et al. 1976. “A Land Use and Land Cover
Classification System for Use with Remote Sensor Data.” U.S.
Geological Survey Professional Paper 964.
• Cowardin, Lewis M., et al. 1979. Classification of Wetlands
and Deepwater Habitats of the United States. Washington,
D.C.: U.S. Fish and Wildlife Service.
• Florida Topographic Bureau, Thematic Mapping Section.
1985. Florida Land Use, Cover and Forms Classification
System. Florida Department of Transportation, Procedure No.
550-010-001-a.
• Michigan Land Use Classification and Reference Committee.
1975. Michigan Land Cover/Use Classification System.
Lansing, Michigan: State of Michigan Office of Land Use.
Other states or government agencies may also have specialized land
use/cover studies.
It is recommended that you begin the classification process by
defining a classification scheme for the application, using
previously developed schemes, like those above, as a general
framework.
Supervised versus Unsupervised Classification
In supervised training, it is important to have a set of desired classes
in mind, and then create the appropriate signatures from the data.
You must also have some way of recognizing pixels that represent
the classes that you want to extract.
Supervised classification is usually appropriate when you want to
identify relatively few classes, when you have selected training
sites that can be verified with ground truth data, or when you can
identify distinct, homogeneous regions that represent each class. In
Image Analysis for ArcGIS, if you need to correctly classify small
areas with actual representation, you should choose Supervised
Classification.
On the other hand, if you want the classes to be determined by
spectral distinctions that are inherent in the data so that you can
define the classes later, then the application is better suited to
unsupervised training. Unsupervised training enables you to define
many classes easily, and identify classes that are not in contiguous,
easily recognized regions.
If you have areas that have a value of zero, and you do not classify
them as NoData (see chapter 3 “Applying data tools”), they will be
assigned to the first class when performing Unsupervised
Classification. You can assign a specific class by taking a training
sample when performing a Supervised Classification.
Classifying enhanced data
For many specialized applications, classifying data that have been
spectrally merged or enhanced—with principal
components, image algebra, or other transformations—can produce
very specific and meaningful results. However, without
understanding the data and the enhancements used, it is
recommended that only the original, remotely-sensed data be
classified.
Limiting dimensions
Although Image Analysis for ArcGIS allows an unlimited number
of layers of data to be used for one classification, it is usually wise
to reduce the dimensionality of the data as much as possible. Often,
certain layers of data are redundant or extraneous to the task at
hand. Unnecessary data take up valuable disk space and cause the
computer system to perform more arduous calculations, which
slows down processing.
Unsupervised Classification/Categorize Image
Unsupervised training requires only minimal initial input from you.
However, you have the task of interpreting the classes that are
created by the unsupervised training algorithm. Unsupervised
training is also called clustering, because it is based on the natural
groupings of pixels in image data when they are plotted in feature
space.
If you need to classify small areas with small representation, you
should use Supervised Classification. Due to the skip factor of 8
used by the Unsupervised Classification signature collection, small
areas such as wetlands, small urban areas, or grasses can be
wrongly classified on rural data sets.
Clusters
Clusters are defined with a clustering algorithm, which often uses
all or many of the pixels in the input data file for its analysis. The
clustering algorithm has no regard for the contiguity of the pixels
that define each cluster.
The Iterative Self-Organizing Data Analysis Technique
(ISODATA) (Tou and Gonzalez 1974) clustering method uses
spectral distance as in the sequential method, but iteratively
classifies the pixels, redefines the criteria for each class, and
classifies again, so that the spectral distance patterns in the data
gradually emerge.
ISODATA clustering
ISODATA is iterative in that it repeatedly performs an entire
classification (outputting a thematic raster layer) and recalculates
statistics. Self-Organizing refers to the way in which it locates
clusters with minimum user input.
The ISODATA method uses minimum spectral distance to assign a
cluster for each candidate pixel. The process begins with a specified
number of arbitrary cluster means or the means of existing
signatures, and then it processes repetitively, so that those means
shift to the means of the clusters in the data.
Because the ISODATA method is iterative, it is not biased to the
top of the data file, as are the one-pass clustering algorithms.
Initial cluster means
On the first iteration of the ISODATA algorithm, the means of N
clusters can be arbitrarily determined. After each iteration, a new
mean for each cluster is calculated, based on the actual spectral
locations of the pixels in the cluster, instead of the initial arbitrary
calculation. Then, these new means are used for defining clusters in
the next iteration. The process continues until there is little change
between iterations (Swain 1973).
The initial cluster means are distributed in feature space along a
vector that runs between the point at spectral coordinates
(µ1 − σ1, µ2 − σ2, µ3 − σ3, ... µn − σn) and the coordinates
(µ1 + σ1, µ2 + σ2, µ3 + σ3, ... µn + σn). Such a vector in two
dimensions is illustrated below. The initial cluster means are evenly
distributed between (µA − σA, µB − σB) and (µA + σA, µB + σB).
Pixel analysis
Pixels are analyzed beginning with the upper left corner of the
image and going left to right, block by block.
The spectral distance between the candidate pixel and each cluster
mean is calculated. The pixel is assigned to the cluster whose mean
is the closest. The ISODATA function creates an output image file
with a thematic raster layer as a result of the clustering. At the end
of each iteration, an image file exists that shows the assignments of
the pixels to the clusters.
Considering the regular, arbitrary assignment of the initial cluster
means, the first iteration of the ISODATA algorithm always gives
results similar to those in this illustration.
ISODATA Arbitrary Clusters: five arbitrary cluster means in two-dimensional
spectral space, spaced along the vector between (µA − σA, µB − σB) and
(µA + σA, µB + σB), plotted on Band A and Band B data file values.
For the second iteration, the means of all clusters are recalculated,
causing them to shift in feature space. The entire process is
repeated—each candidate pixel is compared to the new cluster
means and assigned to the closest cluster mean.
Pixel assignments to Clusters 1 through 5 in Band A and Band B feature space
after the first iteration.
Percentage unchanged
After each iteration, the normalized percentage of pixels whose
assignments are unchanged since the last iteration is displayed on
the dialog. When this number reaches T (the convergence
threshold), the program terminates.
It is possible for the percentage of unchanged pixels to never
converge or reach T (the convergence threshold). Since you are not
able to control the convergence threshold, it may be beneficial to
monitor the percentage, or specify a reasonable maximum number
of iterations, M, so that the program does not run indefinitely.
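The iterate-until-convergence behavior can be sketched as follows. This is a bare-bones ISODATA-style loop (assign each pixel to the nearest cluster mean, recompute the means, stop when the unchanged fraction reaches T or after M iterations); it ignores details such as the skip factor and is not the signature-collection logic Image Analysis for ArcGIS actually uses.

    import numpy as np

    def isodata_sketch(pixels, n_clusters, T=0.95, M=20):
        """pixels: (n, bands) array. Returns a cluster label for each pixel."""
        mu, sigma = pixels.mean(axis=0), pixels.std(axis=0)
        # Initial means spaced along the vector from (mu - sigma) to (mu + sigma).
        means = np.linspace(mu - sigma, mu + sigma, n_clusters)
        labels = np.zeros(len(pixels), dtype=int)

        for _ in range(M):
            # Assign each pixel to the cluster with the minimum spectral (Euclidean) distance.
            dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            new_labels = dists.argmin(axis=1)
            unchanged = np.mean(new_labels == labels)   # fraction of unchanged assignments
            labels = new_labels
            # Recalculate each cluster mean from the pixels now assigned to it.
            for k in range(n_clusters):
                if np.any(labels == k):
                    means[k] = pixels[labels == k].mean(axis=0)
            if unchanged >= T:                          # convergence threshold reached
                break
        return labels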
Cluster assignments after the means shift in a later iteration, shown in
Band A and Band B feature space.
Performing Unsupervised Classification/Categorize Image
1. Click the Image Analysis dropdown arrow, point to
Classification, and click Unsupervised/Categorize.
2. Click the Input Image dropdown arrow, or navigate to the
directory where it is stored.
3. Type or click the arrows to enter the Desired Number of
Classes.
4. Navigate to the directory where the Output Image should
be stored.
5. Click OK.
Supervised Classification
Supervised classification requires a priori (already known)
information about the data, such as:
• What type of classes need to be extracted? Soil type? Land
use? Vegetation?
• What classes are most likely to be present in the data? That is,
which types of land cover, soil, or vegetation (or whatever) are
represented by the data?
In supervised training, you rely on your own pattern recognition
skills and a priori knowledge of the data to help the system
determine the statistical criteria (signatures) for data classification.
To select reliable samples, you should know some information—
either spatial or spectral—about the pixels that you want to classify.
The location of a specific characteristic, such as a land cover type,
may be known through ground truthing. Ground truthing refers to
the acquisition of knowledge about the study area from field work,
analysis of aerial photography, personal experience, and so on.
Ground truth data are considered to be the most accurate (true) data
available about the area of study. It should be collected at the same
time as the remotely sensed data, so that the data correspond as
much as possible (Star and Estes 1990). However, some ground
data may not be very accurate due to a number of errors and
inaccuracies.
Performing Supervised Classification
1. Click the Image Analysis dropdown arrow, point to
Classification, and click Supervised.
2. Click the Input Image dropdown arrow, and click the file
you want to use, or navigate to the directory where it is
stored.
3. Click the Signature Features dropdown arrow, and click
the file you want to use, or navigate to the directory
where it is stored.
4. Click the Class Name Field dropdown arrow, and click
the field you want to use.
5. Choose All Features or Selected Features to use during
classification.
6. Click the Classification Rule dropdown arrow, and click
the rule you want to use.
7. Navigate to the directory where the Output Image should
be stored.
8. Click OK.
Classification decision rules
Once a set of reliable signatures has been created and evaluated, the
next step is to perform a classification of the data. Each pixel is
analyzed independently. The measurement vector for each pixel is
compared to each signature, according to a decision rule, or
algorithm. Pixels that pass the criteria that are established by the
decision rule are then assigned to the class for that signature. Image
Analysis for ArcGIS enables you to classify the data parametrically
with statistical representation.
Parametric rules
Image Analysis for ArcGIS provides these commonly-used
decision rules for parametric signatures:
• minimum distance
• Mahalanobis distance
• maximum likelihood (with Bayesian variation)
Nonparametric rule
• Parallelepiped
Minimum distance
The minimum distance decision rule (also called spectral distance)
calculates the spectral distance between the measurement vector for
the candidate pixel and the mean vector for each signature.
In this illustration, spectral distance is illustrated by the lines from
the candidate pixel to the means of the three signatures. The
candidate pixel is assigned to the class with the closest mean.

Spectral distances from a candidate pixel to the means of three signatures
(µ1, µ2, µ3), plotted on Band A and Band B data file values.

The equation for classifying by spectral distance is based on the
equation for Euclidean distance:

SDxyc = √( Σ i = 1..n (µci − Xxyi)² )
Where:
n = number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = data file value of pixel x,y in band i
µci = mean of data file values in band i for the sample for class c
SDxyc = spectral distance from pixel x,y to the mean of class c
Source: Swain and Davis 1978
When spectral distance is computed for all possible values of c (all
possible classes), the candidate pixel is assigned to the
class for which SD is the lowest.
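In code, the minimum distance rule amounts to computing SD for every class and taking the smallest, as in this sketch (hypothetical arrays; one class mean per row):

    import numpy as np

    def minimum_distance_classify(pixels, class_means):
        """pixels: (n, bands); class_means: (classes, bands). Returns a class index per pixel."""
        # Euclidean spectral distance from each pixel to each class mean.
        sd = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
        return sd.argmin(axis=1)    # class for which SD is the lowest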
Maximum likelihood
Note: The maximum likelihood algorithm assumes that the
histograms of the bands of data have normal distributions. If this is
not the case, you may have better results with the minimum
distance decision rule.
The maximum likelihood decision rule is based on the probability
that a pixel belongs to a particular class. The basic equation
assumes that these probabilities are equal for all classes, and that
the input bands have normal distributions.
The Equation for the Maximum Likelihood/Bayesian Classifier is
as follows:
D = ln(ac) − [0.5 × ln(|Covc|)] − [0.5 × (X − Mc)T (Covc-1)(X − Mc)]
Where:
D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the sample of class c
ac = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori data)
Covc = the covariance matrix of the pixels in the sample of class c
|Covc| = determinant of Covc (matrix algebra)
Covc-1 = inverse of Covc (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)
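A direct transcription of the weighted-distance equation is sketched below for a single pixel; it evaluates D for each class and, since D as written is a log-likelihood, picks the class with the greatest value. The signature values are hypothetical.

    import numpy as np

    def max_likelihood_classify(pixel, means, covs, priors=None):
        """pixel: (bands,); means: list of mean vectors; covs: list of covariance matrices."""
        if priors is None:
            priors = [1.0] * len(means)          # a_c defaults to 1.0
        scores = []
        for m, cov, a in zip(means, covs, priors):
            d = pixel - m
            score = (np.log(a)
                     - 0.5 * np.log(np.linalg.det(cov))
                     - 0.5 * d @ np.linalg.inv(cov) @ d)
            scores.append(score)
        return int(np.argmax(scores))            # class with the greatest likelihood D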
Mahalanobis distance
Note: The Mahalanobis distance algorithm assumes that the
histograms of the bands have normal distributions. If this is not the
case, you may have better results with the parallelepiped or
minimum distance decision rule, or by performing a first-pass
parallelepiped classification.
Mahalanobis distance is similar to minimum distance, except that
the covariance matrix is used in the equation. Variance and
covariance are figured in so that clusters that are highly varied lead
to similarly varied classes, and vice versa. For example, when
classifying urban areas—typically a class whose pixels vary
widely—correctly classified pixels may be farther from the mean
than those of a class for water, which is usually not a highly varied
class (Swain and Davis 1978).
The equation for the Mahalanobis distance classifier is as follows:

D = (X − Mc)T (Covc-1)(X − Mc)

Where:
D = Mahalanobis distance
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the signature of class c
Covc = the covariance matrix of the pixels in the signature of class c
Covc-1 = inverse of Covc
T = transposition function
The pixel is assigned to the class, c, for which D is the lowest.
Parallelepiped
Image Analysis for ArcGIS provides the parallelepiped decision
rule as its nonparametric decision rule. In the parallelepiped
decision rule, the data file values of the candidate pixel are
compared to upper and lower limits which are the minimum and
maximum data file values of each band in the signature.
There are high and low limits for every signature in every band.
When a pixel’s data file values are between the limits for every
band in a signature, then the pixel is assigned to that signature’s
class. If a pixel falls into more than one class, the
first class is the one assigned. When a pixel falls into no class
boundaries, it is labeled unclassified.
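A sketch of the parallelepiped test follows, using the per-band minima and maxima stored in each signature; the first signature whose limits contain the pixel wins, and a pixel matching none is left unclassified (coded 0 here as an arbitrary choice).

    import numpy as np

    def parallelepiped_classify(pixel, signatures):
        """signatures: list of (band_min, band_max) array pairs, one per class."""
        for class_id, (band_min, band_max) in enumerate(signatures, start=1):
            if np.all((pixel >= band_min) & (pixel <= band_max)):
                return class_id          # first class whose limits contain the pixel
        return 0                         # unclassified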
11 Using Conversion
The Conversion feature gives you the ability to convert shape files to raster images
and raster images to shape files. This tool is very helpful when you need to isolate
or highlight certain parts of a raster image or when you have a shape file and you
need to view it as a raster image. Possible applications include viewing
deforestation patterns, urban sprawl, and shore erosion.
The Image Info tool that is discussed in chapter 3 “Applying data tools” is also an
important part of Raster/Feature Conversion. The ability to assign certain pixel
values as NoData is very helpful when converting images.
IN THIS CHAPTER
• Conversion
• Convert Raster to Features
• Convert Features to Raster
Conversion
Always be aware of how the raster dataset will represent the
features when converting points, polygons, or polylines to a raster,
and vice versa. There is a trade-off when working with a cell-based
system: even though points have no area, cells do. A point is
represented by a single cell, and that cell does have area. The
smaller the cell size, the smaller that area, and thus the closer the
representation of the point feature. A point converted to a cell has
an accuracy of plus or minus half the cell size. For many users,
having all data types in the same format and being able to use them
interchangeably in the same language is more important than the
loss of accuracy.
Linear data is represented by a polyline, which is also composed of
cells, so it has area even though, by definition, lines do not. Because
of this, the accuracy of representation varies according to the scale
of the data and the resolution of the raster dataset.
With polygonal or areal data, problems can occur from trying to
represent smooth polygon boundaries with square cells. The
accuracy of the representation is dependent on the scale of the data
and the size of the cell. The finer the cell resolution and the greater
the number of cells that represent small areas, the more accurate the
representation.
Converting raster to features
During a conversion of a raster representing polygonal features to
polygonal features, the polygons are built from groups of
contiguous cells having the same cell values. Arcs are created from
cell borders in the raster. Contiguous cells with the same value are
grouped together to form polygons. Cells that are NoData in the
input raster will not become features in the output polygon feature.
When a raster that represents linear features is converted to a
polyline feature, a polyline is created from each cell in the input
raster, passing through the center of each cell. Cells that are NoData
in the input raster will not become features in the output polyline
feature.
When you convert a raster representing point features to point
features, a point will be created in the output for each cell of the
input raster. Each point will be positioned at the center of the cell it
represents. NoData cells will not be transformed into points.
When you choose Convert Raster to Features, the dialog will give
you the choice of a Field to specify from the image in the
conversion. You will also be given the choice of an Output
geometry type so you can choose if the feature will be a point, a
polygon, or a polyline according to the Field and data you’re using.
To avoid jagged or sharp edges in the new feature file,
you can check Generalize Lines to smooth out the edges. You
should note that regardless of what Field you pick, the category will
not be populated on the Attribute Table after conversion.
A raster image before conversion
After conversion to a shapefile using Value as the Field
Performing raster to feature conversion
1. Click the Image Analysis dropdown arrow, point to
Convert, and click Convert Raster to Features.
2. Click the Input raster dropdown arrow, or navigate to the
directory where the raster image is stored.
3. Click the Field dropdown arrow and choose a Field to use.
4. Click the Output geometry type dropdown arrow, and
choose point, polygon, or polyline.
5. Check or uncheck Generalize Lines according to your
preference.
6. Navigate to the directory where the Output feature should
be stored.
7. Click OK.
Converting features to raster
Any polygons, polylines, or points from any source file can be
converted to a raster. You can convert features using both string and
numeric fields. Each unique string in a string field is assigned a
unique value to the output raster. A field is added to the table of the
output raster to hold the original string value from the features.
When you convert points, cells are given the value of the points
found within each cell. Cells that do not contain a point are given
the value of NoData. You are given the option of specifying the cell
size you want to use in the Feature to Raster dialog. You should
choose the cell size based on several different factors: the resolution
of the input data, the output resolution needed to perform your
analysis, and the need to maintain a rapid processing speed.
Polylines are features that, at certain resolutions, only appear as
lines representing streams or roads. When you convert polylines,
cells are given the value of the line that intersects each cell. Cells
that are not intersected by a line are given the value NoData. If more
than one line is found in a cell, the cell is given the value of the first
line encountered while processing. Using a smaller cell size during
conversion will alleviate this.
Polygons are used for buildings, forests, fields, and many other
features that are best represented by a series of connected cells.
When you convert polygons, the cells are given the value of the
polygon found at the center of each cell.
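The point case described above can be sketched directly: each point lands in the cell that contains its coordinates, and cells without a point keep the NoData value. The coordinates, cell size, and NoData value below are illustrative.

    import numpy as np

    def points_to_raster(points, values, xmin, ymax, cell_size, rows, cols, nodata=-9999):
        """points: list of (x, y); values: value written to the cell containing each point."""
        raster = np.full((rows, cols), nodata, dtype=float)
        for (x, y), v in zip(points, values):
            col = int((x - xmin) // cell_size)
            row = int((ymax - y) // cell_size)
            if 0 <= row < rows and 0 <= col < cols:
                raster[row, col] = v          # cells without a point keep the NoData value
        return raster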
Performing Feature to Raster conversion
1. Click the Image Analysis dropdown arrow, point to
Convert, and click Convert Feature to Raster.
2. Click the Input features dropdown arrow, or navigate to
the directory where the file is stored.
3. Click the Field dropdown arrow, and select the Field
option you want to use.
4. Type the Output cell size.
5. Navigate to the directory where the Output Raster should
be stored.
6. Click OK.
12 Applying Geocorrection Tools
The tools and methods described in this chapter concern the process of
geometrically correcting the distortions in images caused by sensors and the
curvature of the earth. Even images of seemingly flat areas are distorted, but these
images can be corrected, or rectified, so they can be represented on a planar
surface, conform to other images, and have the integrity of a map.
The terms geocorrection and rectification are used synonymously when discussing
geometric correction. Rectification is the process of transforming the data from
one grid system into another grid system using a geometric transformation. Since
the pixels of a new grid may not align with the pixels of the original grid, the pixels
must be resampled. Resampling is the process of extrapolating data values for the
pixels on the new grid from the values of the source pixels.
Orthorectification is a form of rectification that corrects for terrain displacement
and can be used if there is a DEM of the study area. It is based on collinearity
equations, which can be derived by using 3D Ground Control Points (GCPs). In
relatively flat areas, orthorectification is not necessary, but in mountainous areas
(or on aerial photographs of buildings), where a high degree of accuracy is
required, orthorectification is recommended.
IN THIS CHAPTER
• Geocorrection Properties
• Spot Properties
• Polynomial Properties
• Rubber Sheeting
• Camera Properties
• IKONOS Properties
• Landsat Properties
• QuickBird Properties
• RPC Properties
When to rectify
Rectification is necessary in cases where the pixel grid of the image
must be changed to fit a map projection system or a reference
image. There are several reasons for rectifying image data:
• comparing pixels scene to scene in applications, such as
change detection or thermal inertia mapping (day and night
comparison)
• developing GIS databases for GIS modeling
• identifying training samples according to map coordinates
prior to classification
• creating accurate scaled photomaps
• overlaying an image with vector data, such as ArcInfo
• comparing images that are originally at different scales
• extracting accurate distance and area measurements
• mosaicking images
• performing any other analyses requiring precise geographic
locations
Before rectifying the data, you must determine the appropriate
coordinate system for the database. To select the optimum map
projection and coordinate system, the primary use for the database
must be considered. If you are doing a government project, the
projection may be predetermined. A commonly used projection in
the United States government is State Plane. Use an equal area
projection for thematic or distribution maps and conformal or equal
area projections for presentation maps. Before selecting a map
projection, consider the following:
• How large or small an area is mapped? Different projections
are intended for different size areas.
• Where on the globe is the study area? Polar regions and
equatorial regions require different projections for maximum
accuracy.
• What is the extent of the study area? Circular, north-south,
east-west, and oblique areas may all require different
projection systems (ESRI 1992).
Disadvantages of rectification
During rectification, the data file values of rectified pixels must be
resampled to fit into a new grid of pixel rows and columns.
Although some of the algorithms for calculating these values are
highly reliable, some spectral integrity of the data can be lost during
rectification. If map coordinates or map units are not needed in the
application, then it may be wiser not to rectify the image. An
unrectified image is more spectrally correct than a rectified image.
Georeferencing
Georeferencing refers to the process of assigning map coordinates
to image data. The image data may already be projected onto the
desired plane, but not yet referenced to the proper coordinate
system. Rectification, by definition, involves georeferencing, since
all map projection systems are associated with map coordinates.
Image to image registration involves georeferencing only if the
reference image is already georeferenced. Georeferencing, by
itself, involves changing only the map coordinate information in
the image file. The grid of the image does not change.
Geocoded data are images that have been rectified to a particular
map projection and pixel size, and usually have had radiometric
corrections applied. It is possible to purchase image data that is
already geocoded. Geocoded data should be rectified only if they
must conform to a different projection system or be registered to
other rectified data.
Georeferencing only
Rectification is not necessary if there is no distortion in the image.
For example, if an image file is produced by scanning or digitizing
a paper map that is in the desired projection system, then that image
is already planar and does not require rectification unless there is
some skew or rotation of the image. Scanning or digitizing
produces images that are planar, but do not contain any map
coordinate information. These images need only to be
georeferenced, which is a much simpler process than rectification.
In many cases, the image header can simply be updated with new
map coordinate information. This involves redefining:
• the map coordinate of the upper left corner of the image
• the cell size (the area represented by each pixel)
This information is usually the same for each layer of an image file,
although it could be different. For example, the cell size of band 6
of Landsat TM data is different than the cell size of the other bands.
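When only a header update is needed, one common lightweight approach is to write a world file next to the image; its six lines carry the cell size, rotation terms, and the map coordinate of the upper left pixel center. The following is a minimal sketch of that idea in Python, using made-up coordinates and hypothetical file names; it is offered as an illustration, not as the extension's own mechanism.

# Minimal sketch: georeference an image by writing a world file beside it.
# Assumes square cells, no rotation, and a hypothetical image "scene.tif".
upper_left_x = 480000.0   # map X of the center of the upper left pixel (made up)
upper_left_y = 3750000.0  # map Y of the center of the upper left pixel (made up)
cell_size = 30.0          # area represented by each pixel, in map units

world_file_lines = [
    cell_size,     # cell size in the X direction
    0.0,           # rotation term
    0.0,           # rotation term
    -cell_size,    # negative cell size in the Y direction
    upper_left_x,  # X of the upper left pixel center
    upper_left_y,  # Y of the upper left pixel center
]

with open("scene.tfw", "w") as world_file:
    world_file.write("\n".join(str(value) for value in world_file_lines) + "\n")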
Ground control points
GCPs are specific pixels in an image for which the output map
coordinates (or other output coordinates) are known. GCPs consist
of two X,Y pairs of coordinates:
• source coordinates — usually data file coordinates in the image
being rectified
• reference coordinates — the coordinates of the map or
reference image to which the source image is being registered
The term map coordinates is sometimes used loosely to apply to
reference coordinates and rectified coordinates. These coordinates
are not limited to map coordinates. For example, in image to image
registration, map coordinates are not necessary.
Entering GCPs
Accurate GCPs are essential for an accurate rectification. From the
GCPs, the rectified coordinates for all other points in the image are
extrapolated. Select many GCPs throughout the scene. The more
dispersed the GCPs are, the more reliable the rectification is. GCPs
for large scale imagery might include the intersection of two roads,
airport runways, utility corridors, towers or buildings. For small
scale imagery, larger features such as urban areas or geologic
features may be used. Landmarks that can vary (edges of lakes,
other water bodies, vegetation and so on) should not be used.
The source and reference coordinates of the GCPs can be entered in
the following ways:
• They may be known a priori, and entered at the keyboard.
• Use the mouse to select a pixel from an image in the view. With
both the source and destination views open, enter source
coordinates and reference coordinates for image to image
registration.
• Use a digitizing tablet to register an image to a hardcopy map.
Tolerance of RMS error (RMSE)
Acceptable RMS error is determined by the end use of the database, the type of data being used, and the accuracy of the GCPs and
ancillary data being used. For example, GCPs acquired from GPS
should have an accuracy of about 10 m, but GCPs from 1:24,000-
scale maps should have an accuracy of about 20 m.
It is important to remember that RMS error is reported in pixels.
Therefore, if you are rectifying Landsat TM data and want the
rectification to be accurate to within 30 meters, the RMS error
should not exceed 1.00. Acceptable accuracy depends on the image
area and the particular project.
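Since RMS error is reported in pixels, the ground tolerance divided by the cell size gives the largest acceptable RMS value. The snippet below only illustrates that arithmetic together with the usual root-mean-square calculation over link residuals; the residual values are hypothetical and this is not output from the software.

import math

cell_size_m = 30.0          # Landsat TM cell size in meters
ground_tolerance_m = 30.0   # desired ground accuracy in meters
max_rms_pixels = ground_tolerance_m / cell_size_m   # 1.00 pixel in this case

# Hypothetical X and Y residuals, in pixels, for a set of GCP links
residuals = [(0.4, -0.2), (-0.7, 0.5), (0.1, 0.3)]
rms = math.sqrt(sum(dx * dx + dy * dy for dx, dy in residuals) / len(residuals))

print("RMS error %.2f pixels, limit %.2f pixels" % (rms, max_rms_pixels))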
Classification
Some analysts recommend classification before rectification since
the classification is then based on the original data values. Another
benefit is that a thematic file has only one band to rectify instead of
the multiple bands of a continuous file. On the other hand, it may
be beneficial to rectify the data first, especially when using GPS
data for the GCPs. Since this data is very accurate, the classification
may be more accurate if the new coordinates help to locate better
training samples.
Thematic files
Nearest neighbor is the only appropriate resampling method for
thematic files, which may be a drawback in some applications. The
available resampling methods are discussed in detail later in
Geocorrection property dialogs.
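The reason nearest neighbor suits thematic files is that it never manufactures new values; each output cell simply copies the closest source cell, so class codes stay intact. The sketch below illustrates the idea with numpy on a made-up thematic grid; it is an illustration only, not the resampler used by the software.

import numpy as np

# A tiny thematic layer: each value is a class code, not a measurement.
thematic = np.array([[1, 1, 2],
                     [3, 1, 2],
                     [3, 3, 2]])

def nearest_neighbor_resample(grid, scale):
    """Copy the nearest source cell into each output cell; no new class values appear."""
    rows, cols = grid.shape
    out_rows, out_cols = int(rows * scale), int(cols * scale)
    row_idx = np.clip(np.round((np.arange(out_rows) + 0.5) / scale - 0.5).astype(int), 0, rows - 1)
    col_idx = np.clip(np.round((np.arange(out_cols) + 0.5) / scale - 0.5).astype(int), 0, cols - 1)
    return grid[np.ix_(row_idx, col_idx)]

print(nearest_neighbor_resample(thematic, 2.0))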
Geocorrection property dialogs
The individual Geocorrection Tools have their own dialog that
appears whenever you choose a model type and click on the
Geocorrection Properties button. Some of the tool dialogs offer
certain option tabs pertaining to that specific tool, but they all have
several tabs in common. Every Geocorrection Tool dialog has a
General tab and a Links tab, and all but Polynomial Properties and
Rubber Sheeting Properties have an Elevation tab.
The General tab has a Link Coloring section, a Displayed Units
section, and a Link Snapping section. The Link Coloring section
lets you set a Threshold and select or change link colors. The
Displayed Units section gives you the Horizontal and Vertical
Units if they are known. Often one will be known and the other will not, so the dialog may show Meters for Vertical Units and Unknown for Horizontal Units. Displayed Units has no effect on original data in latitude/longitude format, and the image in the view does not change either.
The Link Snapping section will only be activated when you have a
vector layer (shapefile) active in ArcMap. The purpose of this
portion of the tool is to allow you to snap an edge, end, or vertex to
the edge, end, or vertex of another layer. The vector layer you want
to snap to another layer will be defined in the Link Snapping box.
You will need to check either Vertex, Edge, or End depending on
what you want to snap to in another layer. The choice is completely
up to you.
1. Click the arrows to set the Threshold, and click the Within
and Over Threshold boxes to change the link colors.
2. The Displayed Units area shows the measurement of the
Vertical Units.
3. If you have shapefiles (a vector layer) active in ArcMap,
check Vertex, Boundary, or End Point. Checking one will
activate Snap Tolerance and Snap Tolerance Units.
Links tab
The Links tab (this display is also called a CellArray) shows
information about the links in your image, including reference
points and RMS Error. If you have already added links to your
image, they will be listed under this tab. The program is interactive
between the image and the Links tab, so when you add links in an
image or between two images, information is automatically
updated in the CellArray. You can edit and delete information
displayed in the CellArray as well. For example, if you want to
experiment with coordinates other than the ones you’ve been given,
you can plug your own coordinates into the CellArray on the Links
tab.
Before adding links or editing the links table, you need to select the
Coordinate System in which you want to store the link coordinates.
1. Right-click in the view area and click Properties at the
bottom of the popup menu. The Data Frame Properties dialog
displays.
2. Click the Coordinate System tab.
3. If your link coordinates are predefined, click the appropriate
Predefined coordinate system. If you want to use the
coordinate system from a specific layer, select that layer from
the list of Layers.
There are a few additional checks you need to make before
proceeding.
1. Make sure that the correct layer is displayed in the Layers
box on the Image Analysis toolbar.
2. Choose your Model Type from the dropdown list.
3. Click the Add Links button to set your new links.
You can proof and edit the coordinates of the links as you enter
them.
1. Click the Geocorrection Properties button .
2. Click the Links tab. The coordinates will be displayed in the
cell array on this tab.
3. Click inside a cell and edit the contents.
4. When you are finished, you can click Export Links to Shapefile and save the new shapefile.
Elevation tab
The Elevation tab is in all Geocorrection Model Properties except
for Polynomial and Rubber Sheeting. When you click the Elevation
tab in any of the Geocorrection Model Types, the default selection
will allow you to choose a file to use as an Elevation Source,
because most of the time you will have an Elevation File to use as
your elevation source. If you do not have an Elevation File, you
should use a Constant elevation value as the elevation source.
Choosing Constant changes the options in the Elevation Source
section to allow you to specify the Elevation Value and Elevation
Units. The Constant value you should use is the average ground
elevation for the entire scene. The following examples use the
Landsat Properties dialog, but the Elevation tab is the same on all
of the Model Types that allow you to specify elevation information.
Elevation Source File
Elevation Source Constant
After the Elevation Source section you can check the box if you
want to Account for Earth’s curvature as part of the Elevation.
The following steps take you through the Elevation tab. The first set
of instructions pertains to using File as your Elevation Source. The
second set uses Constant as the Elevation Source.
1. Choose File.
2. Type the file name or navigate to the directory where the
Elevation File is stored.
3. Click the dropdown arrow and choose Feet or Meters.
4. Check if you want to Account for the Earth’s curvature.
5. Click Apply to set the Elevation Source. Click OK if you are
finished with the dialog.
These are the steps to take when using a Constant value as the
elevation source.
1. Choose Constant.
2. Click the arrows to enter the Elevation Value.
3. Click the dropdown arrow, and choose either Feet or Meters.
4. Check if you want to Account for the Earth’s curvature.
5. Click Apply to set the Elevation Source. Click OK if you are
finished with the dialog.
SPOT
The first SPOT satellite, developed by the French Centre National
d’Etudes Spatiales (CNES), was launched in early 1986. The
second SPOT satellite was launched in 1990, and the third was
launched in 1993. The sensors operate in two modes, multispectral
and panchromatic. SPOT is commonly referred to as a pushbroom
scanner, which means that all scanning parts are fixed, and
scanning is accomplished by the forward motion of the scanner.
SPOT pushes a line of 3000 (multispectral) or 6000 (panchromatic) detectors along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit.
The SPOT satellite can observe the same area on the globe once
every 26 days. The SPOT scanner normally produces nadir views,
but it does have off-nadir viewing capability. Off-nadir refers to
any point that is not directly beneath the detectors, but off to an
angle. Using this off-nadir capability, one area on the earth can be
viewed as often as every 3 days.
This off-nadir viewing can be programmed from the ground control
station, and is quite useful for collecting data in a region not directly
in the path of the scanner or in the event of a natural or man-made
disaster, where timeliness of data acquisition is crucial. It is also
very useful in collecting stereo data from which elevation data can
be extracted.
The width of the swath observed varies between 60 km for nadir
viewing and 80 km for off-nadir viewing at a height of 832 km
(Jensen 1996).
Panchromatic
SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution, contains one band (0.51 to 0.73 µm), and is similar to a black and white photograph. It has a radiometric resolution of 8 bits (Jensen 1996).
XS
SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit
radiometric resolution, and contains 3 bands (Jensen 1996).
SPOT XS Bands and Wavelengths

Band 1, Green (0.50 to 0.59 µm): This band corresponds to the green reflectance of healthy vegetation.

Band 2, Red (0.61 to 0.68 µm): This band is useful for discriminating between plant species. It is also useful for soil boundary and geological boundary delineations.

Band 3, Reflective IR (0.79 to 0.89 µm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.
SPOT Panchromatic versus SPOT XS (figure): Panchromatic has 1 band and a 10 m × 10 m pixel; XS has 3 bands and a 20 m × 20 m pixel; both have a radiometric resolution of 0 to 255.
Stereoscopic pairs
Two observations can be made by the panchromatic scanner on
successive days, so that the two images are acquired at angles on
either side of the vertical, resulting in stereoscopic imagery.
Stereoscopic imagery can also be achieved by using one vertical
scene and one off-nadir scene. This type of imagery can be used to
produce a single image, or topographic and planimetric maps
(Jensen 1996).
Topographic maps indicate elevation. Planimetric maps correctly
represent horizontal distances between objects (Star and Estes
1990).
SPOT 4
The SPOT 4 satellite was launched in 1998. SPOT 4 carries High
Resolution Visible Infrared (HR VIR) instruments that obtain
information in the visible and near-infrared spectral bands.
The SPOT 4 satellite orbits the earth at 822 km above the Equator.
The SPOT 4 satellite has two sensors on board: a multispectral
sensor, and a panchromatic sensor. The multispectral scanner has a
pixel size of 20 × 20 m, and a swath width of 60 km. The
panchromatic scanner has a pixel size of 10 × 10 m, and a swath
width of 60 km.
SPOT 4 Bands and Wavelengths
Band Wavelength
1, Green 0.50 to 0.59 µm
2, Red 0.61 to 0.68 µm
3, (near-IR) 0.78 to 0.89 µm
4, (mid-IR) 1.58 to 1.75 µm
Panchromatic 0.61 to 0.68 µm
The Spot Properties dialog
In addition to the General, Links, and Elevation tabs, the Spot
Properties dialog also contains a Parameters tab. Most of the
Geocorrection Properties dialogs do contain a Parameters tab, but
each one offers different options.
1. Click the Model Types dropdown arrow, and choose Spot.
2. Click the Geocorrection Properties button.
3. Click the Parameters tab on the Spot Properties dialog.
4. Choose the Sensor type.
5. Click the arrows to enter the Number of Iterations.
6. Click the arrows to enter the Incidence Angle.
7. Click the arrows to enter the Background Value, and the
layer.
8. Click OK.
Polynomial transformation
Polynomial equations are used to convert source file coordinates to
rectified map coordinates. Depending upon the distortion in the
imagery, complex polynomial equations may be required to express
the needed transformation. The degree of complexity of the
polynomial is expressed as the order of the polynomial. The order
of transformation is the order of the polynomial used in the
transformation. Image Analysis for ArcGIS allows 1st through nth
order transformations. Usually, 1st order or 2nd order
transformations are used.
Transformation matrix
A transformation matrix is computed from the GCPs. The matrix
consists of coefficients that are used in polynomial equations to
convert the coordinates. The size of the matrix depends upon the
order of transformation. The goal in calculating the coefficients of
the transformation matrix is to derive the polynomial equations for
which there is the least possible amount of error when they are used
to transform the reference coordinates of the GCPs into the source
coordinates. It is not always possible to derive coefficients that
produce no error. For example, in the figure below, GCPs are
plotted on a graph and compared to the curve that is expressed by a
polynomial.
(Figure: GCPs plotted against a polynomial curve, with the source X coordinate on the horizontal axis and the reference X coordinate on the vertical axis.)
Every GCP influences the coefficients, even if there isn’t a perfect
fit of each GCP to the polynomial that the coefficients represent.
The distance between the GCP reference coordinate and the curve
is called RMS error; see “Tolerance of RMS error (RMSE)” earlier in this chapter.
Linear transformations
A 1st order transformation is a linear transformation. It can change:
• location in X and/or Y
• scale in X and/or Y
• skew in X and/or Y
• rotation
1st order transformations can be used to project raw imagery to a
planar map projection, to convert a planar map projection to
another planar map projection, and to rectify relatively small image
areas. You can perform simple linear transformations to an image
displayed in a view or to the transformation matrix itself. Linear
transformations may be required before collecting GCPs on the
displayed image. You can reorient skewed Landsat TM data, rotate
scanned quad sheets according to the angle of declination stated in
the legend, and rotate descending data so that north is up.
A 1st order transformation can also be used for data that are already
projected onto a plane. For example, SPOT and Landsat Level 1B
data are already transformed to a plane, but may not be rectified to
the desired map projection. When doing this type of rectification, it
is not advisable to increase the order of transformation if at first a
high RMS error occurs. Examine other factors first, such as the
GCP source and distribution, and look for systematic errors.
The transformation matrix for a 1st-order transformation consists
of six coefficients—three for each coordinate (X and Y).
$\begin{bmatrix} a_0 & a_1 & a_2 \\ b_0 & b_1 & b_2 \end{bmatrix}$

Coefficients are used in a 1st-order polynomial as follows:

$x_0 = a_0 + a_1 x + a_2 y$

$y_0 = b_0 + b_1 x + b_2 y$

Where:

x and y are source coordinates (input)

$x_0$ and $y_0$ are rectified coordinates (output)

the coefficients of the transformation matrix are as above
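To make the 1st-order case concrete, the sketch below estimates the six coefficients from a handful of GCPs with ordinary least squares in numpy. The coordinates are invented for illustration, and this is a sketch of the general technique, not the solver used inside Image Analysis for ArcGIS.

import numpy as np

# Hypothetical GCPs: source file coordinates and reference map coordinates.
src = np.array([[10.0, 12.0], [250.0, 30.0], [40.0, 300.0], [260.0, 280.0]])
ref = np.array([[480300.0, 3749640.0], [487500.0, 3749100.0],
                [481200.0, 3741000.0], [487800.0, 3741600.0]])

# Design matrix [1, x, y]; solve X = a0 + a1*x + a2*y and Y = b0 + b1*x + b2*y.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
a_coeffs, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
b_coeffs, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

# Residuals at the GCPs give an RMS error in map units for this fit.
fitted = A @ np.column_stack([a_coeffs, b_coeffs])
rms_error = np.sqrt(np.mean(np.sum((fitted - ref) ** 2, axis=1)))
print(a_coeffs, b_coeffs, rms_error)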
Nonlinear transformations
Second-order transformations can be used to convert Lat/Lon data
to a planar projection, for data covering a large area (to account for
the earth’s curvature), and with distorted data (for example, due to
camera lens distortion). Third-order transformations are used with
distorted aerial photographs, on scans of warped maps and with
radar imagery. Fourth-order transformations can be used on very
distorted aerial photographs.
The transformation matrix for a transformation of order t contains this number of coefficients:

$2 \sum_{i=0}^{t+1} i$

It is multiplied by two for the two sets of coefficients, one set for X and one for Y. An easier way to arrive at the same number is:

$(t + 1) \times (t + 2)$

Clearly, the size of the transformation matrix increases with the order of the transformation.
High order polynomials
The polynomial equations for a t-order transformation take this form:
$x_o = \sum_{i=0}^{t} \sum_{j=0}^{i} a_k \cdot x^{i-j} \cdot y^{j}$

$y_o = \sum_{i=0}^{t} \sum_{j=0}^{i} b_k \cdot x^{i-j} \cdot y^{j}$
Where:

t is the order of the polynomial

a and b are coefficients

the subscript k in a and b is determined by:

$k = \frac{i \cdot (i + 1)}{2} + j$
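To make the indexing concrete, the short sketch below evaluates a t-order polynomial using the double sum and the subscript rule above; the zero-based coefficient ordering is an assumption made for illustration and may differ from how the software stores its transformation matrix.

def eval_poly(coeffs, x, y, t):
    """Evaluate the sum over i = 0..t and j = 0..i of coeffs[k] * x**(i - j) * y**j,
    with k = i * (i + 1) // 2 + j (zero-based ordering assumed for illustration)."""
    total = 0.0
    for i in range(t + 1):
        for j in range(i + 1):
            k = i * (i + 1) // 2 + j
            total += coeffs[k] * x ** (i - j) * y ** j
    return total

# For t = 1, coeffs = [a0, a1, a2] reproduces x0 = a0 + a1*x + a2*y.
print(eval_poly([25.0, -8.0, 0.0], 2.0, 0.0, 1))   # prints 9.0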
Effects of order
The computation and output of a higher polynomial equation are
more complex than that of a lower order polynomial equation.
Therefore, higher order polynomials are used to perform more
complicated image rectifications. To understand the effects of
different orders of transformation in image rectification, it is
helpful to see the output of various orders of polynomials.
The following example uses only one coordinate (X) instead of two
(X,Y) which are used in the polynomials for rectification. This
enables you to draw two-dimensional graphs that illustrate the way
that higher orders of transformation affect the output image.
Because only the X coordinate is used in these examples, the
number of GCPs used is less than the number required to actually
perform the different orders of transformation.
Coefficients like those presented in this example would generally
be calculated by the least squares regression method. Suppose
GCPs are entered with these X coordinates:
Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              9
3                              1

These GCPs allow a 1st-order transformation of the X coordinates, which is satisfied by this equation (the coefficients are in parentheses):

$x_r = (25) + (-8)x_i$

Where:

$x_r$ = the reference X coordinate

$x_i$ = the source X coordinate

This equation takes on the same format as the equation of a line (y = mx + b). In mathematical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is also known as a linear transformation. This equation is graphed below:

(Graph: reference X coordinate plotted against source X coordinate for $x_r = (25) + (-8)x_i$; the three GCPs fall on the line.)
However, what if the second GCP were changed as follows?

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1

These points are plotted against each other below:

(Graph: the three GCPs plotted as reference X coordinate against source X coordinate.)

A line cannot connect these points, which illustrates that they cannot be expressed by a 1st-order polynomial like the one above. In this case, a 2nd-order polynomial equation expresses these points:

$x_r = (31) + (-16)x_i + (2)x_i^2$

Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn below:

(Graph: the 2nd-order curve $x_r = (31) + (-16)x_i + (2)x_i^2$ passing through the three GCPs, reference X coordinate against source X coordinate.)
What if one more GCP were added to the list?

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
4                              5

(Graph: the 2nd-order curve $x_r = (31) + (-16)x_i + (2)x_i^2$ with the fourth GCP at (4, 5) plotted off the curve.)

As illustrated in the graph above, this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. To ensure that all of the GCPs fit, the order of the transformation could be increased to 3rd-order. The equation and graph below could then result:

$x_r = (25) + (-5)x_i + (-4)x_i^2 + (1)x_i^3$

(Graph: the 3rd-order curve passing through all four GCPs, reference X coordinate against source X coordinate.)
This figure illustrates a 3rd-order transformation. However, this
equation may be unnecessarily complex. Performing a coordinate
transformation with this equation may cause unwanted distortions
in the output image for the sake of a perfect fit for all the GCPs. In
this example, a 3rd-order transformation probably would be too
high, because the output pixels in the X direction would be arranged
in a different order than the input pixels in the X direction.
$x_o(1) = 17$, $x_o(2) = 7$, $x_o(3) = 1$, $x_o(4) = 5$

$x_o(1) > x_o(2) > x_o(4) > x_o(3)$

$17 > 7 > 5 > 1$

(Figure: input image X coordinates 1, 2, 3, 4 map to output image X coordinates 17, 7, 1, and 5, so along the output axis the pixels fall in the order 3, 4, 2, 1.)

In this case a higher order of transformation would probably not produce the desired results.
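The coefficients used in this example can be reproduced with an ordinary least squares polynomial fit. The sketch below runs numpy.polyfit on the example GCPs purely to check the arithmetic; it is not how the software itself computes or reports its transformation matrix.

import numpy as np

# 1st-order fit to the original three GCPs (source 1, 2, 3 -> reference 17, 9, 1)
print(np.polyfit([1, 2, 3], [17, 9, 1], 1))        # approximately [-8. 25.]

# 2nd-order fit to (1, 17), (2, 7), (3, 1)
print(np.polyfit([1, 2, 3], [17, 7, 1], 2))        # approximately [2. -16. 31.]

# 3rd-order fit through all four GCPs, including (4, 5)
print(np.polyfit([1, 2, 3, 4], [17, 7, 1, 5], 3))  # approximately [1. -4. -5. 25.]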
Minimum number of GCPs
Higher orders of transformation can be used to correct more
complicated types of distortion. However, to use a higher order of
transformation, more GCPs are needed. For instance, three points
define a plane. Therefore, to perform a 1st-order transformation,
which is expressed by the equation of a plane, at least three GCPs
are needed. Similarly, the equation used in a 2nd-order
transformation is the equation of a paraboloid. Six points are
required to define a paraboloid. Therefore, at least six GCPs are
required to perform a 2nd-order transformation. The minimum
number of points required to perform a transformation of order t
equals:
$\frac{(t + 1)(t + 2)}{2}$

Use more than the minimum number of GCPs whenever possible. Although it is possible to get a perfect fit, it is rare, no matter how many GCPs are used.
For 1st through 10th-order transformations, the minimum number
of GCPs required to perform a transformation is listed in the
following table:
Number of GCPs

Order of Transformation    Minimum GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66
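The table values follow directly from the formula above; the two-line check below is included only as an illustration of that arithmetic.

# Minimum number of GCPs for a transformation of order t: (t + 1)(t + 2) / 2
for t in range(1, 11):
    print(t, (t + 1) * (t + 2) // 2)   # prints 1 3, 2 6, ..., 10 66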
The Polynomial Properties dialog
Polynomial Properties has a Parameters tab in addition to the
General and Links tabs. It does not need an Elevation tab. The
General tab and the Links tab are the same as the ones featured at
the beginning of this chapter.
The Parameters tab contains a CellArray that shows the
transformation coefficients table. These are filled in when the
model is solved.
1. Click the Parameters tab.
2. Using the arrows, enter the Polynomial Order.
Rubber Sheeting
Triangle-based finite element analysis
Finite element analysis is a powerful tool for solving complicated computational problems by breaking them into small, simpler pieces, and it has been widely used as a local interpolation technique in geographic applications. For image rectification, the known control points can be triangulated into many triangles, each having three control points as its vertices. A polynomial transformation is then used to establish the mathematical relationship between the source and destination systems for each triangle. Because the transformation passes exactly through each control point rather than behaving uniformly across the image, finite element analysis is also called rubber sheeting. It is also called triangle-based rectification because the transformation and resampling for image rectification are performed on a triangle-by-triangle basis.
This triangle-based technique should be used when other
rectification methods such as Polynomial Transformation and
photogrammetric modeling cannot produce acceptable results.
Triangulation
To perform the triangle-based rectification, it is necessary to
triangulate the control points into a mesh of triangles. Watson (1994) lists four kinds of triangulation: arbitrary, optimal, Greedy, and Delaunay. Of the four, the Delaunay triangulation is the most widely used and is adopted here because the resulting triangles have the smallest angle variations.

The Delaunay triangulation can be constructed using the empty circumcircle criterion: the circumcircle formed from the three points of any triangle contains no other point. The triangles defined this way are as equiangular as possible.
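As an illustration of the triangulation step, the sketch below builds a Delaunay mesh over a handful of control points with scipy; the point coordinates are invented, and the software's own triangulation is not necessarily implemented this way.

import numpy as np
from scipy.spatial import Delaunay

# Hypothetical source-image control point locations (column, row)
control_points = np.array([[0.0, 0.0], [100.0, 10.0], [20.0, 90.0],
                           [110.0, 95.0], [60.0, 50.0]])

mesh = Delaunay(control_points)
# Each row of mesh.simplices lists the indices of the three control points
# that form one triangle of the rubber sheeting mesh.
print(mesh.simplices)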
Triangle-based rectification
Once the triangle mesh has been generated and the spatial order of
the control points is available, the geometric rectification can be
done on a triangle-by-triangle basis. This triangle-based method is
appealing because it breaks the entire region into smaller subsets. If
the geometric problem of the entire region is very complicated, the
geometry of each subset can be much simpler and modeled through
simple transformation.
For each triangle, the polynomials can be used as the general
transformation form between source and destination systems.
Linear transformation
The easiest and fastest transformation is the linear transformation
with the first order polynomials:

$x_o = a_0 + a_1 x + a_2 y$

$y_o = b_0 + b_1 x + b_2 y$

There is no need for extra information because there are three known conditions in each triangle and three unknown coefficients for each polynomial.
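Because each triangle supplies exactly three conditions, the three coefficients of each polynomial can be found by solving a small linear system per triangle. The sketch below shows that step for one triangle with invented vertex coordinates; it is an illustration of the technique rather than the product's implementation.

import numpy as np

# Source (x, y) and destination (xo, yo) coordinates of one triangle's vertices (made up).
src = np.array([[10.0, 10.0], [200.0, 20.0], [50.0, 180.0]])
dst = np.array([[480300.0, 3749700.0], [486000.0, 3749400.0], [481500.0, 3744600.0]])

# Three equations of the form xo = a0 + a1*x + a2*y (and likewise for yo).
A = np.column_stack([np.ones(3), src[:, 0], src[:, 1]])
a_coeffs = np.linalg.solve(A, dst[:, 0])   # a0, a1, a2
b_coeffs = np.linalg.solve(A, dst[:, 1])   # b0, b1, b2

# Any point inside this triangle can now be transformed with these coefficients.
x, y = 80.0, 70.0
print(a_coeffs[0] + a_coeffs[1] * x + a_coeffs[2] * y,
      b_coeffs[0] + b_coeffs[1] * x + b_coeffs[2] * y)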
Nonlinear transformation
Even though the linear transformation is easy and fast, it has one
disadvantage. The transitions between triangles are not always
smooth. This phenomenon is obvious when shaded relief or contour
lines are derived from the DEM which is generated by the linear
rubber sheeting. It is caused by incorporating the slope change of
the control data at the triangle edges and vertices. In order to
distribute the slope change smoothly across triangles, the nonlinear
transformation with polynomial order larger than one is used by
considering the gradient information.
The fifth order or quintic polynomial transformation is chosen here
as the nonlinear rubber sheeting technique in this example. It is a
smooth function. The transformation function and its first order
partial derivative are continuous. It is not difficult to construct
(Akima 1978). The formulation is simply as follows:
$x_o = \sum_{i=0}^{5} \sum_{j=0}^{i} a_k \cdot x^{i-j} \cdot y^{j}$

$y_o = \sum_{i=0}^{5} \sum_{j=0}^{i} b_k \cdot x^{i-j} \cdot y^{j}$
The 5th-order transformation has 21 coefficients for each polynomial to be determined. For solving these unknowns, 21 conditions should be
available. For each vertex of the triangle, one point value is given,
and two 1st-order and three 2nd-order partial derivatives can be
easily derived by establishing a 2nd-order polynomial using
vertices in the neighborhood of the vertex. Then a total of 18 conditions is available. Three more conditions can be obtained by assuming that the normal partial derivative on each edge of the triangle is a cubic polynomial, which means that the sum of the polynomial terms beyond the 3rd order in the normal partial derivative is zero.
Checkpoint analysis
It should be emphasized that the independent checkpoint analysis is
critical for determining the accuracy of rubber sheeting modeling.
For an exact modeling method like rubber sheeting, the ground control points used in the modeling process retain almost no geometric residuals.
transformation between source and destination coordinate systems,
the accuracy assessment using independent checkpoints is
recommended.
Camera Properties
The Camera model is derived by space resection based on
collinearity equations, and is used for rectifying any image that uses
a camera as its sensor. In addition to the General, Links, and
Elevation tabs, Camera Properties has tabs for Orientation, Camera,
and Fiducials.
The Orientation feature allows you to choose different rotation angles and perspective center positions for the camera. The Rotation Angle lets you customize the Omega, Phi, and Kappa rotation angles of the image to determine the viewing direction of the camera; a sketch showing how these three angles compose into a rotation matrix follows the option lists below. If you can fill in all the degrees and meters for the
Rotation Angle and the Perspective Center Position, then you do
not need the three links you normally would need for the Camera
model. If you are going to fill in this information on the Orientation
tab, then you will need to make sure you do not check Account for
Earth’s curvature on the Elevation tab. You can see the areas to fill
in on the Orientation tab below:
Camera Properties dialog
Rotation offers the following options when you click the dropdown
arrows:
• Unknown— select when the rotation angle is unknown
• Estimated — select when estimating the rotation angle
• Fixed — select when rotation angle is defined
• Omega — rotation angle is roll: around the x-axis of the
ground system
• Phi — phi rotation angle is pitch: around the y-axis (after
Omega rotation)
• Kappa — kappa rotation angle is yaw: around the z-axis
rotated by Omega and Phi
The Perspective Center Position is given in meters and allows you
to enter the perspective center for ground coordinates. You can
choose from the following options:
• Unknown — select when the ground coordinate is unknown
• Estimated — select when estimating the ground coordinate
• Fixed — select when ground coordinate is defined
• X — enter the X coordinate of the perspective center
• Y — enter the Y coordinate of the perspective center
• Z — enter the Z coordinate of the perspective center
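The sketch below composes elementary rotations in the order the options describe: Omega about the x-axis, then Phi about the y-axis, then Kappa about the z-axis. Axis and sign conventions differ between photogrammetric systems, so treat this strictly as an illustration of how the three angles combine, not as the Camera model's exact formulation.

import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Compose roll (omega, about X), pitch (phi, about Y), and yaw (kappa, about Z)
    into one 3 x 3 rotation matrix. Angles in radians; one common convention only."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_x = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    r_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_z = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return r_z @ r_y @ r_x   # omega applied first, then phi, then kappa

print(rotation_matrix(np.radians(2.0), np.radians(-1.5), np.radians(90.0)))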
The next tab on Camera Properties is also called Camera. This is
where you can specify the Camera Name, the Number of Fiducials,
the Principal Point, and the Focal Length for the camera that was
used to capture your image.
Camera tab on Camera Properties dialog
You can click Load or Save to open or save a file with certain
camera information in it.
The last tab on the Camera Properties dialog is the Fiducials tab.
Fiducials are used to compute the transformation from data file to
image coordinates. Fiducial orientation defines the relationship
between the image/photo-coordinate system of a frame and the
actual image orientation as it appears within a view. The image/
photo-coordinate system is defined by the camera calibration
information. The orientation of the image is largely dependent on
the way the photograph was scanned during the digitization stage.
The fiducials for your image will be fixed on the frame and visible
in the exposure. The Fiducial information you enter on the Camera
tab will be displayed in a cell array on the Fiducial tab after you
click Apply on the Camera Properties dialog.
In order to select the appropriate fiducial orientation, compare the
axis of the photo-coordinate system (defined in the calibration
report) with the orientation of the image. Based on the relationship
between the photo-coordinate system and the image, the
appropriate fiducial orientation can be selected. Do not use over 8
fiducials in an image. The following illustrations demonstrate the
fiducial orientation used under the various circumstances.
Fiducial One—places the marker at the left of the image
Fiducial Two—places the marker at the top of the image
Fiducial Three—places the marker at the right of the image
Fiducial Four—places the marker at the bottom of the image
Click to select where to place the fiducial in the viewer.
Selecting the inappropriate fiducial orientation results in large
RMS errors during the measurement of fiducial marks for interior
orientation and errors during the automatic tie point collection. If
initial approximations for exterior orientation have been defined but the fiducial orientation does not correspond to them, the automatic tie point collection capability provides inadequate
results. Ensure that the appropriate fiducial orientation is used as a
function of the image/photo-coordinate system.
IKONOS, QuickBird, and RPC Properties
IKONOS, QuickBird, and RPC Properties are sometimes referred
to together as the Rational Function Models. They are virtually the
same except for the files they use. The dialogs for the three in
Geocorrection Properties are identical as well. IKONOS files are
images captured by the IKONOS satellite. QuickBird files are
images captured by the QuickBird satellite. RPC Properties uses
NITF data.
It is important that you click the Add Links button before you click
the Geocorrection Properties button to open one of these three
property dialogs. Once you click the Add Links button and click the
Geocorrection Properties button, the dialog will appear. The
Parameters tab in IKONOS, QuickBird, and RPC Properties calls
for an RPC file and the Elevation Range. Click the Parameters tab,
and enter the RPC File before proceeding with anything else.
IKONOS Properties Parameters tab
The Parameters tab is the same in all three of these Geocorrection
models.
IKONOS
IKONOS images are produced from the IKONOS satellite, which
was launched in September of 1999 by the Athena II rocket.
The resolution of the panchromatic sensor is 1 m. The resolution of
the multispectral scanner is 4 m. The swath width is 13 km at nadir.
The accuracy without ground control is 12 m horizontally, and 10
m vertically; with ground control it is 2 m horizontally, and 3 m
vertically.
IKONOS orbits at an altitude of 423 miles, or 681 kilometers. The
revisit time is 2.9 days at 1 m resolution, and 1.5 days at 1.5 m
resolution.
IKONOS Bands and Wavelengths

Band           Wavelength (microns)
1, Blue        0.45 to 0.52 µm
2, Green       0.52 to 0.60 µm
3, Red         0.63 to 0.69 µm
4, NIR         0.76 to 0.90 µm
Panchromatic   0.45 to 0.90 µm

The IKONOS Properties dialog gives you the ability to rectify IKONOS images from the satellite. Like the other property dialogs in Geocorrection, IKONOS has General, Links, and Elevation tabs as well as Parameters and Chipping tabs.
The RPC file is generated by the data provider based on the position
of the satellite at the time of image capture. The RPCs can be
further refined by using ground control points (GCPs). This file
should be located in the same directory as the image you intend to
use in the Geocorrection process.
On the Parameters tab, there is also a check box for Refinement
with Polynomial Order. This is provided so you may apply
polynomial corrections to the original rational function model. This
option corrects the remaining error and refines the mathematical
solution. Check the box to enable the refinement process, then
specify the order by clicking the arrows.
The 0-order results in a simple shift to both image X and Y
coordinates. The 1st-order is an affine transformation. The 2nd-
order results in a second order transformation, and the 3rd-order in
a third order transformation. Usually, a 0 or 1st-order is sufficient
to reduce error not addressed by the rational function model (RPC
file).
After the Parameters tab on the IKONOS Properties dialog, there is
the Chipping tab. The Chipping process allows the RPCs to be used for an image chip rather than the full, original image from which the chip was derived. This is made possible by specifying an affine relationship, in pixel coordinates, between the chip and the full, original image.
IKONOS Properties Chipping tab
The Chipping tab is the same for IKONOS, QuickBird, and RPC
Properties.
On the Chipping tab you are given the choice of Scale and Offset or
Arbitrary Affine as your chipping parameters. The dialog will
change depending on which chipping parameter you choose. Scale and Offset is the simpler of the two. The formulas for
calculating the affine using scale and offset are listed on the dialog.
X and Y correspond to the pixel coordinates for the full, original
image.
The following is an example of the Scale and Offset dialog on the
Chipping tab:
IKONOS Chipping tab using Scale and Offset
• Row Offset—This value corresponds to value f, an offset
value. In the absence of header data, this value defaults to 0.
• Row Scale—This value corresponds to value e, a scale factor
that is also used in rotation. In the absence of header data, this
value defaults to 1.
• Column Offset—This value corresponds to value c, an offset
value. In the absence of header data, this value defaults to 0.
• Column Scale—This value corresponds to value a, a scale
factor that is also used in rotation. In the absence of header
data, this value defaults to 1.
The Arbitrary Affine formulas are listed on the dialog when you
choose that option. In the formulas, x' (x prime) and y' (y prime) correspond to the pixel coordinates in the chip with which you are currently working. Values for these variables are either
obtained from the header data of the chip, or they default to the
predetermined values described above. Also under the Chipping
tab, you’ll find a box for Full Row Count and Full Column Count.
For Full Row Count, if the chip header contains the appropriate
data, this value is the row count of the full, original image. If the
header count is absent, this value corresponds to the row count of
the chip. For Full Column Count, if the chip header contains the
appropriate data, this value is the column count of the full, original
image. If the header count is absent, the value corresponds to the
column count of the chip.
The following is an example of the Arbitrary Affine dialog on the
Chipping tab:
IKONOS Chipping tab using Arbitrary Affine
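The relationship between chip and full-image pixel coordinates can be written as a small affine mapping built from the scale and offset values above. The exact formulas appear on the dialog itself; the sketch below assumes the usual form X = a·x' + c and Y = e·y' + f for the Scale and Offset case, with invented numbers, purely as an illustration.

# Map chip pixel coordinates (x_prime, y_prime) to full-image coordinates (X, Y)
# for the Scale and Offset case (no rotation terms). The affine form and the
# values below are assumptions made for this illustration.
column_scale, column_offset = 1.0, 2048.0   # a and c
row_scale, row_offset = 1.0, 1024.0         # e and f

def chip_to_full(x_prime, y_prime):
    full_x = column_scale * x_prime + column_offset
    full_y = row_scale * y_prime + row_offset
    return full_x, full_y

print(chip_to_full(10.0, 20.0))   # (2058.0, 1044.0)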
QuickBird
QuickBird Properties allows you to rectify images captured with
the QuickBird satellite. Like IKONOS, QuickBird requires the use
of an RPC file to describe the relationship between the image and
the earth’s surface at the time of image capture.
The QuickBird satellite was launched in October of 2001. Its orbit
has an altitude of 450 kilometers, a 93.5 minute orbit time, and a
10:30 A.M. equator crossing time. The inclination is 97.2 degrees
sun-synchronous, and the nominal swath width is 16.5 kilometers
at nadir. The sensor has both panchromatic and multispectral
capabilities. The dynamic range is 11 bits per pixel for both
panchromatic and multispectral. The panchromatic bandwidth is
450-900 nanometers. The multispectral bands are as follows:
QuickBird Bands and Wavelengths

Band       Wavelength (microns)
1, Blue    0.45 to 0.52 µm
2, Green   0.52 to 0.60 µm
3, Red     0.63 to 0.69 µm
4, NIR     0.76 to 0.90 µm

Just like IKONOS, QuickBird has a Parameters tab as well as a Chipping tab on its Properties dialog. The same information discussed in the IKONOS section applies to both tabs.
RPC
RPC stands for rational polynomial coefficients. When you choose
it, the function allows you to specify the associated RPC file to be
used in Geocorrection. RPC Properties in Image Analysis for
ArcGIS allows you to work with NITF data.
NITF stands for National Imagery Transmission Format Standard.
NITF data is designed to pack numerous image compositions with
complete annotation, text attachments, and imagery-associated
metadata.
The RPC file associated with the image contains rational function
polynomial coefficients that are generated by the data provider
based on the position of the satellite at the time of image capture.
These RPCs can be further refined by using GCPs. This file should
be located in the same directory as the image or images you intend
to use in orthorectification.
Just like IKONOS and QuickBird, the RPC dialog contains the
Parameters and Chipping tabs. These work the same way in all
three model properties.
Landsat
The Landsat dialog is used for orthorectification of any Landsat
image that uses TM or MSS as its sensor. The model is derived by
space resection based on collinearity equations. The elevation
information is required in the model for removing relief
displacement.
Landsat 1-5
In 1972, the National Aeronautics and Space Administration
(NASA) initiated the first civilian program specializing in the
acquisition of remotely sensed digital satellite data. The first
system was called ERTS (Earth Resources Technology Satellites),
and later renamed to Landsat. There have been several Landsat
satellites launched since 1972. Landsats 1, 2, and 3 are no longer
operating, but Landsats 4 and 5 are still in orbit gathering data.
Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and
Landsats 4 and 5 collect MSS and TM data.
MSS
The MSS sensor covers a ground area of approximately 185 × 170 km per scene, from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS
data is widely used for general geologic studies as well as
vegetation inventories.
The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m
IFOV (instantaneous field of view). A typical scene contains
approximately 2340 rows and 3240 columns. The radiometric
resolution is 6-bit, but it is stored as 8-bit (Lillesand and Kiefer
1987).
Detectors record electromagnetic radiation (EMR) in four bands:
• Bands 1 and 2 are in the visible portion of the spectrum and are
useful in detecting cultural features, such as roads. These bands
also show detail in water.
• Bands 3 and 4 are in the near-infrared portion of the spectrum
and can be used in land/water and vegetation discrimination.
TM
The TM scanner is a multispectral scanning system much like the
MSS, except that the TM sensor records reflected/emitted
electromagnetic energy from the visible, reflective-infrared,
middle-infrared, and thermal-infrared regions of the spectrum. TM
has higher spatial, spectral, and radiometric resolution than MSS.
TM has a swath width of approximately 185 km from a height of
approximately 705 km. It is useful for vegetation type and health
determination, soil moisture, snow and cloud differentiation, rock
type discrimination, and so on.
The spatial resolution of TM is 28.5 × 28.5 m for all bands except
the thermal (band 6), which has a spatial resolution of 120 × 120 m.
The larger pixel size of this band is necessary for adequate signal
strength. However, the thermal band is resampled to 28.5 × 28.5 m
to match the other bands. The radiometric resolution is 8-bit,
meaning that each pixel has a possible range of data values from 0
to 255.
Detectors record EMR in seven bands:
• Bands 1, 2, and 3 are in the visible portion of the spectrum and
are useful in detecting cultural features such as roads. These
bands also show detail in water.
• Bands 4, 5, and 7 are in the reflective-infrared portion of the
spectrum and can be used in land/water discrimination.
• Band 6 is in the thermal portion of the spectrum and is used for
thermal mapping (Jensen 1996; Lillesand and Kiefer 1987).
TM Bands and Wavelengths

Band 1, Blue (0.45 to 0.52 µm): For mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.

Band 2, Green (0.52 to 0.60 µm): Corresponds to the green reflectance of healthy vegetation. Also useful for cultural feature identification.

Band 3, Red (0.63 to 0.69 µm): For discriminating between many plant species. It is also useful for determining soil boundary and geological boundary delineations, as well as cultural features.

Band 4, NIR (0.76 to 0.90 µm): Especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Band 5, MIR (1.55 to 1.75 µm): Sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. This is also one of the few bands that can be used to discriminate between clouds, snow, and ice.

Band 6, TIR (10.40 to 12.50 µm): For vegetation and crop stress detection, heat intensity, insecticide applications, and for locating thermal pollution. It can also be used to locate geothermal activity.

Band 7, MIR (2.08 to 2.35 µm): Important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.
Landsat MSS vs. Landsat TM (figure): MSS has a 57 m × 79 m pixel and a radiometric resolution of 0 to 127; TM has a 30 m × 30 m pixel, 7 bands, and a radiometric resolution of 0 to 255.
Band Combinations for Displaying TM Data
Different combinations of the TM bands can be displayed to create
different composite effects. The order of the bands corresponds to
the Red, Green, and Blue (RGB) color guns of the monitor. The
following combinations are commonly used to display images:
• Bands 3, 2, 1 create a true color composite. True color means
that objects look as they would to the naked eye—similar to a
color photograph.
• Bands 4, 3, 2 create a false color composite. False color
composites appear similar to an infrared photograph where
objects do not have the same colors or contrasts as they would
naturally. For instance, in an infrared image, vegetation
appears red, water appears navy or black, etc.
• Bands 5, 4, 2 create a pseudo color composite. (A thematic
image is also a pseudo color image.) In pseudo color, the colors
do not reflect the features in natural colors. For instance, roads
may be red, water yellow, and vegetation blue.
Different color schemes can be used to bring out or enhance the
features under study. These are by no means all of the useful
combinations of these seven bands. The bands to be used are
determined by the particular application.
Landsat 7
The Landsat 7 satellite, launched in 1999, uses Enhanced Thematic
Mapper Plus (ETM+) to observe the earth. The capabilities new to
Landsat 7 include the following:
• 15 m spatial resolution panchromatic band
• 5% radiometric calibration with full aperture
• 60 m spatial resolution thermal IR channel
The primary receiving station for Landsat 7 data is located in Sioux
Falls, South Dakota at the USGS EROS Data Center (EDC). ETM+
data is transmitted using X-band direct downlink at a rate of 150
Mbps. Landsat 7 is capable of capturing scenes without cloud
obstruction, and the receiving stations can obtain this data in real
time using the X-band. Stations located around the globe, however,
are only able to receive data for the portion of the ETM+ ground
track where the satellite can be seen by the receiving station.
Landsat 7 data types
One type of data available from Landsat 7 is browse data. Browse
data is a lower resolution image for determining image location,
quality and information content. The other type of data is metadata,
which is descriptive information on the image. This information is
available via the internet within 24 hours of being received by the
primary ground station. Moreover, EDC processes the data to Level
0r. This data has been corrected for scan direction and band
alignment errors only. Level 1G data, which is radiometrically and geometrically corrected, is also available.
Landsat 7 specifications
Information about the spectral range and ground resolution of the
bands of the Landsat 7 satellite is provided in the following table:
Landsat 7 Characteristics

Band Number        Wavelength (microns)    Resolution (m)
1                  0.45 to 0.52 µm         30
2                  0.52 to 0.60 µm         30
3                  0.63 to 0.69 µm         30
4                  0.76 to 0.90 µm         30
5                  1.55 to 1.75 µm         30
6                  10.4 to 12.5 µm         60
7                  2.08 to 2.35 µm         30
Panchromatic (8)   0.50 to 0.90 µm         15
Landsat 7 has a swath width of 185 kilometers. The repeat coverage
interval is 16 days, or 233 orbits. The satellite orbits the earth at 705
kilometers.
The Landsat dialog
The Landsat Properties dialog in Geocorrection Properties has the
General, Links, and Elevation tabs already discussed in this
chapter. It also has a Parameters tab, which is different from the
ones discussed so far. The Parameters tab has areas where you
select the type of sensor used to capture your data, the Scene
Coverage (if you choose Quarter Scene you also choose the
quadrant), the Number of Iterations, and the Background.
Glossary
Terms
abstract symbol
An annotation symbol that has a geometric shape, such as a circle, square, or triangle. These
symbols often represent amounts that vary from place to place, such as population density, yearly
rainfall, and so on.
accuracy assessment
The comparison of a classification to geographical data that is assumed to be true. Usually, the
assumed true data is derived from ground truthing.
American Standard Code for Information Interchange (ASCII)
A basis of character sets...to convey some control codes, space, numbers, most basic punctuation,
and unaccented letters a-z and A-Z.
analysis mask
An option that uses a raster dataset in which all cells of interest have a value and all other cells are
no data. Analysis mask lets you perform analysis on a selected set of cells.
ancillary data
The data, other than remotely sensed data, that is used to aid in the classification process.
annotation
The explanatory material accompanying an image or a map. Annotation can consist of lines, text,
polygons, ellipses, rectangles, legends, scale bars, and any symbol that denotes geographical
features.
AOI
See area of interest.
a priori
Already or previously known.
area
A measurement of a surface.
area of interest
(AOI) A point, line, or polygon that is selected as a training sample
or as the image area to be used in an operation.
ASCII
See American Standard Code for Information Interchange.
aspect
The orientation, or the direction that a surface faces, with respect to
the directions of the compass: north, south, east, west.
attribute
The tabular information associated with a raster or vector layer.
average
The statistical mean; the sum of a set of values divided by the
number of values in the set.
band
A set of data file values for a specific portion of the electromagnetic
spectrum of reflected light or emitted heat (red, green, blue, near-
infrared, infrared, thermal, and so on) or some other user-defined
information created by combining or enhancing the original bands,
or creating new bands from other sources. Sometimes called
channel.
bilinear interpolation
Uses the data file values of four pixels in a 2 × 2 window to
calculate an output value with a bilinear function.
bin function
A mathematical function that establishes the relationship between
data file values and rows in a descriptor table.
bins
Ordered sets of pixels. Pixels are sorted into a specified number of
bins. The pixels are then given new values based upon the bins to
which they are assigned.
border
On a map, a line that usually encloses the entire map, not just the
image area as does a neatline.
boundary
A neighborhood analysis technique that is used to detect
boundaries between thematic classes.
brightness value
The quantity of a primary color (red, green, blue) to be output to a
pixel on the display device. Also called intensity value, function
memory value, pixel value, display value, and screen value.
buffer zone
A specific area around a feature that is isolated for or from further
analysis. For example, buffer zones are often generated around
streams in site assessment studies so that further analyses exclude
these areas that are often unsuitable for development.
Cartesian
A coordinate system in which data are organized on a grid and
points on the grid are referenced by their X,Y coordinates.
camera properties
Camera properties are for the orthorectification of any image that
uses a camera for its sensor. The model is derived by space
resection based on collinearity equations. The elevation
information is required in the model for removing relief
displacement.
categorize
The process of choosing distinct classes to divide your image into.
cell
1. A 1° × 1° area of coverage. DTED (Digital Terrain Elevation Data)
are distributed in cells. 2. A pixel; grid cell.
cell size
The area that one pixel represents, measured in map units. For
example, one cell in the image may represent an area 30’ × 30’ on
the ground. Sometimes called the pixel size.
checkpoint analysis
The act of using check points to independently verify the degree of
accuracy of a triangulation.
circumcircle
A triangle’s circumscribed circle; the circle that passes through
each of the triangle’s three vertices.
class
A set of pixels in a GIS file that represents areas that share some
condition. Classes are usually formed through classification of a
continuous raster layer.
class value
A data file value of a thematic file that identifies a pixel as
belonging to a particular class.
classification
The process of assigning the pixels of a continuous raster image to
discrete categories.
classification accuracy table
For accuracy assessment, a list of known values of reference pixels,
supported by some ground truth or other a priori knowledge of the
true class, and a list of the classified values of the same pixels, from
a classified file to be tested.
classification scheme (or classification system)
A set of target classes. The purpose of such a scheme is to provide
a framework for organizing and categorizing the information that
can be extracted from the data.
clustering
Unsupervised training; the process of generating signatures based
on the natural groupings of pixels in image data when they are
plotted in spectral space.
clusters
The natural groupings of pixels when plotted in spectral space.
coefficient
One number in a matrix, or a constant in a polynomial expression.
collinearity
A nonlinear mathematical model that photogrammetric
triangulation is based upon. Collinearity equations describe the
relationship among image coordinates, ground coordinates, and
orientation parameters.
contiguity analysis
A study of the ways in which pixels of a class are grouped together
spatially. Groups of contiguous pixels in the same class, called
raster regions, or clumps, can be identified by their sizes and
manipulated.
continuous
A term used to describe raster data layers that contain quantitative
and related values. See continuous data.
continuous data
A type of raster data that are quantitative (measuring a
characteristic) and have related, continuous values, such as
remotely sensed images (Landsat, SPOT, and so on).
contrast stretch
The process of reassigning a range of values to another range,
usually according to a linear function. Contrast stretching is often
used in displaying continuous raster layers, since the range of data
file values is usually much narrower than the range of brightness
values on the display device.
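For example, a simple min/max linear stretch to an 8-bit display range is often written as

$DN_{out} = \dfrac{DN_{in} - \min}{\max - \min} \times 255$

where min and max are the smallest and largest data file values in the band. This is one common form, not necessarily the exact formula applied by the software.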
convolution filtering
The process of averaging small sets of pixels across an image. Used
to change the spatial frequency characteristics of an image.
convolution kernel
A matrix of numbers that is used to average the value of each pixel
with the values of surrounding pixels in a particular way. The
numbers in the matrix serve to weight this average toward
particular pixels.
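A minimal Python/NumPy sketch of this weighted average for one pixel and a 3 × 3 kernel (illustrative only; border handling is omitted):

    import numpy as np

    def convolve_pixel(image, row, col, kernel):
        # Weight the pixel and its 3 x 3 neighborhood by the kernel values.
        window = image[row - 1:row + 2, col - 1:col + 2]
        total = kernel.sum()
        divisor = total if total != 0 else 1  # zero-sum (edge) kernels divide by 1
        return (window * kernel).sum() / divisor

    low_pass = np.ones((3, 3))  # a simple smoothing kernel weights all pixels equally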
coordinate system
A method of expressing location. In two-dimensional coordinate
systems, locations are expressed by a column and row, also called
X and Y.
correlation threshold
A value used in rectification to determine whether to accept or
discard GCPs. The threshold is an absolute value threshold ranging
from 0.000 to 1.000.
correlation windows
Windows that consist of a local neighborhood of pixels.
corresponding GCPs
The GCPs that are located in the same geographic location as the
selected GCPs, but are selected in different files.
covariance
Measures the tendencies of data file values for the same pixel, but
in different bands, to vary with each other in relation to the means
of their respective bands. These bands must be linear. Covariance
is defined as the average product of the differences between the
data file values in each band and the mean of each band.
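Written as a formula (following this definition), the covariance of bands j and k over N pixels is

$\mathrm{Cov}_{jk} = \frac{1}{N}\sum_{i=1}^{N}(x_{ij}-\mu_j)(x_{ik}-\mu_k)$

where $x_{ij}$ is the data file value of pixel i in band j and $\mu_j$ is the mean of band j. (Some texts divide by N - 1 instead of N.)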
covariance matrix
A square matrix that contains all of the variances and covariances
within the bands in a data file.
cubic convolution
Uses the data file values of sixteen pixels in a 4 × 4 window to
calculate an output value with a cubic function.
data
1. In the context of remote sensing, a computer file containing
numbers that represent a remotely sensed image, and can be
processed to display that image. 2. A collection of numbers, strings,
or facts that requires some processing before it is meaningful.
database
A relational data structure usually used to store tabular information.
Examples of popular databases include SYBASE, dBASE, Oracle,
INFO, etc.
data file
A computer file that contains numbers that represent an image.
data file value
Each number in an image file. Also called file value, image file
value, DN, brightness value, pixel.
decision rule
An equation or algorithm that is used to classify image data after
signatures have been created. The decision rule is used to process
the data file values based upon the signature statistics.
density
A neighborhood analysis technique that outputs the number of
pixels that have the same value as the analyzed pixel in a user-
specified window.
digital elevation model (DEM)
Continuous raster layers in which data file values represent
elevation. DEMs are available from the USGS at 1:24,000 and
1:250,000 scale, and can be produced with terrain analysis
programs.
digital terrain model (DTM)
A discrete expression of topography in a data array, consisting of a
group of planimetric coordinates (X,Y) and the elevations of the
ground points and breaklines.
dimensionality
In classification, dimensionality refers to the number of layers being
classified. For example, a data file with three layers is said to be
three dimensional.
divergence
A statistical measure of distance between two or more signatures.
Divergence can be calculated for any combination of bands used in
the classification; bands that diminish the results of the
classification can be ruled out.
diversity
A neighborhood analysis technique that outputs the number of
different values within a user-specified window.
edge detector
A convolution kernel, which is usually a zero-sum kernel, that
smooths out or zeros out areas of low spatial frequency and creates
a sharp contrast where spatial frequency is high. High spatial
frequency is at the edges between homogeneous groups of pixels.
edge enhancer
A high-frequency convolution kernel that brings out the edges
between homogeneous groups of pixels. Unlike an edge detector, it
only highlights edges, it does not necessarily eliminate other
features.
enhancement
The process of making an image more interpretable for a particular
application. Enhancement can make important features of raw,
remotely sensed data more interpretable to the human eye.
extension
The three letters after the period in a file name that usually identify
the type of file.
extent
1. The image area to be displayed in a View. 2. The area of the
earth’s surface to be mapped.
feature collection
The process of identifying, delineating, and labeling various types
of natural and human-made phenomena from remotely-sensed
images.
feature extraction
The process of studying and locating areas and objects on the
ground and deriving useful information from images.
feature space
An abstract space that is defined by spectral units (such as an
amount of electromagnetic radiation).
fiducial center
The center of an aerial photo.
fiducials
Four or eight reference markers fixed on the frame of an aerial
metric camera and visible in each exposure that are used to
compute the transformation from data file to image coordinates.
file coordinates
The location of a pixel within the file in x,y coordinates. The upper
left file coordinate is usually 0,0.
filtering
The removal of spatial or spectral features for data enhancement.
Convolution filtering is one method of spatial filtering. Some texts
may use the terms filtering and spatial filtering synonymously.
focal
The process of performing one of several analyses on data values
in an image file, using a process similar to convolution filtering.
GCP matching
For image to image rectification, a GCP selected in one image is
precisely matched to its counterpart in the other image using the
spectral characteristics of the data and the transformation matrix.
geocorrection
The process of rectifying remotely sensed data that has distortions
due to a sensor or the curvature of the earth.
geographic information system (GIS)
A unique system designed for a particular application that stores,
enhances, combines, and analyzes layers of geographic data to
produce interpretable information. A GIS may include computer
images, hardcopy maps, statistical data, and any other data needed
for a study, as well as computer software and human knowledge.
GISs are used for solving complex geographic planning and
management problems.
georeferencing
The process of assigning map coordinates to image data and
resampling the pixels of the image to conform to the map projection
grid.
ground control point (GCP)
Specific pixel in image data for which the output map coordinates
(or other output coordinates) are known. GCPs are used for
computing a transformation matrix, for use in rectifying an image.
high frequency kernel
A convolution kernel that increases the spatial frequency of an
image. Also called a high-pass kernel.
histogram
A graph of data distribution, or a chart of the number of pixels that
have each possible data file value. For a single band of data, the
horizontal axis of a histogram graph is the range of all possible data
file values. The vertical axis is the number of pixels that have each
data value.
histogram equalization
The process of redistributing pixel values so that there are
approximately the same number of pixels with each value within a
range. The result is a nearly flat histogram.
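A minimal Python/NumPy sketch of the idea (illustrative only; the software's exact binning may differ):

    import numpy as np

    def equalize(band, levels=256):
        # Build a lookup table from the cumulative histogram so that each
        # output level holds roughly the same number of pixels.
        hist, edges = np.histogram(band, bins=levels)
        cdf = hist.cumsum() / hist.sum()
        lut = np.round(cdf * (levels - 1)).astype(np.uint8)
        bins = np.clip(np.digitize(band, edges[:-1]) - 1, 0, levels - 1)
        return lut[bins]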
histogram matching
The process of determining a lookup table that converts the
histogram of one band of an image or one color gun to resemble
another histogram.
hue
A component of IHS (intensity, hue, saturation) that is
representative of the color or dominant wavelength of the pixel. It
varies from 0 to 360. Blue = 0 (and 360), magenta = 60, red = 120,
yellow = 180, green = 240, and cyan = 300.
IKONOS properties
Use the IKONOS Properties geocorrection dialog to perform
orthorectification on images gathered with the IKONOS satellite.
The IKONOS satellite orbits at an altitude of 423 miles, or 681
kilometers. The revisit time is 2.9 days at 1 meter resolution, and
1.5 days at 1.5 meter resolution.
image data
Digital representations of the earth that can be used in computer
image processing and GIS analyses.
image file
A file containing raster image data.
image matching
The automatic acquisition of corresponding image points on the
overlapping area of two images.
image processing
The manipulation of digital image data, including (but not limited
to) enhancement, classification, and rectification operations.
indices
The process used to create output images by mathematically
combining the DN values of different bands.
IR
Infrared portion of the electromagnetic spectrum.
island polygons
When using Seed Tool, island polygons represent areas in the
polygon that have differing characteristics from the areas in the
larger polygon. You have the option to use the island polygons
feature or to turn it off when using Seed Tool.
ISODATA (Iterative Self-Organizing Data Analysis
Technique)
A method of clustering that uses spectral distance as in the
sequential method, but iteratively classifies the pixels, redefines the
criteria for each class, and classifies again so that the spectral
distance patterns in the data gradually emerge.
Landsat
A series of earth-orbiting satellites, operated by EOSAT, that gather
MSS and TM imagery.
layer
1. A band or channel of data. 2. A single band or set of three bands
displayed using the red, green, and blue color guns. 3. A component
of a GIS database that contains all of the data for one theme. A layer
consists of a thematic image file, and may also include attributes.
linear
A description of a function that can be graphed as a straight line or
a series of lines. Linear equations (transformations) can generally
be expressed in the form of the equation of a line or plane. Also
called 1st-order.
linear contrast stretch
An enhancement technique that outputs new values at regular
intervals.
linear transformation
A 1st-order rectification. A linear transformation can change
location in X and/or Y, scale in X and/or Y, skew in X and/or Y,
and rotation.
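In equation form, a 1st-order transformation of source coordinates (x, y) to rectified coordinates (x', y') is commonly written as

$x' = a_0 + a_1 x + a_2 y, \qquad y' = b_0 + b_1 x + b_2 y$

where the six coefficients together account for shift, scale, skew, and rotation.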
lookup table (LUT)
An ordered set of numbers that is used to perform a function on a
set of input values. To display or print an image, lookup tables
translate data file values into brightness values.
low frequency kernel
A convolution kernel that decreases spatial frequency. Also called
low-pass kernel.
majority
A neighborhood analysis technique that outputs the most common
value of the data file values in a user-specified window.
map projection
A method of representing the three-dimensional spherical surface
of a planet on a two-dimensional map surface. All map projections
involve the transfer of latitude and longitude onto an easily
flattened surface.
maximum
A neighborhood analysis technique that outputs the greatest value
of the data file values in a user-specified window.
maximum likelihood
A classification decision rule based on the probability that a pixel
belongs to a particular class. The basic equation assumes that these
probabilities are equal for all classes, and that the input bands have
normal distributions.
mean
1. The statistical average; the sum of a set of values divided by the
number of values in the set. 2. A neighborhood analysis technique
that outputs the mean value of the data file values in a user-
specified window.
median
1. The central value in a set of data such that an equal number of
values are greater than and less than the median. 2. A neighborhood
analysis technique that outputs the median value of the data file
values in a user-specified window.
minimum
A neighborhood analysis technique that outputs the least value of
the data file values in a user-specified window.
minimum distance
A classification decision rule that calculates the spectral distance
between the measurement vector for each candidate pixel and the
mean vector for each signature. Also called spectral distance.
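A minimal Python/NumPy sketch of the decision rule (the array values and class labels are illustrative, not part of the software):

    import numpy as np

    def minimum_distance_class(pixel, class_means):
        # pixel: measurement vector, one data file value per band
        # class_means: dict mapping class name to its mean vector
        distances = {name: np.linalg.norm(np.asarray(pixel) - np.asarray(mean))
                     for name, mean in class_means.items()}
        return min(distances, key=distances.get)

    # e.g. minimum_distance_class([42, 87, 66], {"water": [30, 40, 20],
    #                                            "forest": [45, 90, 70]})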
minority
A neighborhood analysis technique that outputs the least common
value of the data file values in a user-specified window.
modeling
The process of creating new layers from combining or operating
upon existing layers. Modeling allows the creation of new classes
from existing classes and the creation of a small set of images, or a
single image, which, at a glance, contains many types of
information about a scene.
mosaicking
The process of piecing together images side by side to create a
larger image.
multispectral classification
The process of sorting pixels into a finite number of individual
classes, or categories of data, based on data file values in multiple
bands.
multispectral imagery
Satellite imagery with data recorded in two or more bands.
multispectral scanner (MSS)
Landsat satellite data acquired in four bands with a spatial
resolution of 57 × 79 meters.
nadir
The area on the ground directly beneath a scanner’s detectors.
NDVI
See Normalized Difference Vegetation Index.
nearest neighbor
A resampling method in which the output data file value is equal to
the input pixel that has coordinates closest to the retransformed
coordinates of the output pixel.
neighborhood analysis
Any image processing technique that takes surrounding pixels into
consideration, such as convolution filtering and scanning.
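A minimal Python/NumPy sketch of a single neighborhood (focal) statistic, illustrative only; border pixels are ignored for brevity:

    import numpy as np

    def focal(image, row, col, size, statistic):
        # Apply a statistic (np.max, np.min, np.median, np.sum, and so on)
        # to the data file values in a size x size window centered on a pixel.
        half = size // 2
        window = image[row - half:row + half + 1, col - half:col + half + 1]
        return statistic(window)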
no data
A value assigned to pixels that you do not want included in a
classification or function; pixels assigned NoData are treated as
having no meaningful value. Images georeferenced to non-rectangular
extents need a NoData value for display even if they are not
classified, and the values stored in NoData pixels serve only as
placeholders.
non-directional
An edge detection process that uses the Sobel and Prewitt filters.
These filters use orthogonal kernels that are convolved separately
with the original image and then combined.
nonlinear
Describing a function that cannot be expressed as the graph of a
line or in the form of the equation of a line or plane. Nonlinear
equations usually contain expressions with exponents. Second-
order (2nd-order) or higher-order equations and transformations
are nonlinear.
nonlinear transformation
A 2nd-order or higher rectification.
nonparametric signature
A signature for classification that is based on polygons or
rectangles that are defined in the feature space image for the image
file. There is no statistical basis for a nonparametric signature; it is
simply an area in a feature space image.
normalized difference vegetation index (NDVI)
The formula for NDVI is (IR - R) / (IR + R), where IR stands for the
infrared portion of the electromagnetic spectrum and R stands for
the red portion of the electromagnetic spectrum. NDVI finds areas
of vegetation in imagery.
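A minimal Python/NumPy sketch of the calculation, assuming NumPy arrays of the red and infrared band values (array names are illustrative):

    import numpy as np

    def ndvi(infrared, red):
        # infrared, red: NumPy arrays of data file values for the two bands
        infrared = infrared.astype(float)
        red = red.astype(float)
        # Output ranges from -1 to 1; higher values indicate denser,
        # healthier vegetation.
        return (infrared - red) / (infrared + red + 1e-10)

    # e.g. ndvi(np.array([[200, 60]]), np.array([[50, 55]]))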
observation
In photogrammetric triangulation, a grouping of the image
coordinates for a GCP.
off-nadir
Any point that is not directly beneath a scanner’s detectors, but off
to an angle. The SPOT scanner allows off-nadir viewing.
orthorectification
A form of rectification that corrects for terrain displacement and
can be used if a DEM of the study area is available.
overlay
1. A function that creates a composite file containing either the
minimum or the maximum class values of the input files. Overlay
sometimes refers generically to a combination of layers. 2. The
process of displaying a classified file over the original image to
inspect the classification.
panchromatic imagery
Single-band or monochrome satellite imagery.
parallelepiped
1. A classification decision rule in which the data file values of the
candidate pixel are compared to upper and lower limits. 2. The
limits of a parallelepiped classification, especially when graphed as
rectangles.
parameter
1. Any variable that determines the outcome of a function or
operation. 2. The mean and standard deviation of data, which are
sufficient to describe a normal curve.
parametric signature
A signature that is based on statistical parameters (such as mean
and covariance matrix) of the pixels that are in the training sample
or cluster.
pattern recognition
The science and art of finding meaningful patterns in data, which
can be extracted through classification.
piecewise linear contrast stretch
An enhancement technique used to enhance a specific portion of
data by dividing the lookup table into three sections: low, middle,
and high.
pixel
Abbreviated from picture element; the smallest part of a picture
(image).
pixel depth
The number of bits required to store all of the data file values in a
file. For example, data with a pixel depth of 8, or 8-bit data, have
256 values ranging from 0-255.
pixel size
The physical dimension of a single light-sensitive element (13 × 13
microns).
polygon
A set of closed line segments defining an area.
polynomial
A mathematical expression consisting of variables and coefficients.
A coefficient is a constant that is multiplied by a variable in the
expression.
principal components analysis (PCA)
1. A method of data compression that allows redundant data to be
compressed into fewer bands (Jensen 1996; Faust 1989). 2. The
process of calculating principal components and outputting
principal component bands. It allows redundant data to be
compacted into fewer bands (that is the dimensionality of the data
is reduced).
principal point
The point in the image plane onto which the perspective center is
projected; it lies directly beneath the perspective center.
profile
A row of data file values from a DEM or DTED file. The profiles
of DEM and DTED run south to north (that is the first pixel of the
record is the southernmost pixel).
pushbroom
A scanner in which all scanning parts are fixed, and scanning is
accomplished by the forward motion of the scanner, such as the
SPOT scanner.
QuickBird
The QuickBird model requires the use of rational polynomial
coefficients (RPCs) to describe the relationship between the image
and the earth's surface at the time of image capture. By using
QuickBird Properties, you can perform orthorectification on
images gathered with the QuickBird satellite.
radar data
The remotely sensed data that are produced when a radar
transmitter emits a beam of micro or millimeter waves, the waves
reflect from the surfaces they strike, and the backscattered radiation
is detected by the radar system’s receiving antenna, which is tuned
to the frequency of the transmitted waves.
radiometric correction
The correction of variations in data that are not caused by the object
or scene being scanned, such as scanner malfunction and
atmospheric interference.
radiometric enhancement
An enhancement technique that deals with the individual values of
pixels in an image.
radiometric resolution
The dynamic range, or number of possible data file values, in each
band. This is referred to by the number of bits into which the
recorded energy is divided. See pixel depth.
rank
A neighborhood analysis technique that outputs the number of
values in a user-specified window that are less than the analyzed
value.
raster data
Data that are organized in a grid of columns and rows in which each
cell (pixel) stores a value. Raster data usually represent a planar
graph or geographic area; both continuous and thematic image layers
are raster data.
recoding
The assignment of new values to one or more classes.
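For example, a tiny pure-Python sketch with hypothetical class values, merging class 4 into class 3:

    thematic_layer = [[1, 2, 2],
                      [3, 4, 2],
                      [1, 1, 4]]                # hypothetical class values
    recode_table = {1: 1, 2: 2, 3: 3, 4: 3}     # class 4 is recoded to class 3
    new_layer = [[recode_table[v] for v in row] for row in thematic_layer]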
rectification
The process of making image data conform to a map projection
system. In many cases, the image must also be oriented so that the
north direction corresponds to the top of the image.
rectified coordinates
The coordinates of a pixel in a file that has been rectified, which are
extrapolated from the GCPs. Ideally, the rectified coordinates for
the GCPs are exactly equal to the reference coordinates. Because
there is often some error tolerated in the rectification, this is not
always the case.
reference coordinates
The coordinates of the map or reference image to which a source
(input) image is being registered. GCPs consist of both input
coordinates and reference coordinates for each point.
reference pixels
In classification accuracy assessment, pixels for which the correct
GIS class is known from ground truth or other data. The reference
pixels can be selected by you, or randomly selected.
reference plane
In a topocentric coordinate system, the tangential plane at the
center of the image on the earth ellipsoid, on which the three
perpendicular coordinate axes are defined.
reproject
Transforms raster image data from one map projection to another.
resampling
The process of extrapolating data file values for the pixels in a new
grid when data have been rectified or registered to another image.
resolution
A level of precision in data.
resolution merging
The process of sharpening a lower-resolution multiband image by
merging it with a higher-resolution monochrome image.
RGB
Red, green, blue. The primary additive colors that are used on most
display hardware to display imagery.
RGB clustering
A clustering method for 24-bit data (three 8-bit bands) that plots
pixels in three-dimensional spectral space and divides that space
into sections that are used to define clusters. The output color
scheme of an RGB-clustered image resembles that of the input file.
RMS error
The distance between the input (source) location of the GCP and
the retransformed location for the same GCP. RMS error is
calculated with a distance equation.
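For a single GCP, the distance equation has the familiar form

$RMS = \sqrt{(x_r - x_i)^2 + (y_r - y_i)^2}$

where $(x_i, y_i)$ are the input (source) coordinates and $(x_r, y_r)$ are the retransformed coordinates of the same GCP.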
RPC properties
The RPC Properties dialog uses rational polynomial coefficients to
describe the relationship between the image and the earth's surface
at the time of image capture. You can specify the associated RPC
file to be used in your geocorrection.
rubber sheeting
The application of nonlinear rectification (2nd-order or higher).
saturation
A component of IHS that represents the purity of color and also
varies linearly from 0 to 1.
scale
1. The ratio of distance on a map as related to the true distance on
the ground. 2. Cell size. 3. The processing of values through a
lookup table.
scanner
The entire data acquisition system such as the Landsat scanner or
the SPOT panchromatic scanner.
seed tool
An Image Analysis for ArcGIS feature that automatically generates
feature layer polygons of similar spectral value.
shapefile
A vector format that contains spatial data. Shapefiles have the .shp
extension.
signature
A set of statistics that defines a training sample or cluster. The
signature is used in a classification process. Each signature
corresponds to a GIS class that is created from the signatures with
a classification decision rule.
source coordinates
In the rectification process, the input coordinates.
spatial enhancement
The process of modifying the values of pixels in an image relative
to the pixels that surround them.
spatial frequency
The difference between the highest and lowest values of a
contiguous set of pixels.
spatial resolution
A measure of the smallest object that can be resolved by the sensor,
or the area on the ground represented by each pixel.
speckle noise
The light and dark pixel noise that appears in radar data.
spectral distance
The distance in spectral space computed as Euclidean distance in
n-dimensions, where n is the number of bands.
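Written out, the Euclidean form is

$D = \sqrt{\sum_{b=1}^{n}(d_b - e_b)^2}$

where n is the number of bands and $d_b$ and $e_b$ are the data file values of the two pixels (or vectors) in band b.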
spectral enhancement
The process of modifying the pixels of an image based on the
original values of each pixel, independent of the values of
surrounding pixels.
spectral resolution
A measure of the specific wavelength intervals of the
electromagnetic spectrum that a sensor can record.
spectral space
An abstract space that is defined by spectral units (such as an
amount of electromagnetic radiation). The notion of spectral space
is used to describe enhancement and classification techniques that
compute the spectral distance between n-dimensional vectors,
where n is the number of bands in the data.
SPOT
SPOT satellite sensors operate in two modes, multispectral and
panchromatic. SPOT is often referred to as the pushbroom scanner,
meaning that all scanning parts are fixed, and scanning is
accomplished by the forward motion of the scanner.
standard deviation
1. The square root of the variance of a set of values which is used
as a measurement of the spread of the values. 2. A neighborhood
analysis technique that outputs the standard deviation of the data
file values of a user-specified window.
striping
A data error that occurs if a detector on a scanning system goes out
of adjustment, that is, it provides readings consistently greater than
or less than the other detectors for the same band over the same
ground cover.
subsetting
The process of breaking out a portion of a large image file into one
or more smaller files.
sum
A neighborhood analysis technique that outputs the total of the data
file values in a user-specified window.
supervised training
Any method of generating signatures for classification in which the
analyst is directly involved in the pattern recognition process.
Usually, supervised training requires the analyst to select training
samples from the data that represent patterns to be classified.
swath width
In a satellite system, the total width of the area on the ground
covered by the scanner.
summarize areas
A function that uses a feature theme corresponding to an area of
interest to summarize change within that area only; a common step
late in a change-detection workflow.
temporal resolution
The frequency with which a sensor obtains imagery of a particular
area.
terrain analysis
The processing and graphic simulation of elevation data.
terrain data
Elevation data expressed as a series of x, y, and z values that are
either regularly or irregularly spaced.
thematic change
Thematic Change is a feature in Image Analysis for ArcGIS that
allows you to compare two thematic images of the same area
captured at different times to notice change in vegetation, urban
areas, and so on.
thematic data
Raster data that is qualitative and categorical. Thematic layers
often contain classes of related information, such as land cover, soil
type, slope, etc.
thematic map
A map illustrating the class characterizations of a particular spatial
variable such as soils, land cover, hydrology, etc.
thematic mapper (TM)
Landsat data acquired in seven bands with a spatial resolution of 30
× 30 meters.
theme
A particular type of information, such as soil type or land use, that
is represented in a layer.
threshold
A limit, or cutoff point, usually a maximum allowable amount of
error in an analysis. In classification, thresholding is the process of
identifying a maximum distance between a pixel and the mean of
the signature to which it was classified.
training
The process of defining the criteria by which patterns in image data
are recognized for the purpose of classification.
training sample
A set of pixels selected to represent a potential class. Also called
sample.
transformation matrix
A set of coefficients that is computed from GCPs, and used in
polynomial equations to convert coordinates from one system to
another. The size of the matrix depends upon the order of the
transformation.
triangulation
Establishes the geometry of the camera or sensor relative to objects
on the earth’s surface.
true color
A method of displaying an image (usually from a continuous raster
layer) that retains the relationships between data file values and
represents multiple bands with separate color guns. The image
memory values from each displayed band are translated through
the function memory of the corresponding color gun.
unsupervised training
A computer-automated method of pattern recognition in which
some parameters are specified by the user and are used to uncover
statistical patterns that are inherent in the data.
variable
1. A numeric value that is changeable, usually represented with a
letter. 2. A thematic layer. 3. One band of a multiband image. 4. In
models, objects that have been associated with a name using a
declaration statement.
vector data
Data that represents physical forms (elements) such as points, lines,
and polygons. Only the vertices of vector data are stored, instead of
every point that makes up the element.
vegetative indices
Band combinations, such as NDVI, that produce a gray scale image in which vegetation is clearly highlighted.
zoom
The process of expanding displayed pixels on an image so they can
be more closely studied. Zooming is similar to magnification,
except that it changes the display only temporarily, leaving image
memory the same.
References
This appendix lists references used in the creation of this book.
Akima, H. 1978. “A Method for Bivariate Interpolation and Smooth Surface Fitting for
Irregularly Distributed Data Points.” ACM Transactions on Mathematical Software,
Vol. 4, No. 2: 148-159.
Buchanan, M. D. 1979. “Effective Utilization of Color in Multidimensional Data
Presentation.” Proceedings of the Society of Photo-Optical Engineers, Vol. 199: 9-19.
Chavez, Pat S., Jr., et al. 1991. “Comparison of Three Different Methods to Merge
Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic.”
Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 3: 295-303.
Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California:
Conrac Corp.
Daily, Mike. 1983. “Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar
Imagery.” Photogrammetric Engineering & Remote Sensing, Vol. 49, No. 3: 349-355.
ERDAS 2000. ArcView Image Analysis. Atlanta, Georgia: ERDAS, Inc.
ERDAS 1999. Field Guide. 5th ed. Atlanta: ERDAS, Inc.
ESRI 1992. Map Projections & Coordinate Management: Concepts and Procedures.
Redlands, California: ESRI, Inc.
Faust, Nickolas L. 1989. “Image Enhancement.” Volume 20, Supplement 5 of Encyclopedia
of Computer Science and Technology, edited by Allen Kent and James G. Williams. New
York: Marcel Dekker, Inc.
Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading,
Massachusetts: Addison-Wesley Publishing Company.
Holcomb, Derrold W. 1993. “Merging Radar and VIS/IR Imagery.” Paper submitted to the
1993 ERIM Conference, Pasadena, California.
Hord, R. Michael. 1982. Digital Image Processing of Remotely Sensed Data. New York:
Academic Press.
Jensen, John R., et al. 1983. “Urban/Suburban Land Use Analysis.” Chapter 30 in Manual of Remote Sensing, edited by Robert N.
Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Jensen, John R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs, New Jersey:
Prentice-Hall.
Kloer, Brian R. 1994. “Hybrid Parametric/Non-parametric Image Classification.” Paper presented at the ACSM-ASPRS Annual
Convention, April 1994, Reno, Nevada.
Lillesand, Thomas M., and Ralph W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John Wiley & Sons, Inc.
Marble, Duane F. 1990. “Geographic Information Systems: An Overview.” Introductory Readings in Geographic Information
Systems, edited by Donna J. Peuquet and Duane F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.
McCoy, Jill, and Kevin Johnston. Using ArcGIS Spatial Analyst. Redlands, California: ESRI, Inc.
Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H. Freeman and Co.
Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.
Schowengerdt, Robert A. 1980. “Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content.”
Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 10: 1325-1334.
Star, Jeffrey, and John Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall.
Swain, Philip H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information Note 111572). West
Lafayette, Indiana: The Laboratory for Applications of Remote Sensing, Purdue University.
Swain, Philip H., and Shirley M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw Hill Book Company.
Tou, Julius T., and Rafael C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts: Addison-Wesley Publishing
Company.
Tucker, Compton J. 1979. “Red and Photographic Infrared Linear Combinations for Monitoring Vegetation.” Remote Sensing of
Environment, Vol. 8: 127-150.
Walker, Terri C., and Richard K. Miller. 1990. Geographic Information Systems: An Assessment of Technology, Applications, and
Products. Madison, Georgia: SEAI Technical Publications.
Watson, David. 1994. Contouring: A Guide to the Analysis and Display of Spatial Data. New York: Elsevier Science.
Welch, R., and W. Ehlers. 1987. “Merging Multiresolution SPOT HRV and Landsat TM Data.” Photogrammetric Engineering &
Remote Sensing, Vol. 53, No. 3: 301-303.
Index
A
A priori 183
Absorption spectra 101
Abstract symbol 183
Accuracy assessment 183
Ancillary data 183
Annotation 183
AOI 183
Area 184
Area of interest 184
ASCII 183
Aspect 184
Atmospheric correction 91
Attribute 184
Average 184
AVHRR 102
B
Band 184
Bilinear interpolation 184
Bin 87
Bin function 184
Bins 184
Border 184
Boundary 184
brightness inversion 94
Brightness value 184
Brovey Transform 79
Buffer zone 185
C
Camera Model
tutorial 33
Camera Properties
Fiducials 172
Camera properties 185
Camera Properties dialog 171
Cartesian 185
Categorize 185
Cell 185
Cell Size 48
Cell Size Tab
workflow 51
Checkpoint analysis 170
Class 185
value
numbering systems 114
Class value 185
Classification 152, 185
Classification accuracy table 185
Classification scheme 185
Clustering 186
Clusters 186
Coefficient 186
Collinearity 186
Contiguity analysis 186
Continuous 186
Continuous data 186
Contrast stretch
for display 85
linear 84
min/max vs. standard deviation 85
nonlinear 84
piecewise linear 84
Convolution 70
filtering 109
Convolution Filtering 70
Convolution filtering 186
Convolution kernel 186
Coordinate system 186
Correlation threshold 186
Correlation windows 186
Corresponding GCPs 187
Covariance 187
Covariance matrix 187
Creating a shapefile
tutorial 18
Cubic convolution 187
D
Data 108, 187
Data file 187
Data file value 187
display 84
Database 187
Decision rule 187
Digital elevation model 187
Digital terrain model 187
Display device 84, 85, 96
E
Edge detector 188
Edge enhancer 188
Effects of order 163
Enhancement 188
linear 84
nonlinear 84
radiometric 83
spatial 83
Extension 188
Extent 47
Extent Tab
workflow 51
F
Feature collection 188
Feature extraction 188
Feature space 188
Fiducial center 188
Fiducials 188
File coordinates 189
Filtering 189
Finding areas of change 22
Focal 189
Focal Analysis 77
workflow 78
Focal operation 109
G
GCP matching 189
GCPs 151
General Tab
workflow 50
Geocorrection 189
tutorial 33
Geocorrection property dialogs 153
Elevation tab 155
General tab 153
Links tab 154
Geographic information system 189
Georeferencing 150, 189
GIS
defined 107
Ground control point 189
Ground control points 151
H
High frequency kernel 189
High Frequency Kernels 72
High order polynomials 162
Histogram 189
breakpoint 85
Histogram Equalization
tutorial 14
Histogram equalization 189
formula 88
Histogram match 91
Histogram matching 190
histogram matching 92
Histogram Stretch
tutorial 14
Hue 96, 190
I
Identifying similar areas 18
IHS to RGB 99
IKONOS
Chipping tab 174
IKONOS properties 190
IKONOS Properties dialog 173
Image data 190
Image Difference
tutorial 22
Image file 190
Image Info 45
workflow 46
Image matching 190
Image processing 190
Index 101
Indices 190
Information (vs. data) 108
Intensity 96
IR 190
Island Polygons 41
ISODATA 190
L
Landsat 190
bands and wavelengths 177
MSS 102
TM 99, 102
Landsat 7 180
Landsat Properties 177
Landsat Properties dialog 181
Layer 190
Linear 191
Linear transformation 169, 191
Linear transformations 161
Lookup table 84
display 85
Lookup table (LUT) 191
M
Majority 191
Map projection 191
Maximum likelihood 191
Mean 85, 191
Median 191
Minimum 191
Minimum distance 192
Minimum GCPs 166
Minority 192
Modeling 192
Mosaicking 192
Mosaicking images
tutorial 30
MSS 177
Multispectral classification 192
Multispectral imagery 192
Multispectral scanner (MSS) 192
N
Nadir 192
NDVI 192
Nearest neighbor 152, 192
Neighborhood analysis 109, 192
density 109
diversity 109
majority 109
maximum 109
minimum 109
minority 109
rank 109
sum 109
NITF 176
NoData Value 45
Non-directional 192
Non-Directional Edge 75
workflow 76
Nonlinear transformation 170, 193
Nonlinear transformations 162
Normalized difference vegetation index 193
O
Observation 193
Off-nadir 193
Options
dialog 47
Options Dialog
workflow 50
Orientation tab 171
Orthorectification 193
tutorial 33
Overlay 193
P
Panchromatic imagery 193
Parallelepiped 193
Parameter 193
Parametric 131
Parametric signature 193
Pattern recognition 193
Pixel 194
Pixel depth 194
Pixel size 194
Placing links
tutorial 36
Polygon 194
Polynomial 194
Polynomial Properties dialog 168
Polynomial Transformation 161
Preference Tab 51
Preferences 49
Principal components analysis (PCA) 194
Profile 194
Pushbroom 194
Q
QuickBird 194
QuickBird Properties 176
QuickBird Properties dialog 173
R
Radar data 194
Radiometric correction 195
Radiometric enhancement 195
Radiometric resolution 195
Raster data 195
Recode 114
Recoding 195
Rectification 150, 195
Rectified coordinates 195
Reference coordinates 195
Reference pixels 195
Reference plane 195
Reflection spectra
see absorption spectra
Reproject 195
Resampling 196
Resolution 196
spatial 91
Resolution Merge 79
workflow 80
Resolution merging 196
RGB 196
RGB clustering 196
RMS error 151, 196
RMSE 35
RPC properties 196
RPC Properties dialog 173, 176
Rubber Sheeting 169
Rubber sheeting 196
S
Saturation 96, 196
Scale 196
Scanner 196
Scanning window 109
Seed Radius 40
workflow 44
Seed Tool 18
controlling 40
workflow 42
Seed Tool Properties 40
Shadow
enhancing 84
Shapefile 196
Signature 196
Source coordinates 197
Spatial Enhancement 69
Spatial enhancement 197
Spatial frequency 197
Spatial resolution 197
Speckle noise 197
Spectral distance 197
Spectral enhancement 197
Spectral resolution 197
Spectral space 197
SPOT 197
panchromatic 99
XS 102
Spot 158
Panchromatic 158
XS 158
Spot 4 159
Spot Properties dialog 160
Standard deviation 85, 197
Starting Image Analysis for ArcGIS 12
Stereoscopic pairs 159
Striping 197
Subsetting 198
Summarize areas 198
Supervised training 198
Swath width 198
T
Temporal resolution 198
Terrain analysis 198
Terrain data 198
Thematic Change
tutorial 24
Thematic data 198
Thematic files 152
Thematic map 198
Thematic mapper (TM) 198
Theme 198
Threshold 199
TM 177
TM data 179
Training 199
Training sample 199
Transformation matrix 161, 199
Triangle-based finite element analysis 169
Triangle-based rectification 169
Triangulation 169, 199
True color 199
tutorial 18
U
Unsupervised Classification
tutorial 25
Unsupervised training 199
V
Variable 199
Vector data 199
Vegetative indices 199
Z
Zero Sum Kernels 72
Zoom 199


attributed behavior. roads. easy-to-use information. images allow you to extract the Where and What. The data in a GIS needs to reflect reality. Uncovering Why. industrial usage and natural phenomena continually alter our geography. LLC use imagery to allow you to accurately address the questions Where and What. Images also record relationships and processes as they occur in the real world. so you can then derive answers for the other three. and snapshots of reality need to be incorporated and accurately transformed into instantaneously ready. From snapshots to digital reality. But images go beyond simply recording features. Images are snapshots of geography. Images are snapshots of life on earth. mountains. and mountains. What. suburban sprawl. There are five essential questions that any GIS needs to answer: Where. rivers. analyzed relationships. When. Precisely where is that building? What is that parcel of land used for? What type of tree is that? The new extensions developed by Leica Geosystems GIS and Mapping. so VII . and How. and How are all done within the GIS. rivers. Images chronicle our earth and everything associated with it. images are pivotal in creating and maintaining the information infrastructure used by today’s society. As our geography changes. Why.Foreword An image of the earth’s surface is a wealth of information. When. But our earth is changing! Urban growth. trees. schools. They are snapshots of our changing cities. and modeled processes. Today’s geographic information systems have been carefully created with features. they record a specific place at a specific point in time. Images capture a permanent record of buildings. but they are also snapshots of reality. and other features located on the earth’s surface.

and understanding our world. ′ Mladen Stojic Product Manager Leica Geosystems GIS & Mapping. The new extensions by Leica Geosystems are technological breakthroughs which allow you to transform a snapshot of geography into information that digitally represents reality in the context of a GIS.does the information we need to understand it. using a series of images of the same area taken over time allows you to more accurately model and analyze the relationships and processes that are important to our earth. mapping. The extensions provided by Leica Geosystems reliably transform imagery directly into your GIS for analyzing. On behalf of the Image Analysis for ArcGIS and Stereo Analyst for ArcGIS product teams. and processes captured at a specific moment in time. Image Analysis™ for ArcGIS and Stereo Analyst® for ArcGIS are tools built on top of a GIS to maintain that GIS with up-to-date information. behavior. Sincerely. visualizing. I wish you all the best in working with these new products and hope you are successful in your GIS and mapping endeavors. Because an image is a permanent record of features. relationships. LLC VIII USING IMAGE ANALYSIS FOR ARCGIS .

Getting started Section 1 .

.

Today. environmental assessment.1Introducing Image Analysis for ArcGIS Introducing Image Analysis for ArcGIS IN THIS CHAPTER • Updating a database • Categorizing land cover and characterizing sites • Identifying and summarizing natural hazard damage • Identifying and monitoring urban growth and changes • Extracting features automatically • Assessing vegetation stress 1 Image Analysis for ArcGIS™ is primarily designed for natural resource and infrastructure management. The extension is very useful in the fields of forestry. imagery of the earth’s surface is an integral part of desktop mapping and GIS. 3 . Image Analysis for ArcGIS gives you the ability to perform many tasks: • Import and incorporate raster imagery into ArcGIS. and general geographic database update and maintenance. • Evaluate images captured at different times to identify areas of change. • Find areas of dense and thriving vegetation in an image. • Align an image to a map coordinate system for precise area location. • Categorize images into classes corresponding to land cover types such as vegetation. • Identify and automatically map a land cover type with a single click. engineering. and infrastructure projects such as facility siting and corridor monitoring. • Rectify satellite images through Geocorrection Models. • Enhance the appearance of an image by adjusting contrast and brightness or by applying histogram stretches. agriculture. and it’s more important than ever to have the ability to provide realistic backdrops to geographic databases and to be able to quickly update details involving street use or land use data.

spatial. and map accuracies.Up datin g database s There are many kinds of imagery to choose from in a wide range of scales. With Image Analysis for ArcGIS you are able to use imagery to identify changes and make revisions and corrections to your geographic database. Airphoto with shapefile of streets 4 USING IMAGE ANALYSIS FOR ARCGIS . Aerial photography is often the choice for map updating because of its high precision. and spectral resolutions.

and must avoid fragile areas like wetlands. In this case the areas not suitable for tower placement are highlighted. With Image Analysis for ArcGIS. Classified image for radio towers INTRODUCING IMAGE ANALYSIS FOR ARCGIS 5 . you can categorize images into land cover classes to help identify suitable locations. and the placement for the towers can be sited appropriately. You can use imagery and analysis techniques to identify wetlands and other environmentally sensitive areas. The Classification features enable you to divide an image into many different classes. and then highlight them as you wish. must be within a certain range of elevations.Categorizing lan d cover and characte rizin g sites Transmission towers for radio-based telecommunications must all be visible from each other.

Landsat images taken before and after the hurricane. you can see detailed tree stand inventory and management information. you can use the mapping tools of Image Analysis for ArcGIS to show where the damage occurred. in conjunction with a shapefile that identifies the forest boundary. The lower image features the shapefile. 6 USING IMAGE ANALYSIS FOR ARCGIS . you can show the condition of the vegetation. With other ArcGIS tools. Within the shapefile. are used for comparison. Below. how much stress it suffers. The upper two pictures show the area in 1987 and in 1989 after Hurricane Hugo. and how much damage it sustained in the hurricane.Iden tify ing a nd su mmarizing natu ral hazard d amag e When viewing a forest hit by a hurricane.

The bottom image shows the actual growth. and images give a good sense of how they grow. The final view shows the differences in extent of urban land use and land cover between 1973 and 1994. The yellow urban areas from 1994 represent how much the city has grown beyond the red urban areas from 1973. and how remaining land can be preserved by managing that growth. Landsat data spanning 21 years was analyzed for urban growth. first in 1974 and then in 1994. Here. The top two images represent urban areas in red.Iden tify ing a nd mo nito rin g urban grow th and chang es Cities grow over time. INTRODUCING IMAGE ANALYSIS FOR ARCGIS 7 . Those differences are represented as classes. You can use Image Analysis for ArcGIS to reveal patterns of urban growth over time.

You can use synthetic aperture radar (SAR) data and Image Analysis for ArcGIS tools to identify and map the extent of such environmental hazards. Images depicting an oil spill off the coast of Spain and a polygon grown in the spill using Seed Tool.Ext ra c ti n g fea tu r e s a u t om a t ic a lly Suppose you are responsible for mapping the extent of an oil spill as part of a rapid response effort. The first image shows the spill. and the second image gives you an example of how you can isolate the exact extent of a particular pattern using Image Analysis for ArcGIS. INTRODUCING IMAGE ANALYSIS FOR ARCGIS 8 . The following image shows an oil spill of the northern coast of Spain.

the Vegetative Indices function is used to see crop stress. Then. you can quickly update crop management plans. You can use multispectral imagery and analysis tools to identify and monitor a crop’s health. The stressed areas are then automatically digitized and saved as a shapefile. Crop stress shown through Vegetative Indices INTRODUCING IMAGE ANALYSIS FOR ARCGIS 9 . This kind of information can be used to help identify sources if variability in growth patterns.Ass essing veg etatio n stress Crops experience different stresses throughout the growing season. In these images.

index. see the Quick-start tutorial. Knowing about these applications will make your use of Image Analysis for ArcGIS much easier. Find ing answ ers to qu estio ns This book describes the typical workflow involved in creating and updating GIS data for mapping projects.leicageosystems. then you are introduced to the typical workflow you’d apply to get the results you want. The chapters are set up so that you first learn the theory behind certain applications. click Help on the ArcMap toolbar and choose ArcGIS Desktop Help. and technology. and self-study workbooks to find educational solutions that fit your learning style and pocketbook. The telephone number for Technical Support is 909-7933744.leica-geosystems. You can also contact Customer Support at 404/248-9777. Course Schedules. as well as finding areas of change and mosaicking images. You can follow the training link to Training Centers.gis. visit the Web site www. Lei ca Geosy stems GI S & Mapp ing Ed ucati on Solutions Leica Geosystems GIS & Mapping Division offers instructor-based training about Image Analysis for ArcGIS. or search feature to locate the information you need. Contac ting ESRI If you need to contact ESRI for technical support refer to “Getting technical support” in the Help system’s “Getting more help” section. Web-based courses. you’ll learn how to adjust the appearance of an image.com. you may want to read the books about ArcCatalog and ArcMap: Using ArcCatalog and Using ArcMap.Learning about Image Analysis for ArcGIS If you are just learning about geographic information systems (GISs).gis. In the Quick-start tutorial.com/education.esri. You can also visit ESRI on the Web at www.com. and Course Registration. From this point you can use the Table of contents.esri. click Help near the bottom of the Image Analysis menu. Visit Leica Geosystems on the Web at www. To browse the online help contents for Image Analysis for ArcGIS. got to the training Web site located at www. For more information. 10 USING IMAGE ANALYSIS FOR ARCGIS . If you’re ready to learn about how Image Analysis for ArcGIS works. Getti ng he lp on your compu ter You can get a lot of information about the features of Image Analysis for ArcGIS by accessing the online help.com. For more information. how to identify similar areas of an image. Contacting Leica Geosystems GIS & Mappi ng If you need to contact Leica Geosystems for technical support. You can choose among instructor-led courses. see the product registration and support card you received with Image Analysis for ArcGIS. A glossary is provided to help you understand any terms you haven’t seen before. how to align an image to a feature theme. GIS applications. If you need online help for ArcGIS. ESRI educ atio n sol utio ns ESRI provides educational opportunities related to GISs.

it can also be quickly saved into a shapefile. you are going to use the most important components of the Image Analysis for ArcGIS extension and learn about the types of problems it can solve. burn areas or oil spills. Once an area has been defined.2 Quick-start tutorial IN THIS CHAPTER • Starting Image Analysis for ArcGIS • Adjusting the appearance of an image • Identifying similar areas in an image • Finding areas of change • Mosaicking images • Orthorectifying an image 2 Now that you know a little bit about the Image Analysis for ArcGIS extension and its potential applications. In Image Analysis for ArcGIS. you can quickly identify areas with similar characteristics. By working through the exercises. the following exercises give you hands-on experience in using many of the extension’s tools. This avoids the need for manual digitizing. 11 . This tutorial will show you how to use some Image Analysis for ArcGIS tools and give you a good introduction to using Image Analysis for ArcGIS for your own GIS needs. This is useful for identification in cases such as environmental disasters.

Exercise 1: Starting Image Analysis for ArcGIS
In the following exercises, we've assumed that you are using a single monitor or dual monitor workstation that is configured for use with ArcMap and Image Analysis for ArcGIS. That being the case, you will be led through a series of tutorials in this chapter to help acquaint you with Image Analysis for ArcGIS and further show you some of the abilities of Image Analysis for ArcGIS. In this exercise, you'll learn how to start Image Analysis for ArcGIS and activate the toolbar associated with it. You will be able to gain access to all the important Image Analysis for ArcGIS features through its toolbar and menu list. After completing this exercise, you'll be able to locate any Image Analysis for ArcGIS tool you need for preparation, enhancement, analysis, or geocorrection. This exercise assumes you have already successfully completed installation of Image Analysis for ArcGIS on your computer. If you have not installed Image Analysis for ArcGIS, refer to the installation guide packaged with the Image Analysis for ArcGIS CD, and install now.

Starting Image Analysis for ArcGIS
1. Click the Start button on your desktop, then click Programs, and point to ArcGIS.
2. Click ArcMap to start the application.

Adding the Image Analysis for ArcGIS extension
1. If the ArcMap dialog opens, keep the option to create a new empty map, then click OK.


2. In the ArcMap window, click the Tools menu, then click Extensions.

3. In the Extensions dialog, click the check box for Image Analysis Extension to add the extension to ArcMap.


Once the Image Analysis Extension check box has been selected, the extension is activated.
4. Click Close in the Extensions dialog.

Adding toolbars
1. Click the View menu, then point to Toolbars, and click Image Analysis to add that toolbar to the ArcMap window.

The Image Analysis toolbar is your gateway to many of the tools and features you can use with the extension. From the Image Analysis toolbar you can choose many different analysis types from the menu, choose a geocorrection type, and set links in an image.


Exercise 2: Adding images and applying Histogram Stretch
Image data, displayed without any contrast manipulation, may appear either too light or too dark, making it difficult to begin your analysis. Image Analysis for ArcGIS allows you to display the same data in many different ways. For example, changing the distribution of pixels allows you to alter the brightness and contrast of the image. This is called histogram stretching. Histogram stretching enables you to manipulate the display of data to make your image easier to visually interpret and evaluate.

Add an Image Analysis for ArcGIS theme of Moscow
1. Open a new view. If you are starting this exercise immediately after Exercise 1, you should have a new, empty view ready.
2. Click the Add Data button.
3. In the Add Data dialog, select moscow_spot.tif. The path to the example data directory is ArcGIS\ArcTutor\ImageAnalysis.
4. Click Add to display the image in the view.
The image Moscow_spot.tif appears in the view.

Apply a Histogram Equalization
Standard deviations is the default histogram stretch applied to images by Image Analysis for ArcGIS. You can apply histogram equalization to redistribute the data so that each display value has roughly the same number of data points. More information about histogram equalization can be found in chapter 6, "Using Radiometric Enhancement".
1. Select moscow_spot.tif in the Table of contents, right-click your mouse, and select Properties to bring up Layer Properties.
2. Click the Symbology tab and, under Show, select RGB Composite.
3. Check the Bands order and click the dropdown arrows to change any of the Bands.
You can also change the order of the bands in your current image by clicking on the color bar beside each band in the Table of contents. If you want bands to appear in a certain order for each image that you draw in the view, go to Tools\Options\Raster in ArcMap, and change the Default RGB Band Combinations.
4. Click the dropdown arrow and select Histogram Equalize as the Stretch Type.
5. Click Apply and OK.
6. Click the Image Analysis menu dropdown arrow, point to Radiometric Enhancement, and click Histogram Equalization.
7. In the Histogram Equalization dialog, make sure moscow_spot.tif is in the Input Image box.
8. The Number of Bins will default to 256. For this exercise, leave the number at 256, but in the future, you can change it to suit your needs.
9. Navigate to the directory where you want your output images stored, type a name for your image, and click Save. The path will appear in Output Image.
You can go to the Options dialog, accessible from the Image Analysis toolbar, and enter the working directory you want to use on the General tab of the dialog. This step will save you time by automatically bringing up your working directory whenever you click the browse button to navigate to it in order to store an output image.
10. Click OK.
The equalized image will appear in your Table of contents and in your view. This is the histogram equalized image of Moscow. If you want to see the histograms for the image, click the Histograms button located in the Stretch box.

Apply an Invert Stretch to the image of Moscow
In this example, you apply the Invert Stretch to the image to redisplay it with its brightness values reversed. Areas that originally appeared bright are now dark, and dark areas are bright.
1. Select the equalized file in the Table of contents, and right-click your mouse.
2. Click Properties and go to the Symbology tab.
3. Check the Invert box.
4. Click Apply and OK.
This is an inverted image of Moscow_spot.tif. Depending on the original distribution of the data in the image, one stretch may make the image appear better than another. You can apply different types of stretches to your image to emphasize different parts of the data. You'll learn more about these stretches in chapter 6, "Using Radiometric Enhancement". Image Analysis for ArcGIS allows you to rapidly make those comparisons. The Layer Properties Symbology tab can be a learning tool to see the effect of stretches on the input and output histograms.
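Image Analysis for ArcGIS performs the histogram equalization for you through the dialog described above. Purely as an illustration of what the operation does to the pixel values (this is a minimal NumPy sketch, not the extension's implementation), equalizing one band with an assumed bin count of 256 might look like this; the array name band is a placeholder standing in for one band of moscow_spot.tif.

    import numpy as np

    def histogram_equalize(band, num_bins=256):
        """Redistribute pixel values so each display value holds roughly
        the same number of data points (illustrative only)."""
        hist, bin_edges = np.histogram(band.ravel(), bins=num_bins)
        cdf = hist.cumsum().astype(np.float64)
        cdf /= cdf[-1]                                  # normalize to 0..1
        # Map every input pixel through the cumulative distribution.
        equalized = np.interp(band.ravel(), bin_edges[:-1], cdf * (num_bins - 1))
        return equalized.reshape(band.shape).astype(np.uint8)

    # Synthetic, dark-skewed band standing in for real image data
    band = (np.random.rand(512, 512) ** 3 * 255).astype(np.uint8)
    print(histogram_equalize(band).mean())

Running it on a dark-skewed band pushes the mean toward the middle of the display range, which is the visual effect you see in the view after applying the stretch.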

Exercise 3: Identifying similar areas in an image
With Image Analysis for ArcGIS you can quickly identify areas with similar characteristics. Once an area has been defined, it can also be quickly saved into a shapefile. This action lets you avoid the need for manual digitizing. This is useful for identification of environmental disasters or burn areas.

Create a shapefile
In this exercise, you use the Seed Tool (also called the Region Growing Tool). You use the Seed Tool to point to an area of interest such as a dark area on an image depicting an oil spill, and create a polygon. In order to use the Seed Tool, you will first need to create a shapefile in ArcCatalog and start editing in order to enable the Seed Tool. After going through these steps, you can point and click inside the area you want to highlight. The Seed Tool grows a polygon graphic in the image that encompasses all similar and contiguous areas. The Seed Tool returns a graphic polygon outlining areas with similar characteristics, in this case an oil spill. The polygon enables you to see how much of an area the oil spill covers.

Add and draw an Image Analysis for ArcGIS theme depicting an oil spill
1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. In the Add Data dialog, select radar_oilspill.img, and click Add to draw it in the view.
This is a radar image showing an oil spill off the northern coast of Spain.
4. Click the Zoom In tool, and drag a rectangle around the black area to see the spill more clearly.

1. Click the ArcCatalog button.
2. Select the directory in the Table of contents and right-click, or click File. You can store the shapefile you're going to create in the example data directory or navigate to a different directory if you wish.
3. Point to New, and click Shapefile.
4. In the Create New Shapefile dialog, name the new shapefile oilspill, and click the Feature Type dropdown arrow and select Polygon.
5. Check Show Details.
6. Click Edit.
7. In the Spatial Reference Properties dialog, click Import, and select radar_oilspill.img and click Add from the Browse for Dataset dialog that will pop up containing the example data directory.
8. Click Apply and OK.
9. Click OK in the Create New Shapefile dialog.
10. Select the oilspill shapefile, and drag and drop it in the ArcMap window. Oilspill will appear in the Table of contents.
11. Close ArcCatalog.


Draw the polygon with the Seed Tool
1. Click the Image Analysis dropdown arrow, and click Seed Tool Properties.
2. Type a Seed Radius of 10 pixels in the Seed Radius text box.
3. Uncheck the Include Island Polygons box.
The Seed Radius is the number of pixels surrounding the target pixel. The range of values of those surrounding pixels is considered when the Seed Tool grows the polygon.
4. Click OK.

5. Click the Editor toolbar button on the ArcMap toolbar to display the Editor toolbar. 6. Click Editor on the Editor toolbar in ArcMap, and select Start Editing.


7. Click the Seed Tool and click a point in the center of the oil spill. The Seed Tool will take a few moments to produce the polygon.
This is a polygon of an oil spill grown by the Seed Tool. If you don't automatically see the formed polygon in the image displayed in the view, click the refresh button at the bottom of the view screen in ArcMap. You can see how the tool identifies the extent of the spill. An emergency team could be informed of the extent of this disaster in order to effectively plan a clean up of the oil.
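The region growing itself happens inside the extension. For readers curious about the general idea, the sketch below is a simplified NumPy illustration of growing a contiguous region from a seed pixel, where the value range of a small neighborhood around the seed (a stand-in for the Seed Radius) decides which neighboring pixels are accepted; it is an assumption-laden example, not the product's algorithm.

    import numpy as np
    from collections import deque

    def grow_region(img, seed_rc, seed_radius=5):
        """Grow a contiguous region around seed_rc (row, col). Pixels are
        accepted if they fall within the value range found inside the
        seed_radius window around the seed (illustrative only)."""
        r0, c0 = seed_rc
        rows, cols = img.shape
        window = img[max(0, r0 - seed_radius):r0 + seed_radius + 1,
                     max(0, c0 - seed_radius):c0 + seed_radius + 1]
        lo, hi = window.min(), window.max()
        mask = np.zeros_like(img, dtype=bool)
        mask[r0, c0] = True
        queue = deque([seed_rc])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not mask[rr, cc] \
                        and lo <= img[rr, cc] <= hi:
                    mask[rr, cc] = True
                    queue.append((rr, cc))
        return mask  # True where the grown polygon would cover

    img = np.random.randint(0, 255, (200, 200))
    print(grow_region(img, (100, 100)).sum(), "pixels in the grown region")

A larger seed_radius widens the accepted value range, which is why the Seed Tool typically produces a larger polygon when you increase the Seed Radius.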


Exercise 4: Finding areas of change
The Image Analysis for ArcGIS extension allows you to see changes over time. You can perform this type of analysis on either continuous data using Image Difference or thematic data using Thematic Change. In this exercise, you'll learn how to use Image Difference and Thematic Change. Image Difference is useful for analyzing images of the same area to identify land cover features that may have changed over time. Image Difference performs a subtraction of one theme from another. This change is highlighted in green and red masks depicting increasing and decreasing values.

Find changed areas
In the following example, you are going to work with two continuous data images of the north metropolitan Atlanta, Georgia, area, one from 1987 and one from 1992. Continuous data images are those obtained from remote sensors like Landsat and SPOT. This kind of data measures reflectance characteristics of the earth's surface, analogous to exposed film capturing an image. You will use Image Difference to identify areas that have been cleared of vegetation for the purpose of constructing a large regional shopping mall.

Add and draw the images of Atlanta
1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. Press the Shift or Ctrl key, and click on atl_spotp_87.img and atl_spotp_92.img in the Add Data dialog.

4. Click OK.

With images active in the view, you can calculate the difference between them.

Compute the difference due to development
1. Click the Image Analysis dropdown arrow, click Utilities, and click Image Difference.


2. In the Image Difference dialog, click the Before Theme dropdown arrow, and select Atl_spotp_87.img. 3. Click the After Theme dropdown arrow, and select Atl_spotp_92.img.


4. Choose As Percent in the Highlight Changes box. 5. Click the arrows to 15 in the Increases more than box. 6. Click the arrows to 15 in the Decreases more than box. 7. Navigate to the directory where you want to store your Image Difference file, type the name of the file, and click Save. 8. Navigate to the directory where you want to store your Highlight Change file, type the name of the file, and click Save. 9. Click OK in the Image Difference dialog. The Highlight Change and Image Difference files appear in the Table of contents and the view.


10. In the Table of contents, click the check box to turn off Highlight Change, and check Image Difference to display it in the view.
The Image Difference image shows the results of the subtraction of the Before Theme from the After Theme. Image Difference calculates the difference in pixel values, and Highlight Change shows the difference in red and green areas. With the 15 percent parameter you set, Image Difference finds areas that have increased by at least 15 percent from before (designated clearing) and highlights them in green. Image Difference also finds areas that have decreased by at least 15 percent from before (designating an area that has increased vegetation, or an area that was once dry but is now wet) and highlights them in red.

Close the view
You can now clear the view and either go to the next portion of this exercise, or end the session by closing ArcMap. If you want to shut down ArcMap with Image Analysis for ArcGIS, click the File menu, and click Exit. Click No when asked to save changes.

Using Thematic Change
Image Analysis for ArcGIS provides the Thematic Change feature to make comparisons between thematic data images. Thematic Change is similar to Image Difference in that it computes changes between the same area at different points in time. However, Thematic Change can only be used with thematic data (data that is classified into distinct categories). An example of thematic data is a vegetation class map. Thematic Change creates a theme that shows all possible combinations of change and how an area's land cover class changed over time.

This next example uses two images of an area near Hagan Landing, South Carolina, before and after Hurricane Hugo. The images were taken in 1987 and 1989. Suppose you are the forest manager for a paper company that owns a parcel of land in the hurricane's path. With Image Analysis for ArcGIS, you can see exactly how much of your forested land has been destroyed by the storm.
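The Image Difference dialog handles this arithmetic for you. As a rough illustration only (array names and the percent-of-before convention are assumptions of the example, not the extension's code), the subtraction and the 15 percent highlight masks could be sketched in NumPy like this.

    import numpy as np

    def image_difference(before, after, percent=15.0):
        """Subtract Before from After and flag cells that increased or
        decreased by more than `percent` of the Before value."""
        before = before.astype(np.float64)
        after = after.astype(np.float64)
        diff = after - before
        threshold = np.abs(before) * (percent / 100.0)
        increases = diff > threshold     # e.g. cleared land, shown in green
        decreases = diff < -threshold    # e.g. new vegetation or water, shown in red
        return diff, increases, decreases

    before = np.random.randint(0, 255, (100, 100))
    after = np.random.randint(0, 255, (100, 100))
    diff, inc, dec = image_difference(before, after)
    print(inc.sum(), "pixels increased more than 15%;", dec.sum(), "decreased")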

Add the images of an area damaged by Hurricane Hugo
1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Open a new view and click Add Data.
3. Press either the Shift key or Ctrl key, and select both tm_oct87.img and tm_oct89.img in the Add Data dialog.
4. Click Add.
This view shows an area damaged by Hurricane Hugo.

Create three classes of land cover
Before you calculate Thematic Change, you must first categorize the Before and After Themes. You can access Categorize through Unsupervised Classification, which is an option available from the Image Analysis dropdown menu. You'll use the thematic themes created from those classifications to complete the Thematic Change calculation.
1. Click the dropdown arrow in the Layers section of the Image Analysis toolbar to make sure tm_oct87.img is active.
2. Click the Image Analysis dropdown arrow, point to Classification, and click Unsupervised/Categorize.
3. Click the Input Image dropdown arrow to make sure tm_oct87.img is in the text box.
4. Click the arrows to 3 or type 3 in the Desired Number of Classes box.
5. Navigate to the directory where you want to store the output image, type the file name (use unsupervised_class_87 for this example), and click Save.
6. Click OK in the Unsupervised Classification dialog.

Using Unsupervised Classification to categorize continuous images into thematic classes is particularly useful when you are unfamiliar with the data that makes up your image. By using Unsupervised Classification, you may be better able to quantify areas of different land cover in your image. You simply designate the number of classes you would like the data divided into, and Image Analysis for ArcGIS performs a calculation assigning pixels to classes depending on their values. You can then assign the classes names like water, forest, and bare soil.

Give the classes names and assign colors to represent them
1. Click the check box of tm_oct87.img so the original theme is not drawn in the view. This step makes the remaining themes draw faster in the view.
2. Double-click the title unsupervised_class_87.img to access the Layer Properties dialog.
3. Click the Symbology tab.
4. Verify that Class_names is selected in the Value Field.
5. Select Class 001, and double-click Class 001 under Class_names.
6. Type the name Water.
7. Double-click the color bar under Symbol for Class 001, and choose blue from the color palette.
8. Select Class 002, and double-click Class 002 under Class_names. Type the name Forest. Double-click the color bar under Symbol for Class 002, and choose green.
9. Select Class 003, and double-click Class 003 under Class_names. Type the name Bare Soil. Double-click the color bar under Symbol for Class 003, and choose a tan or light brown color.
10. Click Apply and OK.
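The extension's unsupervised classifier does this clustering internally, and its exact method may differ from the generic example below. As an illustration of the basic idea of assigning pixels to a chosen number of classes by their values, here is a small k-means sketch in NumPy; the three-band synthetic array, the iteration count, and the 1-based class codes are all assumptions of the example.

    import numpy as np

    def kmeans_classify(image, n_classes=3, iterations=10, seed=0):
        """Assign every pixel to one of n_classes clusters by value.
        A generic k-means sketch, not the extension's own classifier."""
        rng = np.random.default_rng(seed)
        pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
        centers = pixels[rng.choice(len(pixels), n_classes, replace=False)]
        for _ in range(iterations):
            # Distance of every pixel to every class center.
            d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for k in range(n_classes):
                if np.any(labels == k):
                    centers[k] = pixels[labels == k].mean(axis=0)
        return labels.reshape(image.shape[:-1]) + 1   # classes 1..n, like Class 001

    # Synthetic 3-band image standing in for tm_oct87.img
    img = np.random.rand(120, 120, 3)
    classes = kmeans_classify(img, n_classes=3)
    print(np.bincount(classes.ravel())[1:])           # pixel count per class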

Categorize and name the areas in the post-hurricane image
1. Click the box of the tm_oct89.img theme so that it does not draw in the view.
2. Follow the steps provided for the theme tm_oct87.img on pages 25 and 26 under "Create three classes of land cover" and "Give the classes names and assign colors to represent them" to categorize the classes of the tm_oct89.img theme.

Recode to permanently write class names and colors to a file
After you have classified both of your images, you need to do a recode in order to permanently save the colors and class names you have assigned to the images. Recode lets you create a file with the specific images you've classified.
1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Recode.
2. Click the Input Image dropdown arrow to select one of the classified images.
3. The Map Pixel Value through Field will read <From view>. Leave this as is.
4. Click the browse button to bring up your working directory, and name the Output Image.
5. Click OK.

Now do the same thing and perform a recode on the other classified image you did of the Hugo area. Both of the images will have your class names and colors permanently saved.

Use Thematic Change to see how land cover changed because of Hugo
1. Make sure both recoded images are checked in the Table of contents so both will be active in the view.
2. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Thematic Change.
3. Click the Before Theme dropdown arrow and select the 87 classification image.
4. Click the After Theme dropdown arrow, and select the 89 classification image.
5. Navigate to the directory where you want to store the Output Image, type the file name, and click Save.
6. Click OK.
7. Click the check box of Thematic Change to draw it in the view.
8. Double-click the Thematic Change title to access Layer Properties.
9. In the Symbology tab, double-click the symbol for was: Class 002, is now: Class 003 (was Forest, is now Bare Soil) to access the color palette.
10. Click the color red in the color palette, and click Apply. You don't have to choose red; you can use any color you like.
11. Click OK.
The red shows what was forest and is now bare soil. You can see the amount of destruction in red, and the overall damage caused by the hurricane is clear.

Add a feature theme that shows the property boundary
Using Thematic Change, you can see the overall damage. Next, you will want to see how much damage actually occurred on the paper company's land.
1. Click Add Data.
2. Select property.shp, and click Add.
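Thematic Change builds the combination theme for you; each output value stands for one "was class i, is now class j" pairing. As a hedged illustration of one way such combinations can be encoded (the before * n + after scheme here is an assumption for the example, not necessarily how the extension stores its output), consider this NumPy sketch.

    import numpy as np

    def thematic_change(before_classes, after_classes, n_classes=3):
        """Combine two thematic rasters into one theme whose values encode
        every 'was class i, is now class j' combination (illustrative)."""
        return (before_classes - 1) * n_classes + (after_classes - 1) + 1

    before = np.random.randint(1, 4, (50, 50))   # 1=Water, 2=Forest, 3=Bare Soil
    after = np.random.randint(1, 4, (50, 50))
    combo = thematic_change(before, after)
    # 'was Forest (2), is now Bare Soil (3)' -> code (2-1)*3 + (3-1) + 1 = 6
    print((combo == 6).sum(), "pixels changed from Forest to Bare Soil")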

Thematic Change image with the property shapefile

Make the property transparent
1. Double-click on the property theme to access Layer Properties.
2. Click the Symbology tab, and double-click the color symbol.
3. In the Symbol Selector, click the Hollow symbol.
4. Click the Outline Color dropdown arrow, and choose a color that will easily stand out to show your property line.
5. Click the Outline Width arrows, or type the number 3 in the box.
6. Click OK.
7. Click Apply and OK on the Symbology tab.
The yellow outline clearly shows the devastation within the paper company's property boundaries.

Exercise 5: Mosaicking images
Image Analysis for ArcGIS allows you to mosaic multiple images. When you mosaic images, you join them together to form one single image that covers the entire area. When you mosaic images, ensure that they have the same number of bands. To mosaic images, simply display them in the view. The Mosaic tool joins them as they appear in the view: whichever is on top is also on top in the mosaicked image. In the following exercise, you are going to mosaic two airphotos with the same resolution.

Add and draw the images
1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension with a new map.
2. Click the Add Data button.
3. Select Airphoto1.img and Airphoto2.img in the Add Data dialog.
4. Click Add.
The two airphotos display in the view.

Zoom in to see image details
1. Click Airphoto1.img and drag it so that it is at the top of the Table of contents.
2. Press the Shift key and select Airphoto1.img and Airphoto2.img, and right-click your mouse, then select Zoom to raster resolution. The two images are displayed at a 1:1 resolution.
3. Click the Pan button, then maneuver the images in the view. You can now use Pan to see how they overlap.

This illustration shows where the two images overlap.

Use Mosaic to join the images
1. Click the Full Extent button so that both images display their entirety in the view.
2. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Mosaic Images.
3. Click the Handle Image Overlaps dropdown arrow and choose Use Order Displayed.
After opening the Mosaic Images dialog, you cannot access the Options dialog. If you want to use some other extent than Union of Inputs for your mosaic, you must first go to the Extent tab in the Options dialog and change the Extent before opening Mosaic Images. However, it is recommended that you keep the default of Union of Inputs for mosaicking.

4. If you want to automatically crop your images, check the box, and use the arrows or type the percentage by which to crop the images.
5. Choose Brightness/Contrast as the Color Balancing option.
6. If you have changed the extent to something other than Union of Inputs, check this box, but for this exercise you will need to leave the extent set at Union of Inputs and the box unchecked.
7. Navigate to the directory where you want to save your files, type the file name, and click Save.
8. Click OK.
The Mosaic function joins the two images as they appear in the view. In this case Airphoto1 is mosaicked over Airphoto2.

Exercise 6: Orthorectification of camera imagery
The Image Analysis for ArcGIS extension has a feature called Geocorrection Properties. The function of this feature is to rectify imagery. One of the tools that makes up Geocorrection Properties is the Camera model. In this exercise you will orthorectify images using the Camera model in Geocorrection Properties.

Add raster and feature datasets
1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension with a new map.
2. Click the Add Data button.
3. Hold the Shift key down and select both ps_napp.img and ps_streets.shp in the Add Data dialog.
4. Click Add.
The images are drawn in the view. You can see the fiducial markings around the edges and at the top.
5. Right-click on ps_napp.img and click Zoom to Layer.

Select the coordinate system for the image
This procedure defines the coordinate system for the data frame in Image Analysis for ArcGIS.
1. Either select Layers in the Table of contents and right-click, or move your cursor into the view and right-click.
2. Select Properties at the bottom of the menu to bring up the Data Frame Properties dialog.
3. Click the Coordinate System tab.
4. In the box labeled Select a coordinate system, click Predefined.

5. Click Projected Coordinate Systems, and then click Utm.
6. Click NAD 1927, then click NAD 1927 UTM Zone 11N, and click OK.
7. Click Apply.

Orthorectifying your image using Geocorrection Properties
1. Click the Model Types dropdown arrow, and click Camera.
2. Click the Geocorrection Properties button on the toolbar to open the Camera dialog.
3. Click the Camera tab.
4. Click the Camera Name dropdown arrow, and select Default Wild.
5. Enter a Focal Length of 152.804.
6. In the Principal Point box, enter -0.004 for X and 0.000 for Y.
7. Click in the Film X and Film Y box where the number of Fiducials will reduce to 4. Click the arrows, or type 4 for the number of Fiducials.
8. Click the Elevation tab.
9. Click the dropdown arrow, and select File to use as the Elevation Source.
10. Navigate to the ArcGIS ArcTutor directory, and choose ps_dem.img as the Elevation File.
11. Click the Elevation Units dropdown arrow and select Meters.
12. Check Account for Earth's curvature.
13. Click Apply.

999 14 9 8 15 11 12 10 3 16 14. make sure to click Apply then OK to close. Name the camera in the Camera Name box. The RMSE should be less than 1. You will notice that both the image and the shape file are now displayed in the view. and click the crosshair there. Now. 35 . QUICK-START TUTORIAL When you are done placing fiducials.13. Click the Fixed Zoom In tool. you can reopen the Camera Properties dialog. 15.999 105.999 -105. it is time to rectify the images. -106. and zoom in until you can see the actual fiducial. You can then right click on the image in the Table of contents. Your cursor has become a crosshair. and the software will take you to the approximate location of the first fiducial placement. Type the following coordinates in the corresponding fiducial spaces. Click Apply and move to the next section.000 105. and click Zoom to Layer. 3.998 -106. 1 2 106.000 105. Click the Green fiducial.008 7 2.994 -105.0. Fidu cial place ment 1. and make sure the first fiducial orientation is selected. Click Save to save the camera information with the Camera Name. To look at the root mean square error (RMSE) on the fiducials tab. 2. 4. Use the Tab key to move from space to space. Click the Fiducials tab. The software will take you to each of the four points where you can click the crosshair in the fiducial marker. 16. 3. 1.

After placing fiducials, both the image and the shapefile are shown in the view for rectification.

Placing links
1. Click the Add Links button.
2. Looking closely at the image and shapefile in the view, and using the next image as a guide, line up where you should place the first link. You will need to click the crosshair on the point in the image first and then drag the cursor over to the point in the shapefile where you want to click. Your first link should look approximately like this:
3. Place links 2 and 3. Follow the markers in the next image to place the first three links.

After placing the third link, your image should look something like this: your image should warp and become aligned with the streets shapefile. You can use the Zoom tool to draw a rectangle around the aligned area and zoom in to see it more clearly.
4. Zoom to the upper left portion of the image, and place a link according to the previous image.
5. Zoom to the lower left portion of the image, and place a link according to this next image.
Now take a look at the RMS Error on the Links tab of Camera Properties. You can go to Save As on the Image Analysis menu and save the image if you wish.

What's Next?
This tutorial has introduced you to some features and basic functions of Image Analysis for ArcGIS. The following chapters go into greater detail about the different tools and elements of Image Analysis for ArcGIS, and include instructions on how to use them to your advantage.

3 Applying data tools

IN THIS CHAPTER
• Seed Tool Properties
• Image Info
• Options

You will notice when you look at the Image Analysis menu that there are three choices called Seed Tool Properties, Image Info, and Options.
• Seed Tool Properties automatically generates feature layer polygons of similar spectral value.
• Image Info gives you the ability to apply a NoData Value and recalculate statistics.
• Options lets you change extent, cell size, preferences, and more.
All three aid you in manipulating, analyzing, and altering your data so you can produce results that are easier to interpret than they would be with no data tool input.

Using Seed Tool Properties
As stated in the opening of the chapter, the main function of Seed Tool Properties is to automatically generate feature layer polygons of similar spectral value. In order to use the Seed Tool, you must first create the shapefile for the image you are using in ArcCatalog. You will need to open ArcCatalog, create a new shapefile in the directory you want to use, name it, choose polygon as the type of shapefile, and then use Start Editing on the Editor toolbar in ArcMap to activate the Seed Tool. Once you are finished and you have grown the polygon, you can go back to the Editor toolbar and select Stop Editing.

Controlling the Seed Tool
You can use the Seed Tool simply by choosing it from the Image Analysis toolbar and clicking on an image after generating a shapefile. However, if you want more control over the parameters of the Seed Tool, you can open Seed Tool Properties from the Image Analysis menu. From this dialog, you select your Seed Radius in pixels. After creating a shapefile in ArcCatalog, you can either click in an image on a single point, or you can click and drag a rectangle in a portion of the image that interests you. You can decide which method you wish to use before clicking the tool on the toolbar. The defaults usually produce a good result, or you can experiment with which method looks best with your data.
Seed Tool dialog

Seed Radius
When you use the simple click method, the Seed Tool is controlled by the Seed Radius. The Seed Radius determines how selective the Seed Tool is when selecting contiguous pixels. A larger Seed Radius includes more pixels to calculate the range of pixel values used to grow the polygon, and typically produces a larger polygon. A smaller Seed Radius uses fewer pixels to determine the range. The Image Analysis for ArcGIS default Seed Radius is 5 pixels. You can change the number of pixels of the Seed Radius by opening the dialog from the Image Analysis menu. Setting the Seed Radius to 0.5 or less restricts the polygon to growing over pixels with the exact value as the pixel you click on in the image. This can be useful for thematic images in which a contiguous area might have a single pixel value, instead of a range of values like continuous data.

The band or bands used in growing the polygon are controlled by the current visible bands as set in Layer Properties. If you have all the bands (red, green, and blue) displayed, then the Seed Tool evaluates the statistics in each band of data before creating the polygon. If you only have one band displayed, such as the red band when you are interested in vegetation analysis, then the Seed Tool only looks at the statistics of that band to create the polygon.

When a polygon shapefile is being edited, a polygon defined using the Seed Tool is added to the shapefile. Like other ArcGIS graphics, you can change the appearance of the polygon produced by the Seed Tool using the Graphics tools.

Island Polygons
The other option on the Seed Tool Properties dialog is Include Island Polygons. You should leave this option checked for use with Find Like Areas. For single feature mapping where you want to see a more refined boundary, you may want to turn it off.

Preparing to use the Seed Tool
Go through the following steps to activate the Seed Tool and generate a polygon in your image.
1. Open ArcCatalog and make sure your working directory appears in ArcCatalog.
2. Click File, point to New, and click Shapefile.
3. Rename the New_Shapefile.
4. Click the dropdown arrow and select Polygon.
5. Click Edit.
6. Check Show Details.

7. Click Select, Import, or New to input the coordinate system the new shapefile will use. Clicking Import will allow you to import the coordinates of the image you are creating the shapefile for.
8. Click Apply and OK in the Spatial Reference Properties dialog.
9. Click OK in the Create New Shapefile dialog.
10. Close ArcCatalog and click the dropdown arrow on the Editor toolbar.
11. Select Start Editing.

Using the Seed Tool
These processes will take you through steps to change the Seed Radius and include Island Polygons. For an in-depth tutorial on using the Seed Tool and generating a polygon, see chapter 2 "Quick-start tutorial".

Changing the Seed Radius
1. Click the Image Analysis dropdown arrow, and click Seed Tool Properties.
2. Type a new value in the Seed Radius text box.
3. If you need to enable Include Island Polygons, check the box.
4. Click OK.
After growing the polygon in the image with the Seed Tool, go back to the Editor toolbar, click the dropdown arrow, and click Stop Editing.

Image Info
When analyzing images, you often have pixel values you need to alter or manipulate in order to perceive different parts of the image better. The Image Info feature of Image Analysis for ArcGIS lets you choose a NoData Value and recalculate the statistics for your image so that a pixel value that is unimportant in your image can be designated as such. The Image Info dialog is found on the Image Analysis menu. When you choose it, the images in your view will be displayed on a dropdown menu under Layer Selection. The Representation Type area of the dialog will automatically choose Continuous or Thematic depending on what kind of image you have in your view. If you find that a file you need to be continuous is listed as thematic, you can change it here. This area of the dialog also names the Pixel Type and the Minimum and Maximum values.

NoData Value
The NoData Value section of the Image Info dialog gives you the opportunity to label certain areas of your image as NoData. You will want to do this when the pixel values in that particular area of the image are not important to your statistics or image. This problem becomes evident when the image is displayed in the view and there are black spots or triangles where it should be clear, or perhaps clear spots where it should be black. In order to do this, you assign a certain value that no other pixel in the image has to the pixels you want to classify as NoData. You have to assign some type of value to those pixels to hold their place. Using 0 does not work because 0 does contain value, so you need to come up with a value that's not being used for any of the other pixels you want to include. Sometimes the pixel value you choose as NoData will already be used, so that NoData matches some other part of your image; look at the Minimum value and the Maximum value under Statistics on the Image Info dialog and choose a NoData value that does not conflict with the pixel values between the Minimum and Maximum that you want to keep. You can then type the pixel value that you wish to give the NoData pixels in your image. When you click Recalc Stats, the statistics for the image are recalculated using the NoData Value, and you can close the image in the view, then reopen it to see the NoData Value applied. Also remember that you can type N/A or leave the area blank so that you have no NoData assigned if you don't want to use this option.

You can apply NoData to a single layer of your image instead of to the entire image if you want or need to do so. The Statistics portion of the dialog also features a dropdown menu so you can designate the layer for which to calculate NoData. You can also recalculate statistics (Recalc Stats) for single bands by choosing Current Band in the Statistics box on the Image Info dialog. It is important to remember that if you click Recalc Stats while Current Band is selected, Image Info will only recalculate the statistics for that band. If you want to set NoData for a single band, but recalculate statistics for all bands, you can choose All Bands after setting NoData in the single bands, and recalculate for all. When you choose to apply NoData to single layers, it is important that you click Apply on the dialog before moving to the next layer.
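Image Info does the recalculation for you when you click Recalc Stats. Purely to illustrate what "recalculate statistics while ignoring a NoData value" means, here is a small NumPy masked-array sketch; the value 255 used as NoData and the border collar in the example are assumptions, not values from the tutorial data.

    import numpy as np

    def stats_with_nodata(band, nodata_value):
        """Recompute band statistics while ignoring the NoData pixels."""
        masked = np.ma.masked_equal(band.astype(np.float64), nodata_value)
        return {
            "minimum": float(masked.min()),
            "maximum": float(masked.max()),
            "mean": float(masked.mean()),
            "std_dev": float(masked.std()),
        }

    band = np.random.randint(0, 200, (256, 256))
    band[:, :20] = 255                    # border collar we want ignored
    print(stats_with_nodata(band, nodata_value=255))

Because the flagged pixels are excluded, the minimum, maximum, mean, and standard deviation describe only the meaningful part of the image, which is what makes the display stretches behave sensibly afterward.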

Using the Image Info dialog
1. Click the Image Analysis dropdown arrow, and click Image Info.
2. Click the Layer Selection dropdown arrow to make sure the correct image is displayed.
3. Make sure the correct Representation Type is chosen for your image.
4. Type the NoData Value in the box.
5. Click the Statistics dropdown arrow to make sure the layer you want to recalculate is selected.
6. Choose All Bands or Current Band.
7. Click Recalc Stats.
8. Click Apply and OK.
9. Close the image and re-open it to view the results visually.

Options
You can access the Options dialog through the Image Analysis menu. You can use the Options dialog with any Image Analysis feature, but you may find it particularly useful with the Data Preparation features that will be covered in the next chapter. Through this dialog, you can set an analysis mask as well as setting the extent, cell size, and preferences for future operations or a single operation. It's usually best to leave the options set at what they are, but there may be times you want or need to change them.
The Image Analysis Options dialog

The Options dialog has four tabs on it for General, Extent, Cell Size, and Preferences. On the General tab, if you want to store your output images and shapefiles in one working directory, you can navigate to that directory or type the directory name in the Working directory box. This will allow your working directory to automatically come up every time you click the browse button for an output image. The Analysis mask will default to none, but if you click the dropdown arrow, you can set it to any raster dataset. The Analysis Coordinate System lets you choose which coordinate system you would like the image to be saved with: the one for the input or the one for the active data frame. Finally, you can select whether or not to have a warning message display if raster inputs have to be projected during an analysis operation.

Extent
The Extent tab lets you control how much of a theme you want to use during processing. You do this by setting the Analysis extent. The default extent is usually Intersection of Inputs, but you can change it. When you choose Same as Layer, all of the information in the Table of contents for that layer is considered regardless of whether or not it is displayed in the view. Same as Display refers to the area currently displayed in the view; if the view has been zoomed in on a portion of a theme, then the functions would only operate on that portion of the theme. As Specified below lets you fill in the information for the extent. The rest of the tab will become active when Same as Display, As Specified below, or Same as Layer "..." (whatever layer is active in the view) is chosen. You can also click the open file button on the Extent tab to choose a dataset to use as the Analysis extent. If you click this button, you can navigate to the directory where your data is stored and select a file that has extents falling within the selected project area.

When you're mosaicking images, you can go to the Extent tab on the Options dialog in order to set the extent at something other than Union of Inputs, which it automatically defaults to when mosaicking. It is recommended that you leave the default Union of Inputs when mosaicking. If you do change it, you will need to check the Use Extent from Analysis Options box on the Mosaic Images dialog.

The other options on the Analysis extent dropdown list are Intersection of Inputs and Union of Inputs. When the extent is set to Union of Inputs, Image Analysis for ArcGIS uses the union of every input theme. Union is the default setting of Analysis extent for mosaicking, and it is highly recommended that you keep this default setting when mosaicking images. When you choose Intersection (which is the default extent for all functions except Mosaic), Image Analysis for ArcGIS performs functions on the area of overlap common to the input images to the function. Portions of the images outside the area of overlap are discounted from analysis.

When you choose an extent that activates the rest of the Extent tab, the fields are Top, Bottom, Right, and Left. If you are familiar with the data and want to enter exact coordinates, you can do so in these fields. Same as Display and As Specified below activate the Snap extent to field, where you can choose an image to snap the Analysis mask to.
The Extent tab on the Options dialog

Cell Size
The third tab on the Options dialog is Cell Size. This is for the cell size of images you produce using Image Analysis for ArcGIS. The first field on the tab is a dropdown list for Analysis cell size. You can choose Maximum of Inputs, Minimum of Inputs, As Specified below, or Same as Layer "...", indicating a layer in the view. Choosing Maximum of Inputs yields an output that has the maximum resolution of the input files. For example, if you use Image Difference on a 10 meter image and a 20 meter image, the output is a 20 meter image. The Minimum of Inputs option produces an output that has the minimum resolution of the input files. For example, if you use Image Difference on a 10 meter image and a 20 meter image, the output is a 10 meter image. If you choose Same as Layer "...", the cell size reflects the current cell size of that layer. When you choose As Specified below, you can enter whatever cell size you wish to use, and Image Analysis for ArcGIS will adjust the output accordingly.

The Cell Size field will display in either meters or feet. To choose one, click View in ArcMap, click Data Frame Properties, and on the General tab, click the dropdown arrow for Map Units and choose either Feet or Meters. The Number of Rows and Number of Columns fields should not be updated manually, as they will update as analysis properties are changed.

The Cell Size tab on the Options dialog
The Preferences tab on the Options dialog

Preferences
It is recommended that you leave the preference choice at the default of Bilinear Interpolation, but you can change it to Nearest Neighbor or Cubic Convolution if your data requires one of those choices. Bilinear Interpolation is a resampling method that uses the data file values of four pixels in a 2 × 2 window to calculate an output data file value by computing a weighted average of the input data file values with a bilinear function. The Nearest Neighbor option is a resampling method in which the output data file value is equal to the input pixel that has coordinates closest to the retransformed coordinates of the output pixel. The Cubic Convolution option is a resampling method that uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output data file value with a cubic function.
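The resampling itself is built into the extension. To make the difference between the two simpler methods concrete, here is a minimal NumPy sketch of sampling an image at a fractional coordinate with Bilinear Interpolation (the 2 × 2 weighted average) and with Nearest Neighbor; it is an illustration only, and the small test array and coordinates are assumptions.

    import numpy as np

    def bilinear_sample(img, x, y):
        """Sample img at fractional (x, y) using the 2 x 2 neighborhood,
        weighting the four pixels by their distance to the point."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
        bottom = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
        return top * (1 - fy) + bottom * fy

    def nearest_sample(img, x, y):
        """Nearest Neighbor: take the pixel whose center is closest."""
        return img[int(round(y)), int(round(x))]

    img = np.arange(16, dtype=float).reshape(4, 4)
    print(bilinear_sample(img, 1.5, 2.25), nearest_sample(img, 1.5, 2.25))

Nearest Neighbor keeps original pixel values unchanged, which is why it is the recommended choice when mosaicking; the averaging methods can blur or shift values along seams.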

Using the Options dialog
The following processes will take you through the parts you can change on the Options dialog.

The General Tab
1. Click the Image Analysis dropdown arrow, and click Options.
2. Navigate to the Working directory if it's not displayed in the box.
3. Click the dropdown arrow and select the Analysis mask if you want one, or navigate to the directory where it is stored.
4. Choose the Analysis Coordinate System.
5. Check or uncheck the Display warning box according to your needs.
6. Click the Extent tab to change Extents, or OK to finish.

The Extent Tab
1. Click the dropdown arrow for Analysis extent, and choose an extent, or navigate to a directory to choose a dataset for the extent.
2. If the coordinate boxes are on, you can type in coordinates if you know the exact ones to use.
3. If activated, click the dropdown arrow, and choose an image to Snap extent to, or navigate to the directory where it is stored.
4. Click the Cell Size tab, or OK.

Cell Size tab
1. Click the dropdown arrow, and choose the cell size, or navigate to the directory where it is stored.
2. If activated, type the cell size you want to use.
3. Type the number of rows.
4. Type the number of columns.
5. Click the Preferences tab or OK.
The Preferences tab has only the one option of clicking the dropdown arrow and choosing to resample using either Nearest Neighbor, Bilinear Interpolation, or Cubic Convolution.

Section 2
Working with features


4 Using Data Preparation

IN THIS CHAPTER
• Create New Image
• Subset Image
• Mosaic Images
• Reproject Image

When using the Image Analysis for ArcGIS extension, it is sometimes necessary to prepare your data first. You are given several options for preparing data in Image Analysis for ArcGIS. It is important to understand how to prepare your data before moving on to the different ways Image Analysis for ArcGIS gives you to manipulate your data. In this chapter you will learn how to:
• Create a new image
• Subset an image
• Mosaic images
• Reproject an image

Create New Image
The Create New Image function makes it easy to create a new image file. It also allows you to define the size and content of the file, as well as choosing whether the new image type will be thematic or continuous. With this feature, you also get to choose the value of columns and rows (the default value is 512, but you can change that), and you choose the data type as well. The data type determines the type of numbers and the range of values that can be stored in a raster layer.

Choose thematic for raster layers that contain qualitative and categorical information about an area, such as soils, land use, land cover, and roads. Thematic layers lend themselves to applications in which categories or themes are used. They are used to represent data measured on a nominal or ordinal scale. Continuous data is represented in raster layers that contain quantitative (measuring a characteristic on an interval or ratio scale) and related, continuous values. Continuous raster layers can be multiband or single band, such as Landsat, SPOT, digitized (scanned) aerial photograph, DEM, slope, and temperature.

The Number of Layers allows you to select how many layers to create in the new file. The Initial Value lets you choose the number to initialize the new file; every cell is given this value. When you are finished entering your information into the fields, you can click OK to create the image, or Cancel to close the dialog.

Data Type         Minimum Value   Maximum Value
Unsigned 1 bit    0               1
Unsigned 2 bit    0               3
Unsigned 4 bit    0               15
Unsigned 8 bit    0               255
Signed 8 bit      -128            127
Unsigned 16 bit   0               65,535
Signed 16 bit     -32,768         32,767
Unsigned 32 bit   0               2 billion
Signed 32 bit     -2 billion      2 billion
Float Single
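The dialog handles the file creation itself. As a rough illustration of how the data type constrains the cell values, here is a NumPy sketch of allocating an initialized raster; the function name, the array layout, and the restriction to integer data types are assumptions of the example, and writing the result to disk is omitted.

    import numpy as np

    def create_new_image(columns=512, rows=512, layers=1,
                         initial_value=0, dtype=np.uint8):
        """Allocate a new raster filled with initial_value; the dtype sets
        the range of values each cell can hold (integer types only here)."""
        info = np.iinfo(dtype)
        if not info.min <= initial_value <= info.max:
            raise ValueError("initial value outside the range of the data type")
        return np.full((layers, rows, columns), initial_value, dtype=dtype)

    img = create_new_image(columns=512, rows=512, layers=3,
                           initial_value=0, dtype=np.uint16)
    print(img.shape, img.dtype, np.iinfo(np.uint16).max)   # (3, 512, 512) uint16 65535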

Creating a new image
1. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Create New Image.
2. Choose Thematic or Continuous as the Output Image Type.
3. Click the dropdown arrow to choose the Data Type.
4. Type or click the arrows to enter how many Columns or Rows if different from the default number of 512.
5. Type or click the arrows to enter Number of Layers.
6. Type or click the arrows to enter the Initial Value.
7. Navigate to the directory where the Output Image should be stored.
8. Click OK.

Subset Image
This function allows you to copy a portion (a subset) of an input data file into an output data file. This may be necessary if you have an image file that is much larger than the particular area you need to study. Subset Image has the advantage of not only eliminating extraneous data, but it also speeds up processing, which can be important when dealing with multiband data. This feature is also accessible from the Utilities menu.

The Subset Image function can be used to subset an image either spatially or spectrally. You will probably spatially subset more frequently than spectrally. Spatial subsets are particularly useful if you have a large image and you only want to subset part of it for analysis. To subset spatially, you first bring up the Options dialog, which allows you to apply a mask or extent or set the cell size. These options are used for all Image Analysis for ArcGIS functions including Subset Image. You can use the Zoom In tool to draw a rectangle around the specific area you wish to subset and go from there.

If you wish to subset an image spectrally, you do it directly in the Subset Image dialog by entering the desired band numbers to extract from the image. The Subset Image function works on multiband continuous data to separate that data into bands. For example, if you are working with a TM image that has seven bands of data, you may wish to make a subset of bands 2, 3, and 4, and discard the rest. Following are illustrations of a TM image of the Amazon as it undergoes a spectral subset.
The Amazon TM image before subsetting
Amazon TM after a spectral subset
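The Subset Image dialog performs both operations for you. To illustrate what a spectral subset of bands 2, 3, and 4 and a spatial subset by row and column extent amount to, here is a small NumPy sketch; the band-interleaved array layout and the example extents are assumptions, not the extension's file format.

    import numpy as np

    # Synthetic 7-band TM-style image: (bands, rows, cols)
    tm = np.random.randint(0, 255, (7, 400, 400), dtype=np.uint8)

    # Spectral subset: keep bands 2, 3, and 4 and discard the rest
    # (band numbers are 1-based in the dialog, 0-based in the array).
    spectral_subset = tm[[1, 2, 3], :, :]

    # Spatial subset: keep only the rectangle defined by top/bottom/left/right.
    top, bottom, left, right = 50, 250, 100, 300
    spatial_subset = tm[:, top:bottom, left:right]

    print(spectral_subset.shape)   # (3, 400, 400)
    print(spatial_subset.shape)    # (7, 200, 200)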

The next illustrations reflect images using the spatial subsetting option. In order to specify the particular area to subset, you click the Zoom In tool, draw a rectangle over the area, open the Options dialog, and select Same As Display on the Extent tab. The rectangle is defined by Top, Bottom, Left, and Right coordinates. Top and Bottom are measured as the locations on the Y-axis, and the Left and Right coordinates are measured on the X-axis. You can then save the subset image and work from there on your analysis.
The Options dialog
The image of the Pentagon before spatial subsetting
The Pentagon subset image after setting the Analysis Extent in Options

Subsetting an image spectrally
1. Click Add Data to add the image to the view.
2. Double-click the image name in the Table of contents to open Layer Properties.
3. Click the Symbology tab in Layer Properties.
4. Click Stretched in the Show panel.
5. Click the Band dropdown arrow, and select the layer you want to subset.
6. Click Apply and OK.

7. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Subset Image.
8. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
9. Using a comma for separation, type the band numbers you want to subset in the text box.
10. Type the file name of the Output Image, or navigate to the directory where it should be stored.
11. Click OK.

Subsetting an image spatially
1. Click the Add Data button to add your image.
2. Click the Zoom In tool, and draw a rectangle over the area you want to subset.
3. Click the Image Analysis menu, and click Options.
4. Click the Extent tab.
5. Click the Analysis extent dropdown arrow, and select Same As Display.
6. Click Apply and OK.
7. Click the Image Analysis dropdown arrow, click Save As, and save the image in the appropriate directory.

Mosaic Images
Mosaicking is the process of joining georeferenced images together to form a larger image. You can mosaic single or multiband continuous data, or thematic data. The input images must all contain map and projection information, although they need not be in the same projection or have the same cell sizes. Calibrated input images are also supported. All input images must have the same number of layers. It is also important that the images you plan to mosaic contain the same number of bands; you cannot mosaic a seven band TM image with a six band TM image. You can, however, use Subset Image to subset bands from an existing image and then mosaic regardless of the number of bands they originally contained.

Image Analysis for ArcGIS mosaics images strictly based on their appearance in the view. This allows you to mosaic a large number of images without having to make them all active. It is extremely important when mosaicking to arrange your images in the view as you want the output theme to appear before you mosaic them. During processing, each image is fed through its own lookup table, and the images are processed using whatever stretch you've specified in the Layer Properties dialog. This allows you to adjust the stretch of each image independently to achieve the desired overall color balance. The output mosaicked image has the stretch built in, and should be viewed with no stretch.

You can mosaic images with different cell sizes or resolutions. The Cell Size is initially set to the maximum cell size, so if you mosaic two images, one with a 4-meter resolution and one with a 5-meter resolution, the output mosaicked image has a 5-meter resolution. You can set the Cell Size in the Options dialog to whatever cell size you like so that the output mosaicked image has the cell size you selected. Another Options feature to take note of is the Preferences tab. For mosaicking images, you should resample using Nearest Neighbor; other resampling methods use averages to compute pixel values and can produce an edge effect. This will ensure that the mosaicked pixels do not differ in their appearance from the original image.

The Extent tab on the Options dialog will default to Union of Inputs for mosaicking images. It is recommended that you leave it at the default of Union of Inputs. If, for some reason, you want to use a different extent, you can change it in the Options dialog and check the Use Extent from Analysis Options box on the Mosaic Images dialog.

With the Mosaic tool you are also given a choice of how to handle image overlaps by using the order displayed, maximum value, minimum value, or average value. Choose:
Order Displayed: replaces each pixel in the overlap area with the pixel value of the image that is on top in the view.
Maximum Value: replaces each pixel in the overlap area with the greater value of the corresponding pixels in the overlapping images.
Minimum Value: replaces each pixel of the overlap area with the lesser value of the corresponding pixels in the overlapping images.
Average Value: replaces each pixel in the overlap area with the average of the values of the corresponding pixels in the overlapping images.

The color balancing options let you choose between balancing by brightness/contrast, histogram matching, or none. If you choose brightness/contrast, the mosaicked image will be balanced by utilizing the adjustments you have made in Layer Properties/Symbology. If you choose Histogram Matching, the input images are adjusted to have histograms similar to the top image in the view. Select None if you don't want the pixel values adjusted.

How to Mosaic Images

1. Add the images you want to mosaic to the view.
2. Arrange the images in the view in the order that you want them in the mosaic.
3. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Mosaic Images.
4. Click the Handle Image Overlaps by dropdown arrow, and click the method you want to use.
5. Choose the Color Balance method.
6. If you want the images automatically cropped, check the box, and enter the Percent by which to crop the images.
7. Check the box if you want to use the extent you set in Analysis Options.
8. Navigate to the directory where the Output Image should be stored.
9. Click OK.

For more information on mosaicking images, see chapter 2, "Quick-start tutorial".

Reproject Image

Reproject Image gives you the ability to reproject raster image data from one map projection to another. ArcMap has the capability to reproject images on the fly by simply setting the desired projection: choose View/Data Frame Properties, select the Coordinate System tab, and the desired projection may then be selected. At times, however, you may need to produce an image in a specific projection. After you select the coordinate system, you apply it and go to Reproject Image in Image Analysis for ArcGIS. By having the desired output projection specified in the Data Frame Properties, the only things you need to specify in Reproject Image are the input and output images.

Reproject Image, like all Image Analysis for ArcGIS functions, observes the settings in the Options dialog, so don't forget to use Options to set Extent, Cell Size, and so on if so desired.

Here is the reprojected image after changing the Coordinate System to Mercator (world):

(Figures: Before Reproject Image / After Reproject Image)

How to Reproject an Image

1. Click Add Data, and add the image you want to reproject to the view.
2. Right-click in the view, and click Properties to bring up the Data Frame Properties dialog.
3. Click the Coordinate System tab.
4. Click Predefined and choose whatever coordinate system you want to use to reproject the image.
5. Click Apply and OK.

6. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Reproject Image.
7. Click the Input Image dropdown arrow and click the file you want to use, or navigate to the directory where it is stored.
8. Navigate to the directory where the Output Image should be stored.
9. Click OK.

5  Performing Spatial Enhancement

IN THIS CHAPTER
• Convolution
• Non-Directional Edge
• Focal Analysis
• Resolution Merge

Spatial Enhancement is a function that enhances an image using the values of individual and surrounding pixels. Spatial Enhancement deals largely with spatial frequency, which is the difference between the highest and lowest values of a contiguous set of pixels. Jensen (1986) defines spatial frequency as "the number of changes in brightness value per unit distance for any part of an image." There are three types of spatial frequency:

• zero spatial frequency - a flat image, in which every pixel has the same value
• low spatial frequency - an image consisting of a smoothly varying gray scale
• high spatial frequency - an image consisting of drastically changing pixel values, such as a checkerboard of black and white pixels

The Spatial Enhancement feature lets you use convolution, non-directional edge, focal analysis, and resolution merge to enhance your images. This chapter focuses on the explanation of these features as well as how to apply them to your data. Depending on what you need to do to your image, you will select one feature from the Spatial Enhancement menu. This chapter is organized according to the order in which the Spatial Enhancement tools appear, so you may want to skip ahead if the information you are seeking is about one of the tools near the end of the menu list.

Convolution

Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen 1996).

Convolution filtering is one method of spatial filtering. Some texts use the terms synonymously. The word filtering is a broad term, which refers to the altering of spatial or spectral features for image enhancement (Jensen 1996).

A convolution kernel is a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels. The numbers in the matrix serve to weight this average toward particular pixels. These numbers are often called coefficients, because they are used as such in the mathematical equations.

Kernel
-1  -1  -1
-1  16  -1
-1  -1  -1

Convolution example

To understand how one pixel is convolved, imagine that the convolution kernel is overlaid on the data file values of the image (in one band) so that the pixel to be convolved is in the center of the window.

Data
2  8  6  6  6
2  8  6  6  6
2  2  8  6  6
2  2  2  8  6
2  2  2  2  8

To compute the output value for this pixel, each value in the convolution kernel is multiplied by the image pixel value that corresponds to it. These products are summed, and the total is divided by the sum of the values in the kernel, as shown in this equation:

integer [((-1 × 8) + (-1 × 6) + (-1 × 6) + (-1 × 2) + (16 × 8) + (-1 × 6) + (-1 × 2) + (-1 × 2) + (-1 × 8)) / (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1)]
= int [(128 - 40) / (16 - 8)]
= int (88 / 8)
= int (11) = 11

Applying convolution filtering

Apply Convolution filtering by clicking the Image Analysis dropdown arrow and choosing Convolution from the Spatial Enhancement menu.

When the 2 × 2 set of pixels near the center of this 5 × 5 image is convolved, the output values are:

-   -   -   -   -
-  11   5   -   -
-   0  11   -   -
-   -   -   -   -
-   -   -   -   -

Convolution formula

The following formula is used to derive an output data file value for the pixel being convolved (in the center):

V = int [ ( Σ (i = 1 to q) Σ (j = 1 to q) f_ij × d_ij ) / F ]

Where:
f_ij = the coefficient of a convolution kernel at position i, j (in the kernel)
d_ij = the data value of the pixel that corresponds to f_ij
q = the dimension of the kernel, assuming a square kernel (if q = 3, the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is zero
V = the output pixel value

Source: Modified from Jensen 1996; Schowengerdt 1983

The sum of the coefficients (F) is used as the denominator of the equation above, so that the output values are in relatively the same range as the input values. Since F cannot equal zero (division by zero is not defined), F is set to 1 if the sum is zero.

The kernel used in this example is a high frequency kernel. The relatively lower values become lower, and the higher values become higher, thus increasing the spatial frequency of the image.
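The formula can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not the Image Analysis for ArcGIS implementation; it assumes the band is held in a NumPy array and that the kernel is square.

import numpy as np

def convolve_pixel(data, kernel, row, col):
    """Convolve one pixel: sum of kernel * neighborhood, divided by F."""
    q = kernel.shape[0]                  # kernel dimension (q x q)
    half = q // 2
    window = data[row - half:row + half + 1, col - half:col + half + 1]
    F = kernel.sum()
    if F == 0:                           # division by zero is not defined,
        F = 1                            # so F is set to 1 for zero sum kernels
    return int((kernel * window).sum() / F)

# The 5 x 5 example data and the high frequency kernel from the text
data = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]])
kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]])

print(convolve_pixel(data, kernel, 2, 2))   # center pixel of the image -> 11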

Zero sum kernels

Zero sum kernels are kernels in which the sum of all coefficients in the kernel equals zero. When a zero sum kernel is used, the sum of the coefficients is not used in the convolution equation, as above. In this case, no division is performed (F = 1), since division by zero is not defined.

This generally causes the output values to be:

• zero in areas where all input values are equal (no edges)
• low in areas of low spatial frequency
• extreme in areas of high spatial frequency (high values become much higher, low values become much lower)

Therefore, a zero sum kernel is an edge detector, which usually smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is at the edges between homogeneous (homogeneity is low spatial frequency) groups of pixels. The resulting image often consists of only edges and zeros.

Zero sum kernels can be biased to detect edges in a particular direction. For example, this 3 × 3 kernel is biased to the south (Jensen 1996):

-1  -1  -1
 1  -2   1
 1   1   1

High frequency kernels

A high frequency kernel, or high pass kernel, has the effect of increasing spatial frequency:

-1  -1  -1
-1  16  -1
-1  -1  -1

High frequency kernels serve as edge enhancers, since they bring out the edges between homogeneous groups of pixels. Unlike edge detectors (such as zero sum kernels), they highlight edges and do not necessarily eliminate other features.

When a high frequency kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, like this:

BEFORE              AFTER
204  200  197       -    -    -
201  106  209       -   10    -
198  200  210       -    -    -

...the low value gets lower. Inversely, when the high frequency kernel is used on a set of pixels in which a relatively high value is surrounded by lower values:

BEFORE              AFTER
 64   60   57       -    -    -
 61  125   69       -  188    -
 58   60   70       -    -    -

...the high value becomes higher. In either case, spatial frequency is increased by this kernel.

Low frequency kernels

Below is an example of a low frequency kernel, or low pass kernel, which decreases spatial frequency:

1  1  1
1  1  1
1  1  1

This kernel simply averages the values of the pixels, causing them to be more homogeneous. The resulting image looks either more smooth or more blurred.

(Figures: Convolution with High Pass - before and after images.)

Apply Convolution

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Convolution.
2. Click the Input Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Click the Kernel dropdown arrow, and click the kernel you want to use.
4. Choose Reflection or Background Fill.
5. Navigate to the directory where the Output Image should be stored.
6. Click OK.

Applying Convolution

Convolution allows you to perform image enhancement operations such as averaging and high pass or low pass filtering. Each data file value of the new output file is calculated by centering the kernel over a pixel and multiplying the original values of the center pixel and the appropriate surrounding pixels by the corresponding coefficients from the matrix. To make sure the output values are within the general range of the input values, these numbers are summed and then divided by the sum of the coefficients. If the sum is zero, the division is not performed.

Reflection fills in the area beyond the edge of the image with a reflection of the values at the edge. Background Fill uses zeros to fill in the kernel area beyond the edge of the image.

Non-Directional Edge

The Non-Directional Edge function averages the results of two orthogonal first derivative edge detectors. The filters used are the Sobel and Prewitt filters. Both of these filters are based on a calculation of the 1st derivative, or slope, in both the x and y directions. Both use orthogonal kernels convolved separately with the original image, and then combined.

Most of the standard image processing filters are implemented as a single pass moving window (kernel) convolution. Examples include low pass, edge enhance, edge detection, and summary filters. The Non-Directional Edge is based on the Sobel zero-sum convolution kernel. For this model, a Sobel filter has been selected. To convert this model to the Prewitt filter calculation, the kernels must be changed according to the example below.

Sobel:
horizontal          vertical
-1  -2  -1          1   0  -1
 0   0   0          2   0  -2
 1   2   1          1   0  -1

Prewitt:
horizontal          vertical
-1  -1  -1          1   0  -1
 0   0   0          1   0  -1
 1   1   1          1   0  -1

(Figures: Image of Seattle before applying Non-Directional Edge / After Non-Directional Edge)
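A minimal Python sketch of this idea (not the product's code) convolves the two orthogonal Sobel kernels separately and then combines the results. Whether the two responses are combined by a simple average or as a gradient magnitude is an implementation detail; a simple average of the absolute responses is used here to mirror the description above, and scipy.ndimage is assumed to be available for the 2-D convolution.

import numpy as np
from scipy.ndimage import convolve   # assumed available

sobel_h = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)
sobel_v = np.array([[ 1,  0, -1],
                    [ 2,  0, -2],
                    [ 1,  0, -1]], dtype=float)

def non_directional_edge(band):
    """Average the absolute responses of the two orthogonal Sobel edge detectors."""
    gx = convolve(band.astype(float), sobel_h, mode="reflect")  # edge area filled by reflection
    gy = convolve(band.astype(float), sobel_v, mode="reflect")
    return (np.abs(gx) + np.abs(gy)) / 2.0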

Using Non-Directional Edge

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Non-Directional Edge.
2. Click the Input Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Choose Sobel or Prewitt.
4. Choose Reflection or Background Fill.
5. Type the file name of the Output Image, or navigate to the directory where it should be stored.
6. Click OK.

In step 4, Reflection fills in the area beyond the edge of the image with a reflection of the values at the edge. Background Fill uses zeros to fill in the kernel area beyond the edge of the image.

Focal Analysis

The Focal Analysis function enables you to perform one of several types of analysis on class values in an image file, using a process similar to convolution filtering. Focal Analysis evaluates the region surrounding the pixel of interest (center pixel). The operations that can be performed on the pixel of interest include:

• Standard Deviation - measure of texture
• Sum
• Mean - good for despeckling radar data
• Median - despeckle radar
• Min
• Max

These functions allow you to select the size of the surrounding region to evaluate by selecting the window size. The Median Filter model is useful for reducing noise such as random spikes in data sets, dead sensor striping, and other impulse imperfections in any type of image. It is also useful for enhancing thematic images.

(Figures: An image before Focal Analysis / After Focal Analysis is performed)
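As an illustration only (and assuming the image band is already loaded into a NumPy array, with scipy.ndimage available), a focal median over a square window can be sketched like this; the actual tool also offers the other statistics listed above and several neighborhood shapes.

import numpy as np
from scipy.ndimage import median_filter   # assumed available

def focal_median(band, size=3):
    """Replace each pixel with the median of the size x size window around it."""
    return median_filter(band, size=size, mode="reflect")

# despeckled = focal_median(radar_band, size=3)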

Applying Focal Analysis

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Focal.
2. Click the Input Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Click the Focal Function dropdown arrow, and click the function you want to use.
4. Click the Neighborhood Definition dropdown arrow, and click the Matrix size you want to use.
5. Click the Neighborhood Shape dropdown arrow, and click the shape you want to use.
6. Type the file name of the Output Image, or navigate to the directory where it should be stored.
7. Click OK.

Focal Analysis Results

Focal Analysis is similar to Convolution in the process that it uses. With Focal Analysis, you are able to perform several different types of analysis on the pixel values in an image file.

Resolution Merge

The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal resolution. This function merges imagery of differing spatial resolutions. Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT panchromatic has one broad band with very good spatial resolution of 10 m. Combining these two images to yield a seven-band data set with 10 m resolution provides the best characteristics of both sensors.

A number of models have been suggested to achieve this image merge. Welch and Ehlers (1987) used forward-reverse RGB to IHS transforms, replacing I (from transformed TM data) with the SPOT panchromatic image. However, this technique is limited to three bands (R, G, B). Chavez (1991), among others, uses the forward-reverse principal components transforms with the SPOT image, replacing PC-1. Another technique (Schowengerdt 1980) additively combines a high frequency image derived from the high spatial resolution data (i.e., SPOT panchromatic) with the high spectral resolution Landsat TM image.

In the above two techniques, it is assumed that the intensity component (PC-1 or I) is spectrally equivalent to the SPOT panchromatic image, and that all the spectral information is contained in the other PCs or in H and S. Since SPOT data does not cover the full spectral range that TM data does, this assumption does not strictly hold. It is unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchromatic) image.

Brovey Transform

The Resolution Merge function uses the Brovey Transform method of resampling low spatial resolution data to a higher spatial resolution while retaining spectral information. In the Brovey Transform, three bands are used according to the following formula:

DN_B1_new = [DN_B1 / (DN_B1 + DN_B2 + DN_B3)] × DN_high res. image
DN_B2_new = [DN_B2 / (DN_B1 + DN_B2 + DN_B3)] × DN_high res. image
DN_B3_new = [DN_B3 / (DN_B1 + DN_B2 + DN_B3)] × DN_high res. image

Where: B = band

The Brovey Transform was developed to visually increase contrast in the low and high ends of an image's histogram (i.e., to provide contrast in shadows, water, and high reflectance areas such as urban features). Consequently, the Brovey Transform is good for producing RGB images with a higher degree of contrast in the low and high ends of the image histogram and for producing visually appealing images.

Since the Brovey Transform is intended to produce RGB images, only three bands at a time should be merged from the input multispectral scene, such as bands 3, 2, 1 from a SPOT or Landsat TM image or 4, 3, 2 from a Landsat TM image. The resulting merged image should then be displayed with bands 1, 2, 3 to RGB.
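For orientation only, here is a rough Python sketch of the Brovey arithmetic above. It assumes the three multispectral bands have already been resampled to the grid of the high resolution band (the tool itself handles that for you), and the division-by-zero guard is an added assumption; treat it as a sketch, not the product's implementation.

import numpy as np

def brovey_merge(b1, b2, b3, high_res):
    """Brovey Transform: weight each band by its share of the band sum,
    then scale by the high resolution (panchromatic) values."""
    total = (b1.astype(float) + b2.astype(float) + b3.astype(float))
    total = np.where(total == 0, 1.0, total)    # avoid division by zero (assumption)
    return (b1 / total * high_res,
            b2 / total * high_res,
            b3 / total * high_res)

# r, g, b = brovey_merge(tm_band3, tm_band2, tm_band1, spot_pan)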

Resolution Merge

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Resolution Merge.
2. Click the High Resolution Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Click the Multi-Spectral Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Using Resolution Merge

Use Resolution Merge to integrate imagery of different spatial resolutions (pixel size).

The following images display the Resolution Merge function:

(Figures: High Resolution Image / Multi-Spectral Image / Resolution Merge)


6  Using Radiometric Enhancement

IN THIS CHAPTER
• LUT (Lookup Table) Stretch
• Histogram Equalization
• Histogram Matching
• Brightness Inversion

Radiometric enhancement deals with the individual values of the pixels in an image. Radiometric Enhancement consists of functions to enhance your image by using the values of individual pixels within each band. It differs from Spatial Enhancement, which takes into account the values of neighboring pixels.

Depending on the points and the bands in which they appear, radiometric enhancements that are applied to one band may not be appropriate for other bands. Therefore, the radiometric enhancement of a multiband image can usually be considered as a series of independent, single-band enhancements (Faust 1989).

LUT Stretch

LUT Stretch creates an output image that contains the data values as modified by a lookup table. The output is 3 bands.

Linear and nonlinear

The terms linear and nonlinear, when describing types of spectral enhancement, refer to the function that is applied to the data to perform the enhancement.

Linear contrast stretch

A linear contrast stretch is a simple way to improve the visible contrast of an image. It is often necessary to contrast-stretch raw image data, so that they can be seen on the display. In most raw data, the data file values fall within a narrow range, usually a range much narrower than the display device is capable of displaying. That range can be expanded to utilize the total range of the display device (usually 0 to 255).

Nonlinear contrast stretch

A nonlinear spectral enhancement can be used to gradually increase or decrease contrast over a range, instead of applying the same amount of contrast (slope) across the entire image. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges.

Piecewise linear contrast stretch

A piecewise linear contrast stretch allows for the enhancement of a specific portion of data by dividing the lookup table into three sections: low, middle, and high. It enables you to create a number of straight line segments that can simulate a curve. You can enhance the contrast or brightness of any section in a single color gun at a time. This technique is very useful for enhancing image areas in shadow or other areas of low contrast. This process is done in Layer Properties in Image Analysis for ArcGIS.

A piecewise linear stretch uses a polyline function to increase contrast to varying degrees over different ranges of the data. The contrast value for each range represents a percentage of the available output range that particular range occupies. A piecewise linear contrast stretch normally follows two rules:

1. The data values are continuous; there can be no break in the values between High, Middle, and Low. Range specifications adjust in relation to any changes to maintain the data value range.
2. The data values specified can go only in an upward, increasing direction.

Since rules 1 and 2 above are enforced, as the contrast and brightness values are changed, they may affect the contrast and brightness of other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease.

Contrast stretch

When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table. Contrast stretching involves taking a narrow input range and stretching the output brightness values for those same pixels over a wider range.

Contrast stretch on the display

Usually, a contrast stretch is performed on the display device only, so that the data file values are not changed. Lookup tables are created that convert the range of data file values to the maximum range of the display device. You can then edit and save the contrast stretch values and lookup tables as part of the raster data image file. These values are loaded into the view as the default display values the next time the image is displayed.

The statistics in the image file contain the mean, standard deviation, and other statistics on each band of data. The mean and standard deviation are used to determine the range of data file values to be translated into brightness values or new data file values. You can specify the number of standard deviations from the mean that are to be used in the contrast stretch. Usually the data file values that are two standard deviations above and below the mean are used. If the data has a normal distribution, then this range represents approximately 95 percent of the data.

The mean and standard deviation are used instead of the minimum and maximum data file values because the minimum and maximum data file values are usually not representative of most of the data. A notable exception occurs when the feature being sought is in shadow. The shadow pixels are usually at the low extreme of the data file values, outside the range of two standard deviations from the mean.

By manipulating the lookup tables as in the following illustration, the maximum contrast in the features of an image can be brought out. This figure shows how the contrast stretch manipulates the histogram of the data, increasing contrast in some areas and decreasing it in others.

Varying the contrast stretch

There are variations of the contrast stretch that can be used to change the contrast of values over a specific range, or by a specific amount. This is also a good example of a piecewise linear contrast stretch, which is created by adding breakpoints to the histogram.
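As a rough Python illustration of the standard-deviation stretch described above (not the product's code), the following maps the range from two standard deviations below the mean to two above it onto the display range 0 to 255; the band is assumed to be a NumPy array.

import numpy as np

def std_dev_stretch(band, n_std=2.0):
    """Linearly stretch mean +/- n_std standard deviations to the 0..255 display range."""
    mean, std = band.mean(), band.std()
    low, high = mean - n_std * std, mean + n_std * std
    stretched = (band.astype(float) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)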

Apply LUT Stretch

1. Click the Image Analysis dropdown arrow, point to Radiometric Enhancement, and click LUT Stretch.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Navigate to the directory where the Output Image should be stored, and set the output type to TIFF.
4. Click OK.

LUT Stretch Class

LUT Stretch provides a means of producing an output image that has the stretch built into the pixel values, for use with packages that have no stretching capabilities.

Histogram Equalization

Histogram Equalization is a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the peaks of the histogram and lessened at the tails. Histogram Equalization can also separate pixels into distinct groups if there are few output values over a wide range. This can have the visual effect of a crude classification.

To perform a Histogram Equalization, the pixel values of an image (either data file values or brightness values) are reassigned to a certain number of bins, which are simply numbered sets of pixels. The pixels are then given new values, based upon the bins to which they are assigned.

The total number of pixels is divided by the number of bins, equaling the number of pixels per bin, as shown in the following equation:

A = T / N

Where:
N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin

The pixels of each input value are assigned to bins, so that the number of pixels in each bin is as close to A as possible.

Consider the following: there are 240 pixels represented by this histogram. To equalize this histogram to 10 bins, there would be:

240 pixels / 10 bins = 24 pixels per bin = A

(Figures: Original Histogram, with pixels at the peak spread apart so contrast is gained; After Equalization, with pixels at the tails grouped so contrast is lost.)

Source: Modified from Gonzalez and Wintz 1977

The 10 bins are rescaled to the range 0 to M. In this example, M = 9, because the input values ranged from 0 to 9, so that the equalized histogram can be compared to the original. The output histogram of this equalized image looks like the following illustration:

(Figure: Output histogram of the equalized image, with A = 24; the numbers inside the bars are the input data file values.)

To assign pixels to bins, the following equation is used:

B_i = int [ ( Σ (k = 1 to i-1) H_k + H_i / 2 ) / A ]

Where:
A = equalized number of pixels per bin (see above)
H_i = the number of values with the value i (histogram)
int = integer function (truncating real numbers to integer)
B_i = bin number for pixels with value i

Effect on contrast

By comparing the original histogram of the example data with the one above, you can see that the enhanced image gains contrast in the peaks of the original histogram. For example, the input range of 3 to 7 is stretched to the range 1 to 8. However, contrast among the tail pixels, which usually make up the darkest and brightest regions of the input image, is lost. Input values 0 through 2 all have the output value of 0, so data values at the tails of the original histogram are grouped together.

The resulting histogram is not exactly flat, since the pixels can rarely be grouped together into bins with an equal number of pixels. Sets of pixels with the same value are never split up to form equal bins.

Performing Histogram Equalization

1. Click the Image Analysis dropdown arrow, point to Radiometric Enhancement, and click Histogram Equalization.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Type or click the arrows to enter the Number of Bins.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Histogram Equalization

Perform Histogram Equalization when you need to redistribute pixels to approximate a flat histogram. The Histogram Equalization process works by redistributing pixel values so that there are approximately the same number of pixels with each value within a range. Histogram Equalization can also separate pixels into distinct groups if there are few output values over a wide range. This process can have the effect of a crude classification.

Histogram Matching

Histogram Matching is the process of determining a lookup table that converts the histogram of one image so that it resembles the histogram of another. Histogram Matching is useful for matching data of the same or adjacent scenes that were collected on separate days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.

To achieve good results with Histogram Matching, the two input images should have similar characteristics:

• The general shape of the histogram curves should be similar.
• Relative dark and light features in the image should be the same.
• For some applications, the spatial resolution of the data should be the same.
• The relative distributions of land covers should be about the same, even when matching scenes that are not of the same area.

To match the histograms, a lookup table is mathematically derived, which serves as a function for converting one histogram to the other, as illustrated here.

(Figure: Source histogram (a), mapped through the lookup table (b), approximates the model histogram (c).)
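A common way to derive such a lookup table is to match cumulative histograms; the short Python sketch below follows that idea for 8-bit bands held in NumPy arrays. It is offered only as an illustration of the concept, not as the algorithm Image Analysis for ArcGIS uses internally.

import numpy as np

def match_histogram(source, reference, levels=256):
    """Build a lookup table so the source band's histogram approximates the reference's."""
    src_hist = np.bincount(source.ravel(), minlength=levels)
    ref_hist = np.bincount(reference.ravel(), minlength=levels)
    src_cdf = np.cumsum(src_hist) / source.size        # cumulative distributions
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, find the reference level with the closest CDF value.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)
    return lut[source]                                  # apply the lookup table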

2 3 4 5 92 USING IMAGE ANALYSIS FOR ARCGIS .Performing Hi stog ram Ma tchi ng 1. 3. 1 4. 2. Navigate to the directory where the Output Image should be stored. Click the Match Image dropdown arrow. and is particularly useful for mosaicking images or change detection. H is t o g r a m M a t c h in g Perform Histogram Matching when using matching data of the same or adjacent scenes that were gathered on different days and have differences due to the angle of the sun or atmospheric effects Histogram Matching mathematically determines a lookup table that will convert the histogram of one image to resemble the histogram of another. or navigate to the directory where it is stored. and click Histogram Match. and click the file you want to use. Click OK. 5. and click the file you want to use. Click the Image Analysis dropdown arrow. or navigate to the directory where it is stored. point to Radiometric Enhancement. Click the Input Image dropdown arrow.

Brightness Inversion

The Brightness Inversion functions produce images that have the opposite contrast of the original image. Dark detail becomes light, and light detail becomes dark. This can also be used to invert a negative image that has been scanned to produce a positive image.

Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of the low DN pixels. This function applies the following algorithm:

DN_out = 1.0           if 0.0 < DN_in < 0.1
DN_out = 0.1 / DN_in   if 0.1 < DN_in < 1

(Figures: An image before Brightness Inversion / The same image after Brightness Inversion)
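Sketched in Python for illustration only, with the band first scaled to the 0 to 1 range that the algorithm above assumes (the scaling step is an assumption, not part of the published formula):

import numpy as np

def brightness_inverse(band):
    """Inverse: DN_out = 1.0 where DN_in < 0.1, else 0.1 / DN_in (inputs scaled to 0..1)."""
    dn = band.astype(float) / band.max()                     # scale to 0..1 (assumption)
    return np.where(dn < 0.1, 1.0, 0.1 / np.maximum(dn, 1e-6))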

Dark detail becomes light. Navigate to the directory where the Output Image should be stored.Ap plying Brightn ess Inv ers ion 1. and click the file you want to use. point to Radiometric Enhancement. and light becomes dark 2 3 4 94 USING IMAGE ANALYSIS FOR ARCGIS . Click the Input Image dropdown arrow. 3. and click Brightness Inversion. Images can be produced that have the opposite contrast of the original image. 4. 1 B r ig h t n e s s I n v e r s i o n This function allows both linear and nonlinear reversal of the image intensity range. Click the Image Analysis dropdown arrow. Click OK. 2. or navigate to the directory where it is stored.

7  Applying Spectral Enhancement

IN THIS CHAPTER
• RGB to IHS
• IHS to RGB
• Vegetative Indices
• Color IR to Natural Color

Spectral Enhancement enhances images by transforming the values of each pixel on a multiband basis, such as changing the bands in an image from red, green, and blue to intensity, hue, and saturation. The techniques in this chapter all require more than one band of data. They can be used to:

• extract new bands of data that are more interpretable to the eye
• apply mathematical transforms and algorithms
• display a wider variety of information in the three available color guns (R, G, B)

You can use the features of Spectral Enhancement to study such patterns as might occur with deforestation or crop rotation, and to see images in a more natural state or view images in different ways.

RGB to IHS

The color monitors used for image display on image processing systems have three color guns. These correspond to red, green, and blue (R, G, B), the additive primary colors. When displaying three bands of a multiband data set, the viewed image is said to be in R, G, B space.

However, it is possible to define an alternate color space that uses intensity (I), hue (H), and saturation (S) as the three positioned parameters (in lieu of R, G, and B). This system is advantageous in that it presents colors more nearly as perceived by the human eye.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).
• Saturation represents the purity of color and also varies linearly from 0 to 1.
• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension. In the following image, 0 to 255 is the selected range; however, it could be defined as any data range, but hue must vary from 0 to 360 to define the entire sphere (Buchanan 1979).

(Figure: The variance of intensity, hue, and saturation in RGB to IHS.)

The algorithm used in the Image Analysis for ArcGIS RGB to IHS transform is (Conrac 1980):

R = (M - r) / (M - m)
G = (M - g) / (M - m)
B = (M - b) / (M - m)

R. The equations for calculating hue in the range of 0 to 360 are: If M = m. G. or b least value.0.0.0. g. largest value. r. or B least value. b M m = = are each in the range of 0 to 1. H = 60 (2 + b . corresponding to the color with the largest value. g.5. or b Where: R. largest value.5. r. B M m = = are each in the range of 0 to 1.b) If B = M.0 are: If M = m.r) APPLYING SPECTRAL ENHANCEMENT 97 . H = 60 (4 + r . G. R. M–m S = -------------M+m M–m S = ----------------------2–M–m If I > 0. G.0 is: M+m I = -------------2 The equations for calculating saturation in the range of 0 to 1. corresponding to the color with the least value. G. are each in the range of 0 to 1. H = 0 If R = M. G. S = 0 If I ≤ 0. G.g) If G = M. The equation for calculating intensity in the range of 0 to 1. g. or B At least one of the R. or B values is 1. or B values is 0.Where: R. H = 60 (6 + g . B r. and at least one of the R.

RGB to IHS

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click RGB to IHS.
2. Click the Input Image dropdown arrow, and click the image you want to use, or navigate to the directory where it is stored.
3. Navigate to the directory where the Output Image should be stored.
4. Click OK.

Using RGB to IHS applies an algorithm that transforms red, green, and blue (RGB) values to intensity, hue, and saturation (IHS) values.

IHS to RGB

IHS to RGB is intended as a complement to the standard RGB to IHS transform. The values for hue (H), a circular dimension, are 0 to 360, and I and S are in the range of 0 to 1. However, depending on the dynamic range of the DN values of the input image, it is possible that I or S or both occupy only a part of the 0 to 1 range. Therefore, in the IHS to RGB algorithm, a min-max stretch is applied to either intensity (I), saturation (S), or both, so that they more fully utilize the 0 to 1 value range. After stretching, the full IHS image is retransformed back to the original RGB space. As the parameter Hue is not modified, it largely defines what we perceive as color, and the resultant image looks very much like the input image.

It is not essential that the input parameters (IHS) to this transform be derived from an RGB to IHS transform. You could define I and/or S as other parameters, set Hue at 0 to 360, and then transform to RGB space. This is a method of color coding other data sets. In another approach (Daily 1983), H and I are replaced by low- and high-frequency radar imagery. You can also replace I with radar intensity before the IHS to RGB transform (Holcomb 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge Landsat TM with SPOT panchromatic imagery (Chavez 1991).

The algorithm used by Image Analysis for ArcGIS for the IHS to RGB function is (Conrac 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - I(S)
m = 2I - M

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m)(H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m)((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m)((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m)((360 - H) / 60)

The equations for calculating B in the range of 0 to 1.0 are:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m)((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m)((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
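The piecewise equations translate directly into code. The Python sketch below is simply a per-pixel transcription of the formulas above (H in degrees, I and S already stretched to the 0 to 1 range); it is not taken from the product's source.

def ihs_to_rgb(i, h, s):
    """Per-pixel IHS to RGB following the Conrac (1980) equations quoted above."""
    M = i * (1 + s) if i <= 0.5 else i + s - i * s
    m = 2 * i - M

    def ramp(x):                       # linear segment between m and M over 60 degrees
        return m + (M - m) * (x / 60.0)

    if h < 60:      r = ramp(h)
    elif h < 180:   r = M
    elif h < 240:   r = ramp(240 - h)
    else:           r = m

    if h < 120:     g = m
    elif h < 180:   g = ramp(h - 120)
    elif h < 300:   g = M
    else:           g = ramp(360 - h)

    if h < 60:      b = M
    elif h < 120:   b = ramp(120 - h)
    elif h < 240:   b = m
    elif h < 300:   b = ramp(h - 240)
    else:           b = M

    return r, g, b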

Converting IHS to RGB

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click IHS to RGB.
2. Click the Input Image dropdown arrow, and click the image you want to use, or navigate to the directory where it is stored.
3. Navigate to the directory where the Output Image should be stored.
4. Click OK.

Using IHS to RGB applies an algorithm that transforms intensity, hue, and saturation (IHS) values to red, green, and blue (RGB) values.

Vegetative Indices

Mapping vegetation is a common application of remotely sensed imagery. To help you find vegetation quickly and easily, Image Analysis for ArcGIS includes a Vegetative Indices feature. Indices are used to create output images by mathematically combining the DN values of different bands. These may be simplistic:

(Band X - Band Y)

or more complex:

(Band X - Band Y) / (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

Band X / Band Y

These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.

Index examples

The following are examples of indices that have been preprogrammed in Image Analysis for ArcGIS:

• IR/R (infrared/red)
• SQRT (IR/R)
• Vegetation Index = IR - R
• Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)
• Transformed NDVI (TNDVI) = sqrt[ (IR - R) / (IR + R) + 0.5 ]

Source: Modified from Sabins 1987; Jensen 1996; Tucker 1979

Applications

• Indices are used extensively in mineral exploration and vegetation analysis to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences that cannot be observed in the display of the original color bands.
• Indices can also be used to minimize shadow effects in satellite and aircraft multispectral images. Black and white images of individual indices, or a color combination of three ratios, may be generated.
• Certain combinations of TM ratios are routinely used by geologists for interpretation of Landsat imagery for mineral type. For example: Red 5/7, Green 5/4, Blue 3/1.
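As a sketch only (not the tool itself), the preprogrammed NDVI and TNDVI indices reduce to a few lines of NumPy once you know which band is infrared and which is red for your sensor (see the sensor band table that follows):

import numpy as np

def ndvi(ir, red):
    """Normalized Difference Vegetation Index: (IR - R) / (IR + R)."""
    ir, red = ir.astype(float), red.astype(float)
    denom = np.where(ir + red == 0, 1.0, ir + red)   # guard against division by zero
    return (ir - red) / denom

def tndvi(ir, red):
    """Transformed NDVI: sqrt(NDVI + 0.5); values below -0.5 are clipped first."""
    return np.sqrt(np.clip(ndvi(ir, red) + 0.5, 0, None))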

The following table shows the infrared (IR) and red (R) band for some common sensors (Tucker 1979; Jensen 1996):

Sensor         IR Band    R Band
Landsat MSS    4          2
SPOT XS        3          2
Landsat TM     4          3
NOAA AVHRR     2          1

Image algebra

Image algebra is a general term used to describe operations that combine the pixels of two or more raster layers in mathematical combinations. For example, the calculation

(infrared band) - (red band), or DNir - DNred

yields a simple, yet very useful, measure of the presence of vegetation. Band ratios are also commonly used. These are derived from the absorption spectra of the material of interest: the numerator is a baseline of background absorption and the denominator is an absorption peak.

Using Vegetative Indices

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click Vegetative Indices.
2. Navigate to the directory where the image is stored.
3. Choose the Desired Index from the dropdown list.
4. Click the dropdown list to add the Near Infrared Band number.
5. Click the dropdown list to add the Visible Red Band number.
6. Navigate to the directory where the Output Image should be stored.
7. Click OK.

Color IR to Natural Color

This function lets you simulate natural colors from other types of data so that the output image is a fair approximation of the natural colors from an infrared image. When an image is displayed in natural color, the bands are arranged to approximate the most natural representation of the image in the real world. To create natural color, certain bands of data need to be assigned to red, green, and blue. You will need to assign bands to color depending on how many bands are in the image you want to change to natural color. If you are not familiar with the bands designated to reflect infrared and natural color for a particular type of imagery, Image Analysis for ArcGIS can help you apply either scheme through the Color IR to Natural Color choice in Spectral Enhancement.

You cannot apply this feature to images having only one band of data (i.e., grayscale images).

After using Color IR to Natural Color, the image appears in natural colors. Vegetation becomes green in color, and water becomes dark in color.

(Figure: The infrared image of a golf course.)

Using Color IR to Natural Color

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click Color IR to Natural Color.
2. Click the dropdown arrow or navigate to the directory to select the Input Image.
3. Click the Near Infrared Band dropdown arrow, and select the appropriate band.
4. Click the Visible Red Band dropdown arrow, and select the appropriate band.
5. Click the Visible Green Band dropdown arrow, and select the appropriate band.
6. Navigate to the directory where the Output Image should be stored.
7. Click OK.


8  Performing GIS Analysis

IN THIS CHAPTER
• Performing Neighborhood Analysis
• Performing Thematic Change
• Using Recode
• Using Summarize Areas

A GIS is a unique system designed to input, store, retrieve, manipulate, and analyze layers of geographic data to produce interpretable information. A GIS should also be able to create reports and maps (Marble 1990). The GIS database may include computer images, hardcopy maps, statistical data, or any other data that is needed in a study. Although the term GIS is commonly used to describe software packages, a true GIS includes knowledgeable staff, a training program, budgets, marketing, hardware, data, and software (Walker and Miller 1990). You can use GIS technology in almost any geography-related discipline, from Landscape Architecture to natural resource management to transportation routing.

The central purpose of a GIS is to turn geographic data into useful information, the answers to real-life questions, such as:

• How should political districts be redrawn in a growing metropolitan area?
• How can we monitor the influence of global climatic changes on the earth's resources?
• What areas should be protected to ensure the survival of endangered species?

This chapter is about using the different analysis functions in Image Analysis for ArcGIS to better use the images, maps, data, and so on located in a GIS. The tools contained in GIS Analysis will help you turn geographic data into useful information.

Information versus data

Information, as opposed to data, is independently meaningful. It is relevant to a particular problem or question:

• "The land cover at coordinate N875250, E757261 has a data file value 8," is data.
• "Land cover with a value of 8 is on slopes too steep for development," is information.

You can input data into a GIS and output information. The information you wish to derive determines the type of data that must be input. For example, if you are looking for a suitable refuge for bald eagles, zip code data is probably not needed, while land cover data may be useful.

For this reason, the first step in any GIS project is usually an assessment of the scope and goals of the study. Once the project is defined, you can begin the process of building the database. Although software and data are commercially available, a custom database must be created for the particular project and study area, and the database must be designed to meet the needs and objectives of the organization.

Once the database (layers and attribute data) is assembled, the layers can be analyzed and new information extracted. Some information can be extracted simply by looking at the layers and visually comparing them to other layers. However, new information can be retrieved by combining and comparing layers using the following procedures.

A major step in successful GIS implementation is analysis. In the analysis phase, data layers are combined and manipulated in order to create new layers and to extract meaningful information from them.

Neighborhood Analysis

Neighborhood Analysis applies to any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning. Neighborhood analysis is based on local or neighborhood characteristics of the data (Star and Estes 1990). Every pixel is analyzed spatially, according to the pixels that surround it. The number and the location of the surrounding pixels is determined by a scanning window, which is defined by you. These operations are known as focal operations.

Neighborhood analysis creates a new thematic layer. With a process similar to the convolution filtering of continuous raster layers, thematic raster layers can also be filtered. This is similar to the convolution filtering performed on continuous data. The GIS filtering process is sometimes referred to as scanning, but is not to be confused with data capture via a digital camera.

Several types of analyses can be performed, such as boundary, density, sum, mean, and so on. There are several types of analysis that can be performed upon each window of pixels, as described below:

• Density - outputs the number of pixels that have the same class value as the center (analyzed) pixel. This is also a measure of homogeneity (sameness). This is often useful in assessing vegetation crown closure.
• Majority - outputs the class value that represents the majority of the class values in the window. This option operates like a low-frequency filter to clean up a salt and pepper layer.
• Minority - outputs the least common of the class values that are within the window. This option can be used to identify the least common classes. It can also be used to highlight disconnected linear features.
• Minimum - outputs the least or smallest class value within the window. This can be used to emphasize classes with the low class values.
• Maximum - outputs the greatest class value within the window. This can be used to emphasize classes with the higher class values or to eliminate linear features or boundaries.
• Sum - totals the class values. In a file where class values are ranked, totaling enables you to further rank pixels based on their proximity to high-ranking pixels.
• Rank - outputs the number of pixels in the scan window whose value is less than the center pixel.
• Diversity - outputs the number of class values that are present within the window. Diversity is also a measure of heterogeneity (difference).

Performing Neighborhood Analysis

1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Neighborhood.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the Neighborhood Function dropdown arrow, and choose the function you want to use.
4. Click the Matrix size dropdown arrow, and choose the size you want to use.
5. Click the Neighborhood Shape dropdown arrow, and choose the shape you want to use.
6. Navigate to the directory where the Output Image should be stored.
7. Click OK.

Neighborhood Analysis applies to any analysis function that takes neighboring pixels into account. Every pixel is spatially analyzed according to the pixels surrounding it, in a process similar to convolution filtering, and the function creates a new thematic layer. The different types of analysis that can be performed on each window of pixels are listed in the dropdown menu for Neighborhood Function.

Thematic Change

Thematic Change identifies areas that undergo change over time. Typically, you use Thematic Change after you perform categorizations of your data. By using the categorizations of Before Theme and After Theme in the dialog, you can quantify both the amount and the type of changes that take place over time.

Thematic Change creates an output image from two input raster files. The class values of the two input files are organized into a matrix: the first input file specifies the columns of the matrix, and the second one specifies the rows. Image Analysis for ArcGIS produces a thematic image that has all the possible combinations of change. The number of classes in the output file is the product of the number of classes from the two input files. Zero is not treated specially in any way.

(Figure: Both before and after images prior to performing Thematic Change.)
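Conceptually, the output class for a pixel just encodes its (before, after) pair of class values. A minimal NumPy sketch of that bookkeeping follows; the encoding shown is an assumption for illustration, not the product's exact class-numbering scheme.

import numpy as np

def thematic_change(before, after, n_before_classes):
    """Combine two classified rasters so every (before, after) pair gets its own class.
    The output has n_before_classes * n_after_classes possible values."""
    # the before class indexes the matrix columns, the after class the rows (see text above)
    return after.astype(np.int64) * n_before_classes + before.astype(np.int64)

# combined = thematic_change(lc_1973, lc_1994, n_before_classes=int(lc_1973.max()) + 1)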

Performing Thematic Change

1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Thematic Change.
2. Click the Before Theme dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the After Theme dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Use Thematic Change to identify areas that have undergone change over time.

The following illustration is an example of the previous image after undergoing Thematic Change. Note the areas of classification that show the changes between 1973 and 1994. In the Table of contents you see the combination of classes from the Before and After images.

Recode

By using Recode, class values can be recoded to new values. Recoding involves the assignment of new values to one or more classes of an existing file. Recoding is used to:

• reduce the number of classes
• combine classes
• assign different class values to existing classes
• write class name and color changes to the Attribute table

When an ordinal, ratio, or interval class numbering system is used, recoding can be used to assign classes to appropriate values. Recoding is often performed to make later steps easier. For example, in creating a model that outputs good, better, and best areas, it may be beneficial to recode the input layers so all of the best classes have the highest class values.

You can also use Recode to save any changes made to the color scheme or class names of a classified image to the Attribute Table for later use. Just saving an image will not record these changes.

Recoding an image involves two major steps. First, you must group the discrete classes together into common groups. Secondly, you perform the actual recoding process, which rewrites the Attribute table using the information from your grouping process. The three recoding methods described below are more accurately described as three methods of grouping the classified image to get it ready for the recode process. These methods are recoding by class name, recoding by symbology, and recoding a previously grouped image. The following exercises will take you through each of the three recoding methods.

(Figures: Thematic image of South Carolina soil types before Recode by class name / South Carolina soils after the recode. Notice the changed and grouped class names in the Table of contents.)

Performing Recode by class name

You will group the classified image in the ArcMap Table of contents, and then perform the recode.

1. Click Add Data to open a classified image.
2. Identify the classes you want to group together in the Table of contents.
3. Triple-click each class you wish to rename, and rename it.
4. Click the color of each class, and change it to the color scheme you want to use.

6.5. Navigate to the directory where the Output Image should be stored. and click Recode. Click OK. 7. 5 6 7 116 USING IMAGE ANALYSIS FOR ARCGIS . Click the Image Analysis dropdown arrow. point to GIS Analysis.

Performing Recode by symbology

This process will show you how to recode by symbology. You will see similarities with recoding by class name, and several steps are the same as in the previous Recode exercise, but you should be aware of some different procedures.

1. Click Add Data to open a classified image.
2. Identify the classes you want to group together.
3. Double-click the image name in the Table of contents.
4. Click the Symbology tab in the Layer Properties dialog.
5. Press the Ctrl key while clicking on the first set of classes you want to group together.
6. Right-click on the selected classes, and click Group Values.
7. Click in the Label column and type the new name for the class.
8. Follow steps 5-7 to group the rest of your classes.
9. Click the colors of the classes to change to your desired color scheme.
10. Click Apply and OK.
11. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Recode.
12. Navigate to the directory where the Output Image should be stored.
13. Click OK.

Recoding with a previously grouped image

You may need to open an image that has been classified and grouped in another program such as ERDAS IMAGINE®. These images may have more than one valid attribute column that can be used to perform the recode.

1. Click Add Data and add the grouped image.
2. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Recode.
3. Click the Map Pixel Value through Field dropdown arrow, and select the attribute you want to use to recode the image.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Previously grouped before Recode After Recode in Image Analysis for ArcGIS PERFORMING GIS ANALYSIS 119 .The following images depict soil data that was previously grouped in ERDAS IMAGINE.

Summarize Areas

Image Analysis for ArcGIS also provides Summarize Areas as a method of assessing change in thematic data. Summarize Areas produces cross-tabulation statistics that compare class value areas between two thematic files, including number of points in common, number of acres (or hectares or square miles) in common, and percentages.

Summarize Areas works by using a feature theme or an Image Analysis for ArcGIS theme to compile information about that area in tabular format. A file containing the area to be inventoried can be summarized by a file for the same geographical area containing the land cover categories.

Summarize Areas might be used to assist a regional planning office in preparing a study of urban change for certain counties within the jurisdiction, or even within one county or city. The summary report could indicate the amount of urban change in a particular area of a larger thematic change. Once you complete the Thematic Change analysis, you can use Summarize Areas to limit the analysis to include only a portion of the entire image.

Using Summarize Areas

Use Summarize Areas to produce cross-tabulation statistics for comparison of class value areas between two thematic files, or one thematic and one shapefile, including number of points in common, number of acres (or hectares or square miles) in common, and percentages.

1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Summarize Areas.
2. Click the Zone theme dropdown arrow, and click on the theme you want to use, or navigate to the directory where it is stored.
3. Click on the dropdown arrow for the Class Theme, and click on the class theme, or navigate to the directory where it is stored.
4. Click on the dropdown arrow for the Zone Attribute, and click on the condition for each value of the attribute.
5. Click OK.


9 Using Utilities

IN THIS CHAPTER
• Image Difference
• Layer Stack

The core of Image Analysis for ArcGIS is the ability it gives you to interpret and manipulate your data. The Utilities part of Image Analysis for ArcGIS provides a number of features for you to use in this capacity. The different procedures offered in the Utilities menu allow you to alter your images in order to see differences, set new parameters, create images, or subset images. The information about Subset Image, Create New Image, and Reproject Image can be found in chapter 4 "Using Data Preparation" since those options are also accessible through that menu. This chapter will explain the following functions and show you how to use them:
• Image Difference
• Layer Stack

Image Difference

The Image Difference function gives you the ability to conveniently perform change detection on aspects of an area by comparing two images of the same place from different times. Image Difference is used for change analysis with imagery that depicts the same area at different points in time. The Image Difference tool is particularly useful in plotting environmental changes such as urban sprawl and deforestation or the destruction caused by a wildfire or tree disease. It is also a handy tool to use in determining crop rotation or the best new place to develop a neighborhood.

With Image Difference, you can highlight specific areas of change in whatever amount you choose. Two images are generated from this image-to-image comparison: one is a grayscale continuous image, and the other is a five-class thematic image.

The first image generated from Image Difference is the Difference image. This image is created by subtracting the Before Image from the After Image. The Difference image is a grayscale image composed of single band continuous data. Since Image Difference calculates change in brightness values over time, the Difference image simply reflects that change using a grayscale image. Brighter areas have increased in reflectance. This may mean clearing of forested areas. Dark areas have decreased in reflectance. This may mean an area has become more vegetated, or the area was dry and is now wet.

The second image is the Highlight Difference image. This thematic image divides the changes into five categories: Decreased, Some Decrease, Unchanged, Some Increase, and Increased. The Increased class shows areas of positive (brighter) change greater than the threshold and is green in color. The Decreased class represents areas of negative (darker) change greater than the threshold for change and is red in color. Other areas of positive and negative change less than the thresholds and areas of no change are transparent. For your application, you may edit the colors to select any color desired for your study.

Algorithm

Subtract two images on a pixel by pixel basis:
1. Convert the increase percentage to a value.
2. Convert the decrease percentage to a value.
3. Subtract the Before Image from the After Image.
4. If the difference is greater than the increase value, then assign the pixel to Class 5 (Increased).
5. If the difference is less than the decrease value, then assign the pixel to Class 1 (Decreased).
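As a rough illustration of the algorithm above, the following NumPy sketch subtracts a Before image from an After image and builds a simplified highlight classification. It is not the Image Analysis for ArcGIS implementation; the array names, the percentage-to-value conversion, and the use of only three of the five classes are assumptions made for brevity.

```python
# A minimal sketch of the Image Difference idea: grayscale difference plus a
# thresholded thematic highlight image built from that difference.
import numpy as np

def image_difference(before, after, increase_pct=10.0, decrease_pct=10.0):
    """Return the grayscale Difference image and a simplified highlight image."""
    before = before.astype(np.float64)
    after = after.astype(np.float64)

    # Convert the increase/decrease percentages to brightness values
    # (one plausible interpretation, using the Before image's data range).
    data_range = before.max() - before.min()
    increase_value = data_range * increase_pct / 100.0
    decrease_value = -data_range * decrease_pct / 100.0

    # Subtract the Before Image from the After Image.
    difference = after - before

    # 1 = Decreased, 3 = unchanged/other, 5 = Increased.
    highlight = np.full(difference.shape, 3, dtype=np.uint8)
    highlight[difference > increase_value] = 5   # positive (brighter) change
    highlight[difference < decrease_value] = 1   # negative (darker) change
    return difference, highlight

# Example with synthetic data:
rng = np.random.default_rng(0)
before = rng.integers(0, 255, size=(100, 100)).astype(np.float64)
after = before + rng.normal(0, 20, size=(100, 100))
diff, classes = image_difference(before, after, increase_pct=15, decrease_pct=15)
```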

Using Image Difference

1. Click the Image Analysis dropdown arrow, point to Utilities, and click Image Difference.
2. Click the Before Theme dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the After Theme dropdown arrow and click the file you want to use, or navigate to the directory where it is stored.
4. Type the Image Difference file name, or navigate to the directory where it should be stored.
5. Type the Highlight Change file name, or navigate to the directory where it should be stored.
6. Choose As Percent or As Value for the Highlight Changes.
7. Enter the Increases and Decreases values.
8. Click the color bar to choose the color you want to represent the increases and decreases.
9. Click OK.

The illustration shows the Image Difference Output file with the highlight change.

Layer Stack

Layer Stack lets you stack layers from different images in any order to form a single theme. There are several applications of this feature, such as change visualization, combining and viewing multiple resolution data, and viewing disparate data types. It is useful for combining different types of imagery for analysis, such as multispectral and radar data. You can also use Layer Stack to analyze datasets taken during different seasons when different sets show different stages for vegetation in an area.

Layer Stack is particularly useful if you have received a multispectral dataset with each of the individual bands in separate files. An example of a multispectral dataset with individual bands in separate files would be Landsat TM data. Layer Stack quickly consolidates the bands of data into one file.

In general, you will find that stacking images is most useful for combining grayscale single-band images into multiband images. For example, if you stack three single-band grayscale images, you finish with one three-band image. Stacking works based on the order in the Table of contents. Before you initiate stacking, you should first ensure that the images are in the order that you want. This order represents the order in which the bands will be arranged in the output file.

The image on this page is an example of a Layer Stack output. The files used are from the Amazon, and the red and blue bands were chosen from one image, while the green band was chosen from the other. The figure shows a stacked image with bands 1 and 3 taken from the Amazon LBAND image and the rest of the layers taken from Amazon TM.
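A minimal sketch of the idea behind Layer Stack, assuming each band has already been read into a NumPy array of the same shape; it is not the Layer Stack tool itself, and the band names are hypothetical.

```python
# Stack single-band 2D arrays into one multiband array; the band order in the
# output follows the order of the arguments, just as stacking follows the
# order in the Table of contents.
import numpy as np

def layer_stack(*bands):
    """Stack 2D band arrays in the given order into a (rows, cols, nbands) array."""
    return np.dstack(bands)

red = np.zeros((50, 50))
green = np.ones((50, 50))
blue = np.full((50, 50), 2.0)
stacked = layer_stack(red, green, blue)
print(stacked.shape)   # (50, 50, 3)
```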

Using Layer Stack

1. Click the Image Analysis dropdown arrow, point to Utilities, and click Layer Stack.
2. Click the browse button to navigate to a file containing layers you want to add to the layer stack.
3. Select a currently open layer, and click Add to include it in the layer stack.
4. Select any files you want to remove from the layer stack and click Remove.
5. Navigate to the directory where the Output Image should be stored.
6. Click OK.


10 Understanding Classification

IN THIS CHAPTER
• The Classification Process
• Classification Tips
• Unsupervised Classification
• Supervised Classification
• Classification Decision Rules

Multispectral classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their data file values. If a pixel satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to that criteria. Depending on the type of information you want to extract from the original data, classes may be associated with known features on the ground or may simply represent areas that look different to the computer. An example of a classified image is a land cover map that shows vegetation, urban, bare land, pasture, and so on.

This chapter covers the two ways to classify pixels into different categories:
• Unsupervised Classification
• Supervised Classification

The differences in the two are basically as their titles suggest. Supervised Classification is more closely controlled by you than Unsupervised Classification.

The Classification Process

Pattern recognition
Pattern recognition is the science (and art) of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye; the human brain automatically sorts certain textures and colors into categories. In a computer system, spectral pattern recognition can be more scientific. Statistics are derived from the spectral characteristics of all pixels in an image. Then, pixels are sorted based on mathematical criteria. The classification process breaks down into two parts: training and classifying (using a decision rule).

Training
First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.

Supervised training
Supervised training is closely controlled by the analyst. In this process, you select pixels that represent patterns or land cover features that you recognize, or that you can identify with help from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data, and of the classes desired, is required before classification. By identifying patterns, you can instruct the computer system to identify pixels with similar characteristics. If the classification is accurate, the resulting classes represent the categories within the data that you originally identified.

Unsupervised training
Unsupervised training is more computer-automated. It enables you to specify some parameters that the computer uses to uncover statistical patterns that are inherent in the data. These patterns do not necessarily correspond to directly meaningful characteristics of the scene, such as contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters of pixels with similar spectral characteristics. In some cases, it may be more important to identify groups of pixels with similar spectral characteristics than it is to sort pixels into recognizable categories. Unsupervised training is dependent upon the data itself for the definition of classes. This method is usually used when less is known about the data before classification. It is then the analyst's responsibility, after classification, to attach meaning to the resulting classes (Jensen 1996). Unsupervised classification is useful only if the classes can be appropriately interpreted.

Signatures
The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file to a class. Signatures contain both parametric class definitions (mean and covariance) and non-parametric class definitions (parallelepiped boundaries that are the per band minima and maxima). A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. Supervised and unsupervised training can generate parametric signatures. A set of parametric signatures can be used to train a statistically-based classifier (e.g., maximum likelihood) to define the classes. However, in Supervised Classification, the statistics are derived from the training samples, and not the entire image. After the statistics are derived, the pixels of the image are sorted into classes with a decision rule.
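The parametric and non-parametric parts of a signature can be illustrated with a short sketch. The code below assumes a training sample has already been extracted as an array of pixel vectors; the sample values are synthetic and the dictionary layout is not the signature format used by Image Analysis for ArcGIS.

```python
# Derive a signature from a training sample: mean vector and covariance
# matrix (parametric), plus per-band minima and maxima (parallelepiped).
import numpy as np

def build_signature(sample):
    """sample: (n_pixels, n_bands) array of training pixel values."""
    return {
        "mean": sample.mean(axis=0),                 # mean vector
        "covariance": np.cov(sample, rowvar=False),  # covariance matrix
        "minima": sample.min(axis=0),                # parallelepiped lower bounds
        "maxima": sample.max(axis=0),                # parallelepiped upper bounds
    }

rng = np.random.default_rng(1)
sample = rng.normal(loc=[80, 60, 120], scale=[5, 4, 8], size=(200, 3))
signature = build_signature(sample)
```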

Decision rule
After the signatures are defined, the pixels of the image are sorted into classes based on the signatures by use of a classification decision rule. The decision rule is a mathematical algorithm that, using data contained in the signature, performs the actual sorting of pixels into distinct class values.

Parametric decision rule
A parametric decision rule is trained by the parametric signatures. These signatures are defined by the mean vector and covariance matrix for the data file values of the pixels in the signatures. When a parametric decision rule is used, every pixel is assigned to a class since the parametric decision space is continuous (Kloer 1994).

There are three parametric decision rules offered:
• Minimum distance
• Mahalanobis distance
• Maximum likelihood

Nonparametric decision rule
When a nonparametric rule is set, the pixel is tested against all of the signatures with nonparametric definitions. This rule results in the following conditions:
• If the nonparametric test results in one unique class, the pixel is assigned to that class.
• If the nonparametric test results in zero classes (for example, the pixel lies outside all the nonparametric decision boundaries), then the pixel is assigned to a class called unclassified.

Parallelepiped is the only nonparametric decision rule in Image Analysis for ArcGIS.

Classification tips

Classification scheme
Usually, classification is performed with a set of target classes in mind. Such a set is called a classification scheme (or classification system). The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data (Jensen 1983). The proper classification scheme includes classes that are both important to the study and discernible from the data on hand. Most schemes have a hierarchical structure, which can describe a study area in several levels of detail.

A number of classification schemes have been developed by specialists who have inventoried a geographic region. Some references for professionally-developed schemes are listed below:
• Anderson, J. R., et al. 1976. "A Land Use and Land Cover Classification System for Use with Remote Sensor Data." U.S. Geological Survey Professional Paper 964.
• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of the United States. Washington, D.C.: U.S. Fish and Wildlife Service.
• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use, Cover and Forms Classification System. Florida Department of Transportation, Procedure No. 550-010-001-a.
• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land Use.

Other states or government agencies may also have specialized land use/cover studies. It is recommended that the classification process is begun by defining a classification scheme for the application, using previously developed schemes, like those above, as a general framework.

Supervised versus Unsupervised Classification
In supervised training, it is important to have a set of desired classes in mind, and then create the appropriate signatures from the data. You must also have some way of recognizing pixels that represent the classes that you want to extract. Supervised classification is usually appropriate when you want to identify relatively few classes, when you have selected training sites that can be verified with ground truth data, or when you can identify distinct, homogeneous regions that represent each class.

On the other hand, if you want the classes to be determined by spectral distinctions that are inherent in the data so that you can define the classes later, then the application is better suited to unsupervised training. Unsupervised training enables you to define many classes easily, and identify classes that are not in contiguous, easily recognized regions.

If you need to correctly classify small areas with actual representation, you should choose Supervised Classification. You can assign a specific class by taking a training sample when performing a Supervised Classification. In Image Analysis for ArcGIS, if you have areas that have a value of zero, and you do not classify them as NoData (see chapter 3 "Applying data tools"), they will be assigned to the first class when performing Unsupervised Classification.

Limiting dimensions
Although Image Analysis for ArcGIS allows an unlimited number of layers of data to be used for one classification, it is usually wise to reduce the dimensionality of the data as much as possible. Often, certain layers of data are redundant or extraneous to the task at hand. Unnecessary data take up valuable disk space, which slows down processing, and cause the computer system to perform more arduous calculations.

Classifying enhanced data
For many specialized applications, classifying data that have been merged, spectrally merged, or enhanced (with principal components, image algebra, or other transformations) can produce very specific and meaningful results. However, without understanding the data and the enhancements used, it is recommended that only the original, remotely-sensed data be classified.

Unsupervised Classification/Categorize Image

Unsupervised training requires only minimal initial input from you. However, you have the task of interpreting the classes that are created by the unsupervised training algorithm. Unsupervised training is also called clustering, because it is based on the natural groupings of pixels in image data when they are plotted in feature space.

Due to the skip factor of 8 used by the Unsupervised Classification signature collection, small areas such as wetlands, small urban areas, or grasses can be wrongly classified on rural data sets. If you need to classify small areas with small representation, you should use Supervised Classification.

Clusters
Clusters are defined with a clustering algorithm, which often uses all or many of the pixels in the input data file for its analysis. The clustering algorithm has no regard for the contiguity of the pixels that define each cluster.

ISODATA clustering
The Iterative Self-Organizing Data Analysis Technique (ISODATA) (Tou and Gonzalez 1974) clustering method uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge. ISODATA is iterative in that it repeatedly performs an entire classification (outputting a thematic raster layer) and recalculates statistics. Self-Organizing refers to the way in which it locates clusters with minimum user input.

The ISODATA method uses minimum spectral distance to assign a cluster for each candidate pixel. The process begins with a specified number of arbitrary cluster means or the means of existing signatures, and then it processes repetitively, so that those means shift to the means of the clusters in the data. After each iteration, a new mean for each cluster is calculated, based on the actual spectral locations of the pixels in the cluster, instead of the initial arbitrary calculation. Then, these new means are used for defining clusters in the next iteration. The process continues until there is little change between iterations (Swain 1973).

Initial cluster means
On the first iteration of the ISODATA algorithm, the means of N clusters can be arbitrarily determined. Because the ISODATA method is iterative, it is not biased to the top of the data file, as are the one-pass clustering algorithms. The initial cluster means are distributed in feature space along a vector that runs between the point at spectral coordinates (µ1-σ1, µ2-σ2, µ3-σ3, ... µn-σn) and the coordinates (µ1+σ1, µ2+σ2, µ3+σ3, ... µn+σn). Such a vector in two dimensions is illustrated below. The initial cluster means are evenly distributed between (µA-σA, µB-σB) and (µA+σA, µB+σB).

The illustration shows five arbitrary cluster means in two-dimensional spectral space, evenly distributed along the vector from (µA-σA, µB-σB) to (µA+σA, µB+σB) in the Band A and Band B data file values.

Pixel analysis
Pixels are analyzed beginning with the upper left corner of the image and going left to right, block by block. The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel is assigned to the cluster whose mean is the closest. The ISODATA function creates an output image file with a thematic raster layer as a result of the clustering. At the end of each iteration, an image file exists that shows the assignments of the pixels to the clusters.

Considering the regular, arbitrary assignment of the initial cluster means, the first iteration of the ISODATA algorithm always gives results similar to those in this illustration. For the second iteration, the means of all clusters are recalculated, causing them to shift in feature space. The entire process is repeated: each candidate pixel is compared to the new cluster means and assigned to the closest cluster mean.

The illustration shows the five clusters after their means have shifted in feature space.

Percentage unchanged
After each iteration, the normalized percentage of pixels whose assignments are unchanged since the last iteration is displayed on the dialog. When this number reaches T (the convergence threshold), the program terminates. It is possible for the percentage of unchanged pixels to never converge or reach T (the convergence threshold). Since you are not able to control the convergence threshold, it may be beneficial to monitor the percentage, or specify a reasonable maximum number of iterations, so that the program does not run indefinitely.
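The ISODATA behavior described above (initial means spread between the µ-σ and µ+σ points, assignment by minimum spectral distance, recalculation of means, and a convergence test on the percentage of unchanged pixels) can be sketched as follows. This is a simplified illustration with assumed parameter names, not the Unsupervised Classification implementation.

```python
# A highly simplified ISODATA-style clustering sketch in NumPy.
import numpy as np

def isodata_sketch(pixels, n_clusters=5, max_iter=20, convergence=0.98):
    """pixels: (n_pixels, n_bands) array of data file values."""
    mu, sigma = pixels.mean(axis=0), pixels.std(axis=0)
    # Initial cluster means evenly distributed along the vector
    # from (mu - sigma) to (mu + sigma).
    t = np.linspace(0.0, 1.0, n_clusters)[:, None]
    means = (mu - sigma) + t * (2.0 * sigma)

    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iter):
        # Assign each pixel to the cluster with the closest mean.
        distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = distances.argmin(axis=1)
        # Normalized percentage of pixels whose assignment is unchanged.
        unchanged = (new_labels == labels).mean()
        labels = new_labels
        # Recalculate each cluster mean from its current members.
        for k in range(n_clusters):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
        if unchanged >= convergence:   # reached the convergence threshold T
            break
    return labels, means

rng = np.random.default_rng(2)
data = rng.normal(size=(1000, 2)) * [10, 8] + [100, 90]
labels, means = isodata_sketch(data)
```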

Performing Unsupervised Classification/Categorize Image

1. Click the Image Analysis dropdown arrow, point to Classification, and click Unsupervised/Categorize.
2. Click the Input Image dropdown arrow, or navigate to the directory where it is stored.
3. Type or click the arrows to enter the Desired Number of Classes.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Supervised Classification

Supervised classification requires a priori (already known) information about the data, such as:
• What type of classes need to be extracted? Soil type? Land use? Vegetation?
• What classes are most likely to be present in the data? That is, which types of land cover, soil, or vegetation (or whatever) are represented by the data?

In supervised training, you rely on your own pattern recognition skills and a priori knowledge of the data to help the system determine the statistical criteria (signatures) for data classification. To select reliable samples, you should know some information, either spatial or spectral, about the pixels that you want to classify. The location of a specific characteristic, such as a land cover type, may be known through ground truthing. Ground truthing refers to the acquisition of knowledge about the study area from field work, analysis of aerial photography, personal experience, and so on. Ground truth data are considered to be the most accurate (true) data available about the area of study. It should be collected at the same time as the remotely sensed data, so that the data correspond as much as possible (Star and Estes 1990). However, some ground data may not be very accurate due to a number of errors and inaccuracies.

Performing Supervised Classification

1. Click the Image Analysis dropdown arrow, point to Classification, and click Supervised.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the Signature Features dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
4. Click the Class Name Field dropdown arrow, and click the field you want to use.
5. Choose All Features or Selected Features to use during classification.
6. Click the Classification Rule dropdown arrow, and click the rule you want to use.
7. Navigate to the directory where the Output Image should be stored.
8. Click OK.

Classification decision rules

Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that pass the criteria that are established by the decision rule are then assigned to the class for that signature. Image Analysis for ArcGIS enables you to classify the data parametrically with statistical representation.

Parametric rules
Image Analysis for ArcGIS provides these commonly-used decision rules for parametric signatures:
• minimum distance
• Mahalanobis distance
• maximum likelihood (with Bayesian variation)

Nonparametric rule
• Parallelepiped

Minimum distance
The minimum distance decision rule (also called spectral distance) calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature. The candidate pixel is assigned to the class with the closest mean. In the illustration, spectral distance is shown by the lines from the candidate pixel to the means of the three signatures in the Band A and Band B data file values.

The equation for classifying by spectral distance is based on the equation for Euclidean distance:

$$SD_{xyc} = \sqrt{\sum_{i=1}^{n} (\mu_{ci} - X_{xyi})^2}$$

Where:
n = number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = data file value of pixel x,y in band i
µci = mean of data file values in band i for the sample for class c
SDxyc = spectral distance from pixel x,y to the mean of class c

When spectral distance is computed for all possible values of c (all possible classes), the class of the candidate pixel is assigned to the class for which SD is the lowest.
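A minimal sketch of the minimum distance rule, using hypothetical signature means; each pixel is assigned to the class whose mean gives the lowest spectral distance.

```python
# Minimum distance (spectral distance) classification of pixel vectors.
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)."""
    # SD[x, c] = sqrt(sum_i (mu_ci - X_xi)^2)
    sd = np.sqrt(((class_means[None, :, :] - pixels[:, None, :]) ** 2).sum(axis=2))
    return sd.argmin(axis=1)   # index of the class with the lowest SD

class_means = np.array([[40.0, 30.0, 20.0],    # e.g. water
                        [90.0, 85.0, 70.0],    # e.g. bare soil
                        [60.0, 75.0, 95.0]])   # e.g. vegetation
pixels = np.array([[42.0, 31.0, 22.0], [88.0, 80.0, 72.0]])
print(minimum_distance_classify(pixels, class_means))   # -> [0 1]
```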

Maximum likelihood

Note: The maximum likelihood algorithm assumes that the histograms of the bands of data have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

The maximum likelihood decision rule is based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

The equation for the Maximum Likelihood/Bayesian classifier is as follows:

$$D = \ln(a_c) - \left[ 0.5 \ln(|Cov_c|) \right] - \left[ 0.5 (X - M_c)^T (Cov_c)^{-1} (X - M_c) \right]$$

Where:
D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the sample of class c
ac = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori data)
Covc = the covariance matrix of the pixels in the sample of class c
|Covc| = determinant of Covc (matrix algebra)
Covc-1 = inverse of Covc (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)

Source: Swain and Davis 1978
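The discriminant above can be sketched for a single pixel and class as follows; the mean vector, covariance matrix, and pixel values are illustrative only, and the candidate pixel would be assigned to the class with the largest D.

```python
# Maximum likelihood/Bayesian discriminant for one pixel and one class.
import numpy as np

def max_likelihood_discriminant(x, mean, cov, a_c=1.0):
    """Return D = ln(a_c) - 0.5*ln(|Cov|) - 0.5*(x-M)^T Cov^-1 (x-M)."""
    diff = x - mean
    sign, logdet = np.linalg.slogdet(cov)          # ln(|Cov_c|), assuming |Cov_c| > 0
    quad = diff @ np.linalg.inv(cov) @ diff        # (X - M_c)^T Cov_c^-1 (X - M_c)
    return np.log(a_c) - 0.5 * logdet - 0.5 * quad

mean = np.array([80.0, 60.0, 120.0])
cov = np.diag([25.0, 16.0, 64.0])
x = np.array([83.0, 58.0, 118.0])
print(max_likelihood_discriminant(x, mean, cov))
```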

Mahalanobis distance

Note: The Mahalanobis distance algorithm assumes that the histograms of the bands have normal distributions. If this is not the case, you may have better results with the minimum distance decision rule.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied lead to similarly varied classes, and vice versa. For example, when classifying urban areas (typically a class whose pixels vary widely), correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis 1978).

The equation for the Mahalanobis distance classifier is as follows:

$$D = (X - M_c)^T (Cov_c)^{-1} (X - M_c)$$

Where:
D = Mahalanobis distance
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the signature of class c
Covc = the covariance matrix of the pixels in the signature of class c
Covc-1 = inverse of Covc
T = transposition function

The pixel is assigned to the class, c, for which D is the lowest.

Parallelepiped

Image Analysis for ArcGIS provides the parallelepiped decision rule as its nonparametric decision rule. In the parallelepiped decision rule, the data file values of the candidate pixel are compared to upper and lower limits, which are the minimum and maximum data file values of each band in the signature. There are high and low limits for every signature in every band. When a pixel's data file values are between the limits for every band in a signature, then the pixel is assigned to that signature's class. In the case of a pixel falling into more than one class, the first class is the one assigned. When a pixel falls into no class boundaries, it is labeled unclassified.
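A minimal sketch of the parallelepiped test, using hypothetical per-band limits; a pixel inside more than one box goes to the first class, and a pixel inside none is left unclassified.

```python
# Parallelepiped decision rule: compare a pixel's data file values with the
# per-band minimum and maximum limits of each signature.
import numpy as np

def parallelepiped_classify(pixel, low_limits, high_limits):
    """low_limits/high_limits: (n_classes, n_bands) per-band signature limits."""
    inside = np.all((pixel >= low_limits) & (pixel <= high_limits), axis=1)
    hits = np.flatnonzero(inside)
    return hits[0] + 1 if hits.size else 0   # class numbers start at 1, 0 = unclassified

low = np.array([[30.0, 20.0], [70.0, 60.0]])
high = np.array([[50.0, 45.0], [95.0, 90.0]])
print(parallelepiped_classify(np.array([42.0, 33.0]), low, high))  # -> 1
print(parallelepiped_classify(np.array([60.0, 50.0]), low, high))  # -> 0 (unclassified)
```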

11 Using Conversion

IN THIS CHAPTER
• Conversion
• Convert Raster to Features
• Convert Features to Raster

The Conversion feature gives you the ability to convert shapefiles to raster images and raster images to shapefiles. This tool is very helpful when you need to isolate or highlight certain parts of a raster image or when you have a shapefile and you need to view it as a raster image. Possible applications include viewing deforestation patterns, urban sprawl, and shore erosion. The ability to assign certain pixel values as NoData is very helpful when converting images. The Image Info tool that is discussed in chapter 3 "Applying data tools" is also an important part of Raster/Feature Conversion.

Conversion

Always be aware of how the raster dataset will represent the features when converting points, polygons, or polylines to a raster, and vice versa. The accuracy of the representation is dependent on the scale of the data and the size of the cell. Because of this, the accuracy of representation will vary according to the scale of the data and the resolution of the raster dataset.

There is a trade off when working with a cell-based system, and it is that even though points don't have area, cells do. Even though points are represented by a single cell, that cell does have area. The smaller the cell size, the more accurate the representation: the smaller the area, the closer the representation of the point feature. Points with area will have an accuracy of plus or minus half the cell size.

Linear data is represented by a polyline that is also comprised of cells, so it has area even though, by definition, lines do not. With polygonal or areal data, problems can occur from trying to represent smooth polygon boundaries with square cells. The finer the cell resolution and the greater the number of cells that represent small areas, the more accurate the representation. For many users, having all data types in the same format and being able to use them interchangeably in the same language is more important than a loss of accuracy.

Converting raster to features

During a conversion of a raster representing polygonal features to polygonal features, the polygons are built from groups of contiguous cells having the same cell values. Continuous cells with the same value are grouped together to form polygons, and arcs are created from cell borders in the raster. Cells that are NoData in the input raster will not become features in the output polygon feature.

When you convert a raster representing point features to point features, a point will be created in the output for each cell of the input raster. Each point will be positioned at the center of the cell it represents. NoData cells will not be transformed into points.

When a raster that represents linear features is converted to a polyline feature, a polyline is created from each cell in the input raster, passing through the center of each cell. Cells that are NoData in the input raster will not become features in the output polyline feature.

When you choose Convert Raster to Features, the dialog will give you the choice of a Field to specify from the image in the conversion. You will also be given the choice of an Output geometry type, so you can choose if the feature will be a point, a polygon, or a polyline according to the Field and data you're using. You should note that regardless of what Field you pick, the category will not be populated on the Attribute Table after conversion. In order not to have jagged or sharp edges to the new feature file, you can check Generalize Lines to smooth out the edges.

The figures show a raster image before conversion and the same data after conversion to a shapefile using Value as the Field.

Performing raster to feature conversion

1. Click the Image Analysis dropdown arrow, point to Convert, and click Convert Raster to Features.
2. Click the Input raster dropdown arrow, or navigate to the directory where the raster image is stored.
3. Click the Field dropdown arrow and choose a Field to use.
4. Click the Output geometry type dropdown arrow, and choose point, polygon, or polyline.
5. Check or uncheck Generalize Lines according to your preference.
6. Navigate to the directory where the Output feature should be stored.
7. Click OK.

Converting features to raster

Any polygons, polylines, or points from any source file can be converted to a raster. When you convert points, cells are given the value of the points found within each cell. Cells that do not contain a point are given the value of NoData. When you convert polylines, cells are given the value of the line that intersects each cell. Cells that are not intersected by a line are given the value NoData. If more than one line is found in a cell, the cell is given the value of the first line encountered while processing. When you convert polygons, the cells are given the value of the polygon found at the center of each cell.

You can convert features using both string and numeric fields. Each unique string in a string field is assigned a unique value to the output raster. A field is added to the table of the output raster to hold the original string value from the features.

Polylines are features that, at certain resolutions, only appear as lines representing streams or roads. Using a smaller cell size during conversion will alleviate this. Polygons are used for buildings, forests, fields, and many other features that are best represented by a series of connected cells. You are given the option of specifying the cell size you want to use in the Feature to Raster dialog. You should choose the cell size based on several different factors: the resolution of the input data, the output resolution needed to perform your analysis, and the need to maintain a rapid processing speed.

Performing Feature to Raster conversion

1. Click the Image Analysis dropdown arrow, point to Convert, and click Convert Feature to Raster.
2. Click the Input features dropdown arrow, or navigate to the directory where the file is stored.
3. Click the Field dropdown arrow, and select the Field option you want to use.
4. Type the Output cell size.
5. Navigate to the directory where the Output Raster should be stored.
6. Click OK.

12 Applying Geocorrection Tools

IN THIS CHAPTER
• Geocorrection Properties
• Spot Properties
• Polynomial Properties
• Rubber Sheeting
• Camera Properties
• IKONOS Properties
• Landsat Properties
• QuickBird Properties
• RPC Properties

The tools and methods described in this chapter concern the process of geometrically correcting the distortions in images caused by sensors and the curvature of the earth. Even images of seemingly flat areas are distorted, but these images can be corrected, or rectified, so they can be represented on a planar surface, conform to other images, and have the integrity of a map. The terms geocorrection and rectification are used synonymously when discussing geometric correction.

Rectification is the process of transforming the data from one grid system into another grid system using a geometric transformation. Since the pixels of the new grid may not align with the pixels of the original grid, the pixels must be resampled. Resampling is the process of extrapolating data values for the pixels on the new grid from the values of the source pixels.

Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a DEM of the study area. It is based on collinearity equations, which can be derived by using 3D Ground Control Points (GCPs). In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended.

When to rectify

Rectification is necessary in cases where the pixel grid of the image must be changed to fit a map projection system or a reference image. There are several reasons for rectifying image data:
• comparing pixels scene to scene in applications, such as change detection or thermal inertia mapping (day and night comparison)
• developing GIS databases for GIS modeling
• identifying training samples according to map coordinates prior to classification
• creating accurate scaled photomaps
• overlaying an image with vector data, such as ArcInfo
• comparing images that are originally at different scales
• extracting accurate distance and area measurements
• mosaicking images
• performing any other analyses requiring precise geographic locations

Before rectifying the data, you must determine the appropriate coordinate system for the database. To select the optimum map projection and coordinate system, the primary use for the database must be considered. Before selecting a map projection, consider the following:
• How large or small an area is mapped? Different projections are intended for different size areas.
• Where on the globe is the study area? Polar regions and equatorial regions require different projections for maximum accuracy.
• What is the extent of the study area? Circular, north-south, east-west, and oblique areas may all require different projection systems (ESRI 1992).

If you are doing a government project, the projection may be predetermined. A commonly used projection in the United States government is State Plane. Use an equal area projection for thematic or distribution maps and conformal or equal area projections for presentation maps.

Georeferencing
Georeferencing refers to the process of assigning map coordinates to image data. The image data may already be projected onto the desired plane, but not yet referenced to the proper coordinate system. Rectification, by definition, involves georeferencing, since all map projection systems are associated with map coordinates. Image to image registration involves georeferencing only if the reference image is already georeferenced. Georeferencing, by itself, involves changing only the map coordinate information in the image file. The grid of the image does not change.

Geocoded data are images that have been rectified to a particular map projection and pixel size, and usually have had radiometric corrections applied. It is possible to purchase image data that is already geocoded. Geocoded data should be rectified only if they must conform to a different projection system or be registered to other rectified data.

Disadvantages of rectification
During rectification, the data file values of rectified pixels must be resampled to fit into a new grid of pixel rows and columns. Although some of the algorithms for calculating these values are highly reliable, some spectral integrity of the data can be lost during rectification. An unrectified image is more spectrally correct than a rectified image, which may be a drawback in some applications. If map coordinates or map units are not needed in the application, then it may be wiser not to rectify the image.

Georeferencing only
Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning or digitizing produces images that are planar, but do not contain any map coordinate information. These images need only to be georeferenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information. This involves redefining:
• the map coordinate of the upper left corner of the image
• the cell size (the area represented by each pixel)

This information is usually the same for each layer of an image file, although it could be different. For example, the cell size of band 6 of Landsat TM data is different than the cell size of the other bands.

Ground control points
GCPs are specific pixels in an image for which the output map coordinates (or other output coordinates) are known. GCPs consist of two X,Y pairs of coordinates:
• source coordinates, usually data file coordinates in the image being rectified
• reference coordinates, the coordinates of the map or reference image to which the source image is being registered

The term map coordinates is sometimes used loosely to apply to reference coordinates and rectified coordinates. These coordinates are not limited to map coordinates. For example, in image to image registration, map coordinates are not necessary.

Entering GCPs
Accurate GCPs are essential for an accurate rectification. From the GCPs, the rectified coordinates for all other points in the image are extrapolated. Select many GCPs throughout the scene. The more dispersed the GCPs are, the more reliable the rectification is. GCPs for large scale imagery might include the intersection of two roads, airport runways, utility corridors, towers, or buildings. For small scale imagery, larger features such as urban areas or geologic features may be used. Landmarks that can vary (edges of lakes, other water bodies, vegetation, and so on) should not be used.

The source and reference coordinates of the GCPs can be entered in the following ways:
• They may be known a priori, and entered at the keyboard.
• Use the mouse to select a pixel from an image in the view. With both the source and destination views open, enter source coordinates and reference coordinates for image to image registration.
• Use a digitizing tablet to register an image to a hardcopy map.

Tolerance of RMS error (RMSE)
Acceptable RMS error is determined by the end use of the database, the type of data being used, and the accuracy of the GCPs and ancillary data being used. Acceptable accuracy depends on the image area and the particular project. For example, if you are rectifying Landsat TM data and want the rectification to be accurate to within 30 meters, the RMS error should not exceed 1.00. It is important to remember that RMS error is reported in pixels. GCPs acquired from GPS should have an accuracy of about 10 m, but GCPs from 1:24,000-scale maps should have an accuracy of about 20 m.
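The RMS error check described above can be sketched as follows. The 1st-order transformation coefficients and GCP coordinates are hypothetical; the sketch simply applies the transformation to the source coordinates and measures how far the results fall from the reference coordinates.

```python
# Compute the total RMS error of a set of GCPs for a given transformation.
import numpy as np

def gcp_rms_error(source_xy, reference_xy, transform):
    """RMS error of predicted versus reference coordinates."""
    predicted = np.array([transform(x, y) for x, y in source_xy])
    residuals = predicted - reference_xy
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

# Example 1st-order transform x' = a0 + a1*x + a2*y, y' = b0 + b1*x + b2*y.
def first_order(x, y, a=(10.0, 0.5, 0.0), b=(20.0, 0.0, 0.5)):
    return (a[0] + a[1] * x + a[2] * y, b[0] + b[1] * x + b[2] * y)

source = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
reference = np.array([[10.2, 19.8], [60.1, 20.3], [9.9, 70.0], [59.8, 69.7]])
print(gcp_rms_error(source, reference, first_order))
```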

Classification
Some analysts recommend classification before rectification, since the classification is then based on the original data values. Another benefit is that a thematic file has only one band to rectify instead of the multiple bands of a continuous file. On the other hand, the classification may be more accurate if the new coordinates help to locate better training samples, especially when using GPS data for the GCPs. Since this data is very accurate, it may be beneficial to rectify the data first.

Thematic files
Nearest neighbor is the only appropriate resampling method for thematic files. The available resampling methods are discussed in detail later in Geocorrection property dialogs.

Geocorrection property dialogs

The individual Geocorrection Tools have their own dialog that appears whenever you choose a model type and click on the Geocorrection Properties button. Some of the tool dialogs offer certain option tabs pertaining to that specific tool, but they all have several tabs in common. Every Geocorrection Tool dialog has a General tab and a Links tab, and all but Polynomial Properties and Rubber Sheeting Properties have an Elevation tab.

The General tab has a Link Coloring section, a Displayed Units section, and a Link Snapping section. The Link Coloring section lets you set a Threshold and select or change link colors. The Displayed Units section gives you the Horizontal and Vertical Units if they are known. Often one will be known and the other one not, so it may say Meters for Vertical Units and Unknown for Horizontal Units. Display Units does not have any effect on the original data in latitude/longitude format. The image in the view will not show the changes either.

The Link Snapping section will only be activated when you have a vector layer (shapefile) active in ArcMap. The vector layer you want to snap to another layer will be defined in the Link Snapping box. The purpose of this portion of the tool is to allow you to snap an edge, end, or vertex to the edge, end, or vertex of another layer. You will need to check either Vertex, Boundary, or End Point depending on what you want to snap to in another layer. Checking one will activate Snap Tolerance and Snap Tolerance Units. The choice is completely up to you.

1. Click the arrows to set the Threshold, and click the Within and Over Threshold boxes to change the link colors.
2. The Displayed Units area shows the measurement of the Vertical Units.
3. If you have shapefiles (a vector layer) active in ArcMap, check Vertex, Edge, or End depending on what you want to snap to in another layer.

Links tab

The Links tab (this display is also called a CellArray) shows information about the links in your image, including reference points and RMS Error. The program is interactive between the image and the Links tab, so when you add links in an image or between two images, information is automatically updated in the CellArray. If you have already added links to your image, they will be listed under this tab. If your link coordinates are predefined, the coordinates will be displayed in the cell array on this tab. You can edit and delete information displayed in the CellArray as well.

You can proof and edit the coordinates of the links as you enter them. For example, if you want to experiment with coordinates other than the ones you have been given, you can plug your own coordinates into the CellArray on the Links tab.

1. Click the Geocorrection Properties button.
2. Click the Links tab.
3. Click inside a cell and edit the contents.
4. Click the Add Links button to set your new links.

When you are finished, you can click Export Links to Shape file and save the new shapefile.

Before adding links or editing the links table, there are a few additional checks you need to make before proceeding. Make sure that the correct layer is displayed in the Layers box on the Image Analysis toolbar, choose your Model Type from the dropdown list, and select the Coordinate System in which you want to store the link coordinates.

To select the coordinate system:
1. Right-click in the view area and click Properties at the bottom of the popup menu. The Data Frame Properties dialog displays.
2. Click the Coordinate System tab.
3. If you want to use the coordinate system from a specific layer, select that layer from the list of Layers. If your link coordinates are predefined, click the appropriate Predefined coordinate system.

Elevation tab

The Elevation tab is in all Geocorrection Model Properties except for Polynomial and Rubber Sheeting. When you click the Elevation tab in any of the Geocorrection Model Types, the default selection will allow you to choose a file to use as an Elevation Source, because most of the time you will have an Elevation File to use as your elevation source. If you do not have an Elevation File, you should use a Constant elevation value as the elevation source. Choosing Constant changes the options in the Elevation Source section to allow you to specify the Elevation Value and Elevation Units. The Constant value you should use is the average ground elevation for the entire scene. The following examples use the Landsat Properties dialog, but the Elevation tab is the same on all of the Model Types that allow you to specify elevation information. The illustration shows the Elevation tab with File as the Elevation Source.

After the Elevation Source section, you can check the box if you want to Account for Earth's curvature as part of the Elevation. The illustration shows the Elevation tab with Constant as the Elevation Source.

The following steps take you through the Elevation tab. The first set of instructions pertains to using File as your Elevation Source. The second set uses Constant as the Elevation Source.

1. Choose File.
2. Type the file name or navigate to the directory where the Elevation File is stored.
3. Click the dropdown arrow and choose Feet or Meters.
4. Check if you want to Account for the Earth's curvature.
5. Click Apply to set the Elevation Source.
6. Click OK if you are finished with the dialog.

Click the dropdown arrow. 3. and choose either Feet or Meters. 1 2 3 4 APPLYING GEOCORRECTION TOOLS 157 . 1. 4. Check if you want to Account for the Earth’s curvature. Click OK if you are finished with the dialog. 5. 2. Click Apply to set the Elevation Source.These are the steps to take when using a Constant value as the elevation source. Click the arrows to enter the Elevation Value. Choose Constant.

SPOT

The first SPOT satellite, developed by the French Centre National d'Etudes Spatiales (CNES), was launched in early 1986. The second SPOT satellite was launched in 1990, and the third was launched in 1993. The sensors operate in two modes, multispectral and panchromatic. SPOT is commonly referred to as a pushbroom scanner, which means that all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner. SPOT pushes 3000/6000 sensors along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit. The width of the swath observed varies between 60 km for nadir viewing and 80 km for off-nadir viewing at a height of 832 km (Jensen 1996).

The SPOT scanner normally produces nadir views, but it does have off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the detectors, but off to an angle. This off-nadir viewing can be programmed from the ground control station, and is quite useful for collecting data in a region not directly in the path of the scanner or in the event of a natural or man-made disaster, where timeliness of data acquisition is crucial. It is also very useful in collecting stereo data from which elevation data can be extracted. The SPOT satellite can observe the same area on the globe once every 26 days. Using this off-nadir capability, one area on the earth can be viewed as often as every 3 days.

Panchromatic
SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution, contains one band (0.51 to 0.73 µm), and is similar to a black and white photograph. It has a radiometric resolution of 8 bits (Jensen 1996).

XS
SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution, and contains 3 bands (Jensen 1996).

SPOT XS Bands and Wavelengths
Band 1, Green (0.50 to 0.59 µm): This band corresponds to the green reflectance of healthy vegetation.
Band 2, Red (0.61 to 0.68 µm): This band is useful for discriminating between plant species. It is also useful for soil boundary and geological boundary delineations.
Band 3, Reflective IR (0.79 to 0.89 µm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

SPOT Panchromatic versus SPOT XS: Panchromatic has 1 band and a pixel size of 10 m × 10 m; XS has 3 bands and a pixel size of 20 m × 20 m; both have a radiometric resolution of 0-255.

Stereoscopic pairs
Two observations can be made by the panchromatic scanner on successive days, so that the two images are acquired at angles on either side of the vertical, resulting in stereoscopic imagery. Stereoscopic imagery can also be achieved by using one vertical scene and one off-nadir scene. This type of imagery can be used to produce a single image, or topographic and planimetric maps (Jensen 1996). Topographic maps indicate elevation. Planimetric maps correctly represent horizontal distances between objects (Star and Estes 1990).

SPOT 4
The SPOT 4 satellite was launched in 1998. SPOT 4 carries High Resolution Visible Infrared (HR VIR) instruments that obtain information in the visible and near-infrared spectral bands. The SPOT 4 satellite has two sensors on board: a multispectral sensor and a panchromatic sensor. The multispectral scanner has a pixel size of 20 × 20 m and a swath width of 60 km. The panchromatic scanner has a pixel size of 10 × 10 m and a swath width of 60 km. The SPOT 4 satellite orbits the earth at 822 km above the Equator.

SPOT 4 Bands and Wavelengths
Band 1, Green: 0.50 to 0.59 µm
Band 2, Red: 0.61 to 0.68 µm
Band 3 (near-IR): 0.78 to 0.89 µm
Band 4 (mid-IR): 1.58 to 1.75 µm

The Spot Properties dialog

In addition to the General, Links, and Elevation tabs, the Spot Properties dialog also contains a Parameters tab. Most of the Geocorrection Properties dialogs do contain a Parameters tab, but each one offers different options.

1. Click the Model Types dropdown arrow, and choose Spot and the layer.
2. Click the Geocorrection Properties button.
3. Click the Parameters tab on the Spot Properties dialog.
4. Choose the Sensor type.
5. Click the arrows to enter the Number of Iterations.
6. Click the arrows to enter the Incidence Angle.
7. Click the arrows to enter the Background Value.
8. Click OK.

Polynomial transformation

Polynomial equations are used to convert source file coordinates to rectified map coordinates. Depending upon the distortion in the imagery, complex polynomial equations may be required to express the needed transformation. The degree of complexity of the polynomial is expressed as the order of the polynomial. The order of transformation is the order of the polynomial used in the transformation. Image Analysis for ArcGIS allows 1st through nth order transformations. Usually, 1st order or 2nd order transformations are used.

Linear transformations
A 1st order transformation is a linear transformation. It can change:
• location in X and/or Y
• scale in X and/or Y
• skew in X and/or Y
• rotation

1st order transformations can be used to project raw imagery to a planar map projection, to convert a planar map projection to another planar map projection, and to rectify relatively small image areas. You can perform simple linear transformations to an image displayed in a view or to the transformation matrix itself. Linear transformations may be required before collecting GCPs on the displayed image. You can reorient skewed Landsat TM data, rotate scanned quad sheets according to the angle of declination stated in the legend, and rotate descending data so that north is up.

A 1st order transformation can also be used for data that are already projected onto a plane. For example, SPOT and Landsat Level 1B data are already transformed to a plane, but may not be rectified to the desired map projection. When doing this type of rectification, it is not advisable to increase the order of transformation if at first a high RMS error occurs. Examine other factors first, such as the GCP source and distribution, and look for systematic errors.

Transformation matrix
A transformation matrix is computed from the GCPs. The matrix consists of coefficients that are used in polynomial equations to convert the coordinates. The size of the matrix depends upon the order of transformation. The goal in calculating the coefficients of the transformation matrix is to derive the polynomial equations for which there is the least possible amount of error when they are used to transform the reference coordinates of the GCPs into the source coordinates. It is not always possible to derive coefficients that produce no error. Every GCP influences the coefficients, even if there isn't a perfect fit of each GCP to the polynomial that the coefficients represent. For example, in the figure below, GCPs are plotted on a graph (source X coordinate against reference X coordinate) and compared to the curve that is expressed by a polynomial. The distance between the GCP reference coordinate and the curve is called RMS error, which is discussed later in this chapter in "Camera Properties" on page 171.

The transformation matrix for a 1st-order transformation consists of six coefficients, three for each coordinate (X and Y).

The transformation matrix for a transformation of order t contains this number of coefficients:

2 · Σ(i = 1..t+1) i

It is multiplied by two for the two sets of coefficients, one set for X and one for Y. An easier way to arrive at the same number is:

(t + 1) × (t + 2)

Clearly, the size of the transformation matrix increases with the order of the transformation. For a 1st-order transformation the matrix is:

a0  a1  a2
b0  b1  b2

Coefficients are used in a 1st order polynomial as follows:

xo = a0 + a1x + a2y
yo = b0 + b1x + b2y

Where:
x and y are source coordinates (input)
xo and yo are rectified coordinates (output)
the coefficients of the transformation matrix are as above

Nonlinear transformations
Second-order transformations can be used to convert Lat/Lon data to a planar projection, for data covering a large area (to account for the earth's curvature), and with distorted data (for example, due to camera lens distortion). Third-order transformations are used with distorted aerial photographs, on scans of warped maps, and with radar imagery. Fourth-order transformations can be used on very distorted aerial photographs.

High order polynomials
The polynomial equations for a t order transformation take this form:

xo = Σ(i = 0..t) Σ(j = 0..i) ak · x^(i-j) · y^j
yo = Σ(i = 0..t) Σ(j = 0..i) bk · x^(i-j) · y^j

Where:
t is the order of the polynomial
a and b are coefficients
the subscript k in a and b is determined by:

k = (i · i + i + 2j) / 2
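As a quick check of the bookkeeping above, the short sketch below enumerates the terms of an order-t polynomial, computes the coefficient subscript k, and confirms that each polynomial has (t + 1)(t + 2) / 2 coefficients. It is only an illustration of the formulas in the text.

    # Enumerate the terms x^(i-j) * y^j of an order-t polynomial and count coefficients.
    def poly_terms(t):
        terms = []
        for i in range(t + 1):
            for j in range(i + 1):
                k = (i * i + i + 2 * j) // 2          # coefficient subscript from the text
                terms.append((k, i - j, j))           # (index, power of x, power of y)
        return terms

    for t in (1, 2, 3):
        terms = poly_terms(t)
        print(t, len(terms), terms)                   # len(terms) == (t + 1)(t + 2) / 2
        assert len(terms) == (t + 1) * (t + 2) // 2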

Effects of order
The computation and output of a higher-order polynomial equation are more complex than those of a lower-order polynomial equation. Therefore, higher order polynomials are used to perform more complicated image rectifications. To understand the effects of different orders of transformation in image rectification, it is helpful to see the output of various orders of polynomials.

The following example uses only one coordinate (X), instead of the two (X,Y) that are used in the polynomials for rectification. This enables you to draw two-dimensional graphs that illustrate the way that higher orders of transformation affect the output image. Because only the X coordinate is used in these examples, the number of GCPs used is less than the number required to actually perform the different orders of transformation.

Suppose GCPs are entered with these X coordinates:

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              9
3                              1

These GCPs allow a 1st order transformation of the X coordinates, which is satisfied by this equation (the coefficients are in parentheses):

xr = (25) + (-8)xi

Where:
xr = the reference X coordinate
xi = the source X coordinate

Coefficients like those presented in this example would generally be calculated by the least squares regression method. This equation takes on the same format as the equation of a line (y = mx + b). In mathematical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is also known as a linear transformation. This equation is graphed below.

These points are plotted against each other below:

[Figure: the three GCPs and the line xr = (25) + (-8)xi, reference X coordinate versus source X coordinate]

However, what if the second GCP were changed as follows?

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1

A line cannot connect these points, which illustrates that they cannot be expressed by a 1st-order polynomial like the one above. In this case, a 2nd-order polynomial equation expresses these points:

xr = (31) + (-16)xi + (2)xi²

Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn below:

[Figure: the curve xr = (31) + (-16)xi + (2)xi², reference X coordinate versus source X coordinate]

What if one more GCP were added to the list?

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
4                              5

[Figure: the fourth GCP (4, 5) plotted with the 2nd-order curve xr = (31) + (-16)xi + (2)xi², reference X coordinate versus source X coordinate]

As illustrated in the graph above, this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. To ensure that all of the GCPs fit, the order of the transformation could be increased to 3rd-order. The equation and graph below could then result:

xr = (25) + (-5)xi + (-4)xi² + (1)xi³

[Figure: a 3rd-order curve passing through all four GCPs, reference X coordinate versus source X coordinate]

This figure illustrates a 3rd-order transformation. However, this equation may be unnecessarily complex. Performing a coordinate transformation with this equation may cause unwanted distortions in the output image for the sake of a perfect fit for all the GCPs. In this example, a 3rd-order transformation probably would be too high, because the output pixels in the X direction would be arranged in a different order than the input pixels in the X direction:

x0(1) > x0(2) > x0(4) > x0(3)
17 > 7 > 5 > 1

[Figure: input image X coordinates 1 through 4 mapped to output image X coordinates, showing the change in pixel order]

In this case, a higher order of transformation would probably not produce the desired results.
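The one-dimensional example above can be reproduced directly with numpy; the sketch below fits the 1st-, 2nd-, and 3rd-order polynomials to the listed GCPs and recovers the same coefficients as the text (polyfit returns coefficients from the highest power down, so they are reversed for comparison).

    # Reproduce the worked example: exact polynomial fits through the listed GCPs.
    import numpy as np

    print(np.polyfit([1, 2, 3], [17, 9, 1], 1)[::-1])          # ~ [25, -8]
    print(np.polyfit([1, 2, 3], [17, 7, 1], 2)[::-1])          # ~ [31, -16, 2]
    print(np.polyfit([1, 2, 3, 4], [17, 7, 1, 5], 3)[::-1])    # ~ [25, -5, -4, 1]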

Minimum number of GCPs
Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation.

The minimum number of points required to perform a transformation of order t equals:

((t + 1)(t + 2)) / 2

Use more than the minimum number of GCPs whenever possible. Although it is possible to get a perfect fit, it is rare, no matter how many GCPs are used.

For 1st through 10th-order transformations, the minimum number of GCPs required to perform a transformation is listed in the following table:

Order of Transformation    Minimum GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66
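The table above follows directly from the formula; a two-line check, for illustration only:

    # Minimum GCPs for an order-t transformation: ((t + 1)(t + 2)) / 2
    for t in range(1, 11):
        print(t, (t + 1) * (t + 2) // 2)    # 3, 6, 10, 15, 21, 28, 36, 45, 55, 66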

The Polynomial Properties dialog
Polynomial Properties has a Parameters tab in addition to the General and Links tabs. The General tab and the Links tab are the same as the ones featured at the beginning of this chapter. It does not need an Elevation tab. The Parameters tab contains a CellArray that shows the transformation coefficients table. These are filled in when the model is solved.
1. Click the Parameters tab.
2. Using the arrows, enter the Polynomial Order.

Rubber Sheeting

Triangle-based finite element analysis
The finite element analysis is a powerful tool for solving complicated computation problems which can be approached by small, simpler pieces. It has been widely used as a local interpolation technique in geographic applications. For image rectification, the known control points can be triangulated into many triangles. Each triangle has three control points as its vertices. Then, the polynomial transformation can be used to establish mathematical relationships between source and destination systems for each triangle. Because the transformation exactly passes through each control point and is not in a uniform manner, finite element analysis is also called Rubber Sheeting. It can also be called the triangle-based rectification because the transformation and resampling for image rectification are performed on a triangle-by-triangle basis.

This triangle-based technique should be used when other rectification methods such as Polynomial Transformation and photogrammetric modeling cannot produce acceptable results.

Triangulation
To perform the triangle-based rectification, it is necessary to triangulate the control points into a mesh of triangles. Watson (1994) summarily listed four kinds of triangulation, including the arbitrary, optimal, Greedy, and Delaunay triangulation. Of the four kinds, the Delaunay triangulation is most widely used and is adopted because of the smaller angle variations of the resulting triangles.

The Delaunay triangulation can be constructed by the empty circumcircle criterion. The circumcircle formed from three points of any triangle does not have any other point inside. The triangles defined this way are the most equiangular possible.

Triangle-based rectification
Once the triangle mesh has been generated and the spatial order of the control points is available, the geometric rectification can be done on a triangle-by-triangle basis. This triangle-based method is appealing because it breaks the entire region into smaller subsets. If the geometric problem of the entire region is very complicated, the geometry of each subset can be much simpler and modeled through simple transformation. For each triangle, the polynomials can be used as the general transformation form between source and destination systems.

Linear transformation
The easiest and fastest transformation is the linear transformation with the first order polynomials:

xo = a0 + a1x + a2y
yo = b0 + b1x + b2y

There is no need for extra information because there are three known conditions in each triangle and three unknown coefficients for each polynomial.
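The following is a hedged sketch of that procedure, not the software's implementation: Delaunay-triangulate a few made-up control points with scipy, then solve the three-equation linear system for each triangle to get its six coefficients.

    # Triangle-based (rubber sheeting) linear transformation per triangle.
    import numpy as np
    from scipy.spatial import Delaunay

    src = np.array([[0.0, 0.0], [100.0, 5.0], [5.0, 95.0], [110.0, 100.0]])      # source GCPs
    dst = np.array([[10.0, 20.0], [120.0, 22.0], [12.0, 118.0], [125.0, 121.0]])  # reference GCPs

    tri = Delaunay(src)                      # triangulation by the empty circumcircle criterion
    for simplex in tri.simplices:            # each simplex holds the indices of three GCPs
        # Solve xo = a0 + a1*x + a2*y and yo = b0 + b1*x + b2*y for this triangle:
        A = np.column_stack([np.ones(3), src[simplex, 0], src[simplex, 1]])
        a = np.linalg.solve(A, dst[simplex, 0])   # a0, a1, a2
        b = np.linalg.solve(A, dst[simplex, 1])   # b0, b1, b2
        print(simplex, a, b)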

Nonlinear transformation
Even though the linear transformation is easy and fast, it has one disadvantage. The transitions between triangles are not always smooth. This phenomenon is obvious when shaded relief or contour lines are derived from a DEM which is generated by the linear rubber sheeting. It is caused by incorporating the slope change of the control data at the triangle edges and vertices. In order to distribute the slope change smoothly across triangles, the nonlinear transformation with polynomial order larger than one is used by considering the gradient information.

The fifth order or quintic polynomial transformation is chosen here as the nonlinear rubber sheeting technique in this example. It is a smooth function. The transformation function and its first order partial derivative are continuous. It is not difficult to construct (Akima 1978). The formulation is simply as follows:

xo = Σ(i = 0..5) Σ(j = 0..i) ak · x^(i-j) · y^j
yo = Σ(i = 0..5) Σ(j = 0..i) bk · x^(i-j) · y^j

The 5th-order has 21 coefficients for each polynomial to be determined. For solving these unknowns, 21 conditions should be available. For each vertex of the triangle, one point value is given, and two 1st-order and three 2nd-order partial derivatives can be easily derived by establishing a 2nd-order polynomial using vertices in the neighborhood of the vertex. Then the total 18 conditions are ready to be used. Three more conditions can be obtained by assuming that the normal partial derivative on each edge of the triangle is a cubic polynomial, which means that the sum of the polynomial items beyond the 3rd-order in the normal partial derivative has a value of zero.

Checkpoint analysis
It should be emphasized that the independent checkpoint analysis is critical for determining the accuracy of rubber sheeting modeling. For an exact modeling method like rubber sheeting, the ground control points, which are used in the modeling process, do not have much geometric residual remaining. To evaluate the geometric transformation between source and destination coordinate systems, the accuracy assessment using independent checkpoints is recommended.
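A checkpoint analysis boils down to applying the fitted model to points withheld from the modeling and reporting the RMS error, as in this small sketch. The `transform` function and the checkpoint coordinates below are placeholders, not values from the software.

    # RMS error at independent checkpoints for any fitted transformation.
    import numpy as np

    def checkpoint_rms(transform, check_src, check_ref):
        pred = np.array([transform(x, y) for x, y in check_src])
        return float(np.sqrt(np.mean(np.sum((pred - np.asarray(check_ref)) ** 2, axis=1))))

    # Example with a placeholder affine model and made-up checkpoints:
    affine = lambda x, y: (10.0 + 1.1 * x + 0.02 * y, 20.0 + 0.01 * x + 1.05 * y)
    print(checkpoint_rms(affine, [(5.0, 5.0), (50.0, 40.0)], [(16.0, 26.0), (66.0, 63.0)]))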

Camera Properties
The Camera model is derived by space resection based on collinearity equations, and is used for rectifying any image that uses a camera as its sensor. In addition to the General, Links, and Elevation tabs, Camera Properties has tabs for Orientation, Camera, and Fiducials.

The Orientation feature allows you to choose different rotation angles and perspective center positions for the camera. The Rotation Angle lets you customize the Omega, Phi, and Kappa rotation angles of the image to determine the viewing direction of the camera. If you are going to fill in this information on the Orientation tab, then you will need to make sure you do not check Account for Earth's curvature on the Elevation tab. If you can fill in all the degrees and meters for the Rotation Angle and the Perspective Center Position, then you do not need the three links you normally would need for the Camera model. You can see the areas to fill in on the Orientation tab below:

[Figure: Camera Properties dialog, Orientation tab]

Rotation offers the following options when you click the dropdown arrows:
• Unknown — select when the rotation angle is unknown
• Estimated — select when estimating the rotation angle
• Fixed — select when the rotation angle is defined
• Omega — omega rotation angle is roll: around the x-axis of the ground system
• Phi — phi rotation angle is pitch: around the y-axis (after Omega rotation)
• Kappa — kappa rotation angle is yaw: around the z-axis rotated by Omega and Phi

The Perspective Center Position is given in meters and allows you to enter the perspective center for ground coordinates. You can choose from the following options:
• Unknown — select when the ground coordinate is unknown
• Estimated — select when estimating the ground coordinate
• Fixed — select when the ground coordinate is defined
• X — enter the X coordinate of the perspective center
• Y — enter the Y coordinate of the perspective center
• Z — enter the Z coordinate of the perspective center
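To make the roll/pitch/yaw description concrete, the sketch below builds a rotation matrix from sequential rotations about the x-axis (omega), y-axis (phi), and z-axis (kappa). This uses a common photogrammetric convention; it is an assumption for illustration and not necessarily the exact convention of the Camera model.

    # Viewing direction from omega, phi, kappa (angles in radians).
    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])     # roll (omega)
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])     # pitch (phi)
        Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])     # yaw (kappa)
        return Rz @ Ry @ Rx                                        # applied in omega, phi, kappa order

    print(rotation_matrix(np.radians(1.0), np.radians(-0.5), np.radians(90.0)))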

The next tab on Camera Properties is also called Camera. This is where you can specify the Camera Name, the Number of Fiducials, the Principal Point, and the Focal Length for the camera that was used to capture your image. You can click Load or Save to open or save a file with certain camera information in it. Do not use over 8 fiducials in an image.

[Figure: Camera tab on Camera Properties dialog. Click to select where to place the fiducial in the viewer.]

The last tab on the Camera Properties dialog is the Fiducials tab. Fiducials are used to compute the transformation from data file to image coordinates. The fiducials for your image will be fixed on the frame and visible in the exposure. The Fiducial information you enter on the Camera tab will be displayed in a cell array on the Fiducials tab after you click Apply on the Camera Properties dialog.

Fiducial orientation defines the relationship between the image/photo-coordinate system of a frame and the actual image orientation as it appears within a view. The image/photo-coordinate system is defined by the camera calibration information. The orientation of the image is largely dependent on the way the photograph was scanned during the digitization stage.

In order to select the appropriate fiducial orientation, compare the axis of the photo-coordinate system (defined in the calibration report) with the orientation of the image. Based on the relationship between the photo-coordinate system and the image, the appropriate fiducial orientation can be selected. The following illustrations demonstrate the fiducial orientation used under the various circumstances:
• Fiducial One — places the marker at the left of the image
• Fiducial Two — places the marker at the top of the image
• Fiducial Three — places the marker at the right of the image
• Fiducial Four — places the marker at the bottom of the image

Ensure that the appropriate fiducial orientation is used as a function of the image/photo-coordinate system. Selecting the inappropriate fiducial orientation results in large RMS errors during the measurement of fiducial marks for interior orientation and errors during the automatic tie point collection. If initial approximations for exterior orientation have been defined and the corresponding fiducial orientation does not correspond, the automatic tie point collection capability provides inadequate results.

IKONOS, QuickBird, and RPC Properties
IKONOS, QuickBird, and RPC Properties are sometimes referred to together as the Rational Function Models. They are virtually the same except for the files they use. IKONOS files are images captured by the IKONOS satellite. QuickBird files are images captured by the QuickBird satellite. RPC Properties uses NITF data. The dialogs for the three in Geocorrection Properties are identical as well.

IKONOS
IKONOS images are produced from the IKONOS satellite, which was launched in September of 1999 by the Athena II rocket. IKONOS orbits at an altitude of 423 miles, or 681 kilometers. The resolution of the panchromatic sensor is 1 m. The resolution of the multispectral scanner is 4 m. The swath width is 13 km at nadir. The revisit time is 2.9 days at 1 m resolution, and 1.5 days at 1.5 m resolution. The accuracy without ground control is 12 m horizontally and 10 m vertically; with ground control it is 2 m horizontally and 3 m vertically.

IKONOS Bands and Wavelengths

Band            Wavelength (microns)
1, Blue         0.45 to 0.52 µm
2, Green        0.52 to 0.60 µm
3, Red          0.63 to 0.69 µm
4, NIR          0.76 to 0.90 µm
Panchromatic    0.45 to 0.90 µm

The IKONOS Properties dialog gives you the ability to rectify IKONOS images from the satellite. Like the other property dialogs in Geocorrection, IKONOS has General, Links, and Elevation tabs as well as Parameters and Chipping tabs. It is important that you click the Add Links button before you click the Geocorrection Properties button to open one of these three property dialogs. Once you click the Add Links button and click the Geocorrection Properties button, the dialog will appear.

IKONOS Properties Parameters tab
The Parameters tab is the same in all three of these Geocorrection models. The Parameters tab in IKONOS, QuickBird, and RPC Properties calls for an RPC file and the Elevation Range. Click the Parameters tab, and enter the RPC File before proceeding with anything else.

The RPC file is generated by the data provider based on the position of the satellite at the time of image capture. This file should be located in the same directory as the image you intend to use in the Geocorrection process.

The RPCs can be further refined by using ground control points (GCPs). On the Parameters tab, there is also a check box for Refinement with Polynomial Order. This is provided so you may apply polynomial corrections to the original rational function model. This option corrects the remaining error and refines the mathematical solution. Check the box to enable the refinement process, then specify the order by clicking the arrows. The 0-order results in a simple shift to both image X and Y coordinates, the 1st-order is an affine transformation, the 2nd-order results in a second order transformation, and the 3rd-order in a third order transformation. Usually, a 0 or 1st-order is sufficient to reduce error not addressed by the rational function model (RPC file).

IKONOS Properties Chipping tab
After the Parameters tab on the IKONOS Properties dialog, there is the Chipping tab. The Chipping tab is the same for IKONOS, QuickBird, and RPC Properties. The Chipping process allows calculation of RPCs for an image chip rather than the full, original image. This is made possible by specifying an affine relationship (pixel) between the chip and the full, original image from which the chip was derived. X and Y correspond to the pixel coordinates for the full, original image.

On the Chipping tab you are given the choice of Scale and Offset or Arbitrary Affine as your chipping parameters. The dialog will change depending on which chipping parameter you choose. Scale and Offset is the simpler of the two. The formulas for calculating the affine using scale and offset are listed on the dialog.

The following is an example of the Scale and Offset dialog on the Chipping tab:

[Figure: IKONOS Chipping tab using Scale and Offset]

The Arbitrary Affine formulas are listed on the dialog when you choose that option. In the formulas, x' (x prime) and y' (y prime) correspond to the pixel coordinates in the chip with which you are currently working. Values for the following variables are either obtained from the header data of the chip, if the chip header contains the appropriate data, or they default to the predetermined values described below:

• Row Offset — This value corresponds to value f, an offset value. In the absence of header data, this value defaults to 0.
• Column Offset — This value corresponds to value c, an offset value. In the absence of header data, this value defaults to 0.
• Row Scale — This value corresponds to value e, a scale factor that is also used in rotation. In the absence of header data, this value defaults to 1.
• Column Scale — This value corresponds to value a, a scale factor that is also used in rotation. In the absence of header data, this value defaults to 1.

Also under the Chipping tab, you'll find a box for Full Row Count and Full Column Count. For Full Row Count, this value is the row count of the full, original image. If the header count is absent, this value corresponds to the row count of the chip. For Full Column Count, this value is the column count of the full, original image. If the header count is absent, the value corresponds to the column count of the chip.

The following is an example of the Arbitrary Affine dialog on the Chipping tab:

[Figure: IKONOS Chipping tab using Arbitrary Affine]
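The exact formulas are shown on the dialog rather than in this text; as a rough illustration only, the sketch below assumes the usual affine form X = a·x' + b·y' + c and Y = d·x' + e·y' + f, with Scale and Offset being the special case b = d = 0, using the variable names (a, c, e, f) defined above. The chip offsets in the example are made up.

    # Assumed chip-to-full-image mapping; the dialog's formulas are authoritative.
    def chip_to_full(x_prime, y_prime, a=1.0, b=0.0, c=0.0, d=0.0, e=1.0, f=0.0):
        """Map chip pixel coordinates (x', y') to full-image coordinates (X, Y)."""
        X = a * x_prime + b * y_prime + c    # a = column scale, c = column offset
        Y = d * x_prime + e * y_prime + f    # e = row scale, f = row offset
        return X, Y

    # Scale and Offset case (b = d = 0): a chip cut at column 2048, row 1024, no scaling.
    print(chip_to_full(100, 200, a=1.0, c=2048.0, e=1.0, f=1024.0))   # (2148.0, 1224.0)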

QuickBird
QuickBird Properties allows you to rectify images captured with the QuickBird satellite. The QuickBird satellite was launched in October of 2001. Its orbit has an altitude of 450 kilometers, a 93.5 minute orbit time, and a 10:30 A.M. equator crossing time. The inclination is 97.2 degrees sun-synchronous, and the nominal swath width is 16.5 kilometers at nadir. The sensor has both panchromatic and multispectral capabilities. The panchromatic bandwidth is 450-900 nanometers. The dynamic range is 11 bits per pixel for both panchromatic and multispectral. The multispectral bands are as follows:

QuickBird Bands and Wavelengths

Band        Wavelength (microns)
1, Blue     0.45 to 0.52 µm
2, Green    0.52 to 0.60 µm
3, Red      0.63 to 0.69 µm
4, NIR      0.76 to 0.90 µm

Like IKONOS, QuickBird requires the use of an RPC file to describe the relationship between the image and the earth's surface at the time of image capture. The RPC file associated with the image contains rational function polynomial coefficients that are generated by the data provider based on the position of the satellite at the time of image capture. These RPCs can be further refined by using GCPs. This file should be located in the same directory as the image or images you intend to use in orthorectification. Just like IKONOS, QuickBird has a Parameters tab as well as a Chipping tab on its Properties dialog. The same information applies to both tabs as is discussed in the IKONOS section.

RPC
RPC stands for rational polynomial coefficients. RPC Properties in Image Analysis for ArcGIS allows you to work with NITF data. NITF stands for National Imagery Transmission Format Standard. NITF data is designed to pack numerous image compositions with complete annotation, text attachments, and imagery associated metadata. When you choose it, the function allows you to specify the associated RPC file to be used in Geocorrection. Just like IKONOS and QuickBird, the RPC dialog contains the Parameters and Chipping tabs. These work the same way in all three model properties.

Landsat
The Landsat dialog is used for orthorectification of any Landsat image that uses TM or MSS as its sensor. The model is derived by space resection based on collinearity equations. The elevation information is required in the model for removing relief displacement.

Landsat 1-5
In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian program specializing in the acquisition of remotely sensed digital satellite data. The first system was called ERTS (Earth Resources Technology Satellites), and later renamed to Landsat. There have been several Landsat satellites launched since 1972. Landsats 1, 2, and 3 are no longer operating, but Landsats 4 and 5 are still in orbit gathering data. Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5 collect MSS and TM data.

MSS
The MSS from Landsats 4 and 5 has a swath width of approximately 185 × 170 km from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV (instantaneous field of view). A typical scene contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-bit, but it is stored as 8-bit (Lillesand and Kiefer 1987).

MSS data is widely used for general geologic studies as well as vegetation inventories. Detectors record electromagnetic radiation (EMR) in four bands:
• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting cultural features, such as roads. These bands also show detail in water.
• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in land/water and vegetation discrimination.

TM
The TM scanner is a multispectral scanning system much like the MSS, except that the TM sensor records reflected/emitted electromagnetic energy from the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS. TM has a swath width of approximately 185 km from a height of approximately 705 km. It is useful for vegetation type and health determination, soil moisture, snow and cloud differentiation, and rock type discrimination.

Different color schemes can be used to bring out or enhance the features under study. For instance, in an infrared image, vegetation appears red, water appears navy or black, and so on. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. Bands 4, 3, and 2 create a false color composite. Bands 5, 4, and 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue. These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.

The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6), which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is necessary for adequate signal strength. However, the thermal band is resampled to 28.5 × 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each pixel has a possible range of data values from 0 to 255.

Detectors record EMR in seven bands:
• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in detecting cultural features such as roads. These bands also show detail in water.
• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be used in land/water discrimination.
• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping (Jensen 1996; Lillesand and Kiefer 1987).

TM Bands and Wavelengths

Band        Wavelength (microns)    Comments
1, Blue     0.45 to 0.52 µm         For mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.
2, Green    0.52 to 0.60 µm         Corresponds to the green reflectance of healthy vegetation. Also useful for cultural feature identification.
3, Red      0.63 to 0.69 µm         For discriminating between many plant species. It is also useful for determining soil boundary and geological boundary delineations as well as cultural features.
4, NIR      0.76 to 0.90 µm         Especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Band        Wavelength (microns)    Comments
5, MIR      1.55 to 1.75 µm         Sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. This is also one of the few bands that can be used to discriminate between clouds, snow, and ice.
6, TIR      10.40 to 12.50 µm       For vegetation and crop stress detection, heat intensity, insecticide applications, and for locating thermal pollution. It can also be used to locate geothermal activity.
7, MIR      2.08 to 2.35 µm         Important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.

[Figure: Landsat MSS versus Landsat TM. MSS: radiometric resolution 0-127, 1 pixel = 57 m x 79 m. TM: 7 bands, radiometric resolution 0-255, 1 pixel = 30 m x 30 m.]

Band Combinations for Displaying TM Data
Different combinations of the TM bands can be displayed to create different composite effects. The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of the monitor. The following combinations are commonly used to display images:
• Bands 3, 2, 1 create a true color composite. True color means that objects look as they would to the naked eye—similar to a color photograph.
• Bands 4, 3, 2 create a false color composite. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. For instance, in an infrared image, vegetation appears red, water appears navy or black, and so on.
• Bands 5, 4, 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue.

These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.
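As an illustration of how such composites are assembled outside the dialogs, the hedged sketch below stacks three bands (already loaded as 2-D numpy arrays; random placeholders are used here) into the red, green, and blue display channels after a simple linear stretch. It is not part of Image Analysis for ArcGIS.

    # False color composite from TM bands 4, 3, 2 held as 2-D arrays.
    import numpy as np

    def stretch(band):
        """Simple linear stretch of one band to the 0-255 display range."""
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

    def false_color(band4, band3, band2):
        return np.dstack([stretch(band4), stretch(band3), stretch(band2)])  # R, G, B

    composite = false_color(np.random.rand(100, 100),
                            np.random.rand(100, 100),
                            np.random.rand(100, 100))
    print(composite.shape)   # (100, 100, 3)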

Landsat 7
The Landsat 7 satellite, launched in 1999, uses Enhanced Thematic Mapper Plus (ETM+) to observe the earth. The capabilities new to Landsat 7 include the following:
• 15 m spatial resolution panchromatic band
• 5% radiometric calibration with full aperture
• 60 m spatial resolution thermal IR channel

Landsat 7 specifications
Information about the spectral range and ground resolution of the bands of the Landsat 7 satellite is provided in the following table:

Landsat 7 Characteristics

Band Number         Wavelength (microns)    Resolution (m)
1                   0.45 to 0.52 µm         30
2                   0.52 to 0.60 µm         30
3                   0.63 to 0.69 µm         30
4                   0.76 to 0.90 µm         30
5                   1.55 to 1.75 µm         30
6                   10.4 to 12.5 µm         60
7                   2.08 to 2.35 µm         30
Panchromatic (8)    0.50 to 0.90 µm         15

Landsat 7 data types
One type of data available from Landsat 7 is browse data. Browse data is a lower resolution image for determining image location, quality, and information content. The other type of data is metadata, which is descriptive information on the image. This information is available via the internet within 24 hours of being received by the primary ground station. Moreover, Landsat 7 is capable of capturing scenes without cloud obstruction.

The primary receiving station for Landsat 7 data is located in Sioux Falls, South Dakota at the USGS EROS Data Center (EDC). ETM+ data is transmitted using X-band direct downlink at a rate of 150 Mbps, and the receiving stations can obtain this data in real time using the X-band. Stations located around the globe, however, are only able to receive data for the portion of the ETM+ ground track where the satellite can be seen by the receiving station.

EDC processes the data to Level 0r. This data has been corrected for scan direction and band alignment errors only. Level 1G data, which is corrected, is also available.

Landsat 7 has a swath width of 185 kilometers. The repeat coverage interval is 16 days, or 233 orbits. The satellite orbits the earth at 705 kilometers.

The Landsat dialog
The Landsat Properties dialog in Geocorrection Properties has the General, Links, and Elevation tabs already discussed in this chapter. It also has a Parameters tab, which is different from the ones discussed so far. The Parameters tab has areas where you select the type of sensor used to capture your data, the Number of Iterations, the Scene Coverage (if you choose Quarter Scene you also choose the quadrant), and the Background.


Glossary

Terms

abstract symbol: An annotation symbol that has a geometric shape, such as a circle, square, or triangle. These symbols often represent amounts that vary from place to place, such as population density, yearly rainfall, and so on.
accuracy assessment: The comparison of a classification to geographical data that is assumed to be true. Usually, the assumed true data is derived from ground truthing.
American Standard Code for Information Interchange (ASCII): A basis of character sets...to convey some control codes, space, numbers, most basic punctuation, and unaccented letters a-z and A-Z.
analysis mask: An option that uses a raster dataset in which all cells of interest have a value and all other cells are no data. Analysis mask lets you perform analysis on a selected set of cells.
ancillary data: The data, other than remotely sensed data, that is used to aid in the classification process.
annotation: The explanatory material accompanying an image or a map. Annotation can consist of lines, text, polygons, ellipses, rectangles, legends, scale bars, and any symbol that denotes geographical features.
AOI: See area of interest.

a priori: Already or previously known.
area: A measurement of a surface.
area of interest (AOI): A point, line, or polygon that is selected as a training sample or as the image area to be used in an operation.
ASCII: See American Standard Code for Information Interchange.
aspect: The orientation, or the direction that a surface faces, with respect to the directions of the compass: north, south, east, west.
attribute: The tabular information associated with a raster or vector layer.
average: The statistical mean; the sum of a set of values divided by the number of values in the set.
band: A set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, and so on) or some other user-defined information created by combining or enhancing the original bands. Sometimes called channel.
bilinear interpolation: Uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.
bin function: A mathematical function that establishes the relationship between data file values and rows in a descriptor table.
bins: Ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels are then given new values based upon the bins to which they are assigned.
border: On a map, a line that usually encloses the entire map, not just the image area as does a neatline.
boundary: A neighborhood analysis technique that is used to detect boundaries between thematic classes.

brightness value: The quantity of a primary color (red, green, blue) to be output to a pixel on the display device. Also called intensity value, function memory value, pixel value, display value, and screen value.
buffer zone: A specific area around a feature that is isolated for or from further analysis. For example, buffer zones are often generated around streams in site assessment studies so that further analyses exclude these areas that are often unsuitable for development.
camera properties: Camera properties are for the orthorectification of any image that uses a camera for its sensor. The model is derived by space resection based on collinearity equations. The elevation information is required in the model for removing relief displacement.
Cartesian: A coordinate system in which data are organized on a grid and points on the grid are referenced by their X,Y coordinates.
categorize: The process of choosing distinct classes to divide your image into.
cell: 1. A 1 × 1 area of coverage, measured in map units. For example, DTED (Digital Terrain Elevation Data) are distributed in cells. 2. A pixel; grid cell.
cell size: The area that one pixel represents, measured in map units. For example, one cell in the image may represent an area 30' × 30' on the ground. Sometimes called the pixel size.
checkpoint analysis: The act of using check points to independently verify the degree of accuracy of a triangulation.
circumcircle: A triangle's circumscribed circle; the circle that passes through each of the triangle's three vertices.
class: A set of pixels in a GIS file that represents areas that share some condition. Classes are usually formed through classification of a continuous raster layer.
class value: A data file value of a thematic file that identifies a pixel as belonging to a particular class.
classification: The process of assigning the pixels of a continuous raster image to discrete categories.
classification accuracy table: For accuracy assessment, a list of known values of reference pixels, supported by some ground truth or other a priori knowledge of the true class, and a list of the classified values of the same pixels, from a classified file to be tested.

classification scheme (or classification system): A set of target classes. The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data.
clustering: Unsupervised training; the process of generating signatures based on the natural groupings of pixels in image data when they are plotted in spectral space.
clusters: The natural groupings of pixels when plotted in spectral space.
coefficient: One number in a matrix, or a constant in a polynomial expression.
collinearity: A nonlinear mathematical model that photogrammetric triangulation is based upon. Collinearity equations describe the relationship among image coordinates, ground coordinates, and orientation parameters.
contiguity analysis: A study of the ways in which pixels of a class are grouped together spatially. Groups of contiguous pixels in the same class, called raster regions, or clumps, can be identified by their sizes and multiplied.
continuous: A term used to describe raster data layers that contain quantitative and related values. See continuous data.
continuous data: A type of raster data that are quantitative (measuring a characteristic) and have related, continuous values, such as remotely sensed images (Landsat, SPOT, and so on).
contrast stretch: The process of reassigning a range of values to another range, usually according to a linear function. Contrast stretching is often used in displaying continuous raster layers, since the range of data file values is usually much narrower than the range of brightness values on the display device.
convolution filtering: The process of averaging small sets of pixels across an image. Used to change the spatial frequency characteristics of an image.
convolution kernel: A matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight this average toward particular pixels.
coordinate system: A method of expressing location. In two-dimensional coordinate systems, locations are expressed by a column and row, also called X and Y.
correlation threshold: A value used in rectification to determine whether to accept or discard GCPs. The threshold is an absolute value threshold ranging from 0.000 to 1.000.

correlation windows: Windows that consist of a local neighborhood of pixels.
corresponding GCPs: The GCPs that are located in the same geographic location as the selected GCPs, but are selected in different files.
covariance: Measures the tendencies of data file values for the same pixel, but in different bands, to vary with each other in relation to the means of their respective bands. These bands must be linear. Covariance is defined as the average product of the differences between the data file values in each band and the mean of each band.
covariance matrix: A square matrix that contains all of the variances and covariances within the bands in a data file.
cubic convolution: Uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output with a cubic function.
data: 1. In the context of remote sensing, a computer file containing numbers that represent a remotely sensed image, and can be processed to display that image. 2. A collection of numbers, strings, or facts that requires some processing before it is meaningful.
database: A relational data structure usually used to store tabular information. Examples of popular databases include SYBASE, dBASE, Oracle, INFO, etc.
data file: A computer file that contains numbers that represent an image.
data file value: Each number in an image file, or pixel. Also called file value, image file value, DN, brightness value.
decision rule: An equation or algorithm that is used to classify image data after signatures have been created. The decision rule is used to process the data file values based upon the signature statistics.
density: A neighborhood analysis technique that outputs the number of pixels that have the same value as the analyzed pixel in a user-specified window.
digital elevation model (DEM): Continuous raster layers in which data file values represent elevation. DEMs are available from the USGS at 1:24,000 and 1:250,000 scale, and can be produced with terrain analysis programs.

digital terrain model (DTM): A discrete expression of topography in a data array, consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground points and breaklines.
dimensionality: In classification, dimensionality refers to the number of layers being classified. For example, a data file with three layers is said to be three dimensional.
divergence: A statistical measure of distance between two or more signatures. Divergence can be calculated for any combination of bands used in the classification; bands that diminish the results of the classification can be ruled out.
diversity: A neighborhood analysis technique that outputs the number of different values within a user-specified window.
edge detector: A convolution kernel, which is usually a zero-sum kernel, that smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high. High spatial frequency is at the edges between homogeneous groups of pixels.
edge enhancer: A high-frequency convolution kernel that brings out the edges between homogeneous groups of pixels. Unlike an edge detector, it only highlights edges; it does not necessarily eliminate other features.
enhancement: The process of making an image more interpretable for a particular application. Enhancement can make important features of raw, remotely sensed data more interpretable to the human eye.
extension: The three letters after the period in a file name that usually identify the type of file.
extent: 1. The area of the earth's surface to be mapped. 2. The image area to be displayed in a View.
feature collection: The process of identifying, delineating, and labeling various types of natural and human-made phenomena from remotely-sensed images.
feature extraction: The process of studying and locating areas and objects on the ground and deriving useful information from images.
feature space: An abstract space that is defined by spectral units (such as an amount of electromagnetic radiation).
fiducial center: The center of an aerial photo.

For a single band of data. and any other data needed for a study. enhances. histogram A graph of data distribution. as well as computer software and human knowledge. focal The process of performing one of several analyses on data values in an image file. combines. georeferencing The process of assigning map coordinates to image data and resampling the pixels of the image to conform to the map projection grid. file coordinates The location of a pixel within the file in x. GLOSSARY 189 . the horizontal axis of a histogram graph is the range of all possible data file values. or a chart of the number of pixels that have each possible data file value. and analyzes layers of geographic data to produce interpretable information. A GIS may include computer images.y coordinates.0.fiducials Four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure that are used to compute the transformation from data file to image coordinates. Some texts may use the terms filtering and spatial filtering synonymously. filtering The removal of spatial or spectral features for data enhancement. for use in rectifying an image. The vertical axis is a measure of pixels that have each data value. GCP matching For image to image rectification. The upper left file coordinate is usually 0. a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix. Convolution filtering is one method of spatial filtering. geographic information system (GIS) A unique system designed for a particular application that stores. GCPs are used for computing a transformation matrix. geocorrection The process of rectifying remotely sensed data that has distortions due to a sensor or the curvature of the earth. hardcopy maps. high frequency kernel A convolution kernel that increases the spatial frequency of an image. ground control point (GCP) Specific pixel in image data for which the output map coordinates (or other output coordinates) are known. using a process similar to convolution filtering. Also called a high-pass kernel. statistical data. GISs are used for solving complex geographic planning and management problems.

image data Digital representations of the earth that can be used in computer image processing and GIS analyses. The IKONOS satellite orbits at an altitude of 423 miles. Blue = 0 (and 360) magenta = 60. island polygons represent areas in the polygon that have differing characteristics from the areas in the larger polygon. IKONOS properties Use the IKONOS Properties geocorrection dialog to perform orthorectification on images gathered with the IKONOS satellite. histogram matching The process of determining a lookup table that converts the histogram of one band of an image or one color gun to resemble another histogram. island polygons When using Seed Tool. image matching The automatic acquisition of corresponding image points on the overlapping area of two images. and cyan = 300. red = 120. Landsat A series of earth-orbiting satellites that gather MSS and TM imagery operated by EOSAT. yellow = 180. IR Infrared portion of the electromagnetic spectrum. saturation) that is representative of the color or dominant wavelength of the pixel. hue A component of IHS (intensity. The result is a nearly flat histogram. It varies from 0 to 360. image file A file containing raster image data. classification. including (but not limited to) enhancement. ISODATA (Iterative Self-Organizing Data Analysis Technique) A method of clustering that uses spectral distance as in the sequential method.5 days at 1.histogram equalization The process of redistributing pixel values so that there are approximately the same number of pixels with each value within a range. You have the option to use the island polygons feature or to turn it off when using Seed Tool. 190 USING IMAGE ANALYSIS FOR ARCGIS .9 days at 1 meter resolution. and rectification operations. and 1. or 681 kilometers. image processing The manipulation of digital image data. and classifies again so that the spectral distance patterns in the data gradually emerge. The revisit time is 2. indices The process used to create output images by mathematically combining the DN values of different bands. hue. green = 240. redefines the criteria for each class. but iteratively classifies the pixels.5 meter resolution.

2. Also called 1st-order. A layer consists of a thematic image file. maximum likelihood linear transformation A 1st-order rectification. A single band or set of three bands displayed using the red. A neighborhood analysis technique that outputs the median value of the data file values in a user-specified window. To display or print an image. The central value in a set of data such that an equal number of values are greater than and less than the median. majority A neighborhood analysis technique that outputs the most common value of the data file values in a user-specified window. A band or channel of data. A classification decision rule based on the probability that a pixel belongs to a particular class. linear A description of a function that can be graphed as a straight line or a series of lines. A neighborhood analysis technique that outputs the mean value of the data file values in a userspecified window. and blue color guns. linear contrast stretch An enhancement technique that outputs new values at regular intervals. A linear transformation can change location in X and/or Y. skew in X and/or Y. 3. scale in X and/or Y. low frequency kernel A convolution kernel that decreases spatial frequency. The statistical Average. lookup table (LUT) An ordered set of numbers that is used to perform a function on a set of input values. and may also include attributes. map projection A method of representing the three-dimensional spherical surface of a planet on a two-dimensional map surface. Linear equations (transformations) can generally be expressed in the form of the equation of a line or plane. A component of a GIS database that contains all of the data for one theme. GLOSSARY 191 . median 1. All map projections involve the transfer of latitude and longitude onto an easily flattened surface. and that the input bands have normal distributions. lookup tables translate data file values into brightness values. mean 1. green. 2. 2. maximum A neighborhood analysis technique that outputs the greatest value of the data file values in a user-specified window. The basic equation assumes that these probabilities are equal for all classes. and rotation. the sum of a set of values divided by the number of values in the set.layer 1. Also called low-pass kernel.

minimum: A neighborhood analysis technique that outputs the least value of the data file values in a user-specified window.
minimum distance: A classification decision rule that calculates the spectral distance between the measurement vector for each candidate pixel and the mean vector for each signature. Also called spectral distance.
minority: A neighborhood analysis technique that outputs the least common value of the data file values in a user-specified window.
modeling: The process of creating new layers from combining or operating upon existing layers. Modeling allows the creation of new classes from existing classes and the creation of a small set of images, or a single image, which, at a glance, contains many types of information about a scene.
mosaicking: The process of piecing together images side by side to create a larger image.
multispectral classification: The process of sorting pixels into a finite number of individual classes, or categories of data, based on data file values in multiple bands.
multispectral imagery: Satellite imagery with data recorded in two or more bands.
multispectral scanner (MSS): Landsat satellite data acquired in four bands with a spatial resolution of 57 × 79 meters.
nadir: The area on the ground directly beneath a scanner's detectors.
NDVI: See Normalized Difference Vegetation Index.
nearest neighbor: A resampling method in which the output data file value is equal to the input pixel that has coordinates closest to the retransformed coordinates of the output pixel.
neighborhood analysis: Any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning.
no data: NoData is what you assign to pixel values you do not want to include in a classification or function. By assigning pixel values NoData, they are not given a value. The values that NoData pixels are given are understood to be just place holders. Images that georeference to non-rectangles need a NoData concept for display even if they are not classified.
non-directional: The process using the Sobel and Prewitt filters for edge detection. These filters use orthogonal kernels convolved separately with the original image, and then combined.
normalized difference vegetation index (NDVI): The formula for NDVI is (IR - R) / (IR + R), where IR stands for the infrared portion of the electromagnetic spectrum and R stands for the red portion of the electromagnetic spectrum. NDVI finds areas of vegetation in imagery.

nonlinear: Describing a function that cannot be expressed as the graph of a line or in the form of the equation of a line or plane. Nonlinear equations usually contain expressions with exponents. Second-order (2nd-order) or higher-order equations and transformations are nonlinear.
nonlinear transformation: A 2nd-order or higher rectification.
nonparametric signature: A signature for classification that is based on polygons or rectangles that are defined in the feature space image for the image file. There is not a statistical basis for a nonparametric signature; it is simply an area in a feature space image.
observation: In photogrammetric triangulation, a grouping of the image coordinates for a GCP.
off-nadir: Any point that is not directly beneath a scanner's detectors, but off to an angle. The SPOT scanner allows off-nadir viewing.
orthorectification: A form of rectification that corrects for terrain displacement and can be used if a DEM of the study area is available.
overlay: 1. A function that creates a composite file containing either the minimum or the maximum class values of the input files. Overlay sometimes refers generically to a combination of layers. 2. The process of displaying a classified file over the original image to inspect the classification.
panchromatic imagery: Single-band or monochrome satellite imagery.
parallelepiped: 1. A classification decision rule in which the data file values of the candidate pixel are compared to upper and lower limits. 2. The limits of a parallelepiped classification, especially when graphed as rectangles.
parameter: 1. Any variable that determines the outcome of a function or operation. 2. The mean and standard deviation of data, which are sufficient to describe a normal curve.
parametric signature: A signature that is based on statistical parameters (such as mean and covariance matrix) of the pixels that are in the training sample or cluster.

pattern recognition: The science and art of finding meaningful patterns in data, which can be extracted through classification.
piecewise linear contrast stretch: An enhancement technique used to enhance a specific portion of data by dividing the lookup table into three sections: low, middle, and high.
pixel: Abbreviated from picture element; the smallest part of a picture (image).
pixel depth: The number of bits required to store all of the data file values in a file. For example, data with a pixel depth of 8, or 8-bit data, have 256 values ranging from 0-255.
pixel size: The physical dimension of a single light-sensitive element (13 × 13 microns).
polygon: A set of closed line segments defining an area.
polynomial: A mathematical expression consisting of variables and coefficients. A coefficient is a constant that is multiplied by a variable in the expression.

principal components analysis (PCA): 1. A method of data compression that allows redundant data to be compressed into fewer bands (Jensen 1996; Faust 1989). 2. The process of calculating principal components and outputting principal component bands. It allows redundant data to be compacted into fewer bands (that is, the dimensionality of the data is reduced).
principal point: The point in the image plane onto which the perspective center is projected, located directly beneath the interior orientation.
profile: A row of data file values from a DEM or DTED file. The profiles of DEM and DTED run south to north (that is, the first pixel of the record is the southernmost pixel).
pushbroom: A scanner in which all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner, such as the SPOT scanner.
QuickBird: The QuickBird model requires the use of rational polynomial coefficients (RPCs) to describe the relationship between the image and the earth's surface at the time of image capture. By using QuickBird Properties, you can perform orthorectification on images gathered with the QuickBird satellite.


radar data The remotely sensed data that are produced when a radar transmitter emits a beam of micro or millimeter waves, the waves reflect from the surfaces they strike, and the backscattered radiation is detected by the radar system's receiving antenna, which is tuned to the frequency of the transmitted waves.
radiometric correction The correction of variations in data that are not caused by the object or scene being scanned, such as scanner malfunction and atmospheric interference.
radiometric enhancement An enhancement technique that deals with the individual values of pixels in an image.
radiometric resolution The dynamic range, or number of possible data file values, in each band. This is referred to by the number of bits into which the recorded energy is divided. See pixel depth.
rank A neighborhood analysis technique that outputs the number of values in a user-specified window that are less than the analyzed value.
raster data Data that are organized in a grid of columns and rows.
ratio data A data type in which thematic class values have the same properties as interval values, except that ratio values have a natural zero or starting point.
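As a sketch of the rank neighborhood analysis described above, the following example counts, for each 3 × 3 window, how many values are less than the analyzed (center) value. It uses SciPy's generic_filter on hypothetical data and is an illustration only, not the software's implementation.

```python
import numpy as np
from scipy.ndimage import generic_filter

data = np.array([[1, 5, 3],
                 [4, 2, 8],
                 [7, 6, 9]], dtype=float)

# For each 3 x 3 window, count how many neighbors are less than the center value.
def rank(window):
    center = window[len(window) // 2]   # center of the flattened 3 x 3 window
    return np.sum(window < center)

result = generic_filter(data, rank, size=3, mode="nearest")
print(result)
```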

recoding The assignment of new values to one or more classes.
rectification The process of making image data conform to a map projection system. In many cases, the image must also be oriented so that the north direction corresponds to the top of the image.
rectified coordinates The coordinates of a pixel in a file that has been rectified, which are extrapolated from the GCPs. Ideally, the rectified coordinates for the GCPs are exactly equal to the reference coordinates. Because there is often some error tolerated in the rectification, this is not always the case.
reference coordinates The coordinates of the map or reference image to which a source (input) image is being registered. GCPs consist of both input coordinates and reference coordinates for each point.
reference pixels In classification accuracy assessment, pixels for which the correct GIS class is known from ground truth or other data. The reference pixels can be selected by you, or randomly selected.
reference plane In a topocentric coordinate system, the tangential plane at the center of the image on the earth ellipsoid, on which the three perpendicular coordinate axes are defined.


reproject The process of transforming raster image data from one map projection to another.
resampling The process of extrapolating data file values for the pixels in a new grid when data have been rectified or registered to another image.
resolution A level of precision in data.
resolution merging The process of sharpening a lower-resolution multiband image by merging it with a higher-resolution monochrome image.
RGB Red, green, blue. The primary additive colors that are used on most display hardware to display imagery.
RGB clustering A clustering method for 24-bit data (three 8-bit bands) that plots pixels in three-dimensional spectral space and divides that space into sections that are used to define clusters. The output color scheme of an RGB-clustered image resembles that of the input file.
RMS error The distance between the input (source) location of the GCP and the retransformed location for the same GCP. RMS error is calculated with a distance equation.
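The RMS error entry above refers to a distance equation; a minimal NumPy sketch of that calculation for a few hypothetical GCPs follows (illustrative only).

```python
import numpy as np

# Hypothetical source GCP locations and their retransformed locations (file coordinates).
source = np.array([[100.0, 250.0], [400.0, 120.0], [320.0, 480.0]])
retransformed = np.array([[101.2, 249.1], [398.7, 121.0], [321.0, 481.5]])

# Per-GCP RMS error is the Euclidean distance between the two locations;
# the total RMS error is the root mean square of those distances.
residuals = retransformed - source
per_gcp = np.sqrt((residuals ** 2).sum(axis=1))
total_rms = np.sqrt((per_gcp ** 2).mean())
print(per_gcp, total_rms)
```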

RPC properties The RPC Properties dialog uses rational polynomial coefficients to describe the relationship between the image and the earth's surface at the time of image capture. You can specify the associated RPC file to be used in your geocorrection.
rubber sheeting The application of nonlinear rectification (2nd-order or higher).
saturation A component of IHS that represents the purity of color and also varies linearly from 0 to 1.
scale 1. The ratio of distance on a map as related to the true distance on the ground. 2. Cell size. 3. The processing of values through a lookup table.
scanner The entire data acquisition system, such as the Landsat scanner or the SPOT panchromatic scanner.
seed tool An Image Analysis for ArcGIS feature that automatically generates feature layer polygons of similar spectral value.
shapefile A vector format that contains spatial data. Shapefiles have the .shp extension.


signature A set of statistics that defines a training sample or cluster. The signature is used in a classification process. Each signature corresponds to a GIS class that is created from the signatures with a classification decision rule.
source coordinates In the rectification process, the input coordinates.
spatial enhancement The process of modifying the values of pixels in an image relative to the pixels that surround them.
spatial frequency The difference between the highest and lowest values of a contiguous set of pixels.
spatial resolution A measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel.
speckle noise The light and dark pixel noise that appears in radar data.
spectral distance The distance in spectral space, computed as Euclidean distance in n dimensions, where n is the number of bands.
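To illustrate the spectral distance entry above, here is a short NumPy sketch computing the Euclidean distance between two hypothetical pixel vectors from a four-band image; the values are invented for the example.

```python
import numpy as np

# Two pixels from a hypothetical 4-band image (one data file value per band).
pixel_a = np.array([78.0, 102.0, 65.0, 140.0])
pixel_b = np.array([80.0, 99.0, 70.0, 131.0])

# Euclidean distance in n dimensions, where n is the number of bands.
spectral_distance = np.sqrt(((pixel_a - pixel_b) ** 2).sum())
print(spectral_distance)
```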

spectral enhancement The process of modifying the pixels of an image based on the original values of each pixel, independent of the values of surrounding pixels.
spectral resolution The specific wavelength intervals in the electromagnetic spectrum that a sensor can record.
spectral space An abstract space that is defined by spectral units (such as an amount of electromagnetic radiation). The notion of spectral space is used to describe enhancement and classification techniques that compute the spectral distance between n-dimensional vectors, where n is the number of bands in the data.
SPOT SPOT satellite sensors operate in two modes, multispectral and panchromatic. SPOT is often referred to as the pushbroom scanner, meaning that all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner.
standard deviation 1. The square root of the variance of a set of values, which is used as a measurement of the spread of the values. 2. A neighborhood analysis technique that outputs the standard deviation of the data file values of a user-specified window.
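As with the rank example earlier, the standard deviation neighborhood analysis in the entry above can be sketched with SciPy's generic_filter; the data are hypothetical and the code is illustrative only.

```python
import numpy as np
from scipy.ndimage import generic_filter

data = np.array([[10, 12, 11],
                 [13, 40, 12],
                 [11, 12, 10]], dtype=float)

# Output the standard deviation of the data file values in each 3 x 3 window.
result = generic_filter(data, np.std, size=3, mode="nearest")
print(result)
```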


striping A data error that occurs if a detector on a scanning system goes out of adjustment; that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.
subsetting The process of breaking out a portion of a large image file into one or more smaller files.
sum A neighborhood analysis technique that outputs the total of the data file values in a user-specified window.
summarize areas A common workflow progression in which a feature theme corresponding to an area of interest is used to summarize the change just within a certain area.
supervised training Any method of generating signatures for classification in which the analyst is directly involved in the pattern recognition process. Usually, supervised training requires the analyst to select training samples from the data that represent patterns to be classified.
swath width In a satellite system, the total width of the area on the ground covered by the scanner.
temporal resolution The frequency with which a sensor obtains imagery of a particular area.
terrain analysis The processing and graphic simulation of elevation data.
terrain data Elevation data expressed as a series of x, y, and z values that are either regularly or irregularly spaced.
thematic change A feature in Image Analysis for ArcGIS that allows you to compare two thematic images of the same area captured at different times to notice change in vegetation, urban areas, etc.
thematic data Raster data that is qualitative and categorical. Thematic layers often contain classes of related information, such as land cover, soil type, slope, and so on.
thematic map A map illustrating the class characterizations of a particular spatial variable such as soils, land cover, hydrology, etc.
thematic mapper (TM) Landsat data acquired in seven bands with a spatial resolution of 30 × 30 meters.
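The thematic change entry above describes comparing two thematic images of the same area. The following sketch cross-tabulates hypothetical class codes to show where classes changed; it is a generic illustration, not the Thematic Change feature itself.

```python
import numpy as np

# Two hypothetical thematic rasters of the same area at different dates
# (class codes: 1 = vegetation, 2 = urban, 3 = water).
before = np.array([[1, 1, 2], [1, 3, 2], [3, 3, 2]])
after = np.array([[1, 2, 2], [2, 3, 2], [3, 1, 2]])

# Cross-tabulate "from" and "to" classes to see where the theme changed.
n_classes = 3
change_matrix = np.zeros((n_classes, n_classes), dtype=int)
for f, t in zip(before.ravel(), after.ravel()):
    change_matrix[f - 1, t - 1] += 1

print(change_matrix)   # rows: class before, columns: class after
```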

theme A particular type of information, such as soil type or land use, that is represented in a layer.
threshold A limit, or cutoff point, usually a maximum allowable amount of error in an analysis. In classification, thresholding is the process of identifying a maximum distance between a pixel and the mean of the signature to which it was classified.
training The process of defining the criteria by which patterns in image data are recognized for the purpose of classification.
training sample A set of pixels selected to represent a potential class. Also called sample.
transformation matrix A set of coefficients that is computed from GCPs, and used in polynomial equations to convert coordinates from one system to another. The size of the matrix depends upon the order of the transformation.
triangulation Establishes the geometry of the camera or sensor relative to objects on the earth's surface.
true color A method of displaying an image (usually from a continuous raster layer) that retains the relationships between data file values and represents multiple bands with separate color guns. The image memory values from each displayed band are translated through the function memory of the corresponding color gun.
unsupervised training A computer-automated method of pattern recognition in which some parameters are specified by the user and are used to uncover statistical patterns that are inherent in the data.
variable 1. A numeric value that is changeable, usually represented with a letter. 2. A thematic layer. 3. One band of a multiband image. 4. In models, objects that have been associated with a name using a declaration statement.
vector data Data that represents physical forms (elements) such as points, lines, and polygons. Only the vertices of vector data are stored, instead of every point that makes up the element.
vegetative indices A gray scale image that clearly highlights vegetation.
zoom The process of expanding displayed pixels on an image so they can be more closely studied. Zooming is similar to magnification, except that it changes the display only temporarily, leaving image memory the same.
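To illustrate the transformation matrix entry above, this sketch solves for the six coefficients of a 1st-order polynomial transformation from a few hypothetical GCPs using NumPy least squares. It is illustrative only and not the software's geocorrection routine.

```python
import numpy as np

# Hypothetical GCPs: source (file) coordinates and reference (map) coordinates.
src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [20.0, 170.0]])
ref = np.array([[500100.0, 4200120.0], [500290.0, 4200115.0],
                [500285.0, 4200280.0], [500105.0, 4200275.0]])

# Solve for the six coefficients of a 1st-order (affine) transformation:
# x' = a0 + a1*x + a2*y,  y' = b0 + b1*x + b2*y
design = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coeff_x, *_ = np.linalg.lstsq(design, ref[:, 0], rcond=None)
coeff_y, *_ = np.linalg.lstsq(design, ref[:, 1], rcond=None)

transformation_matrix = np.vstack([coeff_x, coeff_y])
print(transformation_matrix)   # 2 x 3 matrix; size grows with transformation order
```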


References


This appendix lists references used in the creation of this book.

Akima, H. 1978. "A Method for Bivariate Interpolation and Smooth Surface Fitting for Irregularly Distributed Data Points." ACM Transactions on Mathematical Software, Vol. 4, No. 2: 148-159.
Buchanan, M. D. 1979. "Effective Utilization of Color in Multidimensional Data Presentation." Proceedings of the Society of Photo-Optical Engineers, Vol. 199: 9-19.
Chavez, Pat S., Jr., et al. 1991. "Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic." Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 3: 295-303.
Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California: Conrac Corp.
Daily, Mike. 1983. "Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar Imagery." Photogrammetric Engineering & Remote Sensing, Vol. 49, No. 3: 349-355.
ERDAS 2000. ArcView Image Analysis. Atlanta, Georgia: ERDAS, Inc.
ERDAS 1999. Field Guide. 5th ed. Atlanta, Georgia: ERDAS, Inc.
ESRI 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands, California: ESRI, Inc.
Faust, Nickolas L. 1989. "Image Enhancement." Volume 20, Supplement 5 of Encyclopedia of Computer Science and Technology, edited by Allen Kent and James G. Williams. New York: Marcel Dekker, Inc.
Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing Company.
Holcomb, Derrold W. 1993. "Merging Radar and VIS/IR Imagery." Paper submitted to the 1993 ERIM Conference, Pasadena, California.
Hord, R. Michael. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic Press.


Jensen, John R., et al. 1983. "Urban/Suburban Land Use Analysis." Chapter 30 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Jensen, John R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs, New Jersey: Prentice-Hall.
Kloer, Brian R. 1994. "Hybrid Parametric/Non-parametric Image Classification." Paper presented at the ACSM-ASPRS Annual Convention, April 1994, Reno, Nevada.
Lillesand, Thomas M., and Ralph W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John Wiley & Sons, Inc.
Marble, Duane F. 1990. "Geographic Information Systems: An Overview." Introductory Readings in Geographic Information Systems, edited by Donna J. Peuquet and Duane F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.
McCoy, Jill, and Kevin Johnston. Using ArcGIS Spatial Analyst. Redlands, California: ESRI, Inc.
Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H. Freeman and Co.
Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.
Schowengerdt, Robert A. 1980. "Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content." Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 10: 1325-1334.
Star, Jeffrey, and John Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall.
Swain, Philip H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information Note 111572). West Lafayette, Indiana: The Laboratory for Applications of Remote Sensing, Purdue University.
Swain, Philip H., and Shirley M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw Hill Book Company.
Tou, Julius T., and Rafael C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts: Addison-Wesley Publishing Company.
Tucker, Compton J. 1979. "Red and Photographic Infrared Linear Combinations for Monitoring Vegetation." Remote Sensing of Environment, Vol. 8: 127-150.
Walker, Terri C., and Richard K. Miller. 1990. Geographic Information Systems: An Assessment of Technology, Applications, and Products. Madison, Georgia: SEAI Technical Publications.
Watson, David. 1994. Contouring: A Guide to the Analysis and Display of Spatial Data. New York: Elsevier Science.


Welch, R., and W. Ehlers. 1987. "Merging Multiresolution SPOT HRV and Landsat TM Data." Photogrammetric Engineering & Remote Sensing, Vol. 53, No. 3: 301-303.



Index Index A A priori 183 Absorption spectra 101 Abstract symbol 183 Accuracy assessment 183 Ancillary data 183 Annotation 183 AOI 183 Area 184 Area of interest 184 ASCII 183 Aspect 184 Atmospheric correction 91 Attribute 184 Average 184 AVHRR 102 B Band 184 Bilinear interpolation 184 Bin 87 Bin function 184 Bins 184 Border 184 Boundary 184 brightness inversion 94 Brightness value 184 Brovey Transform 79 Buffer zone 185 C Camera Model tutorial 33 Camera Properties Fiducials 172 Camera properties 185 Camera Properties dialog Cartesian 185 Categorize 185 171 Cell 185 Cell Size 48 Cell Size Tab workflow 51 Checkpoint analysis 170 Class 185 value numbering systems 114 Class value 185 Classification 152. 185 Classification accuracy table 185 Classification scheme 185 Clustering 186 Clusters 186 Coefficient 186 Collinearity 186 Contiguity analysis 186 Continuous 186 Continuous data 186 Contrast stretch for display 85 linear 84 min/max vs. standard deviation nonlinear 84 piecewise linear 84 Convolution 70 filtering 109 Convolution Filtering 70 Convolution filtering 186 Convolution kernel 186 Coordinate system 186 Correlation threshold 186 Correlation windows 186 Corresponding GCPs 187 Covariance 187 Covariance matrix 187 Creating a shapefile tutorial 18 Cubic convolution 187 85 205 .

85. 96 E Edge detector 188 Edge enhancer 188 Effects of order 163 Enhancement 188 linear 84 nonlinear 84 radiometric 83 spatial 83 Extension 188 Extent 47 Extent Tab workflow 51 F Feature collection 188 Feature extraction 188 Feature space 188 Fiducial center 188 Fiducials 188 File coordinates 189 Filtering 189 Finding areas of change 22 Focal 189 Focal Analysis 77 workflow 78 Focal operation 109 206 G GCP matching 189 GCPs 151 General Tab workflow 50 Geocorrection 189 tutorial 33 Geocorrection property dialogs Elevation tab 155 General tab 153 Links tab 154 Geographic information system Georeferencing 150. 102 Landsat 7 180 Landsat Properties 177 Landsat Properties dialog 181 Layer 190 Linear 191 Linear transformation 169. 190 I Identifying similar areas IHS to RGB 99 IKONOS 153 189 Chipping tab 174 IKONOS properties 190 IKONOS Properties dialog 173 Image data 190 Image Difference tutorial 22 Image file 190 Image Info 45 workflow 46 Image matching 190 Image processing 190 Index 101 Indices 190 Information (vs. 187 Data file 187 Data file value 187 display 84 Database 187 Decision rule 187 Digital elevation model 187 Digital terrain model 187 Display device 84.D Data 108. 189 GIS defined 107 Ground control point 189 Ground control points 151 H High frequency kernel 189 High Frequency Kernels 72 High order polynomials 162 Histogram 189 breakpoint 85 Histogram Equalization tutorial 14 Histogram equalization 189 formula 88 Histogram match 91 Histogram matching 190 histogram matching 92 Histogram Stretch tutorial 14 Hue 96. data) 108 Intensity 96 IR 190 Island Polygons 41 ISODATA 190 L Landsat 190 bands and wavelengths 177 MSS 102 TM 99. 191 Linear transformations 161 Lookup table 84 display 85 Lookup table (LUT) 191 M Majority 191 Map projection 191 Maximum likelihood 191 USING IMAGE ANALYSIS FOR ARCGIS 18 .

195 Rectified coordinates 195 Reference coordinates 195 Reference pixels 195 Reference plane 195 Reflection spectra see absorption spectra Reproject 195 Resampling 196 Resolution 196 spatial 91 Resolution Merge 79 workflow 80 Resolution merging 196 RGB 196 RGB clustering 196 RMS error 151. 192 Neighborhood analysis 109.Mean 85. 196 RMSE 35 RPC properties 196 RPC Properties dialog 173. 176 Rubber Sheeting 169 Rubber sheeting 196 S Saturation 96. 191 Median 191 Minimum 191 Minimum distance 192 Minimum GCPs 166 Minority 192 Modeling 192 Mosaicking 192 Mosaicking images tutorial 30 MSS 177 Multispectral classification 192 Multispectral imagery 192 Multispectral scanner (MSS) 192 N Nadir 192 NDVI 192 Nearest neighbor 152. 193 Nonlinear transformations 162 Normalized difference vegetation index 193 O Observation 193 Off-nadir 193 Options dialog 47 Options Dialog workflow 50 Orientation tab 171 Orthorectification 193 tutorial 33 Overlay 193 P Panchromatic imagery 193 Parallelepiped 193 Parameter 193 Parametric 131 Parametric signature 193 Pattern recognition 193 Pixel 194 Pixel depth 194 Pixel size 194 Placing links tutorial 36 Polygon 194 Polynomial 194 Polynomial Properties dialog 168 Polynomial Transformation 161 Preference Tab 51 Preferences 49 Principal components analysis (PCA) 194 Profile 194 Pushbroom 194 Q QuickBird 194 QuickBird Properties 176 QuickBird Properties dialog R Radar data 194 Radiometric correction 195 Radiometric enhancement 195 Radiometric resolution 195 Raster data 195 Recode 114 Recoding 195 Rectification 150. 192 density 109 diversity 109 majority 109 maximum 109 minimum 109 minority 109 rank 109 sum 109 NITF 176 NoData Value 45 Non-directional 192 Non-Directional Edge 75 workflow 76 Nonlinear transformation 170. 196 Scale 196 Scanner 196 Scanning window 109 Seed Radius 40 workflow 44 Seed Tool 18 207 173 INDEX .

199 True color 199 tutorial 18 U Unsupervised Classification tutorial 25 Unsupervised training 199 V Variable 199 Vector data 199 Vegetative indices Z Zero Sum Kernels Zoom 199 12 199 72 USING IMAGE ANALYSIS FOR ARCGIS . 199 Triangle-based finite element analysis 169 Triangle-based rectification 169 Triangulation 169.controlling 40 workflow 42 Seed Tool Properties 40 Shadow enhancing 84 Shapefile 196 Signature 196 Source coordinates 197 Spatial Enhancement 69 Spatial enhancement 197 Spatial frequency 197 Spatial resolution 197 Speckle noise 197 Spectral distance 197 Spectral enhancement 197 Spectral resolution 197 Spectral space 197 SPOT 197 panchromatic 99 XS 102 Spot 158 Panchromatic 158 XS 158 Spot 4 159 Spot Properties dialog 160 Standard deviation 85. 197 Starting Image Analysis for ArcGIS Stereoscopic pairs 159 Striping 197 Subsetting 198 Summarize areas 198 Supervised training 198 Swath width 198 T Temporal resolution 198 Terrain analysis 198 Terrain data 198 Thematic Change 208 tutorial 24 Thematic data 198 Thematic files 152 Thematic map 198 Thematic mapper (TM) 198 Theme 198 Threshold 199 TM 177 TM data 179 Training 199 Training sample 199 Transformation matrix 161.
