The information contained in this document is the exclusive property of Leica Geosystems GIS & Mapping, LLC. This work is protected under United States copyright law
and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying and recording, or by any information storage or retrieval system, except as expressly permitted in writing by Leica Geosystems GIS & Mapping, LLC. All
requests should be sent to Attention: Manager of Technical Documentation, Leica Geosystems GIS & Mapping, LLC, 2801 Buford Highway NE, Suite 400, Atlanta, GA,
30329-2137, USA.
CONTRIBUTORS
Contributors to this book and the On-line Help for Image Analysis for ArcGIS include: Christine Beaudoin, Jay Pongonis, Kris Curry, Lori Zastrow, Mladen Stojić, and
Cheryl Brantley of Leica Geosystems GIS & Mapping, LLC.
Any software, documentation, and/or data delivered hereunder is subject to the terms of the License Agreement. In no event shall the U.S. Government acquire greater than
RESTRICTED/LIMITED RIGHTS. At minimum, use, duplication, or disclosure by the U.S. Government is subject to restrictions set forth in FAR §52.227-14 Alternates I,
II, and III (JUN 1987); FAR §52.227-19 (JUN 1987), and/or FAR §12.211/12.212 (Commercial Technical Data/Computer Software); and DFARS §252.227-7015 (NOV
1995) (Technical Data) and/or DFARS §227.7202 (Computer Software), as applicable. Contractor/Manufacturer is Leica Geosystems GIS & Mapping, LLC, 2801 Buford
Highway NE, Suite 400, Atlanta, GA, 30329-2137, USA.
ERDAS, ERDAS IMAGINE, and IMAGINE OrthoBASE are registered trademarks. Image Analysis for ArcGIS is a trademark.
ERDAS® is a wholly owned subsidiary of Leica Geosystems GIS & Mapping, LLC.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective trademark owners.
Contents
Getting started
1 Introducing Image Analysis for ArcGIS 3
Learning about Image Analysis for ArcGIS 10
2 Quick-start tutorial 11
Exercise 1: Starting Image Analysis for ArcGIS 12
Exercise 2: Adding images and applying Histogram Stretch 14
Exercise 3: Identifying similar areas in an image 18
Exercise 4: Finding areas of change 22
Exercise 5: Mosaicking images 30
Exercise 6: Orthorectification of camera imagery 33
What’s Next? 38
Glossary 183
References 201
Index 205
CONTENTS V
VI USING IMAGE ANALYSIS FOR ARCGIS
Foreword
The data in a GIS needs to reflect reality, and snapshots of reality need to be
incorporated and accurately transformed into instantaneously ready, easy-to-use
information. From snapshots to digital reality, images are pivotal in creating and
maintaining the information infrastructure used by today’s society. Today’s
geographic information systems have been carefully created with features,
attributed behavior, analyzed relationships, and modeled processes.
There are five essential questions that any GIS needs to answer: Where, What,
When, Why, and How. Uncovering Why, When, and How are all done within the
GIS; images allow you to extract the Where and What. Precisely where is that
building? What is that parcel of land used for? What type of tree is that? The new
extensions developed by Leica Geosystems GIS and Mapping, LLC use imagery
to allow you to accurately address the questions Where and What, so you can then
derive answers for the other three.
But our earth is changing! Urban growth, suburban sprawl, industrial usage and
natural phenomena continually alter our geography. As our geography changes, so
does the information we need to understand it. Because an
image is a permanent record of features, behavior,
relationships, and processes captured at a specific moment in
time, using a series of images of the same area taken over
time allows you to more accurately model and analyze the
relationships and processes that are important to our earth.
Sincerely,
Mladen Stojić
Product Manager
Leica Geosystems GIS & Mapping, LLC
Section 1

Introducing Image Analysis for ArcGIS
IN THIS CHAPTER

• Updating a database
• Categorizing land cover and characterizing sites
• Identifying and summarizing natural hazard damage
• Identifying and monitoring urban growth and changes
• Extracting features automatically
• Assessing vegetation stress
• Orthorectifying an image

Image Analysis for ArcGIS™ is primarily designed for natural resource and infrastructure management. The extension is very useful in the fields of forestry, agriculture, environmental assessment, engineering, and infrastructure projects such as facility siting and corridor monitoring, and general geographic database update and maintenance.

Today, imagery of the earth's surface is an integral part of desktop mapping and GIS, and it's more important than ever to have the ability to provide realistic backdrops to geographic databases and to be able to quickly update details involving street use or land use data.

Image Analysis for ArcGIS gives you the ability to perform many tasks:

• Import and incorporate raster imagery into ArcGIS.
• Categorize images into classes corresponding to land cover types such as vegetation.
• Evaluate images captured at different times to identify areas of change.
• Identify and automatically map a land cover type with a single click.
• Find areas of dense and thriving vegetation in an image.
• Enhance the appearance of an image by adjusting contrast and brightness or by applying histogram stretches.
• Align an image to a map coordinate system for precise area location.
• Rectify satellite images through Geocorrection Models.
Updating databases
There are many kinds of imagery to choose from, in a wide range of scales, spatial and spectral resolutions, and map accuracies. Aerial photography is often the choice for map updating because of its high precision. With Image Analysis for ArcGIS you are able to use imagery to identify changes and make revisions and corrections to your geographic database.
Categorizing land cover and characterizing sites

Transmission towers for radio-based telecommunications must all be visible from each other, must be within a certain range of elevations, and must avoid fragile areas like wetlands. With Image Analysis for ArcGIS, you can categorize images into land cover classes to help identify suitable locations. You can use imagery and analysis techniques to identify wetlands and other environmentally sensitive areas.

The Classification features enable you to divide an image into many different classes, and then highlight them as you wish. In this case the areas not suitable for tower placement are highlighted, and the towers can be sited appropriately.
Identifying and summarizing natural hazard damage

When viewing a forest hit by a hurricane, you can use the mapping tools of Image Analysis for ArcGIS to show where the damage occurred. With other ArcGIS tools, you can show the condition of the vegetation, how much stress it suffers, and how much damage it sustained in the hurricane.

Below, Landsat images taken before and after the hurricane, in conjunction with a shapefile that identifies the forest boundary, are used for comparison. Within the shapefile, you can see detailed tree stand inventory and management information.

The upper two pictures show the area in 1987 and in 1989 after Hurricane Hugo. The lower image features the shapefile.
Identifying and monitoring urban growth and changes

Cities grow over time, and images give a good sense of how they grow, and how remaining land can be preserved by managing that growth. You can use Image Analysis for ArcGIS to reveal patterns of urban growth over time.

Here, Landsat data spanning 21 years was analyzed for urban growth. The final view shows the differences in extent of urban land use and land cover between 1973 and 1994. Those differences are represented as classes. The yellow urban areas from 1994 represent how much the city has grown beyond the red urban areas from 1973.

The top two images represent urban areas in red, first in 1973 and then in 1994. The bottom image shows the actual growth.
Suppose you are responsible for mapping the extent of an oil spill as part of a rapid response effort. You can use synthetic aperture radar (SAR) data and Image Analysis for ArcGIS tools to identify and map the extent of such environmental hazards.

The following images show an oil spill off the northern coast of Spain. The first image shows the spill, and the second image gives you an example of how you can isolate the exact extent of a particular pattern using Image Analysis for ArcGIS.

Images depicting an oil spill off the coast of Spain and a polygon grown in the spill using the Seed Tool.
Assessing vegetation stress

Crops experience different stresses throughout the growing season. You can use multispectral imagery and analysis tools to identify and monitor a crop's health.

In these images, the Vegetative Indices function is used to see crop stress. The stressed areas are then automatically digitized and saved as a shapefile. This kind of information can be used to help identify sources of variability in growth patterns. Then, you can quickly update crop management plans.
Exercise 1: Starting Image Analysis for ArcGIS
In the following exercises, we've assumed that you are using a single monitor or dual monitor workstation that is configured for use with ArcMap and Image Analysis for ArcGIS. That being the case, you will be led through a series of tutorials in this chapter to help acquaint you with Image Analysis for ArcGIS and further show you some of the abilities of Image Analysis for ArcGIS.

In this exercise, you'll learn how to start Image Analysis for ArcGIS and activate the toolbar associated with it. You will be able to gain access to all the important Image Analysis for ArcGIS features through its toolbar and menu list. After completing this exercise, you'll be able to locate any Image Analysis for ArcGIS tool you need for preparation, enhancement, analysis, or geocorrection.

This exercise assumes you have already successfully completed installation of Image Analysis for ArcGIS on your computer. If you have not installed Image Analysis for ArcGIS, refer to the installation guide packaged with the Image Analysis for ArcGIS CD, and install now.

Starting Image Analysis for ArcGIS

1. Click the Start button on your desktop, then click Programs, and point to ArcGIS.
2. Click ArcMap to start the application.

Adding the Image Analysis for ArcGIS extension

1. If the ArcMap dialog opens, keep the option to create a new empty map, then click OK.
2. Click the Tools menu, and click Extensions to open the Extensions dialog.
3. Check the Image Analysis Extension check box.

Once the Image Analysis Extension check box has been selected, the extension is activated.

4. Click Close in the Extensions dialog.

Adding toolbars

1. Click the View menu, then point to Toolbars, and click Image Analysis to add that toolbar to the ArcMap window.

The Image Analysis toolbar is your gateway to many of the tools and features you can use with the extension. From the Image Analysis toolbar you can choose many different analysis types from the menu, choose a geocorrection type, and set links in an image.
QUICK-START TUTORIAL 13
Exercise 2: Adding images and applying Histogram Stretch
Image data, displayed without any contrast manipulation, may appear either too light or too dark, making it difficult to begin your analysis. Image Analysis for ArcGIS allows you to display the same data in many different ways. For example, changing the distribution of pixels allows you to alter the brightness and contrast of the image. This is called histogram stretching. Histogram stretching enables you to manipulate the display of data to make your image easier to visually interpret and evaluate.

Add an Image Analysis for ArcGIS theme of Moscow

1. Open a new view. If you are starting this exercise immediately after Exercise 1, you should have a new, empty view ready.
2. Click the Add Data button.
3. In the Add Data dialog, navigate to the example data directory, ArcGIS\ArcTutor\ImageAnalysis, and select moscow_spot.tif.
4. Click Add to display the image in the view.

The image Moscow_spot.tif appears in the view.

Apply a Histogram Equalization

Standard deviations is the default histogram stretch applied to images by Image Analysis for ArcGIS. You can apply histogram equalization to redistribute the data so that each display value has roughly the same number of data points. More information about histogram equalization can be found in chapter 6, "Using Radiometric Enhancement".

1. Select moscow_spot.tif in the Table of contents, right-click your mouse, and select Properties to bring up Layer Properties.
2. Click the Symbology tab and under Show, select RGB Composite.
3. Check the Bands order and click the dropdown arrows to change any of the Bands.
4. Click the dropdown arrow and select Histogram Equalize as the Stretch Type.
5. Click Apply and OK.
6. Click the Image Analysis menu dropdown arrow, point to Radiometric Enhancement, and click Histogram Equalization.

You can go to the Options dialog, accessible from the Image Analysis toolbar, and enter the working directory you want to use on the General tab of the dialog. This step will save you time by automatically bringing up your working directory whenever you click the browse button to navigate to it in order to store an output image.
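Histogram equalization can be sketched in a few lines of pure Python: build the histogram, accumulate it into a cumulative distribution, and remap each pixel through the scaled CDF so display values are spread evenly. This is an illustrative reimplementation for 8-bit data, not the extension's actual code; the function name is ours.

```python
def histogram_equalize(pixels, levels=256):
    """Remap pixel values (0..levels-1) so display values are spread
    evenly across the cumulative distribution of the input."""
    n = len(pixels)
    # Build the histogram of input values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution, scaled back to the 0..levels-1 range.
    cdf, total = [0] * levels, 0
    for v in range(levels):
        total += hist[v]
        cdf[v] = round((total / n) * (levels - 1))
    return [cdf[p] for p in pixels]

# A dark image: values clustered at the low end get stretched apart.
dark = [10, 10, 12, 12, 14, 200]
print(histogram_equalize(dark))
```

Notice that values crowded near the bottom of the range are pushed apart, which is exactly why an equalized image is easier to interpret visually.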
2. If you want to see the histograms for the image, click the Histograms button located in the Stretch box.
3. Check the Invert box.
Exercise 3: Identifying similar areas in an image
With Image Analysis for ArcGIS you can quickly identify
areas with similar characteristics. This is useful for
identification of environmental disasters or burn areas. Once
an area has been defined, it can also be quickly saved into a
shapefile. This action lets you avoid the need for manual
digitizing. To define the area, you use the Seed Tool to point
to an area of interest such as a dark area on an image
depicting an oil spill. The Seed Tool returns a graphic
polygon outlining areas with similar characteristics.
Add and draw an Image Analysis for ArcGIS theme depicting an oil spill

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. In the Add Data dialog, select radar_oilspill.img, and click Add to draw it in the view.

This is a radar image showing an oil spill off the northern coast of Spain.

Create a shapefile

In this exercise, you use the Seed Tool (also called the Region Growing Tool). The Seed Tool grows a polygon graphic in the image that encompasses all similar and contiguous areas. In order to use the Seed Tool, you will first need to create a shapefile in ArcCatalog and start editing in order to enable the Seed Tool. After going through these steps, you can point and click inside the area you want to highlight, in this case an oil spill, and create a polygon. The polygon enables you to see how much of an area the oil spill covers.

1. Click the Zoom In tool, and drag a rectangle around the black area to see the spill more clearly.
7. Click the Seed Tool and click a point in the center of the
oil spill. The Seed Tool will take a few moments to
produce the polygon.
Exercise 4: Finding areas of change
The Image Analysis for ArcGIS extension allows you to see changes over time. You can perform this type of analysis on either continuous data using Image Difference or thematic data using Thematic Change. In this exercise, you'll learn how to use Image Difference and Thematic Change. Image Difference is useful for analyzing images of the same area to identify land cover features that may have changed over time. Image Difference performs a subtraction of one theme from another. The change is highlighted in green and red masks depicting increasing and decreasing values.

Find changed areas

In the following example, you are going to work with two continuous data images of the north metropolitan Atlanta, Georgia, area—one from 1987 and one from 1992. Continuous data images are those obtained from remote sensors like Landsat and SPOT. This kind of data measures reflectance characteristics of the earth's surface, analogous to exposed film capturing an image. You will use Image Difference to identify areas that have been cleared of vegetation for the purpose of constructing a large regional shopping mall.

Add and draw the images of Atlanta

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. Press the Shift or Ctrl key, and click atl_spotp_87.img and atl_spotp_92.img in the Add Data dialog.
4. Click OK.

With both images active in the view, you can calculate the difference between them.

Compute the difference due to development

1. Click the Image Analysis dropdown arrow, click Utilities, and click Image Difference.
Image Difference calculates the difference in pixel values. With the 15 percent parameter you set, Image Difference finds areas whose values have increased by at least 15 percent (designating clearing) and highlights them in green. It also finds areas whose values have decreased by at least 15 percent (designating an area that has increased vegetation, or an area that was once dry but is now wet) and highlights them in red.
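The subtraction and percent thresholds can be sketched as follows. This is an illustrative reimplementation, not the extension's code, and it reads the 15 percent parameter as relative to each pixel's before value, which is one plausible interpretation.

```python
def image_difference(before, after, percent=15.0):
    """Subtract the Before theme from the After theme, flagging pixels
    whose value rose or fell by at least `percent` of the before value."""
    diff, mask = [], []
    for b, a in zip(before, after):
        d = a - b
        threshold = abs(b) * percent / 100.0
        if d > 0 and d >= threshold:
            mask.append("increase")   # highlighted in green (e.g. clearing)
        elif d < 0 and -d >= threshold:
            mask.append("decrease")   # highlighted in red
        else:
            mask.append(None)         # change below the threshold
        diff.append(d)
    return diff, mask

# A pixel that brightened by 20% is flagged; a 10% drop is not.
print(image_difference([100, 100, 100], [120, 90, 100]))
```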
Highlight Change shows the difference in red and green areas.

10. In the Table of contents, click the check box to turn off Highlight Change, and check Image Difference to display it in the view.

The Image Difference image shows the results of the subtraction of the Before Theme from the After Theme.

Close the view

You can now clear the view and either go to the next portion of this exercise, Thematic Change, or end the session by closing ArcMap. If you want to shut down ArcMap with Image Analysis for ArcGIS, click the File menu, and click Exit. Click No when asked to save changes.

Using Thematic Change

Image Analysis for ArcGIS provides the Thematic Change feature to make comparisons between thematic data images. Thematic Change creates a theme that shows all possible combinations of change and how an area's land cover class changed over time. Thematic Change is similar to Image Difference in that it computes changes between the same area at different points in time. However, Thematic Change can only be used with thematic data (data that is classified into distinct categories). An example of thematic data is a vegetation class map.

This next example uses two images of an area near Hagan Landing, South Carolina. The images were taken in 1987 and 1989, before and after Hurricane Hugo. Suppose you are the forest manager for a paper company that owns a parcel of land in the hurricane's path. With Image Analysis for ArcGIS, you can see exactly how much of your forested land has been destroyed by the storm.

This view shows an area damaged by Hurricane Hugo.
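The "all possible combinations" theme that Thematic Change produces can be modeled as labeling each pixel with its (was, is now) class pair. This is a sketch: the class numbers and names follow the exercise, but the encoding is illustrative, not the extension's.

```python
def thematic_change(before, after, classes):
    """Label every pixel with its (was, is-now) class combination."""
    return [f"was: {classes[b]}, is now: {classes[a]}"
            for b, a in zip(before, after)]

classes = {1: "Water", 2: "Forest", 3: "Bare Soil"}
before = [2, 2, 1]   # mostly Forest before the storm
after  = [3, 2, 1]   # some Forest is now Bare Soil
print(thematic_change(before, after, classes))
```

With k classes there are k × k possible combinations, which is why Thematic Change requires thematic (categorical) data rather than continuous data.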
Using Unsupervised Classification to categorize continuous images into thematic classes is particularly useful when you are unfamiliar with the data that makes up your image. You simply designate the number of classes you would like the data divided into, and Image Analysis for ArcGIS performs a calculation assigning pixels to classes depending on their values. By using Unsupervised Classification, you may be better able to quantify areas of different land cover in your image. You can then assign the classes names like water, forest, and bare soil.

7. Click the check box of tm_oct87.img so the original theme is not drawn in the view. This step makes the remaining themes draw faster in the view.

4. Select Class 001, and double-click Class 001 under Class_names. Type the name Water.
5. Double-click the color bar under Symbol for Class 001, and choose blue from the color palette.
6. Select Class 002, and double-click Class 002 under Class_names. Type the name Forest.
7. Double-click the color bar under Symbol for Class 002, and choose green.
8. Select Class 003, and double-click Class 003 under Class_names. Type the name Bare Soil.
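The calculation that assigns pixels to a designated number of classes is a clustering of pixel values. A toy one-dimensional k-means sketch illustrates the idea; the extension's actual unsupervised classifier is not documented here, so this is only a stand-in.

```python
def kmeans_1d(values, k, iterations=10):
    """Assign each pixel value to one of k classes by repeatedly moving
    class centers to the mean of the pixels assigned to them."""
    # Spread initial centers across the sorted values.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    while len(centers) < k:
        centers.append(centers[-1] + 1)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Empty groups keep their previous center.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]

# Two well-separated brightness clusters fall into two classes.
print(kmeans_1d([0, 1, 100, 101], 2))
```

The class labels are arbitrary numbers, which is why the exercise has you rename them to Water, Forest, and Bare Soil afterwards.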
Now do the same thing and perform a recode on the other classified image you did of the Hugo area. Both of the images will have your class names and colors permanently saved.

Use Thematic Change to see how land cover changed because of Hugo

1. Make sure both recoded images are checked in the Table of contents so both will be active in the view.
2. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Thematic Change.
3. Click the Before Theme dropdown arrow and select the 87 classification image.
4. Click the After Theme dropdown arrow and select the 89 classification image.
5. Navigate to the directory where you want to store the Output Image, type the file name, and click Save.
6. Click OK.
7. Click the check box of Thematic Change to draw it in the view.
8. Double-click the Thematic Change title to access Layer Properties.
9. In the Symbology tab, double-click the symbol for was: Class 002, is now: Class 003 (was Forest, is now Bare Soil) to access the color palette.
10. Click the color red in the color palette, and click Apply. You don't have to choose red; you can use any color you like.
11. Click OK.

You can see the amount of destruction in red. The red shows what was forest and is now bare soil.

Using Thematic Change, the overall damage caused by the hurricane is clear. Next, you will want to see how much damage actually occurred on the paper company's land.

1. Click Add Data.
2. Select property.shp, and click Add.

Thematic Change image with the property shapefile

Make the property transparent

1. Double-click on the property theme to access Layer Properties.
2. Click the Symbology tab, and double-click the color symbol.
3. In the Symbol Selector, click the Hollow symbol.
4. Click the Outline Width arrows, or type the number 3 in the box.
5. Click the Outline Color dropdown arrow, and choose a color that will easily stand out to show your property line.
6. Click OK.
7. Click Apply and OK on the Symbology tab.

The yellow outline clearly shows the devastation within the paper company's property boundaries.
Exercise 5: Mosaicking images
Image Analysis for ArcGIS allows you to mosaic multiple images. When you mosaic images, you join them together to form one single image that covers the entire area. To mosaic images, simply display them in the view, ensure that they have the same number of bands, then select Mosaic.

In the following exercise, you are going to mosaic two airphotos with the same resolution.

Add and draw the images

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension with a new map.
2. Click the Add Data button.
3. Press the Shift key and select Airphoto1.img and Airphoto2.img in the Add Data dialog. Click Add.
4. Click Airphoto1.img and drag it so that it is at the top of the Table of contents.

The two airphotos display in the view. The Mosaic tool joins them as they appear in the view: whichever is on top is also on top in the mosaicked image.

Zoom in to see image details

1. Select Airphoto1.img, and right-click your mouse.
2. Click Zoom to raster resolution.

The two images are displayed at a 1:1 resolution. You can now use Pan to see how they overlap.

3. Click the Pan button, then maneuver the images in the view.
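The stacking rule above, whichever image is on top in the Table of contents wins, can be sketched with two small grids. The NODATA convention used here is an assumption for illustration, not the extension's.

```python
NODATA = None  # illustrative no-data marker for pixels outside an image

def mosaic(top, bottom):
    """Join two overlapping grids: wherever the top image has data,
    it covers the bottom image, matching the view's drawing order."""
    return [
        [t if t is not NODATA else b for t, b in zip(trow, brow)]
        for trow, brow in zip(top, bottom)
    ]

airphoto1 = [[7, 7, NODATA]]          # top of the Table of contents
airphoto2 = [[1, 2, 3]]
print(mosaic(airphoto1, airphoto2))   # [[7, 7, 3]]
```

This is why the exercise has you drag Airphoto1.img to the top of the Table of contents first: the drawing order decides which pixels survive in the overlap.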
4. If you want to automatically crop your images, check the box, and use the arrows or type the percentage by which to crop the images.

The Mosaic function joins the two images as they appear in the view. In this case Airphoto1 is mosaicked over Airphoto2.

The images are drawn in the view. You can see the fiducial markings around the edges and at the top.

3. Click the Coordinate System tab.
4. In the box labeled Select a coordinate system, click Predefined.
5. Click Projected Coordinate Systems, and then click Utm.
6. Click NAD 1927, then click NAD 1927 UTM Zone 11N.
7. Click Apply, and click OK.

Orthorectifying your image using Geocorrection Properties

1. Click the Model Types dropdown arrow, and click Camera.
2. Click the Geocorrection Properties button on the toolbar to open the Camera dialog.

4. Navigate to the ArcGIS ArcTutor directory, and choose ps_dem.img as the Elevation File.
5. Click the Elevation Units dropdown arrow and select Meters.
6. Check Account for Earth's curvature.
After placing fiducials, both the image and the shapefile are shown in the view for rectification.

Placing links

1. Click the Add Links button.
2. Looking closely at the image and shapefile in the view, and using the next image as a guide, line up where you should place the first link. Follow the markers in the next image to place the first three links. You will need to click the crosshair on the point in the image first and then drag the cursor over to the point in the shapefile where you want to click.

Your first link should look approximately like this:

Now take a look at the RMS Error on the Links tab of Camera Properties. You can go to Save As on the Image Analysis menu and save the image if you wish.
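RMS Error summarizes how far each transformed link point lands from its reference point: the root-mean-square of the link residuals. A sketch of the usual computation; this is illustrative, not the dialog's code.

```python
from math import hypot, sqrt

def rms_error(residuals):
    """residuals: (dx, dy) offsets between each transformed image point
    and its reference (shapefile) point, in map units."""
    if not residuals:
        return 0.0
    return sqrt(sum(hypot(dx, dy) ** 2 for dx, dy in residuals)
                / len(residuals))

# Three links with small leftover offsets after fitting the model:
print(round(rms_error([(3, 4), (0, 0), (0, 5)]), 3))  # 4.082
```

A lower RMS Error means the model fits your links more closely; one link with a much larger residual than the rest is usually a misplaced link worth re-checking.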
What’s Next?
This tutorial has introduced you to some features and basic
functions of Image Analysis for ArcGIS. The following
chapters go into greater detail about the different tools and
elements of Image Analysis for ArcGIS, and include
instructions on how to use them to your advantage.
Using Seed Tool Properties

As stated in the opening of the chapter, the main function of Seed Tool Properties is to automatically generate feature layer polygons of similar spectral value. After creating a shapefile in ArcCatalog, you can either click in an image on a single point, or you can click and drag a rectangle in a portion of the image that interests you. You can decide which method you wish to use before clicking the tool on the toolbar, or you can experiment with which method looks best with your data.

In order to use the Seed Tool, you must first create the shapefile for the image you are using in ArcCatalog. You will need to open ArcCatalog, create a new shapefile in the directory you want to use, name it, choose polygon as the type of shapefile, and then use Start Editing on the Editor toolbar in ArcMap to activate the Seed Tool. Once you are finished and you have grown the polygon, you can go back to the Editor toolbar and select Stop Editing.

The band or bands used in growing the polygon are controlled by the current visible bands as set in Layer Properties. If you have only one band displayed, such as the red band when you are interested in vegetation analysis, then the Seed Tool only looks at the statistics of that band to create the polygon. If you have all the bands (red, green, and blue) displayed, then the Seed Tool evaluates the statistics in each band of data before creating the polygon.

When a polygon shapefile is being edited, a polygon defined using the Seed Tool is added to the shapefile. Like other ArcGIS graphics, you can change the appearance of the polygon produced by the Seed Tool using the Graphics tools.

Controlling the Seed Tool

You can use the Seed Tool simply by choosing it from the Image Analysis toolbar and clicking on an image after generating a shapefile. The defaults usually produce a good result. However, if you want more control over the parameters of the Seed Tool, you can open Seed Tool Properties from the Image Analysis menu.

Seed Tool dialog

Seed Radius

When you use the simple click method, the Seed Tool is controlled by the Seed Radius. You can change the number of pixels of the Seed Radius by opening the dialog from the Image Analysis menu. From this dialog, you select your Seed Radius in pixels. The Image Analysis for ArcGIS default Seed Radius is 5 pixels.

The Seed Radius determines how selective the Seed Tool is when selecting contiguous pixels. A larger Seed Radius includes more pixels to calculate the range of pixel values used to grow the polygon, and typically produces a larger polygon. A smaller Seed Radius uses fewer pixels to determine the range. Setting the Seed Radius to 0.5 or less restricts the polygon to growing over pixels with the exact same value as the pixel you click on in the image. This can be useful for thematic images in which a contiguous area might have a single pixel value, instead of a range of values like continuous data.
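The behavior described above, a value range derived from pixels around the seed, then growth over contiguous pixels within that range, can be sketched as a flood fill. This is an illustrative reimplementation, not the extension's algorithm, and it works on a single band.

```python
def grow_seed(grid, seed_row, seed_col, radius=5):
    """Grow a region from a seed pixel: pixels within `radius` of the
    seed set the acceptable value range; contiguous pixels whose values
    fall in that range join the region. radius <= 0.5 means exact match."""
    rows, cols = len(grid), len(grid[0])
    seed_val = grid[seed_row][seed_col]
    if radius <= 0.5:
        lo = hi = seed_val  # thematic case: exact-value growth only
    else:
        r = int(radius)
        window = [grid[i][j]
                  for i in range(max(0, seed_row - r), min(rows, seed_row + r + 1))
                  for j in range(max(0, seed_col - r), min(cols, seed_col + r + 1))]
        lo, hi = min(window), max(window)
    region, stack = set(), [(seed_row, seed_col)]
    while stack:  # 4-connected flood fill over the accepted range
        i, j = stack.pop()
        if (i, j) in region or not (0 <= i < rows and 0 <= j < cols):
            continue
        if lo <= grid[i][j] <= hi:
            region.add((i, j))
            stack += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return region

dark_spill = [[5, 5, 9],
              [5, 9, 9],
              [9, 9, 9]]
print(grow_seed(dark_spill, 0, 0, radius=0.5))  # only the connected 5s
```

A larger radius widens the lo..hi range and therefore grows a larger polygon, matching the Seed Radius behavior described above.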
After growing the polygon in the image with the Seed Tool, go back to the Editor toolbar, click the dropdown arrow, and click Stop Editing.
The Cell Size field will display in either meters or feet. To choose
one, click View in ArcMap, click Data Frame Properties, and on the
General Tab, click the dropdown arrow for Map Units and choose
either Feet or Meters.
Preferences

The following processes will take you through the parts you can change on the Options dialog.

The General Tab
Section 2

Using Data Preparation

IN THIS CHAPTER

• Create New Image
• Subset Image

When using the Image Analysis for ArcGIS extension, it is sometimes necessary to prepare your data first. It is important to understand how to prepare your data before moving on to the different ways Image Analysis for ArcGIS gives you to manipulate your data. You are given several options for preparing data in Image Analysis for ArcGIS.
Create New Image

The Create New Image function makes it easy to create a new image file. It also allows you to define the size and content of the file, as well as to choose whether the new image type will be thematic or continuous.

Choose thematic for raster layers that contain qualitative and categorical information about an area. Thematic layers lend themselves to applications in which categories or themes are used. They are used to represent data measured on a nominal or ordinal scale, such as soils, land use, land cover, and roads.

Data Type        Minimum Value   Maximum Value
Unsigned 1 bit   0               1
Unsigned 2 bit   0               3
Unsigned 4 bit   0               15

The Initial Value lets you choose the number to initialize the new file. Every cell is given this value.

When you are finished entering your information into the fields, you can click OK to create the image, or Cancel to close the dialog.
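The Minimum and Maximum values in the table follow directly from the bit depth: an unsigned n-bit cell can hold 2^n distinct values, 0 through 2^n − 1. A quick check (the 8-bit row is added here for comparison; it is not in the table above):

```python
def unsigned_range(bits):
    """Value range of an unsigned integer raster cell of `bits` bits."""
    return 0, 2 ** bits - 1

for bits in (1, 2, 4, 8):
    print(f"Unsigned {bits} bit: {unsigned_range(bits)}")
```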
In order to specify the particular area to subset, you click the Zoom
In tool, draw a rectangle over the area, open the options dialog, and
select Same As Display on the Extent tab. The rectangle is defined
by Top, Left, Bottom, and Right coordinates. Top and Bottom are
measured as the locations on the Y-axis and the Left and Right
coordinates are measured on the X-axis. You can then save the
subset image and work from there on your analysis.
The Pentagon subset image after setting the Analysis Extent in Options
1. Click Add Data, and add the image you want to reproject to the view.
2. Right-click in the view, and click on Properties to bring up the Data Frame Properties dialog.
3. Click on the Coordinate System tab.
4. Click Predefined and choose whatever coordinate system you want to use to reproject the image.
5. Click Apply and OK.
• zero spatial frequency — a flat image, in which every pixel has the same value
• low spatial frequency — an image consisting of a smoothly varying gray scale
• high spatial frequency — an image consisting of drastically changing pixel values, such as a checkerboard of black and white pixels

The Spatial Enhancement feature lets you use convolution, non-directional edge, focal analysis, and resolution merge to enhance your images. Depending on what you need to do to your image, you will select one feature from the Spatial Enhancement menu. This chapter will focus on the explanation of these features as well as how to apply them to your data.

This chapter is organized according to the order in which the Spatial Enhancement tools appear. You may want to skip ahead if the information you are seeking is about one of the tools near the end of the menu list.
Convolution

Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen 1996).

Convolution formula

The following formula is used to derive an output data file value for the pixel being convolved (in the center):

    V = ( Σ(i=1 to q) Σ(j=1 to q) fij × dij ) / F

Where:
    fij = the coefficient of the convolution kernel at position i,j (in the kernel)
    dij = the data value of the pixel that corresponds to fij
    q = the dimension of the kernel, assuming a square kernel
    F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is zero
    V = the output pixel value
Zero sum kernels

Zero sum kernels are kernels in which the sum of all coefficients in the kernel equals zero. When a zero sum kernel is used, the sum of the coefficients is not used in the convolution equation, as above. In this case, no division is performed (F = 1), since division by zero is not defined.

This generally causes the output values to be:

• zero in areas where all input values are equal (no edges)
• low in areas of low spatial frequency
• extreme in areas of high spatial frequency (high values become much higher, low values become much lower)

Therefore, a zero sum kernel is an edge detector, which usually smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is at the edges between homogeneous (homogeneity is low spatial frequency) groups of pixels. The resulting image often consists of only edges and zeros. An example of a zero sum kernel is:

    1  -2   1

High frequency kernels

A high frequency kernel, or high pass kernel, has the effect of increasing spatial frequency.

High frequency kernels serve as edge enhancers, since they bring out the edges between homogeneous groups of pixels. Unlike edge detectors (such as zero sum kernels), they highlight edges and do not necessarily eliminate other features. An example of a high frequency kernel is:

    -1  -1  -1
    -1  16  -1
    -1  -1  -1

When a high frequency kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, the low value gets lower. Inversely, when the high frequency kernel is used on a set of pixels in which a relatively high value is surrounded by lower values, the high value gets higher:

    64  60  57         -    -   -
    61 125  69    →    -  188   -
    58  60  70         -    -   -

Low frequency kernels

An example of a low frequency kernel is:

    1  1  1
    1  1  1
    1  1  1

This kernel simply averages the values of the pixels, causing them to be more homogeneous. The resulting image looks either more smooth or more blurred.
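The convolution formula and the zero-sum rule (F = 1 when the kernel coefficients sum to zero) can be sketched in a few lines of numpy. The convolve helper below is an illustration, not the tool's actual implementation; it uses reflection edge handling:

```python
import numpy as np

def convolve(data, kernel):
    """Convolution per the formula above: V = (sum of f_ij * d_ij) / F,
    with F = 1 for zero sum kernels (no division is performed)."""
    k = kernel.shape[0]
    pad = k // 2
    F = kernel.sum()
    if F == 0:          # zero sum kernel: division by zero is not defined
        F = 1
    padded = np.pad(data, pad, mode="reflect")  # 'Reflection' edge handling
    out = np.zeros(data.shape, dtype=float)
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            out[i, j] = (kernel * padded[i:i + k, j:j + k]).sum() / F
    return out

# The high frequency kernel and data from the example above.
kernel = np.array([[-1, -1, -1], [-1, 16, -1], [-1, -1, -1]])
data = np.array([[64, 60, 57], [61, 125, 69], [58, 60, 70]])
center = convolve(data, kernel)[1, 1]  # (16*125 - 499) / 8 = 187.625
```

Rounded to an integer data file value, the center pixel becomes 188, matching the worked example.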
Applying Convolution

Reflection fills in the area beyond the edge of the image with a reflection of the values at the edge. Background fill uses zeros to fill in the kernel area beyond the edge of the image.
For this model, a Sobel filter has been selected. To convert this model to the Prewitt filter calculation, the kernels must be changed according to the example below.

Image of Seattle before applying Non-Directional Edge

             horizontal      vertical
             -1  -2  -1      1  0  -1
Sobel   =     0   0   0      2  0  -2
              1   2   1      1  0  -1

             horizontal      vertical
             -1  -1  -1      1  0  -1
Prewitt =     0   0   0      1  0  -1
              1   1   1      1  0  -1
Focal Analysis Results

Using Resolution Merge
Resolution Merge
• Histogram Equalization
• Histogram Matching
• Brightness Inversion

Radiometric Enhancement consists of functions to enhance your image by using the values of individual pixels within each band. Depending on the points and the bands in which they appear, radiometric enhancements that are applied to one band may not be appropriate for other bands. Therefore, the radiometric enhancement of a multiband image can usually be considered as a series of independent, single-band enhancements (Faust 1989).
LUT Stretch

LUT Stretch creates an output image that contains the data values as modified by a lookup table. The output is 3 bands.

Contrast stretch

When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table.

Contrast stretching involves taking a narrow input range and stretching the output brightness values for those same pixels over a wider range. This process is done in Layer Properties in Image Analysis for ArcGIS.

Linear and nonlinear

The terms linear and nonlinear, when describing types of spectral enhancement, refer to the function that is applied to the data to perform the enhancement. A piecewise linear stretch uses a polyline function to increase contrast to varying degrees over different ranges of the data.

Linear contrast stretch

A linear contrast stretch is a simple way to improve the visible contrast of an image. It is often necessary to contrast-stretch raw image data, so that they can be seen on the display.

In most raw data, the data file values fall within a narrow range—usually a range much narrower than the display device is capable of displaying. That range can be expanded to utilize the total range of the display device (usually 0 to 255).

Nonlinear contrast stretch

A nonlinear spectral enhancement can be used to gradually increase or decrease contrast over a range, instead of applying the same amount of contrast (slope) across the entire image. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges.

Piecewise linear contrast stretch

A piecewise linear contrast stretch allows for the enhancement of a specific portion of data by dividing the lookup table into three sections: low, middle, and high. It enables you to create a number of straight line segments that can simulate a curve. You can enhance the contrast or brightness of any section in a single color gun at a time. This technique is very useful for enhancing image areas in shadow or other areas of low contrast.

A piecewise linear contrast stretch normally follows two rules:

1. The data values are continuous; there can be no break in the values between High, Middle, and Low. Range specifications adjust in relation to any changes to maintain the data value range.

2. The data values specified can go only in an upward, increasing direction.

The contrast value for each range represents a percentage of the available output range that particular range occupies. Since rules 1 and 2 above are enforced, as the contrast and brightness values are changed, they may affect the contrast and brightness of other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease.
The statistics in the image file contain the mean, standard deviation,
and other statistics on each band of data. The mean and standard
deviation are used to determine the range of data file values to be
translated into brightness values or new data file values. You can
specify the number of standard deviations from the mean that are to
be used in the contrast stretch. Usually the data file values that are
two standard deviations above and below the mean are used. If the
data has a normal distribution, then this range represents
approximately 95 percent of the data.
The mean and standard deviation are used instead of the minimum
and maximum data file values because the minimum and maximum
data file values are usually not representative of most of the data. A
notable exception occurs when the feature being sought is in
shadow. The shadow pixels are usually at the low extreme of the
data file values, outside the range of two standard deviations from
the mean.
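The standard-deviation stretch described above can be sketched directly: map the range mean ± 2 standard deviations onto the display range 0 to 255. This is a minimal illustration with a hypothetical linear_stretch helper, not the Layer Properties implementation itself:

```python
import numpy as np

def linear_stretch(band, n_std=2.0, out_min=0, out_max=255):
    """Linear contrast stretch: map mean +/- n_std standard deviations of
    the data file values onto the full output brightness range."""
    mean, std = band.mean(), band.std()
    lo, hi = mean - n_std * std, mean + n_std * std
    scaled = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    # Values beyond +/- n_std (for example, deep shadow) saturate at the ends.
    return np.clip(scaled, out_min, out_max).astype(np.uint8)

# Raw values in a narrow range expand toward the 0-255 display range.
band = np.array([100.0, 102.0, 104.0, 106.0, 108.0])
stretched = linear_stretch(band)
```

Note how the clipping step embodies the shadow caveat above: pixels outside two standard deviations are pushed to the extremes of the output range.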
Histogram Equalization can also separate pixels into distinct groups if there are few output values over a wide range. This can have the visual effect of a crude classification.

The total number of pixels is divided by the number of bins, equaling the number of pixels per bin, as shown in the following equation:

    A = T / N

Where:
    T = the total number of pixels
    N = the number of bins
    A = the equalized number of pixels per bin

The pixels of each input value are assigned to bins, so that the number of pixels in each bin is as close to A as possible. Consider the following:

[Figure: the original histogram (number of pixels against data file values 0 to 9, with a peak) and the histogram after equalization, with A = 24 and output data file values 0 to 9]

To assign pixels to bins, the following equation is used:

    Bi = int[ ( Σ(k=1 to i-1) Hk + Hi / 2 ) / A ]

Where:
    A = equalized number of pixels per bin (see above)
    Hi = the number of pixels with the value i (histogram)
    int = integer function (truncating real numbers to integer)
    Bi = bin number for pixels with value i

Effect on contrast

By comparing the original histogram of the example data with the one above, you can see that the enhanced image gains contrast in the peaks of the original histogram. For example, the input range of 3 to 7 is stretched to the range 1 to 8. However, data values at the tails of the original histogram are grouped together. Input values 0 through 2 all have the output value of 0. So, contrast among the tail pixels, which usually make up the darkest and brightest regions of the input image, is lost.
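The two equations above can be combined into a short numpy sketch. The equalize helper is an illustration under the stated formulas, not the tool's code:

```python
import numpy as np

def equalize(band, n_bins):
    """Histogram equalization per the equations above:
    A = T / N, and B_i = int((sum of H_k for k < i, plus H_i / 2) / A)."""
    values, counts = np.unique(band, return_counts=True)
    A = band.size / n_bins                   # A = T / N pixels per bin
    below = np.cumsum(counts) - counts       # sum of H_k for k < i
    bins = ((below + counts / 2) / A).astype(int)
    lookup = dict(zip(values, bins))         # input value -> bin number B_i
    return np.vectorize(lookup.get)(band)

# 100 pixels spread evenly over values 0-9 stay evenly binned.
band = np.repeat(np.arange(10), 10)
out = equalize(band, n_bins=10)
```

With a perfectly flat histogram the mapping is the identity; with peaked data the peaks spread apart and the tails merge, as the "Effect on contrast" discussion describes.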
days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.

To achieve good results with Histogram Matching, the two input images should have similar characteristics:

• For some applications, the spatial resolution of the data should be the same.
• The relative distributions of land covers should be about the same, even when matching scenes that are not of the same area.

To match the histograms, a lookup table is mathematically derived, which serves as a function for converting one histogram to the other, as illustrated here.

Source histogram (a), mapped through the lookup table (b), approximates model histogram (c).
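One common way to derive such a lookup table is to align the cumulative histograms of the two images. The match_histogram helper below is a sketch of that idea, not the tool's documented algorithm:

```python
import numpy as np

def match_histogram(source, model):
    """Derive a lookup table mapping source values so that the source
    histogram approximates the model histogram (via cumulative histograms)."""
    s_values, s_counts = np.unique(source, return_counts=True)
    m_values, m_counts = np.unique(model, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    m_cdf = np.cumsum(m_counts) / model.size
    # For each source value, pick the model value with the nearest
    # cumulative frequency; this plays the role of the lookup table.
    lookup = np.interp(s_cdf, m_cdf, m_values)
    return np.interp(source, s_values, lookup)

source = np.array([0, 0, 1, 1, 2, 2])
model = np.array([10, 10, 20, 20, 30, 30])
matched = match_histogram(source, model)
```

Because the two toy histograms here have identical shapes, the matched output reproduces the model values exactly.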
Brightness Inversion

    DNout = 0.1 / DNin    if 0.1 < DNin < 1
• IHS to RGB
• Vegetative Indices
• Color IR to Natural Color

• extract new bands of data that are more interpretable to the eye
• apply mathematical transforms and algorithms
• display a wider variety of information in the three available color guns (R, G, B)

You can use the features of Spectral Enhancement to study such patterns as might occur with deforestation or crop rotation, and to see images in a more natural state or view images in different ways, such as changing the bands in an image from red, green, and blue to intensity, hue, and saturation.
RGB to IHS

The color monitors used for image display on image processing systems have three color guns. These correspond to red, green, and blue (R,G,B), the additive primary colors. When displaying three bands of a multiband data set, the viewed image is said to be in R,G,B space.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).
• Saturation represents the purity of color and also varies linearly from 0 to 1.
• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension. In the following image, 0 to 255 is the selected range; it could be defined as any data range. However, hue must vary from 0 to 360 to define the entire sphere (Buchanan 1979).
The variance of intensity and hue in RGB to IHS
    R = (M - r) / (M - m)
    G = (M - g) / (M - m)
    B = (M - b) / (M - m)

The algorithm used in the Image Analysis for ArcGIS RGB to IHS transform (Conrac 1980)

Where:
    R, G, B are each in the range of 0 to 1.0
    r, g, b are each in the range of 0 to 1.0
    M = largest value, r, g, or b
    m = least value, r, g, or b

    I = (M + m) / 2

If M = m, S = 0
If I ≤ 0.5, S = (M - m) / (M + m)
If I > 0.5, S = (M - m) / (2 - M - m)

If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)
IHS to RGB

It is not essential that the input parameters (IHS) to this transform be derived from an RGB to IHS transform. You could define I and/or S as other parameters, set Hue at 0 to 360, and then transform to RGB space. This is a method of color coding other data sets.

In another approach (Daily 1983), H and I are replaced by low- and high-frequency radar imagery. You can also replace I with radar intensity before the IHS to RGB transform (Holcomb 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge Landsat TM with SPOT panchromatic imagery (Chavez 1991).

The algorithm used by Image Analysis for ArcGIS for the IHS to RGB function is (Conrac 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - I(S)
m = 2I - M

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m) (H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m) ((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m

Equations for calculating G in the range of 0 to 1.0:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m) ((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m) ((360 - H) / 60)

Equations for calculating B in the range of 0 to 1.0:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m) ((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m) ((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
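The piecewise equations above collect naturally into one function. This is a minimal sketch for scalar inputs with a hypothetical ihs_to_rgb helper; it follows the Conrac-style ramps for each color gun and is not the tool's actual code:

```python
def ihs_to_rgb(i, s, h):
    """IHS to RGB (after Conrac 1980): h in [0, 360], i and s in [0, 1];
    returns (R, G, B) each in [0, 1]."""
    M = i * (1 + s) if i <= 0.5 else i + s - i * s
    m = 2 * i - M

    if h < 60:
        r = m + (M - m) * (h / 60)
    elif h < 180:
        r = M
    elif h < 240:
        r = m + (M - m) * ((240 - h) / 60)
    else:
        r = m

    if h < 120:
        g = m
    elif h < 180:
        g = m + (M - m) * ((h - 120) / 60)
    elif h < 300:
        g = M
    else:
        g = m + (M - m) * ((360 - h) / 60)

    if h < 60:
        b = M
    elif h < 120:
        b = m + (M - m) * ((120 - h) / 60)
    elif h < 240:
        b = m
    elif h < 300:
        b = m + (M - m) * ((h - 240) / 60)
    else:
        b = M

    return r, g, b
```

A useful sanity check: when S = 0, M = m = I, so every branch collapses to the same gray value regardless of hue.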
• Band ratio = BandX / BandY

• Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)

• Transformed NDVI (TNDVI) = sqrt[ (IR - R) / (IR + R) + 0.5 ]

Source: Modified from Sabins 1987; Jensen 1996; Tucker 1979

These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.
Applications

• Indices are used extensively in mineral exploration and vegetation analysis to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences that cannot be observed in the display of the original color bands.

Sensor        IR Band   Red Band
Landsat MSS   4         2
SPOT XS       3         2
Landsat TM    4         3
NOAA AVHRR    2         1
Image algebra

    DNir - DNred

Band ratios are also commonly used. These are derived from the absorption spectra of the material of interest. The numerator is a baseline of background absorption and the denominator is an absorption peak.
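The NDVI formula above is a direct per-pixel computation on two bands. A minimal numpy sketch (a hypothetical ndvi helper; the guard for zero-valued pixels is an implementation choice, not part of the formula):

```python
import numpy as np

def ndvi(ir, red):
    """NDVI = (IR - R) / (IR + R), returning 0 where both bands are zero."""
    ir = np.asarray(ir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = ir + red
    safe = np.where(denom == 0, 1.0, denom)   # avoid division by zero
    return np.where(denom == 0, 0.0, (ir - red) / safe)

# For Landsat TM, IR is band 4 and R is band 3 (see the sensor table above).
out = ndvi([[80, 50], [30, 0]], [[20, 50], [90, 0]])
```

Values near +1 indicate strong near-infrared reflectance relative to red (healthy vegetation); negative values indicate the reverse.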
After using Color IR to Natural Color, the image appears in natural colors.
Information versus data
Information, as opposed to data, is independently meaningful. It is relevant to a particular problem or question.
You can input data into a GIS and output information. The
information you wish to derive determines the type of data that
must be input. For example, if you are looking for a suitable refuge
for bald eagles, zip code data is probably not needed, while land
cover data may be useful.
For this reason, the first step in any GIS project is usually an
assessment of the scope and goals of the study. Once the project is
defined, you can begin the process of building the database.
Although software and data are commercially available, a custom
database must be created for the particular project and study area.
The database must be designed to meet the needs and objectives of
the organization.
Neighborhood Analysis

Neighborhood Analysis applies to any analysis function that takes neighboring pixels into account. This function creates a new thematic layer.
Thematic Change creates an output image from two input raster files. The class values of the two input files are organized into a matrix.
The first input file specifies the columns of the matrix, and the second one specifies the rows. Zero is not treated specially in any way. The
number of classes in the output file is the product of the number of classes from the two input files.
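The column/row matrix described above can be sketched as simple integer arithmetic on two class rasters. The encoding below (row-major, starting at 0) is one plausible choice for illustration; the actual class numbering used by Image Analysis for ArcGIS is not specified here:

```python
import numpy as np

def thematic_change(first, second, n_classes_first):
    """Combine two thematic layers into one change layer.

    The first input defines the matrix columns and the second the rows;
    the output has n_first * n_second classes, and zero is treated as an
    ordinary class value. The encoding here is hypothetical.
    """
    return second * n_classes_first + first

before = np.array([[0, 1], [2, 2]])   # 3 classes: matrix columns
after = np.array([[0, 0], [1, 2]])    # matrix rows
out = thematic_change(before, after, n_classes_first=3)
```

Each output value identifies one (before, after) class pair, so every possible transition gets its own class, including "no change" pairs on the matrix diagonal.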
Thematic Change
Note the areas of classification that show the changes between 1973 and 1994.
You can also use Recode to save any changes made to the color scheme or class names of a classified image to the Attribute Table for later use. Just saving an image will not record these changes.

Recoding an image involves two major steps. First, you group the discrete classes together into common groups. Second, you perform the actual recoding process, which rewrites the Attribute Table using the information from your grouping process.
Summarize Areas
This chapter will explain the following functions and show you how to use them:

• Image Difference
• Layer Stack
Image Difference
The Image Difference function gives you the ability to The Decreased class represents areas of negative (darker) change
conveniently perform change detection on aspects of an area by greater than the threshold for change and is red in color. The
comparing two images of the same place from different times. Increased class shows areas of positive (brighter) change greater
than the threshold and is green in color. Other areas of positive and
The Image Difference tool is particularly useful in plotting negative change less than the thresholds and areas of no change are
environmental changes such as urban sprawl and deforestation or transparent. For your application, you may edit the colors to select
the destruction caused by a wildfire or tree disease. It is also a any color desired for your study.
handy tool to use in determining crop rotation or the best new place
to develop a neighborhood. A lgor it hm
Image Difference is used for change analysis with imagery that Subtract two images on a pixel by pixel basis.
depicts the same area at different points in time. With Image
Difference, you can highlight specific areas of change in whatever 1. Subtract the Before Image from the After Image.
amount you choose. Two images are generated from this image-to- 2. Convert the decrease percentage to a value.
image comparison; one is a grayscale continuous image, and the
3. Convert the increase percentage to a value.
other is a five-class thematic image.
4. If the difference is less than the decrease value, then assign
The first image generated from Image Difference is the Difference the pixel to Class 1 (Decreased).
image. The Difference image is a grayscale image composed of 5. If the difference is greater than the increase value then assign
single band continuous data. This image is created by subtracting the pixel to Class 5 (Increased).
the Before Image from the After Image. Since Image Difference
calculates change in brightness values over time, the Difference
image simply reflects that change using a grayscale image. Brighter
areas have increased in reflectance. This may mean clearing of
forested areas. Dark areas have decreased in reflectance. This may
mean an area has become more vegetated, or the area was dry and
is now wet.
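The algorithm steps above can be sketched with threshold values already converted from percentages. This illustration collapses the middle (transparent) classes into a single class 3, which is a simplification of the tool's five-class output:

```python
import numpy as np

def image_difference(before, after, decrease_value, increase_value):
    """Image Difference sketch: subtract Before from After, then build a
    thematic layer. Class 1 = Decreased (red), Class 5 = Increased (green);
    change within the thresholds is collapsed here into class 3."""
    diff = after.astype(float) - before.astype(float)
    classes = np.full(diff.shape, 3)      # within-threshold / no change
    classes[diff < decrease_value] = 1    # Decreased
    classes[diff > increase_value] = 5    # Increased
    return diff, classes

before = np.array([100, 100, 100, 100])
after = np.array([40, 95, 105, 180])
diff, classes = image_difference(before, after,
                                 decrease_value=-25, increase_value=25)
```

The continuous diff array corresponds to the grayscale Difference image; the classes array corresponds to the thematic highlight image.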
• Unsupervised Classification
• Supervised Classification

The differences between the two are basically as their titles suggest. Supervised Classification is more closely controlled by you than Unsupervised Classification.
The Classification Process
Pattern recognition

Pattern recognition is the science—and art—of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye; the human brain automatically sorts certain textures and colors into categories.

In a computer system, spectral pattern recognition can be more scientific. Statistics are derived from the spectral characteristics of all pixels in an image. However, in Supervised Classification, the statistics are derived from the training samples, and not the entire image. After the statistics are derived, pixels are sorted based on mathematical criteria. The classification process breaks down into two parts: training and classifying (using a decision rule).

Training

First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.

Supervised training

Supervised training is closely controlled by the analyst. In this process, you select pixels that represent patterns or land cover features that you recognize, or that you can identify with help from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data, and of the classes desired, is required before classification.

By identifying patterns, you can instruct the computer system to identify pixels with similar characteristics. If the classification is accurate, the resulting classes represent the categories within the data that you originally identified.

Unsupervised training

Unsupervised training is more computer-automated. It enables you to specify some parameters that the computer uses to uncover statistical patterns that are inherent in the data. These patterns do not necessarily correspond to directly meaningful characteristics of the scene, such as contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters of pixels with similar spectral characteristics. In some cases, it may be more important to identify groups of pixels with similar spectral characteristics than it is to sort pixels into recognizable categories.

Unsupervised training is dependent upon the data itself for the definition of classes. This method is usually used when less is known about the data before classification. It is then the analyst's responsibility, after classification, to attach meaning to the resulting classes (Jensen 1996). Unsupervised classification is useful only if the classes can be appropriately interpreted.

Signatures

The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file to a class. Signatures contain both parametric class definitions (mean and covariance) and non-parametric class definitions (parallelepiped boundaries that are the per band minima and maxima).

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. Supervised and unsupervised training can generate parametric signatures. A set of parametric signatures can be used to train a statistically-based classifier (e.g., maximum likelihood) to define the classes.
After the signatures are defined, the pixels of the image are sorted
into classes based on the signatures by use of a classification
decision rule. The decision rule is a mathematical algorithm that,
using data contained in the signature, performs the actual sorting of
pixels into distinct class values.
• Minimum distance
• Mahalanobis distance
• Maximum likelihood
Nonparametric decision rule

ISODATA clustering

[Figure: ISODATA clusters plotted in feature space, with Band A and Band B data file values on the axes]

Pixel analysis

Pixels are analyzed beginning with the upper left corner of the image and going left to right, block by block.

The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel is assigned to the cluster whose mean is the closest. The ISODATA function creates an output image file with a thematic raster layer as a result of the clustering. At the end of each iteration, an image file exists that shows the assignments of the pixels to the clusters.

For the second iteration, the means of all clusters are recalculated, causing them to shift in feature space. The entire process is repeated—each candidate pixel is compared to the new cluster means and assigned to the closest cluster mean.
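A single ISODATA iteration, as described above, can be sketched as assign-then-recalculate. This hypothetical isodata_iteration helper assumes every cluster keeps at least one pixel and omits the split/merge logic of full ISODATA:

```python
import numpy as np

def isodata_iteration(pixels, means):
    """One clustering pass: assign each pixel (a row of band values) to the
    cluster whose mean is spectrally closest, then recalculate the means."""
    # Spectral (Euclidean) distance from every pixel to every cluster mean.
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    new_means = np.array([pixels[assignments == c].mean(axis=0)
                          for c in range(len(means))])
    return assignments, new_means

pixels = np.array([[0., 0.], [1., 1.], [10., 10.], [11., 11.]])
means = np.array([[0., 0.], [10., 10.]])
assignments, new_means = isodata_iteration(pixels, means)
```

Repeating this until the percentage of pixels that change clusters drops below a threshold corresponds to the "percentage unchanged" stopping criterion.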
Percentage unchanged
Image Analysis for ArcGIS provides these commonly-used decision rules for parametric signatures:

• Minimum distance
• Mahalanobis distance
• Maximum likelihood (with Bayesian variation)

Nonparametric rule:

• Parallelepiped

[Figure: feature space plot of Band A against Band B data file values, showing a candidate pixel and the means µ1, µ2, µ3 of three signatures]

In this illustration, spectral distance is illustrated by the lines from the candidate pixel to the means of the three signatures. The candidate pixel is assigned to the class with the closest mean.
The spectral distance is calculated as:

    SDxyc = sqrt[ Σ(i=1 to n) (µci - Xxyi)² ]

Where:
    n = number of bands
    i = a particular band
    c = a particular class
    Xxyi = data file value of pixel x,y in band i
    µci = mean of data file values in band i for the sample for class c
    SDxyc = spectral distance from pixel x,y to the mean of class c
The maximum likelihood decision rule is based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

The equation for the maximum likelihood/Bayesian classifier is as follows:

    D = ln(ac) - [ 0.5 ln(|Covc|) ] - [ 0.5 (X - Mc)^T (Covc)^-1 (X - Mc) ]

Note: The maximum likelihood algorithm assumes that the histograms of the bands of data have normal distributions. If this is not the case, you may have better results with the minimum distance decision rule.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied lead to similarly varied classes, and vice versa. For example, when classifying urban areas—typically a class whose pixels vary widely—correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis 1978).

The equation for the Mahalanobis distance classifier is:

    D = (X - Mc)^T (Covc)^-1 (X - Mc)

Note: The Mahalanobis distance algorithm assumes that the histograms of the bands have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

Where:
    D = Mahalanobis distance (or weighted distance, for maximum likelihood)
    c = a particular class
    X = the measurement vector of the candidate pixel
    Mc = the mean vector of the signature of class c
    Covc = the covariance matrix of the pixels in the signature of class c
    Covc^-1 = inverse of Covc
    T = transposition function
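The Mahalanobis rule above translates directly into code. A minimal sketch with hypothetical helper names, assuming each signature supplies a mean vector and covariance matrix:

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    """D = (X - Mc)^T (Covc)^-1 (X - Mc) for one candidate pixel."""
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

def classify(x, signatures):
    """Assign the candidate pixel to the class with the smallest distance.
    signatures: a list of (mean vector, covariance matrix) pairs."""
    return int(np.argmin([mahalanobis_distance(x, m, c)
                          for m, c in signatures]))

signatures = [(np.array([0.0, 0.0]), np.eye(2)),
              (np.array([5.0, 5.0]), np.eye(2))]
pixel = np.array([1.0, 1.0])
```

With identity covariance matrices this reduces to squared Euclidean (minimum) distance; a highly varied class, with a wider covariance, shrinks the distances of its far-flung pixels, which is exactly the urban-versus-water behavior described above.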
There are high and low limits for every signature in every band. When a pixel's data file values are between the limits for every band in a signature, the pixel is assigned to that signature's class. If a pixel falls into more than one class, the first class is the one assigned. When a pixel falls into no class boundaries, it is labeled unclassified.
• Convert Features to Raster

The Image Info tool that is discussed in chapter 3, "Applying data tools," is also an important part of Raster/Feature Conversion. The ability to assign certain pixel values as NoData is very helpful when converting images.
Conversion

Always be aware of how the raster dataset will represent the features when converting points, polygons, or polylines to a raster, and vice versa. There is a trade-off when working with a cell-based system: points do not have area, but cells do. Even though a point is represented by a single cell, that cell has area. The smaller the cell size, the smaller the area, and thus the closer the representation of the point feature. Points converted to cells will have an accuracy of plus or minus half the cell size. For many users, having all data types in the same format and being able to use them interchangeably in the same language is more important than a loss of accuracy.
When you choose Convert Raster to Features, the dialog will give you the choice of a Field to specify from the image in the conversion. You will also be given the choice of an Output geometry type, so you can choose whether the feature will be a point, a polygon, or a polyline according to the Field and data you're using. In order not to have jagged or sharp edges on the new feature file, you can check Generalize Lines to smooth out the edges. You should note that regardless of what Field you pick, the category will not be populated on the Attribute Table after conversion.

A raster image before conversion
When you convert points, cells are given the value of the points
found within each cell. Cells that do not contain a point are given
the value of NoData. You are given the option of specifying the cell
size you want to use in the Feature to Raster dialog. You should
choose the cell size based on several different factors: the resolution
of the input data, the output resolution needed to perform your
analysis, and the need to maintain a rapid processing speed.
Polygons are used for buildings, forests, fields, and many other
features that are best represented by a series of connected cells.
When you convert polygons, the cells are given the value of the
polygon found at the center of each cell.
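The point-to-raster behavior described above (cells take the value of a contained point; empty cells get NoData) can be sketched as follows. The points_to_raster helper, the NODATA marker, and the top-left origin convention are all illustrative assumptions, not the Feature to Raster tool itself:

```python
import numpy as np

NODATA = -9999  # hypothetical NoData marker

def points_to_raster(points, values, origin, cell_size, shape):
    """Convert point features to a raster: each cell takes the value of a
    point falling inside it; cells with no point are set to NoData."""
    raster = np.full(shape, NODATA)
    x0, y0 = origin                       # map coordinates of the top-left
    for (x, y), value in zip(points, values):
        col = int((x - x0) // cell_size)
        row = int((y0 - y) // cell_size)  # rows increase downward
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            raster[row, col] = value
    return raster

raster = points_to_raster([(0.5, 9.5), (2.5, 7.5)], [1, 2],
                          origin=(0.0, 10.0), cell_size=1.0, shape=(10, 10))
```

Shrinking cell_size tightens the plus-or-minus-half-a-cell positional accuracy discussed above, at the cost of a larger raster and slower processing.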
• Polynomial Properties
• Rubber Sheeting
• Camera Properties
• IKONOS Properties
• Landsat Properties
• QuickBird Properties
• RPC Properties

The terms geocorrection and rectification are used synonymously when discussing geometric correction. Rectification is the process of transforming the data from one grid system into another grid system using a geometric transformation. Since the pixels of a new grid may not align with the pixels of the original grid, the pixels must be resampled. Resampling is the process of extrapolating data values for the pixels on the new grid from the values of the source pixels.

Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a DEM of the study area. It is based on collinearity equations, which can be derived by using 3D Ground Control Points (GCPs). In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended.
When to rectify
Rectification is necessary in cases where the pixel grid of the image • What is the extent of the study area? Circular, north-south,
must be changed to fit a map projection system or a reference east-west, and oblique areas may all require different
image. There are several reasons for rectifying image data: projection systems (ESRI 1992).
Rectification is not necessary if there is no distortion in the image. Accurate GCPs are essential for an accurate rectification. From the
For example, if an image file is produced by scanning or digitizing GCPs, the rectified coordinates for all other points in the image are
a paper map that is in the desired projection system, then that image extrapolated. Select many GCPs throughout the scene. The more
is already planar and does not require rectification unless there is dispersed the GCPs are, the more reliable the rectification is. GCPs
some skew or rotation of the image. Scanning or digitizing for large scale imagery might include the intersection of two roads,
produces images that are planar, but do not contain any map airport runways, utility corridors, towers or buildings. For small
coordinate information. These images need only to be scale imagery, larger features such as urban areas or geologic
georeferenced, which is a much simpler process than rectification. features may be used. Landmarks that can vary (edges of lakes,
In many cases, the image header can simply be updated with new other water bodies, vegetation and so on) should not be used.
map coordinate information. This involves redefining:
The source and reference coordinates of the GCPs can be entered in
• the map coordinate of the upper left corner of the image the following ways:
• the cell size (the area represented by each pixel)
• They may be known a priori, and entered at the keyboard.
This information is usually the same for each layer of an image file, • Use the mouse to select a pixel from an image in the view. With
although it could be different. For example, the cell size of band 6 both the source and destination views open, enter source
of Landsat TM data is different than the cell size of the other bands. coordinates and reference coordinates for image to image
registration.
Gro und contro l poi nts • Use a digitizing tablet to register an image to a hardcopy map.
GCPs are specific pixels in an image for which the output map Tol e r a n c e o f R M S e rror ( R M S E )
coordinates (or other output coordinates) are known. GCPs consist
of two X,Y pairs of coordinates: Acceptable RMS error is determined by the end use of the data
base, the type of data being used, and the accuracy of the GCPs and
• source coordinates — usually data file coordinates in the image ancillary data being used. For example, GCPs acquired from GPS
being rectified should have an accuracy of about 10 m, but GCPs from 1:24,000-
• reference coordinates — the coordinates of the map or scale maps should have an accuracy of about 20 m.
reference image to which the source image is being registered
It is important to remember that RMS error is reported in pixels.
The term map coordinates is sometimes used loosely to apply to Therefore, if you are rectifying Landsat TM data and want the
reference coordinates and rectified coordinates. These coordinates rectification to be accurate to within 30 meters, the RMS error
are not limited to map coordinates. For example, in image to image should not exceed 1.00. Acceptable accuracy depends on the image
registration, map coordinates are not necessary. area and the particular project.
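The RMS error check described above can be sketched numerically. This is a minimal illustration, not part of Image Analysis for ArcGIS; the residual values are hypothetical.

```python
import math

def rms_error(residuals):
    """Root mean square of per-GCP residual distances, in pixels."""
    return math.sqrt(sum(dx * dx + dy * dy for dx, dy in residuals) / len(residuals))

# Hypothetical residuals: input GCP location minus retransformed location, in pixels.
residuals = [(0.3, -0.4), (0.6, 0.8), (-0.5, 0.0)]
err = rms_error(residuals)

# For Landsat TM (30 m pixels), accuracy within 30 meters means the
# RMS error should not exceed 1.00 pixel.
acceptable = err <= 1.00
```

Because RMS error is reported in pixels, the same threshold of 1.00 corresponds to a different ground distance for each sensor's pixel size.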
The Link Snapping section is activated only when you have a vector layer (shapefile) active in ArcMap. The purpose of this portion of the tool is to allow you to snap an edge, end, or vertex to the edge, end, or vertex of another layer. The vector layer you want to snap to another layer is defined in the Link Snapping box. Check Vertex, Edge, or End depending on what you want to snap to in another layer. The choice is completely up to you.
1. Click the arrows to set the Threshold, and click the Within and Over Threshold boxes to change the link colors.

2. The Displayed Units area shows the measurement of the Vertical Units.

3. If you have shapefiles (a vector layer) active in ArcMap, check Vertex, Boundary, or End Point. Checking one will activate Snap Tolerance and Snap Tolerance Units.

You can proof and edit the coordinates of the links as you enter them.
1. Click the Geocorrection Properties button.

2. Click the Links tab. The coordinates will be displayed in the cell array on this tab.

3. Click inside a cell and edit the contents.

4. When you are finished, you can click Export Links to Shapefile and save the new shapefile.
After the Elevation Source section, you can check the box if you want to Account for Earth's curvature as part of the elevation.
The following steps take you through the Elevation tab. The first set
of instructions pertains to using File as your Elevation Source. The
second set uses Constant as the Elevation Source.
1. Choose File.
2. Type the file name or navigate to the directory where the
Elevation File is stored.
3. Click the dropdown arrow and choose Feet or Meters.
4. Check if you want to Account for the Earth’s curvature.
5. Click Apply to set the Elevation Source. Click OK if you are
finished with the dialog.
1. Choose Constant.
2. Click the arrows to enter the Elevation Value.
3. Click the dropdown arrow, and choose either Feet or Meters.
4. Check if you want to Account for the Earth’s curvature.
5. Click Apply to set the Elevation Source. Click OK if you are
finished with the dialog.
The SPOT 4 satellite orbits the earth at 822 km above the Equator. It has two sensors on board: a multispectral sensor and a panchromatic sensor. The multispectral scanner has a pixel size of 20 × 20 m, and a swath width of 60 km. The panchromatic scanner has a pixel size of 10 × 10 m, and a swath width of 60 km.

[Figure: SPOT sensors — XS multispectral (3 bands) and Panchromatic (1 pixel = 10 m × 10 m)]
A 1st-order transformation can also be used for data that are already projected onto a plane. For example, SPOT and Landsat Level 1B data are already transformed to a plane.

The transformation matrix for a 1st-order transformation consists of six coefficients, three for each output coordinate:

x0 = a0 + a1x + a2y

y0 = b0 + b1x + b2y

where x and y are source coordinates, and x0 and y0 are rectified coordinates. Clearly, the size of the transformation matrix increases with the order of the transformation.
Consider the effect of transformation order in a simple example that uses only one coordinate (X). Suppose the GCPs are fit with a 1st-order equation:

xr = (25) + (-8)xi

and with a 2nd-order equation:

xr = (31) + (-16)xi + (2)xi^2

[Figure: reference X coordinate plotted against source X coordinate for the 1st-order and 2nd-order fits]

Suppose a fourth GCP is added:

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
4                              5

As illustrated in the graph above, this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. To ensure that all of the GCPs fit, the order of the transformation could be increased to 3rd-order. The following equation could then result:

xr = (25) + (-5)xi + (-4)xi^2 + (1)xi^3

[Figure: the 3rd-order curve passes through all four GCPs]

In this example, however, a 3rd-order transformation probably would be too high, because the output pixels in the X direction would be arranged in a different order than the input pixels in the X direction:

x0(1) = 17, x0(2) = 7, x0(3) = 1, x0(4) = 5

x0(1) > x0(2) > x0(4) > x0(3)

17 > 7 > 5 > 1

[Figure: input image X coordinates 1, 2, 3, 4 map to output image X coordinates in the order 3, 4, 2, 1]

Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation. The minimum number of points required to perform a transformation of order t equals:

((t + 1)(t + 2)) / 2

Order of Transformation    Minimum Number of GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66
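The minimum-GCP rule and the pixel-reordering problem in the example above can be checked numerically. This sketch is illustrative only and is not part of Image Analysis for ArcGIS; it uses NumPy's generic polynomial fitting with the GCP values from the example.

```python
import numpy as np

def min_gcps(t):
    """Minimum number of GCPs for a transformation of order t: (t+1)(t+2)/2."""
    return (t + 1) * (t + 2) // 2

# The four GCPs from the example (source X -> reference X).
src = np.array([1.0, 2.0, 3.0, 4.0])
ref = np.array([17.0, 7.0, 1.0, 5.0])

# A 3rd-order polynomial passes exactly through all four points,
# reproducing xr = (25) + (-5)xi + (-4)xi^2 + (1)xi^3 ...
coeffs = np.polyfit(src, ref, 3)
fitted = np.polyval(coeffs, src)

# ... but the output X coordinates are no longer in the same order as the
# input X coordinates (17 > 7 > 5 > 1), so 3rd-order is too high here.
monotonic = all(fitted[i] < fitted[i + 1] for i in range(3))
```

With four GCPs on one coordinate, the 3rd-order fit is exact by construction, which is precisely why such a high order over-fits the distortion.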
Finite element analysis is a powerful tool for solving complicated computational problems that can be approached by small, simpler pieces. It has been widely used as a local interpolation technique in geographic applications. For image rectification, the known control points can be triangulated into many triangles. Each triangle has three control points as its vertices. Then, the polynomial transformation can be used to establish mathematical relationships between source and destination systems for each triangle. Because the transformation passes exactly through each control point and is not uniform across the image, finite element analysis is also called Rubber Sheeting. It can also be called triangle-based rectification because the transformation and resampling for image rectification are performed on a triangle-by-triangle basis.

This triangle-based technique should be used when other rectification methods, such as Polynomial Transformation and photogrammetric modeling, cannot produce acceptable results.

Triangulation

To perform the triangle-based rectification, it is necessary to triangulate the control points into a mesh of triangles. Watson (1994) summarily listed four kinds of triangulation: arbitrary, optimal, Greedy, and Delaunay. Of the four kinds, the Delaunay triangulation is most widely used and is adopted because of the smaller angle variations of the resulting triangles.

Once the triangle mesh has been generated and the spatial order of the control points is available, the geometric rectification can be done on a triangle-by-triangle basis. This triangle-based method is appealing because it breaks the entire region into smaller subsets. If the geometric problem of the entire region is very complicated, the geometry of each subset can be much simpler and modeled through a simple transformation.

For each triangle, the polynomials can be used as the general transformation form between source and destination systems.

Linear transformation

The easiest and fastest transformation is the linear transformation with the first-order polynomials:

xo = a0 + a1x + a2y

yo = b0 + b1x + b2y

There is no need for extra information because there are three known conditions in each triangle and three unknown coefficients for each polynomial.
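The claim that each triangle's three known conditions determine the three coefficients of each first-order polynomial can be verified directly. This is a sketch with hypothetical control points, not the product's implementation:

```python
import numpy as np

# Hypothetical triangle: three control points, source (x, y) -> destination (xo, yo).
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst = np.array([[2.0, 1.0], [12.0, 3.0], [1.0, 12.0]])

# xo = a0 + a1*x + a2*y (and likewise for yo): three equations, three unknowns.
A = np.column_stack([np.ones(3), src[:, 0], src[:, 1]])
a = np.linalg.solve(A, dst[:, 0])  # coefficients a0, a1, a2
b = np.linalg.solve(A, dst[:, 1])  # coefficients b0, b1, b2

# The transformation passes exactly through each control point.
check_x = A @ a
```

In a full rubber-sheeting implementation this solve is repeated for every triangle of the Delaunay mesh, so each triangle carries its own six coefficients.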
After the Parameters tab on the IKONOS Properties dialog, there is the Chipping tab. The chipping process allows the use of RPCs for an image chip rather than the full, original image from which the chip was derived. This is made possible by specifying an affine relationship (pixel) between the chip and the full, original image.

The Chipping tab is the same for IKONOS, QuickBird, and RPC Properties.

On the Chipping tab you are given the choice of Scale and Offset or Arbitrary Affine as your chipping parameters. The dialog changes depending on which chipping parameter you choose. Scale and Offset is the simpler of the two. The formulas for calculating the affine using scale and offset are listed on the dialog. X and Y correspond to the pixel coordinates for the full, original image.
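A scale-and-offset chipping relationship can be sketched as follows. The exact formulas are those listed on the Chipping tab; this sketch only assumes the common form full = offset + scale × chip for each axis, and the function name and coordinate values are hypothetical.

```python
# Hypothetical chip-to-full-image mapping using scale and offset.
# Assumed form (not taken from the dialog): full = offset + scale * chip.
def chip_to_full(x_chip, y_chip, x_offset, y_offset, x_scale, y_scale):
    """Map chip pixel coordinates to full-image pixel coordinates (X, Y)."""
    return (x_offset + x_scale * x_chip, y_offset + y_scale * y_chip)

# A chip cut starting at column 1024, row 2048 of the full image, same resolution:
x, y = chip_to_full(10, 20, 1024, 2048, 1.0, 1.0)  # -> (1034, 2068)
```

An Arbitrary Affine generalizes this by also allowing rotation and shear terms between the chip and the full image.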
QuickBird Properties allows you to rectify images captured with the QuickBird satellite. Like IKONOS, QuickBird requires the use of an RPC file to describe the relationship between the image and the earth's surface at the time of image capture.

The QuickBird satellite was launched in October of 2001. Its orbit has an altitude of 450 kilometers, a 93.5 minute orbit time, and a 10:30 A.M. equator crossing time. The inclination is 97.2 degrees sun-synchronous, and the nominal swath width is 16.5 kilometers at nadir. The sensor has both panchromatic and multispectral capabilities. The dynamic range is 11 bits per pixel for both panchromatic and multispectral. The panchromatic bandwidth is 450-900 nanometers. The multispectral bands are as follows:

QuickBird Bands and Wavelengths

Band        Wavelength (microns)
1, Blue     0.45 to 0.52 µm
2, Green    0.52 to 0.60 µm

RPC stands for rational polynomial coefficients. When you choose it, the function allows you to specify the associated RPC file to be used in Geocorrection. RPC Properties in Image Analysis for ArcGIS allows you to work with NITF data.

NITF stands for National Imagery Transmission Format Standard. NITF data is designed to pack numerous image compositions with complete annotation, text attachments, and imagery-associated metadata.

The RPC file associated with the image contains rational function polynomial coefficients that are generated by the data provider based on the position of the satellite at the time of image capture. These RPCs can be further refined by using GCPs. This file should be located in the same directory as the image or images you intend to use in orthorectification.

Just like IKONOS and QuickBird, the RPC dialog contains the Parameters and Chipping tabs. These work the same way in all three model properties.
Landsat 1-5

In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian program specializing in the acquisition of remotely sensed digital satellite data. The first system was called ERTS (Earth Resources Technology Satellites), and later renamed to Landsat. There have been several Landsat satellites launched since 1972. Landsats 1, 2, and 3 are no longer operating, but Landsats 4 and 5 are still in orbit gathering data. Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data, and Landsats 4 and 5 collect MSS and TM data.

MSS

• Bands 4, 3, and 2 create a false color composite. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. For instance, in an infrared image, vegetation appears red, water appears navy or black, etc.

• Bands 5, 4, and 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue.

Different color schemes can be used to bring out or enhance the features under study. These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.
Glossary

Terms
abstract symbol
An annotation symbol that has a geometric shape, such as a circle, square, or triangle. These
symbols often represent amounts that vary from place to place, such as population density, yearly
rainfall, and so on.
accuracy assessment
The comparison of a classification to geographical data that is assumed to be true. Usually, the
assumed true data is derived from ground truthing.
analysis mask
An option that uses a raster dataset in which all cells of interest have a value and all other cells are
no data. Analysis mask lets you perform analysis on a selected set of cells.
ancillary data
The data, other than remotely sensed data, that is used to aid in the classification process.
annotation
The explanatory material accompanying an image or a map. Annotation can consist of lines, text,
polygons, ellipses, rectangles, legends, scale bars, and any symbol that denotes geographical
features.
AOI
See area of interest.
a priori
Already or previously known.

area
A measurement of a surface.

area of interest
(AOI) A point, line, or polygon that is selected as a training sample or as the image area to be used in an operation.

ASCII
See American Standard Code for Information Interchange.

aspect
The orientation, or the direction that a surface faces, with respect to the directions of the compass: north, south, east, west.

attribute
The tabular information associated with a raster or vector layer.

average
The statistical mean; the sum of a set of values divided by the number of values in the set.

band
A set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, and so on) or some other user-defined information created by combining or enhancing the original bands, or creating new bands from other sources. Sometimes called channel.

bilinear interpolation
Uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.

bin function
A mathematical function that establishes the relationship between data file values and rows in a descriptor table.

bins
Ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels are then given new values based upon the bins to which they are assigned.

border
On a map, a line that usually encloses the entire map, not just the image area as does a neatline.

boundary
A neighborhood analysis technique that is used to detect boundaries between thematic classes.
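The bilinear interpolation entry above can be illustrated with a short sketch. The window values are hypothetical; the weighting is the standard bilinear function over a 2 × 2 window.

```python
def bilinear(window, dx, dy):
    """Interpolate within a 2 x 2 window of data file values.

    window = [[v00, v01], [v10, v11]]; (dx, dy) is the output pixel's
    position within the window, each in the range [0, 1].
    """
    (v00, v01), (v10, v11) = window
    top = v00 * (1 - dx) + v01 * dx        # blend along the top row
    bottom = v10 * (1 - dx) + v11 * dx     # blend along the bottom row
    return top * (1 - dy) + bottom * dy    # blend between the two rows

value = bilinear([[10, 20], [30, 40]], 0.5, 0.5)  # center of the window -> 25.0
```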
classification scheme (or classification system)
A set of target classes. The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data.

continuous
A term used to describe raster data layers that contain quantitative and related values. See continuous data.

continuous data
A type of raster data that are quantitative (measuring a characteristic) and have related, continuous values, such as remotely sensed images (Landsat, SPOT, and so on).

correlation threshold
A value used in rectification to determine whether to accept or discard GCPs. The threshold is an absolute value threshold ranging from 0.000 to 1.000.

covariance
Measures the tendencies of data file values for the same pixel, but in different bands, to vary with each other in relation to the means of their respective bands. These bands must be linear. Covariance is defined as the average product of the differences between the data file values in each band and the mean of each band.

covariance matrix
A square matrix that contains all of the variances and covariances within the bands in a data file.

cubic convolution
Uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output with a cubic function.

data file value
Each number in an image file. Also called file value, image file value, DN, brightness value, pixel.

decision rule
An equation or algorithm that is used to classify image data after signatures have been created. The decision rule is used to process the data file values based upon the signature statistics.

density
A neighborhood analysis technique that outputs the number of pixels that have the same value as the analyzed pixel in a user-specified window.
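The covariance and covariance matrix entries above can be sketched numerically. The band values are hypothetical; the definition used is exactly the average product of the differences from each band's mean.

```python
import numpy as np

# Hypothetical data file values for the same five pixels in two bands.
band1 = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
band2 = np.array([20.0, 21.0, 25.0, 27.0, 32.0])

# Covariance: the average product of the differences between the data file
# values in each band and the mean of each band.
cov12 = np.mean((band1 - band1.mean()) * (band2 - band2.mean()))

# Square covariance matrix: variances on the diagonal, covariances off it.
cov_matrix = np.cov(np.stack([band1, band2]), bias=True)
```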
digital terrain model (DTM)
A discrete expression of topography in a data array, consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground points and breaklines.

dimensionality
In classification, dimensionality refers to the number of layers being classified. For example, a data file with three layers is said to be three-dimensional.

divergence
A statistical measure of distance between two or more signatures. Divergence can be calculated for any combination of bands used in the classification; bands that diminish the results of the classification can be ruled out.

diversity
A neighborhood analysis technique that outputs the number of different values within a user-specified window.

edge detector
A convolution kernel, which is usually a zero-sum kernel, that smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high. High spatial frequency is at the edges between homogeneous groups of pixels.

edge enhancer
A high-frequency convolution kernel that brings out the edges between homogeneous groups of pixels. Unlike an edge detector, it only highlights edges; it does not necessarily eliminate other features.

enhancement
The process of making an image more interpretable for a particular application. Enhancement can make important features of raw, remotely sensed data more interpretable to the human eye.

extension
The three letters after the period in a file name that usually identify the type of file.

extent
1. The image area to be displayed in a View. 2. The area of the earth's surface to be mapped.

feature collection
The process of identifying, delineating, and labeling various types of natural and human-made phenomena from remotely sensed images.

feature extraction
The process of studying and locating areas and objects on the ground and deriving useful information from images.

feature space
An abstract space that is defined by spectral units (such as an amount of electromagnetic radiation).

fiducial center
The center of an aerial photo.
For image to image rectification, a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix.

geocorrection
The process of rectifying remotely sensed data that has distortions due to a sensor or the curvature of the earth.

high frequency kernel
A convolution kernel that increases the spatial frequency of an image. Also called a high-pass kernel.

histogram
A graph of data distribution, or a chart of the number of pixels that have each possible data file value. For a single band of data, the horizontal axis of a histogram graph is the range of all possible data file values. The vertical axis is a measure of pixels that have each data value.
histogram equalization
The process of redistributing pixel values so that there are approximately the same number of pixels with each value within a range. The result is a nearly flat histogram.

histogram matching
The process of determining a lookup table that converts the histogram of one band of an image or one color gun to resemble another histogram.

hue
A component of IHS (intensity, hue, saturation) that is representative of the color or dominant wavelength of the pixel. It varies from 0 to 360. Blue = 0 (and 360), magenta = 60, red = 120, yellow = 180, green = 240, and cyan = 300.

image matching
The automatic acquisition of corresponding image points on the overlapping area of two images.

image processing
The manipulation of digital image data, including (but not limited to) enhancement, classification, and rectification operations.

indices
The process used to create output images by mathematically combining the DN values of different bands.

IR
Infrared portion of the electromagnetic spectrum.

lookup table (LUT)
An ordered set of numbers that is used to perform a function on a set of input values. To display or print an image, lookup tables translate data file values into brightness values.

mean
1. The statistical average; the sum of a set of values divided by the number of values in the set. 2. A neighborhood analysis technique that outputs the mean value of the data file values in a user-specified window.
minimum
A neighborhood analysis technique that outputs the least value of the data file values in a user-specified window.

mosaicking
The process of piecing together images side by side to create a larger image.

multispectral classification
The process of sorting pixels into a finite number of individual classes, or categories of data, based on data file values in multiple bands.

multispectral imagery
Satellite imagery with data recorded in two or more bands.

multispectral scanner (MSS)
Landsat satellite data acquired in four bands with a spatial resolution of 57 × 79 meters.

no data
NoData is what you assign to pixel values you do not want to include in a classification or function. By assigning pixel values NoData, they are not given a value. Images that georeference to non-rectangles need a NoData concept for display even if they are not classified. The values that NoData pixels are given are understood to be just place holders.

non-directional
The process using the Sobel and Prewitt filters for edge detection. These filters use orthogonal kernels convolved separately with the original image, and then combined.
nonparametric signature
A signature for classification that is based on polygons or rectangles that are defined in the feature space image for the image file. There is no statistical basis for a nonparametric signature; it is simply an area in a feature space image.

normalized difference vegetation index (NDVI)
The formula for NDVI is (IR - R) / (IR + R), where IR stands for the infrared portion of the electromagnetic spectrum, and R stands for the red portion of the electromagnetic spectrum. NDVI finds areas of vegetation in imagery.

observation
In photogrammetric triangulation, a grouping of the image coordinates for a GCP.

off-nadir
Any point that is not directly beneath a scanner's detectors, but off to an angle. The SPOT scanner allows off-nadir viewing.

panchromatic imagery
Single-band or monochrome satellite imagery.

parallelepiped
1. A classification decision rule in which the data file values of the candidate pixel are compared to upper and lower limits. 2. The limits of a parallelepiped classification, especially when graphed as rectangles.

parameter
1. Any variable that determines the outcome of a function or operation. 2. The mean and standard deviation of data, which are sufficient to describe a normal curve.

parametric signature
A signature that is based on statistical parameters (such as mean and covariance matrix) of the pixels that are in the training sample or cluster.
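The NDVI formula in the glossary is a one-line array operation. A minimal sketch with hypothetical reflectance values:

```python
import numpy as np

# Hypothetical reflectance values for the IR (near-infrared) and R (red) bands.
ir = np.array([[0.50, 0.40], [0.30, 0.20]])
red = np.array([[0.10, 0.10], [0.10, 0.20]])

# NDVI = (IR - R) / (IR + R); values near +1 indicate dense vegetation,
# values near 0 indicate little or no vegetation.
ndvi = (ir - red) / (ir + red)
```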
pattern recognition
The science and art of finding meaningful patterns in data, which can be extracted through classification.

piecewise linear contrast stretch
An enhancement technique used to enhance a specific portion of data by dividing the lookup table into three sections: low, middle, and high.

pixel
Abbreviated from picture element; the smallest part of a picture (image).

pixel depth
The number of bits required to store all of the data file values in a file. For example, data with a pixel depth of 8, or 8-bit data, have 256 values ranging from 0-255.

pixel size
The physical dimension of a single light-sensitive element (13 × 13 microns).

polygon
A set of closed line segments defining an area.

polynomial
A mathematical expression consisting of variables and coefficients. A coefficient is a constant that is multiplied by a variable in the expression.

principal components analysis (PCA)
1. A method of data compression that allows redundant data to be compressed into fewer bands (Jensen 1996; Faust 1989). 2. The process of calculating principal components and outputting principal component bands. It allows redundant data to be compacted into fewer bands (that is, the dimensionality of the data is reduced).

principal point
The point in the image plane onto which the perspective center is projected, located directly beneath the interior orientation.

profile
A row of data file values from a DEM or DTED file. The profiles of DEM and DTED run south to north (that is, the first pixel of the record is the southernmost pixel).

pushbroom
A scanner in which all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner, such as the SPOT scanner.

QuickBird
The QuickBird model requires the use of rational polynomial coefficients (RPCs) to describe the relationship between the image and the earth's surface at the time of image capture. By using QuickBird Properties, you can perform orthorectification on images gathered with the QuickBird satellite.
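The principal components analysis entry describes compacting redundant bands. A sketch of the standard eigenvector approach, with hypothetical pixel values in three highly correlated bands (this is a generic illustration, not the product's implementation):

```python
import numpy as np

# Hypothetical data file values: 6 pixels x 3 bands, with redundant bands.
pixels = np.array([
    [10.0, 20.0, 31.0],
    [12.0, 24.0, 35.0],
    [14.0, 28.0, 41.0],
    [16.0, 32.0, 47.0],
    [18.0, 36.0, 53.0],
    [20.0, 40.0, 59.0],
])

centered = pixels - pixels.mean(axis=0)
cov = np.cov(centered, rowvar=False)

# Eigenvectors of the covariance matrix define the principal components;
# sorting by eigenvalue puts the most variance in the first output band.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
components = centered @ eigvecs[:, order]

# Fraction of the total variance carried by each principal component band.
explained = eigvals[order] / eigvals.sum()
```

Because the three input bands are nearly linear combinations of one another, almost all of the variance ends up in the first principal component band, which is the sense in which PCA compresses redundant data.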
radiometric resolution
The dynamic range, or number of possible data file values, in each band. This is referred to by the number of bits into which the recorded energy is divided. See pixel depth.

rank
A neighborhood analysis technique that outputs the number of values in a user-specified window that are less than the analyzed value.

ratio data
A data type in which thematic class values have the same properties as interval values, except that ratio values have a natural zero or starting point.

reference coordinates
The coordinates of the map or reference image to which a source (input) image is being registered. GCPs consist of both input coordinates and reference coordinates for each point.

reference pixels
In classification accuracy assessment, pixels for which the correct GIS class is known from ground truth or other data. The reference pixels can be selected by you, or randomly selected.

reference plane
In a topocentric coordinate system, the tangential plane at the center of the image on the earth ellipsoid, on which the three perpendicular coordinate axes are defined.
reproject
Transforms raster image data from one map projection to another.

resampling
The process of extrapolating data file values for the pixels in a new grid when data have been rectified or registered to another image.

resolution
A level of precision in data.

resolution merging
The process of sharpening a lower-resolution multiband image by merging it with a higher-resolution monochrome image.

RGB
Red, green, blue. The primary additive colors that are used on most display hardware to display imagery.

RGB clustering
A clustering method for 24-bit data (three 8-bit bands) that plots pixels in three-dimensional spectral space and divides that space into sections that are used to define clusters. The output color scheme of an RGB-clustered image resembles that of the input file.

RMS error
The distance between the input (source) location of the GCP and the retransformed location for the same GCP. RMS error is calculated with a distance equation.

RPC properties
The RPC Properties uses rational polynomial coefficients to describe the relationship between the image and the earth's surface at the time of image capture. You can specify the associated RPC file to be used in your geocorrection.

rubber sheeting
The application of nonlinear rectification (2nd-order or higher).

saturation
A component of IHS that represents the purity of color and also varies linearly from 0 to 1.

scale
1. The ratio of distance on a map as related to the true distance on the ground. 2. Cell size. 3. The processing of values through a lookup table.

scanner
The entire data acquisition system, such as the Landsat scanner or the SPOT panchromatic scanner.

seed tool
An Image Analysis for ArcGIS feature that automatically generates feature layer polygons of similar spectral value.

shapefile
A vector format that contains spatial data. Shapefiles have the .shp extension.
source coordinates
In the rectification process, the input coordinates.

speckle noise
The light and dark pixel noise that appears in radar data.

spectral distance
The distance in spectral space computed as Euclidean distance in n-dimensions, where n is the number of bands.

spectral resolution
A measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel.

standard deviation
1. The square root of the variance of a set of values, which is used as a measurement of the spread of the values. 2. A neighborhood analysis technique that outputs the standard deviation of the data file values of a user-specified window.
striping
A data error that occurs if a detector on a scanning system goes out of adjustment, that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.

subsetting
The process of breaking out a portion of a large image file into one or more smaller files.

sum
A neighborhood analysis technique that outputs the total of the data file values in a user-specified window.

summarize areas
A common workflow progression that uses a feature theme corresponding to an area of interest to summarize the change just within a certain area.

supervised training
Any method of generating signatures for classification in which the analyst is directly involved in the pattern recognition process. Usually, supervised training requires the analyst to select training samples from the data that represent patterns to be classified.

swath width
In a satellite system, the total width of the area on the ground covered by the scanner.

temporal resolution
The frequency with which a sensor obtains imagery of a particular area.

terrain analysis
The processing and graphic simulation of elevation data.

terrain data
Elevation data expressed as a series of x, y, and z values that are either regularly or irregularly spaced.

thematic change
Thematic Change is a feature in Image Analysis for ArcGIS that allows you to compare two thematic images of the same area captured at different times to notice change in vegetation, urban areas, and so on.

thematic data
Raster data that is qualitative and categorical. Thematic layers often contain classes of related information, such as land cover, soil type, slope, etc.

thematic map
A map illustrating the class characterizations of a particular spatial variable, such as soils, land cover, hydrology, etc.

thematic mapper (TM)
Landsat data acquired in seven bands with a spatial resolution of 30 × 30 meters.
References
Akima, H. 1978. "A Method for Bivariate Interpolation and Smooth Surface Fitting for Irregularly Distributed
Data Points." ACM Transactions on Mathematical Software, Vol. 4, No. 2: 148-159.
Chavez, Pat S., Jr., et al. 1991. "Comparison of Three Different Methods to Merge
Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic.”
Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 3: 295-303.
Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California:
Conrac Corp.
ESRI. 1992. Map Projections & Coordinate Management: Concepts and Procedures.
Redlands, California: ESRI, Inc.
Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading,
Massachusetts: Addison-Wesley Publishing Company.
Holcomb, Derrold W. 1993. “Merging Radar and VIS/IR Imagery.” Paper submitted to the
1993 ERIM Conference, Pasadena, California.
Hord, R. Michael. 1982. Digital Image Processing of Remotely Sensed Data. New York:
Academic Press.
201
Jensen, John R., et al. 1983. “Urban/Suburban Land Use Analysis.” Chapter 30 in Manual of Remote Sensing, edited by Robert N.
Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Jensen, John R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs, New Jersey:
Prentice-Hall.
Kloer, Brian R. 1994. “Hybrid Parametric/Non-parametric Image Classification.” Paper presented at the ACSM-ASPRS Annual
Convention, April 1994, Reno, Nevada.
Lillesand, Thomas M., and Ralph W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John Wiley & Sons, Inc.
Marble, Duane F. 1990. “Geographic Information Systems: An Overview.” Introductory Readings in Geographic Information
Systems, edited by Donna J. Peuquet and Duane F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.
McCoy, Jill, and Kevin Johnston. Using ArcGIS Spatial Analyst. Redlands, California: ESRI, Inc.
Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H. Freeman and Co.
Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.
Schowengerdt, Robert A. 1980. “Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content.”
Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 10: 1325-1334.
Star, Jeffrey, and John Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall.
Swain, Philip H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information Note 111572). West
Lafayette, Indiana: The Laboratory for Applications of Remote Sensing, Purdue University.
Swain, Philip H., and Shirley M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw Hill Book Company.
Tou, Julius T., and Rafael C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts: Addison-Wesley Publishing
Company.
Tucker, Compton J. 1979. “Red and Photographic Infrared Linear Combinations for Monitoring Vegetation.” Remote Sensing of
Environment, Vol. 8: 127-150.
Walker, Terri C., and Richard K. Miller. 1990. Geographic Information Systems: An Assessment of Technology, Applications, and
Products. Madison, Georgia: SEAI Technical Publications.
Watson, David. 1994. Contouring: A Guide to the Analysis and Display of Spatial Data. New York: Elsevier Science.
Index

A
A priori 183
Absorption spectra 101
Abstract symbol 183
Accuracy assessment 183
Ancillary data 183
Annotation 183
AOI 183
Area 184
Area of interest 184
ASCII 183
Aspect 184
Atmospheric correction 91
Attribute 184
Average 184
AVHRR 102

B
Band 184
Bilinear interpolation 184
Bin 87
Bin function 184
Bins 184
Border 184
Boundary 184
brightness inversion 94
Brightness value 184
Brovey Transform 79
Buffer zone 185

C
Camera Model
  tutorial 33
Camera Properties
  Fiducials 172
Camera properties 185
Camera Properties dialog 171
Cartesian 185
Categorize 185
Cell 185
Cell Size 48
Cell Size Tab
  workflow 51
Checkpoint analysis 170
Class 185
  value
    numbering systems 114
Class value 185
Classification 152, 185
Classification accuracy table 185
Classification scheme 185
Clustering 186
Clusters 186
Coefficient 186
Collinearity 186
Contiguity analysis 186
Continuous 186
Continuous data 186
Contrast stretch
  for display 85
  linear 84
  min/max vs. standard deviation 85
  nonlinear 84
  piecewise linear 84
Convolution 70
  filtering 109
Convolution Filtering 70
Convolution filtering 186
Convolution kernel 186
Coordinate system 186
Correlation threshold 186
Correlation windows 186
Corresponding GCPs 187
Covariance 187
Covariance matrix 187
Creating a shapefile
  tutorial 18
Cubic convolution 187
D
Data 108, 187
Data file 187
Data file value 187
  display 84
Database 187
Decision rule 187
Digital elevation model 187
Digital terrain model 187
Display device 84, 85, 96

E
Edge detector 188
Edge enhancer 188
Effects of order 163
Enhancement 188
  linear 84
  nonlinear 84
  radiometric 83
  spatial 83
Extension 188
Extent 47
Extent Tab
  workflow 51

F
Feature collection 188
Feature extraction 188
Feature space 188
Fiducial center 188
Fiducials 188
File coordinates 189
Filtering 189
Finding areas of change 22
Focal 189
Focal Analysis 77
  workflow 78
Focal operation 109

G
GCP matching 189
GCPs 151
General Tab
  workflow 50
Geocorrection 189
  tutorial 33
Geocorrection property dialogs 153
  Elevation tab 155
  General tab 153
  Links tab 154
Geographic information system 189
Georeferencing 150, 189
GIS
  defined 107
Ground control point 189
Ground control points 151

H
High frequency kernel 189
High Frequency Kernels 72
High order polynomials 162
Histogram 189
  breakpoint 85
Histogram Equalization
  tutorial 14
Histogram equalization 189
  formula 88
Histogram match 91
Histogram matching 190
histogram matching 92
Histogram Stretch
  tutorial 14
Hue 96, 190

I
Identifying similar areas 18
IHS to RGB 99
IKONOS
  Chipping tab 174
IKONOS properties 190
IKONOS Properties dialog 173
Image data 190
Image Difference
  tutorial 22
Image file 190
Image Info 45
  workflow 46
Image matching 190
Image processing 190
Index 101
Indices 190
Information (vs. data) 108
Intensity 96
IR 190
Island Polygons 41
ISODATA 190

L
Landsat 190
  bands and wavelengths 177
  MSS 102
  TM 99, 102
Landsat 7 180
Landsat Properties 177
Landsat Properties dialog 181
Layer 190
Linear 191
Linear transformation 169, 191
Linear transformations 161
Lookup table 84
  display 85
Lookup table (LUT) 191

M
Majority 191
Map projection 191
Maximum likelihood 191
  controlling 40
  workflow 42
Seed Tool Properties 40
Shadow
  enhancing 84
Shapefile 196
Signature 196
Source coordinates 197
Spatial Enhancement 69
Spatial enhancement 197
Spatial frequency 197
Spatial resolution 197
Speckle noise 197
Spectral distance 197
Spectral enhancement 197
Spectral resolution 197
Spectral space 197
SPOT 197
  panchromatic 99
  XS 102
Spot 158
  Panchromatic 158
  XS 158
Spot 4 159
Spot Properties dialog 160
Standard deviation 85, 197
Starting Image Analysis for ArcGIS 12
Stereoscopic pairs 159
Striping 197
Subsetting 198
Summarize areas 198
Supervised training 198
Swath width 198

T
Temporal resolution 198
Terrain analysis 198
Terrain data 198
Thematic Change
  tutorial 24
Thematic data 198
Thematic files 152
Thematic map 198
Thematic mapper (TM) 198
Theme 198
Threshold 199
TM 177
TM data 179
Training 199
Training sample 199
Transformation matrix 161, 199
Triangle-based finite element analysis 169
Triangle-based rectification 169
Triangulation 169, 199
True color 199
  tutorial 18

U
Unsupervised Classification
  tutorial 25
Unsupervised training 199

V
Variable 199
Vector data 199
Vegetative indices 199

Z
Zero Sum Kernels 72
Zoom 199