User’s Guide
December 2010
Copyright © 2010 ERDAS, Inc.
The information contained in this document is the exclusive property of ERDAS, Inc. This work is protected under United
States copyright law and other international copyright treaties and conventions. No part of this work may be reproduced
or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any
information storage or retrieval system, except as expressly permitted in writing by ERDAS, Inc. All requests should be
sent to the attention of:
Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a project
at the Los Alamos National Laboratory, funded by the U.S. Government, managed under contract by the University of
California (University), and is under exclusive commercial license to LizardTech, Inc. It is used under license from
LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents pending. The U.S. Government and the
University have reserved rights in MrSID technology, including without limitation: (a) The U.S. Government has a non-
exclusive, nontransferable, irrevocable, paid-up license to practice or have practiced throughout the world, for or on
behalf of the United States, inventions covered by U.S. Patent No. 5,710,835 and has other rights under 35 U.S.C. §
200-212 and applicable implementing regulations; (b) If LizardTech's rights in the MrSID Technology terminate during
the term of this Agreement, you may continue to use the Software. Any provisions of this license which could reasonably
be deemed to do so would then protect the University and/or the U.S. Government; and (c) The University has no
obligation to furnish any know-how, technical assistance, or technical data to users of MrSID software and makes no
warranty or representation as to the validity of U.S. Patent 5,710,835 nor that the MrSID Software will not infringe any
patent or other proprietary right. For further information about these provisions, contact LizardTech, 1008 Western Ave.,
Suite 200, Seattle, WA 98104.
ERDAS, ERDAS IMAGINE, Stereo Analyst, IMAGINE Essentials, IMAGINE Advantage, IMAGINE Professional,
IMAGINE VirtualGIS, Mapcomposer, Viewfinder, and Imagizer are registered trademarks of ERDAS, Inc.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.
Table of Contents
Table of Contents
List of Tables
Preface
    About This Manual
    Example Data
    Documentation
    Conventions Used in This Book
    More Information/Help
Index
List of Tables
Table 1: APM Parameter Tuning
Table 2: Tie Point-Based Model Selection
Table 3: Edge Match Image Data Set
Table 4: Georeference Image Data Set
Preface
About This Manual

The IMAGINE AutoSync User’s Guide serves as a handy guide to help you use IMAGINE AutoSync™. Included is a comprehensive index, so that you can reference particular information later.
Example Data

Sample data sets are provided with the software. This data is installed separately from the data DVD. For the purposes of documentation, <ERDAS_Data_Home> represents the name of the directory where sample data is installed. The Tour Guides refer to specific data which are stored in <ERDAS_Data_Home>/examples.
Documentation

This manual is part of a suite of on-line documentation that you receive with ERDAS IMAGINE software. There are two basic types of documents: digital hardcopy documents, which are delivered as PDF files suitable for printing or on-line viewing, and On-Line Help Documentation, delivered as HTML files.
The PDF documents are found in <IMAGINE_HOME>\help\hardcopy
where <IMAGINE_HOME> represents the name of the directory where
ERDAS IMAGINE is installed. Many of these documents are available
from the ERDAS Start menu. The on-line help system is accessed by
clicking on the Help button in a dialog or by selecting an item from a
Help menu.
Conventions Used in This Book

In ERDAS IMAGINE, the names of menus, menu options, buttons, and other components of the interface are shown in bold type. For example:

“In the Select Layer To Add dialog, select the Fit to Frame option.”
When asked to use the mouse, you are directed to click, Shift-click,
middle-click, right-click, hold, drag, and so forth.
• drag—designates dragging the mouse while holding down the left
mouse button.
Shaded Boxes
Shaded boxes contain supplemental information that is not required
to execute the steps of a tour guide, but is noteworthy. Generally, this
is technical information.
More Information/Help

As you go through the tour guides, there are several ways to obtain more information regarding dialogs, tools, or menus, as described below.
On-Line Help
There are several ways to open Help:
Help Icon
Clicking this icon launches On-Line Help at the main start page. This file
provides access to all On-Line Help through the Contents, Index, and
Search tools. From the Contents pane, you may jump to any of the
available On-Line Help documents. From the Search pane, you may
search the entire On-Line Help system.
Help Button
Dialogs within ERDAS software contain a Help button which you can
click to open On-Line Help for the respective dialog. To open the
navigation pane (containing the Contents, Index, and Search functions)
you must click the View Navigation icon at the top of the help page.
Bubble Help
As you move your cursor along individual tools on many of the tabs, a small Bubble Help popup dialog may open, if it is available for the item and if your Preference is set to use Bubble Help. This information is the same as the Status Bar Help shown at the bottom of the Workspace, but opens directly under your cursor.
Introduction to IMAGINE AutoSync
This chapter explains what to expect from IMAGINE AutoSync and how
to adjust the underlying engine for optimal results. It also provides
practical tips and hints, and describes the best strategies to handle
difficult situations you may encounter.
Constraints

As with any tool, poor data quality and/or inappropriate parameters can produce less than desirable results. As a user, you should be aware of the following:
The more experienced you are with using IMAGINE AutoSync and the
more knowledge you have about the data and workflow, the better the
output will be using our software.
Data Preparation

The quality of your input data plays a crucial role in determining the accuracy of the output and the extent of user intervention required. Additionally, the type of input data largely determines which workflow to follow for optimal results. This section discusses the various data you can use in IMAGINE AutoSync and how to best prepare the data. It also provides suggested remedies for potential problems.
Input Images

When using the edge matching workflow, you can use georeferenced
or calibrated input images. In the georeferencing workflow, input
images can be georeferenced, calibrated, or raw images. You can also
use images that have map information but are not georeferenced to a
particular projected coordinate system.
If you are using raw input images, you must first establish a footprint
with the reference image before running automatic point measurement.
This is a necessary step since raw images lack the map information to
place the image at an approximate location to overlap with the
reference image.
Another consideration when using raw imagery is the potential for
matching problems between the uncorrected, vertically displaced
mountainous regions in the raw image and an orthorectified reference
image. This displacement can cause poor points to be generated from
the automatic point measurement process. You can alleviate this
problem by choosing an appropriate sensor model that allows for the
specification of a DEM (DLT, RPC, or ROP) and using an accurate
DEM. See Modeling on page 13 for more details.
Sensor metadata can be very helpful in establishing models for
rectification. For example, QuickBird images contain enough
information to build a rigorous model. Generally, data that are rectified
using a rigorous model and an accurate DEM produce the best results.
Selecting Input and Reference Images

When selecting input and reference images to use, it is preferable to
use images with maximum similarity. This largely improves the result of
the automatic point measurement. If the images are too dissimilar, the
results from the automatic point measurement process may be
undesirable.
Some of the main factors that affect the similarities of images include
the following:
Time of Capture
Time of capture, especially the season, could greatly alter the
radiometric characteristics of the images. For example, a winter scene
will not match well with a summer scene with high vegetation.
Resolution
Resolution is another factor that affects point matching results, because
it creates a difference in the details of the two images. Avoid mixing
input and reference images with a resolution difference larger than a
factor of six.
Elevation Variation
Variation in the elevation could also cause a difference between the
input and reference images. This is because the reference most likely
will be an orthorectified image and therefore vertical displacement is
minimal compared to the input. As a result, features that should be in
the same location could be far apart when the input and reference
images are attempted to be matched. To avoid this problem, you can
select a model that will allow for the specification of a DEM.
Digital Elevation Model (DEM)

The availability of a high resolution Digital Elevation Model (DEM) can
drastically impact the quality of rectification results, especially for
mountainous areas. A DEM provides additional model-solving
information in determining the location of features in the output. This
could greatly reduce the negative impact of vertical displacement when
matching input and reference images.
APM Engine

Automatic Point Measurement (APM) is a software tool that uses
image-matching technology to automatically recognize and measure
the corresponding image points between two raster images. In
IMAGINE AutoSync, APM aims to deliver the coordinates of evenly
distributed corresponding points between an input image and a
reference image.
APM Strategy Parameters

APM works automatically to find the needed image points, but there is a set of parameters you can adjust in circumstances where the default settings fail to produce acceptable results. An Advanced Point Matching Strategy dialog is also provided for more control over the process.
The defaults that appear in these dialogs can be set from the
IMAGINE AutoSync category of the Preference Editor.
Before using APM, confirm that the Initial Pyramid Layer Number
is set to 1 in Image Files (General) category of the Preference
Editor. This retains the largest pyramid layer when computing
pyramid layers to ensure point matching accuracy.
Keep All Points: Select this checkbox to use all tie points generated
regardless of accuracy or distribution. If this checkbox is active, the
number of collected tie points will be greater than the intended number
of points per image. You do not normally need to choose this option
unless your images have low contrast, yielding few points without this
option selected.
Starting Column, Starting Line: Define the starting location of tie
points you want to find on the image in pixels. You will get better results
if you define the starting location close to the upper-left corner of the
overlap area on the higher resolution image. It is safe to define the
location close to the upper-left corner of the image, but you may get bad
results if it is close to the lower-right corner.
Column Increment, Line Increment: Define the increment in pixels for tie point locations along the column and line directions. APM will try to find tie points near the image locations offset from the previous locations by these increments.
Ending Column, Ending Line: Define the last column and line for tie
point collection. If you want to define them, they should not exceed the
lower-right corner of the overlap area. If you leave them at the default
of 0, 0, APM will automatically use the last column and last line of the
overlap area.
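Taken together, these parameters define a regular grid of candidate locations at which APM looks for tie points. As a rough illustration only (the function name and the fallback behavior for the 0, 0 defaults are assumptions based on the descriptions above, not the actual ERDAS implementation):

```python
def candidate_locations(start_col, start_line, col_inc, line_inc,
                        end_col, end_line, last_col, last_line):
    """Hypothetical sketch of the tie point search grid defined by the
    Starting, Increment, and Ending parameters. An Ending value of 0
    falls back to the last column/line of the overlap area."""
    end_col = end_col or last_col
    end_line = end_line or last_line
    return [(col, line)
            for line in range(start_line, end_line + 1, line_inc)
            for col in range(start_col, end_col + 1, col_inc)]

# A 512 x 512 overlap area, candidate points every 128 pixels,
# starting from the upper-left corner of the overlap:
grid = candidate_locations(1, 1, 128, 128, 0, 0, 512, 512)
```

This also illustrates why a starting location near the upper-left corner works well: the grid grows down and to the right, so a lower-right start would leave little room for points.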
Automatically Remove Blunders: Click this checkbox to remove
blunders (wrong tie points) automatically from the APM generated tie
points. Removing blunders is an iterative process based on a 3rd order
polynomial model. When this option is selected, the points that do not
fit well with the majority of tie points are considered blunders and are
discarded. By default, this option is selected. You should deselect this
option only if you suspect that it is removing correct tie points. For
example, you should deselect this option when most of the APM tie
points are wrong, or when there is a large difference in the terrain
between the input and reference images.
Maximum Blunder Removal Iterations: This option becomes
available when you choose to automatically remove blunders with the
Automatically Remove Blunders option. The default is 2. In most
cases, increasing this number means more iterations of the blunder
removal algorithm will be run. As a result, more tie points will be
considered as blunders and discarded.
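The iterative scheme described above can be sketched as follows. This is an illustrative reconstruction, not the ERDAS implementation: it uses a 1st-order (affine) fit rather than the 3rd-order polynomial for brevity, and the residual cutoff (three times the RMSE, with a half-pixel floor) is an assumption:

```python
import numpy as np

def remove_blunders(src, dst, iterations=2, tol=3.0):
    """Iteratively fit a polynomial mapping from src to dst tie points
    and discard points whose residual exceeds tol times the RMSE.
    src, dst: (n, 2) arrays of pixel coordinates."""
    keep = np.ones(len(src), dtype=bool)
    for _ in range(iterations):
        s, d = src[keep], dst[keep]
        # Design matrix for an affine fit: [1, x, y] per tie point.
        A = np.column_stack([np.ones(len(s)), s[:, 0], s[:, 1]])
        coef, *_ = np.linalg.lstsq(A, d, rcond=None)
        resid = np.linalg.norm(A @ coef - d, axis=1)
        rmse = np.sqrt(np.mean(resid ** 2))
        # Half-pixel floor so a near-perfect fit does not discard points.
        cutoff = max(tol * rmse, 0.5)
        keep[np.flatnonzero(keep)[resid > cutoff]] = False
    return keep
```

Each pass tightens the fit, which is why raising the iteration count causes more points to be treated as blunders and discarded.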
Search Size: Enter the window size in pixels to use when searching for corresponding points. IMAGINE AutoSync searches for the corresponding point within a square window defined by this parameter. The default value is 17 (a 17 x 17 pixel window). For flat areas, this value can be smaller; for steeper areas, it should be larger. A larger value increases computation time and can produce more wrong points, while a smaller value can result in fewer matched points.
Feature Point Density: Defines the feature point density as a percentage of an internal default. Increasing the value above 100% (for example, in a poor-contrast area) generates more feature points and therefore more matched points. Decreasing the value below 100% (for example, in an area with dense detail) generates fewer feature points, which accelerates the computation.
Normally, you do not need to adjust this parameter if you are using
scanned aerial photos. However, if you select the Avoid Shadow
option, you should set this parameter to a higher value (for
example, 200%).
You do not need to use this option unless shadows are very
prominent in your images.
Image Scan Type: Positive
Select this option if you are working with a positive image (bare ground appears white in the image).

Image Scan Type: Negative
Select this option if you are working with a negative image (bare ground appears black in the image).
Use Manual Tie Points for Initial Connection between Images: Select this option if the manually measured tie points are to be used as initial points for APM to find additional tie points automatically. Select this option when the initial map coordinates are very coarse, or when no map information is available for one or more of the images. If you try to rectify a raw image to another raw image, you should also select this option. You should manually collect a minimum of three points.
Ideal Situations for Good APM Performance

For the best APM results, try to ensure that the following conditions are met as much as possible. Not meeting one or more of these conditions does not necessarily mean that the APM results will be of poor quality.
• Use images that were captured in the same season, at the same
time of day (similar illumination conditions), and with similar weather
situations with good visibility.
• Select the same band or a similar band in the images for point
matching to ensure similarity of radiometric characteristics.
• Before using APM, confirm that the Initial Pyramid Layer Number
is set to 1 in Image Files (General) category of the Preference
Editor. This retains the largest pyramid layer when computing
pyramid layers to ensure point matching accuracy.
Situations to Avoid

These are some situations you should try to avoid, since any one or a combination of the following conditions could result in poorly matched points.
• Using images with an overlap that is less than 256 x 256 pixels or
with an overlap region that is too narrow. Since APM requires a
sufficient region to deploy the matching strategy, an overlap less
than 20% will not produce desirable results.
• The band to match selection for inputs and reference should not
differ too much in electromagnetic wavelength. For example, an
infrared band might not match well with a blue band.
• Large scale, high-rise urban scenes do not match well due to the
vertical displacement effect, which is difficult to orthorectify.
APM Troubleshooting and Tips

Refer to this section if the APM results are not as expected. If APM results in a large RMSE, it may indicate bad APM results and/or inappropriate modeling. Examine the tie points carefully to ensure that the problem is from the APM results (many bad points, not enough points, and so on) before applying the following steps to fine-tune the APM parameters for improved results. If the APM points are correct but the output does not reflect the quality of the points, you most likely have chosen an inappropriate model.
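For reference, the RMSE reported for a model is the root mean square of the tie point residuals, that is, the distances between where the model places each point and where it was measured. A minimal sketch (the helper function and point values are illustrative, not part of the product):

```python
from math import hypot, sqrt

def rmse(predicted, measured):
    """Root mean square error of tie point residuals, in pixels."""
    residuals = [hypot(px - mx, py - my)
                 for (px, py), (mx, my) in zip(predicted, measured)]
    return sqrt(sum(r * r for r in residuals) / len(residuals))

# Two points off by 3 and 4 pixels -> sqrt((9 + 16) / 2), about 3.54 pixels
error = rmse([(0, 0), (10, 10)], [(3, 0), (10, 14)])
```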
Situation: Many points, but many poor quality points
Remedies: On the APM Strategy tab, change one or more of the following parameters:
• Increase the Minimum Point Match Quality (> 0.9)
• Increase the Correlation Size and Least Squares Size
• Decrease the Intended Number of Points

Situation: Too few points, but good quality
Remedies: On the APM Strategy tab, change one or more of the following parameters:
Image to Image (2D) Transforms

An image-to-image transform can warp one image onto another without the use of an earth model (DEM). The fit will not be as good as a rigorous sensor model that requires a DEM, because much of the distortion comes from the terrain.
Rubber Sheeting
Rubber sheeting is a two dimensional image-to-image transformation
which is implemented as a piecewise transformation based on the
triangles formed from the tie points. This has the property that the
transformation is always perfect at the control points and there is
always a well behaved transition from triangle to triangle. However, if an
image has hilly or mountainous terrain, you will have to collect a large
number of tie points. In effect, the tie points will be forming a model of
the terrain surface. This can be impractical since the performance of
rubber sheeting decreases as the number of points increases. Rubber
sheeting is best used in an area of moderate relief when an actual
sensor model and a DEM is not available.
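The key property, an exact fit at the control points with a well behaved transition between triangles, comes from mapping each point through the barycentric coordinates of the triangle that contains it. A minimal sketch for a single triangle (illustrative only; a real implementation would first triangulate all tie points and locate the containing triangle for each pixel):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    wa = ((b[0] - p[0]) * (c[1] - p[1]) - (c[0] - p[0]) * (b[1] - p[1])) / det
    wb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    return wa, wb, 1.0 - wa - wb

def piecewise_map(p, tri_src, tri_dst):
    """Map p through the affine transform sending tri_src onto tri_dst."""
    w = barycentric(p, *tri_src)
    return (sum(wi * v[0] for wi, v in zip(w, tri_dst)),
            sum(wi * v[1] for wi, v in zip(w, tri_dst)))

# A tie point (triangle vertex) maps exactly onto its counterpart:
tri_src = [(0, 0), (10, 0), (0, 10)]
tri_dst = [(1, 1), (12, 2), (2, 13)]
corner = piecewise_map((0, 0), tri_src, tri_dst)  # -> (1.0, 1.0)
```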
Polynomial
A polynomial model is a two dimensional image-to-image
transformation. The polynomial model is of the form:
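The general form referred to here, reconstructed from standard polynomial-rectification notation (the order t is user-selectable), maps input coordinates (x, y) to reference coordinates (x', y') as:

```latex
x' = \sum_{i=0}^{t} \sum_{j=0}^{t-i} a_{ij}\, x^{i} y^{j}, \qquad
y' = \sum_{i=0}^{t} \sum_{j=0}^{t-i} b_{ij}\, x^{i} y^{j}
```

where the coefficients a_ij and b_ij are solved by least squares from the tie points; t = 1 is the affine case.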
Ground to Image (3D) Transforms

Every image is a mapping of three-dimensional (3D) coordinates into a two-dimensional (2D) plane. The ground-to-image transformation models this mapping using a DEM as the earth model.
Selecting a Model

You will get the best results when using a rigorous sensor model and
an accurate DEM. Most of the satellite data are shipped with sensor
model data (either parameters for the rigorous orbital pushbroom or
RPCs) which IMAGINE AutoSync can read. If the model is unknown but
a DEM is available, then it is a reasonable strategy to first try using a
DLT. If the results from the DLT are not acceptable, the image may
have been created using a pushbroom sensor. In this case, the
pushbroom orbital parameters are unknown, so the next best candidate
is to use one of the image-to-image (2D) transformations as described
above.
Selecting a DEM/DTM

The quality and accuracy of the results will be directly tied to the quality
of the DEM or DTM (Digital Terrain Model) used. A DTM usually does
not include man-made structures such as buildings or bridges, so it can
be expected that these features will have the most mismatch in the final
results.
Modeling Troubleshooting and Tips

Refer to this section for additional modeling troubleshooting help and tips.

• The more rigorous the model, the better the result. Follow the list to make the best use of the available information from the data. The recommended order of models, from the most rigorous to the least, is:
4. Polynomial
5. Rubber Sheeting
Number of Tie Points    Appropriate Model
< 10                    Affine
• During the process of resolving the model, you can undo the last
deletion of points if the model results are not what you expected.
IMAGINE AutoSync Tips and Hints

This section provides additional tips and hints for using IMAGINE AutoSync to generate the best results.
Interpreting Results

After careful data preparation, you can run APM and tie the images together through a mathematical model. Then you can review the
results. This section explains how to correctly interpret the results,
identify any problems, and how to resolve them.
Visual Inspection
Visual inspection in the workstation is the most reliable method to verify
results. Use the Swipe tool on overlaying images to inspect them for
proper alignment. A well-aligned set of images will swipe smoothly
without sudden visual interruption, except where there are real changes
(for example, new buildings) or shadows.
Follow-up Actions

When your analysis of the results points to problems either in APM or modeling, refer to the proper sections of this chapter for specific tips for improvement:
• Use the Preview Output option on the Input Image context menu
to view the results of the model before calibrating or resampling the
imagery. While in the preview mode, you can continue to delete
GCPs and resolve the model. Also, whenever you select Preview
Output again, the model will be recomputed and the viewer updated
appropriately. This avoids having to return to the Point Review
mode.
• Using very low resolution elevation data with the rigorous models may result in a shearing effect in the output imagery. This is due to the difference in the resolution of the input image and the elevation data.
• It saves time to turn off the display of a large number of tie points in
the Overview in the workstation.
Using the IMAGINE AutoSync Wizards

This section provides some helpful tips when using the IMAGINE AutoSync wizards.
• After you determine a good workflow, you can make batching easier
by creating a template IMAGINE AutoSync .lap file with the proper
settings but no images in the workstation. Then load the template
.lap file in the wizard and add the large datasets.
IMAGINE AutoSync Workflows

IMAGINE AutoSync supports three main types of workflows. Use the workflow that is suitable to the nature of the data and your applications.
Georeferencing Workflow
Use the georeferencing workflow if you know that one input image is
clearly of better accuracy, and both images are georeferenced. For
example, use the georeferencing workflow if you have a database of
high-accuracy images and you want to introduce another
georeferenced image of lesser quality to the database.
General IMAGINE AutoSync Tips and Hints

Some general tips and hints for using IMAGINE AutoSync include:
• Before using APM, confirm that the Initial Pyramid Layer Number
is set to 1 in Image Files (General) category of the Preference
Editor. This retains the largest pyramid layer when computing
pyramid layers to ensure point matching accuracy.
• When resampling, ensure the output cell size is reasonable for the
images. The IMAGINE AutoSync defaults may not be suitable for
your application.
Summary

When properly used, IMAGINE AutoSync is a powerful tool for fast
image rectification with a tremendous saving of manual labor. This is
achieved by a streamlined workflow, user-friendly workstation
environment, a state-of-the-art automatic point matching engine, and a
wide selection of intelligent modeling methods.
The final output from IMAGINE AutoSync is the cumulative result of the
workflow you select, the data quality, APM engine usage (parameter
settings), and the model selected. To ensure the best results, you
should make careful and judicious decisions on these factors, starting
with the data preparation, and proceeding with the steps as outlined in
the sections of this chapter.
As with any sophisticated system, using IMAGINE AutoSync requires
that you have a basic understanding of the various components of the
embedded technologies. The more knowledge you have with regard to
the data and the internal working of IMAGINE AutoSync, the better your
chance of success.
General Guidelines

Some general guidelines you should follow when using IMAGINE AutoSync include:
1. Start with careful data preparation to ensure that you obtain the
best data available.
3. Select the most accurate model for rectification and utilize the provided metadata. Understand the limitations of each model and troubleshoot accordingly.
4. Follow the tips and hints. This will help you avoid frustration
caused by improper use.
Using the Edge Matching Wizard

In this section, you use the IMAGINE AutoSync Edge Matching wizard to align two images so that features in the overlapping area match up.
Table 1: Edge Match Image Data Set
These data files are air photo images of the Oxford, Ohio area.
You must have ERDAS IMAGINE running.
Using the Input tab

In the Input tab, you will add the images to be edge matched. IMAGINE AutoSync will edge match neighboring images, so input image order in the CellArray is important.
5. Click Next> to continue to the APM Strategy tab in the Edge Matching
Wizard.
Using the APM Strategy tab

In the APM Strategy tab, you can adjust the algorithm settings that control the placement of automatically generated tie points in your images. You can also select which input image layer to use to achieve a better point matching result.
1. Accept the default settings in the APM Strategy tab. Make sure that
Defined Pattern is selected.
2. Click Next> to continue to the Edge Match Strategy tab in the Edge
Matching Wizard.
Using the Edge Match Strategy tab

In the Edge Match Strategy tab, you can select a refinement method and choose to apply the refinement to the overlapping area only or the whole image.
1. In the Edge Match Strategy tab, click the Refinement Method list and
select Linear Rubber Sheeting.
3. In the Edge Match Strategy tab, in the Buffer Around the Overlapping
Area (pixels): field, keep the default of 180.
Using the Projection tab

In the Projection tab, you can set a projection for your output images. You can set it to the same projection as the corresponding input image or to another specified projection.
NOTE: The Output Projection fields will be greyed out in the Projection
tab until you select the Resample geocorrection method in the Output
tab.
2. Click Next> to continue to the Output tab in the Edge Matching Wizard.
Using the Output tab

In the Output tab, you can specify the properties for your output images, including selecting the geocorrection method and specifying names for the output files and summary report.
3. Accept the default settings in the Resample Settings dialog. Make sure
the Cubic Convolution resample method is selected.
5. In the Output tab, click the Set Output File Names... button.
The Output File Names dialog opens.
6. In the Output File Names dialog, click the File Selector icon to
select a default output directory of your choice.
7. In the Default Output File Name Suffix field, enter a default file name
suffix of your choice, or use the default _output.
9. In the Output tab, make sure the Generate Summary Report checkbox
is selected and enter a name of your choice for the HTML summary
report. You can also click the File Selector icon to select a directory
of your choice.
10. In the Output tab, click Save to save the project. A File Selector opens,
and you can save the project to a directory of your choice.
12. Click OK in the AutoSync Job status dialog when the operation is
finished.
NOTE: Edge matching can take several minutes to run, based upon
your hardware capabilities and the size of the image files.
Resampling
Resampling is the process of calculating the file values for the
rectified image and creating the new file. All of the raster data layers
in the source file are resampled. The output image has as many
layers as the input image.
ERDAS IMAGINE provides these widely-known resampling
algorithms:
• Nearest Neighbor
• Bilinear Interpolation
• Cubic Convolution
• Bicubic Spline
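To illustrate the idea, here is a minimal sketch of one of these methods, bilinear interpolation, in which each output value is a distance-weighted average of the four nearest input pixels (illustrative only, not the ERDAS implementation):

```python
def bilinear(image, x, y):
    """Sample `image` (a list of rows) at fractional pixel position (x, y)
    by weighting the four surrounding pixel values."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * image[y0][x0]
            + dx * (1 - dy) * image[y0][x0 + 1]
            + (1 - dx) * dy * image[y0 + 1][x0]
            + dx * dy * image[y0 + 1][x0 + 1])

# Halfway between four pixels, the result is their average:
value = bilinear([[0, 10], [20, 30]], 0.5, 0.5)  # -> 15.0
```

Nearest Neighbor instead copies the closest input pixel unchanged, while Cubic Convolution and Bicubic Spline weight a larger 4 x 4 neighborhood.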
Calibration
Instead of creating a new, rectified image by resampling the original
image based on the mathematical model, calibrating an image only
saves the mathematical model into the original image as a piece of
auxiliary information. Calibration does not generate new images, so
when the calibrated image is used, the math model comes into play
as needed.
For example, if you want to see the calibrated image in its rectified
map space in a Viewer, the image can be resampled on the fly based
on the math model, by selecting the Orient image to map system
option in the Select Layer To Add dialog.
A major drawback to image calibration is that processes involving the calibrated image are slowed down significantly if the math model is complicated. One minor advantage to image calibration is that it uses less disk space and leaves the image’s spectral information undisturbed.
1. Open a 2D View.
3. Click the Raster Options tab at the top of the Select Layer To Add:
dialog.
5. Click the File tab at the top of the Select Layer To Add: dialog.
6. In the Select Layer To Add: dialog under Filename, select the output
images from the directory in which you saved them.
3. Check Auto Mode in the Viewer Swipe dialog, and type 500 for the
Speed. You can watch as the swipe tool slowly works its way over the
images allowing you to evaluate the quality. Experiment with both
Vertical and Horizontal direction and different speeds.
View Summary Report

Once you have finished edge matching the images, you can view the HTML summary report to review information about the error, tie points, and so forth.
Using the AutoSync Workstation

In this section, you use the georeference workflow in the IMAGINE AutoSync Workstation to georeference a raw Landsat TM image of Atlanta, Georgia, using a SPOT panchromatic image of the same area. The raw Landsat TM image does not have any map information, and the SPOT image is rectified to the State Plane map projection.
This section explains the steps for using the georeference workflow
in the workstation to georeference a raw image (an image without
any map information). When georeferencing a rectified image, you
do not need to manually collect tie points before running APM.
• run APM
4. Click OK.
The Create New Project dialog opens.
[Create New Project dialog: enter a project name, click to select the
Resample geocorrection method, and click to open the Resample Settings
dialog]
6. In the Project File (*.lap) field, enter a project file name of your choice
or click the File Selector icon.
9. Accept the default settings in the Resample Settings dialog. Make sure
the Cubic Convolution resample method is selected.
11. In the Create New Project dialog, in the Default Output Directory: (*)
field, click the File Selector icon to select a default output directory of
your choice.
12. In the Default Output File Name Suffix field, enter a default file name
suffix of your choice, or keep the default _output.
13. In the Create New Project dialog, make sure the Generate Summary
Report checkbox is selected. The project name in the Project File field
is used as the default summary report name, but you can also click
the File Selector icon to select a different name and directory of your
choice.
14. If you are using SPOT DIMAP for input and you selected an imagery.tif
file for the input image file name, you can run APM without manually
measured points.
The IMAGINE AutoSync Workstation includes the Workstation toolbar,
Project Explorer, Viewer panes, GCP toolbar, CellArray, and status bar.
Add Input Image After you have created the IMAGINE AutoSync project, the next step is
to add the input image you want to georeference.
• In the IMAGINE AutoSync toolbar, click the Open Input Images icon.
• Select File -> Add Images -> Input Images... from the menu bar
2. In the Select Images To Open dialog under Filename, click the file
tmAtlanta.img.
This file is a Landsat TM image of Atlanta that has not been rectified.
Add Reference Image After you have added an input image, the next step is to add an image
to reference against the input image.
• Select File -> Add Images -> Set Reference Image... from the
menu bar
2. In the Select Images To Open dialog under Filename, click the file
panAtlanta.img.
This file is a SPOT panchromatic image of Atlanta. This image has been
georeferenced to the State Plane map projection.
Collect Manual Tie Points Once you have loaded the input and reference images in the IMAGINE
AutoSync Workstation, you can manually collect tie points.
3. To make the input image tie points easier to see in the viewer on the
left, right-click in the Color column to the right of Point ID in the first row
of the CellArray and select the color Yellow. Repeat this for each tie
point in the CellArray.
4. To make the reference image tie points easier to see in the viewer on
the right, right-click in the Color column to the right of Y Input in the first
row of the CellArray and select the color Yellow. Repeat this for each
tie point in the CellArray.
5. In the Main View pane of the input image, click a location to collect a tie
point.
The point you have created is labeled as 1 in the Main View pane and
its X and Y inputs are listed in the CellArray. Also notice that the input
image icon in the Project Tree View now has a green border since it now
has tie points.
6. In the Main View pane of the reference image, click the same location
to collect a tie point.
7. Collect at least three manual tie points in both the input and reference
images.
Also, make sure you scatter your tie points around the images so
they are not all concentrated in one place.
Try to collect tie points that are close to each of the four corners of
the images.
If you are using SPOT DIMAP for input and you selected an
imagery.tif file for the input image file name, you can run APM
without manually measured points.
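The reason at least three tie points are needed can be shown with a little arithmetic: a first-order (affine) transformation has six unknown coefficients, and each tie point contributes two equations, so three non-collinear points determine the model exactly (additional points allow a least-squares fit and error reporting). A pure-Python sketch with hypothetical tie points; this is not the AutoSync API:

```python
# Why at least three tie points: a first-order (affine) transformation
#   x' = a*x + b*y + c,  y' = d*x + e*y + f
# has six unknowns, and each tie point supplies two equations, so three
# non-collinear points determine it exactly.

def solve3(m, v):
    """Solve a 3x3 linear system m*x = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    result = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        result.append(det(mc) / d)
    return result

def fit_affine(input_pts, ref_pts):
    """Fit the six affine coefficients from exactly three tie points."""
    m = [[x, y, 1.0] for x, y in input_pts]
    return (solve3(m, [x for x, _ in ref_pts]),
            solve3(m, [y for _, y in ref_pts]))

# Hypothetical tie points: a pure translation by (+100, -50).
inp = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ref = [(100.0, -50.0), (110.0, -50.0), (100.0, -40.0)]
(a, b, c), (d, e, f) = fit_affine(inp, ref)
print(a, b, c)  # -> 1.0 0.0 100.0
```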
Run APM After collecting several tie points in the input and reference image, the
next step is to run automatic point matching (APM) to automatically
generate more control points for your images.
Before using APM, confirm that the Initial Pyramid Layer Number
is set to 1 in the Image Files (General) category of the Preference
Editor. This retains the largest pyramid layer when computing
pyramid layers, which ensures point matching accuracy.
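As a rough sketch of the pyramid idea behind that note (an illustration only, not ERDAS's actual implementation), each pyramid layer halves the resolution of the previous one, so layer 1 is the largest reduced layer and keeps the most detail for point matching:

```python
# Rough sketch of an image pyramid (illustration only, not ERDAS's
# implementation): each layer halves the resolution of the one before it.
# Layer 1 is the largest reduced layer, which is why starting APM there
# preserves the most detail for point matching.

def build_pyramid(rows, cols, min_size=64):
    """Return (layer_number, rows, cols) for successive 2x reductions."""
    layers = []
    level = 1
    while rows // 2 >= min_size and cols // 2 >= min_size:
        rows, cols = rows // 2, cols // 2
        layers.append((level, rows, cols))
        level += 1
    return layers

print(build_pyramid(4096, 4096)[0])    # -> (1, 2048, 2048)
print(len(build_pyramid(4096, 4096)))  # -> 6
```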
Preview Output Image After you run APM, you can preview the output image to make sure you
are satisfied with the results before resampling or calibrating.
2. In the GCP toolbar, type 2 in the error threshold text box, or use the
nudgers to the right of the field.
3. In the GCP toolbar, click the Select GCPs with Error Threshold icon.
The tie points with an error higher than 2 are highlighted in the
CellArray. Note the Total Points value and the Selected value, located
to the right of the Select GCPs with Error Threshold icon.
Total Points - the total number of manual GCPs and APM GCPs.
Selected - the number of GCPs you have selected.
4. Click the Drive To icon to step through the selected points in the
CellArray.
As you click through the points, the points in the Viewers will be
highlighted with a box.
5. When you find a point with a high error, click the Delete GCP icon.
The selected point is deleted from the Viewers and the CellArray.
6. After you delete all the points with a high error, right-click on the input
image (tmAtlanta.img) in the Project Explorer Tree View and select
Preview Output to see the current output.
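The error-threshold selection in the steps above can be sketched as a simple filter: compute each tie point's RMS error from its X and Y residuals and flag the points above the threshold. The point IDs and residual values below are hypothetical:

```python
import math

# Sketch of the error-threshold selection: each tie point's RMS error is
# computed from its X/Y residuals (hypothetical values), and points above
# the threshold are flagged for review or deletion.

def select_high_error(points, threshold=2.0):
    """points: {point_id: (x_residual, y_residual)} -> IDs above threshold."""
    return [pid for pid, (rx, ry) in points.items()
            if math.hypot(rx, ry) > threshold]

tie_points = {1: (0.3, 0.4), 2: (1.8, 1.1), 3: (2.5, 1.9), 4: (0.1, 0.2)}
bad = select_high_error(tie_points)
print("Total Points:", len(tie_points), "Selected:", len(bad))
```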
Review Image Map Data The next step is to review the image map data
for the input and reference images. Reviewing the map and projection
information helps you determine whether you want the output image to
have the same projection as the reference image.
[ImageInfo dialog: click to select panAtlanta.img]
Note the information in the Map Info section and that the Projection
Info section shows that the map is georeferenced to State Plane.
3. When you are finished, select File -> Close in the ImageInfo dialog.
[ImageInfo dialog: click to select tmAtlanta.img]
Note the information in the Map Info section and that the Projection
Info section shows that this is a raw image with no projection
information. Therefore, for this chapter, use the projection from
the reference image (panAtlanta.img) for the output image.
Resample Output Image Resampling is the process of calculating the file values for the rectified
image and creating the output file. All of the raster data layers in the
source file are resampled. The output image will have as many layers
as the input image.
You can change the resample settings in the Output tab in the
IMAGINE AutoSync Project Properties dialog. You may want to
experiment with changing the resample settings later, but this
chapter uses the default settings.
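As an illustration of what resampling computes, here is a minimal sketch of one of the available methods, Bilinear Interpolation, where each output value is a distance-weighted average of the four nearest input pixels (Cubic Convolution, the default used in this chapter, works similarly but over a 4x4 neighborhood):

```python
# Minimal sketch of Bilinear Interpolation resampling: the output value is
# a distance-weighted average of the four nearest input pixels.

def bilinear(grid, x, y):
    """Sample grid (a list of rows) at fractional pixel position (x, y)."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x0 + 1] * fx
    bot = grid[y0 + 1][x0] * (1 - fx) + grid[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

band = [[10, 20],
        [30, 40]]
print(bilinear(band, 0.5, 0.5))  # -> 25.0
```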
Verify Output Image Once the output image is created, you can use the Workstation to
perform the output image verification. You can verify that the input
image (tmAtlanta.img) has been correctly georeferenced to the
reference image (panAtlanta.img) by visually checking that they
conform to each other using the Viewer Blend/Fade, Viewer Swipe, or
Viewer Flicker verification tools.
2. Select Auto Mode in the Viewer Blend/Fade dialog, and type 500 for
the Speed. You can watch as the tool slowly blends the images,
allowing you to evaluate the quality. Experiment with different
speeds, or use the slider to blend and fade the images.
1. To perform visual verification using the Viewer Swipe tool, click the
Start Swipe Tool icon on the IMAGINE AutoSync toolbar.
The Viewer Swipe dialog opens.
1. To perform visual verification using the Viewer Flicker tool, click the
Start Flicker Tool icon on the IMAGINE AutoSync toolbar.
The Viewer Flicker dialog opens.
2. Check Auto Mode in the Viewer Flicker dialog, and type 500 for the
Speed. You can watch as the flicker tool switches between the top and
bottom images, allowing you to evaluate the quality. You can also click
Manual Flicker to quickly switch between the images. Experiment with
different speeds.
View Summary Report You can view the HTML summary report to review
information about the error, tie points, and so forth.
Index

A
APM, Automatic Point Matching 4, 41
AutoSync Workstation 32
    Create New Project dialog 34
    GCP toolbar 38
    Georeference workflow steps 33
    Project Properties dialog 48
    Resample Settings dialog 34
    Viewer Blend/Fade tool 50
    Viewer Flicker tool 51
    Viewer Swipe tool 50
    Workstation Startup dialog 33

B
Bicubic Spline 30, 35
Bilinear Interpolation 30, 35

C
Calibration 30
Cubic Convolution 30, 35

D
Data
    air photos 23
    Landsat multispectral 32
    raw 32
    SPOT panchromatic 32
Direct Linear Transform 16

E
Edge Matching Wizard 23
    APM Strategy tab 25
    Edge Match Strategy tab 26
    Input tab 24
    Output File Names dialog 29
    Output tab 28
    Projection tab 27
    Refinement method list 27
    Resample settings dialog 29
Edge Matching workflow 20
Error standard deviation results 41, 43
Example data vii

F
File extension (*.lap) 34

G
Georeference workflow 20, 32

M
Map information 45
Map projection
    State Plane 32
Map projection, set 48

N
Nearest Neighbor 30, 35

P
Polynomial transformation 14

R
Rational Polynomial Coefficients 15
Raw imagery workflow 20
Resample 30, 48
    Bicubic Spline 30, 35
    Bilinear Interpolation 30, 35
    Cubic Convolution 30, 35
    Nearest Neighbor 30, 35
Review points 43
Rigorous Orbital Pushbroom transformation 15
RMSE analysis 18
RMSE results 41
Rubber sheeting transformation 14

S
Summary report 32, 35, 52

T
Tie points, analysis 18
Tie points, collect 38
Tie points, generate automatically 4, 41

V
Verify
    Output image 49
    Viewer Blend/Fade tool 50
    Viewer Flicker tool 51
    Viewer Swipe tool 31, 50