1. Introduction
Raw remote sensing (RS) data contain such significant geometric distortions that they
cannot be fused with any other data (RS, topographic, cartographic, semantic, etc.) in a
geographic information system (GIS). As the first step of multi-source, multi-format
data fusion for geomatics applications, geometric and radiometric processing must be
adapted to the nature and characteristics of each RS data set in order to keep the best
information in the fused products. Each raw data set must be separately, or preferably
simultaneously, converted into an ortho-image so that each component of the data set can
be registered, compared, fused, etc. in a GIS. The ortho-rectification process is
consequently a large data fusion process in itself, which includes data from the platform,
the sensor, the Earth, the map projection and the GIS.
One must admit that since the 1970s, the drastic evolution of on-board systems, with large
scientific and technological progress, has introduced new needs and requirements for
data fusion, such as these non-exhaustive examples: increased number and enhanced
*Email: thierry.toutin@ccrs.nrcan.gc.ca
Table 1. Description of error sources for the two categories, the Observer and the Observed, with the
different sub-categories.
have to be taken into account because the terrain and most of GIS end-user applications
are generally represented and performed, respectively, in a topographic map space and not
in a referenced ellipsoid. The map deformations are logically included in the distortions
of the Observed.
Previous studies made a second-level categorisation into low-, medium- and high-
frequency distortions (Friedmann et al. 1983), where 'frequency' is determined relative
to the image acquisition time. Examples of low-, medium- and high-frequency
distortions are orbit variations, Earth rotation and local topographic effects, respectively.
While this categorisation was suitable in the 1980s when there were very few RS and digital
systems, today, with so many different acquisition systems, it is no longer acceptable
because it differs with each acquisition system. For example, attitude variations are a high-
frequency distortion for QuickBird or airborne push-broom scanners, a medium-frequency
distortion for SPOT-HRV and Landsat-ETM+, a low-frequency distortion for Landsat-
MSS, but no distortion at all for MERIS. As an example, an assessment of the influences of
satellite viewing angle, Earth curvature, relief height, geographical location and digital
elevation model (DEM) characteristics on planimetric displacement was performed on
high-resolution (HR) satellite imagery (Zhang et al. 2001).
The geometric distortions and their error sources of Table 1 are predictable and
generally well-understood. Some of these distortions, especially those related to the
instrumentation, are systematic and generally corrected at ground receiving stations or by
the image vendors. Others, for example those related to the atmosphere, are not taken into
account and corrected because they are specific to each acquisition time and location while
6 T. Toutin
information on the atmosphere is rarely available. They are also geometrically negligible
for low-to-medium resolution images.
Figure 1. Orbital (left) and attitude (right) parameters defining the orbit and the satellite on its orbit
as a function of time. Courtesy and copyright Serge Riazanoff, VisioTerra, 2009.
. its osculatory orbit (Figure 2): six parameters related to the instantaneous orbit
(semi-major axis, eccentricity, inclination, ascending-node longitude, perigee
argument, mean anomaly).
Depending on the acquisition time (used as a 7th parameter) and the size of the image, all
these parameters are time-dependent due to the orbital perturbations and thus generate
a range of non-systematic distortions. These non-systematic, time-dependent
distortions are not predictable and must be evaluated for each image from satellite
tracking data and/or ground-control information. Some effects of these distortions
include:
. The platform altitude variation (Z-axis) is the most critical among the
position parameters. It can change the pixel spacing in the across-track direction
while X- and Y-parameters generate only a translation in the sensed ground area.
The SAR platform altitude will alter the SAR incidence angles and thus the
viewing geometry. In addition, X- and Z- variations of the platform will generate
some differential shifts in the range measurement between lines. However, the
altitude variation of the satellite (around 8–10 m/s for a near-Earth orbit at around
800 km altitude) is not too significant over the time required to acquire a full scene
(5–10 s for high- to medium-resolution satellite sensors): it only represents a 1/8000
relative pixel variation. For SPOT5 panchromatic super mode (12,000 pixels with
2.5 m spacing), it generates 0.3 mm variation in pixel spacing, which cumulatively
generates around 2 m error (0.0003 m per pixel multiplied by 6000 pixels) at the
edge of the image.
. The platform velocity variations can change the line spacing or create line gaps/
overlaps for optical data and Doppler variations for SAR data. Variations in
spacecraft velocity only cause distortion in the along-track direction.
Figure 2. Description of a satellite osculatory orbit and its approximation by a Keplerian ellipse.
X, Y, Z are the positions in a geocentric reference frame system. I is the orbit inclination; Ω is the
longitude of the ascending node (N); ω is the argument of the perigee (P); (ω + v) is the argument
of the satellite; and r is the distance from the Earth centre (O) to the satellite.
. The platform attitude variation (Figure 3) can change the orientation and the
shape of VIR images, but it does not affect SAR image geometry. The roll is the
rotation around the flight vector (X-axis, Figure 3, right), hence in a 'wing down'
direction, and its variation causes lateral shifts and scale changes in the across-
track direction. The pitch is the vertical rotation of the platform, in the 'nose up'
plane, and its variation results in scan-line spacing changes. The yaw is the
rotation around the vertical axis, and its variation changes the orientation of each
scanned line, resulting in a skew between the scan lines. The variations in the
platform attitude can be severe in terms of location on the ground (absolute offsets
of hundreds of metres) over the time required to scan a full scene (5–10 s), and are
significant when the variations are sudden over a few lines (relative offsets of tens of
metres within milliseconds).
Figure 3. Example of SPOT attitude variations (left part) and its impact on image geometry (right
part): pitch (left), roll (centre) and yaw (right).
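The orders of magnitude quoted in the platform bullets above can be reproduced with a short back-of-the-envelope script. The proportional altitude model and the flat-Earth roll model below are illustrative assumptions, not the rigorous sensor geometry; the input figures (800 km orbit, ~10 m/s altitude drift, 5–10 s scene time, 12,000-pixel SPOT5 line at 2.5 m spacing) are those given in the text:

```python
import math

def edge_error_from_altitude_drift(altitude_m, drift_m_per_s, scene_time_s,
                                   pixels_per_line, pixel_spacing_m):
    """Planimetric error accumulated at the image edge when the platform
    altitude drifts during acquisition (simple proportional model)."""
    relative_change = drift_m_per_s * scene_time_s / altitude_m
    spacing_change_m = relative_change * pixel_spacing_m  # ~0.3 mm for SPOT5
    # The error accumulates from the nadir track out to the edge (half a line)
    return spacing_change_m * pixels_per_line / 2

def roll_ground_shift(altitude_m, roll_deg):
    """Across-track ground shift caused by a roll variation
    (flat-Earth, near-nadir approximation)."""
    return altitude_m * math.tan(math.radians(roll_deg))

# SPOT5 panchromatic supermode: around 2 m of error at the image edge
edge_err = edge_error_from_altitude_drift(800_000, 10.0, 8.0, 12_000, 2.5)
# A 0.01 degree roll at 800 km altitude already shifts the ground trace
# by roughly 140 m, in line with the offsets of hundreds of metres above
roll_err = roll_ground_shift(800_000, 0.01)
```

The same first-order reasoning extends to pitch (along-track line spacing) and yaw (scan-line skew), with the swath half-width replacing the altitude as the lever arm.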
the instantaneous field of view (IFOV) for VIR sensors. The range gate delay
(timing) for SAR sensors is corrected in the SAR processor. Uncertainties in
the focal length change the pixel size, and those in the principal point cause a systematic
shift of the scan lines and of the full image. The lens distortion causes imaged
positions to be distorted along radial/tangential lines from the principal point.
This distortion, which depends on the detector position in the focal plane, has
been corrected for HR sensors (below 10-m resolution) and for the pre-processed/
geo-referenced images. Information is sometimes included in the metadata
(e.g. SPOT5).
. The panoramic distortion, in combination with the Earth curvature and the
topographic relief, changes the ground pixel sampling along the swath for optical
and SAR data. Panoramic distortion occurs for large-swath sensors (MERIS,
MODIS, ScanSAR, etc.). As the sensor scans across each line, the distance from
the sensor to the ground increases further away from the centre of the swath, and
the sensor scans a larger area as it moves closer to the edges. The further away from
nadir an object is, the larger the compression. This effect results in the compression of
image features at points away from the nadir and is called tangential scale
distortion. However, it is the reverse for SAR: the compression of image features
increases from the far to the near edge because the ground range resolution is reduced at
high incidence angles. As an example, when the SAR resolution is 10 m, the ground
range resolution will be 29.0 and 10.1 m at 10° and 70° incidence angles,
respectively. This compression still remains for images in the slant-range geometry
but is taken into account in the SAR processor when images are generated in the
ground-range geometry.
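The slant-to-ground projection behind these SAR figures can be sketched as below. The standard first-order relation, ground resolution = slant resolution / sin(incidence), is used; the 20° and 70° angles in the example are illustrative, and exact published values also depend on the sensor and processing:

```python
import math

def ground_range_resolution(slant_res_m, incidence_deg):
    """First-order slant-to-ground projection of the range resolution: the
    ground cell shrinks as the incidence angle increases, which is why SAR
    image features are more compressed towards the near range."""
    return slant_res_m / math.sin(math.radians(incidence_deg))

# With a 10 m slant-range resolution, the ground-range cell is several
# times coarser at shallow incidence than at steep incidence
near = ground_range_resolution(10.0, 20.0)  # near range, low incidence
far = ground_range_resolution(10.0, 70.0)   # far range, high incidence
```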
The main distortions related to the non-imaging sensors can be caused by the
clock synchronicity between all instruments and by the variations or drift of the time base of
an instrument. Because time is the only parameter linking all the
instruments, it can produce different errors in their data fusion depending on the measuring
instrument. Generally, the corrections, when possible, are performed at ground
receiving stations.
where X, Y, Z are the terrain or cartographic coordinates; i, j, k are the integer increments;
and m, n and p are integer values, generally comprised between 0 and 3, which define the
degree of the polynomial functions.
Each 2D 1st-, 2nd- and 3rd-degree polynomial function will then have 3, 6 and
10 unknown terms, respectively. Each 3D 1st-, 2nd- and 3rd-degree polynomial function will have
4, 10 and 20 unknown terms. The 1st-degree polynomial functions are also called affine
transformations. Each 3D 1st-, 2nd- and 3rd-degree RF will have 8, 20 and 40 unknown
terms. In fact, the 3D RFs are extensions of the collinearity equations (Section 3.3),
which are equivalent to 1st-degree RFs. Depending on the imaging geometry in each axis
(flight and scan), the degree of the polynomial functions (numerator and denominator for
RFs) can be different, and/or specific terms, such as XZ for 2D or XZ, YZ^2 or Z^3, etc.
for 3D, can be selectively omitted from the polynomial functions when these terms cannot be
related to any physical element of the image acquisition geometry. These 'intelligent'
polynomial functions then better reflect the geometry in both axes and reduce the over-
parametrisation and the correlation between terms. Okamoto (1981, 1988) already applied
this reduction of terms to one-dimensional (1D) central perspective photographs and line-
scanners, respectively. Since the 2D polynomial functions applied in the 1970s–1980s as the
simplest solution are fortunately no longer used with the new EO sensors, only 3D functions are
now discussed.
(2) to compute the unknowns of all the polynomial functions in the normal way with GCPs
(terrain-dependent).
In some respects, the RFM can be considered a 'fused' math model because each of the
80 coefficients is not physically representative of a specific distortion but of combined
distortions, and can correct for part of a distortion or for more than one distortion.
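As a concrete sketch of such a 'fused' model, the following evaluates a 3rd-degree rational function (20 numerator + 20 denominator terms per image axis, i.e. 80 coefficients in total). The term ordering here is illustrative; real RPC files delivered with HR imagery fix their own ordering and also normalise the ground and image coordinates:

```python
def poly20(c, X, Y, Z):
    """A 20-term 3rd-degree 3D polynomial (illustrative term ordering)."""
    t = [1, X, Y, Z,
         X*Y, X*Z, Y*Z, X*X, Y*Y, Z*Z,
         X*Y*Z, X**3, X*Y*Y, X*Z*Z, X*X*Y,
         Y**3, Y*Z*Z, X*X*Z, Y*Y*Z, Z**3]
    return sum(ci * ti for ci, ti in zip(c, t))

def rfm(num_x, den_x, num_y, den_y, X, Y, Z):
    """Rational function model: image (x, y) from ground (X, Y, Z)."""
    return (poly20(num_x, X, Y, Z) / poly20(den_x, X, Y, Z),
            poly20(num_y, X, Y, Z) / poly20(den_y, X, Y, Z))
```

With unit denominators and only 1st-degree numerator terms, this reduces to the affine case mentioned earlier, which illustrates why the RFs generalise the simpler polynomial models.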
the Indian Space Research Organization (ISRO) with the Cartosat series and recently MDA
with Radarsat-2. This first approach was also tested under specific circumstances in an
academic environment with aerial photographs and spaceborne VIR HR images (SPOT,
Eros-A, Formosat-2, Cartosat, QuickBird, etc.) by computing the parameters of 1st- to 3rd-
degree RFs from already-solved 3D physical models (Tao and Hu 2001, Chen et al. 2006,
Bianconi et al. 2008), and recently with different spaceborne SAR images (Zhang et al.
2010). However, biases and random errors still exist after applying the RFs, depending on the
sensor and relief, and the results need to be post-processed with accurate user GCPs: (1)
the original RF parameters can be refined with linear equations (Lee et al. 2002,
Fraser et al. 2006), or (2) the errors can be corrected with 2D polynomial functions
(Fraser et al. 2002a,b, Tao and Hu 2002, Fraser and Hanley 2005, Wang et al. 2005,
Toutin 2006a, Tong et al. 2010 and more). The first solution achieves better results
than the second one.
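The simplest of the post-processing corrections cited above, the 0-degree 2D polynomial (a translation-only bias removal), can be sketched as below. Higher-degree corrections and the refinement of the RF parameters themselves follow the same least-squares logic with more unknowns; the function and variable names are illustrative:

```python
def bias_correction(gcp_measured, rf_predicted):
    """Mean image-space residual between GCP measurements and RF
    predictions; subtracting it removes the constant bias left by the
    vendor-supplied RFs (the 0-degree 2D polynomial correction)."""
    n = len(gcp_measured)
    bx = sum(m[0] - p[0] for m, p in zip(gcp_measured, rf_predicted)) / n
    by = sum(m[1] - p[1] for m, p in zip(gcp_measured, rf_predicted)) / n
    return bx, by

def apply_bias(xy, bias):
    """Shift an RF-predicted image coordinate by the estimated bias."""
    return xy[0] + bias[0], xy[1] + bias[1]
```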
Although each sensor has its own unique characteristics, one can draw generalities for
the development of 2D/3D physical models in order to correct fully all distortions
described previously. The physical model should mathematically model all distortions of
the platform, the sensor and the Earth as well as the deformations of the cartographic
projection. The general starting points to derive the mathematical functions of the 3D
physical model are generally:
(1) the well-known collinearity condition and equations (Bonneval 1972, Wong 1980)
for VIR images:
x = (-f) · [m11(X - X0) + m12(Y - Y0) + m13(Z - Z0)] / [m31(X - X0) + m32(Y - Y0) + m33(Z - Z0)],    (4)

y = (-f) · [m21(X - X0) + m22(Y - Y0) + m23(Z - Z0)] / [m31(X - X0) + m32(Y - Y0) + m33(Z - Z0)];    (5)

and (2) the Doppler and range equations for SAR images:

f = 2 (VS - VP) · (S - P) / (λ r),    (6)

r = |S - P|,    (7)

where f is the Doppler value; r is the range distance; S and VS are the sensor
position and velocity vectors; P and VP are the target point position and velocity
vectors on the ground; and λ is the radar wavelength.
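Numerically, the collinearity equations map a ground point into image coordinates once the perspective centre (X0, Y0, Z0), the focal length f and the rotation matrix m are known. The sketch below assumes the rotation matrix is already given (computing it from roll/pitch/yaw angles is standard but omitted for brevity):

```python
def collinearity(f, m, ground, centre):
    """Image coordinates (x, y) of a ground point (X, Y, Z) seen from the
    perspective centre (X0, Y0, Z0) with rotation matrix m (3x3, row-major),
    following the standard collinearity equations."""
    dX = [ground[i] - centre[i] for i in range(3)]
    denom = m[2][0] * dX[0] + m[2][1] * dX[1] + m[2][2] * dX[2]
    x = -f * (m[0][0] * dX[0] + m[0][1] * dX[1] + m[0][2] * dX[2]) / denom
    y = -f * (m[1][0] * dX[0] + m[1][1] * dX[1] + m[1][2] * dX[2]) / denom
    return x, y

# Vertical view (identity rotation) from 1000 m above the origin with a
# 0.1 m focal length: a point 100 m east maps 0.01 m off the image centre
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x, y = collinearity(0.1, identity, (100.0, 50.0, 0.0), (0.0, 0.0, 1000.0))
```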
The collinearity equations are valid for an instantaneous image or scanline acquisition,
such as photogrammetric cameras (LFC, MC) and VIR scanner sensors (SPOT, Landsat),
while the Doppler-range equations are valid for a SAR scanline. However, since the parameters
of neighbouring scanlines of scanners are highly correlated along the orbit, it is possible to
link the exposure centres and rotation angles of the different scanlines to integrate
supplemental information (Poli 2007), such as:
. the ephemeris and attitude data using celestial mechanics laws for satellite
images; or
. the global positioning system (GPS) and INS data for airborne images.
Considerable research has been carried out to develop robust and rigorous 3D physical
models describing the acquisition geometry related to different types of images (VIR and
SAR, low, medium and high resolution) and of platforms (spaceborne and airborne):
. for low/medium-resolution VIR satellite images (Wong 1975, Bähr 1976, Masson
d’Autume 1979, Sawada et al. 1981, Khizhnichenko 1982, Friedmann et al. 1983,
Guichard 1983, Toutin 1983, Konecny et al. 1986, 1987, Salamonowicz 1986,
Gugan 1987, Kratky 1987, 1989, Shu 1987, Westin 1990, Kornus et al. 2000,
Sylvander et al. 2000, Westin 2000, Poli 2007);
. for HR VIR satellite images (Cheng and Toutin 1998, Radhadevi et al. 1998,
Hargreaves and Robertson 2001, Bouillon et al. 2002, 2006, Chen and Teo 2002,
Toutin 2003a, 2006b, Poli 2007);
. for SAR satellite images (Gracie et al. 1970, Leberl 1978, 1990, Wong et al. 1981,
Curlander 1982, Naraghi et al. 1983, Guindon and Adair 1992, Tannous and
Pikeroen 1994, Toutin 1995);
. for VIR airborne images (Derenyi and Konecny 1966, Konecny 1976, 1979, Ebner
and Muller 1986) and
. for airborne SLAR/SAR images (La Prade 1963, Gracie et al. 1970, Konecny
1970, Leberl 1972, 1990, Hoogeboom et al. 1984, Toutin 1995) and many others
afterwards.
However, the geometric correction process can address each distortion one-by-one and
step-by-step, with a math function for each distortion, or simultaneously, with the fusion of
the different math functions into an integrated math model. The latter solution is better and
more precise because it reduces the cumulative errors and their propagation. Both
solutions were developed with physical models: the 'one-by-one' and 'step-by-step'
solutions with different math functions are generally applied at the ground receiving
station as independent processing boxes (platform, sensor, Earth and projection).
Processing a new orbit, a new sensor or a new image into a new ellipsoid and map
projection thus requires changing only the 'one distortion' or 'one step' box with the new
characteristics and parameters. Conversely, the 'fusion of the math functions' solution is
mainly applied by the users, with their own 'fused' math model, such as the collinearity
and the range-Doppler equations. However, the difficulty of developing the math model by
fusing all available data (orbit, attitude, sensor, focal lens, ground control, etc.) and their
associated math functions could have forced some to consider the 'step-by-step' solution.
International Journal of Image and Data Fusion 17
. map-oriented images, also called geocoded images (e.g. level 2A for SPOT, Geo
Standard for IKONOS, 1B2 for ALOS, SSG/SPG for Radarsat), corrected for the
same distortions as geo-referenced images but north oriented. For SAR images,
this level corresponds to the geocoded ellipsoid-corrected geometry. Generally,
less metadata related to the sensor and satellite are provided; most of the metadata are
related to the 2A processing and the ellipsoid/map characteristics.
is a function of different aspects: the method of collection, the sensor type and resolution,
the image spacing, the geometric model, the study site, the physical environment, GCP
definition and accuracy, and the final expected accuracy. Figure 5(a) shows examples of well-
defined GCPs and tools to extract image coordinates. For HR SAR images, metallic
structures (electrical poles, Figure 5b) are good candidates for GCPs.
Consequently, all the aspects of GCP collection have to be considered as a whole, and
not separately, to avoid too large discrepancies in accuracy between these different aspects.
For example, a differential GPS survey with decimetre accuracy is too accurate (and
expensive in remote areas) for processing SPOT data (3–10 m resolution), while road
intersections and 1:20,000–1:50,000 topographic maps are not accurate enough for processing
QuickBird images if sub-pixel (decimetre) accuracy is expected. The weakest aspect in
GCP collection, which is of course different for each study site and image, will thus be the
major source of error in the error budget and its propagation in the math model.
Furthermore, to ensure robustness and consistency in an operational environment, it is
safer to collect more GCPs than the minimum required by the math model used, and it is
better to have medium-accuracy GCPs than no GCPs at all in remote areas.
With 3D 1st-approach RF models provided in the metadata, 1–10 GCPs are needed to
remove the remaining errors with 2D polynomial functions (0–2nd degree) or to refine
the RF parameters. With 3D physical models, fewer GCPs (0–6) are generally required
per image. There is, however, no relative orientation between adjacent images because
the RFMs were computed independently by the image providers. When GCP accuracy is
worse than the pixel size, the number of GCPs should be increased, depending also on the final
expected accuracy (Savopol et al. 1994, Wang and Ellis 2005, Fraser et al. 2006). With
deterministic models, GCPs need only be acquired at the border of the image and
cover the full elevation range of the terrain, to avoid extrapolation in planimetry and
elevation, without requiring a regular distribution. On the other hand, since empirical models can
be sensitive to GCP distribution, number and accuracy, GCPs should be spread over
the full image(s) in planimetry, and also over the elevation range, to avoid large errors
between GCPs.
Figure 5. (a and b) Examples of well-defined GCPs and tools for image pointing and extracting their
image coordinates for HR optical and SAR images, respectively.
less than 15–20 (Toutin 2003c–e). There are a number of advantages to the
spatio-triangulation process, namely:
. the reduction of the number of GCPs;
. a better relative accuracy between the images;
. a more homogeneous and precise mosaic over large areas and
. a homogeneous GCP network for future geometric processing.
Whatever the number of images and the geometric models used, each GCP contributes
two observation equations for each image: an equation in X and an equation in Y. The
observation equations are used to establish the error equations for GCPs, TPs and ETPs.
Each group of error equations can be weighted as a function of the accuracy of the image
and cartographic data. The normal equations are then derived and resolved, and the
unknowns computed. In addition, for the 3D physical models, conditions or constraints on the
osculatory orbital parameters or other parameters (GPS/INS) can be added to the
adjustment to take into account the knowledge and the accuracy of the ephemeris or other
data, when available. They thus prevent the adjustment from diverging and they also filter
the input errors.
Since there are always redundant observations to reduce the input error propagation in
the geometric models, a least-squares adjustment is generally used. When the mathematical
equations are non-linear, which is the case for physical models and for 2nd- and higher-degree
empirical models, some means of linearisation (series expansions or Taylor series) must
be used. A set of approximate values for the unknown parameters in the equations must
thus be initialised:
. to zero for the empirical models, because they do not reflect the image acquisition
geometry; or
. from the osculatory orbital/flight and sensor parameters of each image for the
physical models.
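The linearisation-and-adjustment loop described above can be sketched generically: Taylor-expand the observation equations around the current parameter values, form and solve the normal equations, update, and iterate. This toy version uses a numerical Jacobian and plain Gaussian elimination, with no weighting or constraints, so it only illustrates the mechanics:

```python
def solve(A, b):
    """Solve the normal equations A x = b by Gaussian elimination."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def least_squares(residuals, params, iterations=5, eps=1e-6):
    """Linearised least-squares adjustment (Gauss-Newton) with a numerical
    Jacobian; residuals(p) returns one residual per observation equation."""
    p = list(params)
    for _ in range(iterations):
        r = residuals(p)
        # Jacobian columns by forward differences (the Taylor-series step)
        J = []
        for j in range(len(p)):
            q = list(p)
            q[j] += eps
            rq = residuals(q)
            J.append([(rq[i] - r[i]) / eps for i in range(len(r))])
        # Normal equations: (J^T J) dp = -J^T r
        N = [[sum(J[a][i] * J[b][i] for i in range(len(r)))
              for b in range(len(p))] for a in range(len(p))]
        g = [-sum(J[a][i] * r[i] for i in range(len(r)))
             for a in range(len(p))]
        dp = solve(N, g)
        p = [p[k] + dp[k] for k in range(len(p))]
    return p
```

With redundant observations, the residuals left after the final iteration are exactly the GCP residuals discussed in the results below.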
More information on least-squares methods applied to geomatics data can be obtained
in Mikhail (1976) and Wong (1980). The results of this processing step are:
. the parameter values for the geometric model used for each image;
. the residuals in X and Y directions (and Z if more than one image is processed) for
each GCP/ETP/TP and their root mean square (RMS) residuals;
. the errors in X and Y directions (and Z if more than one image is processed) for
each ICP if any, and their bias and RMS errors and
. the computed cartographic coordinates for each point, including ETPs and TPs.
When more GCPs than the theoretical minimum are used, the GCP
residuals reflect the modelling accuracy, while the ICP RMS errors reflect the final
accuracy, taking into account the ICP accuracy. When GCPs/ICPs are not accurate, their
errors are included in the computed RMS residuals/errors, respectively. By using
overabundant GCPs with physical models, the input data errors (image measurement and/
or map) do not propagate through the physical models but are mainly reflected in the GCP
residuals, due to the global adjustment. Consequently, it is 'normal and safe' with 3D
physical models to obtain RMS residuals of the same order of magnitude as the GCP
accuracy; the physical model itself will be more precise, and the internal accuracy of the
image(s) will be better than the RMS residuals.
4.4 Ortho-rectification
The last step of the geometric processing is the image rectification, preferably with DEM
(Figure 6). To ortho-rectify the uncorrected image into a map-referenced image, there are
two processing operations:
. a geometric operation to compute the cell coordinates in the uncorrected image
for each map image cell; and
. a radiometric operation to compute the intensity value or digital number (DN) of
the map image cell as a function of the intensity values of uncorrected image cells
that surround the previously computed position of the map image cell.
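A minimal sketch of these two operations follows, assuming the geometric model and DEM lookup are wrapped in a single hypothetical `inverse_map` callable that returns the (fractional) position of each map cell in the uncorrected image; the radiometric operation here uses bilinear interpolation of the four surrounding pixels:

```python
def ortho_rectify(src, inverse_map, out_rows, out_cols):
    """For each map-image cell: (1) geometric operation -- locate the cell
    in the uncorrected image via inverse_map (a stand-in for the 3D model
    plus DEM lookup); (2) radiometric operation -- interpolate a DN from
    the surrounding uncorrected pixels (bilinear kernel)."""
    rows, cols = len(src), len(src[0])
    out = [[0.0] * out_cols for _ in range(out_rows)]
    for r in range(out_rows):
        for c in range(out_cols):
            x, y = inverse_map(r, c)          # fractional source position
            i, j = int(x), int(y)
            if not (0 <= i < rows - 1 and 0 <= j < cols - 1):
                continue                      # cell falls outside the image
            dx, dy = x - i, y - j
            out[r][c] = ((1 - dx) * (1 - dy) * src[i][j]
                         + dx * (1 - dy) * src[i + 1][j]
                         + (1 - dx) * dy * src[i][j + 1]
                         + dx * dy * src[i + 1][j + 1])
    return out
```

Other radiometric kernels (nearest neighbour, cubic convolution, sin(x)/x, adaptive SAR filters) slot into the same loop, with the trade-offs discussed in the resampling section.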
Figure 6. Image rectification to project the uncorrected image into the ground reference system:
the geometric (left) and radiometric (right) operations.
Figure 8. Relationship between the DEM accuracy (in metres), the look angle (in degrees) of the
SAR image, and the resulting positioning error (in metres) generated on the SAR ortho-image.
The different boxes at the bottom represent the range of look angles for each Radarsat beam mode
(Toutin 1998).
distinguishable in the uncorrected images should also be distinguishable after the data fusion in the
ortho-rectified images. When the co-registration from the bundle adjustment can ensure
sub-pixel accuracy, the lowest grid spacing should be applied to keep the spatial integrity
of the highest resolution data. On the other hand, a geometric accuracy of one pixel or
more can generate colour artefacts and erroneous information extraction in the data fusion
due to mixed pixels. Preserving the spatial integrity of the highest resolution data thus
degrades the spectral integrity of the fused images. Consequently, the choice of the common
grid spacing for the data fusion should be determined as a function of the geometric accuracy,
to avoid mixed pixels, and of the information content to be extracted.
Since the 2D models do not use elevation information, the accuracy of the resulting
rectified image (without DEM) will depend on the image viewing/look angle and the
terrain relief (Figures 7 and 8). On the other hand, 3D models take the elevation into
account: a DEM is fused with the uncorrected image through the mathematical
model to create accurate ortho-rectified images. This rectification should then be called an
ortho-rectification. If no DEM is available, different altitude levels can be input for
different parts of the image (a kind of 'rough' DEM) to minimise the elevation distortion.
It is then important to have a quantitative evaluation of the DEM impact on the
rectification/ortho-rectification process, both in terms of elevation accuracy for the
positioning accuracy and of grid spacing for the level of detail. This last aspect is more
important with HR images because a DEM grid spacing that is poor when compared to the
image spacing could generate artefacts for linear features (wiggly roads or edges).
Figures 7 and 8 give the relationship between the DEM accuracy (including
interpolation in the grid) and the viewing and look angles, with the resulting positioning
error on VIR and SAR ortho-images, respectively. These curves were mathematically
computed with the elevation distortion parameters of a 3D physical model (Toutin 1995)
and can be simplified with the tangent of the VIR viewing or SAR look angles, respectively.
The combined influence of the viewing angle, the relief height and the DEM characteristics
was also assessed with HR satellite imagery (Zhang et al. 2001).
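The tangent simplification gives a quick first-order estimate of the elevation-induced positioning error; the full curves in Figures 7 and 8 also account for Earth curvature and other terms, so the values differ slightly:

```python
import math

def vir_error(elevation_error_m, viewing_angle_deg):
    """VIR first-order rule: planimetric error ~ dh * tan(viewing angle)."""
    return elevation_error_m * math.tan(math.radians(viewing_angle_deg))

def sar_error(elevation_error_m, look_angle_deg):
    """SAR first-order rule: planimetric error ~ dh / tan(look angle),
    reflecting the relief displacement of the side-looking geometry."""
    return elevation_error_m / math.tan(math.radians(look_angle_deg))
```

For instance, a 45-m DEM error at a 10° viewing angle gives about 8 m with this rule (the curve gives 9 m), and a 20-m elevation error at a 45° look angle gives 20 m, consistent with the F5 example below.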
Another advantage of these curves is that they can be used to find any third parameter
when the other two are known. For example (Figure 7), using a SPOT image acquired with a
10° viewing angle and a 45-m accurate DEM, the error generated on the ortho-image is 9 m.
Inversely, if 4-m geo-positioning accuracy is required for the ortho-image and only a 10-m
accurate DEM is available, the VIR image should be ordered with a viewing angle of less
than 20°. The same error evaluation can be applied to SAR data using the similarly
computed curves (Figure 8). As another example, if positioning errors of 60 and 20 m on
standard-1 (S1) and fine-5 (F5) ortho-images, respectively, are required, a 20 m elevation
error, which includes the DEM accuracy and the interpolation into the DEM, is
sufficient. For HR images (spaceborne or airborne), the surface heights (buildings, forest,
hedges) should be either included in the DTM to generate a digital surface model (DSM)
or taken into account in the overall elevation error. In addition, a DEM with inappropriate
grid spacing can generate artefacts with HR images acquired with large viewing
angles, principally over high-relief areas (Zhang et al. 2001).
spectral signature or pattern recognition analysis. Consequently, any process based on the
image radiometry should be performed before using interpolation or deconvolution
algorithms.
Figures 9 and 10 are examples of the application of different resampling kernels
applied to a WorldView-1 panchromatic mode image and a Radarsat-2 SAR ultra-fine mode
(U2) ground-range image (Toutin and Chénier 2009), respectively, during their ortho-
rectification process with a DEM. Sub-images (around 200 × 200 pixels) were resampled
with a factor of three to better illustrate the variations of the resampling kernels: the
Figure 7. Relationship between the DEM accuracy (in metres), the viewing angle (in degrees) of the
VIR image, and the resulting positioning error (in metres) generated on the ortho-image
(Toutin 1995).
WorldView and Radarsat resampled image pixels are then 0.15 and 0.5 m, respectively.
Letters A, B, C and D refer to different geometric resampling kernels (nearest neighbour,
bilinear, cubic convolution, sin(x)/x with a 16 × 16 window), respectively, and letters E and
F refer to statistical adaptive SAR filters (Enhanced Lee and Gamma with a 5 × 5 window),
respectively. For both VIR and SAR images, the nearest neighbour resampling kernel (A)
generates 'blocky' images, while the bilinear resampling kernel (B) generates fuzzy images
with the feeling of 'out-of-focus' images. One can also notice the step gradients, which are
reduced along the kernel sequence (from A to D). The best results are obtained with the
sinusoidal resampling kernels (C and D), but the true sinusoidal function (D) generates
sharper features, well representing the original circle shapes (Figure 9). The sin(x)/x kernel
with a 16 × 16 window should thus be favoured for optical images during the ortho-
rectification, even if the processing time is longer.
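For reference, the 1D weights of such a truncated sin(x)/x kernel can be generated as below; a 16-tap window applied separably in rows and columns gives the 16 × 16 kernel. Production resamplers usually add a taper window, omitted here for brevity:

```python
import math

def sinc_weights(frac_offset, taps=16):
    """Normalised weights of a truncated sin(x)/x interpolation kernel for
    a sample located frac_offset pixels past the window-centre tap."""
    centre = taps // 2 - 1                 # tap index nearest the sample
    w = []
    for k in range(taps):
        u = math.pi * (k - centre - frac_offset)
        w.append(1.0 if abs(u) < 1e-12 else math.sin(u) / u)
    total = sum(w)                         # normalise to preserve the DN mean
    return [wi / total for wi in w]
```

At zero fractional offset the kernel collapses to the nearest sample, which is why it preserves sharp features so well compared with the bilinear kernel.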
On the other hand, with the Radarsat-2 SAR image (Figure 10), all the white dots
(corresponding to houses) change from square shapes (A) to round shapes (B, C, D),
which do not correspond to the original geometry of the square houses. In addition, worm-
shaped artefacts (typical of large oversampling) are generated in homogeneous flat areas
around the houses and in the 'black' streets; these artefacts become more evident along the
sequence of A–D resampling kernels. Consequently, these three last geometric resampling
kernels (bilinear, cubic convolution and sin(x)/x) should not be used with SAR images.
The two adaptive filters (E and F) not only give a better image appearance than all the
geometric resampling kernels, because the SAR speckle is filtered at the same
time, but also keep the original feature shapes (even the neighbouring houses are better
separated and discriminated) without generating worm-shaped artefacts. These statistical
kernels, which are better adapted to SAR images even with a six-time oversampling of the SAR
resolution (3 m), should always be favoured during the ortho-rectification.
5. Concluding remarks
Since the launch of the first civilian RS satellite in the 1970s, the needs and requirements
for geometrically processing RS images have drastically changed due to the necessity of fusing
the different data. Furthermore, the fusion of multi-format data in a digital world requires
the highest accuracy possible so as to perform the final ortho-rectification of multi-sensor
images. Consequently, the full geometric correction process was evaluated here from a data
fusion perspective: from the different input RS data from the imaging instruments (optical,
SAR, air- and space-borne, panchromatic, multi-band and multi-wavelength), their metadata
extracted from the non-imaging sensors (GPS, INS, star trackers and others), and the
geographic and cartographic data (ground points, contour lines or DEMs, maps, GIS, etc.),
through the different geometric distortions and their associated math models to correct them,
to the final ortho-rectification, which fuses almost all the previous information into
a single product. The major advantage of this data fusion perspective is that the geometric
correction is considered as a whole procedure, instead of looking at, and addressing
separately and independently, the different steps of this full and complex procedure. The
users can also better manage all data and information (acquisition, fusion, processing, etc.)
within the whole procedure (evaluation of the intermediate steps, error tracking, etc.) as a
function of the expected products and the desired result accuracy. This will result in a more
appropriate and efficient exploitation by the users of their multi-sensor, multi-source,
multi-format data for the specific thematic applications in development.
Acknowledgement
Earth Sciences Sector (ESS) contribution number: 20100290.
References
Ahn, C.-H., Cho, S.-I., and Jeon, J.C., 2001. Orthorectification software applicable for IKONOS
high-resolution images: Geopixel-Ortho. Proceedings of IGARSS, 9–13 July Sydney,
Australia. Piscataway, NJ: IEEE, 555–557.
Al-Roussan, N., et al., 1997. Automated DEM extraction and ortho-image generation from SPOT
level-1B imagery. Photogrammetric Engineering and Remote Sensing, 63, 965–974.
Baetslé, P.L., 1966. Conformal transformations in three dimensions. Photogrammetric Engineering,
32, 816–824.
Bähr, H.P., 1976. Geometrische Modelle für Abtasteraufzeichnungen von Erkundungssatelliten.
Bildmessung und Luftbildwesen, 44, 198–202.
Baltsavias, E.P. and Stallmann, D., 1992. Metric information extraction from SPOT images and the
role of polynomial mapping functions. International Archives of Photogrammetry and Remote
Sensing, 29 (B4), 358–364.
Bannari, A., et al., 1995. A theoretical review of different mathematical models of geometric
corrections applied to remote sensing images. Remote Sensing Reviews, 13, 27–47.
Belgued, Y., et al., 2000. Geometrical block adjustment of multi-sensor radar images. Proceedings of
EARSeL workshop: fusion of earth data, 26–28 January 2000 Sophia-Antipolis, France. Nice:
SEE GréCA/EARSeL, 11–16.
Bianconi, M., et al., 2008. A new strategy for rational polynomial coefficients generation. In:
C. Jürgens, ed. Proceedings of the EARSeL joint workshop, 5–7 March, Bochum, Germany,
21–28.
Bonneval, H., 1972. Levés topographiques par photogrammétrie aérienne. In: Photogrammétrie
générale: Tome 3, Collection scientifique de l'Institut Géographique National. Paris: Eyrolles
Editeur.
Bouillon, A., et al., 2002. SPOT5 HRG and HRS first in-flight geometric quality results. Proceedings
of SPIE, vol. 4881A sensors, system, and next generation satellites VII, 22–27 September Agia
Pelagia, Crete, Greece. Bellingham, WA: SPIE, CD-ROM (Paper 4881A-31).
Bouillon, A., et al., 2006. SPOT5 HRS geometric performance: using block adjustment as key issue
to improve quality of DEM generation. ISPRS Journal of Photogrammetry and Remote
Sensing, 60 (3), 134–146.
Campagne, Ph., 2000. Apport de la spatio-triangulation pour la caractérisation de la qualité image
géométrique. Bulletin de la Société Française de Photogrammétrie et de Télédétection, 159,
66–26.
Centre National d'Études Spatiales (CNES), 1980. Le mouvement du véhicule spatial en orbite.
Toulouse, France: CNES, 1031.
Chen, L.-C. and Teo, T.-A., 2002. Rigorous generation of orthophotos from EROS-A high
resolution satellite images. International Archives of Photogrammetry and Remote Sensing and
Spatial Information Sciences (8–12 July, Ottawa, Canada. Ottawa, ON: Natural Resources
Canada) 34 (B4), 620–625.
Chen, L.-C., Teo, T.-A., and Liu, C.-L., 2006. The geometrical comparison of RSM and RFM for
FORMOSAT-2 satellite images. Photogrammetric Engineering and Remote Sensing, 72 (5),
573–579.
Cheng, P. and Toutin, Th., 1998. Unlocking the potential for IRS-1C data. Earth Observation
Magazine, 6 (3), 24–26.
Curlander, J.C., 1982. Location of spaceborne SAR imagery. IEEE Transactions on Geoscience and
Remote Sensing, 22, 106–112.
International Journal of Image and Data Fusion 31
Davis, C.H. and Wang, X., 2001. Planimetric accuracy of IKONOS 1-m panchromatic image
products. Proceedings of the ASPRS annual conference, 23–27 April, St Louis, MO.
Bethesda, MD: ASPRS, CD-ROM (unpaginated).
Derenyi, E.E. and Konecny, G., 1966. Infrared scan geometry. Photogrammetric Engineering, 32,
773–778.
Dowman, I. and Dolloff, J., 2000. An evaluation of rational functions for photogrammetric
restitution. International Archives of Photogrammetry and Remote Sensing (16–23 July,
Amsterdam, The Netherlands. Amsterdam: GITC) 33 (B3), 254–266.
Ebner, H. and Müller, F., 1986. Processing of digital three line imagery using a generalized model for
combined point determination. International Archives of Photogrammetry and Remote Sensing
(19–22 August Rovaniemi, Finland. Rovaniemi: ISPRS) 26 (B3), 212–222.
Escobal, P.R., 1965. Methods of orbit determination. Malabar, USA: Krieger, 1–479.
Fraser, C.S., Baltsavias, E., and Gruen, A., 2002b. Processing of IKONOS imagery for sub-metre
3D positioning and building extraction. ISPRS Journal of Photogrammetry and Remote
Sensing, 56, 177–194.
Fraser, C.S., Dial, G., and Grodecki, J., 2006. Sensor orientation via RPCs. ISPRS Journal of
Photogrammetry and Remote Sensing, 60 (3), 182–194.
Fraser, C.S. and Hanley, H.B., 2005. Bias-compensated RPCs for sensor orientation of high-
resolution satellite imagery. Photogrammetric Engineering and Remote Sensing, 71 (8),
909–915.
Fraser, C.S., Hanley, H.B., and Yamakawa, T., 2002a. Three-dimensional geopositioning accuracy
of IKONOS imagery. Photogrammetric Record, 17, 465–479.
Fraser, C.S. and Ravanbakhsh, M., 2009. Georeferencing accuracy of GeoEye-1 imagery.
Photogrammetric Engineering and Remote Sensing, 75 (6), 634–638.
Friedmann, D.E., et al., 1983. Multiple scene precision rectification of spaceborne imagery with few
control points. Photogrammetric Engineering and Remote Sensing, 49, 1657–1667.
Gracie, G., et al., 1970. Stereo radar analysis, US Engineer Topographic Laboratory, Report no.
FTR-1339-1, Ft. Belvoir, VA, USA.
Grodecki, J., 2001. IKONOS stereo feature extraction – RPC approach. Proceedings of the ASPRS
annual conference, 23–27 April St Louis, MO. Bethesda, MD: ASPRS, CD-ROM
(unpaginated).
Grodecki, J. and Dial, G., 2003. Block adjustment of high-resolution satellite images
described by rational polynomials. Photogrammetric Engineering and Remote Sensing,
69 (1), 59–68.
Gugan, D.J., 1987. Practical aspects of topographic mapping from SPOT imagery. Photogrammetric
Record, 12, 349–355.
Guichard, H., 1983. Etude théorique de la précision dans l'exploitation cartographique d'un satellite
à défilement: application à SPOT. Bulletin de la Société Française de Photogrammétrie et de
Télédétection, 90, 15–26.
Guindon, B. and Adair, M., 1992. Analytic formulation of spaceborne SAR image geocoding and
value-added products generation procedures using digital elevation data. Canadian Journal of
Remote Sensing, 18, 2–12.
Hargreaves, D. and Robertson, B., 2001. Review of QuickBird-1/2 and OrbView-3/4 products from
MacDonald Dettwiler processing systems. Proceedings of the ASPRS annual conference, 23–27
April St Louis, MO. Bethesda, MD: ASPRS, CD-ROM (unpaginated).
Hoogeboom, P., Binnenkade, P., and Veugen, L.M.M., 1984. An algorithm for radiometric and
geometric correction of digital SLAR data. IEEE Transactions on Geoscience and Remote
Sensing, 22, 570–576.
Kalman, L.S., 1985. Comparison of cubic-convolution interpolation and least-squares
restoration for resampling Landsat-MSS imagery. Proceedings of the 51st annual ASP-
ASCM convention: ‘‘Theodolite to Satellite’’, 10–15 March Washington, DC. Falls Church,
VA: ASP, 546–556.
OGC, 1999. The OpenGIS™ abstract specifications: the earth imagery case, Vol. 7. Available
from: http://www.opengis.org/techno/specs/htm/
Okamoto, A., 1981. Orientation and construction of models. Part III: mathematical basis of the
orientation problem of one-dimensional central perspective photographs. Photogrammetric
Engineering and Remote Sensing, 47, 1739–1752.
Okamoto, A., 1988. Orientation theory of CCD line-scanner images. International Archives of
Photogrammetry and Remote Sensing (3–9 July Kyoto, Japan: ISPRS) 27 (B3), 609–617.
Okamoto, A., et al., 1998. An alternative approach to the triangulation of SPOT imagery.
International Archives of Photogrammetry and Remote Sensing (7–10 September Stuttgart,
Germany. Stuttgart: German Society for Photogrammetry and Remote Sensing) 32 (B4),
457–462.
Palà, V. and Pons, X., 1995. Incorporation of relief in polynomial-based geometric corrections.
Photogrammetric Engineering and Remote Sensing, 61, 935–944.
Palubinskas, G., Reinartz, P., and Bamler, R., 2010. Image acquisition geometry analysis for the
fusion of optical and radar remote sensing data. International Journal of Image and Data
Fusion, 1 (3), 271–282.
Poli, D., 2007. A rigorous model for spaceborne linear array sensors. Photogrammetric Engineering
and Remote Sensing, 73 (2), 187–196.
Radhadevi, P.V., Ramachandran, R., and Mohan Murali, A., 1998. Precision rectification of IRS-
1C PAN data using an orbit attitude model. ISPRS Journal of Photogrammetry and Remote
Sensing, 53 (5), 262–271.
Robertson, B., 2003. Rigorous geometric modeling and correction of QuickBird imagery. Proceedings
of the international geoscience and remote sensing symposium IGARSS 2003, 21–25 July 2003
Toulouse, France. Toulouse: CNES, CD-ROM.
Salamonowicz, P.H., 1986. Satellite orientation and position for geometric correction of scanner
imagery. Photogrammetric Engineering and Remote Sensing, 52, 491–499.
Savopol, F., et al., 1994. La correction géométrique d'images satellitaires pour la Base nationale de
données topographiques. Geomatica, été, 48, 193–207.
Sawada, N., et al., 1981. An analytic correction method for satellite MSS geometric distortion.
Photogrammetric Engineering and Remote Sensing, 47, 1195–1203.
Schreier, G., ed., 1993. SAR geocoding and systems. Karlsruhe: Wichmann.
Schut, G.H., 1966. Conformal transformations and polynomials. Photogrammetric Engineering, 32,
826–829.
Shu, N., 1987. Restitution géométrique des images spatiales par la méthode de l'équation de
colinéarité. Bulletin de la Société Française de Photogrammétrie et de Télédétection, 105, 27–40.
Suri, S., Schwind, P., Uhl, J., and Reinartz, P., 2010. Modifications in the SIFT operator for
effective SAR image matching. International Journal of Image and Data Fusion, 1, 243–256.
Sylvander, S., et al., 2000. VEGETATION geometrical image quality. Bulletin de la Société Française
de Photogrammétrie et de Télédétection, 159, 59–65.
Tao, V. and Hu, Y., 2001. A comprehensive study of the rational function model for
photogrammetric processing. Photogrammetric Engineering and Remote Sensing, 67,
1347–1357.
Tao, V. and Hu, Y., 2002. 3D reconstruction methods based on the rational function model.
Photogrammetric Engineering and Remote Sensing, 68, 705–714.
Tannous, I. and Pikeroen, B., 1994. Parametric modelling of spaceborne SAR image geometry.
Application: SEASAT/SPOT image registration. Photogrammetric Engineering and Remote
Sensing, 60, 755–766.
Tong, X.H., Liu, S.J., and Weng, Q.H., 2010. Bias-corrected RPCs for high accuracy positioning
of QuickBird images. ISPRS Journal of Photogrammetry and Remote Sensing, 65 (2), 218–226.
Toutin, Th., 1983. Analyse mathématique des possibilités cartographiques du satellite SPOT.
Mémoire du diplôme d'Études Approfondies, École Nationale des Sciences Géodésiques,
Saint-Mandé, 1–74.
Toutin, Th., 1995. Multi-source data integration with an integrated and unified geometric modelling.
EARSeL Advances in Remote Sensing, 4 (2), 118–129.
Toutin, Th., 1998. Evaluation de la précision géométrique des images de RADARSAT. Journal
canadien de télédétection, 24, 80–88.
Toutin, Th., 2003a. Error tracking in IKONOS geometric processing using a 3D parametric
modelling. Photogrammetric Engineering and Remote Sensing, 69, 43–51.
Toutin, Th., 2003b. Block bundle adjustment of Ikonos in-track images. International Journal of
Remote Sensing, 24, 851–857.
Toutin, Th., 2003c. Block bundle adjustment of Landsat7 ETM+ images over mountainous areas.
Photogrammetric Engineering and Remote Sensing, 69, 1341–1349.
Toutin, Th., 2003d. Path processing and block adjustment with RADARSAT-1 SAR images. IEEE
Transactions on Geoscience and Remote Sensing, 41, 2320–2328.
Toutin, Th., 2003e. Compensation par segment et bloc d'images panchromatiques et multibandes de
SPOT. Journal canadien de télédétection, 29 (1), 36–42.
Toutin, Th., 2004. Spatiotriangulation with multi-sensor VIR/SAR images. IEEE Transactions on
Geoscience and Remote Sensing, 42 (10), 2096–2103.
Toutin, Th., 2006a. Comparison of DSMs generated from stereo HR images using 3D
physical or empirical models. Photogrammetric Engineering and Remote Sensing, 72 (5),
597–604.
Toutin, Th., 2006b. Generation of DSMs from SPOT-5 in-track HRS and across-track HRG stereo
data using spatiotriangulation and autocalibration. ISPRS Journal of Photogrammetry and
Remote Sensing, 60 (3), 177–181.
Toutin, Th., 2006c. Spatiotriangulation with multisensor HR stereo-images. IEEE Transactions on
Geoscience and Remote Sensing, 44 (2), 456–462.
Toutin, Th. and Chénier, R., 2009. 3-D radargrammetric modeling of RADARSAT-2 ultrafine
mode: preliminary results of the geometric calibration. IEEE-GRSL, 6 (2), 282–286.
Toutin, Th. and Omari, K., 2011. A new hybrid model for geometric processing of Radarsat-2 data
without user GCP. Photogrammetric Engineering and Remote Sensing, 76, (accepted).
Toutin, Th., Zakharov, I., and Schmitt, C., 2010. Fusion of Radarsat-2 polarimetric images for
improved stereo-radargrammetric DEM. International Journal of Image and Data Fusion,
1 (1), 67–82.
Touzi, R., 2002. A review of speckle filtering in the context of estimation theory. IEEE Transactions
on Geoscience and Remote Sensing, 40, 2392–2404.
Valadan Zoej, M.J. and Petrie, G., 1998. Mathematical modelling and accuracy testing of SPOT
Level-1B stereo-pairs. Photogrammetric Record, 16, 67–82.
Vassilopoulou, S., et al., 2002. Orthophoto generation using IKONOS imagery and high-resolution
DEM: a case study on volcanic hazard monitoring of Nisyros Island (Greece). ISPRS Journal
of Photogrammetry and Remote Sensing, 57, 24–38.
Veillet, I., 1991. Triangulation spatiale de blocs d'images SPOT. Thèse de Doctorat, Observatoire de
Paris, Paris, 1–101.
Wang, H. and Ellis, E.C., 2005. Spatial accuracy of orthorectified IKONOS imagery and historical
aerial photographs across five sites in China. International Journal of Remote Sensing, 26 (9),
1893–1911.
Wang, J., et al., 2005. Evaluation and improvement of geo-positioning accuracy of IKONOS stereo
imagery. Journal of Surveying Engineering, 131 (2), 35–42.
Westin, T., 1990. Precision rectification of SPOT imagery. Photogrammetric Engineering and Remote
Sensing, 56, 247–253.
Westin, T., 2000. Geometric modelling of imagery from the MSU-SK conical scanner. Bulletin de la
Société Française de Photogrammétrie et de Télédétection, 159, 55–58.
Wong, K.W., 1975. Geometric and cartographic accuracy of ERTS-1 imagery. Photogrammetric
Engineering and Remote Sensing, 41, 621–635.
Wong, K.W., 1980. Basic mathematics of photogrammetry. In: C. Slama, ed. Manual of
Photogrammetry, 4th ed., Chapter II. Falls Church, VA: ASP, 37–101.
Wong, F., Orth, R., and Friedmann, D., 1981. The use of digital terrain model in the rectification of
satellite-borne imagery. Proceedings of the 15th international symposium on remote sensing of
environment, 11–15 May Ann Arbor, MI: ERIM, 653–662.
Yang, X., 2001. Piece-wise linear rational function approximation in digital photogrammetry.
Proceedings of the ASPRS annual conference, 23–27 April St Louis, MO. Bethesda, MD:
ASPRS, CD-ROM (unpaginated).
Zhang, J., 2010. Multi-source remote sensing data fusion: status and trends. International Journal of
Image and Data Fusion, 1 (1), 5–24.
Zhang, G., et al., 2010. Evaluation of the RPC model for spaceborne SAR imagery. Photogrammetric
Engineering and Remote Sensing, 77 (6), 727–733.
Zhang, Y., Tao, V., and Mercer, B., 2001. Assessment of the influences of satellite viewing angle,
earth curvature, relief height, geographical location and DEM characteristics on planimetric
displacement of high-resolution satellite imagery. Proceedings of the ASPRS annual
conference, 23–27 April, St Louis, MO. Bethesda MD: ASPRS, CD-ROM (unpaginated).