
W3605_PUB2 07/02/2016 12:50:1 Page 1

REMOTE SENSING GEOMETRIC CORRECTIONS

1. INTRODUCTION

Satellite remote sensing data have become an essential tool in many applications, mainly environmental monitoring at a global scale. The geometric processing of remote sensing data is an essential element in scientific and operational applications based on multisource data integration into models or management tools using geographical information systems (GISs). Although remote sensing techniques have been applied in many different fields, their operational use has been increasing with the improvements in the adequacy of preprocessing steps, particularly the geometric processing of the data (1). This is particularly true when using time series of data, or spatial mosaics containing multiple scenes, or even more when using multisource information from several satellite systems for a given application. New applications and increased possibilities are becoming possible with the operational availability of high spatial resolution data at a global scale with a periodicity of a few days, and new series of advanced sensors with improved spatial, spectral, and angular sampling capabilities, requiring accurate and automatic methods for the geometrical processing of the data (2).

Satellite images are becoming essential for scientific research, cartographic mapping, and everyday practical applications, together with satellite navigation. In all cases, the geometrical processing of the data to convert the raw satellite information into usable geographical maps is necessary, because original data are acquired in a geometry that does not correspond to the geometry used in Earth maps, due to panoramic view distortions, satellite motion, and Earth rotation (3). The problem becomes more critical as the spatial resolution of the data increases. Currently, satellite images are acquired with resolutions up to about 0.5 m, and meeting pixel-level accuracy requirements in the final resulting products is really challenging. Fortunately, as the geometric precision and accuracy requirements have been increasing, the capabilities of space technologies to deliver more and more accurately positioned data have also increased, with enhanced satellite positioning methods and improved satellite absolute orientation methods and pointing capabilities (4). The increase in systematic spatial coverage, improved spatial resolution, and free and easy availability of the data for users has transformed the level of usability of remote sensing data.

There is a need for spectral and radiometric calibration and removal of technical anomalies (dead pixels, striping resulting from pixel nonuniformity, etc.) (1), particularly for automatic techniques based on image correlation approaches, but we will not discuss such preprocessing aspects in this article and assume that all technical radiometric anomalies have been removed from the data, or at least compensated, before geometrical processing is performed using the techniques described in the following sections (Figure 1).

2. SPATIAL RESOLUTION, SPATIAL SAMPLING, AND GEOMETRICAL PROPERTIES OF AN IMAGE

Geometrical properties of remote sensing images must be well understood and properly considered in the usage of such data, particularly for cartographic and land mapping applications. In the geometrical processing of remote sensing data, the spatial resolution of the images and the spatial sampling distance both play a critical role (5). These two different concepts must be clearly distinguished. The spatial sampling distance – often called "pixel size" – is just the distance (in the given geometry) between two consecutive pixels. Such pixel size or sampling distance is usually different along track (scanning along the orbital path) and across track (instantaneous field of view for a given image line). Moreover, the sampling distances may vary, even within an image, and the magnitude and type of such variations are closely linked to the image acquisition principle, which is assumed to be known for each given image.

The concept of "spatial resolution" is more complex and difficult to express as a single number. In fact, a system is characterized by the so-called point spread function (PSF), which defines how a point source is transformed through the whole imaging system and provides complete information about spatial resolution. To express the "spatial resolution" as a single number, the PSF is often approximated (or fitted) by a simple function such as a Gaussian, and the full-width-half-maximum (FWHM) of the Gaussian is taken as the spatial resolution of the image. Obviously, the FWHM provides only a rough indication of the spatial resolution, but the ratio of the FWHM over the sampling distance gives an idea of the image quality. Such a ratio tends to be slightly over 1 and is selected to guarantee sharp images while avoiding aliasing as much as possible. More sophisticated approaches to analyze the spatial properties of an image are based on the Fourier transform and use concepts such as the modulation transfer function (MTF), but for the geometrical processing of remote sensing images, a detailed knowledge of image optical properties is usually not necessary. However, in a resampling process, in which images have to be resampled to other geometries, the PSF/MTF concepts play a key role in preserving image resolution properties; therefore, it is important to know the PSF of the instrument for a proper geometrical processing of the images.

In the case of synthetic-aperture radar (SAR) data, the concept of spatial resolution is also somewhat confusing, since the resolution depends on the way the original raw data are processed. Typical available systems can provide a resolution of a few meters. However, this resolution cannot be used in regular applications due to the presence of noise (or undesired signals). Two approaches are followed: spatial averaging and multilooking, which reduce the spatial resolution by increasing pixel spacing, and local filtering, which reduces the spatial resolution but maintains the same pixel spacing. In both cases, the local filtering or averaging is performed over a window determined from the local level of noise. If the statistical properties of the data are such that the local entropy can be determined,

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 2016 John Wiley & Sons, Inc.
DOI: 10.1002/047134608X.W3605.pub2

Figure 1. Geometrical processing of multiangular acquisitions from CHRIS/PROBA over one target along the orbital track. The five
consecutive acquisitions along the orbital path (a) are combined in one single multiangular dataset (b).

optimal resolutions can be established. Otherwise, the effective resolution can be critical for accomplishing the requirements of the selected applications. Spatial resolution considerations play a significant role in the case of SAR data processing, especially in those approaches that are based on statistical analysis.

3. INPUT IMAGE ACQUISITION GEOMETRY

The input image for the geometrical processing is the one resulting from the acquisition process, which is thus distorted by several factors. Images are acquired from moving platforms (satellites), where changes occur not only in the position of the observing platform but also in the orientation angles and the pointing toward the target being observed. On the other side, targets are always moving due to Earth's rotation, so remote sensing images are always acquired in a situation where both the acquisition platform and the observed target are in relative motion, which automatically causes geometric distortions in the images. Moreover, to increase the usefulness of the images and the repetition of observations, most images are acquired under large angular viewing angles, either due to a large observed swath, observations outside nadir to cover more areas, or multiangular views over the same area. Because Earth is not flat but approximately ellipsoidal, all observations are affected by panoramic distortions. The presence of topographic effects (defined as variable target height over the reference Earth ellipsoid) introduces additional perturbations over the angular panoramic distortions. Angular effects produce a change in the effective spatial resolution of the data. For an instrument with a large swath, the spatial resolution near nadir is much better than at the extreme oblique viewing conditions for each given image line. Images are usually acquired line by line, where detectors provide simultaneously a full line of the image, and the satellite motion along the orbital track produces the successive lines forming the final image. The result is that all images are somehow geometrically distorted by the combination of all such effects, and a good knowledge and characterization of such perturbations in the resulting images is the key for an adequate geometrical processing of the data.

Apart from pure geometrical effects, other technical distortions (data noise, detector nonuniformity, etc.) also play a role. Such effects can be corrected or compensated; for the geometrical processing of the data, we assume that technical anomalies have been previously removed. A significant part of the errors resulting in the geometric processing of remote sensing data comes from the disregard of some effects, approximations made, and neglected disturbances, most of which are related to the geometry of image acquisition. Effects introduced by angular variability, surface reflectance anisotropy, and topographical distortions, as introduced by the image acquisition geometry, will determine the appearance of the initial images to be later geometrically processed, and such distortions cannot be properly removed later if the effects are not well accounted for in the processing algorithms. For instance, a particular issue for very high spatial resolution mapping, particularly in oblique observations, is the role of atmospheric refraction. The variation in the refraction index of the atmospheric constituents as a function of the path inside the atmosphere can usually be neglected, but it plays a role at the level of precision at which remote sensing observations are currently available. Because oblique off-nadir acquisitions are quite common, by tilting the whole satellite or using the angular observation capabilities of the instrument on the platform to avoid cloudy areas near nadir, such changes in the light path through the atmosphere influence the geometry of the acquired images. Thus, such distortions caused by atmospheric refraction effects must also be accounted for in the geometric processing of the data.
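The growth of the ground footprint with view angle described above can be illustrated with a minimal flat-Earth sketch (the orbit height, IFOV value, and function name below are illustrative assumptions, not values for any specific instrument):

```python
import math

def ground_sample_distances(height_m, ifov_rad, scan_angle_deg):
    """Flat-Earth approximation of the ground footprint of one detector.

    Across track the footprint grows as 1/cos^2(theta) (slant range plus
    obliquity of the surface intersection); along track it grows only as
    1/cos(theta) (slant range increase alone).
    """
    theta = math.radians(scan_angle_deg)
    nadir_gsd = height_m * ifov_rad          # footprint at nadir
    along = nadir_gsd / math.cos(theta)
    across = nadir_gsd / math.cos(theta) ** 2
    return along, across

# Illustrative wide-swath scanner: 705 km orbit, 0.1 mrad IFOV
for angle in (0.0, 25.0, 50.0):
    along, across = ground_sample_distances(705e3, 1e-4, angle)
    print(f"{angle:4.1f} deg: along {along:6.1f} m, across {across:6.1f} m")
```

For a real wide-swath scanner, Earth curvature makes the off-nadir footprint growth even stronger than this flat-Earth estimate.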


4. OUTPUT IMAGE PROJECTION GEOMETRY

While the input image for the geometrical processing of the data is given by the acquisition geometry, the output image resulting from the geometrical processing depends on the user application, as there is no universal cartographic map projection that is optimal for all applications; the choice depends mainly on the scale (global, continental, regional, local) and the geographical location of the target area, as different cartographic projections are usually defined for different latitude ranges.

When images have to be transformed from the original acquisition geometry (typically not useful for most applications) to some cartographic reference, a specific map projection has to be chosen. Local maps available for each area define the type of cartographic projection, which is different in each country. The Universal Transverse Mercator (UTM) is often selected because of its good geometrical properties for most areas, but the best projection to be used for a given particular problem is difficult to identify. Even for global mapping, the selection of a given projection depends on several criteria, like fidelity of pixel correspondence to actual surface area, but several options exist. In many cases, data are simply projected to a latitude–longitude grid, where the latitude–longitude relative factor is compensated to make the resulting pixel (in surface area) almost square. This is only possible for some reference latitude, which is chosen as the latitude of the central point in the reference area. One advantage of this projection, apart from its simplicity, is that most applications require computation of derived quantities that are given as functions of latitude and longitude, so that computations are easy if a latitude–longitude grid is used. The usage of ancillary information, like digital elevation models (DEMs) or boundary cartographic elements (e.g., coast lines, rivers, political divisions), often implies the selection of a given cartographic projection to represent the output data.

In principle, changing the image from one cartographic projection to another is just a matter of a known mathematical transformation, and computational tools are available that allow transfer of data between cartographic projections by means of given mathematical transformations. The problem is then the resampling of the data. Each time the geometry of the data is changed, resampling is needed, with the unavoidable loss of some information and the introduction of interpolation artifacts. Single-step procedures from the original acquisition geometry to the final cartographic product, using a single resampling of the data, are always preferable.

5. GEOMETRICAL TRANSFORMATION MODELS

Many different approaches have been described in the literature for geometric registration or geocoding of remotely sensed data (6–8). Some of them are quite sensor-specific, but the trend is to adopt general approaches applicable to almost every satellite case, using adequate parameters for each given case.

The geometrical transformation from raw image acquisition to the final cartographic output is determined by the knowledge of the satellite position and orientation angles for each instantaneous acquisition in a proper geometrical reference frame linked to the target location. If this is the case, geometrical processing of the data is almost straightforward and can be implemented by using readily available information, in an automatic way, without dependence on or usage of any image or external reference.

The usage of ground control points (GCPs) to increase the accuracy in the determination of the geometrical transformation from raw image geometry to the output cartographic reference depends primarily on the spatial resolution of the data being processed. For lower resolution sensors (i.e., >100 m resolution), automatic registration is possible with subpixel accuracy using only orbital/ephemeris information from the satellite, platform position, and attitude and instrument pointing information, which can always be considered available. For very high spatial resolution systems (i.e., <5 m resolution), the precision of single-image positioning and orientation is not enough to directly achieve subpixel accuracy from known image acquisition information and, therefore, some additional tools are needed to achieve the required accuracy: either GCPs or multiple images of the same area in a composite-geometry strategy (block bundle adjustment techniques).

5.1. Direct and Inverse Space Projection Methods

The mathematical transformation model from image acquisition geometry to the output cartographic projection can be formulated in two complementary ways, each one with its corresponding advantages and disadvantages. The "direct" space projection method starts from the initial acquisition geometry and makes use of the knowledge of the physical acquisition geometry and the platform/instrument dynamics during image acquisition to project the initial image into the output geometry. The "inverse" projection method starts from the output cartographic image geometry and identifies, for each point (pixel) of the output image, the corresponding point in the input image.

The geometrical processing steps are illustrated in Figure 2, and the correspondence between direct and inverse mapping strategies is illustrated through the way of establishing the mathematical relationship between the two lower boxes in the diagram. For the mathematical transformation itself, the choice of the direct or inverse transformation method is arbitrary and the two approaches can be used without any special preference. However, when image pixels have to be resampled from one geometry to the other, the inverse method has some better properties, as will be discussed in a later section of this article.

5.2. Physical Models of Image Acquisition

The preferred geometrical processing scheme is the one in which the mathematical transformation from original geometry to output cartographic projection (Figure 2) is fully defined through a known, well-defined, and fully deterministic equation. Of course, the knowledge will never be absolutely perfect, but if the accuracy in the knowledge of such mathematical transformation is enough to provide

an error in the resulting image within a given range (typically less than one pixel size in the output image), then the mathematical transformation can be assumed in the geometrical processing without any additional restriction or adjustment needed to compensate potential errors. Otherwise, some error compensation method should be introduced, as described later.

Figure 2. General scheme of the geometrical processing of remote sensing data showing both the direct and inverse transformation approaches.

In this case, it is extremely important that accurate positioning and knowledge of the instantaneous observation angles and three-dimensional (3D) surface models be available, which is not always the case; however, the current trend in Earth observation is to have a mathematical model of the observation and the needed onboard tools to compute the relevant geometrical parameters, so that such geometric processing of the data can be implemented by using a physical model of image acquisition.

The mathematical transformation from input geometry to output coordinates is not a simple formula but a complex transformation that is better formulated as a combination of multiple transformations between successive reference frames, describing in each step the image geometrical properties in each of such reference frames.

Several different reference frames should be considered when describing image acquisition in remote sensing systems. The inertial reference frame is somehow a preferred reference frame, where Newton's law of inertia applies. This frame has its axes fixed relative to the distant stars, assumed fixed. In practice, as in the International Celestial Reference Frame (ICRF), the axes are determined with respect to the positions of several hundred distant extragalactic sources of radio waves, determined by very long baseline interferometry. The origin of the ICRF is at the center of mass of the solar system.

The spacecraft body reference frame is defined by an origin at some reference point (i.e., center of mass or optical bench) of the spacecraft body and is typically used to align the spacecraft components and instruments. Instruments are mounted in this reference frame with fixed viewing angles with respect to this reference, although thermo-elastic or mechanical effects may slightly change such orientation angles with respect to the spacecraft body reference and thus must also be taken into account, particularly for very high spatial resolution systems.

The Earth-centered/Earth-fixed reference frame has its origin at the center of mass of the Earth and rotates with the Earth; the rotation angle defines the Greenwich Mean Sidereal Time (GMST). This frame is closely related to the Geocentric Inertial Frame, which does not rotate with the Earth but has one axis pointing toward the vernal equinox (intersection of the Earth's equatorial plane with the plane of the Earth's orbit around the Sun) at a given epoch, although it is not strictly inertial.

The local-vertical/local-horizontal reference frame is referenced to the spacecraft's orbit, with one of the axes pointing along the nadir vector, directed toward the Earth's center, and another axis pointing along the spacecraft's orbital velocity vector. Such a frame (attitude reference frame) serves to define the line of sight according to instrument pointing characteristics and then derive the angular conditions under which the image is acquired (Figure 3).

Figure 3. Angular deformations in the image due to different angular views.

Combining such different reference frames through the knowledge of the spacecraft position and attitude angles at each time instant, together with the instrument pointing angles, and thus the line of sight from the instrument to the Earth for each time instant along the observation period, it is possible to compute the observed point on Earth's surface for each acquisition and then to register each observation over a given specific point location on Earth for each pixel of the acquired image.

To do such a geometric transformation properly, it is necessary in addition to have a good 3D model of the Earth's surface to account for topographic distortions (Figure 4). Basically, the mathematical transformation from input image geometry provides a line of sight from the satellite to the Earth's surface. The exact point where such line of sight, for each pixel of the image, intersects the Earth's surface determines the output image. Obviously, Earth cannot be assumed flat except for some

specific areas, so at least an ellipsoidal shape model must be assumed for the Earth's surface. Although there are general ellipsoid models globally applicable to the whole Earth, in many cases local ellipsoid models more suitable to the local Earth shape over a limited geographical area are used. This is not a problem provided the surface coordinates in any Earth ellipsoid model can be related to the satellite coordinates in which the input image is defined. Even if a proper ellipsoid model is used, the geometrical processing still requires, in addition, the topographic structure of the local Earth surface over the reference ellipsoid. Fortunately, today there are global DEMs available everywhere in the world with accuracy acceptable for most mapping applications. When the accuracy of such global DEMs is not enough for regional mapping, there are usually other DEMs with local coverage but more detail and better accuracy to be used in such areas. With the availability of airborne lidar 3D mapping capabilities, DEMs with high accuracy (even with submeter resolution) are becoming common in many places. One issue that must be taken into account when using very high resolution digital 3D surface models is that at such high resolutions the concept of "surface" is not well defined, and the surface representing the top level of objects (trees, buildings) is different from the surface representing the bottom level (soil under vegetation, streets). Differences can be of tens of meters in some cases, so the geometrical processing of the data must take into account where the "surface level" is defined.

Figure 4. Geometrical distortions introduced by topography on remote sensing images: the first row, in red, indicates the "nominal" pixels for a flat surface, and the second row, in blue, indicates the actual pixels acquired due to surface topography, with irregular sampling distance.

Geometrical and radiometric distortions introduced by topography (Figure 5) can be taken into account in a consistent manner in the geometric processing of the data if an appropriate 3D model of the surface is available. However, some topographic effects introduce severe image distortions and even the impossibility of seeing some surface areas, or views of areas under quite different illumination conditions (cast shadows), in some cases making the determination of the output image in a cartographic projection very difficult. Multiple angular views of the same surface are the only way of producing consistent maps of topographically structured areas.

5.3. Parametric Statistical Models

Since the full physical transformation model can be mathematically very complex, and the accurate knowledge of time variations in each geometrical parameter is not always available with enough precision, alternative methods are needed. Such methods should be able to make some approximations or simplifications over the full accurate mathematical transformation model but still provide acceptable results in most cases.

Figure 5. Radiometric effects due to varying viewed area, changes in local illumination angle, and multiple light reflections over adjacent slopes in rugged terrain (b), as opposed to the case of flat surfaces (a), due to perturbations introduced by topography.
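The core of the physical-model geolocation described in Section 5.2 — following each pixel's line of sight until it meets the terrain surface — can be sketched as follows. This is a minimal illustration, not a production algorithm: the local Cartesian frame with z as height, the step size, and the function names are all assumptions, and a real implementation would work in proper Earth-fixed coordinates against an actual DEM rather than a height callback:

```python
import math

def intersect_los_with_dem(sat_pos, los_dir, dem_height, step=30.0, max_range=2e6):
    """March along the line of sight from the satellite until the ray drops
    below the terrain, then refine the crossing point by bisection.

    sat_pos: (x, y, z) in a local Cartesian frame with z = height (m)
    los_dir: unit vector pointing from the satellite toward the ground
    dem_height: callable (x, y) -> terrain height z (m)
    """
    def point(t):
        return tuple(p + t * d for p, d in zip(sat_pos, los_dir))

    def above(t):  # ray height minus terrain height at ray parameter t
        x, y, z = point(t)
        return z - dem_height(x, y)

    t = 0.0
    while t < max_range:
        if above(t) <= 0.0:           # ray has crossed below the surface
            lo, hi = max(t - step, 0.0), t
            for _ in range(40):       # bisection refinement of the crossing
                mid = 0.5 * (lo + hi)
                if above(mid) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return point(0.5 * (lo + hi))
        t += step
    return None  # no intersection within the allowed range

# Flat terrain at z = 0, satellite at 700 km height looking 10 deg off nadir
theta = math.radians(10.0)
hit = intersect_los_with_dem((0.0, 0.0, 700e3),
                             (math.sin(theta), 0.0, -math.cos(theta)),
                             lambda x, y: 0.0)
```

On rugged terrain a fixed-step march of this kind can skip narrow peaks, which is one reason operational implementations use terrain-adaptive stepping along the ray.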

For instance, for optical high-resolution sensors with a rather narrow field of view, the following approximation applies for each line of the image:

x = -f \, \frac{m_{11}(X - X_0) + m_{12}(Y - Y_0) + m_{13}(Z - Z_0)}{m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)}
y = -f \, \frac{m_{21}(X - X_0) + m_{22}(Y - Y_0) + m_{23}(Z - Z_0)}{m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0)}    (1)

where (x, y) are the output image coordinates, (X, Y, Z) are the input image coordinates, f is the focal length of the instrument, \{m_{ij}\}_{i,j=1,2,3} are the nine elements of the (3 × 3) orthogonal rotation matrix that transforms the input geometry to the output geometry, and the translation is given by the vector (X_0, Y_0, Z_0).

Rigorously, such an approximation is only valid for instantaneous observations, and it is then applicable only to frame cameras or single lines of push-broom scanning sensors. Assuming a model of the dynamical motion of the acquisition platform (both for position and attitude), with parameters often assumed also to be polynomials over time, the same approach can be applied over the whole image simply by varying the coefficients of the transformation accordingly as a function of time.

5.4. Polynomial Approximations

Given the difficulties of deriving a rigorous and accurate geometrical transformation model between input and output geometry, simple models based on polynomial approximations are used in many cases. Such methods can be introduced as Taylor polynomial expansions of the full mathematical model, although in practice they are treated as simple mathematical models with a number of free coefficients that are fitted statistically by using some least-squares approximation method based on a series of GCPs defined by correspondence between the input and output images (9).

The mathematical transformation relating the input (x, y) and the output (x', y') geometries is given by the polynomial model, for instance, a two-degree model given by

x' = \sum_{\substack{k,m \ge 0 \\ k+m \le 2}} A_{km} \, x^k y^m = a_0 + a_1 x + a_2 y + a_3 x y + a_4 x^2 + a_5 y^2
y' = \sum_{\substack{k,m \ge 0 \\ k+m \le 2}} B_{km} \, x^k y^m = b_0 + b_1 x + b_2 y + b_3 x y + b_4 x^2 + b_5 y^2    (2)

The approach can be extended to other functional models or higher order polynomials (10), although orders over 5 tend to give exaggerated distortions, and orders higher than 9 give numerical implementation problems.

5.5. Rational Polynomial Approach

A particular approach whose results are quite adequate to handle the geometrical processing of remote sensing data in a simple but still accurate way is the usage of rational polynomials instead of simple polynomials, often named the rational polynomial coefficients (RPCs) approach. The RPC approach assumes the following form for the mathematical transformation:

x = \frac{P^{(1)}(X, Y, Z)}{P^{(2)}(X, Y, Z)}, \quad
y = \frac{P^{(3)}(X, Y, Z)}{P^{(4)}(X, Y, Z)}, \quad
z = \frac{P^{(5)}(X, Y, Z)}{P^{(6)}(X, Y, Z)}    (3)

where (x, y, z) are the coordinates in the image space, (X, Y, Z) are the coordinates of points in object space, and \{P^{(i)}(X, Y, Z)\}_{i=1,2,\ldots,6} are third-order polynomial functions defined as

P^{(i)}(X, Y, Z) = \sum_{\substack{k,m,n \ge 0 \\ k+m+n \le 3}} C^{(i)}_{kmn} \, X^k Y^m Z^n
= c_1^{(i)} + c_2^{(i)} X + c_3^{(i)} Y + c_4^{(i)} X Y + c_5^{(i)} X^2 + c_6^{(i)} Y^2 + \cdots + c_{17}^{(i)} Y Z^2 + c_{18}^{(i)} X Z^2 + c_{19}^{(i)} Y^2 Z + c_{20}^{(i)} Z^3    (4)

with \{c_k^{(i)}\}_{k=1,2,\ldots,20} being the 20 coefficients usually supplied by the instrument manufacturer or the data distribution company on an image-by-image basis.

Because of the characteristics of remote sensing images, rational polynomials somehow correspond to a kind of perfect sensor model that represents the main aspects of actual image acquisitions, becoming a preferred approach for high-resolution images with a small swath. In most cases, however, the knowledge of satellite position, orientation angles, and pointing geometry from the satellite system is not enough to provide accurate outputs, and refinement of the RPCs by using external GCPs is the adopted approach (11).

5.6. Automatic Identification of Reference Points in the Image

Some of the methods previously described require identification of a given number of reference points in the image (GCPs), either to define the empirical parametric mathematical transformation in the absence of knowledge of the true transformation or to refine the true known transformation when this is not accurate enough. Identification of some single features both in the input and output images would be enough in most cases, but such identification cannot be straightforward.

Formerly, when the number of images to be processed was rather small, the ground features used as GCPs were detected manually by expert operators. With the increase in the number of remote sensing images acquired daily by many satellite systems, automatic methods are needed.
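For illustration, a two-degree polynomial model of the kind in equation 2 can be fitted to a set of GCP correspondences by ordinary least squares. This is a minimal sketch with synthetic GCPs; the values and function names are illustrative assumptions, not taken from a real scene:

```python
import numpy as np

def fit_poly2(xy_in, xy_out):
    """Fit x' and y' as two-degree polynomials of (x, y) by least squares.

    Returns (a, b): the six A and six B coefficients, ordered as
    1, x, y, x*y, x^2, y^2. At least six well-spread GCPs are needed.
    """
    x, y = np.asarray(xy_in, float).T
    design = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    out = np.asarray(xy_out, float)
    a, *_ = np.linalg.lstsq(design, out[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(design, out[:, 1], rcond=None)
    return a, b

def apply_poly2(a, b, x, y):
    terms = np.array([1.0, x, y, x * y, x**2, y**2])
    return float(terms @ a), float(terms @ b)

# Synthetic GCP pairs generated from a known affine transform (illustrative)
rng = np.random.default_rng(0)
gcp_in = rng.uniform(0.0, 1000.0, size=(12, 2))
gcp_out = np.column_stack([10.0 + 0.9 * gcp_in[:, 0] + 0.1 * gcp_in[:, 1],
                           -5.0 + 0.2 * gcp_in[:, 0] + 1.1 * gcp_in[:, 1]])
a, b = fit_poly2(gcp_in, gcp_out)
```

With more GCPs than coefficients, the least-squares fit also yields residuals at the control points, which give a direct estimate of the registration accuracy of the model.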

Current approaches tend to use feature detection methods, that is, different objects in the images (edges, lines, contours) are used as feature points and represented by their descriptors in a feature-matching technique to automatically register two different images or to refine the registration previously done by some general transformation method (12–14).

Salient points typically represent a small region of an image with enough variability among its pixels to be easily identified from the rest of the image due to its intrinsic internal variability. The source image represents the original image from which some salient points indicating regions with enough internal variability in a local range have been detected. Several algorithms exist to extract salient points from an image, but most of them are based on the analysis of structural properties through morphology operators, such as the operator that maximizes the following condition

M = det(AᵀWA) / [trace(AᵀWA) + ε] = λ₁λ₂ / (λ₁ + λ₂ + ε)   (5)

where A is a matrix defined by the local gradients of the image, related to the local autocorrelation matrix of the image, W is a diagonal weighting matrix, and (λ₁, λ₂) are the eigenvalues of A. The small constant ε is introduced to avoid a singular denominator in the case of a rank-zero autocorrelation matrix.

After salient points have been extracted from the source image, they only become tie points if they are properly matched against the corresponding points in the target image. Then, GCPs are tie points extracted from the source input image and matched over the target output image, which allows assigning them true geographical coordinates because the target image corresponds to, or has been previously referenced to, a geolocated image or reference map.

5.7. Refinement Methods to Increase Final Accuracy

As previously indicated, the most common case is that a mathematical transformation from input to output images exists and is known, but not with the precision necessary to perform the geometrical correction of the data with the required output accuracy. In such cases, the common strategy is to perform a first transformation of the image by assuming the known mathematical transformation and then considering that the remaining error can be modeled by means of a simple (linear) transformation from the approximately corrected image to the final corrected image. The refinement transformation can be expressed, after linearization, in the form

x′ = x + u(x, y) ≈ x + u₀ + (∂u/∂x) Δx + (∂u/∂y) Δy
y′ = y + v(x, y) ≈ y + v₀ + (∂v/∂x) Δx + (∂v/∂y) Δy   (6)

where (Δx, Δy) determine the displacement of the image over the nominal transformation.

Image cross-correlation or other automatic registration techniques are very convenient for operational processing and absolutely necessary when a large volume of images must be processed, particularly for near-real-time applications.

Cross-Correlation Methods. Cross-correlation is one of the most commonly used approaches in the image co-registration process. It generally uses a pattern-matching approach. The cross-correlation is in fact a type of similarity measure that allows characterization of the displacement of an image I(x, y) of magnitude δx in the X-direction and δy in the Y-direction. The two-dimensional cross-correlation function can be defined as

C(δx, δy) = Σᵢⱼ [I_in(i + δx, j + δy) − Ī_in][I_out(i, j) − Ī_out] / { √( Σᵢⱼ [I_in(i + δx, j + δy) − Ī_in]² ) · √( Σᵢⱼ [I_out(i, j) − Ī_out]² ) }   (7)

This similarity measure is computed for window pairs from the source and target images by varying (δx, δy) at each step, until the window pair with the maximum correlation is found, for a given maximal size of the allowed displacement or a maximum number of iterations.

The cross-correlation technique is often implemented by taking advantage of the fact that the Fourier transform of the correlation of two images is the product of the Fourier transform of one image and the complex conjugate of the Fourier transform of the other image. The phase shift correlation method is based on the Fourier shift theorem, which is equivalent to the translation of one image with respect to the other in the co-registration process. Notice, however, that such methods are only useful when the transformation from the original image to the target cartographic product can be represented by a small shift of the image, at least locally. This is, in general, only possible if the image has already been transformed by means of a nominal general transformation (based on knowledge of orbital motion, satellite attitude, and the image acquisition process), so the image cross-correlation method is intended only as a refinement of the transformation to compensate for residual errors. The cross-correlation method is not, in principle, applicable to nonlinear transformations, which is usually the case for remote sensing images, but it plays an important role in the geometrical processing of remote sensing data when combined with a physical transformation method to compensate for residual effects in a final matching of the resulting images with a given cartographic reference.

Mutual Information Methods. This method is optimized for registering images with different modalities (e.g., registering SAR with optical images, or thermal with visible/infrared images) and represents the most common technique in multimodal registration. The normalized mutual information (MI) between the patch in the base image and the patch in the warp image is computed as the matching score. MI is based on information theory and measures the mutual dependence of two random variables (15). The MI-based registration process begins with the estimation of the joint probability of the intensities of corresponding
pixels in two images. Originating from information theory, MI is a measure of statistical dependency between two data sets and particularly suited to register images acquired with different modalities. MI between two random variables x and y is given by

I_M(x, y) = H(y) − H(y|x) = H(x) + H(y) − H(x, y)   (8)

where

H(x) = E_x[−log P(x)] = −Σ_x P(x) log P(x)
H(y) = E_y[−log P(y)] = −Σ_y P(y) log P(y)   (9)
H(x, y) = −Σ_{x,y} P(x, y) log P(x, y)

In such equations, H represents the entropy of a random variable, P(x) and P(y) are the marginal probability distributions of x and y, respectively, and P(x, y) is the joint probability distribution for both variables (x, y).

This method is based on the maximization of I_M(x, y). Such maximization is implemented by using the gradient descent optimization method or other maximization techniques. Often a speed-up of the registration is implemented, exploiting a coarse-to-fine resolution strategy (the pyramidal approach).

MI is observed to give more accurate results than other registration methods in many cases. Usually, MI tends to produce more accurate results than the traditional correlation-based measures for cross-modality image registration but takes longer to run, since it is more computationally intensive and its implementation is more sophisticated than more elementary approaches. On the other side, when images have low resolution or contain little spatial information, this method gives worse results.

6. SPATIAL RESAMPLING TECHNIQUES

The problem of data resampling is given by the fact that the output image must be defined over a regular grid (output spatial resolution), while the geometrical transformation of the input image would result in a projected image in a grid that is not uniform and, thus, cannot be transformed into an image. The transformation from the irregular grid of the output geometry to a given regular grid is called “resampling.” Such resampling can be done in several different ways, from the very simple zero-order interpolation (nearest-neighbor interpolation) up to higher order approaches, although the usage of more than three-degree polynomials is not very common (16, 17) (Figure 6).

Resampling can be implemented in direct or inverse form of the geometrical transformation from the initial image in the input geometry to the cartographically projected image (Figure 7). In the direct case, or forward model, holes and/or overlaps between the resulting pixels can be produced in the output image due to the discretization and rounding effects in the resulting pixels. Such anomalies can be removed by an extra interpolation or smoothing, but the inverse transformation, or backward approach, is usually preferred to avoid such problems. In such an approach, the output image is determined using the coordinates of the target pixels, performing the interpolation in a regular grid on the output coordinates, and avoiding potential holes or overlaps. The new values for each pixel position are interpolated from the initial measured pixels (defined in the input geometry) via convolution of the image with an interpolation kernel.

The definition of the interpolation kernel can be made in different ways. An ideal interpolator is the two-dimensional (2D) sinc function, but it is difficult to implement in practice due to the infinite extension of the filter and the finite size of the sampled images. A truncated sinc or any other interpolation function with bounded support must be used. Moreover, separable interpolation functions, so that the n × n two-dimensional convolution is replaced by n + 1 one-dimensional convolutions, are often used, with much faster implementation. In the end, most approaches reduce to spline interpolations of different orders. Examples are nearest neighbor (order zero), bilinear interpolation (order 1), quadratic splines (order 2), and cubic splines (order 3). Higher order polynomial kernels or splines of order larger than 3 are rarely used because

Figure 6. Acquisition of multiple images over a given site with different resolutions and angular views (a) and resampling of the initial images acquired over different geometries to a final output grid of regular spacing (b).
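The backward (inverse-mapping) resampling described above can be sketched in a few lines of numpy. This is an illustrative fragment, not code from any operational package: the function name and the inv_map callback (which returns fractional input-geometry row/column coordinates for every output pixel) are assumptions of the example, and only the bilinear (order-1) kernel is shown.

```python
import numpy as np

def resample_inverse_bilinear(src, inv_map, out_shape, fill=0.0):
    """Backward resampling: for every pixel of the regular output grid,
    inv_map gives the (fractional) row/col position in the input geometry,
    and the value is bilinearly interpolated from the 4 surrounding pixels."""
    rows, cols = np.indices(out_shape)
    r, c = inv_map(rows, cols)                   # fractional input coordinates
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    dr, dc = r - r0, c - c0                      # fractional offsets in [0, 1)
    out = np.full(out_shape, fill, dtype=float)
    # keep only positions whose 2x2 neighborhood lies inside the input image
    ok = (r0 >= 0) & (r0 < src.shape[0] - 1) & (c0 >= 0) & (c0 < src.shape[1] - 1)
    r0, c0, dr, dc = r0[ok], c0[ok], dr[ok], dc[ok]
    out[ok] = (src[r0, c0] * (1 - dr) * (1 - dc)
               + src[r0, c0 + 1] * (1 - dr) * dc
               + src[r0 + 1, c0] * dr * (1 - dc)
               + src[r0 + 1, c0 + 1] * dr * dc)
    return out
```

For instance, with inv_map = lambda rr, cc: (rr + 0.5, cc + 0.5) each output pixel becomes the average of its four input neighbors. Because the computation runs over the regular output grid, every output pixel receives exactly one value, so no holes or overlaps can appear; positions mapped outside the input image get a fill value.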
the additional benefits do not compensate the computational complexity. Rather than common splines, the B-splines approach is commonly used due to the artifacts and enhancement of noise introduced by regular splines for some images, depending on the actual PSF of the instrument.

Splines and B-splines approaches, particularly of third order, have become a common strategy for image resampling because they provide enough accuracy in most cases. However, when radiometric fidelity becomes essential, more sophisticated resampling approaches can be used. Optimum interpolation approaches (18, 19) make use of the actual pixel acquisition geometry, the PSF associated to each pixel as varying with the angular view, the ground area overlapping consecutive pixels, and other image acquisition properties to address the problem of resampling, trying to emulate as much as possible the physical properties of the image and then come to an output image that would look like an actual image acquired under such output viewing geometry (optimum approach).

An optimum interpolation approach can be implemented (18) as a relative sum of pixel intensities over a 6 × 6 pixel window around the pixel being interpolated, given by the expression

R(r_P) = Σ (α = i−k+1 … i+k) Σ (β = j−k+1 … j+k) μ_αβ R(r_αβ) = Σ (i = 1 … 36) μ_i R(r_i)   (10)

where k = 3 can be assumed in most cases and

R(r_i) = ∫_S dA F(r_i, r) R(r)   (11)

R(r_i) being the value of each pixel i in the input image resulting from the convolution of the observed radiance field R(r) and the effective PSF of the instrument given by F(r_i, r) for each given original pixel. The integral is performed over an area S large enough to cover the surface contributing to the sensor PSF, typically over a 6 × 6 pixel window in the image.

Figure 7. Direct and inverse mapping in the resampling of the original acquired image (left) to the output cartographic product (right).

The values of the vector μ are given by

μ = G⁻¹ [ χ + ((1 − Iᵀ G⁻¹ χ) / (Iᵀ G⁻¹ I)) I ]   (12)

where

χ_i = ∫_S dA F(r_i, r) F(r_P, r)   (13)

I is the identity vector, and the matrix G is defined as

G_ij = ∫_S dA F(r_i, r) F(r_j, r)   (14)

Other image interpolation techniques based on the usage of multiple images (20) or superresolution techniques (21) are sometimes used. The usage of superresolution techniques is becoming quite common in the case of high spatial resolution sensors, where a compromise between MTF and signal-to-noise ratio (SNR) is required to keep good radiometric properties while enhancing the spatial resolution as much as possible. Superresolution methods typically enhance resolution (better MTF performance) at the expense of noise enhancement (degraded SNR performance).

While cubic interpolation is the most commonly used approach, particularly when the input and output images have significantly different pixel spacing, for remote sensing data many users still prefer the nearest-neighbor approach, because it does not alter the number and values of the discrete radiometric levels of the initial images, to avoid creating spurious “synthetic” intensity levels not present in the original image. Such a nearest-neighbor approach can be justified when the input and output images have quite similar pixel spacing and the images have few discrete radiometric levels; otherwise, it introduces geometric artifacts in lines and borders of the image due to the potential deviation of 0.5 pixel. Current remote sensing images have 10, 12, and up to 16 bits per pixel, and this many intensity levels in the images (as compared to old systems with only 8 bits – 256 radiometric levels) results in quite continuous transitions, so that such a nearest-neighbor approach is no longer justified. Higher order interpolation resampling does not create additional spurious intensity levels in most cases for current remote sensing systems.

7. PRACTICAL IMPLEMENTATION CASES

The implementation of the general approaches described previously to specific remote sensing systems depends very much on the specific characteristics of each system, particularly the type and accuracy of the available information about the instantaneous actual geometry of the acquisition, which allows implementation of automatic general transformation approaches, or the scarcity or inaccuracy of such information and, thus, the need to use more generic parametric or polynomial approaches to reconstruct the geometric transformation. We will consider here the airborne and satellite cases, as both are the most common remote sensing platforms for image acquisitions.
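The cross-correlation refinement discussed in Section 5.7 can be illustrated with a short sketch that exhaustively searches the integer displacement (δx, δy) maximizing the normalized similarity measure of equation (7). The function names, the fixed central search window, and the integer-only shifts are simplifying assumptions of the example, not part of any operational processing chain.

```python
import numpy as np

def ncc(win_in, win_out):
    """Normalized cross-correlation between two equally sized windows, as in
    equation (7): remove each window's mean, then divide the cross product
    by the product of the two standard deviations."""
    a = win_in - win_in.mean()
    b = win_out - win_out.mean()
    denom = np.sqrt((a ** 2).sum()) * np.sqrt((b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def estimate_shift(img_in, img_out, max_shift=3):
    """Exhaustive search for the integer (dx, dy) that maximizes the
    correlation of the central window of img_out against img_in."""
    h, w = img_out.shape
    m = max_shift
    ref = img_out[m:h - m, m:w - m]          # central target window
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = img_in[m + dy:h - m + dy, m + dx:w - m + dx]
            c = ncc(cand, ref)
            if c > best:
                best, best_shift = c, (dx, dy)
    return best_shift, best
```

In operational processing such a search would be repeated for many windows across the image, and the resulting displacement field used to fit the linear refinement transformation of equation (6).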
7.1. The Space-borne Case

The most common case for remote sensing applications is that of images acquired by orbiting satellites, as they routinely deliver data streams in a systematic manner. The automatic geometric processing of space-borne images using only available information from the satellite system, without relying on ground reference points, is only possible if the satellite system can provide both accurate positioning data (x, y, z) and accurate information about platform orientation (attitude angles).

For the determination of the satellite/instrument positioning (22), current satellite systems use the so-called precise orbit determination (POD) package. Although the implementation details vary for each system, the package includes one or several global navigation satellite system (GNSS) receivers, Doppler orbitography and radiopositioning integrated by satellite (DORIS), laser retro-reflector (LRR) systems to allow ground tracking by multiple laser systems, or similar devices. With such techniques, the position of the satellite is determined with high accuracy, thus facilitating the geolocation of observations and the geometrical registration of remote sensing images.

The determination of the orientation of the platform, and then the instrument line of sight toward the observed point, becomes more critical. First of all, it must be taken into account that for a typical satellite altitude of 800 km, observing a target on Earth with a resolution of 30 m, keeping the error below 0.3 pixel in automatic geolocation requires an extremely high pointing knowledge accuracy. To stabilize the platform at this level of accuracy and be able to determine the actual absolute pointing with such a level of accuracy, the satellite must have a very good attitude and orbital control system (AOCS), including both the determination and the control of the spacecraft attitude. Platform vibrations are especially critical for instruments having high spatial resolution, and the determination of the instantaneous spacecraft attitude for each instantaneous acquisition becomes even more critical. Knowledge of the absolute spacecraft attitude angles (and thus the corresponding instrument pointing angles) is achieved by means of devices such as star trackers, Sun sensors, Earth's horizon sensors, magnetometers, gyroscope readings, or multiple GPS positioning at several places on the spacecraft. For Earth observation systems, particularly at high spatial resolution, absolute pointing knowledge is more relevant than absolute pointing capabilities, because out-of-nominal conditions can be corrected by processing if deviations are known. Current star tracker systems are solid-state systems that are able to track many stars simultaneously, match the tracked stars with an internal catalog, and then compute the satellite attitude with respect to the celestial reference frame. A star tracker is basically a digital camera with a pixel-array detector. Typical star trackers can update attitude information at a rate between 0.5 and 10 Hz, providing an accuracy of a few arcseconds in the boresight direction but with larger errors for rotation about such direction. Each spacecraft tends to have several star trackers pointing in perpendicular directions, but given the limited accuracy (noise in the measurements) and limited update rate of attitude determinations, the data have to be filtered over time – in most cases using Kalman-type filters – to produce reliable instantaneous estimates. Sun–Earth sensors, three-axis magnetometers, and rate-integrating gyroscopes provide additional attitude information. The final achieved accuracy in spacecraft attitude information for the geometrical processing of the image data is enough for rather low-resolution sensors (over 50 m ground pixel), but limitations exist for very high resolution images in the order of meter resolution due to high-frequency platform vibration modes. Typical spacecraft attitude control systems include reaction wheels and control moment gyros, or take advantage of external torques either induced by gravity gradients or, more commonly, magnetic torques induced by gradients in Earth's magnetic field, although aerodynamic torques are also applied in some cases, particularly for low Earth orbits. Solar radiation pressure torques must also be considered and accounted for. Mass-expulsion torques (thrusters) can be used for major changes in orbital conditions. Attitude stability is the main driver in the geometric quality of remote sensing images.

The applicability of a general physical transformation model to every existing Earth observation satellite is possible because for every satellite there is always available information to compute the satellite position (23–25) and (nominal) attitude at each given time at which an image is acquired. The North American Aerospace Defense Command's (NORAD's) two-line element (TLE) sets provide the basic orbital parameters to determine the position and velocity of each one of the spacecraft tracked by the United States Space Surveillance Network, covering almost all satellites orbiting the Earth (26). The orbital elements are mean values obtained by removing periodic variations using a particular orbital and Earth gravity model (SGP4). Accurate orbit predictions and determinations are only obtained if the model used computes such periodic variations in exactly the same way they were removed when determining the mean orbital elements. Some users tend to use a more sophisticated orbital model than the one used in TLEs, but this does not necessarily produce better results. In fact, TLEs are not the only source of orbital information, but their advantage is that such TLE information is available for every satellite, while other sources are only available for a given satellite family or agency. In any case, the mathematical orbital model used to compute satellite position and nominal attitude must always be consistent with the source of orbital information.

In practical cases, to achieve the needed accuracy, it is quite common to use some GCPs to refine the automatic geo-referencing of the images (27, 28). For multitemporal data, the usual approach is to use a reference image and automatically register all other images to such a reference, typically by cross-correlation. In this last case, temporal decorrelation of the images due to changes in Earth surface conditions must be taken into account by selecting an adequate reference image dataset accounting for seasonal variability, or by using relaxation methods able to cope with such spatial decorrelation as the temporal distance to the reference image increases.

The case of the advanced very high resolution radiometer (AVHRR) data deserves special attention because of the extended usage of these data in many applications and the peculiarities of the geometrical processing of such data. National Oceanic and Atmospheric Administration
(NOAA) satellites have provided one of the longest records of global information about terrestrial conditions over several decades, becoming a key dataset in the study of the dynamics of vegetation. Such AVHRR data represent a difficult case of geometric processing because of the continuous circular scanning perpendicular to satellite motion. Each scan provides a line of the image, while satellite motion provides the successive lines of the image. The circular scanning allows use of the same detector for all pixels of the image, which is convenient for radiometric reasons, but produces a very peculiar geometry in the original images. Given the extended usage of AVHRR data, the correction of such distortions and the projection of output images into a common cartographic reference have been the subject of many studies (29, 30) and have served as a correction model for other similar cases, like the along-track scanning radiometer (ATSR) and advanced ATSR (AATSR) on board the ERS and ENVISAT satellites.

Some other specific cases also require special considerations. For instance, the Sea and Land Surface Temperature Radiometer (SLSTR) instrument on board Sentinel-3 captures dual-angle images by the same detectors using a dual conical scanning system: one view close to nadir and another with an extreme oblique backward angle of 55°. The two angular images must be co-registered for the applications, but given the conical scanning, the very different angular views, and the very different associated spatial resolution in ground coordinates for the nadir/off-nadir pixels, the procedures described previously require some adaptations.

The space-borne case corresponding to multiangular measurements deserves particular attention. Examples of this case are the multiangular imaging spectroradiometer (MISR) on board NASA/Terra and the compact high-resolution imaging spectrometer (CHRIS) on board the Project for On-Board Autonomy (PROBA) satellite. MISR provides nine angular views of the same target along the orbital track in a systematic manner through nine different cameras. CHRIS is fully programmable but typically provides five angular views of the same target by pointing the satellite accordingly along-track and across-track as needed to get access to the selected site (example shown in Figure 1). Moreover, CHRIS/PROBA uses a motion compensation technique to increase the SNR, which basically consists of acquiring the images by rotation of the satellite along a given axis to scan the Earth's surface at a speed slower than the actual spacecraft ground speed. The combination of multiple off-nadir angular views and the motion compensation technique makes the geometric processing of such data very specific. However, automatic techniques similar to those previously described have been used successfully in these cases and are routinely applied in such multiangular data processing schemes (31–33).

7.2. The Airborne Case

Airborne sensors are also very common in remote sensing, particularly for local applications requiring very high spatial resolution over limited areas. Airborne sensors are also very relevant, as they are often used to simulate and test capabilities for future space-borne sensors. In the case of airborne sensors, the same basic principles for geometrical processing previously discussed also apply, but with the limitations of position and stability that airborne platforms can offer versus orbital observations (34).

While airborne sensors can also have accurate positioning via GNSS systems, the pointing capabilities of airborne systems are much more limited than for the satellite case. First of all, the motion conditions for airborne sensors tend to be much more unstable than in the satellite case due to flight speed and wind/turbulence effects depending on the flight altitude. Star tracker systems are not possible for airborne sensors, so inertial navigation systems (INSs) and multiple GNSS receivers are used to determine the approximate instantaneous attitude of the aircraft in order to compute the line of sight for each given pixel in the image. Special airborne cases deserve particular attention, like the case of very high altitude aircraft (i.e., AVIRIS – Airborne Visible-InfraRed Imaging Spectrometer, flying on the ER-2), where the stability of the platform and flight conditions allow some simplification in the geometrical procedures, as opposed to the case of free-flying balloons or very low altitude aircraft, which are exposed to high-frequency fluctuations in position and attitude due to wind/turbulence. A detailed treatment of each case is out of the scope of this article, but it must be kept in mind that each specific sensor/aircraft typically requires a dedicated specific approach, even if the general principles for geometric processing remain valid for all cases.

The recent development of unmanned aerial vehicles (UAVs) and the quite significant advances in the geometric processing of such UAV data using automatic methods (35) deserve attention. Although legal regulations and permission rules are pending, the technology is evolving quite rapidly. UAV instruments, which include not only RGB cameras but also sophisticated spectrometers and even active sensors, are becoming quite interesting tools for remote sensing at very high spatial resolution (even cm) and can be easily deployed over limited geographical areas of interest. The instability of UAV platforms makes the geometric processing of the data very difficult. The geometrical rectification of UAV remote sensing images is usually done by modelling the platform position and attitude using only the instantaneous available information, or with a more sophisticated approach using GCPs through post-processing of the data.

A critical issue in both the airborne and UAV cases is that of temporal synchronization between the GNSS and attitude data from the platform and the imaging data acquisition system. As technology improves, it is expected that such current issues can be solved and a fully automatic geometrical processing of the data can become possible. For the moment, to compensate for any potential distortion, either temporal synchronization issues or imperfect position or attitude determination, the usage of some ground references is quite common to achieve the desired accuracy in the output products.

8. VALIDATION OF THE GEOMETRICAL PRECISION IN DATA PROCESSING AND FINAL MAP ACCURACY

Evaluating the achieved accuracy is a requisite for cartographic applications of remote sensing data and those requiring integration of the outputs with a cartographic reference, often in a GIS environment. Specific rules for
the accuracy needed in the final cartographic product as a In practice, the best validation comes from the usage of
function of the spatial scale or pixel resolution are available the data in applications requiring multisource inputs. In
from international standard organizations (36), and the current practices, remote sensing images are in most cases
output of the geometrical processing must comply with integrated (often in a GIS environment) with many other
such rules in terms of required spatial resolution of input data sources, some of them in higher spatial accuracy, so
data for a given output map scale and tolerated errors in such multisource data integration serves per se as a way of
horizontal and vertical displacements associated to each evaluating the performance of the results of the geometri-
spatial scale (37). cal processing for remote sensing images.
The required spatial resolution is associated to the differ-
ent cartographic scales, as well as the precision needed in the
9. SPATIO-TEMPORAL MULTISOURCE DATA
final cartographic products. In classical paper, cartography
INTEGRATION
with an output pixel size of 0.2 mm is assumed, so that the
input image spatial resolution determines the output map
Integration of multiple images acquired by different satel-
scale. For instance, to produce a map at 1:50,000 scale, a
lite systems is often needed to satisfy the needs of a given
pixel size of 10 m may be appropriate, while to produce the
application. More and more final applications make use of
map at a scale 1:25,000, a pixel size of 5 m would be neces-
several remote sensing data streams to provide inputs to a
sary. A pixel size of 20 m is typically used for 1:100,000
physical/statistical model or data assimilation scheme.
mapping. Obviously, not only pixel size but also radiometric
While geometrical processing of the data is relevant also
issues and mostly the spatial stability and spatial uniformity
for applications using only a single data stream from a
of the whole image matter for cartographic mapping. In
given satellite system, the tendency to integrate multiple
digital cartography and GISs usage of the data, such values
data sources in different resolutions is forcing the limits in
are somehow relative, as different resolution images are
terms of rigorous mathematical modeling of geometrical
usually combined. However, even if digital maps can be
effects and accuracy achieved in the geometrical integra-
represented at any spatial scale, still the intrinsic accuracy
tion (41). Often the data integrated have very different
of the map is linked to the spatial scale, so geometrical errors
acquisition geometry (i.e., oblique viewing or large-angle
in the map still correspond and are acceptable at a given
conical scanning versus nadir pointing) and, obviously,
scale and will probably translate into unacceptable geomet-
accurate geometrical data integration is a key requisite
rical errors at finer scales.
in multisource applications.
On the other hand, remote sensing images are not always used for cartographic mapping; in many cases they are also used for thematic mapping associated with land cover change detection or biophysical parameter retrieval. In such cases, the mapping requirements are not tied to the geometrical accuracy of the position of each output pixel but to the size of the minimum area that can be mapped as a land cover type, or to the mapping area with which a given attribute derived from the image can be associated. The "minimum mapping unit" is the smallest detail to be mapped, and it determines the scale requirement for thematic mapping with remote sensing data using a given image type. For instance, using remote sensing images of 5 m resolution, the typical scale for thematic mapping would be 1:10,000, with a minimum mapping unit of 0.02 ha. For a 1:40,000 thematic mapping scale, a pixel size of 20 m would be adequate, providing a minimum mapping unit of 0.36 ha.
The validation of the accuracy of the output cartographic products can be made in several ways (37–40). The most common approaches are the use of ground test points (defined similarly to GCPs, but now used to test the results) or measuring the geometrical cross-correlation with a reference image of the same area assumed to be the truth. The resulting geometrical error is expressed in terms of spatial sampling, that is, in units of pixel size. An error on the order of 1 pixel or below is acceptable (<0.3 pixel is ideal), but it is also important to report the statistical distribution of the errors and their spatial distribution across the image. The tendency to move away from scanning (whiskbroom) systems toward pushbroom systems has made images spatially more stable and more suitable for applications requiring precise geometry.

9.1. Spatial Mosaicking

Single image acquisitions by satellite systems usually do not include the whole area of interest for a given application. This is true not only for global or continental level mapping; even for regional applications, several satellite images must be combined to form a mosaic covering the whole area of interest.
Satellite data acquisitions are typically done along strips of limited width (swath). The width varies from hundreds of meters for very high resolution systems to thousands of kilometers for low-resolution systems. The main constraint on swath size is the limited onboard memory and data transmission capabilities. Since the total data volume is limited, an increase in spatial coverage (swath size) typically comes at the expense of reduced spatial resolution. For many applications, mainly those requiring high-resolution data, a single strip is not enough to cover the study area and several images have to be "mosaicked." A large buffer image is defined and all its pixels are set to zero. Then, each single image is referenced over the large frame background. When two single images overlap, a decision has to be made about how to combine both pixels to define the unique value in the mosaic.
Accurate geometric registration of each single image forming the mosaic is not enough to make the mosaic look like a single image. The original images are acquired under different viewing geometries and illumination conditions, so corrections are needed in order to avoid artifacts at the boundaries between the original single images. Since the images are acquired at different times, motion or changes in objects (e.g., clouds) can result in discontinuities.
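Returning to the minimum mapping unit figures quoted earlier (1:10,000 at 5 m, 1:40,000 at 20 m), they follow a simple rule of thumb that can be made explicit. In this sketch the 2000 × pixel-size scale denominator and the 3 × 3-pixel minimum block are assumptions chosen to be consistent with those figures, not a universal standard:

```python
def mapping_scale_and_mmu(pixel_m, kernel=3):
    """Rule-of-thumb thematic mapping scale and minimum mapping unit (MMU).

    Assumed heuristics: the map scale denominator is about 2000 x the pixel
    size, and the MMU is a kernel x kernel block of pixels.
    """
    scale_denominator = 2000 * pixel_m
    mmu_ha = (kernel * pixel_m) ** 2 / 10_000  # m^2 -> hectares
    return scale_denominator, mmu_ha

print(mapping_scale_and_mmu(5))   # (10000, 0.0225)  i.e., ~0.02 ha at 1:10,000
print(mapping_scale_and_mmu(20))  # (40000, 0.36)    i.e., 0.36 ha at 1:40,000
```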
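The zero-filled buffer and overlap-decision procedure described above can be sketched in a few lines. This is a minimal illustration with NumPy; the array shapes, placement offsets, zero-as-no-data convention, and the "average" overlap rule are illustrative assumptions, not a standard implementation:

```python
import numpy as np

def place_in_mosaic(mosaic, image, row0, col0, rule="average"):
    """Insert a georeferenced image into a zero-initialized mosaic buffer.

    Pixels equal to 0 are treated as 'no data'. Where the incoming image
    overlaps already-filled mosaic pixels, an overlap rule decides the value.
    """
    r1, c1 = row0 + image.shape[0], col0 + image.shape[1]
    window = mosaic[row0:r1, col0:c1]  # view into the buffer
    filled = window != 0               # mosaic pixels already set
    valid = image != 0                 # incoming pixels carrying data
    overlap = filled & valid
    if rule == "average":              # blend both observations
        window[overlap] = (window[overlap] + image[overlap]) / 2.0
    elif rule == "last":               # incoming image wins
        window[overlap] = image[overlap]
    window[~filled & valid] = image[~filled & valid]  # fill empty pixels
    return mosaic

# Two 3x3 "images" placed with a one-column overlap in a 3x5 buffer
mosaic = np.zeros((3, 5))
mosaic = place_in_mosaic(mosaic, np.full((3, 3), 10.0), 0, 0)
mosaic = place_in_mosaic(mosaic, np.full((3, 3), 20.0), 0, 2)
print(mosaic[0])  # the overlap column holds the blended value 15.0
```

In practice the overlap rule is often a feathered (distance-weighted) blend rather than a plain average, which reduces visible seams between scenes.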
Simple image-processing techniques are often used (local histogram equalization, plus local cross-correlation and linear compositing across the overlaps) to improve the appearance. However, physically based methods are preferred to compensate for the perturbing effects, especially if the data have to be used in numerical studies or as input to physical models after the mosaic images have been produced.

9.2. Temporal Composites

Monitoring surface conditions by remote sensing implies the use of multitemporal data. In fact, the main advantage of remote sensing is the capability to provide time series of images systematically acquired over a given target. Obviously, the geometric registration among all the multidate images must be within 1 pixel for the multitemporal composites to be meaningful.
Many of the most important applications of remote sensing data are based on the detection of changes in surface conditions, due either to natural biological dynamics (e.g., vegetation growth), abrupt changes such as crop harvesting, forest fires, or natural disasters, anthropogenically induced changes such as urban area expansion or the development of transportation lines, or global environmental changes induced by climate effects. All such applications require accurate geometrical registration of the time series of satellite images used to monitor the changes, to avoid misinterpreting artifacts introduced by imperfect geometrical co-registration of the images as real changes.

10. PERSPECTIVES IN NEW DATA SOURCES AND DATA PROCESSING METHODOLOGIES

The field of Earth observation by orbiting satellites is evolving quite rapidly. On the one side, space agencies continue to develop more complete Earth-observing systems with increasing spectral, spatial, and temporal capabilities, oriented mostly toward scientific studies and administrative services. On the other side, private companies are developing more and more commercial systems, particularly at very high spatial resolutions of even less than 1 m, which are highly valued by end-user applications. Increasing technical capabilities and application fields are making such remote sensing technologies evolve to rapidly cover a wide range of cases, from global mapping with satellites in different orbits up to very precise local mapping using UAVs, which basically exploit the same imaging technologies used on satellites but at much finer spatial resolution.
One key element in the development of geometric processing methods for remote sensing data is the increasing amount of data to be processed (42). The systematic availability of large amounts of image data on a routine basis is now the core driver in most applications.

10.1. Perspectives in Data Sources and Implications for Data Processing

Since 1972, Landsat satellites have continuously acquired space-based images of the Earth's land surface, providing a valuable data source for many applications. This large and consistent archive, combined with a no-cost data policy, has motivated massive usage of these data time series over extensive geographical areas to monitor land surface changes over time. Currently, Landsat 8 and Landsat 7 together acquire over 1200 new images per day, which poses a challenge to the geometrical processing of such an amount of data.
The European Space Agency's (ESA's) Copernicus Sentinel-2 system flies a constellation of two identical satellites in complementary orbits, providing global coverage of the Earth's land surface every 10 days with a single satellite and every 5 days with both satellites together. The multispectral imager instrument provides systematic acquisitions on a global scale, with 10/20 m spatial resolution. The standard Level-1C product delivered by this mission includes top-of-atmosphere reflectances in a fixed cartographic geometry (UTM projection combined with the WGS84 ellipsoid) in tiles of 100 km × 100 km, each of which is approximately 500 MB (JPEG 2000 compressed). The instrument acquires, and has to transmit, 1.6 TB of data per orbit, using ground stations and high data-rate laser links via the geostationary telecommunications satellite Alphasat and the European Data Relay Satellite (EDRS) system. This results in 800 GB per day of compressed raw data, that is, 400 TB per year from a single satellite. With two satellites operating simultaneously, hopefully over several decades, the Sentinel system represents a challenge for geometric processing, given the global availability of very high spatial resolution (10 m) data and the accuracy required in the geometrical processing for such applications.
Adding the many other very high spatial resolution data sources expected in the coming years from other space agencies and private companies, the amount of data to be processed will increase exponentially, and its exploitation will also grow given the tendency toward free availability of such remote sensing data for final users. Moreover, more and more end-user applications do not rely on a single satellite or remote sensing system; instead, several sources of data with different spatial and temporal resolutions are combined to obtain the desired outputs. Precise geometrical integration of such multisource data is the driver that makes these applications possible.

10.2. Specific Methodologies for Operational Processing of Large Data Volumes

While classical geometric processing of remote sensing data was traditionally based on the usage of GCPs, in most cases defined by the user, several technological improvements have resulted in an increased tendency toward more and more automatic methods: the quite precise determination of the position and attitude of the observing platforms, the availability of DEMs describing the 3D structure of the observed surface, and the computational capabilities that allow the implementation of automatic methods over large amounts of data.
The availability of large amounts of data freely available to users, together with the increasing use of remote sensing data in administrative applications and end-user practices, has motivated automatic methods for handling the geometrical processing of
the data that are robust enough to be applied to very different spatial resolution systems and to every location on Earth.
The development of data processing techniques toward more and more automatic methods is becoming essential, and machine learning techniques, often developed earlier in other science domains, are being implemented for the processing of remote sensing data to cover these needs, to speed up the processing procedures, and to handle huge amounts of data. Big-data techniques, parallel computing, cloud computing, and other technological developments contribute to this success, but the increasing user needs and improved technological capabilities continuously create a challenge for the accurate geometrical processing of remote sensing data.

BIBLIOGRAPHY

1. P. M. Mather. Computer Processing of Remotely-Sensed Images, 3rd ed.; Wiley: UK, 2004.
2. J. Le Moigne, N. S. Netanyahu, and R. D. Eastman, Eds. Image Registration for Remote Sensing; Cambridge University Press: Cambridge, 2011.
3. R. Bernstein et al. In Manual of Remote Sensing, 2nd ed.; Colwell, R. N., Ed.; American Society of Photogrammetry: Falls Church, VA, 1983; Vol. I, Chapter 21, pp 873–922.
4. T. Toutin. Int. J. Remote Sens. 2004, 25 (10), pp 1893–1924.
5. P. N. Slater. Remote Sensing: Optics and Optical Systems; Addison-Wesley: Reading, MA, 1980.
6. R. Bernstein. IBM J. Res. Develop. 1976, 20, pp 40–57.
7. A. Bannari, D. Morin, G. B. Benie, and F. J. Bonn. Remote Sens. Rev. 1995, 13, pp 27–47.
8. L. M. G. Fonseca and B. S. Manjunath. Photogramm. Eng. Remote Sens. 1996, 62 (9), pp 1049–1056.
9. W. A. Gruen. S. Afr. J. Photogramm. Remote Sens. Cartogr. 1985, 14, pp 175–187.
10. D. Poli and T. Toutin. Photogramm. Record 2012, 27, pp 58–73, doi: 10.1111/j.1477-9730.2011.00665.x.
11. Z. Xiong and Y. Zhang. Photogramm. Eng. Remote Sens. 2009, 75, pp 1083–1092.
12. T. Fuse, C. S. Fraser, and P. M. Dare. Int. Arch. Photogramm. Remote Sens. 2004, 35 (III/8), pp 601–605.
13. A. Goshtasby, G. C. Stockman, and C. V. Page. IEEE Trans. Geosci. Remote Sens. 1986, 24 (3), pp 390–399.
14. A. D. Ventura, A. Rampini, and R. Schettini. IEEE Trans. Geosci. Remote Sens. 1990, 28 (3), pp 305–314.
15. H. M. Chen, M. K. Arora, and P. K. Varshney. Int. J. Remote Sens. 2003, 24, pp 3701–3706.
16. S. K. Park and R. A. Schowengerdt. Comput. Vision Graphics Image Process. 1983, 23, pp 258–272.
17. R. G. Keys. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, pp 1153–1160.
18. J. Moreno and J. Melia. IEEE Trans. Geosci. Remote Sens. 1994, 32, pp 131–151.
19. G. A. Poe. IEEE Trans. Geosci. Remote Sens. 1990, 28, pp 800–810.
20. D. Baldwin, W. Emery, and P. Cheeseman. IEEE Trans. Geosci. Remote Sens. 1998, 36, pp 244–255.
21. P. Cheeseman, B. Kanefsky, R. Kraft, J. Stutz, and R. Hanson. In Maximum Entropy and Bayesian Methods; Heidbreder, G. R., Ed.; Kluwer: The Netherlands, 1996; pp 293–308.
22. A. E. Roy. Orbital Motion, 3rd ed.; Hilger: Bristol, 1988.
23. D. Brouwer. Astron. J. 1959, 64, pp 378–397.
24. P. R. Escobal. Methods of Orbit Determination; Wiley: New York, 1965.
25. J. Morrison and S. Pines. Astron. J. 1961, 66, pp 15–16.
26. F. R. Hoots and R. L. Roehrich. Models for Propagation of NORAD Element Sets; Spacetrack Rep. No. 3; NORAD, Aerospace Defense Command: Peterson AFB, CO, 1980.
27. V. Kratky. Int. Arch. Photogramm. Remote Sens. 1988, 27 (4), pp 180–189.
28. P. V. Radhadevi, T. P. Sasikumar, and R. Ramachandran. ISPRS J. Photogramm. Remote Sens. 1994, 49 (4), pp 22–28.
29. J. Moreno and J. Melia. IEEE Trans. Geosci. Remote Sens. 1993, 31, pp 204–226.
30. G. W. Rosborough, D. G. Baldwin, and W. J. Emery. IEEE Trans. Geosci. Remote Sens. 1994, 32 (3), pp 644–657.
31. M. J. Barnsley, J. J. Settle, M. A. Cutter, D. R. Lobb, and F. Teston. IEEE Trans. Geosci. Remote Sens. 2004, 42 (7), pp 1512–1520, doi: 10.1109/TGRS.2004.827260.
32. D. J. Diner, J. C. Beckert, T. H. Reilly, C. J. Bruegge, J. E. Conel, R. Kahn, J. V. Martonchik, T. P. Ackerman, R. Davies, S. A. W. Gerstl, H. R. Gordon, J.-P. Muller, R. Myneni, R. J. Sellers, B. Pinty, and M. M. Verstraete. IEEE Trans. Geosci. Remote Sens. 1998, 36, p 1072.
33. V. M. Jovanovic, D. J. Diner, and R. Davies. In Image Registration for Remote Sensing; Le Moigne, J. et al., Eds.; Cambridge University Press: Cambridge, 2011.
34. D. Schläpfer and R. Richter. Int. J. Remote Sens. 2002, 23 (13), pp 2609–2630.
35. I. Colomina and P. Molina. ISPRS J. Photogramm. Remote Sens. 2014, 92, pp 79–97.
36. USGS. National Mapping Program Standards, http://nationalmap.gov/gio/standards/
37. X. Dai and S. Khorram. IEEE Trans. Geosci. Remote Sens. 1998, 36, pp 1566–1577.
38. M. A. Aguilar, M. del Mar Saldaña, and F. J. Aguilar. Int. J. Appl. Earth Observ. Geoinform. 2013, 21, pp 427–435.
39. T. Toutin. Photogramm. Eng. Remote Sens. 2003, 69, pp 43–51.
40. J. Wang, Y. Ge, G. B. M. Heuvelink, C. Zhou, and D. Brus. Int. J. Appl. Earth Observ. Geoinform. 2012, 18, pp 91–100.
41. J. Moreno, S. Gandia, and J. Melia. IEEE Trans. Geosci. Remote Sens. 1992, 30, pp 1006–1014.
42. M. Halem. Proc. IEEE 1989, 77, pp 1061–1091.

JOSE MORENO
University of Valencia, Valencia, Spain