
SPATIAL INTERPOLATION

Spatial interpolation is the process of using points with known values to estimate values at other
points. In GIS applications, spatial interpolation is typically applied to a raster with estimates made
for all cells. Spatial interpolation is therefore a means of creating surface data from sample points.

Control Points: Control points are points with known values. They provide the data necessary for the development of an interpolator for spatial interpolation. The number and distribution of control points can greatly influence the accuracy of spatial interpolation.

Deterministic methods for spatial interpolation

There are two main groupings of interpolation techniques: deterministic and geostatistical.
Deterministic interpolation techniques create surfaces from measured points, based on either
the extent of similarity (inverse distance weighted) or the degree of smoothing (radial basis
functions). Geostatistical interpolation techniques (kriging) utilize the statistical properties of
the measured points. Geostatistical techniques quantify the spatial autocorrelation among
measured points and account for the spatial configuration of the sample points around the
prediction location.

Deterministic interpolation techniques can be divided into two groups, global and local. Global
techniques calculate predictions using the entire dataset. Local techniques calculate predictions
from the measured points within neighborhoods, which are smaller spatial areas within the larger
study area. Geostatistical Analyst provides global polynomial as a global interpolator, and inverse distance weighted, local polynomial, radial basis functions, kernel interpolation with barriers, and diffusion interpolation with barriers as local interpolators.

A deterministic interpolation can either force the resulting surface to pass through the data values
or not. An interpolation technique that predicts a value that is identical to the measured value
at a sampled location is known as an exact interpolator. An inexact interpolator predicts a value
that is different from the measured value. The latter can be used to avoid sharp peaks or troughs in
the output surface. Inverse distance weighted and radial basis functions are exact interpolators,
while global polynomial, local polynomial, kernel interpolation with barriers, and diffusion
interpolation with barriers are inexact.
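In symbols (notation added here for clarity): writing z(s_i) for the measured value at a sampled location s_i and ẑ for the predicted surface, an exact interpolator guarantees ẑ(s_i) = z(s_i) at every sampled location, whereas an inexact interpolator allows ẑ(s_i) ≠ z(s_i).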

Classification of spatial interpolation methods (Summary)

Global, deterministic:  Trend surfaces (inexact)
Global, stochastic:     Regression (inexact)
Local, deterministic:   Thiessen (exact), Density estimation (inexact),
                        Inverse Distance Weighted (exact), Splines (exact)
Local, stochastic:      Kriging (exact)

Fundamentals of interpolation
Interpolation comprises the set of mathematical and statistical procedures used to derive estimates at unsampled locations within the spatial coverage of the available sample values. Interpolation is considered an effective method of converting data from point observations to a continuous surface, whether it is represented as isopleths, irregular tiles, or a regular grid. Burrough and McDonnell (1998) state that interpolation is necessary in three broad situations. The case that reflects the current situation is that “the data we have do not cover the domain of interest completely.” Fundamentally, interpolation is rooted in Tobler’s First Law of Geography, which states that attribute values are more closely related to those of nearby features than to those of distant ones.
Interpolation methods can be subdivided in two distinct ways.
Firstly, techniques can be categorized as being either exact or inexact interpolators. Exact interpolators consider the original sample data to be an “exact” measurement of the true surface and thus retain those values in the predicted surface at their respective locations. Inexact interpolators consider that localized variation may be such that an original sample value is equivalent to a measurement of the true surface plus noise. New values are estimated at sample points, as well as at unsampled points, corresponding to the best-fitting surface.
The second classification of interpolation procedures is between global and local methods.
Global interpolation techniques utilize all available data points for the study area, whereas local
interpolation techniques only operate with the nearest data points within a defined neighborhood.

Regression methods are a common type of global interpolation. Regression techniques attempt to
establish a function that describes the relationship among attributes. Trend surface analysis does
this based solely on the geographical coordinates of the sample locations. Transfer functions are
based on the relationship between sample data and a second spatially variable attribute. In both

cases, the study area is modelled by a smooth mathematical function. For example, trend surface
analysis fits a polynomial function to the attribute versus geographical location surface using the
least squares method (Burrough and McDonnell, 1998). This approach can be very useful in
minimizing the cost of a study and maximizing the predictive performance of the surface model
by attempting to construct an empirical relationship between the variable of interest and a second
variable, which is either cheaper to map or for which data is already more readily available
(Burrough and McDonnell 1998).
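For instance, a first-order (planar) trend surface z = b0 + b1*x + b2*y can be fitted with ordinary least squares. A minimal sketch, assuming NumPy and using invented control-point data:

    import numpy as np

    # Hypothetical control points: coordinates (x, y) and attribute values z
    x = np.array([0.0, 1.0, 2.0, 0.5, 1.5])
    y = np.array([0.0, 0.5, 1.0, 2.0, 1.5])
    z = np.array([10.0, 12.0, 15.0, 11.0, 14.0])

    # Design matrix for the first-order polynomial z = b0 + b1*x + b2*y
    A = np.column_stack([np.ones_like(x), x, y])

    # Least-squares fit of the trend-surface coefficients
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

    # Evaluate the smooth (inexact) surface at an unsampled location
    x0, y0 = 1.0, 1.0
    z0 = coeffs @ np.array([1.0, x0, y0])
    print(z0)

Higher-order trend surfaces simply add columns (x^2, x*y, y^2, and so on) to the design matrix.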

Burrough and McDonnell (1998) define four steps in any local interpolation procedure. Firstly, the neighbourhood around a prediction point must be defined. Secondly, the existing sample data points within this search area must be identified. Thirdly, a mathematical function must be applied to represent the variation among these limited data points. Fourthly, this function must be evaluated for a point on the grid being used. All the grid points can be calculated in this manner.
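A minimal sketch of that four-step recipe, assuming SciPy; the local function here is just an unweighted mean of the k nearest neighbours, chosen purely for illustration:

    from scipy.spatial import cKDTree

    def local_interpolate(xy_samples, z_samples, xy_grid, k=5):
        """Estimate grid values from nearby control points (NumPy arrays)."""
        # Step 1: define the neighbourhood, here as the k nearest control points
        tree = cKDTree(xy_samples)
        # Step 2: identify the sample points falling inside each neighbourhood
        _, idx = tree.query(xy_grid, k=k)
        # Steps 3 and 4: apply a mathematical function to those neighbours
        # (an unweighted mean here; IDW or a spline could be substituted)
        # and evaluate it for every grid point
        return z_samples[idx].mean(axis=1)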

Local interpolation procedures can be categorized into three major classes: nearest neighbours, inverse distance weighting, and splines. However, there are also a few less commonly used methods. The most common form of nearest neighbour interpolation is the Thiessen polygon method (Thiessen, 1911), in which each predicted grid cell is assigned the same value as its closest neighbour. The resulting “surface” of discrete polygons intuitively seems unsuited to the development of a precipitation field, yet for almost 50 years it was the predominant interpolation procedure, even for prediction (Pardo-Iguzquiza, 1998). It was superseded by the development of geostatistics in the 1960s.
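Because every prediction point simply inherits the value of its closest control point, the Thiessen method can be reproduced with a nearest-neighbour interpolator. A minimal sketch, assuming SciPy and invented data:

    import numpy as np
    from scipy.interpolate import NearestNDInterpolator

    # Hypothetical control points (e.g., rain gauges) and their values
    points = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0]])
    values = np.array([5.0, 9.0, 7.0])

    # Each grid cell takes the value of its nearest sample, producing the
    # discrete-polygon "surface" described above
    thiessen = NearestNDInterpolator(points, values)
    gx, gy = np.meshgrid(np.linspace(0.0, 3.0, 4), np.linspace(0.0, 3.0, 4))
    print(thiessen(gx, gy))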

Inverse Distance Weighting (IDW) improves upon the Thiessen method by including the effects of more than one neighbour and weighting their influence based on proximity. Weighting is based on inverse distance: nearer stations hold greater influence than more distant ones, in accordance with Tobler’s law. The technique can be adjusted by altering the power to which distance is raised and the number of neighbours considered. As the power increases, the influence of a sample value falls off more rapidly with distance. Most commonly, a power of two is used (inverse squared distance) (Goovaerts, 2000). The number of neighbours considered is regulated either by explicitly choosing the number or by setting a search radius for each interpolated point, either of which must be defined a priori. Splines are piece-wise functions that fit polynomial curves to small numbers of local points.
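A compact sketch of IDW (our own illustration, not from the text), assuming NumPy and SciPy; the estimate at a target point is sum(w_i * z_i) / sum(w_i) with weights w_i = 1 / d_i^p:

    import numpy as np
    from scipy.spatial import cKDTree

    def idw(xy_samples, z_samples, xy_targets, power=2, k=6):
        """Inverse distance weighted estimates at the target points.

        power -- exponent p in w_i = 1 / d_i**p (2 gives inverse squared distance)
        k     -- number of nearest neighbours considered (defined a priori)
        """
        dist, idx = cKDTree(xy_samples).query(xy_targets, k=k)
        dist = np.maximum(dist, 1e-12)  # guard against division by zero at samples
        weights = 1.0 / dist ** power
        return (weights * z_samples[idx]).sum(axis=1) / weights.sum(axis=1)

Raising power concentrates influence on the nearest stations, while the clamp on zero distances keeps the estimator effectively exact at sampled locations.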

Geostatistical methods of interpolation


Geostatistics originates from the work of the French mathematician G. Matheron and the South African mining engineer D.G. Krige, who worked on developing an optimal interpolation procedure based on regionalized variables for application in mining. Consequently, geostatistical techniques are also known as “kriging”. Geostatistical methods are founded on the principles of statistical spatial autocorrelation. They are suited to situations where the variation within, or density of, sample attribute data is too great to be adequately modelled by simpler methods. Another advantage is that these methods provide a probabilistic estimate of the quality of the interpolation itself (Burrough and McDonnell, 1998).

When data sampling is sparse, it is crucial to adequately evaluate the validity of the assumptions made regarding the underlying variation in the data. Geostatistical interpolation techniques are based on the acknowledgement that modelling by means of a simple, smooth mathematical function is often inappropriate for spatially continuous variables because the spatial variation is too irregular. Geostatistical methods divide spatial variation into three components: (a) deterministic variation (different levels of trend) that can be treated as useful, soft information; (b) spatially autocorrelated but physically difficult-to-explain variation; and (c) uncorrelated noise (Burrough and McDonnell, 1998).
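In the notation of Burrough and McDonnell (1998), this decomposition of a spatial variable Z at location x is commonly written as

    Z(x) = m(x) + ε′(x) + ε″

where m(x) is the deterministic trend (component a), ε′(x) is the spatially autocorrelated but hard-to-explain component (b), and ε″ is the uncorrelated noise (c).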

Geostatistical methods of spatial estimation are an improvement over the simpler Thiessen methods because using information from surrounding samples produces estimates of greater precision, and the estimation variance, which provides an uncertainty measurement for the estimates, is minimized (Pardo-Iguzquiza, 1998). Geostatistical procedures are also an advancement over other interpolation methods because their weights are chosen a priori such that the interpolated surface is optimized, providing a Best Linear Unbiased Estimate (BLUE). Earlier interpolation methods did not provide any means of determining whether or not the best weighting values had been chosen. As a result of these improvements, geostatistical approaches have become the standard tool for interpolation (Martinez-Cob, 1996; Pardo-Iguzquiza, 1998; Goovaerts, 2000).

Many geostatistical techniques exist. ESRI (2002) provides the following definitions of the three most common forms of kriging:

Ordinary kriging produces interpolation values by assuming a constant but unknown mean value, allowing local influences due to nearby neighbouring values. Because the mean is unknown, there are few assumptions. This makes ordinary kriging particularly flexible, but perhaps less powerful than other methods.
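As an illustration only, ordinary kriging can be run with the third-party PyKrige package; the sample coordinates, values, and the spherical variogram model below are assumptions chosen for this sketch:

    import numpy as np
    from pykrige.ok import OrdinaryKriging

    # Hypothetical control points: coordinates and measured values
    x = np.array([0.1, 1.2, 2.4, 0.8, 1.9])
    y = np.array([0.3, 1.9, 0.6, 2.2, 1.1])
    z = np.array([4.5, 6.1, 5.2, 7.0, 5.8])

    # Ordinary kriging assumes a constant but unknown mean internally;
    # the spherical variogram is one common model choice
    ok = OrdinaryKriging(x, y, z, variogram_model="spherical")

    # Predict on a regular grid; ss is the kriging variance, i.e. the
    # probabilistic quality estimate mentioned earlier
    gridx = np.linspace(0.0, 2.5, 6)
    gridy = np.linspace(0.0, 2.5, 6)
    z_pred, ss = ok.execute("grid", gridx, gridy)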

Simple kriging produces interpolation values by assuming a constant but known mean value,
allowing local influences due to nearby neighbouring values. Because the mean is known, it is
slightly more powerful than ordinary kriging, but in many situations the selection of a mean value
is not obvious.

Universal kriging produces interpolation values by assuming a trend surface with unknown coefficients, while allowing local influences due to nearby neighbouring values. It is possible to overfit the trend surface, which does not leave enough variation in the random errors to properly reflect uncertainty in the model. When used properly, universal kriging is more powerful than ordinary kriging because it explains much of the variation in the data through the nonrandom trend surface. Other forms include indicator kriging, probability kriging, and disjunctive kriging; more information on these methods can be found in Burrough and McDonnell (1998). Cokriging is a multivariate form that uses information on more than one co-regionalized variable. It can be especially useful when there is a second attribute that is more densely measured or cheaper to measure. However, the second attribute must be measured at a greater number of points; otherwise, cokriging will not result in any improvement over univariate kriging methods. Furthermore, there must be a high degree of correlation between the two variables (Eastman, 1999; Goovaerts, 2000).

Watch these videos:

Interpolation using kriging methods

https://youtu.be/wxPNOAK2NOc

IDW method

https://youtu.be/_qP2NvQ01ak

Thiessen polygon video

https://youtu.be/7kCJIWIByMQ
