ch11 Kriging
Kriging is a spatial prediction method with attractive statistical properties: it is the best linear unbiased estimator (BLUE). The method was first developed by G. Matheron in 1963, published in two volumes in French. Matheron named the method after the South African mining engineer D.G. Krige, who in the 1950s developed methods for determining ore grades, although Matheron's specific prediction method has little to do with Krige's (see Cressie 1990 for the history).
Kriging shares the same weighted linear combination estimator as the methods in the last chapter:

    ẑ_0 = Σ_{i=1}^{n} w_i z_i        (*)

where z_i is the sample value at location i, w_i is a weight, and n is the number of samples.
As we will show next, estimators of the above form are unbiased as long as the weights sum to 1. The distinguishing feature of kriging, therefore, is its aim of minimizing the error variance.
Many kriging methods have been developed for different prediction purposes, e.g., block kriging, universal kriging, cokriging, etc. Here we concentrate only on the most basic one: ordinary kriging.
The estimation error at location s_0 is the difference between the predictor and the random variable modeling the true value at that location:

    R_0 = Ẑ_0 − Z_0 = Σ_i w_i Z_i − Z_0

Taking expectations under stationarity, E(R_0) = Σ_i w_i E(Z_i) − E(Z_0) = (Σ_i w_i − 1)μ. So, as long as Σ_i w_i = 1, the weighted linear estimator (*) is unbiased. All the methods in Chapter 10 meet this condition and thus are unbiased. However, unbiasedness tells us nothing about how to determine the weights w_i.
Minimizing error variance
Kriging determines the weights so that the mean squared error (MSE) is minimized:

    MSE = E(Ẑ_0 − Z_0)²

subject to the unbiasedness constraint Σ_i w_i = 1. Because the predictor is unbiased, the MSE equals the error variance:

    E(Ẑ_0 − Z_0)² = var(Ẑ_0 − Z_0)
                  = var(Ẑ_0) + var(Z_0) − 2 cov(Ẑ_0, Z_0)
                  = var(Σ_i w_i Z_i) + var(Z_0) − 2 cov(Σ_i w_i Z_i, Z_0)
                  = Σ_i Σ_j w_i w_j C_ij + σ² − 2 Σ_i w_i C_i0
Once we have chosen a data-generating model (through a covariogram or variogram), the MSE could in principle be minimized by setting the n partial first derivatives to 0 and solving the n simultaneous equations for the weights w_i. However, this procedure does not quite work for our problem, because we only want solutions that meet the unbiasedness condition.
Minimizing error variance using the Lagrange multiplier
The Lagrange multiplier is a useful technique for converting a constrained minimization problem into an unconstrained one. We add the term 2λ(Σ_i w_i − 1), which is zero whenever the constraint holds, to the MSE and minimize over the w_i and λ jointly. Taking the first weight w_1 as an example:

    ∂MSE/∂w_1 = 2 Σ_{j=1}^{n} w_j C_1j − 2 C_10 + 2λ = 0

which simplifies to

    Σ_{j=1}^{n} w_j C_1j + λ = C_10        (**)
Writing the n such equations together with the constraint in matrix form, C w = D:

    | C_11 ... C_1n  1 |   | w_1 |   | C_10 |
    | ...  ... ...  ...| × | ... | = | ...  |
    | C_n1 ... C_nn  1 |   | w_n |   | C_n0 |
    | 1    ... 1     0 |   |  λ  |   | 1    |

    (n+1)×(n+1)          (n+1)×1    (n+1)×1

The solution is w = C⁻¹D.
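The system above can be assembled and solved numerically. The sketch below is illustrative only (Python rather than the chapter's R workflow): it assumes an isotropic exponential covariogram C(h) = σ²·exp(−h/a) with made-up parameters and sample locations, builds the augmented matrix, and solves for the weights and the Lagrange multiplier.

```python
import numpy as np

# assumed exponential covariogram C(h) = sill * exp(-h / a); parameters are made up
def exp_cov(h, sill=1.0, a=100.0):
    return sill * np.exp(-h / a)

def ok_system(coords, target, cov=exp_cov):
    """Assemble and solve the ordinary kriging system C w = D."""
    n = len(coords)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = np.ones((n + 1, n + 1))
    C[:n, :n] = cov(h)               # sample-to-sample covariances
    C[n, n] = 0.0                    # corner element of the augmented matrix
    D = np.ones(n + 1)
    D[:n] = cov(np.linalg.norm(coords - target, axis=1))  # sample-to-target
    sol = np.linalg.solve(C, D)      # w = C^-1 D
    w, lam = sol[:n], sol[n]
    ok_var = cov(0.0) - (w @ D[:n] + lam)   # sigma^2_OK = C(0) - w'D
    return w, lam, ok_var

coords = np.array([[0.0, 0.0], [60.0, 10.0], [20.0, 80.0]])
w, lam, ok_var = ok_system(coords, target=np.array([30.0, 30.0]))
print(w.sum())    # the constraint row forces the weights to sum to 1
```

The last row of the augmented matrix is exactly the unbiasedness constraint, so the weights sum to 1 by construction rather than by rescaling.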
Estimating the variance of errors
Because the kriging predictor is unbiased, the variance of the prediction errors is just the MSE. From the first-derivative equation (**), Σ_j w_j C_ij + λ = C_i0, so the first term on the right-hand side becomes Σ_i Σ_j w_i w_j C_ij = Σ_i w_i C_i0 − λ, and

    MSE = σ² − Σ_i w_i C_i0 − λ

This variance is often called the ordinary kriging variance; in matrix form:

    σ²_OK = σ² − w'D

Note: σ² is simply C(0).
Interpretation of kriging
The kriging system may be better understood through the following intuitive interpretation. Two steps are involved in determining the linear weights of kriging:

1. The D vector provides a weighting scheme similar to that of the inverse distance method. The higher the covariance between a sample (indexed i = 1, 2, ..., n) and the location being estimated (indexed 0), the more that sample contributes to the estimate. As in an inverse distance method, the covariance (and hence the weight) between sample i and location 0 generally decreases as the sample gets farther away. The D vector therefore contains a type of inverse distance weighting in which the distance is not the geometric distance to the sample but a statistical distance.

2. What really distinguishes kriging from the inverse distance method is the C matrix. Multiplying D by C⁻¹ does more than simply rescale D so that the weights sum to 1. C records the (covariance) distances between all sample pairs, providing the OK system with information on the clustering of the available samples. C thus readjusts the sample weights according to their clustering: clustered samples are declustered by C.

Therefore, the OK system takes into account two important aspects of the estimation problem: distance and clustering.
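The declustering effect can be checked numerically. In the illustrative Python sketch below (a made-up configuration with an assumed exponential covariogram, not part of the chapter's R workflow), three samples sit roughly the same distance from the target, but two of them are nearly coincident; solving the OK system gives the isolated sample a larger weight than either member of the clustered pair.

```python
import numpy as np

def ok_weights(coords, target, sill=1.0, a=1.0):
    # assumed exponential covariogram C(h) = sill * exp(-h / a)
    cov = lambda h: sill * np.exp(-h / a)
    n = len(coords)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = np.ones((n + 1, n + 1)); C[:n, :n] = cov(h); C[n, n] = 0.0
    D = np.ones(n + 1); D[:n] = cov(np.linalg.norm(coords - target, axis=1))
    return np.linalg.solve(C, D)[:n]

# two clustered samples (A, B) and one isolated sample (C), all ~1 unit from the target
coords = np.array([[1.0, 0.0], [1.05, 0.0], [0.0, 1.0]])
w = ok_weights(coords, target=np.array([0.0, 0.0]))
print(w)   # the isolated third sample receives the largest weight
```

The near-duplicate pair shares the information it carries, so C⁻¹ splits (and slightly screens) its weight, exactly the declustering behavior described above.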
Ordinary kriging in terms of the variogram γ(h)
In practice, kriging is usually implemented using the variogram rather than the covariogram because it has better statistical properties (unbiased and consistent). From Chapter 9, γ(h) = C(0) − C(h), so C(h) = C(0) − γ(h). Substituting this covariogram into the unconstrained MSE above leads to

    MSE = Σ_i Σ_j w_i w_j (σ² − γ_ij) + σ² − 2 Σ_i w_i (σ² − γ_i0) + 2λ(Σ_i w_i − 1)
        = −Σ_i Σ_j w_i w_j γ_ij + 2 Σ_i w_i γ_i0 + 2λ(Σ_i w_i − 1).
As with the covariogram, the weights are obtained by setting the first derivatives with respect to the w_i to zero. The final kriging system in matrix notation, Γ w = D, is:

    | 0    γ_12 ... γ_1n  1 |   | w_1 |   | γ_10 |
    | γ_21 0    ... γ_2n  1 |   | w_2 |   | γ_20 |
    | ...  ...  ... ...  ...| × | ... | = | ...  |
    | γ_n1 γ_n2 ... 0     1 |   | w_n |   | γ_n0 |
    | 1    1    ... 1     0 |   |  λ  |   | 1    |

    (n+1)×(n+1)              (n+1)×1    (n+1)×1

The solution is w = Γ⁻¹D.
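The variogram form of the system can be sketched in the same way. The Python snippet below is illustrative only: it uses a spherical variogram with the sill and range fitted later in this chapter (cov.pars = c(0.09549, 130.71043)), but the sample locations and target are made up.

```python
import numpy as np

def spherical_gamma(h, sill=0.09549, rng=130.71043):
    """Spherical variogram: rises as 1.5(h/a) - 0.5(h/a)^3, then levels at the sill."""
    r = np.minimum(np.asarray(h, dtype=float) / rng, 1.0)
    return sill * (1.5 * r - 0.5 * r ** 3)

def ok_weights_variogram(coords, target, gamma=spherical_gamma):
    n = len(coords)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    G = np.ones((n + 1, n + 1))
    G[:n, :n] = gamma(h)          # zero diagonal, since gamma(0) = 0
    G[n, n] = 0.0
    D = np.ones(n + 1)
    D[:n] = gamma(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(G, D)   # w = Gamma^-1 D
    return sol[:n], sol[n]

coords = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 150.0], [80.0, 90.0]])
w, lam = ok_weights_variogram(coords, target=np.array([50.0, 50.0]))
ok_var = w @ spherical_gamma(np.linalg.norm(coords - [50.0, 50.0], axis=1)) + lam
print(w.sum())   # constrained to 1
```

Here ok_var is Σ w_i γ_i0 + λ, the variogram form of the ordinary kriging variance.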
Ordinary kriging variance in terms of the variogram γ(h)
Following the same steps as for the covariogram-based variance, the ordinary kriging variance in terms of the variogram is:

    σ²_OK = Σ_i w_i γ_i0 + λ = w'D
Checking and removing trends (making the data stationary)
Example: soil pH in the Gigante plot of Panama, using the full data set (soil.dat, 349 data points).
The data appear to have a trend in the northwest–southeast direction. To remove it, we fit the trend-surface model: z = 5.67 − 0.003295x + 0.001025y + 4.521e-6·x² + e. The terms y² and xy are not significant. The trend surface analysis appears to have detrended the data.
[Figure: maps of soil pH before and after detrending; color scale from low to high pH, x from 0 to 500 m, y from 0 to 800 m]
Has the trend really been removed?
We check further using variograms. The comparison of the variograms before and after detrending shows no remaining trend, so we are confident that the residuals of the trend surface analysis are likely stationary. We can now go on to do kriging.
>soil.geodat=as.geodata(soil.dat,coords.col=2:3,data.col=5,borders=T)
>variog.b0=variog(soil.geodat,uvec=seq(0,500,by=5), max.dist=500)
>plot(variog.b0)
>variog.b2=variog(soil.geodat,uvec=seq(0,500,by=5),trend="2nd",max.dist=500)
>plot(variog.b2)
Fitting a variogram
Several variogram models can be fitted to the data. For illustration purposes, only two models (the spherical and the logistic) are shown here. By visual inspection, the logistic model seems to capture the spatial autocorrelation better than the spherical model, particularly at short distance lags. However, the sigmoid shape of the logistic model may not reflect an intrinsic feature of the data. We will use the spherical model for kriging.

[Figure: empirical variogram with fitted spherical and logistic models.] Logistic model:

    γ(h) = 0.0001072 h² / (0.1065 + 0.0009258 h²)
R implementation using geoR: ordinary kriging
1. Compute the variogram while directly accounting for the trend (i.e., removing the 2nd-order trend; kriging will automatically add the trend back into the final prediction):
>variog.b2=variog(soil.geodat,uvec=seq(0,500,by=5),trend="2nd",max.dist=500)
2. Map the kriging predictions (pH.prd, the output of krige.conv) over the prediction grid prd.loc:
>pH.prd.mat=matrix(pH.prd$predict,byrow=T,ncol=84)
>image(unique(prd.loc$x),unique(prd.loc$y),t(pH.prd.mat),
xlim=c(-20,500),ylim=c(-20,820),xlab="x",ylab="y")
>lines(gigante.border,lwd=2,col="green")
>contour(pH.prd,add=T)
Evaluating the outputs of kriging prediction:

1. Independent data validation: compare the predicted values with observed data. The 13 samples in the table below were not included in the kriging analysis. The predictions were generated from:

>pH.prd13=krige.conv(soil.geodat,loc=prd.loc13,krige=krige.control(cov.model=
"spherical",cov.pars=c(0.09549,130.71043)))

SrfID  gx     gy     Site      pH    pred   var     se.fit
31     240    8      site8mN   4.92  4.304  0.0088  0.0936
123    240    232    site8mS   5.97  5.933  0.0109  0.1046
124    240    220    site20mS  5.57  5.762  0.0203  0.1426
128    300    260    site20mN  5.41  5.375  0.0198  0.1406
190    358    420    site2mW   5.28  5.268  0.0033  0.0573
242    362    540    site2mE   5.23  5.437  0.0039  0.0627
243    368    540    site8mE   5.04  5.343  0.0104  0.1020
290    238    600    site2mW   4.63  5.086  0.0042  0.0647
291    232    600    site8mW   5.45  5.177  0.0145  0.1202
292    220    600    site20mW  5.4   5.371  0.0254  0.1595
310    422    660    site2mE   5.33  5.545  0.0033  0.0573
312    440    660    site20mE  6.17  5.747  0.0185  0.1361
360    478.6  538.6  site2mSW  5.55  5.310  0.0033  0.0572

2. Cross-validation: delete one observation at a time from the data set and predict it from the remaining observations; repeat for all observations. The residuals are then analyzed using standard regression techniques to check the underlying model assumptions.
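The cross-validation procedure is easy to sketch. The illustrative Python snippet below (synthetic data and an assumed exponential covariogram, not the chapter's soil data) performs leave-one-out ordinary kriging and collects the residuals.

```python
import numpy as np

def ok_predict(coords, values, target, sill=1.0, a=50.0):
    # ordinary kriging prediction with an assumed exponential covariogram
    cov = lambda h: sill * np.exp(-h / a)
    n = len(coords)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = np.ones((n + 1, n + 1)); C[:n, :n] = cov(h); C[n, n] = 0.0
    D = np.ones(n + 1); D[:n] = cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(C, D)[:n]
    return w @ values

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(30, 2))            # synthetic sample locations
values = np.sin(coords[:, 0] / 30) + rng.normal(0, 0.1, 30)

# leave-one-out: predict each observation from the remaining 29
residuals = np.array([
    values[i] - ok_predict(np.delete(coords, i, 0), np.delete(values, i), coords[i])
    for i in range(len(values))
])
print(residuals.mean(), residuals.std())
```

These residuals would then be examined (e.g., for zero mean and standardized spread) just as described above.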
Block kriging
On many occasions we are interested in estimating the value over a block (cell) A rather than at a point. The block kriging system has the same form as the OK system, except that the right-hand side contains point-to-block covariances:

    | C_11 ... C_1n  1 |   | w_1 |   | C_1A |
    | ...  ... ...  ...| × | ... | = | ...  |
    | C_n1 ... C_nn  1 |   | w_n |   | C_nA |
    | 1    ... 1     0 |   |  λ  |   | 1    |

    (n+1)×(n+1)          (n+1)×1    (n+1)×1

where C_iA is the average covariance between sample i and the points in block A.
[Figure: block A surrounded by observed samples (+)]

The block kriging variance is:

    σ²_OK(A) = C̄_AA − w'D,  where  C̄_AA = (1/|A|²) Σ_{i∈A} Σ_{j∈A} C_ij

is the average covariance within the block.
R implementation - block kriging
[Figure: block A discretized by regularly spaced locations (·) set up within the block, with observed samples (+) outside; the point-to-block covariances C_iA are computed by averaging the covariances between sample i and these regularly spaced locations.]
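A minimal sketch of this discretization, assuming an exponential covariogram and a made-up block and sample layout: C_iA is approximated by averaging cov(sample i, grid point) over a regular grid inside the block, and C̄_AA by averaging over all grid-point pairs.

```python
import numpy as np

cov = lambda h, sill=1.0, a=50.0: sill * np.exp(-h / a)   # assumed covariogram

# regular 5x5 grid of points discretizing a 20x20 block with lower-left corner (40, 40)
gx, gy = np.meshgrid(np.linspace(42, 58, 5), np.linspace(42, 58, 5))
block_pts = np.column_stack([gx.ravel(), gy.ravel()])

samples = np.array([[0.0, 0.0], [100.0, 20.0], [30.0, 90.0]])

# point-to-block covariances C_iA: average covariance from sample i to the grid points
d_iA = np.linalg.norm(samples[:, None, :] - block_pts[None, :, :], axis=2)
C_iA = cov(d_iA).mean(axis=1)

# within-block average covariance C_AA (the double sum includes zero-distance pairs)
d_AA = np.linalg.norm(block_pts[:, None, :] - block_pts[None, :, :], axis=2)
C_AA = cov(d_AA).mean()

print(C_iA, C_AA)
```

The C_iA values would replace the C_i0 entries of D in the point OK system, and C̄_AA replaces C(0) in the variance formula.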
Spatial estimation: additive/nonadditive variables
Some precaution is necessary before applying geostatistical analysis to your data: the method does not apply universally to any type of data.

Additive variable (number of balls): a block with 5 balls and a block with 3 balls scale up to a block with 8 balls; 7 and 4 scale up to 11. The value at the larger support is the sum of the values at the smaller supports.

Nonadditive variable (number of colors): a block with 3 colors (1b, 2r, 2w) and a block with 3 colors (1b, 1g, 1y) scale up to 5 colors, not 6; a block with 3 colors (3b, 2r, 2w) and a block with 1 color (4g) scale up to 4 colors.

Other nonadditive variables include the number of species in a block and ratio data (e.g., the number of cars per household in a city block). Geostatistics is invalid for analyzing nonadditive variables because arithmetic on them (e.g., subtraction) makes no sense.
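The ball example can be checked directly. This tiny sketch (with made-up lists of colored balls) shows counts adding across blocks while distinct-color counts do not.

```python
# two blocks of colored balls (b=blue, r=red, w=white, g=green, y=yellow)
block1 = ["b", "r", "r", "w", "w"]   # 5 balls, 3 colors
block2 = ["b", "g", "y"]             # 3 balls, 3 colors
merged = block1 + block2             # scale up to the combined block

# counts are additive: the scaled-up value is the sum
print(len(merged))                   # 8 = 5 + 3

# the number of distinct colors is not additive: 5, not 3 + 3 = 6
print(len(set(merged)))              # 5
```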
Spatial estimation: scale effect
Few spatial data (a point process being an exception) can avoid the problem of the size of the sample area (called the support in geostatistics, the modifiable areal unit in geography, or the grain size in landscape ecology).

In many practical applications, the support of the samples is not the same as the support of the estimates we are trying to calculate. For example, when assessing gold ore grades in an area, we take samples from drill-hole cores, but in the mining operation the unit we handle is the truckload (a truckload is classified either as ore or as waste).

So a critical and difficult question is: can we infer the properties of a variable at one support from observations sampled at a different support? In other words, can we scale a spatial process down or up?
[Figure: histograms of rgamma(100, 1) and rgamma(100, 3), with a question mark between them: can the distribution at one support be inferred from the other?]
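The two gamma histograms hint at why this is hard: aggregation changes the whole distribution, not just its mean. A small Python sketch (an assumed illustration; the slides use R's rgamma): summing gamma(1) (i.e., exponential) draws in groups of 3 yields gamma(3)-distributed values, like those in the right-hand histogram.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.gamma(shape=1.0, size=(10_000, 3))   # exponential draws, i.e. gamma(1)
agg = x.sum(axis=1)                          # each sum is gamma(3) distributed

# the aggregated values have mean 3 and a different, less skewed shape
print(agg.mean())
```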
Spatial estimation: scale effect
Number of stems and number of species per m² at different sampling scales (grain sizes) in a 1000 × 500 m rain forest plot in Malaysia. The entire plot has 335,356 trees belonging to 814 species. The densities at each grain size were computed as follows: (1) divide the plot into a grid at the given scale (e.g., 5 × 5 m), (2) count the total number of stems and the number of species in each cell, (3) average these two quantities across all cells, and (4) divide the averages by the cell area.

Grain size (m)   No. stems/m² (std. error)   No. species/m² (std. error)
5 × 5            0.671 (0.244)               0.585 (0.197)
10 × 10          0.671 (0.167)               0.475 (0.095)
20 × 20          0.671 (0.130)               0.318 (0.038)
25 × 25          0.671 (0.121)               0.267 (0.026)
50 × 50          0.671 (0.100)               0.129 (0.008)
100 × 100        0.671 (0.085)               0.049 (0.001)
250 × 250        0.671 (0.048)               0.011 (0.0004)
500 × 500        0.671 (0.041)               0.003 (< 0.001)
500 × 1000       0.671                       0.0016

The results clearly show how sampling scale profoundly affects measured species diversity. They suggest that diversity per unit area (the last column) is a misleading measurement for comparing diversity between two ecosystems.
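Steps (1)–(4) can be sketched with synthetic data (random stem locations and species labels, an assumption for illustration): stems/m² comes out the same at every grain size, while species/m² falls as the grain coarsens.

```python
import numpy as np

rng = np.random.default_rng(42)
n, side = 2000, 100.0                     # synthetic plot: 100 x 100 m, 2000 stems
xy = rng.uniform(0, side, size=(n, 2))
species = rng.integers(0, 50, size=n)     # 50 synthetic species labels

def densities(grain):
    """Average stems/m^2 and species/m^2 over square cells of the given size."""
    k = int(side / grain)
    cell = (xy // grain).astype(int)      # (1) assign each stem to a grid cell
    idx = cell[:, 0] * k + cell[:, 1]
    stems = np.bincount(idx, minlength=k * k)                       # (2) counts
    sp = np.array([len(set(species[idx == c])) for c in range(k * k)])
    return stems.mean() / grain**2, sp.mean() / grain**2            # (3), (4)

for g in (5, 10, 25, 50):
    print(g, densities(g))
```

The stem density is total stems over total area regardless of grain, whereas distinct species per cell grows sublinearly with cell area, so species/m² shrinks, mirroring the table.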
Spatial estimation: scale effect
This discrepancy between the support of our samples and the intended support of our estimates is one of the most difficult problems we face in spatial estimation.