Solution To Ordinary and Universal Kriging Equations
Meredith Franklin
February 6, 2014
1 Lagrange Multipliers
Lagrange multipliers are used for finding the maxima/minima of a multivariate function subject to a constraint. The function $f(x_1, x_2, \ldots, x_n)$ and constraint $g(x_1, x_2, \ldots, x_n) = 0$ are continuous, have continuous first partial derivatives, and $\nabla g \neq 0$. The two functions meet where their tangent lines are parallel. This is equivalent to saying that $f$ and $g$ meet where their gradients are parallel (since the gradient of a function is perpendicular to its level curves). Thus $\nabla f = \lambda \nabla g$, where the constant $\lambda$ is called the Lagrange multiplier. The extremum is found by solving the $n + 1$ gradient equations of the Lagrangian $L = f + \lambda g$ (at an extremum all partial derivatives equal 0). For example:
Consider minimizing $f(x, y) = x^2 + y^2$ subject to $g(x, y) = x^2 y - \tfrac{1}{4} = 0$, so that $L = x^2 + y^2 + \lambda(x^2 y - \tfrac{1}{4})$:
\[ 0 = \frac{\partial L}{\partial x} = 2x + 2\lambda x y \tag{1} \]
\[ 0 = \frac{\partial L}{\partial y} = 2y + \lambda x^2 \tag{2} \]
\[ 0 = \frac{\partial L}{\partial \lambda} = x^2 y - \tfrac{1}{4} \tag{3} \]
Equation (1) gives $2x(1 + \lambda y) = 0$, which requires $x = 0$ or $y = -1/\lambda$; $x = 0$ is incompatible with the constraint, so $y = -1/\lambda$. Equation (2) gives $x^2 = -2y/\lambda$. When these are plugged back into the constraint equation we find $-2/\lambda^3 = \tfrac{1}{4}$, so $\lambda = -2$. Thus the minimum under the constraint $g = 0$ occurs at $y = -1/\lambda = 1/2$ and $x = \pm 1/\sqrt{2}$.
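The stationary point above can be checked numerically. A minimal sketch using `scipy.optimize.minimize`; the starting point and tolerances are my choices, not part of the notes:

```python
# Numerical check of the worked example: minimize f(x, y) = x^2 + y^2
# subject to g(x, y) = x^2 * y - 1/4 = 0.
import numpy as np
from scipy.optimize import minimize

f = lambda v: v[0] ** 2 + v[1] ** 2
constraint = {"type": "eq", "fun": lambda v: v[0] ** 2 * v[1] - 0.25}

res = minimize(f, x0=[1.0, 1.0], constraints=[constraint])
x, y = res.x
lam = -2 * y / x ** 2          # recover the multiplier from equation (2)
print(x, y, lam)               # expect x near +/- 1/sqrt(2), y near 1/2, lam near -2
```

The minimized objective should be $f = 1/2 + 1/4 = 3/4$, matching the analytic solution.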
process at unobserved locations, $s_0 \in D$. Kriging is a method that enables prediction of a spatial process based on a weighted average of the observations. In the case of an intrinsically stationary process with constant unknown mean, we use the ordinary kriging (OK) method:
\[ \hat{Z}(s_0) = \sum_{i=1}^{N} \lambda_i Z(s_i) \tag{4} \]
We want to find the best linear unbiased predictor (BLUP) by minimizing the variance of the interpolation error (i.e., minimize the mean squared prediction error), $\mathrm{Var}(\hat{Z}(s_0) - Z(s_0)) = E[(\hat{Z}(s_0) - Z(s_0))^2]$. For the predictor to be unbiased, $E[\hat{Z}(s_0)] = E[Z(s_0)] = \mu$ is required. Given (4) this means:
\[
\begin{aligned}
E[\hat{Z}(s_0)] - E[Z(s_0)] &= E\Big[\sum_{i=1}^{N} \lambda_i Z(s_i)\Big] - E[Z(s_0)] \\
&= \sum_{i=1}^{N} \lambda_i E[Z(s_i)] - E[Z(s_0)] \\
&= \mu \sum_{i=1}^{N} \lambda_i - \mu \\
&= \mu \Big(\sum_{i=1}^{N} \lambda_i - 1\Big)
\end{aligned}
\]
Thus $\sum_{i=1}^{N} \lambda_i = 1$ is required for unbiasedness to hold, and we have a minimization problem with a constraint that can be solved using Lagrange multipliers. We minimize
\[
\begin{aligned}
L(\lambda_i, m) &= E[(\hat{Z}(s_0) - Z(s_0))^2] + 2m\Big(\sum_{i=1}^{N} \lambda_i - 1\Big) \\
&= \mathrm{Var}\Big[\sum_{i=1}^{N} \lambda_i Z(s_i)\Big] + \mathrm{Var}[Z(s_0)] - 2\,\mathrm{Cov}\Big[\sum_{i=1}^{N} \lambda_i Z(s_i), Z(s_0)\Big] + 2m\Big(\sum_{i=1}^{N} \lambda_i - 1\Big) \\
&= \sum_{i=1}^{N}\sum_{j=1}^{N} \lambda_i \lambda_j \mathrm{Cov}[Z(s_i), Z(s_j)] + \mathrm{Var}[Z(s_0)] - 2\sum_{i=1}^{N} \lambda_i \mathrm{Cov}[Z(s_i), Z(s_0)] + 2m\Big(\sum_{i=1}^{N} \lambda_i - 1\Big)
\end{aligned}
\]
where $m$ is the Lagrange multiplier.
Recall that the variance of a linear combination is $\mathrm{Var}\big[\sum_{i=1}^{N} \lambda_i Z(s_i)\big] = \sum_{i=1}^{N}\sum_{j=1}^{N} \lambda_i \lambda_j \mathrm{Cov}[Z(s_i), Z(s_j)]$.
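This identity can be verified on a toy example; the data vectors and weights below are arbitrary illustrations. The sample variance of the weighted combination matches the double sum over the sample covariance matrix exactly:

```python
# Check: Var[sum_i w_i Z_i] = sum_i sum_j w_i w_j Cov[Z_i, Z_j],
# using sample variances/covariances of three arbitrary data vectors.
import numpy as np

rng = np.random.default_rng(42)
Z = rng.normal(size=(3, 200))     # rows play the role of Z(s_1), ..., Z(s_3)
w = np.array([0.2, 0.5, 0.3])     # arbitrary weights

lhs = np.var(w @ Z, ddof=1)       # variance of the linear combination
rhs = w @ np.cov(Z) @ w           # double-sum quadratic form
print(abs(lhs - rhs))             # zero up to floating-point error
```

Note `np.cov` and `ddof=1` both use the unbiased normalization, so the two sides agree exactly, not just in expectation.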
Differentiate with respect to $\lambda_i$ and $m$ and set equal to 0:
\[ \frac{\partial L(\lambda_i, m)}{\partial \lambda_i} = 2\sum_{j=1}^{N} \lambda_j \mathrm{Cov}[Z(s_i), Z(s_j)] - 2\,\mathrm{Cov}[Z(s_i), Z(s_0)] + 2m = 0 \]
Which gives:
\[ \sum_{j=1}^{N} \lambda_j \mathrm{Cov}[Z(s_i), Z(s_j)] + m = \mathrm{Cov}[Z(s_i), Z(s_0)], \quad i = 1, \ldots, N \]
And
\[ \frac{\partial L(\lambda_i, m)}{\partial m} = 2\sum_{i=1}^{N} \lambda_i - 2 = 0 \]
Which gives:
\[ \sum_{i=1}^{N} \lambda_i = 1 \]
Where, substituting the kriging equations above,
\[
\begin{aligned}
\sum_{i=1}^{N}\sum_{j=1}^{N} \lambda_i \lambda_j \mathrm{Cov}[Z(s_i), Z(s_j)] &= \sum_{i=1}^{N} \lambda_i \sum_{j=1}^{N} \lambda_j \mathrm{Cov}[Z(s_i), Z(s_j)] \\
&= \sum_{i=1}^{N} \lambda_i \big(\mathrm{Cov}[Z(s_i), Z(s_0)] - m\big)
\end{aligned}
\]
So, the MSE gives us the ordinary kriging variance,
\[ \sigma^2_{OK} = \sigma^2 - \sum_{i=1}^{N} \lambda_i \mathrm{Cov}[Z(s_i), Z(s_0)] - m \]
where $\sigma^2 = \mathrm{Var}[Z(s_0)]$.
Which in matrix form is
\[ \sigma^2_{OK} = \sigma^2 - w'D \]
where $w = (\lambda_1, \ldots, \lambda_N, m)'$ solves the kriging system $Cw = D$, with $C$ the covariance matrix augmented by the unbiasedness constraint and $D = (\mathrm{Cov}[Z(s_1), Z(s_0)], \ldots, \mathrm{Cov}[Z(s_N), Z(s_0)], 1)'$.
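The OK system can be assembled and solved numerically. A minimal sketch; the exponential covariance function, its parameters, and the site coordinates are illustrative assumptions, not from the notes:

```python
# Ordinary kriging weights and variance from the augmented system
#   [ C   1 ] [ lambda ]   [ c0 ]
#   [ 1'  0 ] [   m    ] = [ 1  ]
import numpy as np

def cov_exp(h):
    # assumed covariance model: C(h) = exp(-h / 2), sill = 1
    return np.exp(-h / 2.0)

s = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # observation sites s_i
s0 = np.array([0.5, 0.5])                            # prediction site
N = len(s)

H = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)  # pairwise distances
A = np.ones((N + 1, N + 1))
A[:N, :N] = cov_exp(H)
A[N, N] = 0.0
c0 = cov_exp(np.linalg.norm(s - s0, axis=1))
D = np.append(c0, 1.0)

w = np.linalg.solve(A, D)
lam, m = w[:N], w[N]
sigma2_ok = cov_exp(0.0) - lam @ c0 - m   # sigma^2 - sum_i lam_i C_i0 - m
print(lam.sum(), sigma2_ok)               # weights sum to 1
```

The solved weights satisfy the unbiasedness constraint by construction, and `sigma2_ok` equals $\sigma^2 - w'D$ since $\sum_i \lambda_i = 1$.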
For universal kriging (UK), the mean is not constant but a linear combination of $p$ known covariates, $E[Z(s)] = \sum_{k=1}^{p} \beta_k x_k(s)$. With the same predictor (4), unbiasedness requires
\[
\begin{aligned}
E[\hat{Z}(s_0)] - E[Z(s_0)] &= E\Big[\sum_{i=1}^{N} \lambda_i \sum_{k=1}^{p} \beta_k x_k(s_i)\Big] - \sum_{k=1}^{p} \beta_k x_k(s_0) \\
&= \sum_{i=1}^{N} \lambda_i \sum_{k=1}^{p} \beta_k x_k(s_i) - \sum_{k=1}^{p} \beta_k x_k(s_0) = 0
\end{aligned}
\]
for any values of the coefficients $\beta_k$.
We require
\[ \sum_{i=1}^{N} \lambda_i = 1 \]
\[ \sum_{i=1}^{N} \lambda_i x_k(s_i) = x_k(s_0), \quad k = 1, \ldots, p - 1 \]
As above, we obtain the result $w = C^{-1}D$, which expands to:
\[
\begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_N \\ m_1 \\ \vdots \\ m_p \end{pmatrix} =
\begin{pmatrix}
C_{11} & C_{12} & \cdots & C_{1N} & x_{11} & \cdots & x_{1p} \\
C_{21} & C_{22} & \cdots & C_{2N} & x_{21} & \cdots & x_{2p} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
C_{N1} & C_{N2} & \cdots & C_{NN} & x_{N1} & \cdots & x_{Np} \\
x_{11} & x_{21} & \cdots & x_{N1} & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
x_{1p} & x_{2p} & \cdots & x_{Np} & 0 & \cdots & 0
\end{pmatrix}^{-1}
\begin{pmatrix} C_{10} \\ C_{20} \\ \vdots \\ C_{N0} \\ x_{10} \\ \vdots \\ x_{p0} \end{pmatrix}
\]
where $C_{ij} = \mathrm{Cov}[Z(s_i), Z(s_j)]$, $x_{ik} = x_k(s_i)$, $x_{k0} = x_k(s_0)$, and $m_1, \ldots, m_p$ are the Lagrange multipliers.
The universal kriging variance is
\[ \sigma^2_{UK} = \sigma^2 - w'D \]
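A universal kriging sketch analogous to the OK one; again the exponential covariance and the linear drift $x(s) = (1, s_x, s_y)$, so $p = 3$, are illustrative assumptions:

```python
# Universal kriging: solve C w = D for the augmented system with a
# drift matrix X, then sigma^2_UK = sigma^2 - w'D.
import numpy as np

def cov_exp(h):
    return np.exp(-h / 2.0)   # assumed covariance model, sill = 1

s = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # sites
s0 = np.array([0.25, 0.75])                                     # prediction site
X = np.column_stack([np.ones(len(s)), s])    # N x p drift matrix (intercept + coords)
x0 = np.concatenate([[1.0], s0])             # drift evaluated at s0
N, p = X.shape

H = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)
C = np.zeros((N + p, N + p))
C[:N, :N] = cov_exp(H)
C[:N, N:] = X
C[N:, :N] = X.T
D = np.concatenate([cov_exp(np.linalg.norm(s - s0, axis=1)), x0])

w = np.linalg.solve(C, D)
lam = w[:N]
sigma2_uk = cov_exp(0.0) - w @ D             # sigma^2 - w'D
print(X.T @ lam)   # reproduces x0: the unbiasedness constraints hold
```

The recovered weights reproduce every drift function at the prediction site, which is exactly the set of UK unbiasedness constraints derived above.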