Cokriging
Suppose that the data are k × 1 vectors Z(s_1), ..., Z(s_n) and write

Z ≡ (Z(s_1), ..., Z(s_n))′,  (3.2.45)

an n × k matrix whose (i, j)th element is Z_j(s_i). It is desired to predict, say, Z_1(s_0) based not only on Z_1 ≡ (Z_1(s_1), ..., Z_1(s_n))′, but also on the covariables Z_j ≡ (Z_j(s_1), ..., Z_j(s_n))′, j ≠ 1. More generally, it may be desired to predict Z(s_0) ≡ (Z_1(s_0), ..., Z_k(s_0))′, s_0 ∈ D.
For example, to assess the feasibility of opening a copper mine, mineral exploration is carried out: Samples at known spatial locations s_1, ..., s_n are taken and assayed. Although the percentage copper is reported, it does not occur in isolation. Percentages of lead and zinc are also typically found, along with other minerals. The data at a single location, reported as fractions of 100%, are compositional in nature (i.e., their sum cannot exceed 100). Assume that this compositional problem has already been resolved (see, e.g., Aitchison, 1986), so that the vector-valued stochastic process {Z(s): s ∈ D} is adequately described by its second-order parameters. Other examples can be found in soil science (e.g., Ahmed and de Marsily, 1987) and geophysics (e.g., Krajewski, 1987).
Let

E(Z(s)) = μ,  s ∈ D,  (3.2.46)

cov(Z(s), Z(u)) = C(s, u),  s, u ∈ D,  (3.2.47)

where μ is a k × 1 mean vector and C(s, u) is a k × k matrix of cross-covariances. Consider linear predictors of Z_1(s_0) of the form

p_1(Z; s_0) = Σ_{i=1}^n Σ_{j=1}^k λ_{ji} Z_j(s_i).  (3.2.48)
Notice that (3.2.48) assumes that all components of Z(s_i) are available at each i. Should this not be the case, an easy modification is possible (e.g., Journel and Huijbregts, 1978, p. 325).
Asking for a predictor that is uniformly unbiased, that is, E(p_1(Z; s_0)) = μ_1 for all μ, yields the necessary and sufficient conditions

Σ_{i=1}^n λ_{1i} = 1  and  Σ_{i=1}^n λ_{ji} = 0, for j = 2, ..., k.  (3.2.49)

The cokriging predictor minimizes the mean-squared prediction error

E(Z_1(s_0) − Σ_{i=1}^n Σ_{j=1}^k λ_{ji} Z_j(s_i))²,  (3.2.50)

subject to the constraints (3.2.49). In principle, this problem is no more difficult than ordinary kriging, except that there are more Lagrange multipliers m_1, ..., m_k needed for the extra constraints in (3.2.49).
A covariance-based approach to cokriging [cf. (3.2.31) and (3.2.32)] is straightforward. The cokriging equations are

Σ_{i=1}^n Σ_{j=1}^k λ_{ji} cov(Z_j(s_i), Z_{j'}(s_{i'})) − m_{j'} = cov(Z_1(s_0), Z_{j'}(s_{i'})),  i' = 1, ..., n; j' = 1, ..., k,  (3.2.51)

together with the unbiasedness conditions Σ_{i=1}^n λ_{1i} = 1 and Σ_{i=1}^n λ_{ji} = 0, for j = 2, ..., k. The minimized mean-squared prediction error, the cokriging variance, is

σ²_k(s_0) = var(Z_1(s_0)) − Σ_{i=1}^n Σ_{j=1}^k λ_{ji} cov(Z_1(s_0), Z_j(s_i)) + m_1.  (3.2.52)
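To make the computation concrete, the cokriging equations (3.2.51) and the constraints (3.2.49) can be assembled into a single linear system and solved numerically. The sketch below is illustrative only: the separable cross-covariance model (a coregionalization matrix B times an exponential spatial correlation), the locations, and all numbers are hypothetical assumptions, not taken from the text.

```python
import numpy as np

# Hypothetical separable cross-covariance model for illustration:
# C_{jj'}(u, v) = B[j, j'] * rho(u - v), with B positive definite.
rng = np.random.default_rng(0)
n, k = 5, 2                       # n sites, k covariables
s = rng.uniform(0, 1, (n, 2))     # data locations s_1, ..., s_n
s0 = np.array([0.5, 0.5])         # prediction location

B = np.array([[1.0, 0.4], [0.4, 0.8]])       # coregionalization matrix
rho = lambda h: np.exp(-np.linalg.norm(h))   # spatial correlation

def C(j, jp, u, v):
    return B[j, jp] * rho(u - v)

# Unknowns: the n*k weights lambda_{ji} plus k Lagrange multipliers m_j.
N = n * k
A = np.zeros((N + k, N + k))
b = np.zeros(N + k)
idx = lambda j, i: j * n + i      # flatten (variable j, site i)

# Cokriging equations (3.2.51), one per (i', j') pair:
for jp in range(k):
    for ip in range(n):
        r = idx(jp, ip)
        for j in range(k):
            for i in range(n):
                A[r, idx(j, i)] = C(j, jp, s[i], s[ip])
        A[r, N + jp] = -1.0                  # the -m_{j'} term
        b[r] = C(0, jp, s0, s[ip])           # predictand is Z_1(s_0)

# Unbiasedness constraints (3.2.49):
for j in range(k):
    A[N + j, j * n:(j + 1) * n] = 1.0
b[N] = 1.0                                   # sum_i lambda_{1i} = 1

sol = np.linalg.solve(A, b)
lam, m = sol[:N].reshape(k, n), sol[N:]

# Cokriging variance (3.2.52):
c0 = np.array([[C(0, j, s0, s[i]) for i in range(n)] for j in range(k)])
sigma2 = C(0, 0, s0, s0) - (lam * c0).sum() + m[0]
print(lam.sum(axis=1), sigma2)
```

The printed row sums of the weights recover the constraints (3.2.49): the weights on Z_1 sum to one and the weights on each covariable sum to zero.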
For example, with k = 2, the weights λ_{11}, ..., λ_{1n} and λ_{21}, ..., λ_{2n} of the predictor of Z_1(s_0) satisfy the unbiasedness assumptions Σ_{i=1}^n λ_{1i} = 1 and Σ_{k=1}^n λ_{2k} = 0.
The problem of differing measurement units in defining the cross-variograms {2γ_{jj'}(·)} can be resolved by rescaling. Notice that, in order to perform the subtraction in (3.2.53), a meaningful data preparation is necessary before estimating the cross-variograms [e.g., divide the original jth variable by {γ̂_jj(h_0)}^{1/2}, where h_0 ∈ R^d and γ̂_jj(·) is a semivariogram estimator calculated from the original observations on the jth variable, j = 1, ..., k]. Although the rescaling is not needed when defining and estimating {2ν_{jj'}(·)}, its lack of general applicability in cokriging means that it should be avoided. Moreover, any article that uses cokriging equations with {ν_{jj'}(s − u)} replacing {C_{jj'}(s, u)} is implicitly assuming that a symmetric cross-covariance model has been fitted; those models for which C(s, s + h) = C*(h) are easily shown to result in a symmetric C*(h), for all h ∈ R^d, in which case the equations can be written equivalently in terms of {C_{jj'}}, {2γ_{jj'}}, or {ν_{jj'}(·)}.
Cokriging can also be used for prediction in space and time, where the data Z_j(s_i) are actually {Z(s_i; t_j): i = 1, ..., n; j = 1, ..., k}. In order to do this optimally, equations analogous to (3.2.51) show that knowledge of temporal-temporal, spatial-spatial, and spatial-temporal covariation is needed. Here, the rescaling of 2γ_{jj'} is not necessarily needed, because all observations arise from one underlying space-time process.
The problem of simultaneously kriging the k components of Z(s_0) (or of several variables at possibly different locations) begs the question of which multivariate criterion will be minimized. Caution is necessary for those who might use

Σ_{j=1}^k E(Z_j(s_0) − p_j(Z; s_0))²,

because Z_1(·), Z_2(·), ... are not necessarily measured in the same units. The following generalized mean-squared prediction error,

E[(Z(s_0) − p(Z; s_0))′ C(s_0, s_0)^{-1} (Z(s_0) − p(Z; s_0))],

is unitless. This criterion is to be preferred to the preceding (weighted) sum of mean-squared prediction errors. A matrix criterion is given by Ver Hoef and Cressie (1991).
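A quick numerical illustration of why the generalized criterion is preferable: rescaling the units of one variable changes the naive sum of squared errors but leaves the quadratic form unchanged, because C(s_0, s_0) rescales along with the errors. The error vector and covariance matrix below are arbitrary numbers chosen for the demonstration, not from the text.

```python
import numpy as np

err = np.array([0.3, -20.0])                  # errors Z_j(s0) - p_j(Z; s0)
C00 = np.array([[1.0, 5.0], [5.0, 400.0]])    # C(s0, s0), positive definite

# Generalized (unitless) criterion: err' C(s0, s0)^{-1} err
q = err @ np.linalg.solve(C00, err)

# Express the second variable in different units (multiply by 100);
# both the error and C(s0, s0) rescale accordingly.
S = np.diag([1.0, 100.0])
q_rescaled = (S @ err) @ np.linalg.solve(S @ C00 @ S, S @ err)

print(q, q_rescaled)                      # equal: invariant to units
print(err @ err, (S @ err) @ (S @ err))   # naive sum of squares: not invariant
```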
For matrix formulations of cokriging, the reader is referred to Myers (1982, 1984) and Ver Hoef and Cressie (1991); Myers does not use the cross-variograms {ν_{jj'}} in place of the cross-covariances {C_{jj'}}.
In terms of the variogram, the ordinary kriging predictor and its kriging variance can be written in matrix form as

p̂(Z; s_0) = λ′Z,  σ²_k(s_0) = λ′γ + m,  (3.2.54)

where

λ = Γ⁻¹(γ + 1(1 − 1′Γ⁻¹γ)/(1′Γ⁻¹1)),  m = −(1 − 1′Γ⁻¹γ)/(1′Γ⁻¹1),  (3.2.55)

γ ≡ (γ(s_0 − s_1), ..., γ(s_0 − s_n))′, and Γ is the n × n matrix whose (i, j)th element is γ(s_i − s_j). Replacing γ(s − u) in (3.2.54) and (3.2.55) with γ(s, u) ≡ (1/2) var(Z(s) − Z(u)), one obtains the optimal linear unbiased predictor of Z(s_0) under a constant-mean assumption; analogously, one can assume that γ_{jj'}(s, u) ≡ (1/2) var(Z_j(s) − Z_{j'}(u)) depends on s and u only through the difference s − u, and write γ_{jj'}(s − u).
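As a sketch, the closed-form weights in (3.2.55) can be checked against a direct solve of the bordered kriging system; the exponential semivariogram and the locations below are hypothetical choices made only to have concrete numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
s = rng.uniform(0, 1, (n, 2))     # data locations
s0 = np.array([0.3, 0.7])         # prediction location

gamma = lambda h: 1.0 - np.exp(-np.linalg.norm(h) / 0.5)   # semivariogram

g = np.array([gamma(s0 - si) for si in s])                 # gamma vector
G = np.array([[gamma(si - sj) for sj in s] for si in s])   # Gamma matrix
one = np.ones(n)

# Closed form (3.2.55):
Gi_g, Gi_1 = np.linalg.solve(G, g), np.linalg.solve(G, one)
lam = Gi_g + Gi_1 * (1 - one @ Gi_g) / (one @ Gi_1)
m = -(1 - one @ Gi_g) / (one @ Gi_1)

# The same weights solve the bordered system [Gamma 1; 1' 0][lam; m] = [g; 1]:
A = np.block([[G, one[:, None]], [one[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.append(g, 1.0))

sigma2 = lam @ g + m              # kriging variance (3.2.54)
print(lam.sum(), np.allclose(lam, sol[:n]), sigma2)
```

The bordered system makes the role of the Lagrange multiplier m explicit: it enforces the unbiasedness constraint 1′λ = 1.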
In terms of covariances,

p̂(Z; s_0) = λ′Z,  σ²_k(s_0) = C(s_0, s_0) − λ′c + m,  (3.2.56)

where

λ = Σ⁻¹(c + 1(1 − 1′Σ⁻¹c)/(1′Σ⁻¹1)),  m = (1 − 1′Σ⁻¹c)/(1′Σ⁻¹1),  (3.2.57)

c ≡ (C(s_0, s_1), ..., C(s_0, s_n))′, and Σ is the n × n matrix whose (i, j)th element is C(s_i, s_j). That is, the kriging equations can be written in terms of either γ(s − u) or C(s, u); see, e.g., Ma et al. (1987) and Arato (1990). Whittle (1954) and Vecchia (1987) show how certain stationary covariance functions arise from stochastic differential equations motivated by physical considerations.
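When the process is second-order stationary, the covariance form (3.2.56)-(3.2.57) and the variogram form produce identical weights, since γ(h) = C(0) − C(h). The sketch below verifies this numerically under a hypothetical exponential covariance; model and locations are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
s = rng.uniform(0, 1, (n, 2))
s0 = np.array([0.6, 0.2])

C = lambda h: np.exp(-np.linalg.norm(h) / 0.5)   # hypothetical covariance

c = np.array([C(s0 - si) for si in s])                   # c vector
Sig = np.array([[C(si - sj) for sj in s] for si in s])   # Sigma matrix
one = np.ones(n)

# Covariance form (3.2.57):
Si_c, Si_1 = np.linalg.solve(Sig, c), np.linalg.solve(Sig, one)
m = (1 - one @ Si_c) / (one @ Si_1)
lam = Si_c + Si_1 * m
sigma2 = C(np.zeros(2)) - lam @ c + m            # kriging variance (3.2.56)

# Variogram form, gamma(h) = C(0) - C(h), gives the same weights:
gam = lambda h: 1.0 - C(h)
g = np.array([gam(s0 - si) for si in s])
G = np.array([[gam(si - sj) for sj in s] for si in s])
A = np.block([[G, one[:, None]], [one[None, :], np.zeros((1, 1))]])
lam_v = np.linalg.solve(A, np.append(g, 1.0))[:n]

print(lam.sum(), np.allclose(lam, lam_v), sigma2)
```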
Kriging on the Sphere
The calculus of the kriging equations (3.2.5) and (3.2.12) relies on a well-defined notion of vector addition and subtraction. Young (1987) takes this notion from the Euclidean setting and proposes analogous definitions on the sphere. A so-called vector variogram is then used in the derivation of kriging equations on the sphere.
Kriging with Nonnegative Weights
Because data are never Gaussian and, in particular, are often naturally bounded from above or below (e.g., percentage ore grades are between 0 and 100), practitioners are concerned about prediction techniques that could potentially put the predictor outside the natural boundaries. For example, suppose that Z(s) ≥ 0, for all s ∈ D. Then, one way to ensure that the kriging predictor Ẑ(s_0) = λ′Z is nonnegative is to minimize

E(Z(s_0) − Σ_{i=1}^n λ_i Z(s_i))²,

subject to Σ_{i=1}^n λ_i = 1 and λ_i ≥ 0, i = 1, ..., n. Barnes and Johnson (1984), Szidarovszky et al. (1987), and Herzfeld (1989) contain further details.
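The constrained problem just described is a small quadratic program. A sketch using SciPy's SLSQP solver (the exponential covariance and locations are hypothetical) shows the weights staying nonnegative, at the cost of a mean-squared prediction error no smaller than that of ordinary kriging.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 6
s = rng.uniform(0, 1, (n, 2))
s0 = np.array([0.1, 0.9])

C = lambda h: np.exp(-np.linalg.norm(h) / 0.3)   # hypothetical covariance
c = np.array([C(s0 - si) for si in s])
Sig = np.array([[C(si - sj) for sj in s] for si in s])
one = np.ones(n)

# Mean-squared prediction error of lam'Z as a predictor of Z(s0); C(0) = 1.
mspe = lambda lam: 1.0 - 2 * (lam @ c) + lam @ Sig @ lam

# Ordinary kriging (sum-to-one only; weights may be negative):
Si_c, Si_1 = np.linalg.solve(Sig, c), np.linalg.solve(Sig, one)
lam_ok = Si_c + Si_1 * (1 - one @ Si_c) / (one @ Si_1)

# Nonnegative-weight kriging: the same objective with lam_i >= 0 added.
res = minimize(mspe, one / n, method="SLSQP",
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}])
lam_nn = res.x

print(mspe(lam_ok), mspe(lam_nn), lam_nn.min())
```

Because the nonnegativity constraint shrinks the feasible set, mspe(lam_nn) ≥ mspe(lam_ok) always holds, which is exactly the trade-off discussed in the text.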
The extra constraint leads to a (perhaps unnecessarily large) increase in mean-squared prediction error. In fact, negative kriging weights can be advantageous because they allow Ẑ(s_0) to range outside the limits max{Z(s_i): i = 1, ..., n} and min{Z(s_i): i = 1, ..., n}. Should it become essential to ensure that a predictor lies between predetermined bounds, a transformation of Z(·) to stretch it to be more Gaussian-like, followed by kriging and then back-transforming (Section 3.2.2), may be preferable to kriging with extra constraints.