
ISATIS 2013

Technical References

Published, sold and distributed by GEOVARIANCES
49 bis Av. Franklin Roosevelt, BP 91, 77212 Avon Cedex, France
http://www.geovariances.com
Isatis release 2013, February 2013
Contributing authors:
Catherine Bleins
Matthieu Bourges
Jacques Deraisme
François Geffroy
Nicolas Jeanne
Ophélie Lemarchand
Sébastien Perseval
Frédéric Rambert
Didier Renard
Yves Touffait
Laurent Wagner
All Rights Reserved
© 1993-2013 GEOVARIANCES
No part of the material protected by this copyright notice may be reproduced or utilized in any form
or by any means including photocopying, recording or by any information storage and retrieval sys-
tem, without written permission from the copyright owner.
Table of Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 1
1 Hints on Learning Isatis . . . . . . . . . . . . . . . . . . 3
2 Getting Help . . . . . . . . . . . . . . . . . . . . . . . . 5
Generalities . . . . . . . . . . . . . . . . . . . . . . . . . 7
3 Structure Identification in the Intrinsic Case . . . . . . . 9
3.1 The Experimental Variability Functions . . . . . . . . . . 10
3.2 Variogram Model . . . . . . . . . . . . . . . . . . . . . 27
3.3 The Automatic Sill Fitting Procedure . . . . . . . . . . . 42
4 Non-stationary Modeling . . . . . . . . . . . . . . . . . . 51
4.1 Unique Neighborhood . . . . . . . . . . . . . . . . . . . 52
4.2 Moving Neighborhood . . . . . . . . . . . . . . . . . . . 57
4.3 Case of External Drift(s) . . . . . . . . . . . . . . . . 62
5 Quick Interpolations . . . . . . . . . . . . . . . . . . . . 63
5.1 Inverse Distances . . . . . . . . . . . . . . . . . . . . 64
5.2 Least Square Polynomial Fit . . . . . . . . . . . . . . . 65
5.3 Moving Projected Slope . . . . . . . . . . . . . . . . . . 66
5.4 Discrete Splines . . . . . . . . . . . . . . . . . . . . . 67
5.5 Bilinear Grid Interpolation . . . . . . . . . . . . . . . 69
6 Grid Transformations . . . . . . . . . . . . . . . . . . . . 71
6.1 List of the Grid Transformations . . . . . . . . . . . . . 73
6.2 Filters . . . . . . . . . . . . . . . . . . . . . . . . . 91
7 Linear Estimation . . . . . . . . . . . . . . . . . . . . . 97
7.1 Ordinary Kriging (Intrinsic Case) . . . . . . . . . . . . 98
7.2 Simple Kriging (Stationary Case with Known Mean) . . . . . 101
7.3 Kriging of One Variable in the IRF-k Case . . . . . . . . 102
7.4 Drift Estimation . . . . . . . . . . . . . . . . . . . . . 104
7.5 Estimation of a Drift Coefficient . . . . . . . . . . . . 106
7.6 Kriging with External Drift . . . . . . . . . . . . . . . 107
7.7 Unique Neighborhood Case . . . . . . . . . . . . . . . . . 109
7.8 Filtering Model Components . . . . . . . . . . . . . . . . 112
7.9 Factorial Kriging . . . . . . . . . . . . . . . . . . . . 114
7.10 Block Kriging . . . . . . . . . . . . . . . . . . . . . . 116
7.11 Polygon Kriging . . . . . . . . . . . . . . . . . . . . . 118
7.12 Gradient Estimation . . . . . . . . . . . . . . . . . . . 119
7.13 Kriging Several Variables Linked through Partial Derivatives . . 121
7.14 Kriging with Inequalities . . . . . . . . . . . . . . . . 124
7.15 Kriging with Measurement Error . . . . . . . . . . . . . 125
7.16 Lognormal Kriging . . . . . . . . . . . . . . . . . . . . 127
7.17 Cokriging . . . . . . . . . . . . . . . . . . . . . . . . 130
7.18 Extended Collocated Cokriging . . . . . . . . . . . . . . 133
8 Gaussian Transformation: the Anamorphosis . . . . . . . . . 135
8.1 Modeling and Variable Transformation . . . . . . . . . . . 136
8.2 Histogram Modeling and Block Support Correction . . . . . 143
8.3 Variogram on Raw and Gaussian Variables . . . . . . . . . 147
9 Non Linear Estimation . . . . . . . . . . . . . . . . . . . 149
9.1 Indicator Kriging . . . . . . . . . . . . . . . . . . . . 150
9.2 Probability from Conditional Expectation . . . . . . . . . 152
9.3 Disjunctive Kriging . . . . . . . . . . . . . . . . . . . 153
9.4 Uniform Conditioning . . . . . . . . . . . . . . . . . . . 155
9.5 Service Variables . . . . . . . . . . . . . . . . . . . . 158
9.6 Confidence Intervals . . . . . . . . . . . . . . . . . . . 160
Simulations . . . . . . . . . . . . . . . . . . . . . . . . . 161
10 Turning Bands Simulations . . . . . . . . . . . . . . . . . 163
10.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . 164
10.2 Non Conditional Simulation . . . . . . . . . . . . . . . 165
10.3 Conditioning . . . . . . . . . . . . . . . . . . . . . . 168
11 Truncated Gaussian Simulations . . . . . . . . . . . . . . 169
12 Plurigaussian Simulations . . . . . . . . . . . . . . . . . 173
12.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . 174
12.2 Variography . . . . . . . . . . . . . . . . . . . . . . . 177
12.3 Simulation . . . . . . . . . . . . . . . . . . . . . . . 181
12.4 Implementation . . . . . . . . . . . . . . . . . . . . . 183
13 Impala Multiple-Point Statistics . . . . . . . . . . . . . 185
14 Fractal Simulations . . . . . . . . . . . . . . . . . . . . 193
14.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . 194
14.2 Midpoint Displacement Method . . . . . . . . . . . . . . 195
14.3 Interpolation Method . . . . . . . . . . . . . . . . . . 196
14.4 Spectral Synthesis . . . . . . . . . . . . . . . . . . . 197
15 Annealing Simulations . . . . . . . . . . . . . . . . . . . 199
16 Spill Point Calculation . . . . . . . . . . . . . . . . . . 203
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . 204
16.2 Basic Principle . . . . . . . . . . . . . . . . . . . . . 205
16.3 Maximum Reservoir Thickness Constraint . . . . . . . . . 206
16.4 The "Forbidden types" of control points . . . . . . . . . 207
16.5 Limits of the algorithm . . . . . . . . . . . . . . . . . 208
16.6 Converting Unknown volumes into Inside ones . . . . . . . 209
17 Multivariate Recoverable Resources Models . . . . . . . . . 211
17.1 Theoretical reminders on the Discrete Gaussian model applied to Uniform Conditioning . . 212
17.2 Theoretical reminders on the Discrete Gaussian model applied to block simulations . . 218
18 Localized Uniform Conditioning . . . . . . . . . . . . . . 225
18.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 226
19 Skin algorithm . . . . . . . . . . . . . . . . . . . . . . 229
Isatoil . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
20 Isatoil . . . . . . . . . . . . . . . . . . . . . . . . . . 235
20.1 Data description . . . . . . . . . . . . . . . . . . . . 236
20.2 Workflow . . . . . . . . . . . . . . . . . . . . . . . . 238
20.3 Modelling the geological structure . . . . . . . . . . . 239
20.4 Modelling the petrophysical parameters . . . . . . . . . 251
Introduction
1 Hints on Learning Isatis
The Beginner's Guide, the On-Line documentation tools and the Case Studies Manual are the main
ways to get started with Isatis.
Using the Beginner's Guide to learn common tasks
This Beginner's Guide is a great place to start if you are new to Isatis. In this guide you will find a quick overview of the package and several tutorials to learn how to work with the main Isatis objects. The Getting Started With Geostatistics part teaches you the basics of geostatistics and guides you from exploratory data analysis and variography to kriging and simulations.
Browsing the On-Line documentation tools
Isatis offers a comprehensive On-line Help describing the entire set of parameters that appears in
the user interface.
If you need help on how to run a particular Isatis application, just press F1 within the window to start the On-Line Help system. You get a short reminder of the technique and of the algorithm implemented in Isatis, and a detailed description of all the parameters.
Technical references are available within the On-Line Help System. They present details about the methodology and the underlying theory and equations. These technical references are available in PDF format and may be displayed on screen or printed.
A compiled version of all the Isatis technical references is also available for your convenience: just
click on Technical References on the top bar of any On-Line Help window.
Going through geostatistical workflows with the Case Studies
A set of case studies is developed in the Case Studies manual. The Case Studies are mainly
designed:
- for new users, to get familiar with Isatis and to provide guidelines for carrying a study through,
- for all users, to improve their geostatistical knowledge by presenting detailed geostatistical workflows.
Basically, each case study describes how to carry out some specific calculations in Isatis as precisely as possible. You may either:
- replay by yourself the case study proposed in the manual, as all the data sets are installed on your disk together with the software,
- or just be guided by the descriptions and apply the workflow to your own datasets.
2 Getting Help
You have three options for getting help while using Isatis: the On-Line Help system, the Frequently Asked Questions and the Technical Support team (support@geovariances.com).
Using the On-Line Help System
Isatis software offers a comprehensive On-line Help System to complement this Beginner's Guide.
The On-Line Help describes the whole set of parameters that appears in the user interface. To use
help, choose Help > Help in the main Isatis window or press F1 from any Isatis window.
Table of Contents and Index - These facilities will help you navigate through the On-Line Help
System. They are available on the top bar of the main On-Line Help window.
FAQ - A local copy of Isatis Frequently Asked Questions is available on your system. Just click on
FAQ on the top bar of the main On-Line Help window.
Support site - You may also directly access the Geovariances Support Web site to check for
updates of the software or recent Frequently Asked Questions.
Register - Directly access the Ask for Registration section of the Geovariances Support Web site to get a personal login and password and access the restricted download area.
Technical References - Short technical references are available from the On-Line Help System: they present more detailed information about the geostatistical methodologies that have been implemented in the software and their underlying theory. In particular, you will find the mathematical formulae used in the main algorithms.
Accessing Geovariances Technical Support
No matter where in the world you are, our professional support team is ready to help you, keeping in mind the time and quality imperatives of your projects. Whatever your problem (software installation, Isatis usage, advanced geostatistical advice...), feel free to contact us:
support@geovariances.com
If your message concerns an urgent operational problem, feel free to contact the Help Desk by
phone: +33 (0)1 60 74 91 00.
Using Web-Based Resources
You can access the Support section of Geovariances Web site from the main On-Line Help menu of
Isatis: just click on Support Site.
Geovariances Web site - Visit www.geovariances.com to find articles and publications, check the latest Frequently Asked Questions, and be informed about upcoming training sessions.
Isatis-release mailing list - Send an e-mail to support@geovariances.com to be registered on the isatis-release mailing list. You will be informed about Isatis updates, new releases and features.
Register - Get a personal login and password and access the restricted download area.
Check for Updates - Check the Geovariances Web site for Isatis updates.

Generalities
3 Structure Identification
in the Intrinsic Case
This page constitutes an add-on to the User's Guide for:
- Statistics / Exploratory Data Analysis
- Statistics / Variogram Fitting
This technical reference reviews the main tools available in Isatis to describe the spatial variability
(regularity, continuity, ...) of the variable(s) of interest, commonly referred to as the "Structure", in
the Intrinsic Case.
3.1 The Experimental Variability Functions

Though the variogram is the classical tool to measure the variability of a variable as a function of the distance, several other two-point statistics exist. Let us review them through their equations and their graphs on a given data set. In the formulae below, n designates the number of pairs of data separated by the considered distance h; Z(x) and Z(x+h) stand for the values of the variable at the two data points constituting a pair, Z(x+h) being considered to be at a distance of +h from Z(x); Σ denotes the sum over these n pairs. Moreover:

- m_Z is the mean over the whole data set,
- σ²_Z is the variance over the whole data set,
- m⁺_Z is the mean calculated over the first points of the pairs (heads),
- m⁻_Z is the mean calculated over the second points of the pairs (tails),
- σ⁺_Z is the standard deviation calculated over the head points,
- σ⁻_Z is the standard deviation calculated over the tail points.

3.1.1 Univariate case

The Transitive Covariogram

g(h) = Σ Z(x) · Z(x+h)

(fig. 3.1-1)
The Variogram

γ(h) = (1/2n) Σ [Z(x+h) − Z(x)]²

(fig. 3.1-2)

The Covariance (centered)

C(h) = (1/n) Σ [Z(x) − m_Z] · [Z(x+h) − m_Z]

(fig. 3.1-3)
The Non-Centered Covariance

C_nc(h) = (1/n) Σ Z(x) · Z(x+h)

(fig. 3.1-4)

The Non-Ergodic Covariance

C_ne(h) = (1/n) Σ [Z(x) − m⁺_Z] · [Z(x+h) − m⁻_Z]

(fig. 3.1-5)
The Correlogram

ρ(h) = (1/n) Σ [Z(x) − m_Z] · [Z(x+h) − m_Z] / σ²_Z

(fig. 3.1-6)

The Non-Ergodic Correlogram

ρ_ne(h) = (1/n) Σ [Z(x) − m⁺_Z] · [Z(x+h) − m⁻_Z] / (σ⁺_Z σ⁻_Z)

(fig. 3.1-7)
The Madogram (First Order Variogram)

γ₁(h) = (1/2n) Σ |Z(x+h) − Z(x)|

(fig. 3.1-8)

The Rodogram (1/2 Order Variogram)

γ₁/₂(h) = (1/2n) Σ |Z(x+h) − Z(x)|^(1/2)

(fig. 3.1-9)
The Relative Variogram

γ_R(h) = (1/2n) Σ [Z(x+h) − Z(x)]² / m²_Z

(fig. 3.1-10)

The Non-Ergodic Relative Variogram

γ_NR(h) = (1/2n) Σ [Z(x+h) − Z(x)]² / [(m⁺_Z + m⁻_Z)/2]²

(fig. 3.1-11)
The Pairwise Relative Variogram

γ_PR(h) = (1/2n) Σ [Z(x+h) − Z(x)]² / [(Z(x) + Z(x+h))/2]²

(fig. 3.1-12)
Although the interest of the madogram and rodogram, as compared to the variogram, may seem obvious (at least graphically), since they tend to smooth out the function, the user must always keep in mind that the only tool that corresponds to the statement of kriging (namely, minimizing a variance) is the variogram. This is particularly obvious when looking at the variability values (measured along the vertical axis) in the different figures, remembering that the experimental variance of the data is represented as a dashed line on the variogram picture.
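As an illustration, these experimental functions are straightforward to compute once the pairs of a given distance class have been collected. The following Python sketch (illustrative only, not Isatis code; the function names and the toy data set are ours) builds the head/tail pairs for one lag in 1-D and evaluates the variogram, madogram and rodogram:

```python
import numpy as np

def pairs_at_lag(x, z, lag, tol):
    """Collect head/tail values Z(x), Z(x+h) for all pairs whose 1-D
    separation falls in the class lag +/- tol."""
    d = x[None, :] - x[:, None]              # signed separations x_j - x_i
    i, j = np.where(np.abs(d - lag) <= tol)  # heads i, tails j
    return z[i], z[j]

def variogram(zh, zt):
    return 0.5 * np.mean((zt - zh) ** 2)     # (1/2n) sum of squared increments

def madogram(zh, zt):
    return 0.5 * np.mean(np.abs(zt - zh))    # first order variogram

def rodogram(zh, zt):
    return 0.5 * np.mean(np.abs(zt - zh) ** 0.5)  # 1/2 order variogram

# toy 1-D data set
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 100.0, 200))
z = np.sin(x / 10.0) + 0.1 * rng.standard_normal(200)

zh, zt = pairs_at_lag(x, z, lag=5.0, tol=0.5)
print(variogram(zh, zt), madogram(zh, zt), rodogram(zh, zt))
```

Note the smoothing effect mentioned above: the madogram and rodogram use absolute increments of order 1 and 1/2, which damp the influence of extreme pairs.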
3.1.2 Weighted Variability Functions

It can be of interest to take weights into account during the computation of variability functions. These weights can for instance be derived from declustering; in this case, their integration is expected to compensate for a potential bias in the estimation of the experimental function from clustered data. For further information about these weighted variograms, see for instance Rivoirard J. (2000), Weighted Variograms, in Geostats 2000, W. Kleingeld and D. Krige (eds), Vol. 1, pp. 145-155.

For instance, the weights w_α (α = 1, ..., N) are integrated in the weighted experimental variogram equation in the following way:

γ_w(h) = (1/2) Σ w_α w_β [Z(x_α) − Z(x_β)]² / Σ w_α w_β   (eq. 3.1-1)

where both sums run over the n pairs (x_α, x_β) separated by h.

The other experimental functions are obtained in a similar way.
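A minimal sketch of eq. 3.1-1 (our reading of the weighted variogram above, not Isatis code): each pair is weighted by the product of the declustering weights of its head and tail points.

```python
import numpy as np

def weighted_variogram(zh, zt, wh, wt):
    """Weighted experimental variogram for one lag class: each pair is
    weighted by the product of its head and tail declustering weights."""
    pw = wh * wt
    return 0.5 * np.sum(pw * (zt - zh) ** 2) / np.sum(pw)

zh = np.array([1.0, 2.0, 3.0])
zt = np.array([2.0, 4.0, 3.5])
w = np.ones(3)
# with equal weights this reduces to the ordinary experimental variogram
print(weighted_variogram(zh, zt, w, w))   # 0.875 = (1/2) * mean([1, 4, 0.25])
```

Since the weights appear both in the numerator and the denominator, rescaling all of them by a constant leaves the result unchanged.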
3.1.3 Multivariate case

In the multivariate case, kriging requires a multivariate model. The variograms of each variable are usually designated as "simple" variograms, while the variograms between two variables are called cross-variograms.

We will now describe, through their equations, the extension given to the statistical tools listed in the previous section for the multivariate case. We will designate the first variable by Z and the second by Y; m_Z and m_Y refer to their respective means over the whole field, m⁺_Z and m⁺_Y to their means for the head points, and m⁻_Z and m⁻_Y to their means for the tail points.

The Transitive Cross-Covariogram

g_ZY(h) = Σ Z(x) · Y(x+h)

(fig. 3.1-1)

The Cross-Variogram

γ_ZY(h) = (1/2n) Σ [Z(x+h) − Z(x)] · [Y(x+h) − Y(x)]
(fig. 3.1-2)

The Cross-Covariance (centered)

C_ZY(h) = (1/n) Σ [Z(x) − m_Z] · [Y(x+h) − m_Y]

(fig. 3.1-3)

The Non-Centered Cross-Covariance

C_ZY,nc(h) = (1/n) Σ Z(x) · Y(x+h)
(fig. 3.1-4)

The Non-Ergodic Cross-Covariance

C_ZY,ne(h) = (1/n) Σ [Z(x) − m⁺_Z] · [Y(x+h) − m⁻_Y]

(fig. 3.1-5)

The Cross-Correlogram

ρ_ZY(h) = (1/n) Σ [Z(x) − m_Z] · [Y(x+h) − m_Y] / (σ_Z σ_Y)
(fig. 3.1-6)

The Non-Ergodic Cross-Correlogram

ρ_ZY,ne(h) = (1/n) Σ [Z(x) − m⁺_Z] · [Y(x+h) − m⁻_Y] / (σ⁺_Z σ⁻_Y)

(fig. 3.1-7)

The Cross-Madogram

γ₁,ZY(h) = (1/2n) Σ |[Z(x+h) − Z(x)] · [Y(x+h) − Y(x)]|^(1/2)
(fig. 3.1-8)

The Cross-Rodogram

γ₁/₂,ZY(h) = (1/2n) Σ |[Z(x+h) − Z(x)] · [Y(x+h) − Y(x)]|^(1/4)

(fig. 3.1-9)

The Relative Cross-Variogram

γ_R,ZY(h) = (1/2n) Σ [Z(x+h) − Z(x)] · [Y(x+h) − Y(x)] / (m_Z m_Y)
(fig. 3.1-10)

The Non-Ergodic Relative Cross-Variogram

γ_NR,ZY(h) = (1/2n) Σ [Z(x+h) − Z(x)] · [Y(x+h) − Y(x)] / { [(m⁺_Z + m⁻_Z)/2] · [(m⁺_Y + m⁻_Y)/2] }

(fig. 3.1-11)

The Pairwise Relative Cross-Variogram

γ_PR,ZY(h) = (1/2n) Σ [Z(x+h) − Z(x)] · [Y(x+h) − Y(x)] / { [(Z(x) + Z(x+h))/2] · [(Y(x) + Y(x+h))/2] }
(fig. 3.1-12)

This time most of the curves are no longer symmetrical. In the case of the covariance, it is even convenient to split it into its odd and even parts, as represented below. If h designates the distance (vector) between the two data points constituting a pair, we then consider:

The Even Part of the Covariance

(1/2) [C_ZY(h) + C_ZY(−h)]

(fig. 3.1-13)

The Odd Part of the Covariance

(1/2) [C_ZY(h) − C_ZY(−h)]
(fig. 3.1-14)

Note - The cross-covariance function is a more powerful tool than the cross-variogram in terms of structural analysis, as it allows the identification of delay effects. However, since it necessitates stronger hypotheses (stationarity, estimation of means), it is not really used in the estimation steps.

In fact, the cross-variogram can be derived from the covariance as follows:

γ_ZY(h) = C_ZY(0) − (1/2) [C_ZY(h) + C_ZY(−h)]

and is therefore similar to the even part of the covariance. All the information carried by the odd part of the covariance is simply ignored.

A last remark concerns the presence of information on all variables at the same data points: this property is known as isotopy. The opposite case is heterotopy: one variable (at least) is not defined at all the data points.

The kriging procedure in the multivariate case can cope nicely with the heterotopic case. Nevertheless, one still has to calculate cross-variograms, which can obviously be established from the common information only. This is damaging in a strongly heterotopic case, where the structure, inferred from only a small part of the information, is used for a procedure which possibly operates on the whole data set.

3.1.4 Variogram Transformations

Several transformations based on variogram calculations (in the generic sense) are also provided:

The ratio between the cross-variogram and one of the simple variograms.
γ_ZY(h) / γ_Z(h)

(fig. 3.1-1)

When this ratio is constant, the variable corresponding to the simple variogram is "self-krigeable". This means that in the isotopic case (both variables measured at the same locations) the kriging of this variable is equal to its cokriging. This property can be extended to more than 2 variables: the ratio should be considered for any pair of variables which includes the self-krigeable variable.

The ratio between the square root of the variogram and the madogram:

√γ(h) / γ₁(h)

(fig. 3.1-2)

This ratio is constant and equal to √π for a standard normal variable, when its pairs satisfy the hypothesis of binormality. A similar result is obtained in the case of a bigamma hypothesis.

The ratio between the variogram and the madogram:

γ(h) / γ₁(h)

(fig. 3.1-3)

If the data obey a mosaic model with tiles identically and independently valuated, this ratio is constant.

The ratio between the cross-variogram and the square root of the product of the two simple variograms:

γ_ZY(h) / √(γ_Z(h) γ_Y(h))

(fig. 3.1-4)

When two variables are in intrinsic correlation, the two simple variograms and the cross-variogram are proportional to the same basic variogram. This means that this ratio must be constant in the case of intrinsic correlation. When two variables are in intrinsic correlation, cokriging and kriging are equivalent in the isotopic case.
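The self-krigeability ratio above is easy to monitor numerically. In this illustrative Python sketch (ours, not Isatis code), Y is an exact linear function of Z, so the ratio of the cross-variogram to the simple variogram of Z is constant whatever the pairs:

```python
import numpy as np

def simple_variogram(zh, zt):
    """(1/2n) sum of squared Z increments for one lag class."""
    return 0.5 * np.mean((zt - zh) ** 2)

def cross_variogram(zh, zt, yh, yt):
    """(1/2n) sum of the products of the Z and Y increments for one lag class."""
    return 0.5 * np.mean((zt - zh) * (yt - yh))

zh = np.array([1.0, 2.0, 4.0]); zt = np.array([2.0, 3.5, 3.0])
yh, yt = 2.0 * zh, 2.0 * zt          # Y = 2 Z: Z is self-krigeable
print(cross_variogram(zh, zt, yh, yt) / simple_variogram(zh, zt))   # 2.0
```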
3.2 Variogram Model
3.2.1 Basic Structures
The following pages illustrate all the basic structures available in Isatis to fit a variogram model on
an experimental variogram. Each basic structure is described by:
- its name,
- its mathematical expression, which involves:
  - A coefficient which gives the order of magnitude of the variability along the vertical axis (homogeneous to a variance). In the case of bounded functions (covariances), this value is simply the level of the plateau reached and is called the sill. The same concept has been kept even for the non-bounded functions, and we continue to call it the sill for convenience. The interest of this value is that it always comes as a multiplicative coefficient and can therefore be calculated using automatic procedures, as explained further. The sill is equal to "C" in the following models.
  - A parameter which affects the horizontal axis by normalizing the distances: hence the name of scale factor. This term avoids having to normalize beforehand the space where the variable is defined (for example when data are given in microns whereas the field extends over several kilometers). This scale factor is also linked to the physical parameter of the selected basic function.
    When the function is bounded, it reaches a constant level (sill) or even changes its expression after a given distance: this distance value is the range (or correlation distance in statistical language) and is equal to the scale factor. For the bounded functions where the sill is reached asymptotically, the scale factor corresponds to the distance where the function reaches 95% of the sill (also called the practical range). For functions where the sill is reached asymptotically in a sinusoidal way (hole-effect variogram), the scale factor is the distance from which the variation of the function does not exceed 5% around the sill value.
    This is why, in the variogram formulae, we systematically introduce the coefficient ν (norm) which gives the relationship between the Scale Factor (SF) and the parameter a:

    SF = ν · a

    For homogeneity of the notations, ν and a are kept even for the functions which depend on a single parameter (linear variogram for example): the only interest is to manipulate distances "standardized" by the scaling factor and therefore to reduce the risk of numerical instabilities.
    Finally, the scale factor is used in case of anisotropy. For bounded functions, it is easy to say that the variable is anisotropic if the range varies with the direction. This concept is generalized to any basic function by using a scale factor which depends on the direction in the calculation of the distance.
  - A third parameter α required by some particular basic structures.


- a chart representing the shape of the function for various values of the parameters,
- a non-conditional simulation performed on a 100 x 100 grid. As this technique systematically leads to a normal outcome (hence symmetrical), we have painted positive values in black and negative ones in white, except for the linear model where the median is used as a threshold.
Spherical Variogram

γ(h) = C [(3/2)(h/a) − (1/2)(h/a)³]   (h < a)   (eq. 3.2-1)
γ(h) = C   (h ≥ a)
ν = 1

(fig. 3.2-1)
Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)

Exponential Variogram

γ(h) = C [1 − exp(−h/a)]   (eq. 3.2-2)
ν = 2.996
(fig. 3.2-2)
Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)
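The scale-factor convention can be made concrete with a short sketch. The Python below (ours, not Isatis code) evaluates the spherical and exponential models from a user-supplied scale factor SF, converting it internally to the parameter a through the norm (1 for the spherical, 2.996 for the exponential):

```python
import numpy as np

def spherical(h, C, sf):
    """Spherical variogram: the scale factor is the range itself (norm = 1)."""
    hr = np.minimum(np.asarray(h, dtype=float) / sf, 1.0)
    return C * (1.5 * hr - 0.5 * hr ** 3)

def exponential(h, C, sf):
    """Exponential variogram: a = SF / 2.996, so that gamma(SF) ~ 0.95 C
    (SF is the practical range)."""
    a = sf / 2.996
    return C * (1.0 - np.exp(-np.asarray(h, dtype=float) / a))

print(spherical(10.0, 1.0, 10.0))    # 1.0: the sill is reached at the range
print(exponential(10.0, 1.0, 10.0))  # ~0.95: 95% of the sill at the practical range
```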
Gaussian Variogram

γ(h) = C [1 − exp(−(h/a)²)]   (eq. 3.2-3)
ν = 1.731

(fig. 3.2-3)
Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)

Cubic Variogram
γ(h) = C [7(h/a)² − (35/4)(h/a)³ + (7/2)(h/a)⁵ − (3/4)(h/a)⁷]   (h < a)   (eq. 3.2-4)
γ(h) = C   (h ≥ a)
ν = 1

(fig. 3.2-4)
Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)

Cardinal Sine Variogram

γ(h) = C [1 − sin(h/a) / (h/a)]   (eq. 3.2-5)
ν = 20.371
(fig. 3.2-5)
Variograms (SF=1.,5.,10.,15.,20.,25.) & Simulation (SF=25.)

Stable Variogram

γ(h) = C [1 − exp(−(h/a)^α)]   (eq. 3.2-6)
ν = 3^(1/α)

(fig. 3.2-6)
Variograms (SF= 8. & α = .25, .50, .75, 1., 1.25, 1.5, 1.75, 2.)
Note - The technique for simulating stable variograms is not implemented in the Turning Bands method.

Gamma Variogram

γ(h) = C [1 − 1 / (1 + h/a)^α]   (a > 0)   (eq. 3.2-7)
ν = 20^(1/α) − 1

(fig. 3.2-7)
Variograms (SF= 8. & α = .5,1.,2.,5.,10.,20.) & Simulation (SF= 10. & α = 2.)

Note - For α = 1, this model is called the hyperbolic model.

J-Bessel Variogram

γ(h) = C [1 − 2^α Γ(α+1) J_α(h/a) / (h/a)^α]   (α ≥ d/2 − 1)   (eq. 3.2-8)
ν = 1

where (from Chilès J.P. & Delfiner P., 1999, Geostatistics: Modeling Spatial Uncertainty, Wiley Series in Probability and Statistics, New York):
- the Gamma function is defined for α > 0 by (Euler's integral)

Γ(α) = ∫₀^∞ e^(−u) u^(α−1) du   (eq. 3.2-9)

- the Bessel function of the first kind with index α is defined by the development

J_α(x) = (x/2)^α Σ_{k=0}^∞ [(−1)^k / (k! Γ(α+k+1))] (x/2)^(2k)   (eq. 3.2-10)

- the modified Bessel function of the first kind, used below, is defined by

I_α(x) = (x/2)^α Σ_{k=0}^∞ [1 / (k! Γ(α+k+1))] (x/2)^(2k)   (eq. 3.2-11)

- the modified Bessel function of the second kind, used in the K-Bessel variogram hereafter, is defined by

K_α(x) = (π/2) [I_{−α}(x) − I_α(x)] / sin(απ)   (eq. 3.2-12)

(fig. 3.2-8)
Variograms (SF=1. & α = .5,1.,2.,3.,4.,5.) & Simulation (SF=1. & α = 1)

K-Bessel Variogram
γ(h) = C [1 − (h/a)^α / (2^(α−1) Γ(α)) · K_α(h/a)]   (α > 0)   (eq. 3.2-13)
ν = 1

(fig. 3.2-9)
Variograms (SF=1. & α = .1,.5,1.,2.,5.,10.) & Simulation (SF=1. & α = 1.)
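The series definitions above can be checked with a few lines of plain Python (ours, not Isatis code; the series are truncated after a fixed number of terms, and the K formula only applies to non-integer α). For α = 0.5 the K-Bessel model reduces to the exponential variogram, which gives a convenient sanity check:

```python
import math

def bessel_i(alpha, x, terms=30):
    """Modified Bessel function of the first kind via its series (eq. 3.2-11),
    truncated after `terms` terms."""
    s = sum((x / 2.0) ** (2 * k) / (math.factorial(k) * math.gamma(alpha + k + 1))
            for k in range(terms))
    return (x / 2.0) ** alpha * s

def bessel_k(alpha, x, terms=30):
    """Modified Bessel function of the second kind (eq. 3.2-12); alpha must
    not be an integer (the integer case is obtained as a limit)."""
    return (math.pi / 2.0) * (bessel_i(-alpha, x, terms) - bessel_i(alpha, x, terms)) \
        / math.sin(alpha * math.pi)

def k_bessel_variogram(h, C, a, alpha):
    """K-Bessel variogram of eq. 3.2-13."""
    if h == 0.0:
        return 0.0
    u = h / a
    return C * (1.0 - u ** alpha / (2.0 ** (alpha - 1.0) * math.gamma(alpha))
                * bessel_k(alpha, u))

# for alpha = 0.5 the K-Bessel model is the exponential model 1 - exp(-h/a)
print(k_bessel_variogram(5.0, 1.0, 1.0, 0.5), 1.0 - math.exp(-5.0))
```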
Exponential Cosine (Hole Effect Model)

C(h) = cos(h_z / a₂) · exp(−√((h_xy/a₁)² + (h_z/a₁)²))   (h ∈ Rⁿ, a₁ > 0, a₂ > 0)

Note that C(h) is a covariance in R² if and only if a₂ ≥ a₁, and in R³ if and only if a₂ ≥ a₁√3 (in Chilès, Delfiner, Geostatistics, 1999).

Note - This model cannot be used in the Turning Bands simulations.

Generalized Cauchy Variogram

γ(h) = C [1 − (1 + (h/a)²)^(−α)]   (α > 0)   (eq. 3.2-14)
ν = √(20^(1/α) − 1)
(fig. 3.2-10)
Variograms (SF=10. & α = .1,.5,1.,2.,5.,10.) & Simulation (SF=10. & α = 1.)

Linear Variogram

γ(h) = C (h/a)   (eq. 3.2-15)
ν = 1

(fig. 3.2-11)
Variogram (SF= 5.) & Simulation (SF = 5.)

Power Variogram

γ(h) = C (h/a)^α   (0 < α < 2)   (eq. 3.2-16)
ν = 1

(fig. 3.2-12)
Variograms (SF= 5. & α = 0.25, 0.5, 0.75, 1., 1.25, 1.5, 1.75)

Note - The technique for simulating Power variograms is not implemented in the Turning Bands method.
3.2.2 The Anisotropy
By anisotropy we mean the difference in the variability of the phenomenon in the different direc-
tions of the space. For the practical description (in 2D) of this concept, we focus on two orthogonal
directions and distinguish between the two following behaviors, illustrated through basic structures
with sill and range:
- same sill, different ranges: geometric anisotropy.
Its name comes from the fact that by stretching the space in one direction by a convenient factor, we also stretch the corresponding directional range until it reaches the range in the orthogonal direction. In this new space, the phenomenon is then isotropic: the correction is of a geometric nature.

(fig. 3.2-1)
Nugget Effect + Spherical (10km, 4km)
- same range, different sills: zonal anisotropy.
This describes a phenomenon whose variability is larger in one direction than in the orthogonal one. This is typically the case for the vertical orientation through a "layer cake" deposit, as opposed to any horizontal orientation. No geometric correction will reduce this dissimilarity.

(fig. 3.2-2)
Nugget Effect + Spherical (N/A, 4km)
l Practical calculations
The anisotropy consists of a rotation and the ranges along the different axes of the rotated system. The rotation can be defined either globally or for each basic structure.
In the 2D case, for one basic structure, and if "u" and "v" designate the two components of the distance vector in the rotated system, we first calculate the equivalent distance:

(eq. 3.2-1)

d² = (u/a_u)² + (v/a_v)²

where a_u and a_v are the ranges of the model along the two rotated axes.
Then this distance is used directly in the isotropic variogram expression where the range is normalized to 1.
In the case of geometric anisotropy, the value a_u/a_v corresponds to the ratio between the two main axes of the anisotropy ellipse.
For zonal anisotropy, we can consider that the contribution of the distance component along one of the rotated axes is discarded: this is obtained by setting the corresponding range to "infinity".
Obviously, in nature, both anisotropies can be present, and, moreover, simultaneously.
Finally the setup of any anisotropy requires the definition of a system: this is the system carrying the anisotropy ellipsoid in case of geometric (or elliptic) anisotropy, or the system carrying the direction or plane of zonal anisotropy.
This new system is defined by one rotation angle in 2D, or by 3 angles (dip, azimuth and plunge) in 3D. It is possible to attach the anisotropy rotation system globally or individually to each one of the nested basic structures. This possibility leads to an enormous variety of different textures.
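The equivalent-distance calculation can be sketched as follows (a minimal 2D illustration; the function names, the rotation convention and the use of float('inf') for a discarded zonal component are our assumptions, not Isatis internals):

```python
import math

def equivalent_distance(u, v, a_u, a_v, angle_deg=0.0):
    """Rotate the distance vector (u, v) into the anisotropy system, then
    normalize each component by its range: d^2 = (u'/a_u)^2 + (v'/a_v)^2.
    A range set to float('inf') discards that component (zonal anisotropy)."""
    t = math.radians(angle_deg)
    ur = u * math.cos(t) + v * math.sin(t)
    vr = -u * math.sin(t) + v * math.cos(t)
    du = 0.0 if math.isinf(a_u) else ur / a_u
    dv = 0.0 if math.isinf(a_v) else vr / a_v
    return math.hypot(du, dv)

def spherical(d, sill=1.0):
    """Isotropic spherical variogram with the range normalized to 1."""
    if d >= 1.0:
        return sill
    return sill * (1.5 * d - 0.5 * d ** 3)
```

The anisotropic variogram value is then simply spherical(equivalent_distance(u, v, a_u, a_v, angle)), which reproduces the geometric case (both ranges finite) and the zonal case (one range infinite).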
3.2.3 Integral Ranges

The integral range is the value of the following integral (only defined for bounded covariances):

(eq. 3.2-1)

A = ∫ C(h) dh

A is a function of the dimension of the space.
The following table gives the integral ranges of the main basic structures when the sill C is set to 1, with b = SF = a and α the third parameter.

                  1-D                              2-D                                  3-D
Nugget Effect     0                                0                                    0
Exponential       2b                               2πb²                                 8πb³
Spherical         3b/4                             (π/5)·b²                             (π/6)·b³
Gaussian          √π·b                             π·b²                                 π^(3/2)·b³
Cardinal Sine     π·b                              +∞                                   +∞
Stable            2b·Γ(1/α + 1)                    π·b²·Γ(2/α + 1)                      (4π/3)·b³·Γ(3/α + 1)
Gamma             2b/(α−1) if α > 1,               2πb²/[(α−1)(α−2)] if α > 2,          8πb³/[(α−1)(α−2)(α−3)] if α > 3,
                  +∞ otherwise                     +∞ otherwise                         +∞ otherwise
J-Bessel          2√π·b·Γ(α+1)/Γ(α+1/2)            4πα·b² if α > 3/2,                   8π^(3/2)·b³·Γ(α+1)/Γ(α−1/2)
                  if α > 1/2, +∞ otherwise         +∞ otherwise                         if α > 5/2, +∞ otherwise
K-Bessel          2√π·b·Γ(α+1/2)/Γ(α)              4πα·b²                               8π^(3/2)·b³·Γ(α+3/2)/Γ(α)
Gen. Cauchy       √π·b·Γ(α−1/2)/Γ(α)               π·b²/(α−1) if α > 1,                 π^(3/2)·b³·Γ(α−3/2)/Γ(α)
                  if α > 1/2, +∞ otherwise         +∞ otherwise                         if α > 3/2, +∞ otherwise
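The closed-form entries of the table can be checked numerically; for instance, for the exponential model (a small verification sketch with our own midpoint-rule helpers, not Isatis code):

```python
import math

def integral_range_1d(cov, upper, n=200000):
    """A = integral of C(h) over R, computed as 2 * integral over [0, upper]."""
    step = upper / n
    return 2.0 * sum(cov((i + 0.5) * step) for i in range(n)) * step

def integral_range_2d(cov, upper, n=200000):
    """A = 2*pi * integral of r*C(r) dr for an isotropic covariance in 2-D."""
    step = upper / n
    return 2.0 * math.pi * sum(((i + 0.5) * step) * cov((i + 0.5) * step)
                               for i in range(n)) * step

b = 2.5

def exponential(h):
    return math.exp(-h / b)
```

With these helpers, integral_range_1d(exponential, 100.0) approaches 2b and integral_range_2d(exponential, 100.0) approaches 2πb², matching the table.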
3.2.4 Convolution

If we know that the measured variable Z is the result of a convolution p applied on the underlying variable Y:

(eq. 3.2-1)

Z = Y ∗ p

we can demonstrate that the variogram of Z can be deduced from the variogram of Y as follows:

(eq. 3.2-2)

γ_Z = γ_Y ∗ P        with P = p ∗ p̄, where p̄(u) = p(−u)

Therefore, if the convolution function is fully determined (its type and the corresponding parameters), specifying a model for Y will lead to the corresponding model for Z.

3.2.5 Incrementation

In order to introduce the concept of incrementation, we must recall the link between the variogram and the covariance:

(eq. 3.2-1)

γ(h) = C(0) − C(h)

where γ(h) is calculated as the variance of the smallest possible increment:

(eq. 3.2-2)

γ(h) = (1/2) · E[ Z(x+h) − Z(x) ]²
We can then introduce the generalized variogram Γ(h) as the variance of the increment of order (k+1):

(eq. 3.2-3)

Γ(h) = (1/M_k) · Var[ Σ_{q=0..k+1} (−1)^q · C(k+1, q) · Z(x + (k+1−q)·h) ]

where C(n, q) denotes the binomial coefficients; this definition requires the data to be located along a regular grid.
The scaling factor M_k = C(2k+2, k+1) is there to ensure that, in the case of a pure nugget effect:

(eq. 3.2-4)

Γ(h) = C_0 for h ≠ 0        (and Γ(0) = 0)

The benefit of the incrementation is that the generalized variogram can be derived using the generalized covariance:

(eq. 3.2-5)

Γ(h) = (1/M_k) · Σ_{p=−(k+1)..k+1} (−1)^p · C(2k+2, k+1+p) · K(p·h)

Then, we make explicit the relationships between Γ(h) and K(h) for several orders k:

k = 0:   Γ(h) = K(0) − K(h)
k = 1:   Γ(h) = K(0) − (4/3)·K(h) + (1/3)·K(2h)
k = 2:   Γ(h) = K(0) − (3/2)·K(h) + (3/5)·K(2h) − (1/10)·K(3h)

Generally speaking, we can say that the shape of Γ(h) is not modified when considering K(h):
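These coefficients can be regenerated directly from (eq. 3.2-5) (a small sketch; the helper name and the merging of the p and −p terms, allowed since K is even, are ours):

```python
from math import comb

def gen_variogram_coeffs(k):
    """Coefficients c_p such that Gamma(h) = c_0*K(0) + sum_{p>=1} c_p*K(p*h),
    following Gamma(h) = (1/M_k) * sum_{p=-(k+1)}^{k+1} (-1)^p * C(2k+2, k+1+p) * K(p*h)
    with M_k = C(2k+2, k+1); the p and -p terms are merged because K is even."""
    m_k = comb(2 * k + 2, k + 1)
    coeffs = [1.0]  # p = 0 contributes C(2k+2, k+1) / M_k = 1, applied to K(0)
    for p in range(1, k + 2):
        coeffs.append(2.0 * (-1) ** p * comb(2 * k + 2, k + 1 + p) / m_k)
    return coeffs
```

For k = 0, 1, 2 this reproduces the three expansions listed above.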
l if K(h) is a standard covariance (range a and sill C), Γ(h) reaches the same sill C for the same range: its shape is slightly different.
l if K(h) is a generalized covariance of the |h|^λ type, then Γ(h) is of the same type: the only difference comes from its coefficient, which is multiplied by:

(eq. 3.2-6)

(1/M_k) · Σ_{p=−(k+1)..k+1} (−1)^p · C(2k+2, k+1+p) · |p|^λ

3.2.6 Multivariate Case

When several variables are considered simultaneously, we work in the scope of the Linear Model of Coregionalization which corresponds to a rather crude hypothesis, although it has been used satisfactorily in a very large number of cases.
In this model, every variable is expressed as a linear combination of the same elementary components or factors. Therefore all simple and cross-variograms can be expressed as linear combinations of the same basic structures (i.e. the variograms of the factors).
The covariance model is then defined by the list of the nested normalized basic structures (sill=1) and the matrix of the sills (square, symmetrical and whose dimension is equal to the number of variables): each element b_ij^p is the sill of the cross-variogram between variables "i" and "j" (or the sill b_ii^p of the variogram of variable "i" for i = j) for the basic structure "p".

Note - The cross-covariance value at the origin may be badly defined in the heterotopic case, or even undefined in the fully heterotopic case. It is possible to specify the values of the simple and cross-covariances at the origin, using for instance the knowledge about the variance-covariance coming from another dataset.

3.3 The Automatic Sill Fitting Procedure

Isatis uses an original algorithm to fit a univariate or a multivariate model of coregionalization to the experimental variograms. The algorithm, called Multi Scale P.C.A., has been developed by C. Lajaunie (see Lajaunie C., Béhaxétéguy J.P., Élaboration d'un programme d'ajustement semi-automatique d'un modèle de corégionalisation - Théorie, Technical report N21/89/G, Paris: ENSMP, 1989, 6p).
This technique can be used, when the set of basic structures has been defined, in order to establish the matrix of sills.
It obviously also works for a single variable. Nevertheless, we must note that it can only be used to infer the sill coefficients of the model but does not help for all the other types of parameters such as:
l the number and types of basic structures,
l for each one of them, the range or third coefficient (if any),
l finally, the anisotropy.
This is why the term "automatic fitting" is somewhat of an overstatement.
Considering a set of N second order stationary regionalized random functions Z_i(x), we wish to establish the multivariate model taking into account all the simple and cross covariances C_ij(h).
If the variables Z_i(x) are intrinsic, the covariances no longer exist and the model must then be derived from the simple and cross variograms γ_ij(h). Nevertheless, this chapter will be developed in the stationary case.
A well known result is that the matrix of the sills b_ij^p for each basic structure p must be (semi-) definite positive in order to ensure the positiveness of the variance of any linear combination of the random variables Z_i(x).
In order to build this linear model of coregionalization, we assume that the variables Z_i are decomposed on a basis of random variables generically denoted Y, stationary and orthogonal. These variables are regrouped in P groups of random functions Y_k^p characterized by the same covariance C_p(h) called the basic structure. The count of variables within each group is equal to the number of variables N. We will then write:

(eq. 3.3-1)

Z_i(x) = Σ_{p=1..P} Σ_{k=1..N} a_ik^p · Y_k^p(x)

The coefficients a_ik^p are the coefficients of the linear model. The covariance between two variables Z_i and Z_j can then be written:

(eq. 3.3-2)

C_ij(h) = Σ_{p=1..P} Σ_{k=1..N} a_ik^p · a_jk^p · C_p(h)

which can also be considered as:

(eq. 3.3-3)

C_ij(h) = Σ_{p=1..P} b_ij^p · C_p(h)

Obviously the terms b_ij^p = Σ_{k=1..N} a_ik^p · a_jk^p, homogeneous to sills, are symmetric, and the matrices B_p whose generic terms are b_ij^p are symmetric, semi-definite positive: they correspond to the variance-covariance matrix for each basic structure.
3.3.1 Procedure

Assuming that the number of basic structures P, as well as all the characteristics of each basic model C_p(h), are defined, the procedure determines all the coefficients a_ik^p and derives the variance-covariance matrices.
Starting from the experimental simple and cross-covariances C*_ij(h_u) calculated on a set of U lags h_u, the procedure tries to minimize the quantity:

(eq. 3.3-1)

Q = Σ_{i,j} Σ_{u=1..U} ω(h_u) · [ C*_ij(h_u) − C_ij(h_u) ]²

where ω(h_u) is a weighting function chosen in order to reduce the importance of the lags with few pairs, and to increase the weight of the first lags corresponding to short distances. For more information on the choice of these weights, the user should refer to the next paragraph.
Each matrix B_p is decomposed as:

(eq. 3.3-2)

B_p = X_p · Λ_p · X_p^T

where X_p is the matrix composed of the normalized eigen vectors and Λ_p is the diagonal matrix of the eigen values. Instead of minimizing (eq. 3.3-1) under the constraints that B_p is definite positive, we prefer writing that:

(eq. 3.3-3)

b_ij^p = Σ_{k=1..N} a_ik^p · a_jk^p

imposing that each coefficient satisfies:

(eq. 3.3-4)

a_ik^p = √(λ_k^p) · x_ik^p

where λ_k^p is the k-th term of the diagonal of Λ_p and x_.k^p is the k-th eigen vector of the matrix X_p. This hypothesis will ensure the matrix B_p to be definite positive.
Equation (eq. 3.3-1) can now be reformulated:

(eq. 3.3-5)

Q = Σ_{i,j} Σ_{u=1..U} ω(h_u) · [ C*_ij(h_u) − Σ_{p=1..P} Σ_{k=1..N} a_ik^p · a_jk^p · C_p(h_u) ]²

Without losing generality, we can impose orthogonality constraints:

(eq. 3.3-6)

Σ_i a_ik^p · a_il^p = 0        (k ≠ l)

If we introduce the terms:

(eq. 3.3-7)

K_ij = Σ_{u=1..U} ω(h_u) · [ C*_ij(h_u) ]²
T_pq = Σ_{u=1..U} ω(h_u) · C_p(h_u) · C_q(h_u)
A_ij^p = Σ_{u=1..U} ω(h_u) · C_p(h_u) · C*_ij(h_u)

the criterion (eq. 3.3-5) becomes:

(eq. 3.3-8)

Q = Σ_{i,j} [ K_ij + Σ_{p,q,k,l} a_ik^p · a_jk^p · a_il^q · a_jl^q · T_pq − 2 · Σ_{p,k} a_ik^p · a_jk^p · A_ij^p ]

By differentiation against each a_ik^p, we obtain, for each i, k and p:

(eq. 3.3-9)

Σ_{j,l,q} a_jk^p · a_il^q · a_jl^q · T_pq = Σ_j a_jk^p · A_ij^p

We shall describe the case of a single structure first before reviewing the more general case of several nested basic structures.
3.3.1.1 Case of a Single Basic Structure

As the number of basic structures is reduced to 1, the indices p and q are omitted in the set of equations (eq. 3.3-9):

(eq. 3.3-1)

Σ_{j,l} a_jk · a_il · a_jl · T = Σ_j a_jk · A_ij        (for each i, k)

Using the orthogonality constraints, the only non-zero term in the left-hand side of the equality is obtained when j = i:

(eq. 3.3-2)

a_ik · [ Σ_l (a_il)² ] · T = Σ_j a_jk · A_ij        (for each i, k)

If we introduce:

(eq. 3.3-3)

P_i = Σ_l (a_il)²

then:

(eq. 3.3-4)

a_ik · P_i · T = Σ_j a_jk · A_ij        (for each i, k)

This leads to an eigen vector problem. If we denote respectively by μ_k and x_ik the eigen values and the corresponding normalized eigen vectors, then:

(eq. 3.3-5)

a_ik = √(μ_k / T) · x_ik        if μ_k > 0
a_ik = 0                        if μ_k ≤ 0

The minimum of Q is then equal to:

(eq. 3.3-6)

Σ_{i,j} K_ij − Σ_{k ∈ K} (μ_k)² / T

where K designates the set of indices corresponding to positive eigen values.
This result will now be generalized to the case of several nested basic structures.
3.3.1.2 Case of Several Basic Structures

The procedure is iterative and consists in optimizing each basic structure in turn, taking into account the structures already optimized. The following flow chart describes one iteration:
1. Loop on each basic structure p = 1, ..., P.
If we define:

(eq. 3.3-1)

K_ij^p(h) = C*_ij(h) − Σ_{q ≠ p} b_ij^q · C_q(h)

we optimize the a_ik^p in the equation:

(eq. 3.3-2)

Σ_{i,j,u} ω(h_u) · [ K_ij^p(h_u) − Σ_k a_ik^p · a_jk^p · C_p(h_u) ]²

we then set, due to the orthogonality constraints:

(eq. 3.3-3)

b_ij^p = Σ_k a_ik^p · a_jk^p

2. Improvement of the solution by selecting the coefficients m_p which minimize:

(eq. 3.3-4)

Σ_{i,j,u} ω(h_u) · [ C*_ij(h_u) − Σ_p m_p · b_ij^p · C_p(h_u) ]²

If m_p is positive, we update the results of step (1):

(eq. 3.3-5)

b_ij^p ← m_p · b_ij^p
a_ik^p ← √(m_p) · a_ik^p

Return to step (1).
Step (2) is used to equalize the weight of each basic structure, as the first structure processed in step (1) has more influence than the next ones.
The coefficient m_q is the solution of the linear system:

(eq. 3.3-6)

Σ_p m_p · T_pq · [ Σ_{i,j} b_ij^p · b_ij^q ] = Σ_{i,j} b_ij^q · A_ij^q        (for each q)

Note - This procedure ensures that Q converges but does not imply that the b_ij^p converge.
3.3.2 Choice of the Weights

The principle of the Automatic Sill Fitting procedure is to minimize the distance between the experimental value of a variogram lag and the corresponding value of the model. This minimization is performed giving different weights to the different lags. The determination of these weights depends on one of the four following rules.
l Each lag of each direction has the same weight.
l The weight for each lag of each direction is proportional to the total number of pairs for all the
lags of this direction.
l The weight for each lag of each direction is proportional to the number of pairs and inversely
proportional to the average distance of the lag.
l The weight for each lag of each direction is inversely proportional to the number of lags in this
direction.
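The four rules can be sketched as follows (a loose illustration; the rule names, the input layout and the normalization to a unit sum are our choices, not the Isatis conventions):

```python
def lag_weights(directions, rule):
    """Weights for every lag of every direction.
    `directions` is a list of directions, each a list of (npairs, avg_dist) lags."""
    raw = []
    for lags in directions:
        total_pairs = sum(p for p, _ in lags)
        for npairs, dist in lags:
            if rule == "equal":
                raw.append(1.0)                 # rule 1: same weight everywhere
            elif rule == "pairs":
                raw.append(float(total_pairs))  # rule 2: total pairs of the direction
            elif rule == "pairs_over_dist":
                raw.append(npairs / dist)       # rule 3: pairs / average lag distance
            elif rule == "inv_nlags":
                raw.append(1.0 / len(lags))     # rule 4: 1 / number of lags in direction
            else:
                raise ValueError(rule)
    total = sum(raw)
    return [w / total for w in raw]
```

Rule 3, for instance, gives the short, well-informed lags the largest weights, which matches the intent described above.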
3.3.3 Printout of the Linear Model of Coregionalization
This paragraph illustrates a typical printout for a model established for two variables called "Pb"
and "Zn":
Model : Covariance part
=======================
Number of variables = 2
- Variable 1 : Pb
- Variable 2 : Zn
The model is fitted using a linear combination of two basic structures:
l an exponential variogram with a scale factor of 2.5km (practical range)
l a linear variogram (Order-1 G.C.) with a scale factor of 1km.
Number of basic structures = 2

S1 : Exponential - Scale = 2.50km

Variance-Covariance matrix :
Variable 1 Variable 2
Variable 1 1.1347 0.5334
Variable 2 0.5334 1.8167

Decomposition into factors (normalized eigen vectors) :
Variable 1 Variable 2
Factor 1 0.6975 1.2737
Factor 2 0.8051 -0.4409

Decomposition into eigen vectors (whose variance is eigen values) :
Variable 1 Variable 2 Eigen Val. Var. Perc.
E.Vect 1 0.4803 0.8771 2.1087 71.45
E.Vect 2 0.8771 -0.4803 0.8426 28.55

S2 : Order-1 G.C. - Scale = 1km

Variance-Covariance matrix :
Variable 1 Variable 2
Variable 1 0.2562 0.0927
Variable 2 0.0927 0.1224

Decomposition into factors (normalized eigen vectors) :
Variable 1 Variable 2
Factor 1 0.4906 0.2508
Factor 2 -0.1246 0.2438

Decomposition into eigen vectors (whose variance is eigen values) :
Variable 1 Variable 2 Eigen Val. Var. Perc.
E.Vect 1 0.8904 0.4552 0.3036 80.20
E.Vect 2 -0.4552 0.8904 0.0750 19.80
For each basic structure, the printout contains the following information:
In the Variance-Covariance matrix, the sill of the simple variogram for the first variable "Pb" and for the exponential basic structure is equal to 1.1347. This sill is equal to 1.8167 for the second variable "Zn" and the same exponential basic structure. The cross-variogram has a sill of 0.5334. These values correspond to the matrix of the b_ij^1 for the first basic structure.
This Variance-Covariance matrix is decomposed on the orthogonal normalized vectors Y_1 and Y_2. In this example and for the first basic structure, we can read that:

(eq. 3.3-1)

Pb = 0.6975 · Y_1 + 0.8051 · Y_2
Zn = 1.2737 · Y_1 − 0.4409 · Y_2

These coefficients are the coefficients a_ik^p in the procedure described beforehand, and one can check, for example, that for the first basic structure (p = 1):

(eq. 3.3-2)

b_11^1 = (a_11^1)² + (a_12^1)²        1.1347 = (0.6975)² + (0.8051)²
b_22^1 = (a_21^1)² + (a_22^1)²        1.8167 = (1.2737)² + (−0.4409)²
b_12^1 = a_11^1 · a_21^1 + a_12^1 · a_22^1        0.5334 = (0.6975)(1.2737) + (0.8051)(−0.4409)

The last array corresponds to the decomposition into eigen values and eigen vectors. For example:

(eq. 3.3-3)

a_11^1 = √(λ_1^1) · x_11^1        0.6975 = √2.1087 · (0.4803)
a_12^1 = √(λ_2^1) · x_12^1        0.8051 = √0.8426 · (0.8771)
a_21^1 = √(λ_1^1) · x_21^1        1.2737 = √2.1087 · (0.8771)
a_22^1 = √(λ_2^1) · x_22^1        −0.4409 = √0.8426 · (−0.4803)

We can easily check that the vectors x_.1^1 and x_.2^1 are orthogonal and normalized.
Each eigen vector corresponds to a line and is attached to an eigen value. They are displayed by decreasing order of the eigen values. As the variance-covariance matrix is definite positive, the eigen values are positive or null. Their sum is equal to the trace of the matrix and it makes sense to express them as a percentage of the total trace. This value is called "Var. Perc.".
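The arithmetic of this printout can be replayed in a few lines (a standalone check using a closed-form 2x2 eigen decomposition; the variable names are ours and the helper assumes a non-zero off-diagonal sill):

```python
import math

# Sill (variance-covariance) matrix of the first basic structure, read from the printout
b11, b12, b22 = 1.1347, 0.5334, 1.8167

# Closed-form eigen decomposition of the symmetric 2x2 matrix (assumes b12 != 0)
tr = b11 + b22
det = b11 * b22 - b12 * b12
disc = math.sqrt(tr * tr / 4.0 - det)
mu1, mu2 = tr / 2.0 + disc, tr / 2.0 - disc   # eigen values, decreasing order

def unit_eigvec(mu):
    x, y = b12, mu - b11
    n = math.hypot(x, y)
    return (x / n, y / n)

x1, x2 = unit_eigvec(mu1), unit_eigvec(mu2)

# Coefficients a_ik = sqrt(mu_k) * x_ik (eq. 3.3-4); rows correspond to Pb and Zn
a = [[math.sqrt(mu1) * x1[0], math.sqrt(mu2) * x2[0]],
     [math.sqrt(mu1) * x1[1], math.sqrt(mu2) * x2[1]]]
```

Running this reproduces the eigen values 2.1087 and 0.8426, the factor coefficients 0.6975, 0.8051, 1.2737, -0.4409, and the sills b_ij = sum_k a_ik * a_jk.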
4 Non-stationary Modeling
This page constitutes an add-on to the User Guide for Statistics / Non-stationary Modeling
This technical reference describes the non-stationary variogram modeling approach, where both the Drift and the Covariance part of the Structure are directly derived in a calculation procedure.
In the non-stationary case (the variable shows either a global trend or local drifts), the correct tool cannot be the variogram any more, as we must deal with variables presenting much larger fluctuations. Generalized covariances are used instead. As they can be specified only when the drift hypotheses are given, a Non-stationary Model is constituted of both the drift and the generalized covariance parameters.
The general framework used for the non-stationary case is known as the Intrinsic Random Functions of order k (IRF-k for short). In this scope, the structural analysis is split into two steps:
m determination of the degree of the polynomial drift,
m inference of the optimal generalized covariance compatible with the degree of the drift.
The procedure described hereafter only concerns the univariate aspect. Nevertheless, it is designed to enable the use of the external drift feature.
4.1 Unique Neighborhood

4.1.1 Determination of the Degree of the Drift

The principle is to consider that the random variable Z(x) is only constituted of the drift, which corresponds to a large scale function with regard to the size of the neighborhood. This function is usually modeled as a low order polynomial:

(eq. 4.1-1)

Z(x) = m(x) = Σ_{l=1..K} a_l · f_l(x)

where:
f_l(x) denotes the basic monomials,
a_l are the unknown coefficients,
K represents the number of monomials and is related to the degree of the polynomial through the dimension of the space.
The procedure consists in a cross-validation criterion assuming that the best (order of the) drift is the one which results in the smallest average error. The cross-validation is a generic name for the process which in turn considers one data point (called the target), removes it and estimates it from the remaining neighboring information. The cross-validation error is the difference between the known and the estimated values. When the theoretical variance of estimation is available, the previous error can be divided by the estimation standard deviation.
The estimation m*(x) is obtained through a least squares procedure, the main lines of which are recalled here. If Z_α designates the neighboring information, we wish to minimize:

(eq. 4.1-2)

Σ_α [ Z_α − m(x_α) ]²

Replacing m(x_α) by its expansion:

(eq. 4.1-3)

Σ_α Z_α² − 2 · Σ_l a_l · Σ_α f_l(x_α) · Z_α + Σ_{l,m} a_l · a_m · Σ_α f_l(x_α) · f_m(x_α)

which must be minimized against each unknown a_l:

(eq. 4.1-4)

Σ_m a_m · Σ_α f_l(x_α) · f_m(x_α) = Σ_α f_l(x_α) · Z_α        (for each l)

In matrix notation:

(eq. 4.1-5)

(F^T · F) · A = F^T · Z
The principle in this drift identification phase consists in selecting data points as targets, fitting the polynomials for several order assumptions, based on their neighboring information, and deriving the mean square error for each assumption. The optimal drift assumption is the one which produces, on average, the smallest error variance.
The drawback of this method is its lack of robustness against possible outliers. As a matter of fact, an outlier will produce large variances whatever the degree of the polynomial and will reduce the discrepancy between the results.
A more efficient criterion, for each target point, is to rank the least squares errors for the various polynomial orders. The first rank is assigned to the order producing the smallest error, the second rank to the second smallest one, and so on. These ranks are finally averaged over the different target points and the smallest averaged rank corresponds to the optimal degree of the drift.
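The ranking criterion can be sketched as follows (a hypothetical 1-D illustration with our own plain-Python solver for (F^T F)·A = F^T Z; Isatis works with 2-D/3-D monomials and true neighborhoods):

```python
def lsq_fit(points, values, degree):
    """1-D least squares polynomial fit solving (F^T F) A = F^T Z
    by Gaussian elimination with partial pivoting (no external libraries)."""
    k = degree + 1
    F = [[x ** j for j in range(k)] for x in points]
    M = []
    for i in range(k):
        row = [sum(F[r][i] * F[r][j] for r in range(len(points))) for j in range(k)]
        row.append(sum(F[r][i] * values[r] for r in range(len(points))))
        M.append(row)
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    A = [0.0] * k
    for c in range(k - 1, -1, -1):
        A[c] = (M[c][k] - sum(M[c][j] * A[j] for j in range(c + 1, k))) / M[c][c]
    return A

def cross_validation_ranks(points, values, degrees=(0, 1, 2)):
    """Leave-one-out: re-estimate each target from the other points for each drift
    degree, rank the errors (rank 1 = smallest), and return the degree with the
    smallest summed rank."""
    sums = {d: 0.0 for d in degrees}
    for t in range(len(points)):
        xs = points[:t] + points[t + 1:]
        zs = values[:t] + values[t + 1:]
        errs = []
        for d in degrees:
            A = lsq_fit(xs, zs, d)
            est = sum(c * points[t] ** j for j, c in enumerate(A))
            errs.append((abs(est - values[t]), d))
        for rank, (_, d) in enumerate(sorted(errs), start=1):
            sums[d] += rank
    return min(sums, key=sums.get)
```

On data generated by a quadratic drift, for instance, the rank criterion selects degree 2.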
4.1.2 Inference of the Covariance

Here again, we consider the generic form of the generalized covariance:

(eq. 4.1-1)

K(h) = Σ_p b_p · K_p(h)

where K_p(h) corresponds to predefined basic structures.
The idea consists in finding the coefficients b_p but, this time, among a class of quadratic estimators:

(eq. 4.1-2)

b_p* = Σ_{α,β} λ_αβ^p · Z_α · Z_β

using systematically all the information available.
The principle of the method is based on the MINQUE theory (Rao) which has been rewritten in terms of generalized covariances.
Let Z be a vector random variable following the usual decomposition:

(eq. 4.1-3)

Z = X·β + U

Let us first review the MINQUE approach. The covariance matrix of Z can be expanded on a basis of authorized basic models:

(eq. 4.1-4)

Cov(Z, Z) = σ_1² · V_1 + ... + σ_r² · V_r

introducing the variance components σ_p². We can estimate them using a quadratic form:
(eq. 4.1-5)

σ_p²* = Z^T · A_p · Z

where the following conditions are satisfied by the matrix A_p:
1. Invariance: A_p · X = 0 (X is the drift matrix composed of the columns of coordinates)
2. Unbiasedness: Tr(A_p · V_q) = δ_pq
3. Optimality: ||A_p||_V² = Tr(A_p · V · A_p · V) minimum
where V is a covariance matrix used as a norm.
Rao suggested defining V as a linear combination of the V_p:

(eq. 4.1-6)

V = Σ_p α_p² · V_p

The MINQUE is reached when the coefficients α_p² coincide with the variance components σ_p², but this is precisely what we are after.
Using a matrix Θ which constitutes an increment of the data Z (i.e. Θ^T · X = 0), we can express A_p as A_p = Θ · S_p · Θ^T and check that the norm V is only involved through:

W = Θ^T · V · Θ

If A and B designate real symmetric n*n matrices, we define the scalar product:

<A, B>_V = Tr(A·V·B·V)        (eq. 4.1-7)

If A and B satisfy the invariance conditions, then we can find respectively S and T, such that:

(eq. 4.1-8)

A = Θ · S · Θ^T        B = Θ · T · Θ^T

Then:

(eq. 4.1-9)

<A, B>_V = Tr(S·W·T·W)
which defines a scalar product on the (n-k)*(n-k) matrix if k designates the number of drift terms.
With these notations, we can reformulate the MINQUE theory:
(eq. 4.1-10)
(eq. 4.1-11)
l The unbiasedness condition leads to:
(eq. 4.1-12)
We introduce the following notations:
(eq. 4.1-13)
then
(eq. 4.1-14)
l The optimality condition:
(eq. 4.1-15)
(eq. 4.1-16)
(eq. 4.1-17)
(eq. 4.1-18)
If H designates the subspace spanned by the H_i, the optimality condition induces that S_p belongs to this space and can be written:

(eq. 4.1-19)

S_p = Σ_i λ_pi · H_i

The unbiasedness conditions can be written:

(eq. 4.1-20)

Σ_i λ_pi · <H_i, H_j> = δ_pj

This system has solutions as soon as the matrix H (H(i,j) = <H_i, H_j>) is non singular.
When the coefficients λ_pi have been calculated, the matrices S_p and A_p are determined and finally the value of b_p* is obtained.
These coefficients must then be replaced in the formulation of the norm V and therefore in W. This leads to new matrices H_i and to new estimates of the coefficients b_p. The procedure is iterated until the estimates of b_p have reached a stable position.
Still there is no guarantee that the estimate satisfies the consistency conditions for K to be a valid generalized covariance.
It can be demonstrated however that the coefficients linked to a single basic structure covariance lead to positive results which produce authorized generalized covariances.
The procedure resembles the one used in the moving neighborhood case. All the possible combinations are tested and the ones which lead to non-authorized generalized covariances are dropped.
In order to select the optimal generalized covariance, a cross-validation test is performed and the model which leads to the standardized error closest to 1 is finally retained.
4.2 Moving Neighborhood

The procedure is quite different depending on whether a Moving or a Unique Neighborhood technique is considered. It consists in finding the optimal generalized covariance, knowing the degree of the drift.
4.2.1 Determination of the Degree of the Drift

The procedure consists in finding the optimal drift considered as the large scale drift with regards to the (half) size of the neighborhood. As a matter of fact, each sample is considered in turn as the seed for the neighborhood search. This neighborhood is then split into two rings: the closest samples to the seed belong to the ring numbered 1, the other samples to ring number 2.
As for the Unique Neighborhood case, the determination is based on a cross-validation procedure. All the data from ring 1 are used to fit the functions corresponding to the different drift hypotheses. Each datum of ring 2 is used to check the quality of the fit. Then the roles of both rings are inverted. The best fit corresponds to the minimal average variance of the cross-validation errors, or, for a more robust solution, to the minimal re-estimation rank. The final drift identification only considers the results obtained when testing data of ring 2 against drift trials fitted on samples from ring 1.
4.2.2 Constitution of ALC-k

We can then consider that the resulting model is constituted of the drift that we have just inferred, completed by a covariance function reduced to a pure nugget effect, the value of which is equal to the variance of the cross-validation errors.
The value of the polynomial at the test data point (denoted by the index "0") is:

(eq. 4.2-1)

m*(x_0) = Σ_α λ_α · Z_α

This establishes that this estimate is a linear combination of the neighboring data. The set of weights is given by the least squares system:

(eq. 4.2-2)

λ = F · (F^T · F)^(−1) · f_0

where F is the matrix of the drift functions at the neighboring data points and f_0 the vector of the drift functions at the target (notations of eq. 4.1-5).
As the residual from the least squares polynomial of order k coincides with a kriging estimation using a pure nugget effect in the scope of the intrinsic random functions of order k, and as the nugget effect is an authorized model for any degree k of the drift, then:

(eq. 4.2-3)

Z(x_0) − Σ_α λ_α · Z_α

is an authorized linear combination of the points {x_α, x_0}, with the corresponding weights {−λ_α, 1}.
We have found a convenient way to generate one set of weights which, given a set of points, constitutes an authorized linear combination of order k (ALC-k).
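On a regular 1-D grid, the simplest example of such weights is the binomial increment of order k+1 already used for the generalized variogram; the sketch below (helper names are ours) checks that it filters any polynomial drift of degree up to k, which is precisely what makes it an ALC-k:

```python
from math import comb

def increment_weights(k):
    """Weights (-1)^q * C(k+1, q), q = 0..k+1, of the increment of order k+1
    applied at the regularly spaced points x = k+1-q."""
    return [(-1) ** q * comb(k + 1, q) for q in range(k + 2)]

def apply_to_monomial(weights, degree):
    """Apply the weights to Z(x) = x**degree at the points x = k+1-q."""
    k1 = len(weights) - 1
    return sum(w * (k1 - q) ** degree for q, w in enumerate(weights))
```

For k = 2 the weights are [1, -3, 3, -1]: they annihilate constants, lines and parabolas, but not cubics.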
4.2.3 Inference of the Covariance

The procedure is a cross-validation technique performed using the two rings of samples as defined when determining the optimal degree of the drift. Each datum of ring 1 is considered together with all the data in ring 2: they constitute a measure. Similarly, each datum of ring 2 is considered with all the data of ring 1. Finally one neighborhood, centered on a seed data point, which contains 2N data points leads to (a maximum of) 2N measures.
The first task is to calculate the weights that must be attached to each point of the measure in order to constitute an authorized linear combination of order k.
Now the order k of the random function is known since it comes from the inference performed in the previous step. The obvious constraint is that the number of points contained in a measure is larger than the number of terms of the drift to be filtered.
A simple way to calculate these weights is obtained through the least squares fitting of polynomials of order k.
We will now apply the famous "Existence and Uniqueness Theorem" to complete the inference of the generalized covariance. It says that for any ALC-k, we can write:

(eq. 4.2-1)

Var[ Σ_α λ_α · Z_α ] = Σ_{α,β} λ_α · λ_β · K_αβ

introducing the generalized covariance K(h), where K_αβ designates the value of this function K for the distance between points x_α and x_β.
We assume that the generalized covariance K(h) that we are looking for is a linear combination of a given set of generic basic structures K_p(h), the coefficients b_p (equivalent to sills) of which still need to be determined:

(eq. 4.2-2)

K(h) = Σ_p b_p · K_p(h)

We use the theorem for each one of the measures previously established, which we denote by using the index "m":
(eq. 4.2-3)

Var[ Σ_α λ_α^m · Z_α ] = Σ_p b_p · Σ_{α,β} λ_α^m · λ_β^m · K_p(x_α − x_β)

If we assume that each generic basic structure K_p(h) is entirely determined with a sill equal to 1, each quantity:

(eq. 4.2-4)

K_p^m = Σ_{α,β} λ_α^m · λ_β^m · K_p(x_α − x_β)

as well as the quantity:

(eq. 4.2-5)

V_m = [ Σ_α λ_α^m · Z_α ]²

are known.
Then the problem is to find the coefficients b_p such that:

(eq. 4.2-6)

V_m ≈ Σ_p b_p · K_p^m

for all the measures generated around each test data point. This is a multivariate linear regression problem that we can solve by minimizing:

(eq. 4.2-7)

Σ_m (1/σ_m²) · [ V_m − Σ_p b_p · K_p^m ]²

The term σ_m² is a normation weight introduced to reduce the influence of ALC-k with a large variance. Unfortunately this variance is equal to:
(eq. 4.2-8)

σ_m² = Var[ Σ_α λ_α^m · Z_α ] = Σ_p b_p · K_p^m

which depends on the precise coefficients that we are looking for. This calls for an iterative procedure.
Moreover we wish to obtain a generalized covariance as a linear combination of the basic structures. As each one of the basic structures individually is authorized, we are in fact looking for a set of weights which are positive or null. We can demonstrate that, in certain circumstances, some coefficients may be slightly negative. But in order to ensure a larger flexibility to this automatic procedure, we simply ignore this possibility. We should however perform the regression under positiveness constraints. Instead we prefer to calculate all the possible regressions with one non-zero coefficient only, then with two non-zero coefficients, and so on. Each one of these regressions is called a subproblem.
As mentioned before, each subproblem is treated using an iterative procedure in order to reach a correct normation weight.
The principle is to initialize all the non-zero coefficients of the subproblem to 1. We can then derive an initial value for the normation weights σ_m². Using these initial weights, we can solve the regression subproblem and derive the new coefficients. We can therefore obtain the new value of the normation weights. This iteration is stopped when the coefficients b_p remain unchanged between two consecutive iterations.
We must still check that the solution is authorized as the resulting coefficients, although stable, may still be negative. The non-authorized solutions are discarded.
Anyhow, it can easily be seen that the monovariate regressions always lead to authorized solutions. Let us assume that the generalized covariance is reduced to one basic structure:

K(h) = b·K_0(h)        (eq. 4.2-9)

The single unknown is the coefficient b, which is obtained by minimizing:

(eq. 4.2-10)

Σ_m (1/σ_m²) · [ V_m − b · K_0^m ]²

The solution is obviously:

(eq. 4.2-11)

b* = [ Σ_m (1/σ_m²) · V_m · K_0^m ] / [ Σ_m (1/σ_m²) · (K_0^m)² ]

As each measure is an ALC-k, the term K_0^m corresponds to the variance of the ALC-k for the normalized structure K_0 and is therefore positive. We can check that b* ≥ 0.
We have obtained several authorized sets of coefficients, each set being the optimal solution of the corresponding subproblem. We must now compare these results. The objective criterion is to compare the ratio between the experimental and the theoretical variance:

(eq. 4.2-12)

[ Σ_m V_m ] / [ Σ_m Σ_p b_p · K_p^m ]

The closer this ratio is to 1, the better the result.
4.3 Case of External Drift(s)

The principle of the external drift technique is to replace the large scale drift function, previously modelled as a low order polynomial, by a combination of a few deterministic functions f_l known over the whole field. However, in practical terms, the first constant monomial universality condition is always kept; some of the other traditional monomials can also be used, so that the drift can now be expanded as follows:

(eq. 4.3-1)

m(x) = Σ_l a_l · f_l(x)

where the f_l denote both standard monomials and external deterministic functions.
When this new decomposition has been stated, the determination of the number of terms in the drift
expansion as well as the corresponding generalized covariance is similar to the procedure explained
in the previous paragraph.
Nevertheless some additional remarks need to be mentioned.
The inference (as well as the kriging procedure) will not work properly as soon as the basic drift functions, evaluated at the data locations, are linearly dependent.
In the case of a standard polynomial drift these cases are directly linked to the geometry of the data
points: a first order IRF will fail if all the neighboring data points are located on a line; a second
order IRF will fail if they belong to any quadric such as a circle, an ellipse or a set of two lines.
In the case of external drift(s), this condition involves the values of these deterministic functions at the data points and is not always easy to check. In particular, we can imagine the case where only the external drift is used and where the function is constant for all the samples of a (moving) neighborhood: this property, combined with the universality condition, will produce an instability in the inference of the model or in its use via the kriging procedure.
Another concern is the degree that we can attribute to the IRF when the drift is represented by one or several external functions. As an illustration, we could imagine using two external functions corresponding respectively to the first and second coordinates of the data. This would turn the target variable into an IRF-1 and would therefore authorize the fitting of generalized covariances such as K(h) = |h|^3. As a general rule, we consider that the presence of an external drift function does not modify the degree of the IRF, which can only be determined using the standard monomials: this is a conservative position, as we recall that a generalized covariance that can be used for an IRF(k) can always be used for an IRF(k+1).
5 Quick Interpolations
This page constitutes an add-on to the User's Guide for Interpolate / Interpolation / Quick Interpolation.
The term Quick Interpolation is used to characterize estimation techniques that do not require any explicit model of spatial structure. They usually correspond to very basic estimation algorithms widely spread in the literature. For simplicity, only the univariate estimation techniques are proposed.
5.1 Inverse Distances
The estimation is a linear combination of the neighboring information:

Z* = Σ_α λ_α Z_α   (eq. 5.1-1)

The weight attached to each datum is inversely proportional to the distance from the data point to the target, raised to a given power p:

λ_α = (1 / d_α^p) / Σ_β (1 / d_β^p)   (eq. 5.1-2)

If the smallest distance is smaller than a given threshold, the value of the corresponding sample is simply copied at the target point.
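The inverse distance scheme above can be sketched in a few lines of Python (an illustrative sketch: the function name, the 2D sample layout and the `min_dist` threshold are assumptions, not Isatis code):

```python
import math

def inverse_distance(samples, target, power=2.0, min_dist=1e-10):
    # Inverse-distance estimate at `target` from `samples` given as
    # ((x, y), value) pairs, following eq. 5.1-1 and eq. 5.1-2.
    weights, values = [], []
    for (x, y), z in samples:
        d = math.hypot(x - target[0], y - target[1])
        if d < min_dist:
            return z  # target coincides with a datum: copy its value
        weights.append(1.0 / d ** power)
        values.append(z)
    total = sum(weights)
    return sum(w * z for w, z in zip(weights, values)) / total
```

Note that with this weighting the estimator is an exact interpolator: as the target approaches a datum, its weight dominates, and the threshold test copies the datum value exactly.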
5.2 Least Square Polynomial Fit
The neighboring data are used in order to fit a polynomial expression of a degree specified by the user.
If f_l(x_α) designates each monomial at the data point x_α, the least square system is written:

Σ_α [ Z_α − Σ_l a_l f_l(x_α) ]²   minimum   (eq. 5.2-1)

which leads to the following linear system:

Σ_α f_l(x_α) f_l'(x_α) a_l' = Σ_α Z_α f_l(x_α)   for each l   (eq. 5.2-2)

When the coefficients a_l of the polynomial expansion are obtained, the estimation is:

Z* = Σ_l a_l f_l(x_0)   (eq. 5.2-3)

where f_l(x_0) designates the value of each monomial at the target location.
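The three steps (build the monomial basis, solve the normal equations, evaluate at the target) can be sketched as follows. This is an illustrative sketch in pure Python, not Isatis code; the 2D monomial ordering and all function names are assumptions.

```python
def monomials(x, y, degree):
    # Values of the 2-D monomials x^i * y^j with i + j <= degree.
    return [x ** i * y ** j for i in range(degree + 1)
            for j in range(degree + 1 - i)]

def gauss_solve(M, b):
    # Dense Gaussian elimination with partial pivoting.
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
            b[r] -= f * b[c]
    x = [0.0] * n
    for c in reversed(range(n)):
        x[c] = (b[c] - sum(M[c][j] * x[j] for j in range(c + 1, n))) / M[c][c]
    return x

def lsq_poly_estimate(samples, target, degree=1):
    # Fit a 2-D polynomial of the given degree to ((x, y), z) samples by
    # least squares (eq. 5.2-1 / 5.2-2) and evaluate it at `target`
    # (eq. 5.2-3). Assumes enough samples for a non-singular system.
    F = [monomials(x, y, degree) for (x, y), _ in samples]
    z = [v for _, v in samples]
    n = len(F[0])
    M = [[sum(F[r][i] * F[r][j] for r in range(len(z))) for j in range(n)]
         for i in range(n)]
    rhs = [sum(F[r][i] * z[r] for r in range(len(z))) for i in range(n)]
    a = gauss_solve(M, rhs)
    f0 = monomials(target[0], target[1], degree)
    return sum(ai * fi for ai, fi in zip(a, f0))
```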
5.3 Moving Projected Slope
The idea is to consider the data samples three by three. Each triplet of samples defines a plane whose value at the target location gives the plane-estimate related to that triplet. The estimated value is obtained by averaging the estimates given by all the possible triplets of the neighborhood. This can also be expressed as a linear combination of the data, but the weights are more difficult to establish.
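The triplet-plane averaging can be sketched directly (an illustrative reconstruction, not Isatis code; triplets whose horizontal positions are collinear define no plane and are skipped here):

```python
from itertools import combinations

def plane_through(p1, p2, p3):
    # Plane z = a*x + b*y + c through three (x, y, z) points,
    # or None if the points are collinear in (x, y).
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    if abs(det) < 1e-12:
        return None
    a = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / det
    b = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / det
    c = z1 - a * x1 - b * y1
    return a, b, c

def moving_projected_slope(samples, target):
    # Average, over every triplet of (x, y, z) samples, of the value at
    # `target` of the plane defined by the triplet.
    estimates = []
    for p1, p2, p3 in combinations(samples, 3):
        plane = plane_through(p1, p2, p3)
        if plane is not None:
            a, b, c = plane
            estimates.append(a * target[0] + b * target[1] + c)
    return sum(estimates) / len(estimates)
```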
5.4 Discrete Splines
The interested reader can find references on this technique in Mallet J.L., Automatic Contouring in Presence of Discontinuities (in Verly et al., eds., Geostatistics for Natural Resources Characterization, Part 2, Reidel, 1984). The method has only been implemented on regular grids.
The global roughness is obtained as a combination of the following constraints, defined in 2D on the interpolated function φ, with Z(x,y) = φ(x,y):
l if we interpolate the top of a geological stratigraphic layer, as such layers are generally nearly horizontal, it is wise to assume that the interpolator is such that:

R_1(φ) = (∂φ/∂x)²  and  R_2(φ) = (∂φ/∂y)²  are minimum   (eq. 5.4-1)

l if we consider the layer as an elastic beam that has been deformed under the action of geological stresses, it is known that shearing stresses in the layer are proportional to second order derivatives. At any point where the shearing stresses exceed a given threshold, rupture will occur. For this reason, it is wise to assume the following condition at any point where no discontinuity exists:

R_3(φ) = (∂²φ/∂x²)²,  R_4(φ) = (∂²φ/∂y²)²  and  R_5(φ) = (∂²φ/∂x∂y)²  are minimum   (eq. 5.4-2)

The global roughness can be established as follows:

R(φ) = ω { R_1(φ) + R_2(φ) } + (1 − ω) { R_3(φ) + R_4(φ) + R_5(φ) }   (eq. 5.4-3)

where ω is a real number belonging to the interval [0, 1].
Practice has shown that the term R_5(φ) has little influence on the result. For this reason, it is often dropped from the global criterion.
Finally, as we are dealing with values located on a regular grid, we replace the partial derivatives by their digital approximations:

∂φ/∂x (i,j) = φ(i+1,j) − φ(i−1,j)
∂φ/∂y (i,j) = φ(i,j+1) − φ(i,j−1)
∂²φ/∂x² (i,j) = φ(i+1,j) − 2 φ(i,j) + φ(i−1,j)
∂²φ/∂y² (i,j) = φ(i,j+1) − 2 φ(i,j) + φ(i,j−1)
∂²φ/∂x∂y (i,j) = φ(i+1,j+1) − φ(i−1,j+1) − φ(i+1,j−1) + φ(i−1,j−1)   (eq. 5.4-4)
Due to this limited neighborhood for the constraints, we can minimize the global roughness in an
iterative process, using the Gauss-Seidel Method.
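A heavily simplified Gauss-Seidel sweep can be sketched as follows. This is an assumption-laden illustration, not the Isatis implementation: it keeps only the first-derivative (membrane) roughness terms, fixes the informed nodes, and clips neighborhoods at the grid edge.

```python
def gauss_seidel_interpolate(grid, n_iter=500):
    # Fill the None cells of a small 2-D grid by repeatedly replacing each
    # free cell with the mean of its four neighbours (Gauss-Seidel sweeps:
    # updated values are reused within the same sweep). Known cells stay
    # fixed; cells on the edge use only the available neighbours.
    ny, nx = len(grid), len(grid[0])
    fixed = [[grid[j][i] is not None for i in range(nx)] for j in range(ny)]
    phi = [[grid[j][i] if fixed[j][i] else 0.0 for i in range(nx)]
           for j in range(ny)]
    for _ in range(n_iter):
        for j in range(ny):
            for i in range(nx):
                if fixed[j][i]:
                    continue
                neigh = [phi[jj][ii]
                         for jj, ii in ((j - 1, i), (j + 1, i),
                                        (j, i - 1), (j, i + 1))
                         if 0 <= jj < ny and 0 <= ii < nx]
                phi[j][i] = sum(neigh) / len(neigh)
    return phi
```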
5.5 Bilinear Grid Interpolation
When the data are defined on a regular grid, we can derive the value of a sample using the bilinear interpolation method as soon as the sample is surrounded by four grid nodes:
(fig. 5.5-1)

Z* = (Δy/dy) [ (Δx/dx) Z(i+1; j+1) + (1 − Δx/dx) Z(i; j+1) ]
   + (1 − Δy/dy) [ (Δx/dx) Z(i+1; j) + (1 − Δx/dx) Z(i; j) ]   (eq. 5.5-1)

where dx and dy denote the grid meshes and Δx, Δy the offsets of the target point from the node (i; j). We can check that the bilinear technique is an exact interpolator since, when Δx = Δy = 0:

Z* = Z(i; j)   (eq. 5.5-2)
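Within one grid cell, the bilinear formula reduces to a weighted blend of the four corner values. A minimal sketch (illustrative only; tx = Δx/dx and ty = Δy/dy are the reduced offsets in [0, 1], and the argument names are assumptions):

```python
def bilinear(z00, z10, z01, z11, tx, ty):
    # Bilinear interpolation inside one grid cell (eq. 5.5-1):
    # z00 = Z(i; j), z10 = Z(i+1; j), z01 = Z(i; j+1), z11 = Z(i+1; j+1).
    return (ty * (tx * z11 + (1.0 - tx) * z01)
            + (1.0 - ty) * (tx * z10 + (1.0 - tx) * z00))
```

When tx = ty = 0 the result is exactly z00, which is the exactness property of eq. 5.5-2.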
6 Grid Transformations
This page constitutes an add-on to the On-Line Help for: Interpolate / Interpolation / Grid Operator
/ Tools / Grid or Line Smoothing.
Except for the Grid filters, located in the Tools / Grid or Line Smoothing window and discussed in
the last section, all the Grid Transformations can be found in Interpolate / Interpolation / Grid
Operator and are performed on two different variable types:
l The real variables (sometimes called colored variables) which correspond to any numeric variable, no matter how many bits the information is coded on,
l The binary variables which correspond to selection variables.
Any binary variable can be considered as a real variable; the converse is obviously wrong.
The specificity of these transformations is the use of two other sets of information:
l The threshold interval: it consists of a pair of values defining a semi-open interval of the type [a,b[. This threshold interval is used as a cutoff in order to transform a real variable into its indicator (which is a binary variable).
l The structuring element: it consists of three parameters defining the extension of the neighborhood, expressed in terms of pixels. Each dimension is entered as the radius of the ball by which
the target pixel is dilated: when the radius is null, the target pixel is considered alone; when the
radius is equal to 1, the neighborhood extension is 3 pixels,...
An additional flag distinguishes the type of the structuring element: cross or block. The following
scheme gives an example of a 2-D structuring element with radius of 1 along X (horizontal) and 2
along Y (vertical). The left side corresponds to a cross type and the right side to a block type.
(fig. 6.0-1)
When considering a target cell located on the edge of the grid, the structuring element is reduced to only those nodes which belong to the field: this produces an edge effect.
6.1 List of the Grid Transformations
The transformations are illustrated on a grid of 100 by 100 pixels, filled with two multigaussian isotropic simulations called the initial simulations. We will also use a binary version of these simulations, by coding as 1 all the positive values and as 0 the negative ones: this will be called the binary simulations. (fig. 6.1-1)
The previous figure presents the two initial simulations on the upper part and the corresponding binary simulations on the bottom part. The initial simulations have been generated (using the Turning Bands method) in order to reproduce:
- a spherical variogram on the left side
- a Gaussian variogram on the right side
Both variograms have the same scale factor (10 pixels) and the same variance.
Each transformation will be presented using one of the previous simulations (either in its initial or
binary form) on the left and the result of the transformation on the right.
In this paragraph, the types of the arguments and the results of the grid transformations are specified using the following coding:
m v binary variable
m w real or colored variable
m s real or colored selection variable
m t threshold
l v = real2binary(w)
converts the real variable w into the binary variable v. The principle is that the output variable is
set to 1 (true) as soon as the corresponding input variable is different from zero.
l w = binary2real(v)
converts the binary variable v into the real variable w.
l v = thresh(w,t)
transforms the real variable w into its indicator v through the cutoff interval t. A sample is set to
1 if it belongs to the cutoff interval and to 0 otherwise.
l v2 = erosion(s,v1)
performs the erosion on the input binary image v1, using the structuring element s, storing the result in the binary image v2. A grain is transformed into a pore if there is at least one pore in its neighborhood, defined by the structuring element. The next figure shows an erosion with a cross structuring element (size 1).
(fig. 6.1-2)
l v2 = dilation(s,v1)
v2 is the binary image resulting from the dilation of the binary image v1 using the structuring element s. A pore is replaced by a grain if there is at least one grain in its neighborhood, defined by the structuring element. The next figure shows a dilation with a cross structuring element (size 1).
(fig. 6.1-3)
l v2 = opening(s,v1)
v2 is the binary image resulting from the opening of the binary image v1 using the structuring element s. It is equivalent to an erosion followed by a dilation, using the same structuring element. The next figure shows an opening with a cross structuring element (size 1).
(fig. 6.1-4)
l v2 = closing(s,v1)
v2 is the binary image resulting from the closing of the binary image v1 using the structuring element s. It is equivalent to a dilation followed by an erosion, using the same structuring element. The next figure shows a closing with a cross structuring element (size 1).
(fig. 6.1-5)
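The four morphological operations above can be sketched in pure Python for a block structuring element (an illustration with assumed names, not Isatis code; the cross variant would simply restrict the neighborhood, and neighborhoods are clipped at the grid edge):

```python
def erode(image, rx, ry):
    # Binary erosion with a block structuring element of radii (rx, ry):
    # a grain (1) becomes a pore (0) if any pore lies in its neighborhood.
    ny, nx = len(image), len(image[0])
    return [[1 if min(image[jj][ii]
                      for jj in range(max(0, j - ry), min(ny, j + ry + 1))
                      for ii in range(max(0, i - rx), min(nx, i + rx + 1))) == 1
             else 0
             for i in range(nx)]
            for j in range(ny)]

def dilate(image, rx, ry):
    # Binary dilation: a pore becomes a grain if any grain lies in its
    # neighborhood. Opening = erode then dilate; closing = the reverse.
    ny, nx = len(image), len(image[0])
    return [[1 if max(image[jj][ii]
                      for jj in range(max(0, j - ry), min(ny, j + ry + 1))
                      for ii in range(max(0, i - rx), min(nx, i + rx + 1))) == 1
             else 0
             for i in range(nx)]
            for j in range(ny)]
```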
l v3 = intersect(v1,v2)
v3 is the binary image resulting from the intersection of two binary images v1 and v2. A pixel is considered as a grain if it belongs to the grain in both initial images.
l v3 = union(v1,v2)
v3 is the binary image resulting from the union of two binary images v1 and v2. A pixel is considered as a grain if it belongs to the grain in at least one of the initial images.
l v2 = negation(v1)
v2 is the binary image where the grains and the pores of the binary image v1 have been inverted.
l w2 = gradx(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the X axis, obtained by comparing pixels at each side of the target node.

w2(ix,iy) = [ w1(ix+1,iy) − w1(ix−1,iy) ] / (2 dx)   (eq. 6.1-1)
l w2 = grad_xm(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the X axis, obtained by comparing the value at the target with the previous adjacent pixel. Practically, on a 2D grid:

w2(ix,iy) = [ w1(ix,iy) − w1(ix−1,iy) ] / dx   (eq. 6.1-2)
l w2 = grad_xp(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the X axis, obtained by comparing the value at the target with the next adjacent pixel. Practically, on a 2D grid:

w2(ix,iy) = [ w1(ix+1,iy) − w1(ix,iy) ] / dx   (eq. 6.1-3)

Note - The rightmost vertical column of the image is arbitrarily set to pore (edge effect).
The next figure represents the gradient along the X axis of the initial (real) simulation.
(fig. 6.1-6)
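The central-difference scheme of (eq. 6.1-1) can be sketched as follows (illustrative only, not Isatis code; here the edge columns, where the neighborhood is incomplete, are returned as None):

```python
def gradx_central(w, dx=1.0):
    # Central-difference derivative along X of a 2-D image stored as a
    # list of rows: (w[j][i+1] - w[j][i-1]) / (2*dx), eq. 6.1-1.
    ny, nx = len(w), len(w[0])
    return [[(w[j][i + 1] - w[j][i - 1]) / (2.0 * dx)
             if 0 < i < nx - 1 else None
             for i in range(nx)]
            for j in range(ny)]
```

The one-sided variants grad_xm and grad_xp follow the same pattern with the forward or backward difference instead.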
l w2 = grady(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Y axis, obtained by comparing pixels at each side of the target node. Practically, on a 2D grid:

w2(ix,iy) = [ w1(ix,iy+1) − w1(ix,iy−1) ] / (2 dy)   (eq. 6.1-4)
l w2 = grad_ym(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Y axis, obtained by comparing the value at the target with the previous adjacent pixel. Practically, on a 2D grid:

w2(ix,iy) = [ w1(ix,iy) − w1(ix,iy−1) ] / dy   (eq. 6.1-5)
l w2 = grad_yp(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Y axis, obtained by comparing the value at the target with the next adjacent pixel. Practically, on a 2D grid:

w2(ix,iy) = [ w1(ix,iy+1) − w1(ix,iy) ] / dy   (eq. 6.1-6)
Note - The upper line of the image is arbitrarily set to pore (edge effect).
The next figure represents the gradient along the Y axis of the initial (real) simulation.
(fig. 6.1-7)
l w2 = gradz(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Z axis, obtained by comparing pixels at each side of the target node. Practically, on a 2D grid:

w2(ix,iz) = [ w1(ix,iz+1) − w1(ix,iz−1) ] / (2 dz)   (eq. 6.1-7)
l w2 = grad_zm(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Z axis, obtained by comparing the value at the target with the previous adjacent pixel. Practically, on a 2D grid:

w2(ix,iz) = [ w1(ix,iz) − w1(ix,iz−1) ] / dz   (eq. 6.1-8)
l w2 = grad_zp(w1)
w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Z axis, obtained by comparing the value at the target with the next adjacent pixel. Practically, on a 2D grid:

w2(ix,iz) = [ w1(ix,iz+1) − w1(ix,iz) ] / dz   (eq. 6.1-9)
l w2 = laplacian(w1)
w2 is the real image which corresponds to the Laplacian of the initial image w1. The next figure represents the Laplacian of the initial (real) simulation.
Practically, on a 2D grid:

w2(ix,iy) = [ w1(ix+1,iy) − 2 w1(ix,iy) + w1(ix−1,iy) ] / dx² + [ w1(ix,iy+1) − 2 w1(ix,iy) + w1(ix,iy−1) ] / dy²   (eq. 6.1-10)
Note - The one pixel thick frame of the image is arbitrarily set to pore (edge effect).
(fig. 6.1-8)
l w4 = divergence(w1,w2,w3)
w4 is the real image which corresponds to the divergence of a 3-D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:

w4(ix,iy,iz) = [ w1(ix+1,iy,iz) − w1(ix,iy,iz) ] / dx + [ w2(ix,iy+1,iz) − w2(ix,iy,iz) ] / dy + [ w3(ix,iy,iz+1) − w3(ix,iy,iz) ] / dz   (eq. 6.1-11)
l w4 = rotx(w1,w2,w3)
w4 is the real image which corresponds to the component along X of the rotational of a 3D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:
(eq. 6.1-12)
l w4 = roty(w1,w2,w3)
w4 is the real image which corresponds to the component along Y of the rotational of a 3D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:
(eq. 6.1-13)
l w4 = rotz(w1,w2,w3)
w4 is the real image which corresponds to the component along Z of the rotational of a 3D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:
(eq. 6.1-14)
l w2 = gradient(w1)
w2 is the real image containing the modulus of the 2D gradient of w1. Practically, on a 2D grid:

w2(ix,iy) = sqrt( ( [ w1(ix+1,iy) − w1(ix,iy) ] / dx )² + ( [ w1(ix,iy+1) − w1(ix,iy) ] / dy )² )   (eq. 6.1-15)
(fig. 6.1-9)
l w2 = azimuth2d(w1)
w2 is the real image containing the azimuth (in radians) of the 2D gradient of w1. Practically, on a 2D grid:

w2(ix,iy) = atan( [ w1(ix,iy+1) − w1(ix,iy) ] / dy , [ w1(ix+1,iy) − w1(ix,iy) ] / dx )   (eq. 6.1-16)
(fig. 6.1-10)
l w = labelling_cross(v)
w is the real image which contains the ranks (or labels) attached to each grain component that
can be distinguished in the binary image v. A grain component is the union of all the grain pixels
that can be connected using a grain path. Here two grains are connected as soon as they share a
common face. The labels are strictly positive quantities such that two pixels belong to the same
grain component if and only if they have the same label. The grain component labels are ordered
so that the largest component receives the label 1, the second largest the label 2, and so on. The pore is given the label 0. In the following figure, only the 14 largest components are represented separately (using different colors); all the smaller ones are displayed using the same pale grey color.
A printout of the grain component dimensions is provided: for each component dimension,
sorted in decreasing order, the program gives the count of occurrences and the total count of
pixels involved.
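The labelling logic can be sketched with a breadth-first flood fill (an illustrative reconstruction, not Isatis code; face connectivity as in labelling_cross, labels ordered by decreasing component size as described above):

```python
from collections import deque

def label_cross(image):
    # Label the grain (1) components of a binary image using face (4-)
    # connectivity: labels are ordered by decreasing size, 1 for the
    # largest component; the pore keeps the label 0.
    ny, nx = len(image), len(image[0])
    comp = [[0] * nx for _ in range(ny)]
    components = []
    for j in range(ny):
        for i in range(nx):
            if image[j][i] == 1 and comp[j][i] == 0:
                # breadth-first flood fill of one grain component
                queue, cells = deque([(j, i)]), []
                comp[j][i] = -1  # visited marker
                while queue:
                    cj, ci = queue.popleft()
                    cells.append((cj, ci))
                    for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nj, ni = cj + dj, ci + di
                        if (0 <= nj < ny and 0 <= ni < nx
                                and image[nj][ni] == 1 and comp[nj][ni] == 0):
                            comp[nj][ni] = -1
                            queue.append((nj, ni))
                components.append(cells)
    components.sort(key=len, reverse=True)
    for rank, cells in enumerate(components, start=1):
        for j, i in cells:
            comp[j][i] = rank
    return comp
```

The block variant would simply add the four diagonal offsets to the connectivity list.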
(fig. 6.1-11)
l w = labelcond_cross(v1,v2)
This function is similar to the labelling_cross function, but restricted to the selection v2.
l w = labelling_block(v)
This function is similar to the labelling_cross function, but, this time, two grains are connected
as soon as they share a common face or vertex. Therefore the connectivity probability is larger
here which leads to fewer but larger components.
(fig. 6.1-12)
l w = labelcond_block(v1,v2)
This function is similar to the labelling_block function, but restricted to the selection v2.
l w2 = moving_average(s,w1)
w2 is a real image where each pixel is obtained as the average of the real image w1 performed over a moving neighborhood centered on the target pixel, whose dimensions are given by the structuring element s. The next figure represents the moving average transformation applied to the initial simulation using the smallest cross structuring element.
(fig. 6.1-13)
l w2 = moving_average_cond(s,w,wf)
Performs the same operation as the moving_average function, but the neighborhood of a target cell is reduced to the peripheral cells (included in the structuring element s) where the secondary variable wf has the same value as in the target cell.
l w2 = moving_median(s,w1)
w2 is a real image where each pixel is obtained as the median of the real image w1 performed over a moving neighborhood centered on the target pixel, whose dimensions are given by the structuring element s. The next figure represents the moving median transformation applied to the initial simulation using the reference structuring element.
(fig. 6.1-14)
l w2 = moving_median_cond(s,w,wf)
Performs the same operation as the moving_median function, but the neighborhood of a target cell is reduced to the peripheral cells (included in the structuring element s) where the secondary variable wf has the same value as in the target cell.
l w2 = fill_average(w1)
When w1 is an image containing several holes (non-informed areas), you may wish to complete the grid before performing other operations. A convenient solution is to use the fill_average option, which will replace any unknown grid value by the average of the first non-empty rings located around the target node. (fig. 6.1-15)
l v=imagestat(v)
This operation provides an easy way to get basic statistics on a binary image. The output binary
image is equal to the input binary image.
l w2 = shadowing(w1,a,b)
The resulting variable w2 corresponds to the image of the input variable w1, considered as a relief and represented with the shadow created by a light source. The source location is characterized by its longitude (a) and its latitude (b). The longitude angle (a) is counted from the north, whereas the latitude is positive when located above the ground level. The following image shows the shadowed image with a light source located at 10 degrees longitude and 20 degrees latitude.
(fig. 6.1-16)
l w2 = integrate(w1)
This operation considers only the active grid nodes where the input variable w1 is defined. The output variable returns the rank of the active node. Moreover, some statistics are printed out (in the message area), where the count of active nodes is given together with the cumulated and average quantity of the variable or of the positive variable (where only its positive values are taken into account). The following picture shows the resulting variable represented with a grey color map (black for low values and white for large values).
(fig. 6.1-17)
l v = dynprogx(w1)
This function creates a binary image (selection) where one pixel is valid per YOZ plane, which corresponds to the continuous path of the maximum value between the first and last YOZ planes of the 3-D block. In the next figure, remember that the large values correspond to the lighter color of the left side image.
(fig. 6.1-18)
l v = dynprogy(w1)
This function creates a binary image (selection) where one pixel is valid per XOZ plane, which corresponds to the continuous path of the maximum value between the first and last XOZ planes of the 3-D block. In the next figure, remember that the large values correspond to the lighter color of the left side image.
(fig. 6.1-19)
l v = dynprogz(w1)
This function creates a binary image (selection) where one pixel is valid per XOY plane, which corresponds to the continuous path of the maximum value between the first and last XOY planes of the 3-D block. In the next figure, remember that the large values correspond to the lighter color of the left side image.
l v=maxplanex(w)
This function creates a binary image (selection) where one pixel is valid per YOZ plane, which
corresponds to the largest value of the function in this plane. In the next figure, remember that
the large values correspond to the lighter color of the left side image.
(fig. 6.1-20)
l v=maxplaney(w)
This function creates a binary image (selection) where one pixel is valid per XOZ plane, which
corresponds to the largest value of the function in this plane. In the next figure, remember that
the large values correspond to the lighter color of the left side image.
(fig. 6.1-21)
l v=maxplanez(w)
This function creates a binary image (selection) where one pixel is valid per XOY plane, which
corresponds to the largest value of the function in this plane. In the next figure, remember that
the large values correspond to the lighter color of the left side image.
6.2 Filters
A last type of transformation corresponds to the filters. They can be used on regularly spaced data of any dimension: therefore, they can be applied on grids or on points sampled along a line (in this case, we consider that the samples are regularly spaced, without paying attention to their actual coordinates). The three elementary filters provided in Isatis are described hereafter.
6.2.1 Low pass filter
The data set is considered as the product of several one-dimensional regular sampling patterns: in the case of a 3D grid, for example, the process will be performed as the succession of a first elementary filtering step along X, followed by one step along Y and a last step along Z.
Moreover, this filter can be iterated several times over the whole data set in each direction.
The mathematical formula corresponding to the low pass filter can be written in 1D, where Z(i−1), Z(i) and Z(i+1) represent three consecutive nodes:

Z(i) ← Z(i) + λ [ (Z(i+1) + Z(i−1)) / 2 − Z(i) ]   (eq. 6.2-1)

The parameter λ is automatically set to 0.5. Nevertheless, we can ask to perform a two-pass procedure where a second pass is performed with λ = −0.5.
The next picture shows an illustration of the low-pass filtering procedure performed on a 2D grid of 100 x 100 nodes. The top figure presents a simulated variable (spherical variogram with a range of 10) after thresholding (positive values in white and negative values in black). The bottom figure shows the variable after 50 iterations of the two-pass procedure. (fig. 6.2-1)
Note - The filter is applied in a given direction at a node only if its neighborhood is complete; otherwise the initial value is left unchanged. Therefore all the nodes can be treated and the output grid is complete.
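The elementary 1D low-pass step can be sketched as follows (an illustrative sketch, not Isatis code: the symbol for the filter parameter and the function name are assumptions; end nodes, whose neighborhood is incomplete, are left unchanged as stated in the note above):

```python
def low_pass_1d(z, t=0.5, n_iter=1):
    # One-dimensional low-pass step, iterated n_iter times:
    # Z(i) <- Z(i) + t * ((Z(i-1) + Z(i+1)) / 2 - Z(i)).
    z = list(z)
    for _ in range(n_iter):
        prev = z[:]
        for i in range(1, len(z) - 1):
            z[i] = prev[i] + t * ((prev[i - 1] + prev[i + 1]) / 2.0 - prev[i])
    return z
```

The two-pass procedure would chain a call with t = 0.5 and a call with t = −0.5.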
6.2.2 The median filtering
Once again the data set is considered as the product of several one-dimensional regular sampling patterns, and the filter can be iterated several times over the whole data set in each direction. The principle is to select each node in turn along the 1D row and to replace its value by the median value over its neighborhood (whose extension radius "n" is given by the user) according to the formula:

Z(i) ← Median [ Z(i−n), ..., Z(i), ..., Z(i+n) ]   (eq. 6.2-1)
Note - The final size of the neighborhood is equal to 2n+1 nodes. The neighborhood is truncated
when it intersects the edge so that all the nodes can be treated and the output grid is complete.
The next picture shows an illustration of the median filtering procedure performed on a 2D grid of
100 X 100 nodes. The top figure presents a simulated variable (spherical variogram with a range of
10) after thresholding (positive values in white and negative values in black). The bottom figure
shows the variable after 5 iterations of the median filter where n = 2. (fig. 6.2-1)
An additional feature consists in constraining the first two transformations using an auxiliary cutoff variable: the deformation for each pixel (distance between the initial value and the modified value) may not exceed an amplitude given by the cutoff variable. Moreover, when the cutoff variable lies below a given threshold, no deformation is performed. This allows the user to filter the result of a kriging estimation performed on a grid, with respect to the estimation standard deviation.
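The 1D median step can be sketched as follows (illustrative only, not Isatis code; as in the note above, the window is truncated at the edges so that every node is treated):

```python
import statistics

def median_filter_1d(z, n=1, n_iter=1):
    # One-dimensional median filter: each node is replaced by the median
    # over a window of radius n (2n+1 nodes), truncated at the edges.
    z = list(z)
    for _ in range(n_iter):
        prev = z[:]
        z = [statistics.median(prev[max(0, i - n):i + n + 1])
             for i in range(len(prev))]
    return z
```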
6.2.3 The incrementing procedure
This filter is particular as it allows the calculation of the gradient (in a given direction) by a standard finite difference procedure, according to the formula:

Z(i) ← [ Z(i) − Z(i−1) ] / δ   (eq. 6.2-1)

where δ stands for the regular distance between two consecutive nodes.
Note - This procedure induces an edge effect, as the last value along each 1D row (in the direction of calculation) is not processed and is left undefined in the output grid. The procedure can be iterated to provide higher order incrementations.
The next picture shows an illustration of the incrementing procedure performed on a 2D grid of 100 x 100 nodes. The top figure presents a simulated variable (spherical variogram with a range of 10) after thresholding (positive values in white and negative values in black). The bottom left figure represents the gradient (iterated once) along the X direction, whereas the bottom right figure is the gradient along the Y direction.
(fig. 6.2-1)
7 Linear Estimation
This page constitutes an add-on to the User's Guide for Interpolate / Estimation / (Co-)Kriging (unless specified otherwise).
This technical reference presents the outline of the main kriging applications. In fact, by the generic
term "kriging", we designate all the procedures based on the Minimum Variance Unbiased Linear
Estimator, for one or several variables. The following cases are presented:
- ordinary kriging,
- simple kriging,
- kriging in the IRF-k case,
- drift estimation,
- estimation of a drift coefficient,
- kriging with external drift,
- the unique neighborhood case,
- filtering model components,
- factorial kriging,
- block kriging,
- polygon kriging,
- gradient estimation,
- kriging several variables linked through partial derivatives,
- kriging with inequalities,
- kriging with measurement error,
- lognormal kriging,
- cokriging,
- extended collocated cokriging.
98 Linear Estimation
7.1 Ordinary Kriging (Intrinsic Case)
We designate by Z the random variable. We define the kriging estimate, denoted Z*, as a linear combination of the neighboring information $Z_\alpha$, introducing the corresponding weights $\lambda_\alpha$:

$Z^* = \sum_\alpha \lambda_\alpha Z_\alpha$   (eq. 7.1-1)

For better legibility, we will omit the summation symbol where possible, using the Einstein notation. We consider the estimation error, i.e. the difference between the estimate and the true value: $Z^* - Z_0$.
We impose the estimator at the target (denoted "0") to be:
- unbiased:

$E[Z^* - Z_0] = 0$   (eq. 7.1-2)

(which assumes that the expectation of the linear combination exists);
- of minimum variance (optimal):

$\mathrm{Var}[Z^* - Z_0] = \text{minimum}$   (eq. 7.1-3)

(which assumes that the variance of the linear combination exists).
We will develop the equations assuming that the random variable Z has a constant but unknown mean value:

$E[Z] = m$   (eq. 7.1-4)

Then equation (eq. 7.1-2) can be expanded:

$E[Z^* - Z_0] = m \left( \sum_\alpha \lambda_\alpha - 1 \right) = 0 \;\Rightarrow\; \sum_\alpha \lambda_\alpha = 1$   (eq. 7.1-5)

This is usually called "the Universality Condition".
Introducing the covariance $C_{\alpha\beta} = \mathrm{Cov}(Z_\alpha, Z_\beta)$, the equation (eq. 7.1-3) is expanded:
Technical References 99
$\sigma^2 = \mathrm{Var}[Z^* - Z_0] = \lambda_\alpha \lambda_\beta C_{\alpha\beta} - 2 \lambda_\alpha C_{\alpha 0} + C_{00}$   (eq. 7.1-6)

which should be minimum under the constraint given in (eq. 7.1-5).
Introducing the Lagrange multiplier $\mu$, we must then minimize the quantity:

$\varphi = \lambda_\alpha \lambda_\beta C_{\alpha\beta} - 2 \lambda_\alpha C_{\alpha 0} + C_{00} + 2\mu \left( \sum_\alpha \lambda_\alpha - 1 \right)$   (eq. 7.1-7)

against the unknowns $\lambda_\alpha$ and $\mu$:

$\dfrac{\partial \varphi}{\partial \lambda_\alpha} = 0 \;\Rightarrow\; \lambda_\beta C_{\alpha\beta} + \mu = C_{\alpha 0} \qquad \dfrac{\partial \varphi}{\partial \mu} = 0 \;\Rightarrow\; \sum_\alpha \lambda_\alpha = 1$   (eq. 7.1-8)

We finally obtain the (Ordinary) kriging system:

$\begin{cases} \lambda_\beta C_{\alpha\beta} + \mu = C_{\alpha 0} \\ \sum_\beta \lambda_\beta = 1 \end{cases}$ with $\sigma^2 = C_{00} - \lambda_\alpha C_{\alpha 0} - \mu$   (eq. 7.1-9)

Using matrix notation:

$\begin{bmatrix} C_{\alpha\beta} & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \lambda_\beta \\ \mu \end{bmatrix} = \begin{bmatrix} C_{\alpha 0} \\ 1 \end{bmatrix}$ and $\sigma^2 = C_{00} - \begin{bmatrix} \lambda_\alpha \\ \mu \end{bmatrix}^T \begin{bmatrix} C_{\alpha 0} \\ 1 \end{bmatrix}$   (eq. 7.1-10)

In the intrinsic case, we know that we can use the variogram $\gamma$ instead of the covariance C, and that:

$\gamma(h) = C(0) - C(h)$   (eq. 7.1-11)

We can then rewrite the kriging system:

$\begin{cases} \lambda_\beta \gamma_{\alpha\beta} + \mu = \gamma_{\alpha 0} \\ \sum_\beta \lambda_\beta = 1 \end{cases}$ with $\sigma^2 = \lambda_\alpha \gamma_{\alpha 0} + \mu - \gamma_{00}$   (eq. 7.1-12)

In the intrinsic case, there are therefore two ways of expressing the kriging equations: either in covariance terms or in variogram terms. In view of the numerical solution of these equations, the formulation in covariance terms should be preferred, because it endows the kriging matrix with the virtues of positive definiteness and makes the practical inversion easier.
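The ordinary kriging system (eq. 7.1-9 / 7.1-10) can be sketched numerically as follows. This is a minimal illustrative example (not the Isatis implementation); the 1D data, the spherical model parameters and all names are assumptions chosen for the demonstration.

```python
import numpy as np

def spherical_cov(h, sill=1.0, a=10.0):
    """Spherical covariance: sill*(1 - 1.5 h/a + 0.5 (h/a)^3) for |h| < a, else 0."""
    h = np.abs(h)
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

def ordinary_kriging(x_data, z_data, x0):
    n = len(x_data)
    # Left-hand side (eq. 7.1-10): data covariances bordered by the
    # universality condition (row/column of ones, zero corner).
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = spherical_cov(x_data[:, None] - x_data[None, :])
    A[:n, n] = A[n, :n] = 1.0
    # Right-hand side: data-to-target covariances and the constant 1.
    b = np.append(spherical_cov(x_data - x0), 1.0)
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]
    z_star = lam @ z_data
    # Kriging variance (eq. 7.1-9): sigma^2 = C00 - lambda'C_a0 - mu.
    var = float(spherical_cov(0.0)) - lam @ b[:n] - mu
    return z_star, var, lam

x_data = np.array([0.0, 4.0, 9.0])
z_data = np.array([1.0, 3.0, 2.0])
# Target placed on a datum: kriging is an exact interpolator there.
z_star, var, lam = ordinary_kriging(x_data, z_data, 4.0)
```

At a data location the solver returns the datum itself with zero variance, and the weights always honor the universality condition.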
7.2 Simple Kriging (Stationary Case with Known Mean)
We assume that the expectation of the random variable is constant and equal to "m". There is no further need for a Universality Condition, and the (Simple) kriging system is:

$\lambda_\beta C_{\alpha\beta} = C_{\alpha 0}$ with $\sigma^2 = C_{00} - \lambda_\alpha C_{\alpha 0}$   (eq. 7.2-1)

In matrix notation:

$\left[ C_{\alpha\beta} \right] \left[ \lambda_\beta \right] = \left[ C_{\alpha 0} \right]$ and $\sigma^2 = C_{00} - \left[ \lambda_\alpha \right]^T \left[ C_{\alpha 0} \right]$   (eq. 7.2-2)

In this particular case of stationarity, the estimator is given by:

$Z^* = \sum_\alpha \lambda_\alpha Z_\alpha + m \left( 1 - \sum_\alpha \lambda_\alpha \right)$   (eq. 7.2-3)
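A minimal sketch of eq. 7.2-1 and 7.2-3 (illustrative, with an assumed exponential covariance): since the weights are not constrained to sum to one, the known mean m takes over as the target moves away from the data.

```python
import numpy as np

def expo_cov(h, sill=1.0, scale=5.0):
    """Exponential covariance sill * exp(-|h|/scale)."""
    return sill * np.exp(-np.abs(h) / scale)

def simple_kriging(x_data, z_data, x0, mean):
    C = expo_cov(x_data[:, None] - x_data[None, :])
    c0 = expo_cov(x_data - x0)
    lam = np.linalg.solve(C, c0)        # eq. 7.2-1: no constraint, no Lagrange term
    z_star = lam @ z_data + mean * (1.0 - lam.sum())   # eq. 7.2-3
    var = expo_cov(0.0) - lam @ c0      # sigma^2 = C00 - lambda'C_a0
    return z_star, var

x = np.array([0.0, 3.0, 7.0])
z = np.array([2.0, 4.0, 3.0])
near, _ = simple_kriging(x, z, 3.0, mean=10.0)     # on a datum: exact
far, far_var = simple_kriging(x, z, 500.0, mean=10.0)  # far away: mean, full sill
```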
7.3 Kriging of One Variable in the IRF-k Case
We now briefly review the kriging system in the non-stationary case. We assume that, at least locally, the expectation of the variable is no longer constant, but can be modeled as a slowly varying function described by a polynomial:

$E[Z(x)] = \sum_l a_l f^l(x)$   (eq. 7.3-1)

where the $f^l(x)$ are the basic monomials and the $a_l$ are the unknown coefficients.
Before applying the kriging conditions, we must make sure that the mean and the variance of the kriging error exist. We need this error to be an authorized linear combination (ALC) for the degree k of the polynomial to be filtered:

$\sum_\alpha \lambda_\alpha Z_\alpha$ is an ALC-k if $\sum_\alpha \lambda_\alpha f^l_\alpha = 0 \quad \forall\, l \le k$   (eq. 7.3-2)

If we now consider the kriging error, the combination consists of the neighboring points $Z_\alpha$ with the corresponding weights $\lambda_\alpha$, and the target point $Z_0$ with the weight -1. Then (eq. 7.3-2) can be written:

$\sum_\alpha \lambda_\alpha f^l_\alpha - f^l_0 = 0 \quad \forall\, l \le k$   (eq. 7.3-3)

These are called the "Existence Equations".
If we now consider the unbiasedness condition (eq. 7.1-2):

$E[Z^* - Z_0] = \sum_l a_l \left( \sum_\alpha \lambda_\alpha f^l_\alpha - f^l_0 \right) = 0$   (eq. 7.3-4)

Due to (eq. 7.3-3), the expectation of the estimation error is always zero.
The optimality condition (eq. 7.1-3) leads to:

$\sigma^2 = \mathrm{Var}[Z^* - Z_0] = \lambda_\alpha \lambda_\beta K_{\alpha\beta} - 2 \lambda_\alpha K_{\alpha 0} + K_{00}$   (eq. 7.3-5)

where K(h) is the new structural tool called the "generalized covariance". This variance must be minimized under the existence equations. Introducing as many Lagrange parameters $\mu_l$ as there are existence equations, we must then minimize the quantity:

$\varphi = \lambda_\alpha \lambda_\beta K_{\alpha\beta} - 2 \lambda_\alpha K_{\alpha 0} + K_{00} + 2 \sum_l \mu_l \left( \lambda_\alpha f^l_\alpha - f^l_0 \right)$   (eq. 7.3-6)

against the unknowns $\lambda_\alpha$ and $\mu_l$:
$\dfrac{\partial \varphi}{\partial \lambda_\alpha} = 0 \;\Rightarrow\; \lambda_\beta K_{\alpha\beta} + \mu_l f^l_\alpha = K_{\alpha 0} \qquad \dfrac{\partial \varphi}{\partial \mu_l} = 0 \;\Rightarrow\; \lambda_\alpha f^l_\alpha = f^l_0 \quad \forall\, l \le k$   (eq. 7.3-7)

We finally obtain the (IRF-k) kriging system:

$\begin{cases} \lambda_\beta K_{\alpha\beta} + \mu_l f^l_\alpha = K_{\alpha 0} \\ \lambda_\alpha f^l_\alpha = f^l_0 \quad \forall\, l \le k \end{cases}$ with $\sigma^2 = K_{00} - \lambda_\alpha K_{\alpha 0} - \mu_l f^l_0$   (eq. 7.3-8)

In matrix notation:

$\begin{bmatrix} K_{\alpha\beta} & f^l_\alpha \\ (f^l_\beta)^T & 0 \end{bmatrix} \begin{bmatrix} \lambda_\beta \\ \mu_l \end{bmatrix} = \begin{bmatrix} K_{\alpha 0} \\ f^l_0 \end{bmatrix}$ and $\sigma^2 = K_{00} - \begin{bmatrix} \lambda_\alpha \\ \mu_l \end{bmatrix}^T \begin{bmatrix} K_{\alpha 0} \\ f^l_0 \end{bmatrix}$   (eq. 7.3-9)
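A minimal sketch of the bordered system of eq. 7.3-9, with assumed illustrative ingredients: a linear drift (monomials $f^0 = 1$, $f^1 = x$) and the generalized covariance $K(h) = -|h|$ (equivalent to a linear variogram). When the data follow an exact linear trend, the drift conditions alone force the estimator to reproduce that trend at any target, including in extrapolation.

```python
import numpy as np

def gen_cov(h):
    """Generalized covariance K(h) = -|h| (linear variogram model)."""
    return -np.abs(h)

def irfk_kriging(x_data, z_data, x0):
    n = len(x_data)
    F = np.column_stack([np.ones(n), x_data])   # drift monomials at the data
    f0 = np.array([1.0, x0])                    # drift monomials at the target
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = gen_cov(x_data[:, None] - x_data[None, :])
    A[:n, n:] = F
    A[n:, :n] = F.T
    b = np.concatenate([gen_cov(x_data - x0), f0])   # right-hand side (eq. 7.3-9)
    sol = np.linalg.solve(A, b)
    return sol[:n] @ z_data

x = np.array([0.0, 2.0, 5.0, 9.0])
z = 1.0 + 0.5 * x                     # data on an exact linear drift
z_star = irfk_kriging(x, z, 20.0)     # extrapolated target
```

Since the existence equations impose $\sum \lambda_\alpha = 1$ and $\sum \lambda_\alpha x_\alpha = x_0$, the estimate at $x_0 = 20$ is $1 + 0.5 \times 20 = 11$ whatever the covariance details.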
7.4 Drift Estimation
Let us rewrite the usual universal kriging dichotomy:

$Z(x) = Y(x) + m(x)$   (eq. 7.4-1)

where $m(x) = \sum_l a_l f^l(x)$ is the drift.
We wish to estimate the value of the drift at the target point by kriging:

$m^*(x_0) = \sum_\alpha \lambda_\alpha Z_\alpha$   (eq. 7.4-2)

The unbiasedness condition implies that:

$E[m^* - m_0] = \sum_l a_l \left( \sum_\alpha \lambda_\alpha f^l_\alpha - f^l_0 \right) = 0$   (eq. 7.4-3)

therefore $\sum_\alpha \lambda_\alpha f^l_\alpha = f^l_0$.
The optimality condition leads to:

$\mathrm{Var}[m^* - m_0] = \lambda_\alpha \lambda_\beta K_{\alpha\beta} = \text{minimum}$   (eq. 7.4-4)

Finally the kriging system is derived:

$\begin{cases} \lambda_\beta K_{\alpha\beta} + \mu_l f^l_\alpha = 0 \\ \lambda_\alpha f^l_\alpha = f^l_0 \end{cases}$   (eq. 7.4-5)

In matrix notation:

$\begin{bmatrix} K_{\alpha\beta} & f^l_\alpha \\ (f^l_\beta)^T & 0 \end{bmatrix} \begin{bmatrix} \lambda_\beta \\ \mu_l \end{bmatrix} = \begin{bmatrix} 0 \\ f^l_0 \end{bmatrix}$   (eq. 7.4-6)

and
$\sigma^2 = -\begin{bmatrix} \lambda_\alpha \\ \mu_l \end{bmatrix}^T \begin{bmatrix} 0 \\ f^l_0 \end{bmatrix} = -\mu_l f^l_0$   (eq. 7.4-7)
7.5 Estimation of a Drift Coefficient
Let us rewrite the usual universal kriging dichotomy:

$Z(x) = Y(x) + m(x)$   (eq. 7.5-1)

where $m(x) = \sum_l a_l f^l(x)$ is the drift.
We wish to estimate the value of one of the drift coefficients (say the one corresponding to the basic drift function number $l_0$) at the target point by kriging:

$a^*_{l_0} = \sum_\alpha \lambda_\alpha Z_\alpha$   (eq. 7.5-2)

The unbiasedness condition implies that:

$E[a^*_{l_0} - a_{l_0}] = \sum_l a_l \sum_\alpha \lambda_\alpha f^l_\alpha - a_{l_0} = 0$   (eq. 7.5-3)

This leads to the following conditions on the weights:

$\sum_\alpha \lambda_\alpha f^l_\alpha = 0$ for $l \ne l_0$ and $\sum_\alpha \lambda_\alpha f^{l_0}_\alpha = 1$   (eq. 7.5-4)

The optimality condition leads to:

$\mathrm{Var}[a^*_{l_0} - a_{l_0}] = \lambda_\alpha \lambda_\beta K_{\alpha\beta} = \text{minimum}$   (eq. 7.5-5)

Finally the kriging system is derived:

$\begin{cases} \lambda_\beta K_{\alpha\beta} + \mu_l f^l_\alpha = 0 \\ \lambda_\alpha f^l_\alpha = 0 \quad (l \ne l_0) \\ \lambda_\alpha f^{l_0}_\alpha = 1 \end{cases}$   (eq. 7.5-6)
7.6 Kriging with External Drift
We recall that when kriging the variable in the scope of the IRF-k, the expectation of Z(x) is expanded using a basis of polynomials: $E[Z(x)] = \sum_l a_l f^l(x)$, with unknown coefficients $a_l$.
Here, the basic hypothesis is that the expectation of the variable can be written:

$E[Z(x)] = a_0 + a_1 S(x)$   (eq. 7.6-1)

where S(x) is a known variable (the background) and where $a_0$ and $a_1$ are unknown.
Once again, before applying the kriging conditions, we must make sure that the mean and the variance of the kriging error exist. We need this error to be a linear combination authorized for the drift to be filtered. This leads to the equations:

$\sum_\alpha \lambda_\alpha = 1 \qquad \sum_\alpha \lambda_\alpha S_\alpha = S_0$   (eq. 7.6-2)

These existence equations ensure the unbiasedness of the system.
The optimality constraint leads to the traditional equations:

$\sigma^2 = \mathrm{Var}[Z^* - Z_0] = \lambda_\alpha \lambda_\beta K_{\alpha\beta} - 2 \lambda_\alpha K_{\alpha 0} + K_{00} = \text{minimum}$   (eq. 7.6-3)

where K(h) is then a generalized covariance.
Introducing the Lagrange parameters $\mu_0$ and $\mu_1$, we must now minimize:

$\varphi = \lambda_\alpha \lambda_\beta K_{\alpha\beta} - 2 \lambda_\alpha K_{\alpha 0} + K_{00} + 2 \mu_0 \left( \sum_\alpha \lambda_\alpha - 1 \right) + 2 \mu_1 \left( \sum_\alpha \lambda_\alpha S_\alpha - S_0 \right)$   (eq. 7.6-4)

against the unknowns $\lambda_\alpha$, $\mu_0$ and $\mu_1$:
$\dfrac{\partial \varphi}{\partial \lambda_\alpha} = 0 \;\Rightarrow\; \lambda_\beta K_{\alpha\beta} + \mu_0 + \mu_1 S_\alpha = K_{\alpha 0} \qquad \dfrac{\partial \varphi}{\partial \mu_0} = 0 \;\Rightarrow\; \sum_\alpha \lambda_\alpha = 1 \qquad \dfrac{\partial \varphi}{\partial \mu_1} = 0 \;\Rightarrow\; \sum_\alpha \lambda_\alpha S_\alpha = S_0$   (eq. 7.6-5)

We finally obtain the kriging system with external drift:

$\begin{cases} \lambda_\beta K_{\alpha\beta} + \mu_0 + \mu_1 S_\alpha = K_{\alpha 0} \\ \sum_\beta \lambda_\beta = 1 \\ \sum_\beta \lambda_\beta S_\beta = S_0 \end{cases}$   (eq. 7.6-6)

In matrix notation:

$\begin{bmatrix} K_{\alpha\beta} & 1 & S_\alpha \\ 1 & 0 & 0 \\ S_\beta & 0 & 0 \end{bmatrix} \begin{bmatrix} \lambda_\beta \\ \mu_0 \\ \mu_1 \end{bmatrix} = \begin{bmatrix} K_{\alpha 0} \\ 1 \\ S_0 \end{bmatrix}$   (eq. 7.6-7)

and

$\sigma^2 = K_{00} - \begin{bmatrix} \lambda_\alpha \\ \mu_0 \\ \mu_1 \end{bmatrix}^T \begin{bmatrix} K_{\alpha 0} \\ 1 \\ S_0 \end{bmatrix}$   (eq. 7.6-8)
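A minimal sketch of the system of eq. 7.6-7, with assumed toy data and $K(h) = -|h|$ as generalized covariance: the left-hand matrix is bordered by a column of ones and a column of background values S. When the data are exactly affine in S, the two existence conditions force the estimator to honor that relation.

```python
import numpy as np

def gen_cov(h):
    return -np.abs(h)    # generalized covariance (linear variogram model)

def kriging_external_drift(x_data, z_data, s_data, x0, s0):
    n = len(x_data)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = gen_cov(x_data[:, None] - x_data[None, :])
    A[:n, n] = A[n, :n] = 1.0              # sum of weights = 1
    A[:n, n + 1] = A[n + 1, :n] = s_data   # sum lambda*S = S0
    b = np.concatenate([gen_cov(x_data - x0), [1.0, s0]])
    sol = np.linalg.solve(A, b)
    return sol[:n] @ z_data

x = np.array([0.0, 1.0, 3.0, 6.0])
s = np.array([2.0, 5.0, 4.0, 8.0])   # known background S at the data points
z = 1.0 + 2.0 * s                    # data exactly affine in S
z_star = kriging_external_drift(x, z, s, 2.0, 6.0)   # S0 = 6 -> 1 + 2*6 = 13
```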
7.7 Unique Neighborhood Case
We recall the principle of kriging or cokriging, although only the kriging case will be addressed here for simplicity. We wish to estimate the variable Z at any target point ($Z^*$) using the neighboring information $Z_\alpha$, as the linear combination:

$Z^* = \sum_\alpha \lambda_\alpha Z_\alpha$   (eq. 7.7-1)

where the kriging weights $\lambda_\alpha$ are the unknowns.
The kriging conditions of unbiasedness and optimality lead to the following linear kriging system:

$\begin{cases} \lambda_\beta C_{\alpha\beta} + \mu_l f^l_\alpha = \bar{C}_{\alpha 0} \\ \lambda_\alpha f^l_\alpha = f^l_0 \end{cases}$   (eq. 7.7-2)

and the variance of the kriging estimation error is given by:

$\sigma^2 = \bar{C}_{00} - \lambda_\alpha \bar{C}_{\alpha 0} - \mu_l f^l_0$   (eq. 7.7-3)

with the following notations:
- $\alpha, \beta$: indices relative to data points belonging to the neighborhood of the target point;
- $0$: index which refers to the target point;
- $C_{\alpha\beta}$: the value of the covariance part of the structural model expressed for the distance between the data points;
- $f^l_\alpha$: the value of the drift function ranked "l" applied to the data point;
- $\bar{C}_{\alpha 0}$: the value of the modified covariance part of the structural model expressed for the distance between the point $\alpha$ and the target point;
- $f^l_0$: the value of the drift function ranked "l" applied to the target point;
- $\bar{C}_{00}$: the value of the modified covariance part of the structural model (iterated twice) expressed between the target point and itself.
The terms $\bar{C}_{\alpha 0}$ and $\bar{C}_{00}$ depend on the type of quantity to be estimated:

- punctual estimate: $\bar{C}_{\alpha 0} = C_{\alpha 0}$ and $\bar{C}_{00} = C_{00}$;
- drift: $\bar{C}_{\alpha 0} = 0$ and $\bar{C}_{00} = 0$;
- block average: $\bar{C}_{\alpha 0} = \dfrac{1}{v} \displaystyle\int_v C_{\alpha x} \, dx$ and $\bar{C}_{00} = \dfrac{1}{v^2} \displaystyle\int_v \int_v C_{xy} \, dx \, dy$;
- first-order partial derivative: $\bar{C}_{\alpha 0} = \dfrac{\partial C_{\alpha 0}}{\partial x}$ and $\bar{C}_{00} = -\dfrac{\partial^2 C}{\partial x^2}(0)$.

A second look at this kriging system allows us to write it as follows:

$A \cdot X = B$   (eq. 7.7-4)

where:
- A is the left-hand side kriging matrix;
- X is the vector of kriging weights (including the possible Lagrange multipliers);
- B is the right-hand side kriging vector;
- $\cdot$ stands for the matrix product;
- * will designate the scalar product.

It is essential to remark that, given the structural model:
- the left-hand side matrix A depends on the mutual locations of the data points present in the neighborhood of the target point;
- the right-hand side B depends on the location of the data points of the neighborhood with regard to the location of the target point;
- the choice of the calculation option only influences the right-hand side and leaves the left-hand side matrix unchanged.

In the Moving Neighborhood case, the data points belonging to the neighborhood vary with the location of the target point. The left-hand matrix A, as well as the right-hand side vector B, must then be established each time, and the vector of kriging weights X is obtained by solving the linear kriging system. The estimation is derived by calculating the product of the first part of the vector
X (excluding the Lagrange multipliers) by the vector $\tilde{Z}$ of the variable values measured at the neighboring data samples, which we can write in matrix notation as:

$Z^* = X^t * \tilde{Z}$   (eq. 7.7-5)

where $\tilde{Z}$ is the vector of the variable values complemented by as many zero values as there are drift equations (and therefore Lagrange multipliers), and * designates the scalar product.
Finally, the variance of the estimation error is derived by calculating another scalar product:

$\sigma^2 = \bar{C}_{00} - X^t * B$   (eq. 7.7-6)

In the Unique Neighborhood case, the neighboring data points remain the same whatever the target point. Therefore the left-hand side matrix is unchanged, and it seems reasonable to invert it once for all: $A^{-1}$. For each target point, the right-hand side vector B must be established, but this time the vector of kriging weights X is obtained by a simple product:

$X = A^{-1} B$   (eq. 7.7-7)

Then the rest of the procedure is similar to the Moving Neighborhood case:

$Z^* = X^t * \tilde{Z}$   (eq. 7.7-8)

$\sigma^2 = \bar{C}_{00} - X^t * B$   (eq. 7.7-9)

If the variance of the estimation error is not required, the vector of kriging weights does not even have to be established. As a matter of fact, we can solve the following system once:

$A \cdot C = \tilde{Z}$   (eq. 7.7-10)

The estimation is then immediately obtained by calculating a scalar product (this is usually referred to as the dual kriging system):

$Z^* = C^t * B$   (eq. 7.7-11)
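The equivalence between the primal solution (eq. 7.7-7 / 7.7-8) and the dual shortcut (eq. 7.7-10 / 7.7-11) can be sketched as follows; the data and covariance model are illustrative assumptions. Since A is symmetric, $X^t \tilde{Z} = (A^{-1}B)^t \tilde{Z} = B^t A^{-1} \tilde{Z} = C^t B$.

```python
import numpy as np

def spherical_cov(h, sill=1.0, a=8.0):
    h = np.abs(h)
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

x_data = np.array([0.0, 2.0, 5.0, 7.0])
z_data = np.array([1.0, 2.0, 0.5, 1.5])
n = len(x_data)

# Left-hand matrix A (ordinary kriging: covariances bordered by ones).
A = np.zeros((n + 1, n + 1))
A[:n, :n] = spherical_cov(x_data[:, None] - x_data[None, :])
A[:n, n] = A[n, :n] = 1.0

# Dual weights (eq. 7.7-10): solve A c = Z~ once for the whole neighborhood.
c = np.linalg.solve(A, np.append(z_data, 0.0))

def estimate_dual(x0):
    b = np.append(spherical_cov(x_data - x0), 1.0)
    return c @ b                       # Z* = c' B  (eq. 7.7-11)

def estimate_primal(x0):
    b = np.append(spherical_cov(x_data - x0), 1.0)
    lam = np.linalg.solve(A, b)        # X = A^-1 B  (eq. 7.7-7)
    return lam[:n] @ z_data
```

The dual form trades one linear solve per target for a single solve per neighborhood, which is the appeal of the unique-neighborhood case.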
7.8 Filtering Model Components
Let us imagine that the target variable Z can be considered as a linear combination of two random variables $Y_1$ and $Y_2$, called scale components, in addition to the mean:

$Z = m + Y_1 + Y_2$   (eq. 7.8-1)

where $Y_1$ is centered (its mean is zero) and characterized by the variogram $\gamma_1$, and $Y_2$ likewise by $\gamma_2$. If the two variables are independent, it is easy to see that the variogram of the variable Z is given by:

$\gamma = \gamma_1 + \gamma_2$   (eq. 7.8-2)

Instead of estimating Z, we may be interested in estimating one of the two components; the estimation of the mean has been covered in the previous paragraphs. We are going to describe the estimation of one scale component (say the first one):

$Y_1^* = \sum_\alpha \lambda_\alpha Z_\alpha$   (eq. 7.8-3)

Here again, we have to distinguish whether the mean is a known quantity or not. If the mean is a known constant, the unbiasedness of the estimator is fulfilled automatically, without additional constraints on the kriging weights. If the mean is constant but unknown, the unbiasedness condition leads to the equation:

$\sum_\alpha \lambda_\alpha = 0$   (eq. 7.8-4)

Note that the formalism can be extended to the scope of IRF-k (i.e. defining the set of monomials $f^l(x)$ which compose the drift), imposing that:

$\sum_\alpha \lambda_\alpha f^l(x_\alpha) = 0 \quad \forall\, l$   (eq. 7.8-5)

Nevertheless, the rest of this paragraph will be developed in the intrinsic case of order 0, where we can establish the optimality condition:

$\mathrm{Var}[Y_1^* - Y^1_0] = -\lambda_\alpha \lambda_\beta \gamma_{\alpha\beta} + 2 \lambda_\alpha \gamma^1_{\alpha 0} - \gamma^1_{00} = \text{minimum}$   (eq. 7.8-6)

This leads to the system:
$\begin{cases} \lambda_\beta \gamma_{\alpha\beta} + \mu = \gamma^1_{\alpha 0} \\ \sum_\beta \lambda_\beta = 0 \end{cases}$   (eq. 7.8-7)

The estimation of the second scale component, $Y_2^*$, is obtained by simply changing $\gamma^1_{\alpha 0}$ into $\gamma^2_{\alpha 0}$ in the right-hand side of the kriging system, keeping the left-hand side unchanged.
Similarly, rather than extracting a scale component, we can also be interested in filtering a scale component out. Usually this happens when the available data measure the variable together with an acquisition noise. This noise is considered as independent from the variable and characterized by its own scale component, the nugget effect. The technique is applied to produce an estimate of the variable, filtering out the effect of this noise, hence the name. In Isatis, instead of selecting one scale component to be estimated, the user has to select the components to be filtered out.
Because of the linearity of the kriging system, we can easily check that:

$Z^* = m^* + Y_1^* + Y_2^*$   (eq. 7.8-8)

This technique is obviously not limited to two components per variable, nor to one single variable. We can even perform component filtering using the cokriging technique.
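Filtering a nugget component can be sketched as follows (an illustrative toy, not the Isatis procedure): the left-hand side carries the full model (spherical plus nugget), while the right-hand side keeps only the continuous component to be estimated.

```python
import numpy as np

def spherical_cov(h, sill=1.0, a=8.0):
    h = np.abs(h)
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

def filtered_kriging(x_data, z_data, x0, nugget=0.4):
    n = len(x_data)
    A = np.zeros((n + 1, n + 1))
    # Full model on the left-hand side: the nugget only acts on the diagonal.
    A[:n, :n] = spherical_cov(x_data[:, None] - x_data[None, :]) + nugget * np.eye(n)
    A[:n, n] = A[n, :n] = 1.0          # unknown-mean constraint
    # Right-hand side: continuous (spherical) component only -> noise filtered.
    b = np.append(spherical_cov(x_data - x0), 1.0)
    lam = np.linalg.solve(A, b)[:n]
    return lam @ z_data, lam

x = np.array([0.0, 3.0, 5.0, 9.0])
z = np.array([1.0, 0.5, 2.0, 1.2])
z_filt, lam = filtered_kriging(x, z, 5.0)             # smoothed estimate
z_nofilt, lam0 = filtered_kriging(x, z, 5.0, nugget=0.0)  # no noise -> exact
```

With a zero nugget there is nothing to filter and the estimator interpolates the data exactly; with a nugget, the filtered estimate is a smoothed version of them.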
7.9 Factorial Kriging
Let us consider a set of variables $Z_1, \ldots, Z_N$ and rewrite the setup of the paragraph "Multivariate Case" of the chapter "Structure Identification" slightly differently. In the scope of the linear model of coregionalization, the structures of the $Z_i$ can be written as linear combinations of the same structures $\gamma_1, \ldots, \gamma_P$.
For each structure $\gamma_p$, we introduce a set of orthogonal variables $Y^p_1, \ldots, Y^p_N$ (means 0 and variances 1), mutually independent and characterized by the same variogram $\gamma_p$, and write:

$Z_i = m_i + \sum_{p=1}^{P} \sum_{k=1}^{N} a_p^{ik} Y_k^p$   (eq. 7.9-1)

Because of the mutual independence, we can easily derive the simple and cross-variograms of the different variables:

$\gamma_{Z_i Z_j} = \sum_{p=1}^{P} \sum_{k=1}^{N} a_p^{ik} a_p^{jk} \gamma_p$   (eq. 7.9-2)

We usually introduce the coefficients $b_p^{ij} = \sum_{k=1}^{N} a_p^{ik} a_p^{jk}$ and deduce that:

$\gamma_{Z_i Z_j} = \sum_{p=1}^{P} b_p^{ij} \gamma_p$   (eq. 7.9-3)

These coefficients correspond to the sill matrices ($B_p$), which are symmetric and positive definite.
Note that the decomposition of the $Z_i$ is not unique, and thus the $Y^p_i$ have no physical meaning. For a given scale component "p", we usually derive the $Y^p_i$ from the decomposition of the ($B_p$) matrix into a basis of orthogonal eigen vectors. Each $Y^p_i$ then corresponds to an eigen factor. The $Y^p_i$ are finally sorted by decreasing eigen value (percentage of variance of the scale component).
The principal task of the Factorial Analysis is to estimate, through traditional cokriging, a given factor for a given scale component. Two remarks should be made:
- As the factors are mutually independent, we can recover the kriging estimates of the variables by applying the linear decomposition to the estimated factors:

$Z_i^* = m_i^* + \sum_{p=1}^{P} \sum_{k=1}^{N} a_p^{ik} \left( Y_k^p \right)^*$   (eq. 7.9-4)

The estimation of the mean is the multivariate extension of the drift estimation (see the previous paragraphs).
- For a given scale component, some eigen values may happen to be equal to 0 or almost null. This means that the contribution of the corresponding factors to the estimator (or to the simulated value in the simulation process) is null.
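The eigen decomposition of a sill matrix into sorted factors (eq. 7.9-3 and the discussion above) can be sketched as follows; the 2x2 sill matrix is an illustrative assumption.

```python
import numpy as np

B = np.array([[4.0, 2.0],
              [2.0, 3.0]])            # sill matrix B_p of one scale component
vals, vecs = np.linalg.eigh(B)        # symmetric positive definite matrix

# Sort factors by decreasing eigen value (share of the component variance).
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# Loadings a with B_p = a a' (eq. 7.9-3): a[i, k] is the loading of
# eigen factor k on variable i.
a = vecs * np.sqrt(vals)
B_rebuilt = a @ a.T
```

A near-zero eigen value would signal a factor whose contribution to the estimator is negligible, as noted above.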
7.10 Block Kriging
The kriging principle can be used for the estimation of any linear combination of the data. In particular, instead of estimating Z at a target point, we might be interested in computing the average value of Z over a volume v, called a block. Block kriging performs this calculation; it is obtained by modifying the right-hand side of the point kriging system (see the paragraph "Kriging of One Variable in the IRF-k Case"):

- $K_{\alpha 0}$ is replaced by $\bar{K}_{\alpha v}$, which corresponds to the integral of the covariance function between the data point $\alpha$ and a point x which describes the volume v:

$\bar{K}_{\alpha v} = \frac{1}{v} \int_v K_{\alpha x} \, dx$   (eq. 7.10-1)

The integral must be expanded over the number of dimensions of the space in which v is defined.

- $f^l_0$ is replaced by $f^l_v$, which corresponds to the mean value of the drift functions over the volume:

$f^l_v = \frac{1}{v} \int_v f^l_x \, dx$   (eq. 7.10-2)

We obtain the following block kriging system:

$\begin{cases} \lambda_\beta K_{\alpha\beta} + \mu_l f^l_\alpha = \bar{K}_{\alpha v} \\ \lambda_\alpha f^l_\alpha = f^l_v \end{cases}$   (eq. 7.10-3)

The block kriging variance is given by:

$\sigma^2 = \bar{K}_{vv} - \lambda_\alpha \bar{K}_{\alpha v} - \mu_l f^l_v$   (eq. 7.10-4)

It requires the calculation of the term $\bar{K}_{vv}$ instead of the term $K_{00}$:

$\bar{K}_{vv} = \frac{1}{v^2} \int_v \int_v K_{xy} \, dx \, dy$   (eq. 7.10-5)
For each block v, the $\bar{K}_{vv}$ integral needs to be calculated only once, whereas $\bar{K}_{\alpha v}$ needs to be calculated as many times as there are points in the block neighborhood. These integral calculations therefore have to be optimized.
Formal expressions of these integrals exist for a few basic structures. Unfortunately, this is not true for most of them, and moreover these formal expressions sometimes lead to time-consuming calculations. Furthermore, the same type of numerical integration MUST be used for the $\bar{K}_{vv}$ and $\bar{K}_{\alpha v}$ terms, otherwise we may end up with negative variances.
Numerical integration methods relying on the discretization of the target block are therefore preferred in Isatis. Two types of discretization are combined:
- the regular discretization,
- the random discretization.
In the regular discretization case, the block is partitioned into equal cells and the target is replaced by the union of the cell centers $c_i$. This allows the calculation of the terms:

$\bar{K}_{\alpha v} = \frac{1}{N} \sum_{i=1}^{N} K_{\alpha c_i}$   (eq. 7.10-6)

where N is the number of cells in the block.
The double integral of the $\bar{K}_{vv}$ calculation is replaced by a double summation:

$\bar{K}_{vv} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} K_{c_i c_j}$   (eq. 7.10-7)

Applying only the regular discretization sometimes leads to over-estimating the nugget effect. A random discretization is therefore substituted, where the first point of each pair describes the centers of the previous regular cells, whereas the second point is randomly located within its cell. In this case, there is almost no chance that a point $c_i$ coincides with the point $c_j$, and the function K(h) is never called for a zero distance. The nugget effect of the structure therefore vanishes as soon as the covariance is integrated. This option is recommended as soon as the dimension of the block is much larger than the dimension of the sample, which is usually the case.

Note - The drawback of this method is linked to its random aspect. For each calculation of a $\bar{K}_{vv}$ term, the set of points requires a set of random values to be drawn, which will vary from one trial to another. This is why it is recommended that the user exercises this calculation to determine the optimum discretization as a trade-off between accuracy and stability of the result on the one hand, and computation time on the other: this possibility is provided in the Neighborhood procedure.
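The discretized terms of eq. 7.10-6 and 7.10-7 can be sketched in 1D as follows (an illustrative toy with an assumed exponential covariance), including the random variant that keeps the second point of each pair away from the first so that K(h) is never evaluated at h = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def expo_cov(h, sill=1.0, scale=4.0):
    return sill * np.exp(-np.abs(h) / scale)

def block_terms_regular(x_alpha, block_min, block_max, n_disc=10):
    # Centers c_i of the regular cells partitioning the block.
    edges = np.linspace(block_min, block_max, n_disc + 1)
    c = 0.5 * (edges[:-1] + edges[1:])
    K_av = expo_cov(x_alpha - c).mean()                  # eq. 7.10-6
    K_vv = expo_cov(c[:, None] - c[None, :]).mean()      # eq. 7.10-7
    return K_av, K_vv

def block_Kvv_random(block_min, block_max, n_disc=10):
    # First point at the cell centers, second point random within its cell:
    # the covariance is never called for a zero distance.
    edges = np.linspace(block_min, block_max, n_disc + 1)
    c = 0.5 * (edges[:-1] + edges[1:])
    c2 = edges[:-1] + (edges[1] - edges[0]) * rng.random(n_disc)
    return expo_cov(c[:, None] - c2[None, :]).mean()

K_av, K_vv = block_terms_regular(0.0, 2.0, 6.0)   # datum at 0, block [2, 6]
K_vv_rand = block_Kvv_random(2.0, 6.0)
```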
7.11 Polygon Kriging
Estimating the average value of Z over an irregular shape (i.e. a polygon), together with its associated estimation variance, is an almost straightforward extension of block kriging.
The polygon is first discretized into regular cells $v_i$ by the user. The procedure is then similar to the one presented for the block kriging case, except for the calculation of the $\bar{K}_{\alpha v}$ and $\bar{K}_{vv}$ terms, which are now calculated as weighted discrete summations:

$\bar{K}_{\alpha v} = \frac{1}{\sum_i w_i} \sum_{i=1}^{N} w_i K_{\alpha c_i}$   (eq. 7.11-1)

$\bar{K}_{vv} = \frac{1}{\sum_i \sum_j w_i w_j} \sum_{i=1}^{N} \sum_{j=1}^{N} w_i w_j K_{c_i c_j}$   (eq. 7.11-2)

where each weight $w_i$ corresponds to the surface of the intersection between the cell $v_i$ centered in $c_i$ and the polygon.
A random discretization is also performed for the computation of the $\bar{K}_{vv}$ term.
7.12 Gradient Estimation
The objective is to estimate the derivative $\partial Z / \partial u$, where u is a unit vector whose X and Y components are $(\cos\theta, \sin\theta)$. Letting $\theta = 0$ will give us $\partial Z / \partial X$ and $\theta = \pi/2$ will give us $\partial Z / \partial Y$.
From a mathematical standpoint, it is necessary to clearly define what is meant by $\partial Z / \partial u$. There are two concepts involved:
- One is the ordinary concept of a two-sided directional derivative of a fixed function z(x), defined as the limit, if it exists, of:

$\frac{\partial z}{\partial u} = \lim_{r \to 0} \frac{z(x + ru) - z(x)}{r}$   (eq. 7.12-1)

This fixed function z(x) is the field under study and is really what we are after. Contrary to our usual notation, we have used a lower-case letter "z" to emphasize the difference from the random field Z(x).
- The other concept is that of a derivative of the random field Z(x). It is defined in the mean square sense as the random variable $\partial Z / \partial u$, if it exists, such that:

$\lim_{r \to 0} E \left[ \left( \frac{Z(x + ru) - Z(x)}{r} - \frac{\partial Z}{\partial u} \right)^2 \right] = 0$   (eq. 7.12-2)

It can be shown that if Z(x) has a stationary and isotropic covariance K(h), then Z(x) is differentiable in the mean square sense if and only if K(h) is twice differentiable at h = 0. Then K''(h) exists for every h and -K''(h) is the covariance of Z'(x).
Unfortunately, common covariance models (like the spherical, the exponential, ...) are not twice differentiable. Strictly speaking, it is then impossible to estimate $\partial Z / \partial u$ because this quantity is simply not defined. In practice however, we cannot let this theoretical difficulty rule out the estimation of
gradients. Considering the case when $\partial Z / \partial u$ does exist, we note that, by linearity of the kriging systems, its kriging estimator satisfies:

$\left( \frac{\partial Z}{\partial u} \right)^* = \frac{\partial Z^*}{\partial u}$

This already tells us that the kriging system of $(\partial Z / \partial u)^*$ is simply obtained by differentiating the right-hand side of the point system. The idea is to accept $\partial Z^* / \partial u$ as our gradient estimate even when the covariance (or variogram, or generalized covariance) is not twice differentiable.
The only difficulty now left is that $Z^*(x)$ itself may have singularities. One way to avoid the problem is to use the symmetric formula:

$\frac{\partial Z^*}{\partial u} = \lim_{r \to 0^+} \frac{Z^*(x + ru) - Z^*(x - ru)}{2r}$   (eq. 7.12-3)

The notation $r \to 0^+$ means that r decreases to zero from above, i.e. takes on positive values only. This formula may be best justified in terms of one-sided directional derivatives. We now turn to the derivation of the kriging equations of $\partial Z^* / \partial u$. For the sake of simplicity, the results will be obtained as if Z had no nugget effect. If not so, it is clear that what we estimate then is not the derivative of Z, which would not make sense, but the derivative of the continuous part of the phenomenon Z.
The gradient estimation kriging system is then:

$\begin{cases} \lambda_\beta K_{\alpha\beta} + \mu_l f^l_\alpha = \dfrac{\partial K_{\alpha 0}}{\partial u} \\[4pt] \lambda_\alpha f^l_\alpha = \dfrac{\partial f^l_0}{\partial u} \end{cases}$ with $\sigma^2 = \mathrm{var}\left[ \frac{\partial Z}{\partial u} \right] - \lambda_\alpha \frac{\partial K_{\alpha 0}}{\partial u} - \mu_l \frac{\partial f^l_0}{\partial u}$   (eq. 7.12-4)

where $\mathrm{var}[\partial Z / \partial u]$ is equal to $-K''(0)$.
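The identity $(\partial Z / \partial u)^* = \partial Z^* / \partial u$ can be checked numerically as follows. This is an illustrative 1D sketch with an assumed Gaussian covariance (chosen because it is twice differentiable): the gradient system differentiates only the right-hand side, and its result matches the symmetric finite difference of the kriged surface (eq. 7.12-3).

```python
import numpy as np

def gauss_cov(h, a=3.0):
    return np.exp(-(h / a) ** 2)

def dgauss_cov(h, a=3.0):
    # Derivative of C(x0 - x_alpha) with respect to the target coordinate x0.
    return -2.0 * h / a ** 2 * np.exp(-(h / a) ** 2)

x_data = np.array([0.0, 1.5, 4.0, 6.0])
z_data = np.array([0.5, 1.0, 2.2, 1.7])
n = len(x_data)

A = np.zeros((n + 1, n + 1))          # ordinary kriging left-hand side
A[:n, :n] = gauss_cov(x_data[:, None] - x_data[None, :])
A[:n, n] = A[n, :n] = 1.0

def krige(x0):
    b = np.append(gauss_cov(x0 - x_data), 1.0)
    return np.linalg.solve(A, b)[:n] @ z_data

def krige_gradient(x0):
    # Differentiated right-hand side; the constant-drift row becomes d(1)/dx = 0.
    b = np.append(dgauss_cov(x0 - x_data), 0.0)
    return np.linalg.solve(A, b)[:n] @ z_data

x0 = 2.7
grad = krige_gradient(x0)
fd = (krige(x0 + 1e-5) - krige(x0 - 1e-5)) / 2e-5     # symmetric formula
```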
7.13 Kriging Several Variables Linked through Partial Derivatives
We now discuss the particular multivariate case where we consider a main variable (that we call the "depth") and two auxiliary variables corresponding to the gradients of the depth along the directions X and Y. This trivariate case is handled by a traditional cokriging procedure, as soon as the model has been derived.
This model is very restrictive: if $\sigma$ designates the covariance of the depth variable Z, we have the following constraints:

- $\mathrm{Cov}(Z, Z) = \sigma$
- $\mathrm{Cov}\left( \dfrac{\partial z}{\partial x}, \dfrac{\partial z}{\partial x} \right) = -\dfrac{\partial^2 \sigma}{\partial x^2}$
- $\mathrm{Cov}\left( \dfrac{\partial z}{\partial y}, \dfrac{\partial z}{\partial y} \right) = -\dfrac{\partial^2 \sigma}{\partial y^2}$
- $\mathrm{Cov}\left( Z, \dfrac{\partial z}{\partial x} \right) = -\mathrm{Cov}\left( \dfrac{\partial z}{\partial x}, Z \right) = \dfrac{\partial \sigma}{\partial x}$
- $\mathrm{Cov}\left( Z, \dfrac{\partial z}{\partial y} \right) = -\mathrm{Cov}\left( \dfrac{\partial z}{\partial y}, Z \right) = \dfrac{\partial \sigma}{\partial y}$
- $\mathrm{Cov}\left( \dfrac{\partial z}{\partial x}, \dfrac{\partial z}{\partial y} \right) = \mathrm{Cov}\left( \dfrac{\partial z}{\partial y}, \dfrac{\partial z}{\partial x} \right) = -\dfrac{\partial^2 \sigma}{\partial x \partial y}$

We immediately see that this requires the mathematical function $\sigma$ to be at least twice differentiable, which discards basic structures such as the nugget effect, the spherical variogram or the linear variogram, which are not differentiable at the origin (the function must be extended by symmetry around the vertical axis for negative distances).
To overcome the problem, we replace the (punctual) gradient by a dip which represents the average slope integrated over a small surface (ball) centered on the data point:
$G_x = \frac{\partial}{\partial x} \left[ \iint_B Z(x - u, y - v)\, p(u, v) \, du \, dv \right]$   (eq. 7.13-1)

where p(u,v) stands for the convolution weighting function. The integral and derivation signs can be interchanged, which yields the desired differentiability as soon as p is differentiable. Therefore, we consider the gaussian weighting function:

$p(u, v) = \frac{1}{2 \pi a^2} \exp \left( -\frac{u^2 + v^2}{2 a^2} \right)$   (eq. 7.13-2)

where a is the radius of the integration ball B over which the dip measurement is integrated.
The structural analysis should therefore be performed with these constraints in mind. Moreover, this implies that if the depth is an IRF-2, then its two gradient components are IRF-0; if the depth is an IRF-1, then its two gradient components are stationary. This property reinforces the difficulty of the inference.
The principle in this software is to perform the structural analysis on the depth variable and to derive (without any check) the structures of the gradient components.

Note - The multivariate structure does not belong to the class of linear coregionalization models. There are also some constraints on the drift equations.

In fact, if we write the cokriging estimator of depth and gradients:

$Z^* = \sum_\alpha \lambda_\alpha Z_\alpha + \sum_\alpha \nu_\alpha \left( \frac{\partial z}{\partial x} \right)_\alpha + \sum_\alpha \eta_\alpha \left( \frac{\partial z}{\partial y} \right)_\alpha$   (eq. 7.13-3)

the unbiasedness condition is expanded as follows:

$E[Z^* - Z_0] = 0$   (eq. 7.13-4)

$\sum_l a_l \left( \sum_\alpha \lambda_\alpha f^l_\alpha + \sum_\alpha \nu_\alpha \left( \frac{\partial f^l}{\partial x} \right)_\alpha + \sum_\alpha \eta_\alpha \left( \frac{\partial f^l}{\partial y} \right)_\alpha - f^l_0 \right) = 0$   (eq. 7.13-5)

whatever the set of coefficients $a_l$, and therefore we have:

$\sum_\alpha \lambda_\alpha f^l_\alpha + \sum_\alpha \nu_\alpha \left( \frac{\partial f^l}{\partial x} \right)_\alpha + \sum_\alpha \eta_\alpha \left( \frac{\partial f^l}{\partial y} \right)_\alpha = f^l_0 \quad \forall\, l$   (eq. 7.13-6)
Finally, the cokriging system can be expressed as follows, assuming that the depth variable is an IRF-1 (drift basis 1, x, y):

$\begin{bmatrix}
\sigma & \dfrac{\partial \sigma}{\partial x} & \dfrac{\partial \sigma}{\partial y} & 1 & x & y \\
\dfrac{\partial \sigma}{\partial x} & -\dfrac{\partial^2 \sigma}{\partial x^2} & -\dfrac{\partial^2 \sigma}{\partial x \partial y} & 0 & 1 & 0 \\
\dfrac{\partial \sigma}{\partial y} & -\dfrac{\partial^2 \sigma}{\partial x \partial y} & -\dfrac{\partial^2 \sigma}{\partial y^2} & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 \\
x & 1 & 0 & 0 & 0 & 0 \\
y & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} \lambda \\ \nu \\ \eta \\ \mu_0 \\ \mu_1 \\ \mu_2 \end{bmatrix}
=
\begin{bmatrix} \sigma_0 \\ \dfrac{\partial \sigma_0}{\partial x} \\ \dfrac{\partial \sigma_0}{\partial y} \\ 1 \\ x_0 \\ y_0 \end{bmatrix}$   (eq. 7.13-7)

The same type of cokriging system and the same numerical recipe are used when cokriging two variables Z and Y such that:

- $Z = \Delta Y$
- $Z = a \, \dfrac{\partial Y}{\partial x} + \dfrac{\partial Y}{\partial y}$
7.14 Kriging with Inequalities
Note - Isatis window: "Interpolate / Estimation / Conditional Expectation with Inequalities...".
The aim of this technique is to deal with a variable which is defined in some locations by a value
(hard data) and in some other locations by an interval (soft data). One of the bounds of an interval
can be undefined. These soft data correspond to the inequalities.
To solve this problem, Isatis proposes to replace the soft data by a new set of hard data. The way to
replace the intervals is to calculate the conditional expectation of the target variable at each soft
data location.
To calculate the conditional expectation, Isatis uses a Gibbs Sampler technique, which simulates at each soft datum a given number of realizations of the target variable, according to its variogram model and conditioned by the intervals and the hard data.
Note - The Gibbs Sampler is an iterative algorithm which consists in starting with an authorized vector of gaussian values consistent with the inequality constraints. Each value is then modified in turn, using a kriging procedure and adding a random value. In Isatis the parameters attached to this algorithm are fixed; a unique neighborhood is compulsory for the simple kriging step.
The simulations can only be performed in the gaussian space. The user must first transform the hard data into a gaussian variable and keep the anamorphosis attached to this transformation. The intervals, represented by two variables, must also be transformed into the gaussian space by the same anamorphosis function.
After the simulation, the program simply calculates the average value of the realizations (after back-transformation into the raw space) at each soft data point. These average values are called the conditional expectation. This conditional expectation is in fact the most probable value of the variable at the soft data locations. The standard deviation of these realizations is also calculated and stored.
Then, the final step is to krige the target variable using both the hard data and the conditional expectation values.
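The idea of replacing an interval by a conditional expectation can be sketched as follows. This toy illustration (not the Isatis algorithm, which is an iterative Gibbs Sampler) conditions a single soft point on two hard data by simple kriging, then keeps only the gaussian draws that honor the interval; all data values and model parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def expo_cov(h, scale=4.0):
    return np.exp(-np.abs(h) / scale)

x_hard = np.array([0.0, 5.0])
z_hard = np.array([-0.5, 0.8])        # gaussian-transformed hard data
x_soft, lo, hi = 2.0, 0.0, 1.0        # interval constraint at the soft point

# Simple kriging of the soft location from the hard data (known mean 0).
C = expo_cov(x_hard[:, None] - x_hard[None, :])
c0 = expo_cov(x_hard - x_soft)
lam = np.linalg.solve(C, c0)
mean_sk = lam @ z_hard
sd_sk = np.sqrt(max(1.0 - lam @ c0, 0.0))

# Draw realizations and keep those honoring the interval.
draws = mean_sk + sd_sk * rng.standard_normal(20000)
kept = draws[(draws >= lo) & (draws <= hi)]
cond_exp = kept.mean()   # conditional expectation at the soft point
```

The retained mean plays the role of the "most probable value" described above; the spread of the kept draws gives the stored standard deviation.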
7.15 Kriging with Measurement Error
The user will find this kriging option in the "Interpolate / Estimation / (Co-)Kriging..." window,
"Special Kriging Options..." button.
A slight modification of the theory makes it possible to take into account variable measurement
errors at data points, provided the variances of these errors are known.
Suppose that, instead of $Z_\alpha$, we are given $Z_\alpha + e_\alpha$, where $e_\alpha$ is a random error satisfying the following conditions:

$E(e_\alpha) = 0 \qquad \mathrm{Cov}(e_\alpha, e_\beta) = 0 \ \text{if}\ \alpha \ne \beta \qquad \mathrm{Var}(e_\alpha) = V_\alpha \qquad \mathrm{Cov}(e_\alpha, Z_\beta) = 0$   (eq. 7.15-1)

Then the kriging estimator of Z can be written $Z_0^* = \sum_\alpha \lambda_\alpha (Z_\alpha + e_\alpha)$ and the variance becomes:

$\mathrm{Var}[Z_0^* - Z_0] = \lambda_\alpha \lambda_\beta K_{\alpha\beta} + \sum_\alpha \lambda_\alpha^2 V_\alpha - 2 \lambda_\alpha K_{\alpha 0} + K_{00}$   (eq. 7.15-2)

Then the kriging system of $Z_0^*$ remains the same, except that $V_\alpha$ is now added to the diagonal terms $C_{\alpha\alpha}$; no change occurs in the right-hand side of the kriging system.
These data error variances are related, though not identical, to the nugget effect.
Let us first recall the definition of the nugget effect. By definition, the nugget effect refers to a discontinuity of the variogram or the covariance at zero distance. Mathematically, it means that the field Z(x) is not continuous in the mean square sense.
The origin of the terminology "nugget effect" is as follows. Gold ore is often discovered in the form of nuggets, i.e. pebbles of pure gold disseminated in a sterile matrix. Consequently, the ore grade varies discontinuously from inside to outside the nugget. It has been found convenient to retain the term "nugget effect" even if the discontinuity is due to causes other than actual nuggets.
Generally, the discontinuity of the variogram is only apparent. If we could investigate structures at a smaller scale, we would see that Z(x) is in fact continuous, but with a range much smaller than the
nearest distance between data points. This is the reason why one can conveniently replace this nugget effect by a transition scheme (say, a spherical variogram) with a very short range.
But the "nugget effect" (as used in the modeling phase) can also be due to another factor: the measurement error. In this case, the discontinuity is real and is due to errors of the type $e_\alpha$. This time, the discontinuity remains whatever the scale of the structural investigation. If the same type of measurement error is attributed to all data, the estimate is the same whether:
- you do not use any nugget effect in your model and you provide the same $V_\alpha$ for each datum, or
- you define a nugget effect component in your model whose sill C is precisely equal to $V_\alpha$.
Unlike the estimate itself, the kriging variance differs depending on which option is chosen. Indeed, the measurement error is considered as an artefact and is not a part of the phenomenon of interest. Therefore, a kriging with a variance of measurement error equal for each datum and no nugget effect in the model will lead to smaller kriging variances than the estimation with a nugget component equal to $V_\alpha$.
The use of data error variances really makes sense when the data are of different qualities. Many situations may occur. For example, the data may come from several surveys: old ones and new ones. Or the measurement techniques may be different: depth measured at wells or by seismic, porosities from cores or from log interpretation, etc.
In such cases, error variances may be computed separately for each sub-population and, if we are lucky, the better quality data will allow identification of the underlying structure (possibly including a nugget effect component), while the variogram attached to the poorer quality data will show the same structure incremented by a nugget effect corresponding to the specific measurement error variance $V_\alpha$.
In other cases, it may be possible to evaluate directly the precision of each measurement and derive $V_\alpha$: if we are told that the absolute error on Z is $\pm \Delta Z$, by reference to Gaussian errors we may consider that $\sigma_Z = \Delta Z / 2$, and take $V_\alpha = (\Delta Z / 2)^2$.
Another use of this technique is in the post-processing of the macro kriging, where we calculate "equivalent samples" with measurement error variances. These variances are in fact calculated from a fitted model depending on the number of initial samples inside pseudo-blocks.
Technical References 127
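The difference between the two options can be checked on a toy configuration. Below is a minimal sketch (hypothetical 1-D data positions and an assumed exponential covariance model): both options share the same kriging matrix, so the weights — and hence the estimate — coincide, while the kriging variances differ by exactly V.

```python
import numpy as np

# Hypothetical 1-D configuration: three data points and one target at x0
x = np.array([0.0, 1.0, 2.5]); x0 = 0.5
V = 0.2                                  # measurement-error variance V_alpha
sill, scale = 1.0, 3.0

def C(h):                                # exponential covariance (assumed model)
    return sill * np.exp(-np.abs(h) / scale)

K = C(x[:, None] - x[None, :])
k0 = C(x - x0)

# The error variance and the nugget both only add V to the diagonal; the target
# is away from the data, so a nugget would not contribute to k0 either.
lam = np.linalg.solve(K + V * np.eye(3), k0)
var_err = C(0) - lam @ k0                # option (a): target variance excludes V
var_nug = (C(0) + V) - lam @ k0          # option (b): the nugget is part of the model
print(var_err, var_nug)                  # same weights, variances differ by V
```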
7.16 Lognormal Kriging
Isatis window: Interpolate / Estimation / Lognormal Kriging.
The principle is to estimate a stationary variable Z which can be written as:

Z + \beta = \exp(Y)   (eq. 7.16-1)

and transformed into:

Y = \ln(Z + \beta)   (eq. 7.16-2)

where \beta is the shift which makes Z + \beta a positive variable; Y is supposed to be normally distributed.
In this paragraph, we refer to the value of the mean of the raw punctual variable (denoted M_Z) and the mean and dispersion variance of the log-variable (denoted m_Y and S_Y^2), with the theoretical relationship:

M_Z + \beta = \exp\left( m_Y + \frac{S_Y^2}{2} \right)   (eq. 7.16-3)

Using the variogram of the Y variable (denoted \gamma_Y), we can estimate the value Y^* at any point of the space as a linear combination of the information at the data points:

Y^* = \sum_\alpha \lambda_\alpha Y_\alpha + \left( 1 - \sum_\alpha \lambda_\alpha \right) m_Y   (eq. 7.16-4)

and derive the corresponding variance of estimation \sigma_Y^2. The kriging system can be solved either in the strict stationary case (simple kriging) or in the intrinsic case (ordinary kriging), then honoring the condition \sum_\alpha \lambda_\alpha = 1 through a Lagrange parameter \mu.
The derivation of the estimate and of the corresponding variance of estimation on the raw scale is less trivial than simply taking the antilog (exponential) of the log-scale quantities. The following formulae consider the cases of Simple or Ordinary Kriging, for point or block estimations.
For block estimation, values for a block v are supposed to be lognormally distributed according to the formula:

Z_v + \beta = \exp\left( a\, \frac{1}{|v|} \int_v Y(x)\, dx + b \right)   (eq. 7.16-5)

where the values of the coefficients a and b are calculated in order to honor the appropriate mean and variance for Z_v, i.e.:

E(Z_v) = E(Z) \quad \text{and} \quad \mathrm{Var}(Z_v) = \mathrm{Var}(Z) - \bar\gamma_Z(v,v)   (eq. 7.16-6)

Introducing the variogram \gamma_Y instead, it can be shown that:

a^2 = \frac{ S_Y^2 + \ln\left[ \frac{1}{|v|^2} \int_v \int_v e^{-\gamma_Y(x-y)}\, dx\, dy \right] }{ S_Y^2 - \bar\gamma_Y(v,v) }   (eq. 7.16-7)

b = (1 - a)\, m_Y + \frac{1}{2} \left[ (1 - a^2)\, S_Y^2 + a^2\, \bar\gamma_Y(v,v) \right]   (eq. 7.16-8)

The same formulae include the punctual case with a = 1 and b = \bar\gamma_Y(v,v)/2 = 0.
An unbiased lognormal estimator of Z_v can then be derived from the kriging of \frac{1}{|v|} \int_v Y(x)\, dx, with a tractable estimation variance (no change of support model is required). The relative standard deviation of estimation \sigma_R, which corresponds to \sigma / (M_Z + \beta), is saved in Isatis.

7.16.1 Point Estimation

Simple Kriging

Z^* + \beta = \exp\left( Y^* + \frac{1}{2} \sigma_Y^2 \right)   (eq. 7.16-1)

\sigma_R^2 = e^{S_Y^2} \left( 1 - e^{-\sigma_Y^2} \right)   (eq. 7.16-2)

Ordinary Kriging

Z^* + \beta = \exp\left( Y^* + \frac{1}{2} \sigma_Y^2 + \mu \right)   (eq. 7.16-3)

\sigma_R^2 = e^{S_Y^2} \left[ 1 + e^{-\sigma_Y^2 - 2\mu} - 2\, e^{-\sigma_Y^2 - \mu} \right]   (eq. 7.16-4)

7.16.2 Block Average Estimation

In the block case, Y^* and \sigma_Y^2 denote the kriging of \frac{1}{|v|} \int_v Y(x)\, dx and its kriging variance.

Simple Kriging

Z_v^* + \beta = \exp\left( a\, Y^* + \frac{1}{2} a^2 \sigma_Y^2 + b \right)   (eq. 7.16-1)

\sigma_R^2 = e^{a^2 \left( S_Y^2 - \bar\gamma_Y(v,v) \right)} \left( 1 - e^{-a^2 \sigma_Y^2} \right)   (eq. 7.16-2)

Ordinary Kriging

Z_v^* + \beta = \exp\left( a\, Y^* + \frac{1}{2} a^2 \sigma_Y^2 + b + a^2 \mu \right)   (eq. 7.16-3)

\sigma_R^2 = e^{a^2 \left( S_Y^2 - \bar\gamma_Y(v,v) \right)} \left[ 1 + e^{-a^2 (\sigma_Y^2 + 2\mu)} - 2\, e^{-a^2 (\sigma_Y^2 + \mu)} \right]   (eq. 7.16-4)
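The need for the half-variance correction in the lognormal simple kriging estimator (rather than a naive antilog of the kriged log value) can be illustrated by a Monte-Carlo sketch with hypothetical variances and no shift (beta = 0):

```python
import numpy as np

# Monte-Carlo sketch (hypothetical variances, beta = 0): the lognormal SK
# estimator exp(Y* + sigma_sk^2 / 2) is unbiased, the naive antilog is not.
rng = np.random.default_rng(1)
m_y, S2 = 0.0, 1.0                 # mean and dispersion variance of Y
var_ystar = 0.6                    # variance of the SK estimate Y* (assumed)
s2_sk = S2 - var_ystar             # SK property: Var(Y*) + sigma_sk^2 = S_Y^2

y_star = m_y + np.sqrt(var_ystar) * rng.standard_normal(1_000_000)
z_naive = np.exp(y_star)                   # biased low on average
z_star = np.exp(y_star + 0.5 * s2_sk)      # unbiased lognormal SK estimator

M_Z = np.exp(m_y + 0.5 * S2)               # theoretical mean of Z = exp(Y)
print(z_naive.mean(), z_star.mean(), M_Z)
```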
7.17 Cokriging
This time, we consider two random variables Z_1 and Z_2 characterized by:

l the simple variogram of Z_1, denoted \gamma_{11}
l the simple variogram of Z_2, denoted \gamma_{22}
l the cross-variogram of Z_1 and Z_2, denoted \gamma_{12}

We assume that the variables have unknown and unrelated means:

E[Z_1] = m_1 \quad \text{and} \quad E[Z_2] = m_2

Let us now estimate the first variable at a target point denoted "0", as a linear combination of the neighboring information concerning both variables, using respectively the weights \lambda^1_\alpha and \lambda^2_\beta:

Z_1^* = \sum_\alpha \lambda^1_\alpha Z^1_\alpha + \sum_\beta \lambda^2_\beta Z^2_\beta   (eq. 7.17-1)

The first variable is also called the main variable. We still apply the unbiasedness condition (eq. 7.1-2):

E[Z_1^* - Z^1_0] = 0   (eq. 7.17-2)

which leads to:

\sum_\alpha \lambda^1_\alpha\, m_1 + \sum_\beta \lambda^2_\beta\, m_2 - m_1 = 0 \quad \forall\, m_1, m_2   (eq. 7.17-3)

\sum_\alpha \lambda^1_\alpha = 1 \quad \text{and} \quad \sum_\beta \lambda^2_\beta = 0   (eq. 7.17-4)

Let us consider the optimality condition (eq. 7.1-3) and minimize the variance of the estimation error:

\mathrm{Var}[Z_1^* - Z^1_0] = -\sum_{\alpha,\beta} \lambda^1_\alpha \lambda^1_\beta\, \gamma^{11}_{\alpha\beta} - 2 \sum_{\alpha,\beta} \lambda^1_\alpha \lambda^2_\beta\, \gamma^{12}_{\alpha\beta} - \sum_{\alpha,\beta} \lambda^2_\alpha \lambda^2_\beta\, \gamma^{22}_{\alpha\beta} + 2 \sum_\alpha \lambda^1_\alpha\, \gamma^{11}_{\alpha 0} + 2 \sum_\beta \lambda^2_\beta\, \gamma^{12}_{\beta 0} - \gamma^{11}_{00}   (eq. 7.17-5)

under the unbiasedness conditions.
This leads to the cokriging system:

\sum_\beta \lambda^1_\beta\, \gamma^{11}_{\alpha\beta} + \sum_\beta \lambda^2_\beta\, \gamma^{12}_{\alpha\beta} + \mu_1 = \gamma^{11}_{\alpha 0} \quad \forall \alpha
\sum_\beta \lambda^1_\beta\, \gamma^{12}_{\alpha\beta} + \sum_\beta \lambda^2_\beta\, \gamma^{22}_{\alpha\beta} + \mu_2 = \gamma^{12}_{\alpha 0} \quad \forall \alpha
\sum_\alpha \lambda^1_\alpha = 1 \quad \quad \sum_\beta \lambda^2_\beta = 0   (eq. 7.17-6)

In matrix notation:

[ \Gamma_{11}   \Gamma_{12}   1   0 ] [ \lambda^1 ]   [ \gamma^{11}_0 ]
[ \Gamma_{12}'  \Gamma_{22}   0   1 ] [ \lambda^2 ] = [ \gamma^{12}_0 ]
[ 1'            0'            0   0 ] [ \mu_1     ]   [ 1             ]
[ 0'            1'            0   0 ] [ \mu_2     ]   [ 0             ]   (eq. 7.17-7)

with the estimation variance:

(\sigma_1)^2 = \sum_\alpha \lambda^1_\alpha\, \gamma^{11}_{\alpha 0} + \sum_\beta \lambda^2_\beta\, \gamma^{12}_{\beta 0} + \mu_1 - \gamma^{11}_{00}   (eq. 7.17-8)

Note - If instead of Z_1^* we want to estimate Z_2^*, the matrix is unchanged and only the right-hand side is modified:

[ \gamma^{12}_0 \quad \gamma^{22}_0 \quad 0 \quad 1 ]'   (eq. 7.17-9)

and the corresponding estimation variance:

(\sigma_2)^2 = \sum_\alpha \lambda^1_\alpha\, \gamma^{12}_{\alpha 0} + \sum_\beta \lambda^2_\beta\, \gamma^{22}_{\beta 0} + \mu_2 - \gamma^{22}_{00}   (eq. 7.17-10)

Let us first remark that both variables Z_1 and Z_2 do not have to be systematically defined at all the data points. The only constraint is that when estimating Z_1, the number of data where Z_1 is defined is strictly positive.
This system can easily be generalized to more than two variables. The only constraint lies in the "multivariate structure", which ensures that the system is regular if it comes from a linear coregionalization model.
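The assembly and solution of the cokriging system can be sketched numerically. The following is a minimal example under an assumed linear model of coregionalization (hypothetical 1-D data locations, a shared spherical structure with hypothetical sills); it verifies the unbiasedness conditions on the resulting weights:

```python
import numpy as np

# Ordinary cokriging of Z1 at x0 from Z1 and Z2 data (hypothetical configuration)
x1 = np.array([0.0, 1.0, 3.0])        # locations of Z1 samples
x2 = np.array([0.5, 2.0])             # locations of Z2 samples
x0 = 1.5                              # target point

def gamma(h, sill):                   # spherical variogram, range 4 (assumed)
    h = np.abs(h); a = 4.0
    g = sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

# LMC: gamma_11, gamma_22, gamma_12 share the same basic structure
s11, s22, s12 = 1.0, 0.8, 0.6         # sills, with |s12| <= sqrt(s11 * s22)

n1, n2 = len(x1), len(x2)
n = n1 + n2 + 2                       # weights + two Lagrange parameters
A = np.zeros((n, n)); rhs = np.zeros(n)
D = lambda u, v: u[:, None] - v[None, :]
A[:n1, :n1] = gamma(D(x1, x1), s11)
A[:n1, n1:n1 + n2] = gamma(D(x1, x2), s12)
A[n1:n1 + n2, :n1] = gamma(D(x2, x1), s12)
A[n1:n1 + n2, n1:n1 + n2] = gamma(D(x2, x2), s22)
A[:n1, n1 + n2] = A[n1 + n2, :n1] = 1.0              # sum of lambda1 = 1
A[n1:n1 + n2, n1 + n2 + 1] = A[n1 + n2 + 1, n1:n1 + n2] = 1.0  # sum of lambda2 = 0
rhs[:n1] = gamma(x1 - x0, s11)
rhs[n1:n1 + n2] = gamma(x2 - x0, s12)
rhs[n1 + n2] = 1.0                                    # right-hand side of eq. 7.17-7

w = np.linalg.solve(A, rhs)
lam1, lam2 = w[:n1], w[n1:n1 + n2]
print(lam1.sum(), lam2.sum())                         # 1 and 0 up to round-off
```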
7.18 Extended Collocated Cokriging
Isatis window: Interpolate / Estimation / Bundled Collocated Cokriging.
This technique is used when trying to estimate a target variable Z, known on a sparse sampling, on a regular grid while a correlated variable Y is available at each node of this grid.
The original technique, strictly "Collocated Cokriging", has been extended in Isatis and is also referred to as "Multi Collocated Cokriging" in the literature.
The first task that must be performed by the user consists in writing the value of the variable Y at the points of the sparse sampling. Then he must perform the bivariate structural analysis using the variables Y and Z. This may lead to a severe problem due to the large heterotopy between these two variables: as a matter of fact, if the inference is carried out in terms of variograms, the two variables need to be defined at the same points. If the secondary variable Y is dense with regard to the primary variable Z, we can always interpolate Y at the points where Z is defined and therefore the inference (at least as far as the simple variogram \gamma_Z(h) and the cross-variogram \gamma_{YZ}(h) are concerned) only considers those samples: all the remaining locations where Y only is defined are simply neglected.
In the literature, we also find another inference method. The variogram \gamma_Y(h) is constructed on the whole dense data set, whereas the simple variogram \gamma_Z(h) and the cross-variogram \gamma_{YZ}(h) are set as being similar to \gamma_Y(h) up to the scaling of their sills and to the use of the nugget effect: the whole system must satisfy the positive definiteness conditions. By definition, we are in the framework of the linear model of coregionalization. This corresponds to the procedure programmed in "Interpolate / Estimation / Collocated Cokriging (Bundled)".
The cokriging step is almost similar to the one described in Paragraph "Kriging Two Variables in the Intrinsic Case"; the only difference is the neighborhood search. Within the neighborhood (centered on the target grid node), any information concerning the Z variable must be used (because Z is the primary variable and because the variable is sparse). Regarding the Y variable (which is assumed to be dense with regard to Z), several possibilities are offered:

l not using any Y information: obviously this does not offer any interest,
l using all the Y information contained within the neighborhood: this may lead to an intractable solution because of too much information,
l the initial solution (as mentioned in Xu, W., Tran, T. T., Srivastava, R. M., and Journel, A. G. 1992, Integrating seismic data in reservoir modeling: the collocated cokriging alternative. SPE paper 24742, 67th Annual Technical Conference and Exhibition, p. 833-842) consists in using the single value located at the target grid node location: hence the term collocated. Its contribution to the kriging estimate relies on the cross-correlation between the two variables at zero distance. But, in the Intrinsic case, the weights attached to the secondary variable must add up to zero and therefore, if only one data value is used, its single weight (or influence) will be zero.
l the solution used in Isatis is to use the Y variable at the target location and at all the locations where the Z variable is defined (Multi Collocated Cokriging). This neighborhood search has given the most reliable and stable results so far.

In general collocated cokriging is less precise than a full cokriging, which makes use of the auxiliary variable at all target points when estimating each of these. Exceptions are models where the cross-variogram (or covariance) between the two variables is proportional to the variogram (or covariance) of the auxiliary variable. In this case collocated cokriging coincides with full cokriging, but is also strictly equivalent to the simple method consisting in kriging the residual of the linear regression of the target variable on the auxiliary variable.
The user interested in the different approaches to Collocated Cokriging can refer to Rivoirard J., Which Models for Collocated Cokriging?, Math. Geology, Vol. 33, No. 2, 2001, pp. 117-131.
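The equivalence claimed for proportional cross-covariance models can be checked numerically. Below is a sketch with simple (co)kriging under an assumed Markov-type model Z = b·Y + R (R independent of Y, zero means, hypothetical covariances and data values), comparing multi-collocated cokriging with kriging of the regression residual:

```python
import numpy as np

# Markov-type model (assumed): Z = b*Y + R with R independent of Y,
# so C_ZY(h) = b * C_Y(h) is proportional to C_Y(h).
b = 0.7
C_Y = lambda h: np.exp(-np.abs(h) / 2.0)
C_R = lambda h: 0.5 * np.exp(-np.abs(h))
C_Z = lambda h: b * b * C_Y(h) + C_R(h)

x = np.array([0.0, 1.0, 2.0]); x0 = 0.5            # Z sample locations and target
z = np.array([1.2, 0.3, -0.4])                     # hypothetical Z values
y = np.array([0.5, -0.2, 0.1]); y0 = 0.3           # Y at the Z points and at x0

D = lambda u, v: np.abs(u[:, None] - v[None, :])
# multi-collocated simple cokriging: data = Z at x, Y at x and at the target
xy = np.concatenate([x, [x0]])
K = np.block([[C_Z(D(x, x)),      b * C_Y(D(x, xy))],
              [b * C_Y(D(xy, x)), C_Y(D(xy, xy))]])
k0 = np.concatenate([C_Z(np.abs(x - x0)), b * C_Y(np.abs(xy - x0))])
lam = np.linalg.solve(K, k0)
z_cok = lam @ np.concatenate([z, y, [y0]])

# simple kriging of the regression residual R = Z - b*Y, then adding back b*y0
r = z - b * y
lam_r = np.linalg.solve(C_R(D(x, x)), C_R(np.abs(x - x0)))
z_res = b * y0 + lam_r @ r
print(z_cok, z_res)                                 # identical estimates
```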
8 Gaussian Transformation: the Anamorphosis
In Isatis the gaussian anamorphosis is used in three different ways:
m for variable transformation into the gaussian space useful in the simulation processes (nor-
mal score transformation),
m for histogram modeling and a further use in non linear techniques (D.K., U.C., Global Sup-
port Correction, grade-tonnage curves, ...),
m for variogram transformation.
For information on the theory of Non Linear Geostatistics see Rivoirard J., Introduction to Disjunc-
tive Kriging and Non-linear Geostatistics (Oxford: Clarendon, 1994, 181p).
8.1 Modeling and Variable Transformation
Note - Isatis window:
- Statistics / Gaussian Anamorphosis Modeling
- Statistics / Normal Score Transformation
- Statistics / Raw <-> Gaussian Transformation
8.1.1 Gaussian Anamorphosis Modeling
The gaussian anamorphosis is a mathematical function \Phi which transforms a variable Y with a gaussian distribution into a new variable Z with any distribution: Z = \Phi(Y). For mathematical reasons this function can be conveniently written as a polynomial expansion:

\Phi(Y) = \sum_{i=0}^{\infty} \psi_i\, H_i(Y)   (eq. 8.1-1)

where the H_i(Y) are called the Hermite Polynomials. In practice, this polynomial expansion is stopped at a given order. Instead of being strictly increasing, the function consequently shows maxima and minima outside an interval of interest, that is for very low probability of Y, for instance outside [-2.5, 3.] in (fig. 8.1-1) (horizontal axis for the gaussian variable and vertical axis for the Raw Variable).

(fig. 8.1-1)
The modeling of the anamorphosis starts with the discrete version of the curve on the true data set (fig. 8.1-2). The only available parameters are 2 control points (A and B in (fig. 8.1-2)) which possibly allow the user to modify the behaviour of the model on the edges. But this opportunity is in practice important only when the number of samples is small. The other parameters available are the Authorized Interval on the Raw Variable (defined between a minimum value Zamin and a maximum value Zamax) and the order of the Hermite Polynomial Expansion (number of polynomials). The default values for the authorized interval are the minimum and the maximum of the data set. In this configuration, the 2 control points do not modify the experimental anamorphosis previously calculated.

(fig. 8.1-2)

After the definition of this discretized anamorphosis, the program calculates the coefficients \psi_i of the expansion in Hermite Polynomials. It draws the curve and calculates the Practical Interval of Definition and the Absolute Interval of Definition:

l the bounds of the Practical Interval of Definition are delimited by the two points [Ypmin, Zpmin] and [Ypmax, Zpmax] (fig. 8.1-3). The two calculated points are the points where the curve crosses the upper and lower authorized limits on raw data (Zamin and Zamax) or the points where the curve is no longer increasing with Y.
l the bounds of the Absolute Interval of Definition are delimited by the two points [Yamin, Zamin] and [Yamax, Zamax] (fig. 8.1-3). These two points are the intersections of the curve with the horizontal lines defined by the Authorized Interval on the Raw variable. The values generated using the anamorphosis function will never be outside this Absolute Interval of Definition.

(fig. 8.1-3) shows how the anamorphosis will be truncated later during use.
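The calculation of the coefficients \psi_i can be sketched numerically: with normalized Hermite polynomials, \psi_i = E[Z H_i(Y)] can be estimated by averaging over the (normal-scored) data. The sketch below uses a hypothetical lognormal variable and the plain convention H_1(y) = y (Isatis' own sign convention, H_1(y) = -y, only flips the sign of the odd-order coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(2000)       # gaussian scores (in practice: normal scores of the data)
z = np.exp(0.5 * y)                 # hypothetical raw variable, Z = phi(Y) lognormal

def hermite(y, order):
    """Normalized Hermite polynomials eta_0..eta_order (convention eta_1(y) = y)."""
    H = [np.ones_like(y), y]
    for n in range(1, order):
        H.append((y * H[n] - np.sqrt(n) * H[n - 1]) / np.sqrt(n + 1))
    return np.array(H)

order = 30
H = hermite(y, order)
psi = (z * H).mean(axis=1)          # psi_i = E[Z eta_i(Y)], estimated on the data

# the coefficients reproduce the variance of Z: Var(Z) ~ sum_{i>=1} psi_i^2
print(psi[0], (psi[1:] ** 2).sum(), z.var())
```

For this lognormal example the exact coefficients are known (psi_0 = exp(1/8)), which gives a quick sanity check on the fit.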
8.1.2 Gaussian Variable into Raw Variable
The back-transformation from the gaussian variable to the raw variable is easy to perform, as the anamorphosis has been built for that. Nevertheless, the anamorphosis is not strictly increasing for all the values of Y and the transformation is divided in 5 cases according to (fig. 8.1-3):

Condition on Y -> Result on Z:
l Y < Y_{amin}:  Z = Z_{amin}
l Y_{amin} <= Y <= Y_{pmin}:  Z = linear(Z_{amin}, Z_{pmin})
l Y_{pmin} < Y < Y_{pmax}:  Z = \sum_{i=0}^{NH-1} \psi_i H_i(Y)
l Y_{pmax} <= Y <= Y_{amax}:  Z = linear(Z_{pmax}, Z_{amax})
l Y > Y_{amax}:  Z = Z_{amax}

An optional bias correction formula exists for this back-transformation:

Z = \int \Phi\left( Y^* + \sqrt{1 - d_v^2}\; u \right) g(u)\, du   (eq. 8.1-1)

with d_v^2 the kriging dispersion variance. In the simple kriging case, d_v^2 = C(0) - \sigma_{sk}^2 and as a consequence \sqrt{1 - d_v^2} = \sigma_{sk}.

8.1.3 Raw Variable into Gaussian Variable
The anamorphosis function is defined as a function of Y: Z = \Phi(Y); to transform the raw variable into a gaussian one we have to invert this function: Y = \Phi^{-1}(Z). This inversion can be performed in Isatis in 3 different ways:

l Linear Interpolator Inversion

The inversion is just performed using a linear interpolation of the anamorphosis after discretization. This interpolation also takes into account the previous intervals of definition of the anamorphosis function:

Condition on Z -> Result on Y:
l Z < Z_{amin}:  Y = Y_{amin}
l Z_{amin} <= Z <= Z_{pmin}:  Y = linear(Y_{amin}, Y_{pmin})
l Z_{pmin} < Z < Z_{pmax}:  Y such that Z = \sum_{i=0}^{NH-1} \psi_i H_i(Y)
l Z_{pmax} <= Z <= Z_{amax}:  Y = linear(Y_{pmax}, Y_{amax})
l Z > Z_{amax}:  Y = Y_{amax}

In the middle case, local linear interpolation is used.

l Frequency Inversion

The program just sorts the raw values. A cumulative frequency is then calculated for each sample FC_i from the smallest value, adding the frequency of each sample:

FC_i = FC_{i-1} + W_i   (eq. 8.1-1)

The frequency W_i is given by the user (the Weight Variable) or calculated as W_i = 1/N. Note that two samples with the same value will get different cumulative frequencies. The program has finally to calculate the gaussian value:

Y_i = \left[ G^{-1}(FC_i) + G^{-1}(FC_{i-1}) \right] / 2   (eq. 8.1-2)

In this way, two equal raw data have different gaussian values. The resulting variable is "more" gaussian. This inversion method is generally recommended in Isatis.

l Empirical Inversion

The empirical inversion calculates for each raw value Z the attached empirical frequency and calculates the corresponding gaussian value Y. This time, two equal raw values will have the same gaussian transformed value.

Note - It is important to note that even if the gaussian transformed values can be calculated without any anamorphosis model, if the user performs this operation for a simulation (for example), he will have to back-transform these gaussian simulated values and this time the anamorphosis model will be necessary. So it is very important to check during this step that this back transformation will not be a problem, particularly from an interval of definition point of view. Indeed, one has to keep in mind the fact that a simulation generates gaussian values on an interval often larger than the interval of definition of the initial data: so the Practical Interval should be carefully checked if the model has to be used later, after a simulation process.
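The Frequency Inversion described above can be sketched in a few lines (using the standard-library inverse gaussian cdf; the clipping of the extreme frequencies is an implementation choice of this sketch, not an Isatis detail):

```python
import numpy as np
from statistics import NormalDist

def frequency_inversion(z, w=None):
    """Normal-score transform by frequency inversion: each sample gets its own
    cumulative frequency, so tied raw values receive distinct gaussian scores."""
    z = np.asarray(z, float)
    n = len(z)
    w = np.full(n, 1.0 / n) if w is None else np.asarray(w, float) / np.sum(w)
    order = np.argsort(z, kind="stable")           # sort the raw values
    fc = np.cumsum(w[order])                       # FC_i = FC_{i-1} + W_i
    fc = np.clip(fc, 1e-6, 1 - 1e-6)               # keep G^{-1} finite at the ends
    fc_prev = np.concatenate(([1e-6], fc[:-1]))
    G_inv = np.vectorize(NormalDist().inv_cdf)
    y_sorted = 0.5 * (G_inv(fc) + G_inv(fc_prev))  # Y_i = [G^{-1}(FC_i) + G^{-1}(FC_{i-1})] / 2
    y = np.empty(n)
    y[order] = y_sorted
    return y

z = [1.0, 1.0, 2.5, 0.3, 4.1]                      # note the tie at 1.0
y = frequency_inversion(z)
print(y)                                           # the two z = 1.0 samples get distinct scores
```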
8.2 Histogram Modeling and Block Support Correction
The advantage of the Hermite Polynomial Expansion for the anamorphosis modeling is that, in the context of the Gaussian Discrete Model, a correction of the coefficients \psi_i very easily provides an anamorphosis on a block support.
This Block Support Correction is available in the Statistics / Gaussian Anamorphosis Modeling window, "Calculate" button and "Block Correction" option.
For the points, we have:

Z = \Phi(Y) = \sum_{i=0}^{\infty} \psi_i\, H_i(Y)   (eq. 8.2-1)

The block support anamorphosis can be written:

Z_v = \Phi_r(Y_v) = \sum_{i=0}^{\infty} \psi_i\, r^i\, H_i(Y_v)   (eq. 8.2-2)

A simple support correction coefficient allows the user to get this new anamorphosis and at the same time a model of the histogram of the blocks. In fact this coefficient "r" is determined from the variance of the blocks:

\mathrm{var}\, Z_v = \sum_{i=1}^{\infty} \psi_i^2\, r^{2i}   (eq. 8.2-3)

We also have for the points:

\mathrm{var}\, Z = \sum_{i=1}^{\infty} \psi_i^2   (eq. 8.2-4)

The only problem in the calculation of the coefficient "r" is that we need the anamorphosis model and a variogram model. And unfortunately the variance of the points can be calculated with the anamorphosis (see above) or can be considered as the sill of the variogram (in a strict stationary case). In Isatis we calculate the block variance in the following way:

\mathrm{var}\, Z = \sum_{i=1}^{\infty} \psi_i^2   (eq. 8.2-5)

\mathrm{var}\, Z_v = \mathrm{var}\, Z - \bar\gamma(v,v)   (eq. 8.2-6)

where \bar\gamma(v,v) is calculated from the variogram model using a discretization of the block v. When the sill of the punctual variogram is different from var Z, the value of \bar\gamma(v,v) can be normalized by the ratio (var Z / variogram sill).
The anamorphosis of the blocks can be stored; the size of the block support is kept for further use (Uniform Conditioning, Disjunctive Kriging, etc.).
As in the case of the "punctual" anamorphosis, this block anamorphosis can be used to transform block values into gaussian ones and conversely. But the user can also get the grade-tonnage curves of the histogram attached to this block anamorphosis.

8.2.1 Grade-Tonnage Curves
The metal quantity is only available with strictly positive distributions.
When an anamorphosis \Phi(Y) has been modelled, the different quantities available for a given cut-off z_c are listed below. Obviously these quantities can be calculated for the punctual anamorphosis but also with a given block support; in this way, the user can have access to global recoverable reserves.

l The tonnage above the cutoff:

T(z_c) = 1 - G(y_c) \quad \text{with} \quad y_c = \Phi^{-1}(z_c)

l The metal quantity above the cutoff:

Q(z_c) = \int_{y_c}^{\infty} \Phi(y)\, g(y)\, dy

l The mean grade above the cutoff:

m(z_c) = \frac{Q(z_c)}{T(z_c)}
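The grade-tonnage quantities above can be sketched by direct numerical integration, including the block-support correction \psi_i -> \psi_i r^i. The coefficients and r below are hypothetical, and the sketch uses the plain Hermite convention H_1(y) = y (with Isatis' sign convention the odd coefficients change sign):

```python
import numpy as np

psi = np.array([1.0, 0.6, 0.25, 0.10])          # assumed point coefficients psi_0..psi_3
r = 0.8                                         # assumed support correction coefficient

def hermite(y, order):
    H = [np.ones_like(y), y]
    for n in range(1, order):
        H.append((y * H[n] - np.sqrt(n) * H[n - 1]) / np.sqrt(n + 1))
    return np.array(H)

def grade_tonnage(psi, zc):
    """T(zc), Q(zc), m(zc) by discretizing y (phi must be increasing here)."""
    y = np.linspace(-8.0, 8.0, 160001); dy = y[1] - y[0]
    g = np.exp(-0.5 * y * y) / np.sqrt(2.0 * np.pi)
    z = psi @ hermite(y, len(psi) - 1)          # phi(y) on the grid
    above = y >= y[np.searchsorted(z, zc)]      # y >= y_c = phi^{-1}(zc)
    T = np.sum(g[above]) * dy                   # tonnage above cutoff
    Q = np.sum(z[above] * g[above]) * dy        # metal quantity above cutoff
    return T, Q, Q / T

psi_v = psi * r ** np.arange(len(psi))          # block-support coefficients
T, Q, m = grade_tonnage(psi_v, zc=1.0)
print(T, Q, m)
```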
8.2.2 Grade-Tonnage Curves with information effect
When calculating global recoverable reserves, it can be interesting to take into account the fact that the selection of mining units will be performed on future kriging estimates, when more samples will be available.
This option can be activated in the Statistics / Gaussian Anamorphosis Modeling window, "Calculate" button and "Block Correction" option.
This time, the program will need two other parameters: the variance of the kriged blocks (var Z*_v) and the covariance between the real block grades and the kriged grades (cov(Z*_v, Z_v)).

Note - These two parameters can be calculated in "Interpolate / Estimation / (Co-)kriging", "Test Window" option "Print Complete Information", when kriging a block with the future configuration of the samples... These two values are called in the kriging output: Variance of Z* (Estimated Z) and Covariance between Z and Z*.

The formulae used in this case are:

\mathrm{Var}\, Z_v^* = \sum_{i=1}^{\infty} \psi_i^2\, s^{2i}   (eq. 8.2-1)

\mathrm{cov}(Z_v^*, Z_v) = \sum_{i=1}^{\infty} \psi_i^2\, r^i\, s^i\, \rho^i   (eq. 8.2-2)

This time, the different quantities for a given cutoff z_c are:

l The tonnage above the cutoff:

T(z_c) = 1 - G(y_c) \quad \text{with} \quad y_c = \Phi_s^{-1}(z_c)

l The metal quantity above the cutoff:

Q(z_c) = \int_{y_c}^{\infty} \Phi_{r\rho}(y)\, g(y)\, dy

l The mean grade above the cutoff:

m(z_c) = \frac{Q(z_c)}{T(z_c)}

Isatis gives the values of the two gaussian correlation coefficients "s" and "\rho" in the "Calculate" window for information.

Note - In the case where the future block estimates have no conditional bias, then \rho s = r, and the estimated recoverable reserves are the same as in the case of larger virtual blocks that would be perfectly known ("equivalent blocks", having a variance equal to the variance of the future estimates).
8.3 Variogram on Raw and Gaussian Variables
Isatis window: Statistics / Gaussian to Raw Variogram.
In the same way that the Hermite Polynomial Expansion can be used to calculate easily the variance of the raw variable from the polynomial coefficients, a simple relationship links the covariance of the gaussian transformed variable and the covariance of the raw variable:

C(h) = \sum_{i=1}^{\infty} \psi_i^2\, \left[ \rho(h) \right]^i   (eq. 8.3-1)

where:

l \rho(h) is the covariance of the gaussian variable,
l C(h) is the covariance of the raw variable.

This relationship is valid if the pair of variables (Y(x), Y(x+h)) can be considered as bivariate normal. From the relationship on covariances, we can derive the relationship on variograms.
The use of that relationship is triple:

l One can calculate the covariances (or variograms) on gaussian transformed values and raw values and check if the relationship holds, in order to confirm the binormality of the (Y(x), Y(x+h)) pairs.
l One can calculate the gaussian variogram on the gaussian transformed values and deduce the raw variogram. This is interesting because the variogram of the gaussian variable is often more clearly structured and easier to fit than the raw variogram derived from the raw values.
l One can calculate the gaussian variogram from the raw variogram. That transformation is not as immediate as the previous one, as at each lag the relationship needs to be inverted (the secant method can be used for instance). This use of the relationship between gaussian and raw covariance is compulsory to achieve disjunctive kriging on gaussian transformed values after change of support. It means that the gaussian covariance for the block support v is calculated from an analogous relationship:

C_v(h) = \sum_{i=1}^{\infty} \psi_i^2\, r^{2i}\, \left[ \rho_v(h) \right]^i   (eq. 8.3-2)

where r is the change of support coefficient in the block anamorphosis.
In each case, the relationship has to be applied using a discretization of the space (namely h values).
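Both directions of the conversion can be sketched in a few lines: the gaussian-to-raw direction is a direct evaluation of eq. 8.3-1, while the raw-to-gaussian direction inverts it numerically at each lag (bisection is used here for simplicity; the text mentions the secant method). The coefficients are hypothetical:

```python
import numpy as np

# Hypothetical squared Hermite coefficients psi_i^2 for i = 1..3 (psi_0 drops
# out of covariances), used to convert between gaussian and raw structures.
psi2 = np.array([0.60, 0.25, 0.10]) ** 2

def raw_cov(rho):
    """C(h) = sum_{i>=1} psi_i^2 rho(h)^i   (eq. 8.3-1)."""
    i = np.arange(1, len(psi2) + 1)
    return float(np.sum(psi2 * rho ** i))

def gaussian_cov(c, tol=1e-12):
    """Invert eq. 8.3-1 for rho; raw_cov is increasing on [0, 1], so bisection converges."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if raw_cov(mid) < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = 0.7
c = raw_cov(rho)                 # gaussian -> raw
rho_back = gaussian_cov(c)       # raw -> gaussian (round trip)
print(c, rho_back)
```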
9 Non Linear Estimation
Background information about the following non linear estimation techniques is presented hereaf-
ter:
1. Indicator Kriging,
2. Probability from Conditional Expectation,
3. Disjunctive Kriging,
4. Uniform Conditioning,
5. Service Variables,
6. Confidence Intervals.
For a general presentation of non linear geostatistics, the reader should refer to Rivoirard J., Intro-
duction to Disjunctive Kriging and Non-linear Geostatistics (Oxford: Clarendon, 1994, 181p).
9.1 Indicator Kriging
Isatis window:
- Interpolate / Estimation / Bundled Indicator Kriging
- Statistics / Indicator Pre-Processing
- Statistics / Indicator Post-Processing.
We must first define the Indicator function with one cutoff value z_c, applied on the variable Z:

\mathrm{Ind}(Z \ge z_c) = 1_{\{Z \ge z_c\}} = \begin{cases} 1 & \text{if } Z \ge z_c \\ 0 & \text{if } Z < z_c \end{cases}   (eq. 9.1-1)

No specific problem occurs when processing indicators instead of real variable(s) through the kriging algorithm. Nevertheless, in the case of multi-indicators, we must provide a multivariate model, which is not always easy to establish.
Instead, we can imagine using a generic model (obtained say for the indicator of the median value) and tune its sill for all the indicators of interest; moreover, if we are only interested in the estimation (rather than in its variance) we do not even have to bother about the tuning. Indicator variables being necessarily correlated, indicator kriging is only an approximation of indicator cokriging (which takes into account the other indicators when estimating one particular indicator), except when all simple and cross structures are proportional (autokrigeability).
Hence, the only work consists in finding the kriging weights and applying them to each set of indicators to obtain an estimated indicator. This approach is used in the window: "Interpolate / Estimation / Bundled Indicator Kriging".
When the user prefers to fit a multivariate model, two options are given in Isatis. In any case, the user must first use the window "Statistics / Indicator Pre Processing" to create in the data file the indicators and also to create the variables in the output grid file for kriging. When the indicators have been created the user can fit one multivariate model using the standard multivariate approach ("Statistics / Exploratory Data Analysis" and "Statistics / Variogram Fitting") or fit each indicator separately and use "Statistics / Univariate to Multivariate Variogram" to get the multivariate model. The standard "Interpolate / Estimation / (Co-)Kriging" window will then be used to get the kriged indicators.
In all cases, the final problem comes in the interpretation of these results: in order to consider the kriged indicators as conditional cumulative distribution functions (ccdf), we have to ensure that the following constraints are fulfilled:

l Definition:

\left[ \mathrm{Ind}(Z \ge z_c) \right]^* \in [0, 1]   (eq. 9.1-2)

l Inequality:

\left[ \mathrm{Ind}(Z \ge z_1) \right]^* \ge \left[ \mathrm{Ind}(Z \ge z_2) \right]^* \quad \text{if } z_1 < z_2   (eq. 9.1-3)

The results may fail to verify these constraints (for example, because of negative weights) and therefore need to be corrected. The correction used in Isatis in "Statistics / Indicator Post Processing" has been exhaustively described in the GSLIB User's Guide (by Deutsch and Journel, p. 77-81): it consists of the average of an upward and a downward correction of the cdf.
The kriged indicators are primarily used to generate conditional probabilities, but the user may wish to transform these results into the probability of exceeding fixed cutoffs, or the average value above or below these cutoffs, accounting for a possible change of support. These transformations of the indicator (co-)kriging results have also been inspired by the GSLIB methods and we strongly encourage the user to refer to the paragraph (v.1.6) "Going Beyond a Discrete cdf" to understand the set of "recipes" and the corresponding parameters.
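The upward/downward averaging correction can be sketched as follows. The sketch works on the cdf values P[Z < z_c] (the document's >= indicators are one minus these, so eq. 9.1-3 becomes a non-decreasing constraint here); the input values are hypothetical:

```python
import numpy as np

def correct_ccdf(p):
    """Order-relations correction of kriged indicator values at increasing
    cutoffs, expressed as a cdf P[Z < z_c]: average of an upward and a
    downward correction, following the GSLIB recipe cited in the text."""
    p = np.clip(np.asarray(p, float), 0.0, 1.0)   # enforce the [0, 1] constraint
    up = p.copy()
    for j in range(1, len(up)):                   # upward pass: running maximum
        up[j] = max(up[j], up[j - 1])
    dn = p.copy()
    for j in range(len(dn) - 2, -1, -1):          # downward pass: running minimum
        dn[j] = min(dn[j], dn[j + 1])
    return 0.5 * (up + dn)

# kriged P[Z < z_c] at five increasing cutoffs, with order-relation violations
p = [0.10, 0.35, 0.30, 0.80, 1.02]
out = correct_ccdf(p)
print(out)            # non-decreasing and within [0, 1]
```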
9.2 Probability from Conditional Expectation
Isatis window: Statistics / Probability from Conditional Expectation.
We designate by Z the random variable, and wish to estimate the probability for this variable to exceed a given threshold s.
We consider also Y, the gaussian transform of Z by the anamorphosis function \Phi: Z = \Phi(Y). The reader should first have a look at the chapter about the Gaussian Anamorphosis for further explanation.
Z can be expressed as follows:

Z(x) = \Phi\left[ Y^*(x) + \sigma(x)\, W(x) \right]   (eq. 9.2-1)

where:
- Y^* and \sigma respectively stand for the simple kriging of Y based on the available data and its associated kriging standard deviation,
- W(x) is a normalized gaussian random function, spatially independent from Y^*.
The probability for Z to exceed a given threshold s is directly derived from the preceding equation:

P[Z(x) > s] = P\left[ Y^*(x) + \sigma(x) W(x) > \Phi^{-1}(s) \right] = P\left[ W(x) > \frac{\Phi^{-1}(s) - Y^*(x)}{\sigma(x)} \right] = 1 - G\left( \frac{\Phi^{-1}(s) - Y^*(x)}{\sigma(x)} \right)   (eq. 9.2-2)

where G is the c.d.f. of the gaussian distribution.

Note - At a conditioning point, the probability is equal to 0 or 1 depending upon whether Y^* is smaller or larger than \Phi^{-1}(s). Conversely, far from any conditioning data, the probability converges towards the a priori probability 1 - G(\Phi^{-1}(s)).
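The probability formula is straightforward to apply once the SK results are available. Below is a sketch with a hypothetical anamorphosis (lognormal, so its inverse is exact) and hypothetical SK outputs on a few target nodes:

```python
import numpy as np
from statistics import NormalDist

# Hypothetical ingredients: an anamorphosis phi(y) = exp(y / 2) (so phi_inv is
# exact), SK estimates y_star and SK standard deviations sigma at target nodes.
phi_inv = lambda z: 2.0 * np.log(z)
y_star = np.array([-0.4, 0.0, 1.2])               # simple kriging of Y
sigma = np.array([0.9, 0.6, 0.2])                 # SK standard deviation

s = 1.5                                            # raw-scale threshold
G = np.vectorize(NormalDist().cdf)
prob = 1.0 - G((phi_inv(s) - y_star) / sigma)      # eq. 9.2-2

# far from any data (sigma -> 1, y_star -> 0) the a priori probability is recovered
prior = 1.0 - NormalDist().cdf(phi_inv(s))
print(prob, prior)
```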
9.3 Disjunctive Kriging
Isatis window: Interpolate / Estimation / Disjunctive Kriging.
The reader should first have a look at the chapter about the Gaussian Anamorphosis for further
explanation. See also Rivoirard (1994) for the theory.
In Isatis, disjunctive kriging is used in the framework of the discrete gaussian model to estimate local
recoverable reserves. The aim of this process is the calculation, inside each panel, of the tonnage and
the metal content of the blocks (smaller than the panel) whose mean grade exceeds a given cut-off.
The discrete gaussian model considers the samples as randomly located within the small blocks, which
form a partition of the panels. In practice, we center the samples in the middle of the blocks and
the program calculates the Hermite polynomials of the centered samples:

H_n(Y_α), with Y_α = φ_v^(−1)(Z_α) (eq. 9.3-1)

where φ_v is the block anamorphosis. These polynomials will be used for the kriging step. For each panel we krige the polynomials:

[H_n(Y)]^DK = Σ_α λ_α H_n(Y_α) (eq. 9.3-2)

with the kriging system:

Σ_β λ_β r^(2n) ρ^n(v_α, v_β) = (1/N) Σ_{i=1}^{N} r^n ρ^n(v_α, v_i), for all α (eq. 9.3-3)

where ρ is the block gaussian covariance model, r the change of support coefficient and N the number
of blocks v_i discretizing the panel. Using these kriged polynomials, we can easily get the recoverable reserves:

l Tonnage

T(z_c) = 1 − G(y_c) + Σ_{n=1}^{K} (1/√n) H_{n−1}(y_c) g(y_c) [H_n]^DK (eq. 9.3-4)

l Metal Quantity

Q(z_c) = Σ_{i=0}^{K} Σ_{j=0}^{K} φ_j ( ∫_{y_c}^{+∞} H_i(y) H_j(y) g(y) dy ) [H_i]^DK (eq. 9.3-5)

where the φ_j are the coefficients of the block anamorphosis, and with:

y_c = φ_v^(−1)(z_c) (eq. 9.3-6)

For these calculations, the program needs an anamorphosis modeled on the block support and a vari-
ogram model. This variogram model is obtained in several steps:
l modeling the raw punctual variogram,
l regularization of this variogram on the block support,
l transformation of this regularized variogram into the corresponding gaussian variogram using the
block anamorphosis,
l modeling this last discretized variogram.
It is important to notice that the kriging step is performed without universality conditions. The
weights of the polynomials decrease quickly with the order: in practice the number of kriged
polynomials does not need to be large, 6 or 7 polynomials are generally enough. Conversely, the
absence of universality conditions can lead to strange results in under-sampled zones (attraction
towards the mean).
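The Hermite machinery above can be sketched numerically. The block below is a minimal illustration of ours (not the Isatis implementation; the function names are hypothetical): it builds normalized Hermite polynomials by the classical recurrence He_{n+1}(y) = y·He_n(y) − n·He_{n−1}(y) and evaluates a truncated tonnage expansion of the form of (eq. 9.3-4) from a vector of kriged polynomial values.

```python
from math import erf, exp, pi, sqrt

def G(y):
    """Standard gaussian cdf."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def g(y):
    """Standard gaussian density."""
    return exp(-0.5 * y * y) / sqrt(2.0 * pi)

def hermite(y, K):
    """Normalized Hermite polynomials eta_0..eta_K at y, where
    eta_n = He_n / sqrt(n!) (probabilists' convention)."""
    he = [1.0, y]
    for n in range(1, K):
        he.append(y * he[n] - n * he[n - 1])
    fact, eta = 1.0, []
    for n, v in enumerate(he[:K + 1]):
        if n > 1:
            fact *= n
        eta.append(v / sqrt(fact))
    return eta

def dk_tonnage(y_c, h_dk):
    """Truncated tonnage expansion: h_dk[n-1] is the kriged value of
    the n-th normalized Hermite polynomial (n >= 1)."""
    t = 1.0 - G(y_c)
    for n, h in enumerate(h_dk, start=1):
        t += hermite(y_c, n)[n - 1] * g(y_c) * h / sqrt(n)
    return t
```

With all kriged polynomials equal to zero (an uninformative panel), the expansion collapses to the marginal proportion 1 − G(y_c), which gives a quick sanity check of the implementation.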
9.4 Uniform Conditioning
Isatis window: Interpolate / Estimation / Uniform Conditioning.
Like Disjunctive Kriging, the aim of Uniform Conditioning is to estimate the tonnage and the metal
content of the blocks inside a panel conditionally on the sole panel grade, which can be estimated
assuming only local stationarity (e.g. by ordinary kriging).
This time the basic idea is to consider the grade of the panel as known and to calculate directly,
using the anamorphosis function, the tonnage and the metal of the blocks inside the panel conditionally
on the gaussian value of the panel.
For the calculation we need the estimated grade of the panel Z*_V, the anamorphosis φ_r of the
blocks and the anamorphosis φ_S of the panels.
In Isatis the calculation of φ_S is performed in the "Statistics / Gaussian Anamorphosis Modeling" win-
dow, "Calculate" button and "Kriged Panel Correction" option.
We have:

Z_v = φ_r(Y_v) = Σ_{i=0}^{N} φ_i r^i H_i(Y_v) (eq. 9.4-1)

Z*_V = φ_S(Y*_V) = Σ_{i=0}^{N} φ_i S^i H_i(Y*_V) (eq. 9.4-2)

We get:

Y*_V = φ_S^(−1)(Z*_V) (eq. 9.4-3)

y_c = φ_r^(−1)(z_c) (eq. 9.4-4)

and:

l Tonnage

T(z_c) = 1 − G( (y_c − (S/r) Y*_V) / √(1 − (S/r)²) )

l Metal Quantity

Q(z_c) = Σ_{i=0}^{N} (S/r)^i H_i(Y*_V) Σ_{j=0}^{N} φ_j r^j ∫_{y_c}^{+∞} H_j(y) H_i(y) g(y) dy

Like for the global recoverable reserves, the calculations can be performed under an information
effect assumption. When fitting the anamorphosis function with the support effect option, the
user will toggle on the information effect button. In the specific case of uniform conditioning,
the user has to fit the anamorphosis function of the blocks and the anamorphosis function of
the kriged panels. In the first case, the fit has to be performed with the Information Effect
option switched ON. For the small units, the user has to enter the variance of the blocks kriged
with the ultimate information as well as the covariance between the true blocks and the kriged
blocks; for the panels, only the variance of the panels kriged with the currently available
information needs to be entered.
We have for the blocks:

var Z*_v = Σ_{i≥1} φ_i² s^(2i) (eq. 9.4-5)

cov(Z*_v, Z_v) = Σ_{i≥1} φ_i² r^i s^i ρ^i (eq. 9.4-6)

where s is the change of support coefficient of the kriged blocks and ρ = cor(Y_v, Y*_v). We have
also for the panels:

var Z*_V = Σ_{i≥1} φ_i² S^(2i) (eq. 9.4-7)

cov(Z*_v, Z*_V) = Σ_{i≥1} φ_i² s^i S^i t^i (eq. 9.4-8)

where t = cor(Y*_v, Y*_V). We make the assumption that the gaussian variables Y_v and Y*_V are
independent conditionally on Y*_v, and we can write:

cor(Y_v, Y*_V) = cor(Y_v, Y*_v) · cor(Y*_v, Y*_V) (eq. 9.4-9)

It has to be noted that Z*_V is assumed to be without conditional bias. In this case:
E[Z*_v | Z*_V] = E[Z_V | Z*_V] = Z*_V (eq. 9.4-10)

This leads to the following relationship:

cov(Z_v − Z*_V, Z*_V) = 0 (eq. 9.4-11)

and so:

cov(Z_v, Z*_V) = var(Z*_V) (eq. 9.4-12)

So:

cor(Y_v, Y*_V) = S / r (eq. 9.4-13)

and then we get:

ρ t = S / r (eq. 9.4-14)

t = S / (ρ r) (eq. 9.4-15)

We have also:

Y*_V = φ_S^(−1)(Z*_V) (eq. 9.4-16)

y_c = φ_s^(−1)(z_c) (eq. 9.4-17)

and:

l Tonnage

T(z_c) = 1 − G( (y_c − t Y*_V) / √(1 − t²) )

l Metal Quantity

Q(z_c) = Σ_{i=0}^{N} t^i H_i(Y*_V) Σ_{j=0}^{N} φ_j r^j ρ^j ∫_{y_c}^{+∞} H_j(y) H_i(y) g(y) dy
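The tonnage formula of uniform conditioning lends itself to a simple numerical sketch (ours, with hypothetical function names, not Isatis code): the proportion of blocks above cut-off inside a panel is a gaussian probability in the panel's gaussian-equivalent grade. A useful consistency check is that averaging this tonnage over a standard normal panel value must return the marginal proportion 1 − G(y_c).

```python
from math import erf, exp, pi, sqrt

def G(y):
    """Standard gaussian cdf."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def uc_tonnage(y_c, y_panel, t):
    """Proportion of blocks above the gaussian cut-off y_c inside a
    panel of gaussian-equivalent grade y_panel, t = cor(Y_v, Y*_V)."""
    return 1.0 - G((y_c - t * y_panel) / sqrt(1.0 - t * t))

def marginal(y_c, t, n=4000, lo=-8.0, hi=8.0):
    """Average the tonnage over a standard normal panel value
    (midpoint rule); must recover 1 - G(y_c)."""
    step = (hi - lo) / n
    acc = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * step
        acc += uc_tonnage(y_c, y, t) * exp(-0.5 * y * y) / sqrt(2.0 * pi) * step
    return acc
```

The check works for any t in (0, 1) because the block gaussian value decomposes as t·Y*_V plus an independent residual, so the marginal law of the block value stays standard normal.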
9.5 Service Variables
Isatis window: Tools / Service Variables.
This method can be used to estimate a variable after cut-off with a change of support.
It can be regarded as estimating, by ordinary kriging, additive variables derived from the transformed
data, taking the change of support into account.
A common use of this method consists in estimating the recovered grade of a block v above a
given cut-off z. The grade will be deduced from the metal quantity and the ore tonnage at the cut-off
considered.
At each data point the probable block ore and metal above cut-off will be calculated using the gaus-
sian change of support model.
l Probable block ore:

E[ 1_{Z(v_x) ≥ z} | Z(x) ] = E[ 1_{Y(v_x) ≥ y_v} | Y(x) ] = 1 − G( (y_v − r Y(x)) / √(1 − r²) ) (eq. 9.5-1)

where:
m Y is the gaussian transform of Z by the anamorphosis function φ:

Z = φ(Y) (eq. 9.5-2)

m r is the coefficient of change of support for the block v
m y_v is the gaussian equivalent of the cut-off z:

z = φ_v(y_v) = Σ_n ψ_n r^n H_n(y_v) (eq. 9.5-3)

(ψ_n: normalized coefficients of the Hermite polynomials)
m v_x denotes the block containing the point x
m G is the c.d.f. of the gaussian distribution
l Probable block metal:

E[ Z(v_x) 1_{Z(v_x) ≥ z} | Z(x) ] = E[ φ_v(Y(v_x)) 1_{Y(v_x) ≥ y_v} | Y(x) ] (eq. 9.5-4)

Once the block ore and metal above cut-off have been calculated at the data locations, the same quanti-
ties can be estimated by kriging at the target points. It only requires fitting variogram models on
both variables.
The advantage of this method, besides its simplicity, is that it does not require strict stationarity. By
using ordinary kriging the attraction towards the mean that occurs in simple kriging is avoided.
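The two service variables can be sketched numerically (a minimal illustration of ours, not Isatis code): the probable block ore of (eq. 9.5-1) is a closed-form gaussian probability, and the probable block metal of (eq. 9.5-4) can be approximated by integrating the block anamorphosis, passed as any callable, over the conditional distribution of Y(v_x) given Y(x).

```python
from math import erf, exp, pi, sqrt

def G(y):
    """Standard gaussian cdf."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def probable_ore(y_v, r, y_data):
    """P[Y(v_x) >= y_v | Y(x) = y_data], with the gaussian change of
    support model Y(v_x) = r*Y(x) + sqrt(1-r^2)*W."""
    return 1.0 - G((y_v - r * y_data) / sqrt(1.0 - r * r))

def probable_metal(phi_v, y_v, r, y_data, n=4000):
    """E[phi_v(Y(v_x)) 1{Y(v_x) >= y_v} | Y(x) = y_data] by midpoint
    quadrature on the independent residual W."""
    s = sqrt(1.0 - r * r)
    step = 16.0 / n
    acc = 0.0
    for k in range(n):
        w = -8.0 + (k + 0.5) * step
        y_block = r * y_data + s * w
        if y_block >= y_v:
            acc += phi_v(y_block) * exp(-0.5 * w * w) / sqrt(2.0 * pi) * step
    return acc
```

Setting phi_v to the constant 1 reduces the metal to the ore, which provides a direct cross-check of the quadrature.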
9.6 Confidence Intervals
Isatis window: Interpolate / Estimation / Confidence Intervals.
The idea is to derive the confidence interval from a block kriging using the discrete gaussian model.
In the gaussian space any characteristic can be easily calculated once the mean and the variance are
known.
We start from the gaussian transform Y_v, centered in the blocks to be estimated, and from the block
gaussian covariance ρ(h) previously modeled.
The kriging system to estimate Y_V can be written as:

Σ_β λ_β ρ(v_α, v_β) + μ = ρ̄(v_α, V), for all α (eq. 9.6-1)

with:

Y_V^K = Σ_α λ_α Y_α (eq. 9.6-2)

and:

σ_K² = 1 − Σ_α λ_α ρ̄(v_α, V) − μ (eq. 9.6-3)

Knowing these two values we can derive any confidence interval on the kriged gaussian values
from the gaussian distribution. For instance, for a 95% confidence level, we have:

Pr{ Y_V^K − 2σ_K ≤ Y_V ≤ Y_V^K + 2σ_K } = 95% (eq. 9.6-4)

which is equivalent, for the raw values, by using the anamorphosis φ_r, to:

Pr{ φ_r(Y_V^K − 2σ_K) ≤ Z_V ≤ φ_r(Y_V^K + 2σ_K) } = 95% (eq. 9.6-5)

This gives the bounds of the confidence interval:

Z_min = φ_r(Y_V^K − 2σ_K) (eq. 9.6-6)

Z_max = φ_r(Y_V^K + 2σ_K) (eq. 9.6-7)
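Back-transforming the gaussian interval through the anamorphosis generally gives an asymmetric interval on the raw grades. The sketch below (ours; the lognormal anamorphosis is a toy stand-in, since in practice φ comes from the fitted Hermite model) illustrates (eq. 9.6-6) and (eq. 9.6-7):

```python
from math import exp

def lognormal_phi(y, sigma=0.5):
    """Toy anamorphosis: lognormal with unit mean (illustration only;
    a real phi would come from the fitted Hermite expansion)."""
    return exp(sigma * y - 0.5 * sigma * sigma)

def confidence_bounds(y_k, sigma_k, phi=lognormal_phi):
    """95% interval on the raw grade: back-transform Y_K +/- 2*sigma_K."""
    return phi(y_k - 2.0 * sigma_k), phi(y_k + 2.0 * sigma_k)
```

For this lognormal toy model the upper half of the interval is wider than the lower half, which is the expected behavior for a positively skewed grade distribution.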
Simulations
10 Turning Bands Simulations
This page constitutes an add-on to the User's Guide for:
m Interpolate / Non-Conditional Simulations / Random Function / Turning Bands
m Interpolate / Conditional Simulations / Turning Bands
For the theoretical background, the user should refer to Matheron G., The Intrinsic Random Functions
and Their Application (In Adv. Appl. Prob., Vol. 5, pp. 439-468, 1973).
10.1 Principle
The Turning Bands method is a stereological device designed to reduce a multidimensional simula-
tion to unidimensional ones: if C_3 stands for the (polar) covariance to be produced in ℝ³, it is suf-
ficient to simulate a stationary unidimensional random function X with covariance C_1 such that:

C_1(h) = ∂/∂h [ h C_3(h) ] (eq. 10.1-1)

X is then spread throughout the space:

Y(x) = X(⟨x, θ⟩) (eq. 10.1-2)

where θ is a unit vector with a uniform direction.
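The relation of (eq. 10.1-1) can be checked on a concrete model. For the spherical covariance C_3(r) = 1 − 1.5r + 0.5r³ (r ≤ 1, unit range and sill), differentiating r·C_3(r) gives C_1(h) = 1 − 3h + 2h³. A small numerical sketch of ours (hypothetical function names):

```python
def c3_spherical(r):
    """Spherical covariance, unit range and unit sill."""
    return 1.0 - 1.5 * r + 0.5 * r ** 3 if r < 1.0 else 0.0

def c1_from_c3(h, eps=1e-6):
    """eq. 10.1-1: C1(h) = d/dh [h * C3(h)], by central difference."""
    f = lambda r: r * c3_spherical(r)
    return (f(h + eps) - f(h - eps)) / (2.0 * eps)

def c1_spherical_exact(h):
    """Closed form for the spherical model: 1 - 3h + 2h^3 (h <= 1)."""
    return 1.0 - 3.0 * h + 2.0 * h ** 3 if h < 1.0 else 0.0
```

Note that C_1 takes negative values (e.g. C_1(0.5) = −0.25): the unidimensional covariance that regenerates a positive 3D model need not be positive everywhere.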
10.2 Non Conditional Simulation
A random function is said to be multigaussian if any linear combination of its variables follows a
gaussian distribution. In the stationary case, a Multigaussian Random Function has its spatial distri-
bution totally characterized by its mean value and its covariance.
The easiest way to build a Multigaussian Random Function is to use a parallel procedure. Let Y_1,
..., Y_n stand for a sequence of standard independent and identically distributed random functions
with covariance C. The spatial distribution of the random function:

Y^(n) = (Y_1 + … + Y_n) / √n (eq. 10.2-1)

tends to become multigaussian with covariance C as n becomes very large, according to the Cen-
tral Limit Theorem.
Several algorithms are available to simulate the elementary random functions Y_i with a given cova-
riance C. The user will find much more information in Lantuéjoul C., Geostatistical Simulation
(Springer Berlin, 2002, 256p).
The choice of the method used to generate the random function X is theoretically free. However, in
Isatis, a given method is preferred in order to optimize the generation of a given specific model
of covariance. The selection of the method is automatic.
l Spectral Method
The Spectral Method generates a distribution whose covariance is expressed as the Fou-
rier transform of a positive distribution. This method is rather general and is implemented in Isa-
tis where the covariance is regular at the origin. This is the case for the Gaussian, Cardinal Sine,
J-Bessel or Cauchy models of covariance.
Any covariance is a positive definite function which can be written as the Fourier transform of a
positive spectral measure:

C(h) = ∫ cos(2π⟨u, h⟩) dχ(u) (eq. 10.2-2)

where χ is a probability distribution.
The random function is obtained as:

Y(x) = √2 cos(2π⟨Ω, x⟩ + Φ) (eq. 10.2-3)

where:
m Ω is a random vector with distribution χ
m Φ is a uniform variable between 0 and 2π
l Dilution Method
The Dilution Method generates a numerical function F and partitions ℝ into intervals with con-
stant length. Each interval is randomly valuated with F or −F. This method is suitable to simu-
late covariances with bounded ranges. In Isatis, it is used to generate the Spherical or Cubic models
of covariance.
When the covariance corresponds to a geometrical covariogram, i.e.:

C(h) = θ ∫ g(u) g(u + h) du (eq. 10.2-4)

the random function is obtained as the dilution of primary functions:

Y(x) = Σ_{p ∈ P} ε_p g(x − p) (eq. 10.2-5)

where:
m P is a Poisson process of intensity θ,
m (ε_p) is a family of standard random variables,
m g is a numerical function.
l Migration Method
The Migration Method generates a Poisson process that partitions ℝ into independent exponen-
tial intervals which are valuated according to the model of covariance to be simulated. In Isa-
tis it is used for:
m the Exponential model: each interval is split into two halves which are alternately valuated
with +1 and −1;
m the Stable and Gamma models: the intervals are valuated according to an exponential law;
m the Generalized Covariance models: the intervals are valuated with the sum of gaussian pro-
cesses.
The simulation with covariance C_3 is then obtained by summing the projections, on a given number
of lines, of unidimensional simulations with covariance C_1. Each line is in fact called a "turning band",
and the problem of the optimal number of turning bands remains, although Ch. Lantuéjoul provides some
hints in Lantuéjoul C., Non Conditional Simulation of Stationary Isotropic Multigaussian Random
Functions (In M. Armstrong & P.A. Dowd eds., Geostatistical Simulations, Kluwer Dordrecht,
1994, pp. 147-167).
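The spectral method can be sketched in one dimension (our illustration, not Isatis code): taking the target covariance C(h) = exp(−π h²), whose spectral measure is the self-dual gaussian density, Ω is drawn normal with variance 1/(2π) and Y(x) = √2 cos(2πΩx + Φ). Averaging Y(0)·Y(h) over many independent realizations recovers the covariance.

```python
import random
from math import cos, exp, pi, sqrt

def simulate_spectral(xs, rng):
    """One realization of Y(x) = sqrt(2) cos(2*pi*Omega*x + Phi) in 1D;
    Omega ~ N(0, 1/(2*pi)) yields the covariance exp(-pi*h^2)."""
    omega = rng.gauss(0.0, sqrt(1.0 / (2.0 * pi)))
    phi = rng.uniform(0.0, 2.0 * pi)
    return [sqrt(2.0) * cos(2.0 * pi * omega * x + phi) for x in xs]

def empirical_cov(h, n, seed=0):
    """Monte-Carlo average of Y(0)*Y(h) over n realizations."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        y0, yh = simulate_spectral([0.0, h], rng)
        acc += y0 * yh
    return acc / n
```

One single cosine is of course far from gaussian; per (eq. 10.2-1), the multigaussian behavior is obtained by averaging many such independent realizations.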
10.3 Conditioning
If we consider the kriging estimation of Z(x) from the values z(x_α) of the variable at the data
points, we can write at each point the following decomposition:

Z(x) = Z(x)^K + [Z(x) − Z(x)^K] (eq. 10.3-1)

In the gaussian framework, the residual [Z(x) − Z(x)^K] is not correlated with any data value. It is
therefore independent from any linear combination of these data values, such as the kriging esti-
mate. Finally the estimate and the residual are two independent random functions, not necessarily
stationary: for example, at a data point the residual is zero.
If we consider a non-conditional simulation Z_S(x) of the same random function, known over the
whole domain of interest, and its kriging estimation based on the values of this simulation at the data
points, we can write similarly:

Z_S(x) = Z_S(x)^K + [Z_S(x) − Z_S(x)^K] (eq. 10.3-2)

where estimate and residual are independent, with the same structure.
By combining the simulated residual with the initial kriging estimate, we obtain:

Z_SC(x) = Z(x)^K + [Z_S(x) − Z_S(x)^K] (eq. 10.3-3)

which is another random function, conditional this time as it honors the data values at the data
points.
Note - This conditioning method is not concerned with how the non-conditional simulation Z_S(x)
has been obtained.
As non-correlation is equivalent to independence in the gaussian context, a simulation of a gaussian
random function with nested structures can be obtained by adding independent simulations of the
elementary structures.
For the same reason, combining linearly independent gaussian random functions with elementary
structures gives, under a linear model of coregionalization, a multivariate simulation of different
variables.
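The conditioning step of (eq. 10.3-3) can be sketched in 1D (our minimal illustration, with simple kriging standing in for the kriging variant actually configured in Isatis; all names are hypothetical). Because kriging is an exact interpolator, the conditioned field honors the data values at the data points, which the assertions verify.

```python
from math import exp

def cov(a, b):
    """Gaussian covariance, unit sill and scale."""
    return exp(-((a - b) ** 2))

def solve(A, b):
    """Gauss-Jordan elimination for tiny linear systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def sk(xs_data, z_data, x):
    """Simple kriging estimate at x from the data points."""
    A = [[cov(a, b) for b in xs_data] for a in xs_data]
    lam = solve(A, [cov(a, x) for a in xs_data])
    return sum(l * z for l, z in zip(lam, z_data))

def condition(xs_data, z_data, xs, z_sim):
    """Z_sc(x) = Z(x)^K + [Z_s(x) - Z_s(x)^K] at every target x."""
    zs_at_data = [z_sim[xs.index(xd)] for xd in xs_data]
    out = []
    for x, zs in zip(xs, z_sim):
        zk = sk(xs_data, z_data, x)
        zsk = sk(xs_data, zs_at_data, x)
        out.append(zk + (zs - zsk))
    return out
```

Away from the data points the conditioned field keeps the fluctuations of the non-conditional simulation, while at the data points the two kriging terms cancel the simulated value exactly.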
11 Truncated Gaussian Simulations
This page constitutes an add-on to the User's Guide for:
m Interpolate / Non-Conditional Simulations / Random Function / Truncated Gaussian
m Interpolate / Conditional Simulations / Truncated Gaussian
This model can be considered as a discrete version of the multigaussian one. For more theoretical
explanations, the user should refer to Galli A. et al., The Pros and Cons of the Truncated Gaussian
Method (In Geostatistical Simulations, M. Armstrong & P.A. Dowd eds, Kluwer, p. 217, 1994).
The simulation method must produce a discrete variable (each value represents a lithofacies) and be
controlled by a limited number of parameters that can be inferred from the data usually available
(core drills).
The leading idea comes from the two following observations:
m The lithofacies constitute a partition of the space: at a given point (cell of a grid) we may
only have one lithofacies. The different lithofacies can be ordered, for example using the
quantity of clay as a criterion, which in fact characterizes the quality of the reservoir.
m The spatial distribution of these lithofacies is different in the horizontal and in the vertical:
along the vertical axis, it reproduces the sedimentary process, whereas, horizontally, it char-
acterizes the homogeneity of the field.
If we consider a continuous random function Y(x) and the case of two lithofacies, A and its com-
plement A^c, we can write that:

x ∈ A ⟺ Y(x) ≤ S_a (eq. 11.0-1)

x ∈ A^c ⟺ Y(x) > S_a (eq. 11.0-2)

where S_a is the threshold corresponding to the gaussian transform of the proportion p_a of the facies
A: p_a = G(S_a), G being the cumulative distribution function of the gaussian distribution.
The thresholds are derived from the proportions of the different facies.
The problem is to find the gaussian random variable Y which corresponds to the indicators of the
different facies observed at the data points.
When this random variable is found, we simply truncate the gaussian values at the thresholds
that characterize each facies to obtain the simulation. Unfortunately, the transformation which goes
from the indicator of a facies to the gaussian value is not bijective. Therefore, we cannot convert
each facies at a data point into its gaussian equivalent.
On the other hand, we can derive the covariance ρ(h) of the truncated gaussian from the covari-
ance C_A(h) of the indicators of the different facies. In the case of two facies, we have:

C_A(h) = g(s_a)² Σ_{n=1}^{∞} [H²_{n−1}(s_a) / n!] ρ^n(h) (eq. 11.0-3)

where:
l g is the gaussian density function
l H_n is the Hermite polynomial of order n
l s_a is the threshold for the facies A
Therefore we can fit the underlying covariance ρ(h) through its impact on the covariance of the
indicator of the facies A.
For the domain of application of the Truncated Gaussian method (fluvio-deltaic environments),
where the behavior along the vertical and the horizontal is quite different, we have chosen a factor-
ized covariance for ρ(h):

ρ(h_x, h_z) = ρ_x(h_x) ρ_z(h_z) (eq. 11.0-4)

It expresses that, knowing the value of the gaussian variable at a point P, there is independence
between:
l the points belonging to the horizontal plane which contains P,
l the points belonging to the vertical line which contains P.
Moreover, the choice of exponential basic structures for both ρ_x and ρ_z yields interesting screen
effect properties which drastically reduce the computation time of the algorithm.
The Conditional Simulation is finally performed using the random function Y(x), characterized by
its structure ρ(h), such that at a data point where the facies is known, the value of the gauss-
ian variable respects the thresholds that correspond to this facies. Finally, the gaussian realiza-
tion is converted back into facies by truncation.
Implementation:
l No neighborhood,
l Migrated data.
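The truncation itself is straightforward to sketch (our illustration, hypothetical function names): thresholds are obtained by inverting G at the cumulated proportions, and a gaussian realization is coded into facies by comparing each value against them. Simulating many standard normal values and checking the resulting facies proportions validates the construction.

```python
import random
from math import erf, sqrt

def G(y):
    """Standard gaussian cdf."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def gauss_quantile(p, lo=-8.0, hi=8.0):
    """Invert G by bisection: returns the threshold S with p = G(S)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if G(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def truncate(y_values, proportions):
    """Code gaussian values into facies 0..F-1 with given proportions,
    the thresholds being set at the cumulated proportions."""
    cum, thresholds = 0.0, []
    for p in proportions[:-1]:
        cum += p
        thresholds.append(gauss_quantile(cum))
    out = []
    for y in y_values:
        f = 0
        while f < len(thresholds) and y > thresholds[f]:
            f += 1
        out.append(f)
    return out
```

Note that this sketch only reproduces proportions; reproducing the spatial structure requires simulating Y with the factorized covariance ρ(h) discussed above, which is the actual point of the method.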
12 Plurigaussian Simulations
Add-on to the On-Line Help for: Interpolate / Conditional Simulations / Plurigaussian.
This documentation is meant to explain the technical procedure involved in the Plurigaussian simu-
lations. It partly refers to Armstrong M., Galli A., Le Loc'h G., Geffroy F., Eschard R., Plurigaussian
Simulations in Geosciences (Springer Berlin, 2003, 149p).
12.1 Principle
The principle of the categorical simulations is to obtain a variable on a set of target locations (usu-
ally the nodes of a regular grid), each category being represented by an integer value called litho-
type. With no restriction, we will consider that the lithotype values are the consecutive integers
ranging from 1 to NLIT (where NLIT stands for the number of lithotypes to be simulated).
One plurigaussian simulation is obtained as the posterior coding of the combination of several
underlying stationary Gaussian Random Functions (GRF). In Isatis, the number of these GRF is
limited to 2 (denoted Y_1 and Y_2), each characterized by its individual structure. These two GRF are
usually independent but they can also be correlated (with a correlation coefficient ρ) according to
the following scheme. Let W_1 and W_2 be two independent GRF; then we set:

Y_1 = W_1 (eq. 12.1-1)

Y_2 = ρ W_1 + √(1 − ρ²) W_2 (eq. 12.1-2)

In the rest of this chapter, we will consider (except when stated explicitly) that the two GRF are
independent.
The different lithotypes constitute a partition of the 2D gaussian space. Each lithotype (denoted F_i)
is attached to a domain D_i:

x ∈ F_i ⟺ (Y_1(x), Y_2(x)) ∈ D_i (eq. 12.1-3)

In Isatis we have decided to realize a partition of the 2D gaussian space into rectangles with sides
parallel to the main axes. The projections of these rectangles on the gaussian axes define the thresh-
olds attached to each lithotype and each GRF (denoted t_j^i and s_j^i respectively for the lower and
upper bounds of the GRF "j" for the lithotype "i"). Therefore the previous proposition can be stated
as:

x ∈ F_i ⟺ t_1^i ≤ Y_1(x) < s_1^i and t_2^i ≤ Y_2(x) < s_2^i (eq. 12.1-4)

The 2D gaussian space, with a rectangle representing each lithotype, is usually referred to as the
lithotype rule and corresponds to the next figure (7 lithotypes example):

(fig. 12.1-1)

For each lithotype, the thresholds t_1^i, s_1^i, t_2^i, s_2^i on both GRF are related to the proportions of the
lithotypes:

P_{F_i}(x) = E[1_{F_i}(x)] = P[(Y_1(x), Y_2(x)) ∈ D_i] = ∫_{t_1^i}^{s_1^i} ∫_{t_2^i}^{s_2^i} g_ρ(u, v) du dv (eq. 12.1-5)

where g_ρ(u, v) is the bivariate gaussian density function with mean 0, variance 1 and correlation
matrix:

Σ = | 1  ρ |
    | ρ  1 | (eq. 12.1-6)

When the correlation ρ between the two GRF is 0, the previous equation can be factorized:

P_{F_i}(x) = [ ∫_{t_1^i}^{s_1^i} g(u) du ] [ ∫_{t_2^i}^{s_2^i} g(v) dv ] = [G(s_1^i) − G(t_1^i)] [G(s_2^i) − G(t_2^i)] (eq. 12.1-7)

In the non-stationary case, the only difference is that the thresholds are not constant anymore.
Instead they vary as a function of the target point. For example, a point "x" belongs to the lithotype
F_i if:

t_1^i(x) ≤ Y_1(x) < s_1^i(x) and t_2^i(x) ≤ Y_2(x) < s_2^i(x) (eq. 12.1-8)
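The lithotype rule coding of (eq. 12.1-4) amounts to locating the pair (Y_1, Y_2) in a set of rectangles. A minimal sketch of ours (the 3-lithotype rule below is purely hypothetical):

```python
def code_lithotype(y1, y2, rule):
    """Apply a lithotype rule: 'rule' maps lithotype -> rectangle
    (t1, s1, t2, s2) in the 2D gaussian space."""
    for litho, (t1, s1, t2, s2) in rule.items():
        if t1 <= y1 < s1 and t2 <= y2 < s2:
            return litho
    raise ValueError("rule does not partition the gaussian plane")

INF = float("inf")
# A hypothetical 3-lithotype rule: lithotype 1 for low Y1 values,
# the remaining half-plane split by the sign of Y2.
RULE = {1: (-INF, -0.5, -INF, INF),
        2: (-0.5, INF, -INF, 0.0),
        3: (-0.5, INF, 0.0, INF)}
```

Because the rectangles partition the plane, every (y1, y2) pair maps to exactly one lithotype; a rule that left gaps or overlaps would not define a valid partition.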
12.2 Variography
We assume at this stage that the proportions of each lithotype are known at any point in space and
that the lithotype rule is chosen. We must now determine the structure of the two GRF by trial and
error, comparing the experimental variograms to their expressions in the model.
For the experimental quantities, we can compute all the simple variograms of the indicators for all
lithotypes:

γ_{F_i}(x, x+h) = (1 / 2N) Σ_{x_α − x_β = h} [1_{F_i}(x_α) − 1_{F_i}(x_β)]² (eq. 12.2-1)

as well as the cross-variograms:

γ_{F_i,F_j}(x, x+h) = (1 / 2N) Σ_{x_α − x_β = h} [1_{F_i}(x_α) − 1_{F_i}(x_β)] [1_{F_j}(x_α) − 1_{F_j}(x_β)] (eq. 12.2-2)

In all generality, the previous expressions are not allowed, as the indicator function is neither station-
ary nor ergodic. However, as the underlying GRF are stationary, we will still use the previous equations.
In order to match the expression of the experimental simple variogram, we can write the simple var-
iogram model expression as follows:

γ_{F_i}(x, x+h) = ½ Var[1_{F_i}(x) − 1_{F_i}(x+h)] = ½ E{[1_{F_i}(x) − 1_{F_i}(x+h)]²} (eq. 12.2-3)

which expands as follows:

γ_{F_i}(x, x+h) = ½ { P_{F_i}(x) + P_{F_i}(x+h)
− 2 ∫_{t_1^i(x)}^{s_1^i(x)} ∫_{t_2^i(x)}^{s_2^i(x)} ∫_{t_1^i(x+h)}^{s_1^i(x+h)} ∫_{t_2^i(x+h)}^{s_2^i(x+h)} g_Σ(u_1, u_2, v_1, v_2) du_1 du_2 dv_1 dv_2 } (eq. 12.2-4)

where the 4-variable gaussian density g_Σ corresponds to the 4×4 covariance matrix of
(Y_1(x), Y_2(x), Y_1(x+h), Y_2(x+h)):

Σ = | 1         ρ         C_1(h)    ρ C_1(h) |
    | ρ         1         ρ C_1(h)  C_2(h)   |
    | C_1(h)    ρ C_1(h)  1         ρ        |
    | ρ C_1(h)  C_2(h)    ρ         1        | (eq. 12.2-5)

introducing the covariance values C_1(h), C_2(h) of the underlying GRF at distance h. Similarly,
the cross-variogram can be written:

γ_{F_i,F_j}(x, x+h) = ½ E{[1_{F_i}(x) − 1_{F_i}(x+h)] [1_{F_j}(x) − 1_{F_j}(x+h)]} (eq. 12.2-6)

which expands as follows, because we cannot have two different lithotypes at the same point:

γ_{F_i,F_j}(x, x+h) = −½ { E[1_{F_i}(x) 1_{F_j}(x+h)] + E[1_{F_j}(x) 1_{F_i}(x+h)] } (eq. 12.2-7)

and:

γ_{F_i,F_j}(x, x+h) = −½ { ∫_{t_1^i(x)}^{s_1^i(x)} ∫_{t_2^i(x)}^{s_2^i(x)} ∫_{t_1^j(x+h)}^{s_1^j(x+h)} ∫_{t_2^j(x+h)}^{s_2^j(x+h)} g_Σ du_1 du_2 dv_1 dv_2
+ ∫_{t_1^j(x)}^{s_1^j(x)} ∫_{t_2^j(x)}^{s_2^j(x)} ∫_{t_1^i(x+h)}^{s_1^i(x+h)} ∫_{t_2^i(x+h)}^{s_2^i(x+h)} g_Σ du_1 du_2 dv_1 dv_2 } (eq. 12.2-8)

12.2.1 Stationary case

Incidentally, in the stationary case, we can check that we find the usual formula for the simple vari-
ograms:

γ_{F_i}^{Stat}(h) = P_{F_i} − ∫_{t_1^i}^{s_1^i} ∫_{t_2^i}^{s_2^i} ∫_{t_1^i}^{s_1^i} ∫_{t_2^i}^{s_2^i} g_Σ(u_1, u_2, v_1, v_2) du_1 du_2 dv_1 dv_2 (eq. 12.2-1)

and the cross-variograms:

γ_{F_i,F_j}^{Stat}(h) = − ∫_{t_1^i}^{s_1^i} ∫_{t_2^i}^{s_2^i} ∫_{t_1^j}^{s_1^j} ∫_{t_2^j}^{s_2^j} g_Σ(u_1, u_2, v_1, v_2) du_1 du_2 dv_1 dv_2 (eq. 12.2-2)

12.2.2 Optimization

Both simple and cross-variograms use the same quadruple gaussian integral I:

I(h) = ∫_{t_1^i(x)}^{s_1^i(x)} ∫_{t_2^i(x)}^{s_2^i(x)} ∫_{t_1^j(x+h)}^{s_1^j(x+h)} ∫_{t_2^j(x+h)}^{s_2^j(x+h)} g_Σ(u_1, u_2, v_1, v_2) du_1 du_2 dv_1 dv_2 (eq. 12.2-1)

This quantity can be optimized according to the calculation environment. We can interchange the
integrals and rewrite the previous formula:

I(h) = ∫_{t_1^i(x)}^{s_1^i(x)} ∫_{t_1^j(x+h)}^{s_1^j(x+h)} ∫_{t_2^i(x)}^{s_2^i(x)} ∫_{t_2^j(x+h)}^{s_2^j(x+h)} g_Σ(u_1, v_1, u_2, v_2) du_1 dv_1 du_2 dv_2 (eq. 12.2-2)

As already mentioned, if the two GRF are not correlated (ρ = 0), we can factorize the integral:

I(h) = [ ∫_{t_1^i(x)}^{s_1^i(x)} ∫_{t_1^j(x+h)}^{s_1^j(x+h)} g_{Σ_1}(u, v) du dv ] [ ∫_{t_2^i(x)}^{s_2^i(x)} ∫_{t_2^j(x+h)}^{s_2^j(x+h)} g_{Σ_2}(u, v) du dv ] (eq. 12.2-3)

with:

Σ_1 = | 1       C_1(h) |   and   Σ_2 = | 1       C_2(h) |
      | C_1(h)  1      |               | C_2(h)  1      | (eq. 12.2-4)

Similarly, each integral can be optimized in the case where C_1(h) or C_2(h) is null. For example,
when C_1(h) = 0:

∫∫ g_{Σ_1}(u, v) du dv = [ ∫_{t_1^i(x)}^{s_1^i(x)} g(u) du ] [ ∫_{t_1^j(x+h)}^{s_1^j(x+h)} g(v) dv ] (eq. 12.2-5)

and each integral can then be calculated directly using the Gauss integral function G:

∫_{t_1^i(x)}^{s_1^i(x)} g(u) du = G(s_1^i(x)) − G(t_1^i(x)) (eq. 12.2-6)

12.2.3 Calculations

This factorization is crucial as far as the CPU time is concerned: the 4-term integral is approxi-
mately 100 times more expensive than twice the 2-term integral.
As one can easily check, in the non-stationary case, the model of the simple and cross-variograms
can only be calculated at the data points, as we need to know the thresholds for the integral of the
gaussian multivariate density. This explains the technique used in the structural analysis of the
plurigaussian model: for each pair of data points, we calculate simultaneously the experimental var-
iogram and the model. These two quantities are then regrouped by multiples of the lags and repre-
sented graphically.
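The rectangle probabilities that drive these variogram models reduce to a bivariate gaussian integral, which can be sketched with a one-dimensional quadrature (our illustration, hypothetical function names): conditionally on U, V is gaussian with mean ρU and variance 1 − ρ². When ρ = 0 the result must factorize into a product of two 1D probabilities, mirroring the optimization above.

```python
from math import erf, exp, pi, sqrt

def G(y):
    """Standard gaussian cdf."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def biv_rect_prob(t1, s1, t2, s2, rho, n=400):
    """P[t1 <= U < s1, t2 <= V < s2] for a standard bivariate gaussian
    with correlation rho, integrating over U (V | U is gaussian)."""
    t1, s1 = max(t1, -8.0), min(s1, 8.0)
    sd = sqrt(1.0 - rho * rho)
    step = (s1 - t1) / n
    acc = 0.0
    for k in range(n):
        u = t1 + (k + 0.5) * step
        acc += (G((s2 - rho * u) / sd) - G((t2 - rho * u) / sd)) \
               * exp(-0.5 * u * u) / sqrt(2.0 * pi) * step
    return acc
```

A second sanity check: the probability of the positive quadrant grows with ρ (it equals 1/4 + arcsin(ρ)/2π for centered bounds), which the quadrature reproduces.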
12.3 Simulation
The plurigaussian simulation consists in simulating two independent random functions and coding
their "product" into lithotypes, according to the lithotype rule. When the two GRF are correlated,
we can still work with independent primary GRF and combine them afterwards as described earlier.
When running conditional simulations, we must convert the lithotypes into values in the gaussian
scale beforehand. This involves an iterative technique known as the Gibbs sampler. We will now
describe this method for a set of "N" lithotype data in the case of non-correlated GRF, focusing
on one GRF in particular.
We first initialize a vector of gaussian values drawn randomly, at each data point, within the interval
corresponding to the given lithotype for this GRF. Obviously the bounds of the interval depend on
the values of the proportions at this location:

Y_α ∈ [g_α^min, g_α^max] (eq. 12.3-1)

These values belong to the correct gaussian intervals (by construction) but are not correct with
respect to their covariance. The next steps are meant to fulfill this covariance requirement while
keeping the constraints on the intervals.
We then enter an iterative process where the following operations are performed:
l Select one of the data points, say α.
l Discard this data point and krige its estimate Y*_α using the gaussian values of all the
other data points. We also compute the standard deviation σ_α of this estimation.
l Draw the residual ε_α according to the standard normal distribution restricted to the interval:

[ (g_α^min − Y*_α) / σ_α , (g_α^max − Y*_α) / σ_α ] (eq. 12.3-2)

l Derive the new value Y_α = Y*_α + σ_α ε_α and substitute it for the previous value.
If the two GRF are correlated, the interval of the third step becomes:

[ (g_α^min − ρ Y_α^1 − √(1 − ρ²) Y*_α) / (√(1 − ρ²) σ_α) , (g_α^max − ρ Y_α^1 − √(1 − ρ²) Y*_α) / (√(1 − ρ²) σ_α) ] (eq. 12.3-3)

where Y_α^1 denotes the value of the first GRF at the data point.
This iterative procedure requires the process to be stopped. In Isatis the number of iterations is fixed
by the user.
When the gaussian values are defined at the conditioning data points, the rest of the process is stan-
dard. We must first perform the conditional simulations of two independent GRF. Then at each grid
node their outcomes are combined according to the lithotype rule in order to produce lithotype
information.
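The Gibbs iteration described above can be sketched as follows (a minimal illustration of ours, not Isatis code: simple kriging on a 1D exponential covariance, and rejection sampling standing in for whatever truncated-normal draw the actual implementation uses). Each pass resimulates every point from its kriging distribution restricted to its lithotype interval, so the interval constraints are preserved at every step.

```python
import random
from math import exp, sqrt

def cov(a, b):
    """1D exponential covariance, unit sill and scale."""
    return exp(-abs(a - b))

def solve(A, b):
    """Gauss-Jordan elimination for tiny linear systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gibbs_truncated(xs, intervals, n_iter, seed=0):
    """Gibbs sampler: resimulate each point from its simple-kriging
    distribution restricted to its interval (rejection sampling)."""
    rng = random.Random(seed)
    y = [rng.uniform(max(lo, -3.0), min(hi, 3.0)) for lo, hi in intervals]
    n = len(xs)
    for _ in range(n_iter):
        for i in range(n):
            others = [j for j in range(n) if j != i]
            A = [[cov(xs[a], xs[b]) for b in others] for a in others]
            b = [cov(xs[a], xs[i]) for a in others]
            lam = solve(A, b)
            est = sum(l * y[j] for l, j in zip(lam, others))
            var = 1.0 - sum(l * c for l, c in zip(lam, b))
            lo, hi = intervals[i]
            while True:  # draw N(est, var) restricted to [lo, hi]
                cand = rng.gauss(est, sqrt(max(var, 1e-12)))
                if lo <= cand <= hi:
                    y[i] = cand
                    break
    return y
```

Rejection sampling is acceptable here only because the test intervals carry substantial gaussian mass; production code would use a dedicated truncated-normal sampler for narrow intervals.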
12.4 Implementation
The Gibbs sampler is the main difficulty of the algorithm. In theory, it requires all the information to be
taken into account simultaneously (unique neighborhood). This constraint rapidly becomes intrac-
table when the number of data is too large.
However there is a strong advantage in considering a unique neighborhood. As a matter of fact, the
covariance matrix C_N can be established and inverted once and for all (for the N data locations). The
inverse C_{N−1}^(−1) of each kriging matrix can then easily be derived from C_N^(−1), and the estimation
reduces to a simple scalar product.
In Isatis we decided to run the Gibbs sampler in two steps:
l We first consider each well/line individually. The iterative process is performed several times on
the data along this well before the next well is tackled.
When the number of data along one line is too large (more than 400 points), the well/line is sub-
divided into several pieces using a moving window of 400 points. The starting 400 samples
are processed first, then the window is moved 200 samples further before the iterative pro-
cess is performed again. The window is moved until all the data have been processed.
As the wells/lines are mainly vertical, this first step ensures that the vertical behavior of the
covariance is fulfilled.
l In order to reproduce the horizontal behavior of the covariance, we must now run the Gibbs
sampler in a more isotropic way. This is achieved by selecting, in a standard Isatis neighborhood,
a set of points around the first data point. The Gibbs iterations are performed on this subset of
points; then the program selects another subset around the second point, and so on until all
the data points have been the center of a subset.
In Isatis the numbers of iterations for the two steps may be different and are given by the user.
13 Impala's Multiple-Point Statistics

IMPALA (developed by Ephesia Consulting) stands for Improved Multiple-point Parallel
Algorithm using a List Approach. It is a new, high-performance parallelized algorithm that
performs Multiple-Point Statistics simulations. The use of lists results in lower RAM requirements
compared to tree-based algorithms.
The Multiple-Point Statistics simulation technique is used to simulate a categorical variable
through the simulation of a pattern. It tends to reproduce the proportions, the relationships between
facies and the geobodies from the training image.
13.4.1 Terminology and basic algorithm
The key point of the Multiple-Point Statistics simulation is to scan the training image with a spe-
cific pattern in order to reference all the existing configuration related to this pattern.
A search template is defined as a set of relative nodes location (offsets) h
1
,,h
N
, where h
i
is a 2D
or 3D vector and 1<i<N. For a given reference node u, the search template at u is the set of nodes:
t(u)= {u +h
1
, u +h
N
} (eq. 13.4-1)
And, if s(v) denotes the facies at a node v, the vector:
d(u)= {s(u+h
1
), , s(u +h
N
)} (eq. 13.4-2)
defines the data event at u. Note that a data event with undefined components (for nodes which are
not yet simulated) can be considered.
To attribute a facies at a node u in the simulation grid, we retain the nodes v in the training image
(TI) where the data event d(v) has the same components as those of d(u). Then the occurrence of
all the facies at the nodes v are counted. This provides a probability distribution function (pdf)
that ca be used to draw a facies at the node u randomly. More precisely, if the positions of the know
components in d(u) are i
1
<i
m
(with 0 m N) then the probability to draw the facies k in
the node u is:

186
(eq. 13.4-3)
For 0 k<M, where M is the number of facies. Note that if the number of matching data events is
too small (less than min replicates parameters, see Imapala's mps help page). We consider the last
component in d(u) as undefined (the last informed node is dropped) and we repeat this operation
until this number is acceptable.
In addition, the multigrid approach is used to capture structures within the training image that are
defined at different scales. Let us introduce some terminology.
An Isatis grid is a box-shaped set of pixels:
(eq. 13.4-4)
where Nx, Ny and Nz are the dimensions in the x-axis, y-axis (and z-axis) directions respectively. In the
main grid G, the i-th subgrid SG_i is defined as:
(eq. 13.4-5)
From the original search template t(u) and data event d(u), define the search template t_i(u):
(eq. 13.4-6)
and the data event d_i(u):
(eq. 13.4-7)
where the lag vectors are magnified according to the scale of the subgrid SG_i. Then the simulation
proceeds successively: first all the nodes in the multigrid G_{m-1} are simulated, using the search
template t_{m-1}; the process continues with the nodes in the multigrid G_{m-2}, using the search
template t_{m-2}, and repeats similarly for all the multigrid levels, in decreasing order. The simulation
starts with the coarsest multigrid level and finishes with the finest one.
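With a dyadic multigrid, for instance, the magnified template at level i is obtained by multiplying every lag vector by 2^i (the factor 2 per level is an assumption of this sketch):

```python
def template_at_level(template, level):
    """Magnify the lag vectors of a search template for a given
    multigrid level: each offset is multiplied by 2**level, so that
    the same pattern captures structures at a coarser scale."""
    factor = 2 ** level
    return [tuple(factor * c for c in offset) for offset in template]
```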
13.4.2 Using lists to perform multiple-point statistics
The multiple-point statistics inferred from the training image and provided by a search template are
stored in a list. An element of the list is a pair of vectors (d, c), where d = (s_1, ..., s_N) defines a data event
and c = (c_0, ..., c_{M-1}) is a list of occurrence counters for each facies: c_i is the number of data events
d(v) equal to d found in the training image with facies i at the reference node v.
Such a list is a catalogue from which the pdfs needed to perform simulations can be computed. The training image is
scanned once, moving the search template, in order to build the list; it is not used afterwards. The scan
covers all the nodes such that the search template is entirely included in the training image.
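The single scan that builds the catalogue can be sketched with a plain dictionary (names and data layout are illustrative, not the Impala data structure):

```python
def build_catalogue(ti, template, nb_facies):
    """Scan a 2D training image once and build the list of (d, c)
    pairs: each fully informed data event d (a tuple of facies) is
    mapped to a vector c of occurrence counters, one per facies found
    at the reference node.  Only nodes where the template is entirely
    inside the image are used."""
    ni, nj = len(ti), len(ti[0])
    catalogue = {}
    for i in range(ni):
        for j in range(nj):
            d = []
            for di, dj in template:
                if not (0 <= i + di < ni and 0 <= j + dj < nj):
                    d = None  # search template not entirely in the TI
                    break
                d.append(ti[i + di][j + dj])
            if d is None:
                continue
            c = catalogue.setdefault(tuple(d), [0] * nb_facies)
            c[ti[i][j]] += 1
    return catalogue
```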
l Retrieving pdfs using lists:
The pdf used to draw a facies at a node u of the simulation grid is computed from the list. The formula
(eq. 13.4-3) is used and a minimal number of replicates, C_min, is given. Assume that the simulated nodes in
the data event centered at u, d(u), are u + h_{i1}, ..., u + h_{in}, and let d^(j)(u) be the data event whose
defined components are s(u + h_{i1}), ..., s(u + h_{ij}) (only the first j simulated nodes in d(u) are taken into
account), 1 ≤ j ≤ n. For 1 ≤ j ≤ n and 0 ≤ k < M, let C_k^(j) be the number of nodes v in the training
image with facies k such that the data event centered at v, d(v), is compatible with d^(j)(u) (i.e. s(v + h_{il}) = s(u + h_{il}), 1 ≤ l ≤ j). Then the greatest index j such that the number of replicates is
greater than or equal to C_min is retained, and the corresponding pdf:
(eq. 13.4-8)
is used to draw the facies at the node u.
(fig. 13.4-1)
(fig. 13.4-2)
13.4.3 Conditioning to hard data
In this section, the method used for handling conditioning data is described. The notations of section
13.4.1 are used.
Before starting the simulation, the conditioning data must be spread into all the subgrids SG_0, ..., SG_{m-1},
in order to take them into account during the simulation. This is done as follows:
m For each conditioning data point, the attributed facies is assigned to the node in the simulation grid
whose corresponding region contains the data point. (If more than one conditioning data point
leads to the same node location, only the one whose point is closest to the center of the
corresponding region is retained.)
m Each conditioning node U_c in G obtained by the step above is spread in the subgrid SG_1 as
follows:
- We select all the nodes of SG_1 closest to U_c, i.e. the nodes u in SG_1 that realize
the minimum of Σ_{j=1,...,d} (u(j) - U_c(j))², where d is the dimension (2 or 3) and u(j) (resp. U_c(j)) are the
integer coordinates in G of the node u (resp. U_c).
- If all the selected nodes are unsimulated, we choose one at random and simulate a facies
for this chosen node using the search template t_0, corresponding to the scale of the subgrid SG_0. Otherwise, nothing is done (no need to spread this data point).
m The previous step is repeated for the subgrids SG_2, ..., SG_{m-1} successively: the conditioning
nodes in G obtained in the first step are spread in the subgrid SG_i using the search template
t_{i-1} corresponding to the scale of the subgrid SG_{i-1}.
The simulation then continues with the simulation of all the unsimulated nodes in the multigrids G_{m-1},
G_{m-2}, ..., G_0 successively.
13.4.4 Simulation Path
After the conditioning step, for each multigrid a random path covering all the unsimulated nodes is
chosen. The conditioning nodes are placed at the beginning of the path. If there is no conditioning
node, a node is chosen randomly and placed at the beginning of the path. Then, the next node in the
path is selected randomly among the remaining nodes that are neighbors of the last node in the
path. When all the neighboring nodes of the last node are already in the path and the path is not
complete, the selection is performed randomly among the remaining nodes that are neighbors of
one of the nodes already in the path.
(fig. 13.4-3)
(fig. 13.4-3): (a) The position of the conditioning data is presented (encircled). (b) The gray scale
represents the index of the node in the path; the gray scale goes from 0 (white) to 49999 (black).
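The path construction can be sketched as follows (4-neighbour connectivity and the tie-breaking are assumptions of this sketch):

```python
import random

def simulation_path(ni, nj, conditioning=(), seed=0):
    """Random path covering all nodes of an ni x nj grid: conditioning
    nodes first (or a random seed node), then repeatedly pick at random
    a remaining node that is a neighbor of a node already in the path."""
    rng = random.Random(seed)
    path = list(conditioning)
    if not path:
        path = [(rng.randrange(ni), rng.randrange(nj))]
    in_path = set(path)

    def neighbors(node):
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < ni and 0 <= j + dj < nj:
                yield (i + di, j + dj)

    frontier = {v for n in path for v in neighbors(n) if v not in in_path}
    while len(path) < ni * nj:
        nxt = rng.choice(sorted(frontier))  # sorted only for determinism
        path.append(nxt)
        in_path.add(nxt)
        frontier.discard(nxt)
        frontier.update(v for v in neighbors(nxt) if v not in in_path)
    return path
```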
The multiple-point statistics simulation described above is valid for stationary training images,
because the spatial patterns (pixel configurations) defined by a search template are stored (in a list)
regardless of their location in the training image. However, Impala's MPS allows the use of non-stationary
training images. In this case, an auxiliary variable is used to describe the non-stationarity. The
use of a non-stationary training image TI (containing facies, the primary variable) requires one auxiliary
variable, which must be exhaustively known in the simulation grid to guide the simulation.
Below, the method is presented for one auxiliary variable t; the associated maps are called TI_aux
and G_aux for the training image TI and the simulation grid G respectively.
In the presence of an auxiliary variable, a vector m is appended to each element of the list. An element
of the list is then a triplet of vectors (d, c, m), where d = (s_1, ..., s_N) defines a data event, c = (c_0, ..., c_{M-1})
is a list of occurrence counters for each facies and m = (m_0, ..., m_{M-1}) is a list of means for the
auxiliary variable: c_i is the number of data events d(v) equal to d found in the training image with
facies i at the reference node v, and m_i is the mean of the auxiliary variable at these nodes v.
Before starting the simulation and building the list, the auxiliary variable, say t, is normalized to the
interval [0,1] via the linear transformation from [a,b] to [0,1], t → (t - a)/(b - a), where a and b are respectively
the minimum and the maximum of the auxiliary variable t(v), v in TI_aux ∪ G_aux.
To simulate a facies at a node u of the grid G, knowing the data event d(u) and the auxiliary variable
t(u) (provided by the auxiliary grid G_aux), the following is done. A tolerance ε between 0
and 1 is fixed. For each facies k, the set E_k of the elements in the list that are compatible with the
data event d(u) and that satisfy:
(eq. 13.4-9)
is retained. Then the sum C_k of the occurrence counters c_k^(e) of the elements of E_k and the resulting
mean M_k for the auxiliary variable are computed:
(eq. 13.4-10)
where the exponent (e) denotes the corresponding element in the list. The resulting means are used
to penalize the counters. For each facies k, the penalized counter is defined as:
(eq. 13.4-11)
Then the conditional pdf knowing the data event d(u) and the auxiliary variable t(u), used to draw
the facies at the node u, is given by:
(eq. 13.4-12)
where:
(eq. 13.4-13)
In summary, the presence of an auxiliary variable adds a step of selection and a step of penalization
for retrieving the conditional pdf.
13.4.5 Soft probabilities
Impala allows the user to perform multiple-point simulations accounting for soft probabilities (for the
primary variable s in each node of the simulation grid).
Assume that one or several soft pdf(s) is (are) given for the facies at a node u. In this case, we combine
the pdf provided by the multiple-point statistics and the soft pdf(s) using Bordley's formula
(Bordley 1982). Let n be the number of soft pdfs and consider, for 0 ≤ k < M:
- the probability for facies k for the j-th soft pdf:
(eq. 13.4-14)
- The probability for the facies k retrieved from the multiple point statistics as described
above:
(eq. 13.4-15)
- The marginal probability for facies k (proportion of facies k) retrieved from the training
image:
(eq. 13.4-16)
Then the pdf used to draw the facies at the node u is given by the probabilities (Bordley 1982):
(eq. 13.4-17)
for 0 ≤ k ≤ M-1, where:
(eq. 13.4-18)
is a normalization constant and ω_mp, ω_1, ..., ω_n are the weights in Bordley's formula (see Bordley
1982) for the pdf provided by the multiple-point statistics and for the soft pdf(s) respectively. The
weight ω_0 is set to ω_0 = 1 - ω_mp - ω_1 - ... - ω_n.
A soft probability can be global, i.e. the same one is used for all nodes, or local, i.e. given for each
node independently.
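Read as a log-linear pooling, the combination can be sketched as follows (a common reading of Bordley's formula; the exact expression of eq. 13.4-17 in Impala may differ):

```python
def bordley_combine(p_mp, soft_pdfs, marginal, w_mp, w_soft):
    """Combine the MPS pdf `p_mp`, the soft pdfs and the marginal
    facies proportions: the combined probability of facies k is taken
    proportional to
        marginal[k]**w0 * p_mp[k]**w_mp * prod_j soft_pdfs[j][k]**w_soft[j]
    with w0 = 1 - w_mp - sum(w_soft)."""
    w0 = 1.0 - w_mp - sum(w_soft)
    raw = []
    for k in range(len(p_mp)):
        x = (marginal[k] ** w0) * (p_mp[k] ** w_mp)
        for pdf_j, w_j in zip(soft_pdfs, w_soft):
            x *= pdf_j[k] ** w_j
        raw.append(x)
    s = sum(raw)  # plays the role of the normalization constant
    return [x / s for x in raw]
```

With a single weight ω_mp = 1 and no soft pdf, the combination falls back to the MPS pdf itself.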
14 Fractal Simulations
This page constitutes an add-on to the User's Guide for Interpolate / Non-Conditional Simulations /
Random Functions / Fractals.
For more information concerning the Fractal Model, please refer to Peitgen H.O., Saupe D., Barnsley M.F., The Science of Fractal Images (Springer N.Y., 1988).
14.1 Principle
A fractional Brownian motion V_H(t) is a single-valued function of one variable t (referred to as the
time), such that its increments have a Gaussian distribution with variance:

E[V_H(t_2) - V_H(t_1)]² = k |t_2 - t_1|^{2H}, with 0 < H < 1   (eq. 14.1-1)

Such a function is continuous, self-affine, stationary and isotropic.
When H is close to zero, the behavior of the trajectories of V_H(t) is rough, whereas it becomes
smoother when H increases.
The value H = 1/2 corresponds to the traditional Brownian motion.
This formalism can easily be extended to provide a self-affine fractional Brownian motion in R^n, which
satisfies the general scaling relation:

E[V_H(x_2) - V_H(x_1)]² = k |x_2 - x_1|^{2H}   (eq. 14.1-2)

and has a fractal dimension:

D = n + 1 - H   (eq. 14.1-3)

k represents a positive constant that we assimilate to a variance and denote σ² in the rest of this
paragraph.
Fractals come in two major variations. Some are composed of several scaled-down and rotated copies
of themselves, such as the well-known von Koch snowflake, or the Julia sets, where the whole set can
be obtained by applying a non-linear iterated map to an arbitrarily small section of it. These are
called deterministic fractals. Their generation simply requires the use of a particular mapping
or rule which is then repeated over and over in a usually recursive scheme.
We can also include an additional element of randomness, allowing the simulation of random fractals. Given that fractals have infinite detail at all scales, a complete computation of a fractal
is clearly impossible. Instead it is sufficient to approximate these computations down to a precision
which matches the size of the pixels of the grid that we wish to simulate. Several simulation algorithms are used. These algorithms will be briefly described in 1 dimension; they are usually
extended to higher dimensions without any major problem.
14.2 Midpoint Displacement Method
If the process is to be calculated for times t between 0 and 1, we first set X(0) = 0 and select X(1)
as a sample of a Gaussian variable with mean 0 and variance σ².
We then set X(1/2) to the mean of X(0) and X(1), plus a Gaussian random offset D_1 with mean 0 and variance
Δ_1². We can then write the increment:

X(1/2) - X(0) = (1/2)[X(1) - X(0)] + D_1   (eq. 14.2-1)

which has a zero mean and a variance:

Var[X(1/2) - X(0)] = (1/4)σ² + Δ_1²   (eq. 14.2-2)

According to the scaling relation, we must have:

(1/4)σ² + Δ_1² = (1/2)^{2H} σ²   (eq. 14.2-3)

Therefore:

Δ_1² = [(1/2)^{2H} - 1/4] σ²   (eq. 14.2-4)

We iterate the same idea until the final resolution is reached.
14.3 Interpolation Method
In the Midpoint Displacement Method, the resolution is improved by a factor r = 1/2 at each iteration. We can modify the method to accommodate other factors 0 < r < 1.
For this purpose, one would interpolate X(t) at times t_n = n r Δt from the samples one already has
from the previous stage at a sampling rate of Δt. Then a random element D_n would be added to all
of the interpolated points.
The additional parameter r will change the appearance of the fractal: it is called the lacunarity.
Following the ideas of the midpoint displacement method, we set X(0) = 0 and X(1) as a sample of
a Gaussian variable with mean 0 and variance σ². Then we can deduce, in the same fashion as
before, that:

Δ_n² = (1/2)(1 - r^{2-2H}) r^{2nH} σ²   (eq. 14.3-1)
14.4 Spectral Synthesis
The Spectral Synthesis method (also known as the Fourier filtering method) is based on the spectral
representation of samples of the process X(t). Its Fourier transform over a bounded domain (say
between 0 and T) is given by:

F(f) = ∫_0^T X(t) e^{-2πift} dt   (eq. 14.4-1)

and the spectral density of X is:

S(f) = lim_{T→∞} (1/T) |F(f)|²   (eq. 14.4-2)

In general, a process X(t) with a spectral density proportional to f^{-β} corresponds to a fractional
Brownian motion with:

H = (β - 1)/2   (eq. 14.4-3)

For a practical algorithm we have to translate the above into conditions on the coefficients a_k of the
discrete Fourier transform:

X(t) = Σ_{k=0}^{N-1} a_k e^{2πikt}   (eq. 14.4-4)

The condition to be imposed on the coefficients in order to obtain a spectral density S(f) proportional to f^{-β}
now becomes:

E(|a_k|²) proportional to k^{-β}   (eq. 14.4-5)

This relation holds for 0 < k < N/2; for k > N/2, we must have a_k = a̅_{N-k} (the complex conjugate), as X is a real function.
The method thus simply consists of randomly choosing coefficients subject to the condition on their
expectation and then computing the inverse Fourier transform to obtain X in the time domain.
In contrast to the previous algorithms, this method is not iterative and does not proceed in stages of
increasing spatial resolution. We may, however, interpret the addition of more and more Fourier
coefficients a_k as a process of adding higher frequencies, thus increasing the resolution in the frequency domain.
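A naive sketch of the construction (pure Python, O(N²) inverse transform; β = 2H + 1 in one dimension; the amplitude/phase parameterization is one common choice, not necessarily the Isatis one):

```python
import cmath
import math
import random

def spectral_synthesis(n, H=0.7, seed=0):
    """1-D fractional Brownian motion trace by Fourier filtering:
    draw coefficients a_k with E|a_k|^2 proportional to k**(-beta),
    beta = 2H + 1, impose a_k = conj(a_{n-k}) so that X(t) is real,
    and invert the discrete Fourier transform naively."""
    rng = random.Random(seed)
    beta = 2.0 * H + 1.0
    a = [0j] * n
    for k in range(1, n // 2 + 1):
        amplitude = k ** (-beta / 2.0) * rng.gauss(0.0, 1.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        a[k] = amplitude * cmath.exp(1j * phase)
        a[n - k] = a[k].conjugate()   # conjugate symmetry: X real
    if n % 2 == 0:
        a[n // 2] = complex(a[n // 2].real, 0.0)  # Nyquist coefficient real
    x = []
    for t in range(n):
        s = sum(a[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n))
        x.append(s.real)
    return x
```

In practice a fast Fourier transform would replace the explicit O(N²) summation.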
15 Annealing Simulations
This page constitutes an add-on to the User's Guide for Interpolate / Conditional Simulations /
Annealing Simulations.
Note - This method cannot actually be considered as a simulation in the usual sense, as it is meant
to transform an input realization according to several criteria.
The Annealing procedure is similar to the Auto Regressive Deterministic one except that it does not
require any model and that the result is a discrete variable.
The data points are migrated at the nodes of the grid in order to improve the performances of the
methods. If several data points are migrated to the same node, the closest one prevails: the remain-
ing ones are simply ignored.
It requires the definition of several cutoff intervals which must realize a partition of R. If some
intervals overlap, the results are unpredictable. Each cutoff interval corresponds to a facies, numbered
starting from 1.
The procedure starts with an Initial Image which is internally converted into facies.
It requires a Training Image, which will be internally converted into facies, and which will be used
to derive the statistics concerning the proportions and the transitions: they will serve as references.
The principle is to modify iteratively a non conditioning pixel from its current facies value to
another facies value, in order to reduce the gap between experimental quantities and the reference.
These quantities are:
l the transition probabilities between the different facies, for several steps defined by the three
increments expressed in terms of grid meshes.
l the proportion for each facies.
For each grid node, the principle is to establish the energy of the current image:

E = ω_τ² E_τ + ω_p² E_p   (eq. 15.0-1)

where ω designates the weights, E the normalized energies, and the indices τ and p refer respectively to the transitions
of the indicators and to the proportions.
The weights are used to increase or to reduce the relative influence of each component in the calculation of the energy.
When calculating these transition probabilities, the procedure makes a distinction whether the
quantities are:
l constrained: if one of the two points coincides with a data point,
l unconstrained: if none of the two points is a data point.
This transition energy can be expanded as follows:

E_τ = e_c² E_c + e_nc² E_nc   (eq. 15.0-2)

where:
l e designates the subweights,
l the indices c and nc respectively refer to the constrained and unconstrained statistics.
The transition energy E_τ integrates the difference between the experimental transitions and the reference, calculated for all the steps:

E_τ = k Σ_{i,j,s} (t_ij^s - t̄_ij^s)²   (eq. 15.0-3)

where:
l t_ij^s is the transition probability from facies i to facies j at the step s,
l t̄_ij^s is the corresponding reference value,
l k is a normalization value which accounts for the number of steps and the number of transitions.
The proportion energy term E_p is calculated along the main directions of the grid so as to capture
its potential lack of homogeneity:
E_p = (1 / (d N_xyz)) [ e_x Σ_{i_x} (p_{i_x} - p̄_{i_x})² + e_y Σ_{i_y} (p_{i_y} - p̄_{i_y})² + e_z Σ_{i_z} (p_{i_z} - p̄_{i_z})² ]   (eq. 15.0-4)

where:
l e_x indicates if the grid has an extension in the x direction (1 for true; 0 otherwise),
l p_{i_x} is the proportion of the facies integrated over the pile of cells located at i_x, whatever their i_y or
i_z indices,
l p̄_{i_x} is the corresponding reference value,
l N_xyz is the total number of cells,
l d measures the dimension of the grid:

d = e_x + e_y + e_z   (eq. 15.0-5)

The global process consists in iterating, following a random path, on each one of the grid nodes
which are not constrained by a data information. If we denote by f_n the value of a facies drawn at
random and different from the current facies f_0, and by e_n and e_0 the corresponding energies, the Metropolis algorithm gives the following substitution rule:
l if e_n ≤ e_0, we substitute f_n for f_0,
l if e_n > e_0, we substitute f_n for f_0 with probability p and we keep f_0 with probability 1 - p.
p incorporates the difference of energy and the temperature T of the system (a monotonic function
decreasing with the duration of the process) through the following equation:

p = exp[-(e_n - e_0) / (kT)]   (eq. 15.0-6)

where k is called the Boltzmann constant.
Instead of the temperature function, we ask the user to specify a maximum number of iterations N_max
and use the number of iterations left:
p = exp[-(e_n - e_0) / (k (N_max - n + 1))]   (eq. 15.0-7)

This method has been proven to converge towards the minimum-energy state if the cooling speed
(ruled by the Boltzmann constant) is small enough.
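The substitution rule can be sketched as follows (function name and parameterization are assumptions; the acceptance probability follows eq. 15.0-7, with the number of iterations left playing the role of the temperature):

```python
import math
import random

def metropolis_accept(e_new, e_old, iteration, n_max, k=1.0, rng=random):
    """Annealing substitution rule: always accept a facies change that
    lowers the energy; otherwise accept with probability
    p = exp(-(e_new - e_old) / (k * (n_max - iteration + 1))),
    which decreases as the remaining number of iterations shrinks."""
    if e_new <= e_old:
        return True
    p = math.exp(-(e_new - e_old) / (k * (n_max - iteration + 1)))
    return rng.random() < p
```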
16 Spill Point Calculation
This page constitutes an add-on to the On-Line Help for: Tools / Spill Point
16.1 Introduction
In Oil & Gas applications, the spill point calculation enables the user to delineate a potential reser-
voir knowing that some control points are inside or outside the reservoir.
For the illustration of this feature, we will consider one map (which may be one of the outcomes of
a simulation process) where the variable is the topography of the top of a reservoir. We will con-
sider the depth as counted positively downwards: the top of the structure corresponds to the lowest
value in the field.
Moreover, we assume that we have a collection of control points whose locations are known and
which belong to one of the following two categories:
l the control point belongs to the reservoir: inside,
l the control point does not belong to the reservoir: outside.
Note - All the points located outside the frame where the image is defined are considered as
outside.
16.2 Basic Principle
The principle is to find the elevation of the deepest horizontal plane which will split the field into
inside and outside sub-areas while remaining compatible with the control point information (the
Spill). We also look for the crucial point where, if the Spill is slightly increased, the violation of the
constraints will first take place (the Spill Point). The following figure illustrates these definitions:
(fig. 16.2-1)
The Spill Point corresponds to the location of the saddle below volumes A and B. As a matter of
fact, if we consider a deeper spill, these two volumes will connect and the constraints induced by
the control points will not be fulfilled any more as the same location cannot be simultaneously
inside and outside the reservoir.
The volume A is considered as outside whereas B is inside. An interesting feature comes from the
volumes C1 and C2:
l they are first connected (as elevation of the separation saddle is located above the spill point)
and therefore constitute a single volume C,
l the contents of this volume C is unknown.
Hence, after the spill point elevation has been calculated, each point in the frame can only correspond to one of the following four statuses:
l below the spill point,
l above the spill point and inside the reservoir,
l above the spill point and outside the reservoir,
l above the spill point in an unknown volume.
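The search for the deepest compatible plane can be sketched as follows (depths counted positively downwards; the plain BFS over the grid values is an illustration, not the Isatis algorithm):

```python
from collections import deque

def compatible(depth, spill, inside, outside):
    """True if no connected component of cells strictly shallower than
    `spill` (4-connectivity) contains both an inside and an outside
    control point, and every inside point is above the spill plane."""
    ni, nj = len(depth), len(depth[0])
    label = [[-1] * nj for _ in range(ni)]
    n_comp = 0
    for i in range(ni):
        for j in range(nj):
            if depth[i][j] < spill and label[i][j] < 0:
                queue = deque([(i, j)])
                label[i][j] = n_comp
                while queue:
                    ci, cj = queue.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = ci + di, cj + dj
                        if (0 <= a < ni and 0 <= b < nj
                                and depth[a][b] < spill and label[a][b] < 0):
                            label[a][b] = n_comp
                            queue.append((a, b))
                n_comp += 1
    ins = {label[i][j] for i, j in inside}
    outs = {label[i][j] for i, j in outside}
    return -1 not in ins and not (ins & outs)

def deepest_spill(depth, inside, outside):
    """Deepest candidate spill value compatible with the control points
    (candidate values are taken from the grid itself)."""
    for spill in sorted({d for row in depth for d in row}, reverse=True):
        if compatible(depth, spill, inside, outside):
            return spill
    return None
```

On a synthetic basin with two lows separated by a saddle, the sketch returns the saddle depth, as in the figure above.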
16.3 Maximum Reservoir Thickness Constraint
This constraint corresponds to an actual limitation that must be taken into account in the Oil Industry. Due to the quality of the rock and the depth of the reservoir, the pressure of the trapped fluid
requires that the thickness of the reservoir does not exceed a maximum value. This constraint is referred to
as the maximum reservoir thickness. On the previous figure, let us add this constraint:
(fig. 16.3-1)
The new spill point is shifted upwards, as otherwise the maximum reservoir thickness constraint
would be violated. Note that the Spill elevation is clearly known, whereas the location of the Spill
Point is rather arbitrary this time: it is the last point that may be included in the reservoir. If the next
one (sorted by increasing depth) were included, the thickness of the reservoir would exceed the
maximum admissible value.
16.4 The "Forbidden types" of control points
Up to now, all the control points have been used directly in order to derive the Spill characteristics.
There exists a second type of control points: the forbidden ones. This information is not used
for the calculation of the Spill characteristics; it is simply double-checked a posteriori.
A forbidden outside point is a location which must result as either inside or unknown. Conversely,
a forbidden inside point is a location which must result as either outside, below or unknown.
If in a map (usually a simulation outcome), one of these constraints is not fulfilled, the whole map
is considered as not acceptable and is discarded from the final statistics.
16.5 Limits of the algorithm
It is essential to understand the behavior of the algorithm in the following simplified scenarios.
Let us consider the case of a syncline where the top of the structure is constrained to belong to the
reservoir, whereas a point located on its flank is supposed to be outside. The Spill (considered as
being the deepest horizontal plane where both constraints are fulfilled) is obviously located at the
elevation of the outside control point.
(fig. 16.5-1)
Let us now consider the opposite case, where the outside control point is at the top of the structure
and the inside control point is on the flank. In principle, the situation should be symmetric, with the
same result for the elevation of the Spill. But consider now the volume of the reservoir: the part
controlled by the inside control point and located above the Spill has a volume reduced to zero.
That is the reason why such a map is considered as not acceptable.
(fig. 16.5-2)
16.6 Converting Unknown volumes into Inside ones
As explained previously, the Spill Point calculation may delineate volumes located above the Spill
for which the algorithm cannot decide whether they belong to the reservoir or not: these volumes are called
unknown.
Then, if these volumes are discarded from the global statistics, the results are biased: the unknown
volumes are always considered as outside. Another possibility is to convert them all empirically to
inside in order to get another biased estimate (by excess this time).
In addition to the bias, this latter operation can lead to contradictions if the maximum reservoir
thickness criterion has been taken into account, as explained in the next figure:
(fig. 16.6-1)
The volume A is considered outside and B inside the reservoir. The volumes C and D are initially
unknown. If we convert them into inside, the maximum reservoir thickness constraint is no longer
fulfilled for the volume C. We could imagine moving the spill upwards until the constraint is satisfied, but then we should also move the Spill Point to a new location. Instead, we have
considered that such a map should rather be regarded as not acceptable.
17 Multivariate Recoverable Resources Models
17.7 Theoretical reminders on Discrete Gaussian
model applied to Uniform Conditioning
This section aims at revisiting the results of the 1984 original note by Dr Jacques Rivoirard on multivariate UC, in view of its implementation in Isatis. The developments include the information
effect, on the selection block (ultimate estimate), as well as on the panel (panel grade estimate from
exploration data).
17.7.1 Univariate Uniform Conditioning (UC)
l Let:
- v be the generic selection block (SMU),
- Z(v) its true grade,
- and Z(v)* its ultimate estimate.
l The recoverable resources above cutoff grade z, to be estimated, are:
- the ore T(z) = 1_{Z(v)* ≥ z}
- the metal Q(z) = Z(v) 1_{Z(v)* ≥ z}
17.7.1.1 Discrete Gaussian model
We use here the discrete Gaussian model for change of support. A standard Gaussian variable Y
is associated to each raw variable Z. The model is defined by:
m the block anamorphosis Z(v) = Φ_r(Y_v), with change of support coefficient r, deduced
from the sample point anamorphosis Z(x) = Φ(Y(x)) through the integral relation:

Φ_r(y) = ∫ Φ(ry + √(1 - r²) u) g(u) du   (eq. 17.7-1)

(this expresses Cartier's relation E[Z(x) | Z(v)] = Z(v) for a point random in a block)
m the block estimate anamorphosis Z(v)* = Φ_s(Y_v*), with change of support coefficient s
(this assumes a linear estimate Z(v)* = Σ_α λ_α Z_α, with weights λ_α summing to 1 and
being positive, so that they can be considered as the probabilities of a random point)
m the correlation ρ_{vv*} between the standard Gaussian variables Y_v and Y_v*.
17.7.1.2 Uniform Conditioning
UC by panel grade aims at estimating the recoverable resources on a generic selection block within
a large block or panel V, conditionally on the sole panel grade, or for more generality, panel grade
estimate, say, Z(V)*:
T_V(z)* = E[1_{Z(v)* ≥ z} | Z(V)*]   (eq. 17.7-2)

Q_V(z)* = E[Z(v) 1_{Z(v)* ≥ z} | Z(V)*]   (eq. 17.7-3)

The idea is to impose the panel grade, estimated for instance by Ordinary Kriging, in order to avoid
the attraction to the mean that may be caused by some techniques in case of deviation from stationarity. The estimation of the metal at 0 cutoff must then satisfy the relation:
E[Z(v) | Z(V)*] = Z(V)*
This fundamental relation has several important consequences:
m The panel grade estimate Z(V)* is implicitly assumed to be conditionally unbiased:
E[Z(V) | Z(V)*] = Z(V)*.
Note that, having no conditional bias, Z(V)* cannot take negative values (as may be caused, in
kriging, by negative weights). Negative values would anyway not be supported by the coming
anamorphosis.
m The Gaussian anamorphosis of Z(V)* is necessarily of the same form as that of Z(v).
Let:

Z(x) = Φ(Y(x))   (eq. 17.7-4)

Z(v) = Φ_r(Y_v)   (eq. 17.7-5)

Z(V)* = Φ_S(Y_V*)   (eq. 17.7-6)
where Y(x), Y_v and Y_V* are standard Gaussian variables. The fundamental relation gives:

Z(V)* = Φ_S(Y_V*) = E[Φ_r(Y_v) | Y_V*] = Φ_{r ρ_{vV*}}(Y_V*), hence S = r ρ_{vV*}   (eq. 17.7-7)

denoting ρ_{vV*} = corl(Y_v, Y_V*).
Hence the anamorphosis of Z(V)* is inherited from that of Z(v). This holds whatever the estimate,
not only for linear combinations of Z sample values. In practice S is obtained from the panel estimation, and the previous relation will be used to compute:

ρ_{vV*} = corl(Y_v, Y_V*) = S/r   (eq. 17.7-8)

from r and S.
m The estimate of the metal at 0 cutoff must also satisfy:

E[Z(v) | Z(V)*] = E[ E[Z(v) | Z(v)*, Z(V)*] | Z(V)*] = E[ E[Z(v) | Z(v)*] | Z(V)*]   (eq. 17.7-9)

assuming that Z(v) and Z(V)* can be considered independent, conditionally on Z(v)*,
that is:

ρ_{vV*} = ρ_{vv*} ρ_{v*V*}   (eq. 17.7-10)

which gives in practice:

ρ_{v*V*} = ρ_{vV*} / ρ_{vv*} = S / (r ρ_{vv*})   (eq. 17.7-11)
17.7.2 Multivariate model
Indices are now added to distinguish the variables. Let Z_1 be the metal grade used for the selection,
and Z_2 the secondary metal grade. In addition to the univariate case seen above, we now want to
estimate the other metals, for instance:
Q_2(z) = Z_2(v) 1_{Z_1(v)* ≥ z}
17.7.2.3 Extension of the discrete Gaussian model
The other metal is globally:

E[Q_2(z)] = E[Z_2(v) 1_{Z_1(v)* ≥ z}] = E[ E[Z_2(v) | Z_1(v)*] 1_{Z_1(v)* ≥ z}] = ∫_{u ≥ y} Φ_{2, r_2 ρ_{2v1v*}}(u) g(u) du   (eq. 17.7-12)

where y is the Gaussian value corresponding to the cutoff z. It requires:
m the anamorphosis of Z_2(v), with change of support coefficient r_2;
m the correlation ρ_{2v1v*} between the Gaussian variables Y_{2v} and Y_{1v*}.
Assuming that Y_{2v} and Y_{1v*} are independent, conditional on Y_{1v}, this will be deduced from:

ρ_{2v1v*} = ρ_{2v1v} ρ_{1v1v*}   (eq. 17.7-13)

17.7.2.4 Bivariate Uniform Conditioning
Bivariate UC consists in estimating the recoverable resources in panel V from the sole vector
(Z_1(V)*, Z_2(V)*). The problem is simplified, however, by assuming that Z_1(v) and Z_1(v)* are,
conditional on Z_1(V)*, independent from the auxiliary metal panel grade, so that
the UC estimates for the selection variable correspond to the univariate case. It results that:

Q_2(z)* = E[Z_2(v) 1_{Z_1(v)* ≥ z} | Z_1(V)*, Z_2(V)*]   (eq. 17.7-14)
For 0 cutoff, we impose:

E[Z_2(v) | Z_1(V)*, Z_2(V)*] = Z_2(V)* = E[Z_2(v) | Z_2(V)*]   (eq. 17.7-15)

This is similar to the univariate case. We have the following:
m Z_2(V)* is implicitly assumed to be conditionally unbiased. As a consequence it should not
take negative values, as may be caused in kriging or cokriging by negative weights. Negative values would anyway not be supported by the coming anamorphosis.
m The Gaussian anamorphosis of Z_2(V)* is of the same form as that of Z_2(v) (this holds
whatever the estimate used for Z_2(V)*, e.g. kriging or cokriging):

Z_2(V)* = E[Φ_{2,r_2}(Y_{2v}) | Y_{2V*}] = Φ_{2, r_2 ρ_{2v2V*}}(Y_{2V*}) = Φ_{2,S_2}(Y_{2V*})   (eq. 17.7-16)

denoting:

S_2 = r_2 ρ_{2v2V*}, with ρ_{2v2V*} = corl(Y_{2v}, Y_{2V*})   (eq. 17.7-17)

In practice S_2 is obtained from the panel estimation Z_2(V)*, and the previous relation will be used
to get:

ρ_{2v2V*} = corl(Y_{2v}, Y_{2V*}) = S_2 / r_2   (eq. 17.7-18)

Since Z_2(v) and Z_1(V)* are considered independent, conditional on Z_2(V)*, we have:

corl(Y_{2v}, Y_{1V*}) = corl(Y_{2v}, Y_{2V*}) corl(Y_{2V*}, Y_{1V*})   (eq. 17.7-19)

i.e.
Multivariate Recoverable Resources Models 217
$\rho(Y_{2v}, Y_{1V^*}) = \rho(Y_{2v}, Y_{2V^*})\,\rho(Y_{1V^*}, Y_{2V^*}) = \dfrac{S_2}{r_2}\,\rho(Y_{1V^*}, Y_{2V^*})$   (eq. 17.7-20)

By symmetry between the metals, we can assume Z1(v) and Z2(V)* independent, conditional on Z1(V)*, that is:

$\rho(Y_{1v}, Y_{2V^*}) = \rho(Y_{1v}, Y_{1V^*})\,\rho(Y_{1V^*}, Y_{2V^*}) = \dfrac{S_1}{r_1}\,\rho(Y_{1V^*}, Y_{2V^*})$   (eq. 17.7-21)

We will finally assume that Z1(v)* and Z2(V)* are independent, conditional on Z1(V)*, that is:

$\rho(Y_{1v^*}, Y_{2V^*}) = \rho(Y_{1v^*}, Y_{1V^*})\,\rho(Y_{1V^*}, Y_{2V^*})$   (eq. 17.7-22)
17.8 Theoretical reminders on Discrete Gaussian
model applied to block simulations
The direct block simulation is based on the hypotheses and properties of the Discrete Gaussian
Model.
17.8.1 Univariate Case
Let v be the generic selection block (SMU) and Z(v) its true grade.
17.8.1.1 Discrete Gaussian model
In the discrete Gaussian model for change of support, a standard Gaussian variable Y is associated with each raw variable Z. The model is defined by:
m the block anamorphosis Z(v) = Φr(Yv), with change of support coefficient r, deduced from the sample point anamorphosis Z(x) = Φ(Y(x)) through the integral relation:

$\Phi_r(y) = \int \Phi\big(ry + \sqrt{1 - r^2}\,u\big)\,g(u)\,du$   (eq. 17.8-1)

(this expresses Cartier's relation, E[Z(x) | Z(v)] = Z(v), for a point taken at random in a block)
The model is also characterized by the relationships between block covariances and point covariances, i.e.:
- Cov[Yv(x), Yv(x+h)] represents the covariance between any pair of blocks v(x) and v(x+h).
- The point-block covariance can be shown to be Cov[Y(x), Yv(x+h)] = r Cov[Yv(x), Yv(x+h)].
- The point-point covariance is then Cov[Y(x), Y(x+h)] = r Cov[Y(x), Yv(x+h)] = r² Cov[Yv(x), Yv(x+h)], with x within vi and x+h within vj,
- except for a point and itself (h=0) where Cov[Y(x), Y(x)] = Var[Y(x)] = 1.
17.8.1.2 Determination of parameters
The change of support coefficient is determined by inversion of the variance of Z(v), calculated
from the variogram, as a function of r.
$\mathrm{var}\,Z(v) = \mathrm{var}\,Z(x) - \bar\gamma(v,v)$   (eq. 17.8-2)

$\mathrm{Var}[Z(v)] = \sum_{n=1}^{N} \phi_n^2\,r^{2n}$   (eq. 17.8-3)

where the φn are the coefficients of the expansion of the anamorphosis function into N Hermite polynomials.
The block gaussian covariance Cov[Yvi, Yvj] is determined by inversion from cov(Z(v(x)), Z(v(x+h))), which is the regularized covariance of Z(x).
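As a minimal numerical sketch of this inversion, the change of support coefficient can be obtained from eq. 17.8-3 by bisection, since the left-hand side increases monotonically with r (the Hermite coefficients below are purely illustrative):

```python
import numpy as np

def support_coefficient(phi, var_block, tol=1e-10):
    """Solve eq. 17.8-3 for r: sum_{n=1..N} phi_n^2 * r^(2n) = Var[Z(v)].
    phi holds the Hermite coefficients phi_1..phi_N (phi_0 excluded).
    The left-hand side grows monotonically from 0 (r=0) up to the point
    variance (r=1), so a simple bisection is enough."""
    phi = np.asarray(phi, dtype=float)
    n = np.arange(1, len(phi) + 1)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        r = 0.5 * (lo + hi)
        if np.sum(phi ** 2 * r ** (2 * n)) < var_block:
            lo = r
        else:
            hi = r
    return 0.5 * (lo + hi)

phi = [1.0, 0.5]          # illustrative coefficients (point variance = 1.25)
r = support_coefficient(phi, var_block=0.8)
```

The same bisection scheme applies to the inversion of the block gaussian covariance from the regularized covariance.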
17.8.2 Multivariate Case
Indices are now added to distinguish the variables. The formulae hereafter consider 2 variables and can be extended to more than 2. Unlike UC, no distinction is made among the variables (main or auxiliary).
17.8.2.3 Extension of the discrete gaussian model
It requires:
m the anamorphosis of Zj(v), with change of support coefficient rj,
m the covariance of each variable separately,
m the cross-covariances between the variables taken two by two:
- the cross-covariance of block gaussian variables is Cov[Yiv(x), Yjv(x+h)]
- the point-block cross-covariance is then: Cov[Yi(x), Yjv(x+h)] = ri Cov[Yiv(x), Yjv(x+h)]
- the point-point cross-covariance is then: Cov[Yi(x), Yj(x+h)] = ri rj Cov[Yiv(x), Yjv(x+h)]
- except between a point and itself, where the covariance is derived from the statistics on the data: Cov[Yi(x), Yj(x)]
17.8.2.4 Determination of the parameters
The previous method, based on the inversion between raw and gaussian statistics, could be extended to the multivariate case. But the uniqueness of the solution is not guaranteed when deriving the block gaussian cross-covariance Cov[Yiv(x), Yjv(x+h)] from the raw block cross-covariance Cov(Zi(v(x)), Zj(v(x+h))).
Therefore another method (cf. Emery and Ortiz, 2005) has been implemented.
It is based on the following property:
It is based on the following property:
the standard block gaussian variable corresponds to the (normalized) regularized point gaussian (here univariate):

$Y_v = \dfrac{\bar Y(v)}{r}$   (eq. 17.8-4)

$\bar Y(v) = \dfrac{1}{v}\int_v Y(x)\,dx$   (eq. 17.8-5)

This leads to another way of determining the change of support coefficient:

$r^2 = \mathrm{var}\,\bar Y(v) = \mathrm{var}\,Y(x) - \bar\gamma_Y(v,v) = 1 - \bar\gamma_Y(v,v)$   (eq. 17.8-6)

It is derived from the variogram of the gaussian variable, instead of the variogram of the raw variable in the classical method. In the same spirit we can calculate directly the block gaussian covariances and cross-covariances from the regularized covariances and cross-covariances of the gaussian data:

$\mathrm{cov}(Y_{1v}, Y_{1v_h}) = \dfrac{\mathrm{cov}(\bar Y_1(v), \bar Y_1(v_h))}{r_1^2} = \dfrac{\bar C_{Y_1}(v, v_h)}{r_1^2}$   (eq. 17.8-7)

$\mathrm{cov}(Y_{1v}, Y_{2v_h}) = \dfrac{\mathrm{cov}(\bar Y_1(v), \bar Y_2(v_h))}{r_1 r_2} = \dfrac{\bar C_{Y_1 Y_2}(v, v_h)}{r_1 r_2}$   (eq. 17.8-8)
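A minimal sketch of eq. 17.8-6: the change of support coefficient is obtained from the mean value of the Gaussian variogram over the block, here approximated by discretizing a 1-D block into regularly spaced points (the spherical model and its parameters are illustrative):

```python
import numpy as np

def gamma_bar(variogram, block_size, n=16):
    """Approximate the mean variogram value over a block, gamma_bar(v, v),
    by discretizing the (1-D) block into n regularly spaced points."""
    x = (np.arange(n) + 0.5) / n * block_size
    h = np.abs(x[:, None] - x[None, :])   # all pairwise distances in the block
    return variogram(h).mean()

def spherical(h, a=50.0):
    """Spherical variogram with unit sill and range a."""
    u = np.minimum(h / a, 1.0)
    return 1.5 * u - 0.5 * u ** 3

# eq. 17.8-6: r^2 = 1 - gamma_bar_Y(v, v), using the Gaussian variogram
r = np.sqrt(1.0 - gamma_bar(spherical, block_size=10.0))
```

The block covariances of eqs. 17.8-7 and 17.8-8 are obtained in the same fashion, averaging the point covariance over pairs of discretized blocks.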
17.8.3 General workflow for the direct block simulation
The flowchart below visualizes the different steps to carry out before running the direct block simulations.
Figure 1: workflow for multivariate UC and direct block simulations.
The simulation is then achieved by means of the following steps:
m Non conditional simulation using Turning Bands of the block gaussian variogram.
m Transform of the simulated block values into point values at data locations.
m Conditioning by a cokriging based on the Discrete Gaussian Model.
According to that model, Y(x) and Yv make a pair of bi-gaussian variables. If we consider two variables "i" and "j", the Gaussian point and block values are obtained by means of a linear regression. In the formulae below the variables Gi and Gj are the normal residuals of the linear regression:

$Y_i(x) = r_i\,Y_{iv} + \sqrt{1 - r_i^2}\;G_i$   (eq. 17.8-9)

$Y_j(x) = r_j\,Y_{jv} + \sqrt{1 - r_j^2}\;G_j$   (eq. 17.8-10)

Knowing the covariance model Cijv(h) of the block Gaussian values we can derive the covariance between point Gaussian values Cij(h):

$C_{ij}(h) = r_i\,r_j\,C_{ijv}(h)$   (eq. 17.8-11)
These covariances are used to establish the cokriging system when conditioning the block simulated values by point Gaussian values considered as random in a block.
The cokriging matrix has to be transformed to account for the case where the two data points are not only located in the same block but are the same point. The modifications are then:
- When the two variables (i and j) are the same we use Civ(0) (equal to 1 when the block Gaussian variogram is normalized).
- When the two variables are different we use Cij(0), that is the covariance between point gaussian values.
Using the relationship between Y(x) and Yv we can derive the covariance between the residuals Gi, Gj:

$C_{ij}(0) = r_i\,r_j\,C_{ijv}(0) + \sqrt{1 - r_i^2}\,\sqrt{1 - r_j^2}\;\mathrm{Cov}(G_i, G_j)$   (eq. 17.8-12)

The corresponding correlations are then:

$\rho(G_i, G_j) = \dfrac{C_{ij}(0) - r_i\,r_j\,C_{ijv}(0)}{\sqrt{1 - r_i^2}\,\sqrt{1 - r_j^2}}$   (eq. 17.8-13)

This correlation matrix must be positive definite, i.e. with positive eigenvalues.
This property is checked and has to be respected before using that model for the direct block simulation.
m Optionally a sample randomly located in the block can be calculated from the simulated block values. This comes directly from the discrete gaussian model, where the point and the block values are linked by the relationships above (eq. 17.8-9) (eq. 17.8-10). The two normal variables Gi and Gj are taken at random from a bi-gaussian distribution with the coefficient of correlation given in (eq. 17.8-13).
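The point-in-block step can be sketched numerically. The snippet below assumes the standard DGM regression Y(x) = r·Yv + sqrt(1 - r²)·G, with G a standard normal residual independent of the block value; the support coefficient is illustrative:

```python
import numpy as np

def draw_point_in_block(y_block, r, rng):
    """Draw point Gaussian values at random locations inside blocks, given
    the simulated block Gaussian values y_block and the change of support
    coefficient r (residual G independent of the block value)."""
    g = rng.standard_normal(np.shape(y_block))
    return r * np.asarray(y_block) + np.sqrt(1.0 - r * r) * g

rng = np.random.default_rng(0)
y_v = rng.standard_normal(200_000)   # block Gaussian values, N(0, 1)
r = 0.85                             # illustrative support coefficient
y_x = draw_point_in_block(y_v, r, rng)
# y_x is again standard normal and its correlation with y_v is close to r
```

In the multivariate case the residuals (Gi, Gj) would be drawn jointly, with the correlation of eq. 17.8-13, instead of independently per variable.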
18 Localized Uniform Conditioning
Note - This technical reference is based on: Abzalov, M.Z. (2006) Localised Uniform Conditioning (LUC): A New Approach to Direct Modelling of Small Blocks. Mathematical Geology 38(4), p. 393-411.
Localized Uniform Conditioning is designed for non-linear estimation in the mining industry and should be used as a post-processing of Uniform Conditioning. This is a specific method which makes it possible to estimate the grades at the block scale.
The classic Uniform Conditioning method estimates only the panel proportion of recoverable mineralization, without identifying the location of the recoverable blocks. Besides, it is commonly admitted that using Ordinary Kriging to estimate small blocks is inappropriate when the data are too sparse compared to the block size. As a matter of fact, these two methods (Uniform Conditioning and Ordinary Kriging) do not make it possible to efficiently determine the actual location of the economically extractable blocks. The Localized Uniform Conditioning method aims at filling this gap. This method estimates the localized block grades by using the grade-tonnage curves given by Uniform Conditioning and by reproducing the spatial grade distribution obtained by Ordinary Kriging. Actually the concept of LUC is to use the grade ranking provided by Ordinary Kriging while keeping the grade distribution given by Uniform Conditioning.
18.1 Algorithm
The Uniform Conditioning estimates the grade-tonnage curves for each panel. The grade-tonnage curves correspond to the tonnage and grade of mineralization which can be recovered for a cut-off value. The Localized Uniform Conditioning algorithm then estimates the mean grades of the grade classes in each panel. Then the algorithm ranks the SMU blocks distributed in each panel in increasing order of their grade (estimated by Ordinary Kriging). Finally, the mean grades (Mi) of the grade classes (Gci) which have been deduced from the UC model are assigned to the SMU blocks whose rank matches the grade class. The grade class is the portion of the panel whose grade lies between a given cut-off (Zci) and the following cut-off (Zci+1).
In other words:

$G_{ci} = T_i(Z_{ci}) - T_{i+1}(Z_{ci+1})$   (eq. 18.1-1)

With Gci = grade class, Ti(Zci) = the recoverable tonnage at cut-off (Zci) and Ti+1(Zci+1) = the recoverable tonnage at cut-off (Zci+1). By defining the SMU ranks as proportions of the panel tonnage Tv, the SMU ranks can be converted into the grade classes:
(eq. 18.1-2)
With SMUK = the SMU of rank K, TK = the proportion of the panel tonnage distributed in SMU blocks whose rank is equal to or lower than K, and TK+1 = the proportion of the panel distributed in SMU blocks having a higher rank.
Then the UC model makes it possible to deduce the mean grades (Mi) of the grade classes (Gci) in the panels. Finally, by matching the class indexes, the mean grade (Mi) of each class can be transferred to the SMUK blocks.
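The ranking-and-assignment step can be sketched as follows for one panel; the panel values and class definitions are illustrative, with the class proportions taken from the UC grade-tonnage curve:

```python
import numpy as np

def localize_panel(smu_ok_grades, class_props, class_means):
    """Assign the UC grade-class mean grades Mi to the SMUs of one panel.
    smu_ok_grades : Ordinary Kriging grades of the SMUs (used for ranking only)
    class_props   : proportion of the panel tonnage in each grade class Gci
                    (from the UC grade-tonnage curve; must sum to 1)
    class_means   : mean grade Mi of each grade class
    Returns one localized grade per SMU block."""
    order = np.argsort(smu_ok_grades)                 # increasing OK grade
    n = len(smu_ok_grades)
    bounds = np.round(np.cumsum(class_props) * n).astype(int)
    out = np.empty(n)
    start = 0
    for stop, m in zip(bounds, class_means):
        out[order[start:stop]] = m                    # low classes -> low ranks
        start = stop
    return out

ok = np.array([0.3, 1.2, 0.7, 2.1, 0.5, 1.8, 0.9, 1.1,
               0.4, 1.5, 0.6, 2.4, 0.8, 1.0, 1.3, 1.9])   # 16 SMUs in the panel
grades = localize_panel(ok, class_props=[0.5, 0.25, 0.25],
                        class_means=[0.5, 1.2, 2.0])
```

Only the ranking of the OK grades is used; the grade values themselves come entirely from the UC grade-tonnage curve, which is what preserves the UC grade distribution.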
(fig. 18.1-1)
Example of LUC on 16 SMU blocks per panel, and six cut-off values on the grade-tonnage curve coming from the Uniform Conditioning.
19 Skin algorithm
The new algorithm used to fill an incomplete grid of values is called the Skin Algorithm. This simple technique allows a very flexible input and therefore can be used for several applications, such as Fluid Propagation or Wave Propagation.
19.1.1 VOCABULARY
The Skin algorithm only makes sense on a regular grid, which is characterized by its geometry and
its total number of cells (N1).
The variable of interest is initially defined on a set of grid nodes that we will call the Initial Domain
of Definition (N2).
The grid may also contain an active selection (masking off N3 cells) which reduces the number of
cells that can be ultimately filled (N4).
An obvious formula states that: N4 = N1 - N2 - N3.
Each cell of the grid (already filled or not) may also be assigned an attribute (say its permeability)
which acts as a weighting factor to speed up or slow down the propagation. This property is not
defined in the basic case of grid filling.
This weighting factor can also be influenced by a speed coefficient which describes the velocity
with which a fluid (known in a cell already filled) would tend to invade the adjacent cell. This fea-
ture can be used to introduce some anisotropy in the weighting factor or discriminating among sev-
eral fluids. This feature, essential in the Fluid propagation algorithm, is not used for Grid Filling.
In this paper, we will refer to the neighborhood of a target cell. This refers to all the cells which are adjacent and have a face (not just a corner) in common with the target cell. Therefore the neighborhood is limited to the 2 adjacent cells in 1-D, 4 in 2-D and 6 in 3-D.
19.1.2 INITIALIZATION
We start with the cells contained in the Initial Domain of Definition and establish the skin. The skin is composed of all the cells immediately contiguous to a cell contained in the Initial Domain of Definition. In other words, a cell belongs to the skin if one of its neighboring cells belongs to the Initial Domain of Definition.
The next step consists in filling one cell of this skin. All the cells of the skin do not have the same probability of being elected. For a given target cell belonging to the skin, this probability is computed as the sum of the weights induced by those cells, already filled, which belong to the neighborhood of the target cell. This weight is given by the (permeability) property carried by the cell (if defined) or 1 if not defined. This weight will serve as energy for each cell of the skin. The non-weighted version of the energy leads to the Skin Length.
The Initial Skin Energy is the sum of these energies for all the cells that constitute the skin. When
the initialization step is ended, a message is produced (in the verbose option) giving:
- The total number of cells (N1)
- The number of cells already filled (N2)
- The number of cells masked off (N3)
- The number of cells to be processed (N4)
- The initial energy of the skin
19.1.3 ITERATIVE PROCEDURE
At each step, a cell is drawn at random among the ones belonging to the skin. The draw is per-
formed by generating a uniform value between 0 and the Skin Energy: this procedure favors cells
with high energy.
When a cell of the skin has been elected, it is turned into the set of cells already filled. The skin
energy is updated by:
- Removing the energy due to the new incoming cell
- Adding the energy linked to the cells belonging to the neighborhood of the new incoming cell
The new Skin Energy is then used for the next iteration.
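The initialization and iteration steps above can be sketched as follows for a 2-D grid (grid size, values and mask are illustrative, and a filled value is simply copied from a filled neighbor once a skin cell is elected):

```python
import random

def skin_fill(nx, ny, filled, masked=frozenset(), perm=None, seed=0):
    """Fill an nx-by-ny grid with the Skin algorithm (minimal sketch).
    filled : dict {(ix, iy): value} - the Initial Domain of Definition
    masked : cells that may never be filled (active selection)
    perm   : optional per-cell weights (permeability), defaulting to 1
    At each step a skin cell is drawn with probability proportional to the
    sum of the weights of its already-filled neighbors (its 'energy'),
    then it takes the value of one of those neighbors."""
    rng = random.Random(seed)
    filled = dict(filled)

    def neighbors(c):
        x, y = c
        cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        return [(i, j) for i, j in cand if 0 <= i < nx and 0 <= j < ny]

    def energy(c):  # energy brought by the already-filled neighbors
        return sum((perm or {}).get(n, 1.0) for n in neighbors(c) if n in filled)

    skin = {n for c in filled for n in neighbors(c)
            if n not in filled and n not in masked}
    while skin:
        cells = sorted(skin)
        cell = rng.choices(cells, weights=[energy(c) for c in cells])[0]
        donors = [n for n in neighbors(cell) if n in filled]
        filled[cell] = filled[rng.choice(donors)]     # cell joins the filled set
        skin.discard(cell)
        skin.update(n for n in neighbors(cell)        # skin update
                    if n not in filled and n not in masked)
    return filled

grid = skin_fill(4, 4, filled={(0, 0): 7.0}, masked={(3, 3)})
```

A full implementation would also maintain the Skin Energy incrementally (removing and adding energies as described above) instead of recomputing it at each draw.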
19.1.4 MEMORY MANAGEMENT
During these iterations, the current Skin must be stored in memory. The number of cells contained
in the skin at any time should be much smaller than the total number of cells of the grid.
A dynamic algorithm is used which allocates (and de-allocates) memory by quantum (calculated
empirically as three times the square root of the number of cells). If too small, this quantum is set to
1000 to avoid too numerous core allocation and de-allocation steps which may be time consuming.
19.1.5 END OF ALGORITHM
The algorithm is repeated until all the accessible cells are filled. An accessible cell is a cell that can be linked to the Initial Domain of Definition through an actual path (avoiding the masked cells). Note that some isolated cells, although not masked off, can remain unfilled at the end of the algorithm.
When the end of the skin algorithm is reached, a message is printed (in the verbose case) giving:
- The total number of cells
- The number of cells finally processed
- The core allocation quantum
- The number of core quantum allocations
- The number of iterations
- The maximum skin length
- The maximum skin energy
19 Isatoil
One of the main problems during the exploration and the development phases of a reservoir is to
construct a complex multi-layer geological faulted model recognized by seismic campaigns and
several wells (often deviated and sometimes even horizontal). This is the general framework within
which the methodology of Isatoil has been established.
The main sources of uncertainty come from the quality as well as the quantity of the information,
but also the reservoir's geological structure, the variability of the petrophysical properties and the
location of the gas-oil and oil-water contacts.
Therefore the Isatoil methodology has been developed in order to:
l calculate a base case geological model by using estimation techniques (Kriging) which produce
a set of smooth surfaces that honor all the available data. The graphical display of the general
shape of the reservoirs is used to check the suitability of the model from a geological point of
view,
l apply the same concepts by means of simulation techniques so as to obtain reliable distribution
curves of reservoir volumes, over different exploitation segments which constitute a partition of
the field. The series of volume estimates obtained through simulations for a given layer and a
given segment can be represented as risk curves.
The aim of this geostatistical technique, which works with deviated wells, is to provide accurate
estimates, whatever the number of surfaces involved in the geological model.
19.1 Data description
The framework of this procedure requires several assumptions:
l the layer cake hypothesis implies working with a geological sequence of strata which starts from a given surface named the Top Layering reference surface.
l the same vertical sequence of strata is defined over homogeneous areas of the field. Nevertheless missing strata corresponding to pinch-outs can be handled.
l each stratum is considered as vertically homogeneous as far as petrophysical parameters are
concerned, so that any information processed on the output grid (seismic time, elevations, petro-
physical parameters) can always be defined using a 2D representation - we will refer to them as
maps -
l the sequence is divided vertically into Layers. The Layers are stacked successively with no vacuum, so that the bottom of one Layer always matches the top of the next Layer. Generically, the
top and bottom of a Layer will be called a surface. Some surfaces can be picked using the seis-
mic information (seismic surface), others cannot. All the Layers, when intersected, must be
"visible" on the well information.
The procedure makes it possible to take into account the following information in order to produce a consistent geological model:
l Seismic Time maps. They are provided by a series of picks coming from 3D seismic sections
which cover the whole field. These picks are interpolated in order to produce 2D surfaces which
cover the entire field. When the time map is absent, the corresponding Layer will be missing.
The seismic sections also serve in picking the fault events which create a disruption in the time
map. Within the faulted area, the quality of the time map is questionable and therefore, the pro-
cedure may alter its values. These faults can be represented on 2D maps as fault polygons
which correspond to the projection on the horizontal plane of the areas where the faults have
perturbed the seismic time surface. The procedure is restricted to normal faults.
l Well data. A well consists of a continuous well path through the 3D geological model. The
locations where the well path intersects each surface of the sequence (whether they are reflected
in the seismic time map or not) are recorded; we call them the intercepts. The wells can be vertical or deviated with no limitation - they can even be horizontal -
At some locations along the well path, the values of (static) petrophysical parameters are measured - porosity and the net to gross ratio values - The saturation could obviously not be considered as vertically homogeneous within a layer since it strongly depends on the vertical distance to the oil-water contact. The saturation is therefore calculated using a formula tabulated within the procedure. The petrophysical measurements are usually located within a layer and are therefore different from the intercepts.
l Some Control Surfaces given in depth and which cannot be deduced from the layer cake envi-
ronment, such as unconformities, erosion surfaces or major faults boundaries.
For volume calculation, it is also possible to provide a set of 2D areas which subdivide the field into compartments where the volumes must be calculated. The procedure enables the reservoir to contain up to three different phases (gas, oil and water). If present, the order relationship reflects their density: gas is always located above oil and oil above water. Their presence depends upon the definition of contact surfaces which delineate the transition between two consecutive phases. These contacts can be defined for each layer and within each area, either as a constant value or a map (possibly adding a randomization factor).
19.2 Workflow
The Isatoil methodology has been developed in order to reflect the nature of the data and the geo-
logical structure of the field. This is reflected in the workflow which is used:
1. select a Top Layering reference surface from which the whole sequence will be derived. This
surface usually corresponds to a good seismic marker
2. perform a Depth Conversion of the set of markers for which seismic time maps are available
3. build the (normal) fault surfaces starting from the fault polygons
4. subdivide a unit (between two consecutive Layers) into zones within which the petrophysical parameters can be considered as vertically homogeneous - as well as non correlated between zones -
5. populate each zone with petrophysical variables - porosity, net to gross ratio and saturation -
in order to derive the in-situ volumes above contact surfaces.
All the results are considered as 2D surfaces (or maps) which must match any information collected
along the deviated wells. For petrophysics, this coarse assumption only holds if the variables can be
considered as vertically homogeneous within each layer.
The same workflow is applied for estimation as well as for simulations. In the latter case, each
sequence produces several outcomes. Caution is required when combining the outcomes of differ-
ent sequences in order to keep the consistency of the global geological model: let us recall, for
example, that, by construction, the sum of the thicknesses of the layers within a unit must match the
total thickness of this unit.
19.3 Modelling the geological structure
Considering the geometrical surfaces in general, it seems natural to work with thicknesses counted from the Top Layering reference surface rather than working directly with depths. This avoids the problem of considering drifts - regional trends - when the whole geological sequence is tilted.
In this section, all the variables are considered as 2D surfaces. They are referenced by a set of 2D
coordinates possibly reduced to a single index when there can be no confusion. The surfaces are
numbered downwards, starting from 0 with the Top Layering reference surface.
If $x_\alpha$ and $x_\beta$ designate two wells, we denote $T_i(x_\alpha)$ the thickness of the layer i at the point x of the well $\alpha$, while $D_i(x_\alpha)$ is the true vertical depth down to the layer i at the point x counted from the Top Layering reference surface: it also corresponds to the sum of the thicknesses of all the layers above i, measured at point x:

$D_i(x_\alpha) = \sum_{j=0}^{i} T_j(x_\alpha)$   (eq. 19.3-1)

These definitions are illustrated in the next figure showing two representative well geometries: the well $\alpha$ is vertical while the well $\beta$ is strongly deviated:
(fig. 19.3-1)
19.3.1 Depth Conversion using a sequential approach
An intuitive approach consists in computing the thickness of the different layers individually and successively, starting from the Top Layering reference surface - considered as being depth converted for simplicity - The first step is to obtain an estimate of the thickness of the first layer using the relevant information (i.e. any $T_1(x_\alpha)$) and the corresponding variance of estimation map.
When computing the thickness of the second layer, the vertical wells provide a valid thickness information $T_2(x_\alpha)$. Conversely, the deviated well does not provide any valuable direct information. At the point $x_\beta$ the thickness of the second layer can only be obtained indirectly as follows:

$\tilde T_2(x_\beta) = D_2(x_\beta) - T_1^*(x_\beta)$   (eq. 19.3-1)

where $T_1^*(x_\beta)$ designates the estimation of the thickness of the first layer at the point $x_\beta$. We recall that this value only represents an estimate (with an attached uncertainty) and, therefore, not an exact quantity.
19.3.2 Depth conversion using a direct approach
Isatoil offers an alternative solution, i.e. to consider the thicknesses of all the layers as variables which are treated simultaneously, and therefore to use a Cokriging for the estimation.
The relevant information consists in thickness data, which we do not have except for vertical wells. Instead Isatoil makes use of the true vertical depth information, which is related to the thicknesses through the equation (eq. 19.3-1). It shows that any information coming from an intercept of a well (deviated or vertical) with the layers of interest can serve.
The geostatistical model required by the Cokriging procedure must contain as many variables as there are Layers in the system, and it must be fitted at once using all the cumulative thicknesses provided by the intercepts. The model is built in the framework of Linear Coregionalization.
A restriction in this multivariate approach comes from the highly - if not entirely - heterotopic behavior of the information. As a matter of fact, the cumulative thickness information is seldom defined for different layers at the same 2D point. In fact this only happens for vertical wells where we know all the cumulative thicknesses at the same location (we also know the individual thicknesses by difference). Therefore, we can hardly calculate any experimental cross-variogram since the count of pairs would be very low for all lags. The solution consists in working with (non centered) covariances and cross-covariances instead, which implies a strict stationarity limitation.
Let us now establish the principles of the special cokriging technique which consists in estimating directly the thickness of each layer by cokriging starting from the cumulative thicknesses
down to any layer. For legibility purposes, we can introduce the weighting p-vector $p(x)$ (its dimension is equal to the count of layers N), such that:

$D(x) = \sum_{l=0}^{N} p_l(x)\,T_l(x)$   (eq. 19.3-1)

where the element $p_l(x)$ indicates whether the layer l is included in the cumulative thickness $D(x)$ or not: in other words, whether the information at the point x is an intercept with a layer deeper than l or not. For example, the p-vector is $p = [1, 0, 0, \ldots, 0]$ for an intercept with the first surface and $p = [1, 1, 0, \ldots, 0]$ for an intercept with the second layer.
The cokriging system follows, established for the target layer i and the target node $x_0$:

$T_i^*(x_0) = \sum_\alpha \lambda_\alpha\,D(x_\alpha) = \sum_\alpha \lambda_\alpha \sum_{l=0}^{N} p_l(x_\alpha)\,T_l(x_\alpha)$
with $E[T_i^*(x_0) - T_i(x_0)] = 0$ and $\mathrm{Var}[T_i^*(x_0) - T_i(x_0)]$ minimum   (eq. 19.3-2)

When expanding the previous set of equations, we introduce the following covariance terms:

$\sigma_{ij}(h) = \mathrm{Cov}[T_i(x+h), T_j(x)]$
$C'_{ij}(h) = \mathrm{Cov}[T_i(x+h), D_j(x)] = \sum_{l \le j} p_l(x)\,\sigma_{il}(h)$
$C''_{ij}(h) = \mathrm{Cov}[D_i(x+h), D_j(x)] = \sum_{l \le i}\,\sum_{m \le j} p_l(x+h)\,p_m(x)\,\sigma_{lm}(h)$   (eq. 19.3-3)

If we recall that the p-vectors are known, the quantities $C'_{ij}(h)$ and $C''_{ij}(h)$ only depend on the covariance terms $\sigma_{ij}(h)$ between elementary thicknesses, for which a genuine multivariate model is required.
The cokriging system (eq. 19.3-2) can then be expanded as follows, according to the strict stationarity hypothesis:

$\sum_\beta \lambda_\beta\,C''_{\beta\alpha} = C'_{i\alpha} \quad \forall \alpha$
$\sigma_i^2 = \sigma_{ii}(0) - \sum_\alpha \lambda_\alpha^i\,C'_{i\alpha}$   (eq. 19.3-4)

where $C''_{\alpha\beta}$ designates the covariance $\mathrm{Cov}(D_\alpha, D_\beta)$ and is expressed using $\sigma_{ij}(h_{\alpha\beta})$ according to the rank of the layers intercepted at points $x_\alpha$ and $x_\beta$ and the distance $h_{\alpha\beta}$ between them. Similarly $C'_{i\alpha} = \mathrm{Cov}(D_\alpha, T_i)$ depends on the rank of the layer intercepted at the data point $x_\alpha$, the target layer i at point $x_0$, and the distance $h_{0\alpha}$ between them. For simplicity, the distance arguments $h_{\alpha\beta}$ and $h_{0\alpha}$ will systematically be skipped in the following.
Note that the system (eq. 19.3-4) also provides the variance of the estimation and can be established for the estimation of each target layer i, hence the additional index attached to the cokriging weights $\lambda_\alpha^i$.
The dimension of this cokriging system is equal to the number of intercepts.
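The cumulative thickness convention behind this system can be sketched as follows (a hypothetical 4-layer sequence; the p-vector notation is the one of eq. 19.3-1):

```python
import numpy as np

def p_vector(n_layers, deepest):
    """p-vector of an intercept: entry l is 1 when layer l contributes to
    the cumulative thickness D (the intercept reaches layer `deepest`),
    0 otherwise."""
    p = np.zeros(n_layers)
    p[: deepest + 1] = 1.0
    return p

def cumulative_depth(p, thicknesses):
    """D(x) = sum_l p_l(x) * T_l(x): the true vertical depth counted from
    the Top Layering reference surface."""
    return float(np.dot(p, thicknesses))

T = np.array([10.0, 6.0, 8.0, 4.0])   # layer thicknesses at one 2-D location
p1 = p_vector(4, 0)                   # intercept with the first layer
p2 = p_vector(4, 1)                   # intercept with the second layer
d2 = cumulative_depth(p2, T)          # 10.0 + 6.0
```

With these p-vectors, the covariances C' and C'' of eq. 19.3-3 reduce to weighted sums of the elementary thickness covariances, which is what the cokriging matrix is built from.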
19.3.3 Depth Conversion with velocities
When performing a time-to-depth conversion, it is common practice to work with velocities rather than with thicknesses directly. Let us consider the interval velocity $v_i$ of each layer, which is defined as the ratio between the thickness $T_i$ of the layer and the time thickness in the same layer (denoted $t_i$ and known everywhere):

$v_i = \dfrac{T_i}{t_i}$   (eq. 19.3-1)

Once more, the actual information provided by a deviated well consists in the true vertical depth $D_i$. We can therefore introduce the apparent velocity $V_i$ which corresponds to the velocity averaged over the layers intercepted down to the layer i, i.e. the cumulative thickness divided by the cumulative time thickness $\tau_i$:
$V_i = \dfrac{D_i}{\tau_i} = \dfrac{\sum_{l \le i} T_l}{\tau_i} = \dfrac{\sum_{l \le i} t_l\,v_l}{\tau_i} = \sum_{l \le i} \dfrac{t_l}{\tau_i}\,v_l$   (eq. 19.3-2)

For better legibility, we can use the same formalism as before, introducing the p-vector where each element represents the proportion of the time thickness spent in an intercepted layer, and zero for a layer not intercepted. Note that, this time, the elements of a p-vector always add up to 1:

$p = \left[\dfrac{t_1}{\tau},\ \dfrac{t_2}{\tau},\ \ldots,\ 0\right]$   (eq. 19.3-3)

and write the apparent velocity $V_i$ as a linear combination of the interval velocities $v_l$:

$V_i(x) = \sum_{l=0}^{N} p_l(x)\,v_l(x)$   (eq. 19.3-4)

Hence the cokriging system, expressed using velocities, and written for the target layer i and the target node $x_0$:

$v_i^*(x_0) = \sum_\alpha \lambda_\alpha\,V(x_\alpha) = \sum_\alpha \lambda_\alpha \sum_{l=0}^{N} p_l(x_\alpha)\,v_l(x_\alpha)$
with $E[v_i^*(x_0) - v_i(x_0)] = 0$ and $\mathrm{Var}[v_i^*(x_0) - v_i(x_0)]$ minimum   (eq. 19.3-5)

which can be expanded as follows:

$\sum_\beta \lambda_\beta\,C''_{\beta\alpha} = C'_{i\alpha} \quad \forall \alpha; \qquad \sigma_i^2 = \sigma_{ii}(0) - \sum_\alpha \lambda_\alpha^i\,C'_{i\alpha}$   (eq. 19.3-6)

This time, the covariance terms refer to:
$\sigma_{ij}(h) = \mathrm{Cov}[v_i(x+h), v_j(x)]$
$C'_{ij}(h) = \mathrm{Cov}[v_i(x+h), V_j(x)] = \sum_{l \le j} p_l(x)\,\sigma_{il}(h)$
$C''_{ij}(h) = \mathrm{Cov}[V_i(x+h), V_j(x)] = \sum_{l \le i}\,\sum_{m \le j} p_l(x+h)\,p_m(x)\,\sigma_{lm}(h)$   (eq. 19.3-7)

Another interest of working in velocities (rather than in depth) arises when the model for the interval velocities must reflect some physical behavior (a rock compaction rule for example).
19.3.4 Depth Conversion with external drift
The Cokriging technique described above is also compatible with the External Drift, when an auxiliary variable - known throughout the field - provides a-priori information on the average behavior of the target variable.
For example, one can think of a relationship induced by the compaction law between the velocity and the depth. Unfortunately, the depth is precisely what we are after in the time-to-depth conversion process. Therefore, we would rather use the time surface as an external drift when processing the velocities. The previous statement can be expressed as follows in the probabilistic jargon:

$E[v_i] = a_i + b_i\,T_i$   (eq. 19.3-1)

which relates the mathematical expectation of each interval velocity field to the seismic time through a linear equation. The coefficients $a_i$ and $b_i$ are assumed to be constant over the field. Note that this formalism enables the use of a more complex function, and even the possibility of involving more terms.
Introducing the assumptions (eq. 19.3-1) in the cokriging formalism (eq. 19.3-5) leads to the new formulation of the cokriging system, expressed for the target layer i and the target node $x_0$:
$\sum_\beta \lambda_\beta\,C''_{\beta\alpha} + \sum_l \mu_l\,p_l(x_\alpha) + \sum_l \nu_l\,p_l(x_\alpha)\,T_l(x_\alpha) = C'_{i\alpha} \quad \forall \alpha$
$\sum_\beta \lambda_\beta\,p_l(x_\beta) = \delta_{li} \quad \forall l$
$\sum_\beta \lambda_\beta\,p_l(x_\beta)\,T_l(x_\beta) = \delta_{li}\,T_i(x_0) \quad \forall l$
$\sigma_i^2 = \sigma_{ii}(0) - \sum_\alpha \lambda_\alpha^i\,C'_{i\alpha} - \mu_i - \nu_i\,T_i(x_0)$   (eq. 19.3-2)

where $\delta_{li}$ stands for the Kronecker symbol which is equal to 1 when $l = i$ and to 0 otherwise. We introduce two sets of N Lagrange parameters $\{\mu_l\}$ and $\{\nu_l\}$ which stand as additional unknowns. As the second and third type of equations must be repeated for each layer index l, the dimension of this system is now equal to the count of intercepts incremented by twice the count of layers.
19.3.5 Fault reconstruction
Once the seismic time maps have been converted into depth maps - including the areas perturbed by
faults - Isatoil can rebuild the fault surfaces throughout the Layers. These fault surfaces are used
while subdividing a unit into zones. Let us recall that, for a given layer, each fault is characterized
by two polygons which describe the traces of the fault on the horizontal plane as it intersects the top
and the bottom seismic markers.
(fig. 19.3-1)
Isatoil interpolates the fault surface from its traces on the depth maps of two consecutive seismic
markers. These top and bottom surfaces are first extrapolated across the fault area, as in a pre-faulted scenario; any intermediate surface is then estimated across the fault in the same pre-faulted scenario, and the fault is finally applied to reproduce the observed throws.
19.3.6 Subdivision of a unit into zones
In the Layering process described above, the first step consists in estimating the top and the bottom
of each unit from the elevation at the intercept points (with the top and bottom surfaces) measured
along the wells and possibly using the time maps as external drift functions. In the second step,
each one of these units can be subdivided into several layers. The relevant information consists in
the elevation at the intercept points (with the layers) measured along the wells.
The Zonation process is similar to the one explained in the previous paragraph, using the cokriging
formalism. This time, it must necessarily be performed using thicknesses instead of velocities (as
there is no time map information for each layer). Note that the external drift feature is still applica-
ble as long as the geologist has a sound intuition of a set of variables which can serve as external
drifts: there must be as many variables as there are layers within a unit.
The important difference of this second step comes from the additional constraint that must be con-
sidered: the cumulated thickness of the layers within a unit must be equal to the thickness of the
unit. This is achieved by considering a collocation option added to the previous cokriging formal-
ism.
(fig. 19.3-1)
In principle, when the unit is subdivided into N layers, it suffices to calculate the thicknesses of the
N-1 layers: the thickness of the last one is obtained by comparison to the thickness of the total unit.
Nevertheless, in the case of the collocated option, we consider the problem of estimating the N
thicknesses (as if the thickness of the total unit were unknown). This is the reason why we consider
all the intercepts of the wells with the layers, as well as those with the bottom of the unit. The top of
the unit serves as the reference from where the true vertical depth is counted.
Moreover, at the target node location, we simply tell the system that one additional piece of information must be considered: the thickness of the total unit. When working with true vertical thicknesses, the task is even simpler, as it means that the cumulative depth down to the layer "N" at the target node is known.
As the estimation (using the cokriging technique) provides an exact interpolation (all the informa-
tion is honored), the intercept with the bottom of the unit, collocated with the target node, ensures
that the sum of the estimated thicknesses matches the thickness of the total unit.
19.3.7 Inference of the geostatistical model
This paragraph describes the procedure used for inferring the geostatistical model in the direct mul-
tivariate approach.
We have already mentioned that, due to the large heterotopy of the information carried by the inter-
cepts of deviated wells, we must use non-centered covariances and work in the scope of strict stationarity. This constraint requires the data to be de-trended beforehand: the trend is calculated
globally (using a unique neighborhood) and is constituted of a constant term (representing the
mean) and a second optional term related to the external drift variable (if used).
The modeling phase (fitting a theoretical curve to the experimental quantities) must be carried out
on all the simple and cross-covariances simultaneously, in the scope of the Linear Model of Coregionalization, in order to ensure that the model is valid and can be used in kriging or simulation processes.
The estimation of the parameters of the model requires a special procedure described in Lajaunie
Ch. (2001), Estimation de modèle linéaire de corégionalisation à partir de données de sondages obliques (Technical report N-18/01/G, ENSMP Paris, 10p). The main lines of this paper are reproduced in this paragraph. The procedure must deal with the following particularities:
- the information provided refers to linear transforms of the variables of interest (cumulative thickness or apparent velocity),
- the variables are not all sampled at the same points and therefore it is not possible to use a linear inversion in order to transform the model back to the variables of interest.
For simplicity, we focus on the problem of modelling the thicknesses of several layers, starting from information on cumulative thicknesses. Let us denote by $Y_i$ the thicknesses of the N individual layers, which are only measured at a set of points $x_1, \ldots, x_M$ through the cumulative quantities:

$$Z(x) = \sum_i p_i(x)\, Y_i(x) \qquad \text{(eq. 19.3-1)}$$

The weights $p_i(x)$ are known at each sample point and for all layers.
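Each sample thus only sees a weighted cumulation of the layer thicknesses. A tiny illustration of eq. 19.3-1, with invented thicknesses and weights (the weights would in practice derive from the well deviation):

```python
import numpy as np

# Thicknesses Y_i at two sample points (invented values).
Y = np.array([[4.0, 2.0, 6.0],    # Y_i at sample x_1
              [3.0, 2.5, 5.0]])   # Y_i at sample x_2
# Weights p_i(x): the first well stops inside layer 3, the second
# fully penetrates the three layers (illustrative assumption).
p = np.array([[1.0, 1.0, 0.4],
              [1.0, 1.0, 1.0]])
Z = (p * Y).sum(axis=1)           # observed cumulative quantities Z(x)
print(Z)                          # each entry is sum_i p_i(x) * Y_i(x)
```

Only Z is observed; the individual Y_i remain latent, which is precisely why the model must be fitted through these linear combinations.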
The problem is to fit a linear model of coregionalization on the variables $Y_i$:

$$K^Y_{ij}(x+h,\, x) = \sum_u a_u^{ij}\, K_u(h) \qquad \text{(eq. 19.3-2)}$$

where the basic structures $K_u$ are given and where the matrices $A_u = \left[a_u^{ij}\right]$ must be positive semi-definite.
If we introduce the eigendecomposition of $A_u$:
$$a_u^{ij} = \sum_p \lambda_{pu}\, x_{pu}^i\, x_{pu}^j = \sum_p r_{pu}^i\, r_{pu}^j \qquad \text{(eq. 19.3-3)}$$

with $r_{pu}^i = \sqrt{\lambda_{pu}}\, x_{pu}^i$, which makes sense as $\lambda_{pu} \geq 0$. Then, for each pair of measurements $(x,y)$, we can write:

$$K^Z(x,y) = \sum_{p,u}\,\sum_{i,j} p_i(x)\, p_j(y)\, r_{pu}^i\, r_{pu}^j\, K_u(x-y) \qquad \text{(eq. 19.3-4)}$$

When the variables have been carefully centered beforehand, in the scope of a stationary model, the previous term corresponds to the covariance $E\left[Z(x)Z(y)\right]$ and the coregionalization matrices will be obtained by minimizing the quantity:

$$J(A) = \sum_{x,y} \Big( Z(x)Z(y) - \sum_u\,\sum_{i,j} p_i(x)\, p_j(y)\, a_u^{ij}\, K_u(x-y) \Big)^2 \qquad \text{(eq. 19.3-5)}$$

The criterion is quadratic with respect to the coefficients $a_u^{ij}$. If we use $k$ for the set of indices $(i,j,u)$ and $\tau$ for the pair $(x,y)$, and denote by:

$$\varphi_k(\tau) = p_i(x)\, p_j(y)\, K_u(x-y) \qquad \zeta(\tau) = Z(x)Z(y) \qquad \text{(eq. 19.3-6)}$$

then we can write:

$$J(a) = \sum_{k,l} a_k a_l \left\langle \varphi_k, \varphi_l \right\rangle - 2 \sum_k a_k \left\langle \varphi_k, \zeta \right\rangle + \left\| \zeta \right\|^2 \qquad \text{(eq. 19.3-7)}$$

where $\left\langle f, g \right\rangle = \sum_\tau f(\tau)\, g(\tau)$.

The whole set of matrices $A_u$ is obtained simultaneously by solving the linear system:

$$\forall k, \quad \sum_l \left\langle \varphi_k, \varphi_l \right\rangle\, a_l = \left\langle \varphi_k, \zeta \right\rangle \qquad \text{(eq. 19.3-8)}$$

But we must still ensure that each matrix $A_u$ is positive semi-definite, which is not guaranteed by the previous procedure.
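The machinery of eqs. 19.3-5 to 19.3-8 can be sketched with a single basic structure and synthetic weights. Here the data products are generated noise-free from hidden coefficients, so solving the normal equations recovers them exactly; every numeric value is an assumption for illustration, not Isatoil's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 5, 2                              # sample points, variables
xs = rng.uniform(0, 100, M)
p = rng.uniform(0.5, 1.5, (M, N))        # known weights p_i(x) (synthetic)
K = lambda h: np.exp(-np.abs(h) / 30.0)  # single basic structure K_u

a_true = np.array([[2.0, 0.6], [0.6, 1.0]])   # hidden coefficients a^{ij}

pairs = [(s, t) for s in range(M) for t in range(M)]   # all pairs tau=(x,y)
ks = [(i, j) for i in range(N) for j in range(N)]      # index sets k=(i,j)

# phi_k(tau) = p_i(x) p_j(y) K(x - y);  zeta(tau) = "observed" products,
# taken noise-free here so the fit is exact.
Phi = np.array([[p[s, i] * p[t, j] * K(xs[s] - xs[t]) for (i, j) in ks]
                for (s, t) in pairs])
zeta = Phi @ a_true.ravel()

# Normal equations <phi_k, phi_l> a_l = <phi_k, zeta>  (eq. 19.3-8)
G = Phi.T @ Phi
rhs = Phi.T @ zeta
a_hat = np.linalg.solve(G, rhs).reshape(N, N)
print(np.allclose(a_hat, a_true))        # True
```

With several basic structures, the columns of `Phi` would simply be extended over the extra u index; the normal-equation step is unchanged.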
Therefore we use an iterative procedure where each matrix $A_u$ is optimized in turn while keeping the positive definiteness condition fulfilled for all the other matrices. The procedure consists of:
- choosing the basic structure $K_u$; in general each basic structure is considered in turn, but one could think of a random path instead,
- solving the linear system minimizing $J(A)$ with respect to $A_u$, keeping all the other matrices unchanged,
- if necessary, correcting the matrix $A_u$ by removing its non-positive part.
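The removal of the non-positive part of a matrix can be implemented by eigenvalue clipping, i.e. the standard projection of a symmetric matrix onto the positive semi-definite cone. A minimal sketch (not Isatoil's source code):

```python
import numpy as np

def psd_part(A):
    """Project a symmetric matrix onto the positive semi-definite cone
    by zeroing its negative eigenvalues (the 'removal of the
    non-positive part' described above)."""
    w, V = np.linalg.eigh(A)
    w_clipped = np.clip(w, 0.0, None)   # drop the negative eigenvalues
    return (V * w_clipped) @ V.T        # rebuild V diag(w+) V^T

A = np.array([[2.0, 3.0],
              [3.0, 2.0]])              # eigenvalues 5 and -1: not PSD
B = psd_part(A)
print(np.linalg.eigvalsh(B))            # corrected eigenvalues: approx. 0 and 5
```

For this 2x2 example the correction keeps only the rank-one component associated with the eigenvalue 5, yielding the matrix [[2.5, 2.5], [2.5, 2.5]].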
19.4 Modelling the petrophysical parameters
The petrophysical parameters processed in Isatoil are the porosity and the net to gross ratio.
They are assumed to be vertically homogeneous within a layer so that they can be considered as 2D
variables. Moreover, all petrophysical parameters are considered independent of each other and from one layer to the next. This property leads to a separate standard kriging step for each
parameter within each zone. The structure identification is performed using the traditional vario-
gram this time and the strict stationarity limitation does not hold anymore.
In the case of simulation, the Turning Bands technique requires the data to be normally distributed.
In order to reproduce the asymmetric distribution observed for such variables, a prior Normal
Score Transform (using the Gaussian Anamorphosis) is performed on the input data, the simula-
tions are performed and the same anamorphosis function is used to perform the back-transform on
the results.
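A minimal empirical stand-in for this normal score transform and its back-transform is sketched below using rank-based plotting positions; the Gaussian Anamorphosis used by the software is a modeled function, so this is only an illustration, with invented porosity values:

```python
import numpy as np
from statistics import NormalDist

def normal_score(values):
    """Empirical normal score transform: map each datum to the Gaussian
    quantile of its plotting-position rank (a stand-in for the Gaussian
    Anamorphosis, not its actual implementation)."""
    n = len(values)
    ranks = np.argsort(np.argsort(values))          # 0-based ranks
    probs = (ranks + 0.5) / n                       # plotting positions
    return np.array([NormalDist().inv_cdf(q) for q in probs])

def back_transform(gaussians, ref_values):
    """Back-transform by quantile lookup in the reference data."""
    probs = np.array([NormalDist().cdf(g) for g in gaussians])
    return np.quantile(np.sort(ref_values), probs)

skewed = np.array([0.02, 0.05, 0.06, 0.10, 0.30])   # e.g. porosities
g = normal_score(skewed)
print(g)   # symmetric Gaussian scores centered on 0
```

The scores are symmetric quantiles of the standard Gaussian, so simulation algorithms that assume normality (such as Turning Bands) can operate on them before the back-transform restores the original distribution shape.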
The saturation variable (traditionally represented as a J function) depends on the location of the
target with respect to the oil-water contact: it can obviously not be considered as homogeneous vertically unless we only consider the residual saturation in layers located far away from the contact.
Instead, this variable is integrated throughout the layer using the following empirical tabulated formula, where the three coefficients (a, b and c) are specified for each layer:
(eq. 19.4-1)
where:
- $\phi$ stands for the porosity,
- $H_t$ (resp. $H_b$) corresponds to the distance from the top (resp. the bottom) of the unit to the contact.