
Institute of Petroleum Engineering

Geomodelling & Reservoir Management
G11MM MSc Reservoir Evaluation and Management

This manual and its content are copyright of Heriot-Watt University © 2016

Any redistribution or reproduction of part or all of the contents in any form is prohibited.

All rights reserved. You may not, except with our express written permission, distribute or
commercially exploit the content. Nor may you reproduce, store in a retrieval system or transmit
in any form or by any means, electronic, mechanical, photocopying, recording or otherwise without
the prior permission of the Copyright owner.
Chapter One: Modelling



1.1 Historical Perspective
1.2 What is Geostatistical Modelling or Geomodelling?
1.3 What Geostatistics is not or cannot do

3.1 Pixel Models
3.2 Object (Boolean) Models
3.3 Multiple-point Geostatistics
3.4 Kriging with External Drift, Cokriging and Collocated Cokriging
3.5 Geomodels Honouring Flow Units and Geological Trends

9.1 Published Modelling Studies

13.1 Measures of Spatial Correlation
13.2 Handling Trends
13.3 Variogram Modelling

14 SIMULATIONS
14.1 Sequential Stochastic Simulations
14.2 Sequential Indicator Simulations
14.3 Sequential Gaussian Simulations


Having worked through this chapter the Student will be able to:

• Describe a range of commonly used geomodelling techniques

• Understand the appropriate use of these techniques

• Understand the limitations of geostatistical modelling

• Identify case studies for guidance in setting up modelling projects

• Feel prepared for a geomodelling exercise



Background texts in geostatistics (Journel and Huijbregts, 1978; Isaaks and Srivastava, 1989; Dubrule, 1998; Jensen et al., 2000; Deutsch, 2002) and compilations of modelling applications in petroleum (Yarus and Chambers, 1994) are sources of further reference material. Useful reviews are also given in Haldorsen and Damsleth (1990) and Cosentino (2001). The use of geostatistics for petroleum reservoir modelling is a rapidly evolving subject and the literature is probably (as usual) a few years behind the industry. Likewise, strictly research papers may be a few years ahead of routine practice.

1.1 Historical Perspective

Geostatistics has been applied in the petroleum industry since the 1950s and has become a mature technology, paralleling the growth in computing which increasingly enables the building of realistic large-scale, high-resolution geomodels on desktop computers (adapted from Dubrule, 1998).

Since 50’s Markov Chain (Vistelius, 1949)

1D Limitation

Late 60’s Hassi-Messaoud Applications (Delhomme and Gianesini, 1979)

Simplistic Shale-Sand Models

70’s Mining Geostatistics (Journal and Huijbreghts, 1978)

First Mapping (Kriging) Applications

Early 80’s Haldorsen’s Thesis (Haldorsen, 1983)

Stochastic shales & sands (Wytch Farm, Frigg) (Begg et al., 1985)

Late 80’s Norwegian Applications (STORM, IRAP) (Haldorsen & MacDonald, 1987)
Stanford (SCRF, GSLIB) (Deutsch and Journal, 1992)
IFP (Heresim)
Sophisticated Reservoir Models

90’s Earth Model Integration (Shared Earth Model)

Pixel vs object models
Increase in published studies (Begg et al., 1996, Sweet et al., 1996)

2000 Merged Pixel/Object capability (IRAP/RMS)

PC Software (Petrel, IRAP)
Training Images (Stanford)

1.2 What is Geostatistical Modelling or Geomodelling?

The picture of a depositional environment and its heterogeneities will never be complete
for petroleum reservoirs. The goal of geomodelling is therefore to characterise the
formation using reservoir parameters together with their associated uncertainties
(Tyler et al, 1994).

Institute of Petroleum Engineering, Heriot-Watt University 3


Dubrule (1998, AAPG Continuing Education Course notes 38) gave a few pointers
as to what geostatistical modelling is:

1. Geostatistics provides the user with a toolbox of approaches for generating realistic 3D representations of the distribution of heterogeneities.

2. The geostatistical method to use depends on the type of variable that is modelled, on the depositional environment and on the scale at which the representation must be used.

3. Geostatistics allows the generation of equiprobable realisations of the subsurface, all compatible with the data and the statistical parameters used as input to the models.

4. The variability between geostatistical realisations is a measure of the uncertainty remaining after constraining the realisations by the input data and the statistical parameters. This allows for the quantification of non-uniqueness.

5. Geostatistics treats deterministic information as such: deterministic input is honoured by realisations. Geostatistics generates equiprobable representations of what cannot be represented deterministically.

6. Geostatistics is the “glue” holding the various subsurface disciplines together. It provides the means of integrating different kinds of data into the construction of 3D representations of heterogeneities.

To which we might add:

1. Geostatistics provides a practical way (the only way?) of populating very large
geological models – the largest to date that we’ve come across is 23 million cells!

1.3 What Geostatistics is not or cannot do

In contrast to the above, once again from Dubrule (1998):

1. Geostatistics is not an algorithm tossing a coin to decide what facies are present between wells.

2. Geostatistics is not a substitute for ignorance about the geology of a field.

3. Geostatistics is not a substitute for the geologist, the geophysicist or the reservoir engineer.

4. Geostatistics is not a push-button automatic approach.

5. Geostatistics is not an alternative to “deterministic approaches”. Geostatistics treats known information as deterministic, unknown information as probabilistic.

6. There is no such thing as an “absolute” or “objective” measure of uncertainty. Geostatistical variability is controlled by the chosen model and its parameters, which may be proved wrong as new data are acquired.


Once again we could add:

1. Geostatistics is not useful where there are too few data to constrain the variograms or object distributions. One might say that the one- or two-well scenario that we see from time to time is such a case, in which overly complex geostatistical models are inappropriate.

We can expand on the positive aspects of geostatistical modelling noted by Dubrule

(1998) in the following sections but first we must mention an important statistical
property - stationarity.


Stationarity is an implicit assumption of a statistical field. The mean and variance should
be independent of location in order for a field to have ‘second-order stationarity’. This
means that the property should be uniformly variable (homogeneously heterogeneous)
over the area/region/zone of interest. The mean, variance and variogram are constant
and relevant to the entire study area (Deutsch, 2002).

Stationarity can be identified in many ways:

1. The lack of a distinct trend.

2. Existence of a sill on a variogram.

3. Averages and variances equal for different subsets of the population.

Stationarity is also a function of scale. Local stationarity implies that the mean and
variance should be constant over a reasonable scale for the property investigated. For
reservoir modelling the properties should be stationary over, at least, the inter-well
spacing for the modelling to have any validity.
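Criterion 3 above can be checked directly on a property field; a minimal sketch with a synthetic (illustrative) porosity field, comparing the mean and variance of two subsets:

```python
import random

random.seed(42)
# Synthetic "stationary" porosity field: constant mean and variance everywhere.
field = [0.20 + random.gauss(0, 0.03) for _ in range(4000)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

# Criterion 3: averages and variances should be similar for different subsets.
half = len(field) // 2
m1, v1 = mean_var(field[:half])
m2, v2 = mean_var(field[half:])
print(abs(m1 - m2) < 0.01, abs(v1 - v2) / v1 < 0.25)
```

A field with a strong trend would fail this check, pointing to the detrending workflow described below.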

Trends are often present in geological data. A trend can be removed and the residuals
should have mean of zero and constant variance. These residuals would then be a
stationary field and could be modelled as such and recombined with the trend to
create the property model.
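This detrend-model-recombine workflow can be sketched as follows; the linear compaction trend and the noise level are illustrative assumptions:

```python
import random

# Remove a linear trend from synthetic depth-porosity data, model the
# residuals as a stationary field, then recombine (values are illustrative).
depths = list(range(100))
trend_true = [0.25 - 0.001 * z for z in depths]   # assumed compaction trend
random.seed(1)
obs = [t + random.gauss(0, 0.01) for t in trend_true]

# Least-squares fit of the linear trend.
n = len(depths)
mx = sum(depths) / n
my = sum(obs) / n
slope = sum((x - mx) * (y - my) for x, y in zip(depths, obs)) / \
        sum((x - mx) ** 2 for x in depths)
intercept = my - slope * mx
trend_fit = [intercept + slope * x for x in depths]

residuals = [y - t for y, t in zip(obs, trend_fit)]
# Residuals of a least-squares fit have mean zero by construction ...
print(abs(sum(residuals) / n) < 1e-9)
# ... and recombining trend + residuals reproduces the observations.
print(max(abs((t + r) - y) for t, r, y in zip(trend_fit, residuals, obs)) < 1e-9)
```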

Stationarity is a key assumption as it allows us to extend properties from a well to a region. It is also a critical limitation when applying local data to a regional model, and it influences our ability to integrate data across the scales. If regions/zones in a reservoir are characterised by different stationary models then these must be modelled separately and recombined for the full model.

“Geostatistics provides the user with a toolbox of approaches for generating realistic
3D representations of the distribution of heterogeneities”



3.1 Pixel Models

So-called “pixel models” are built using correlation structures determined by variogram models. Pixels are the smallest elements and the variogram determines how the pixels are clustered. There are a number of variogram models; these refer to numerical models fitted to the experimental data. Typical models include: spherical, exponential, Gaussian and cyclical (Figure 1). The differences between exponential and spherical are minor and can be detected in the crispness of the clusters (Figure 2). The selection of the model type is not always obvious from the experimental data or from the appearance of the resulting model.

[Figure 1 axes: normalised semivariance vs sample separation, lag (h) [m]; the range is marked on the spherical curve.]

Figure 1 Various variogram model types. Spherical is of the form (1.5h - 0.5h^3) at distances less than the range, after which the function approximates the sill. Exponential is of the form (1 - e^-h) and Gaussian (1 - e^-h^2) (refer to Deutsch, 2002).
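The three model curves of Figure 1 can be written directly from these formulas; a minimal sketch with unit sill and the lag normalised by the range:

```python
import math

# Normalised variogram models from Figure 1 (unit sill; h = lag / range).
def spherical(h):
    return 1.5 * h - 0.5 * h ** 3 if h < 1.0 else 1.0

def exponential(h):        # as given in the notes: 1 - exp(-h)
    return 1.0 - math.exp(-h)

def gaussian(h):           # as given in the notes: 1 - exp(-h^2)
    return 1.0 - math.exp(-h * h)

# The Gaussian model rises most slowly from the origin (cf. Figure 2),
# and the spherical model reaches the sill exactly at the range.
print(gaussian(0.1) < exponential(0.1))
print(spherical(2.0))
```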

Figure 2 Pixel models from various variogram model types (from Dubrule, 1998).
Top: Spherical model, Below: Exponential model. The exponential model rises
more steeply from the origin than the spherical model and therefore has a more
‘pepper pot’ character. The gaussian model rises more slowly from the origin and
therefore has the most continuity and is appropriate for continuous surfaces such
as structure maps and isochores (Deutsch, 2002)


Figure 3 Pixel models with varying vertical correlation lengths (from Dubrule, 1998). Top: vertical correlation 1% model. Below: vertical correlation 10% model.

Figure 4 Pixel models with varying horizontal correlation lengths (from Dubrule, 1998). Top: horizontal correlation 2% model. Below: horizontal correlation 10% model.

There are two main types of pixel models: Sequential Indicator Simulation (SIS) and Sequential Gaussian Simulation (SGS). In Sequential Indicator Simulation an indicator value is chosen from a preconditioned distribution (ensuring 50:50 black:white proportions in the models shown in Figures 2 to 4). In Sequential Gaussian Simulation the original (porosity or permeability) data are transformed to a standard Gaussian distribution (using a normal-score transform). Each cell is then assigned a value drawn at random from the normalised distribution (using a Monte Carlo procedure). The required correlation structure is then imposed on the field by moving the values according to the local correlation structure. In this process, the modelling is “sequential”. Note that the Gaussian models must be de-transformed (back-transformed) before usage.
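The transform and de-transform steps can be sketched with a rank-based normal-score mapping, a common implementation of this step; the porosity values are illustrative and are assumed free of ties:

```python
from statistics import NormalDist

# Rank-based normal-score transform (forward step before SGS) and its
# back-transform (de-transform); the porosity values are illustrative.
data = [0.08, 0.21, 0.12, 0.30, 0.17, 0.25, 0.10, 0.19]

nd = NormalDist()  # standard normal
n = len(data)
order = sorted(range(n), key=lambda i: data[i])
scores = [0.0] * n
for rank, i in enumerate(order):
    # Map each rank to a standard-normal quantile; (rank+0.5)/n avoids +/-inf.
    scores[i] = nd.inv_cdf((rank + 0.5) / n)

# Back-transform: invert the rank mapping (a table lookup in practice).
sorted_vals = sorted(data)
back = [0.0] * n
for i, s in enumerate(scores):
    rank = round(nd.cdf(s) * n - 0.5)
    back[i] = sorted_vals[rank]

print(back == data)  # the transform is exactly invertible here
```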

SIS is typically used for modelling lithofacies, where the lithofacies (e.g., sand and shale) are assigned indicator values (1 and 0 perhaps). Figure 5 shows an SIS model with 3 facies. There is a practical limit to the number of indicators that can be accommodated whilst preserving the correlation structure of each. A background indicator facies is applied at the outset of the modelling. Commonly, the most abundant lithofacies is used as background. SIS can also be used for modelling petrophysical rock types (Figure 6). Each facies may contain several rock types.

SGS is used for modelling porosity data (Figure 7). The porosity data can be drawn from a single distribution or from several distributions, one for each rock type.

Figure 5 2-D slice from a 3-D Indicator pixel model involving three facies; Dune
(white), Interdune (black) and Fluvial (Grey). This model was built sequentially
with the fluvial added after the dune/interdune modelling.

Figure 6 3-D Indicator pixel model involving four rock types with varying
porosity-permeability relationships


[Figure 7 panel labels: Main Pay; 130 m by 635 m; porosity scale 0-30%.]

Figure 7 3-D porosity pixel model (from Dehgani et al., 1999)

3.2 Object (Boolean) Models

An alternative geomodelling method is provided by the use of object models. These
are more appropriate when the lithofacies have more distinctive shapes (Figure 8)
– sheets, discs, channels, etc. Various placement rules (Figure 9) can be invoked
and these will mimic the geological rules of placement: shallower erodes deeper,
clustering and repulsion, lateral accretion, etc.

Figure 8 Various model objects (from Hunter-Williams, 2000)






Figure 9 Placement rules in object modelling (after Clementson et al., 1990)

Figure 10 Object model of a deltaic fluvial system (based on a Carboniferous outcrop from the Fife Coast) showing a range of objects. Background is mudstone and various facies are modelled: incised channels - rectangles; massive channels - half cylinders; bars - wedges; limestones and coals - sheets.

A reservoir unit can contain a number of different geometries of lithofacies (Figure 10).

3.3 Multiple-point Geostatistics

Traditional geostatistics relies on a variogram to describe geological continuity. The variogram is a two-point measure of spatial variability and is limited because:

• It cannot describe curvilinear or geometrically complex patterns,
• The same variogram can result from a number of different situations (Figure 11),
• The variogram is not unique to a given field.


[Figure 11 panels: (a) Boolean ellipses, (b) indicator simulation, (c) channels; each facies field (East 0-250,000) is shown with its EW variogram (lag 0-120).]

Figure 11 A variety of object and pixel models displaying the same variogram.
Given a variogram, the challenge becomes to choose the correct model as the three
facies fields clearly represent different geology (from Journel, 2003). Geostatistics
alone cannot give the answer

Multiple-point geostatistics uses a training image instead of a variogram to account for geological information. A template, or several templates, is used to scan the image to determine the probability of occurrence of certain patterns (Figure 12). The training image (Figure 13) provides a conceptual description of the subsurface patterns of facies or properties, derived from a geological sketch, an outcrop image, a modern depositional environment, seismic data, etc, or a combination of these, to generate a series of realisations (Figure 14).
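The template-scanning step can be sketched as follows; the tiny binary training image and the 2x2 template are illustrative assumptions, not Journel's example:

```python
from collections import defaultdict

# Scan a binary training image (1 = sand, 0 = shale) with a 2x2 template and
# estimate P(unknown cell is sand | neighbouring data event), as in
# multiple-point geostatistics. The small image below is illustrative.
image = [
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
]

counts = defaultdict(lambda: [0, 0])   # data event -> [n_total, n_sand]
rows, cols = len(image), len(image[0])
for r in range(rows - 1):
    for c in range(cols - 1):
        # Data event: three known neighbours; "unknown" cell: bottom-right.
        event = (image[r][c], image[r][c + 1], image[r + 1][c])
        counts[event][0] += 1
        counts[event][1] += image[r + 1][c + 1]

for event, (n, n_sand) in sorted(counts.items()):
    print(event, "P(sand | event) =", n_sand / n)
```

In real workflows the template is much larger and the probabilities drive the sequential simulation of each cell.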



[Figure 12 panels: training image; local data event; P(A | data event) estimated by scanning, with A = sand.]

Figure 12 Determination of the probability of occurrence of a particular pattern in a training image (from Journel, 2003)

Figure 13 A non-stationary training image of a radiating alluvial fan (from Journel, 2003)


Figure 14 One realisation of the radiating alluvial fan generated using multiple-point geostatistical techniques (from Journel, 2003)

Multiple-point geostatistical techniques are being used as part of the geostatistical toolbox to address a number of reservoir modelling challenges (Strebelle et al, 2002).

3.4 Kriging with External Drift, Cokriging and Collocated Cokriging

Well data provides good vertical resolution from which vertical spatial correlation
(variograms) can be determined. There are often so few well data that there is little
to constrain the horizontal control on spatial distribution. Seismic can provide the
horizontal spatial control. There are a number of techniques that have been developed
to integrate the horizontal control from seismic with the well data.

Kriging with external drift allows the seismically mapped surface to be used as an external drift, with the match at the wells being determined by the variogram range. The seismic data are considered as the low-frequency representation of the actual horizon; the local structure around the wells may have been smoothed out by the seismic data. Kriging with external drift can put high-frequency information back into the model (Figure 15). This method is appropriate if the residuals between seismic depth and well depth are shown to be random and appropriately described by a variogram. However, systematic shifts can be due to mis-identification of the seismic pick (mis-tie) at the wells and/or the presence of subseismic faults (Figure 16), in which case alternative techniques might be more appropriate. The advantage of the kriging with external drift technique is that a number of equiprobable realisations can be generated, so that an uncertainty map for the depth location of the seismic event away from the wells can be produced. This technique is used most often in the industry for time-to-depth conversion (Dubrule, 2003).



[Figure 15 panels: seismic trend; kriging with external drift, large-range variogram; kriging with external drift, small-range variogram.]

Figure 15 A summary of kriging with external drift (Dubrule, 2003)

Figure 16 An alternative deterministic solution with sub-seismic faults

Cokriging was developed to integrate data from various sources, particularly to integrate seismic and well data. Doyen (1988) showed an example of bivariate cokriging used to map porosity in the subsurface. First of all, a kriged porosity (primary variable) map was produced from the well control (Figure 17). Then a relationship between porosity and the inverse of acoustic impedance (secondary variable) was developed at the well locations. The porosity map is then generated from both the well data and the seismic data. The method is more complex than the previous ones, as it requires the variograms of both primary and secondary variables and their cross-covariance. Note that where there are well data the resulting map is controlled by those data, and where there are few well data the map is controlled by the seismic data.


Semivariance: γ_Var1(h) = [1/(2N)] Σ (Var1(x) - Var1(x+h))^2   (ditto Var2)

Cross-covariance: γ_Var1,Var2(h) = [1/(2N)] Σ (Var1(x) - Var1(x+h)) (Var2(x) - Var2(x+h))
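The experimental semivariogram and cross-variogram at a given lag can be computed directly; a minimal sketch with illustrative 1-D samples (the porosity and inverse-impedance values are assumptions):

```python
# Experimental semivariance and cross-covariance structure at lag h for two
# collocated 1-D series, following the 1/(2N) sum-of-differences formulas.
def semivariogram(v, h):
    pairs = [(v[i], v[i + h]) for i in range(len(v) - h)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

def cross_variogram(v1, v2, h):
    n = len(v1) - h
    return sum((v1[i] - v1[i + h]) * (v2[i] - v2[i + h])
               for i in range(n)) / (2 * n)

por = [0.10, 0.14, 0.12, 0.18, 0.16, 0.20]   # primary, e.g. porosity
imp = [0.30, 0.26, 0.27, 0.22, 0.23, 0.19]   # secondary, e.g. 1/AI

print(semivariogram(por, 1))
print(cross_variogram(por, imp, 1))
```

Repeating this over a range of lags h gives the experimental curves to which the models of Figure 1 are fitted.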


[Figure 17 panels: kriged porosity; porosity by regression from 1/(acoustic impedance); porosity by cokriging. Axes: distance (10 m), 0-200 by 0-160.]

Figure 17 A summary of porosity prediction from well and seismic data using
cokriging (Doyen, 1988, Dubrule, 2003)

Collocated cokriging is a simplification of cokriging. In petroleum reservoirs, the primary variable comes from well data and the secondary variable from seismic data. The secondary variable can be cross-correlated with the primary variable at the wells. Collocated cokriging uses:

1. The correlation coefficient between primary and secondary variables at the wells,

2. A variogram model assumed to be representative of the primary variable and of the cross-covariance between primary and secondary variables (an important assumption),

3. The variance of the primary and secondary variables.
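Inputs 1 and 3 above can be computed from collocated well samples; a sketch with illustrative values (not the Jeffry et al. data):

```python
from statistics import mean, pvariance

# Collocated well samples: primary (e.g. average velocity, m/s) and
# secondary (e.g. residual gravity); the values are illustrative.
primary   = [3500.0, 3550.0, 3620.0, 3580.0, 3700.0]
secondary = [1.2, 1.5, 2.1, 1.8, 2.6]

def correlation(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pvariance(xs) ** 0.5 * pvariance(ys) ** 0.5)

rho = correlation(primary, secondary)                    # input 1
var_p, var_s = pvariance(primary), pvariance(secondary)  # input 3
print(round(rho, 3), var_p, round(var_s, 4))
```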

Jeffry et al. (1996) describe an example where gravity data are used as a secondary variable to assist depth conversion (Figures 18 and 19). The average velocities measured at the wells are the primary variable. In this case only the variance of the residual gravity data was used, so that the well data were honoured.



[Figure 18 panels: gridded well velocities only; residual gravity.]

Figure 18 A map produced by using gridded well velocities and a residual gravity map for an oil field (Jeffry et al, 1996)

[Figure 19 panels: cokriged velocity from well control plus gravity data; well-velocity isotropic variogram (lag 0-15 km); velocity vs residual gravity cross-plot (correlation 0.76); residual gravity isotropic variogram, of which just the variance is used, not the whole variogram.]

Figure 19 A summary for the field in Figure 18 of depth conversion using well
and gravity data by collocated cokriging (Jeffry et al., 1996; Dubrule, 2003)

Kriging with external drift, cokriging and collocated cokriging are geostatistical
methods for combining data from different sources and these techniques lend themselves
to the integration of seismic and well data.

3.5 Geomodels Honouring Flow Units and Geological Trends

In a model, the flow unit subdivision of layers directs the breaking down of the model into a number of ‘sub-models’. In each of these sub-models, a consistent set of geostatistical constraints – facies populations, rock types, porosity and permeability distributions, net/gross, facies proportions, etc – ensures that the ‘sub-models’ conform to the concept of statistical stationarity. The requirement that flow units be statistically stationary for modelling encourages statistical considerations to be incorporated in the definition of a flow unit.

The Rotliegend Hyde Field reservoir was sub-divided into three layers – alpha,
beta and gamma (Sweet et al., 1996) – and sub-models (Figure 20) were built and
subsequently recombined in order to model the gas production for this field (Figure
21). Earlier attempts at modelling this tight gas field had been unsuccessful. In this
well-described modelling study example from the North Sea, the authors emphasise
several points that were found to be important:

• Primary depositional textural properties exert a control on reservoir properties.

• The importance of outcrop analogues in providing lateral correlation lengths.

• The recognition of key deterministic (i.e., correlatable) flow units within the reservoir.
• The importance of the regional context to get the appropriate trends and
orientations of the lithofacies.

• Lithofacies (rock types) that are petrophysically and physically distinguishable.



Figure 20 Three layers in the geostatistical model of the Hyde Field (from Sweet
et al., 1996)


The basal layer (alpha) is modelled with a background eolian facies with an increasing amount of fluvial (modelled using SIS) in the upper part of the layer. The middle layer (beta) comprises deterministic layers of eolian facies and a correlated random field layer of sandy sabkha/eolian sheet facies (SIS). The upper layer (gamma) in the model is a mixture of sandy sabkha/eolian sheet facies.

Figure 21 The combined Hyde reservoir model and performance prediction match to the production data. Being a gas field, the offtake from the field is governed more by demand than capacity (from Sweet et al., 1996)

Often reservoir layers contain systematic horizontal and/or vertical variations in their statistical properties, and these volumes have non-stationary properties. It may be appropriate to subdivide the model further or to impose systematic variation on the modelling. This systematic variation in facies proportions or petrophysical properties can be controlled in the model by external drift. External drift is used as an external control in the alpha layer to increase the proportion of fluvial bodies toward the top of the layer, and in the gamma layer to modify the lateral variation in eolian facies. These external controls are validated from regional geological models and can be used to impose an external trend (coarsening up, fining up, proximal to distal) coming from the geological models (also used in Figure 5, Seifert et al, 1996, to control the distribution of fluvial elements). Proportion curves are also used (as a form of external drift) to impose trends of increasing (or decreasing) facies proportions (Eschard et al, 1998).

The Hyde Field hybrid stochastic-deterministic modelling study used:

• Three flow units with different models (different realizations) for each unit.

• Arithmetic, geometric and harmonic averages as estimators of effective permeabilities for kx, ky and kz (kv) respectively. This allowed for planar anisotropy as well as vertical.


• Sequential Indicator (SIS, for facies) and Sequential Gaussian (SGS, for
property) Simulation techniques used

• Modelling was conditioned on well data (SIS facies proportions)

• Geological model (several selected realizations) was upscaled for simulation

• External drift for regional trends

• Variograms with no hole effect

• Variograms chosen from experience in field outcrop analogues

• Assessment of small scale anisotropy
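The directional averaging used above (arithmetic, geometric and harmonic estimators of effective permeability) can be sketched as follows; the permeability samples are illustrative:

```python
import math

# Arithmetic, geometric and harmonic averages of permeability samples (mD),
# used as effective-permeability estimators for kx, ky and kz respectively
# (the sample values are illustrative, not the Hyde data).
k = [1.0, 10.0, 100.0, 50.0, 5.0]
n = len(k)

k_arith = sum(k) / n
k_geom  = math.exp(sum(math.log(x) for x in k) / n)
k_harm  = n / sum(1.0 / x for x in k)

# Effective permeability is bounded: harmonic <= geometric <= arithmetic,
# matching flow across layers (series) vs along layers (parallel).
print(k_harm <= k_geom <= k_arith)
print(round(k_arith, 2), round(k_geom, 2), round(k_harm, 2))
```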


Training images/templates (Kerans and Tinker, 1997) and multipoint geostatistical techniques are increasingly recognised as a way of imposing geological facies models on the geostatistical process. The template controls the spatial information associated with facies models – radiating submarine fans for example – and can be used to constrain the modelling, thereby producing more realistic images.


“The geostatistical method to use depends on the type of variable that is modelled, on the depositional environment and on the scale at which the representation must be used”

We have seen a range of models being used for various geological problems and scales:

Poroperm data – pixel models from a framework facies model using Gaussian simulation
Facies – either pixel indicator or object models
Braided fluvial – shale objects in a sand background
Base level rise and fall – modelled with proportion curves (Eschard et al, 1998)
Fluvial – channel models – ‘fettucini’ (Figure 22), point bar models
Marine – external drift with discontinuous shales
Aeolian – external drift controlling wet-dry, fluvial-aeolian proportions laterally and vertically (Sweet et al., 1999)
Turbidite – sands in shales – holes in shales
Porescale – vugs in a vuggy carbonate (Dehgani et al, 1999)
Microscale – stochastic poroperm models
Mesoscale – connectivity issues in channel sandstones
Megascale – scenarios – faulted vs non-faulted, fluvial vs turbidite models.


“Geostatistics allows the generation of equiprobable realisations of the subsurface, all
compatible with the data and the statistical parameters used as input to the models”

Multiple realizations of the same statistical field can be generated (Figure 23).

Figure 22 An object - ‘fettucini’ - model of a fluvial channel system (from Tyler, 1994)

Figure 23 Equiprobable realizations of a correlated random field built with an exponential model indicator variogram (after Dubrule, 1998)




“The variability between geostatistical realisations is a measure of the uncertainty remaining after constraining the realisations by the input data and the statistical parameters. This allows for the quantification of non-uniqueness”

Questions concerning the role of geology often arise during modelling studies. Does
the geology matter and, if so, by how much? While the interaction of heterogeneity and
flow processes is still being researched, there are increasing indications that geological
information may be quite important for accurate flow behaviour models. The Page
Sandstone is a case of particular interest and it, along with others, is presented below.

A succession of works has detailed the geology (Chandler et al., 1989), the permeability distribution (Goggin et al., 1988; Goggin et al., 1992) and fluid flow (Kasap and Lake, 1990) through the Page Sandstone, a Jurassic eolian sand that outcrops in the extreme north of Arizona near Glen Canyon Dam.

The geological section selected for the flow simulation was the northeast wall of
the Page Knob (Goggin, 1988). The data were derived from probe permeameter
measurements taken on various grids and transects surveyed on the outcrop surfaces,
together with profiles along a core. The individual stratification types comprising
the dunes, i.e., grainfall and windripple, have low variabilities (CV’s of 0.21). The
interdune element has a CV of 0.81. The interdune material is less well-sorted than
the dune elements. Grainfall and windripple elements are normally distributed, the
interdune log-normally. The individual stratigraphic elements in this eolian system
are well-defined by their means, CV’s, and PDF’s.
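The coefficient of variation quoted for these elements is simply the standard deviation divided by the mean; a sketch with illustrative permeability samples (not Goggin's data):

```python
from statistics import pstdev, mean

# Coefficient of variation CV = standard deviation / mean, the heterogeneity
# measure quoted for the Page Sandstone elements (values are illustrative).
def cv(perms):
    return pstdev(perms) / mean(perms)

windripple = [80, 95, 110, 100, 90, 105]   # well-sorted, low-variability element
interdune  = [5, 60, 15, 120, 10, 40]      # poorly sorted, high-CV element

print(round(cv(windripple), 2), round(cv(interdune), 2))
```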

The vertical outcrop transects are more variable (CV = 0.91) than the horizontal transects (CV = 0.55), an anisotropy that seems to be typical for most bedded sedimentary rocks. The global level of heterogeneity for the Page Knob is probably best represented by the transect along the core, which had a CV = 0.6. Semivariograms were calculated for the grids and core profiles. The grids allowed spherical semivariogram ranges to be determined for various orientations. These ranges indicate the dip of the crossbeds; the ranges were 17 m along the bed and 5 m across the bed (Goggin, 1988). Hole structures are present in most of the semivariograms, indicating significant permeability cyclicity that corresponds to dune crossbed set thicknesses.

For our purposes, the most important facet of this work is the modelling of a matched-
density, adverse-mobility-ratio miscible displacement through a two-dimensional
cross section of the Page Sandstone. Figure 24 shows a typical fluid distribution
just after breakthrough of the solvent to the producing well on the right. The dark
band is a mixing zone of solvent concentrations between 30% and 70%. The region
to the left of this band contains solvent in concentrations greater than 70%, to the
right in concentrations less than 30%. The impact of the variability (CV = 0.6) and
continuity can be seen in the character of the flood front in the large fluid channel
occurring within the upper portion of the panel. There is a smaller second channel
that forms midway in the section. As we shall see, both features are important to the
efficiency of the displacement.



Figure 24 Distribution of solvent during a miscible displacement in the detailed simulation of the Page Sandstone outcrop. From Kasap and Lake (1990). The flow is from left to right

The simulation in Figure 24 represented one of the most geologically realistic, deterministic, numerical flow simulations run in the late 1980s because it attempted to account for every geologic detail established through prior work. Specifically,

1. Permeabilities were assigned according to the known stratification types at every gridblock;

2. The permeability PDF for each stratification type was well-known;

3. A random component was assigned to the permeability of each gridblock to account for variance of the local PDFs;

4. Crossbedding was accounted for through the assignment of a full tensorial permeability in about one-third of the gridblocks;

5. The specific geometry of the features was accounted for through the use of finite elements; and

6. Each bounding surface feature (low permeability) was accounted for explicitly with at least one cell in each surface.

In all, over 12,000 finite-element cells were required to account for all of the
geologic detail in this relatively small cross section. Indeed, one of the purposes
of the simulation was to assess the importance of this detail through successively
degraded computations.

Another purpose of the deterministic simulation was to provide a standard against

which to measure the success of conditional simulation. Figure 25A shows two conditional simulation (CS) realizations of solvent flowing through
the Page cross-section. Compared to Figure 24, the permeability distribution was
constructed with a significantly degraded data set that used only data at the two vertical
wells (imagined to be on the edges of the panel) and information about a horizontal
semivariogram. The field on which the simulation of Figure 25A was constructed
used a spherical semivariogram. Figure 25B shows the same field constructed with
a fractal semivariogram.

21/07/16 Institute of Petroleum Engineering, Heriot-Watt University 23


We compare both figures to the distribution in Figure 24. Qualitatively, the fractal
representation (Figure 25B) seems to better represent the actual distribution of solvent;
it captures the predominant channel and seems to have the correct degree of mixing.
The spherical representation (Figure 25A) shows far less channeling and too much
mixing (that is, too much of the panel contains solvent of intermediate concentrations).
However, a quantitative comparison of the two cases shows that this impression is in
error (Figure 26). The distribution that gave the best qualitative agreement (Figure
25B) gives the poorest quantitative agreement. Such paradoxes should cause us
concern when performing visual comparisons; however, there is clearly something
incorrect about these comparisons. The key to resolving the discrepancy lies in
returning to the geology.

Figure 25A Simulated solvent distribution through the cross section using a spherical semivariogram. From Lake and Malik (1993). The mobility ratio is 10. The scale refers to the fractional solvent concentration. Flow is from left to right

Figure 25B Solvent distribution through the cross section using a fractal semivariogram. From Lake and Malik (1993). The mobility ratio is 10. The scale refers to the fractional solvent concentration. Flow is from left to right




[Figure: vertical sweep efficiency vs. cumulative pore volumes injected (0 to 4) for the reference case, spherical-semivariogram CS cases with ranges l = 28 m and l = 178 m, and a fractal-semivariogram CS case (l = range).]

Figure 26 Sweep efficiency of the base and CS cases. From Lake and Malik (1993)

Figure 27 shows the actual distribution of stratification types at the Page Sandstone
panel based on the northeast wall of the Page Knob.


Figure 27 Spatial distribution of stratification types on the northeast wall of the Page Sandstone panel. (From Kasap and Lake, 1993). The thin bounding surfaces are black lines, the high-permeability grainflow deposits are shaded, and the intermediate-permeability windripples are light regions

The thin bounding surfaces are readily apparent (black lines), as are the high-permeability
grainflow deposits (shaded) and the intermediate-permeability windripples (light).
This is the panel for which the simulation in Figure 24 was performed. Even though
the entire cross section was from the same eolian environment, the cross section
consists of two sands with differing amounts of lateral continuity: a highly continuous
upper sand and a discontinuous lower sand. Both sands require separate statistical
treatment because they are so disparate that it is unlikely that the behaviour of both
could be mimicked with the same population statistics. (Such behaviour might be
possible with many realizations generated, but it is unlikely that the mean of this
ensemble would reproduce the deterministic performance.)



When we divide the sand into a continuous upper portion, in which we use the fractal
semivariogram, and a discontinuous lower portion, in which we use the spherical
semivariogram, the results improve (Figure 28). Now both the predominant and the
secondary flow channels are reproduced.

Figure 28 Solvent distribution through the cross section using two semivariograms. From Lake and Malik (1993). Shading same as in Figure 24

More importantly, the results agree quantitatively as well as qualitatively (Figure 29).

The existence of these two populations is unlikely to be detected from limited well
data with statistics only; hence, we conclude that the best prediction still requires
a measure of geological information. The ability to include geology was possible
because of the extreme flexibility of CS. The technological success of CS did not diminish the importance of geology; rather, each showed the need for the other.


[Figure: vertical sweep efficiency vs. cumulative pore volumes injected (0 to 4) for the reference case and two realisations of the two-semivariogram model.]

Figure 29 Sweep efficiency of the base and dual population CS cases. From Lake
and Malik (1993)


The objective of a geostatistical modelling study is not to produce a single realisation that the creator thinks is correct: all models are wrong (as they are only models), but the challenge facing the geoengineer is to judge how wrong a model might be! The goal is to produce a range of realisations (Figure 30) that straddle the range of outcomes and give the user some knowledge of the probability of a particular prediction (e.g., water cut or oil production rate).

It was after this study, having appreciated the challenge posed by geological models with complex layering and varying modelling strategies, that Larry Lake coined the phrase "The engineer's secret weapon is the geologist"!

[Figure: panel A, gas/oil ratio (m3/m3) vs. time (0 to 10 years) for average and stochastic cross sections; panel B, oil production rate (m3/day) vs. time for the same cases; bottom panel, frequency (number of realizations) vs. millions of standard bbl of oil.]

Figure 30 Variation in model output from a series of simulations of equiprobable geomodels (after Aasen, 1990). Top: gas-oil ratio (m3/m3) as a function of time; Middle: oil production rate (m3/day) as a function of time; Bottom: frequency (histogram) of oil recovery (MMBBLS) after 10 years production

The range of outcomes (uncertainty) can be presented as a spread of predictions with time or as a histogram from which the probability of meeting a specified target can be determined (risk). If the range is presented, then the engineer is quantifying how wrong he/she might be.




“Geostatistics treats deterministic information as such: deterministic input is honored by realisation.”
Deterministic information may include:

• Seismic data (amplitudes, net-gross from AVO, porosity distribution, geobodies)

• Seismic events (top structure maps, fault locations, fault throws)

• Well penetrations (formation tops, formation dips)

• Well data (lithology, flow units, facies proportions) (Figure 31)

• Outcrop analogues (facies trends, geobodies, stacking patterns)

• Core data (poroperm data, poroperm relationships, cut-offs, net:gross)

• Test data (kh, distance to boundaries, kv/kh, production profiles)

All of these are derived from the data without necessarily considering or recording uncertainty.

[Figure: conditioning data. Honour well data (randomly locate bodies to coincide with wells); interwell sand bodies are added until the net-to-gross target is reached (conflicting ones removed).]

Figure 31 Conditioning and placement of geobodies according to well data (after Srivastava, 1994)

Data without uncertainty might be considered “hard data” although data (usually
interpretations) without uncertainty are not common in the petroleum industry.
Distribution functions (porosity, permeability) and empirical relationships (poroperm
X-plots) can also be treated as deterministic data. The permeability times height (kh)
determined from the interpretation can also be treated as hard data. Deterministic data
are associated with certainty, p=1. In reality none of the data are truly deterministic.
Top structure maps are derived by depth conversion from a velocity model which
might be non-unique. Well test kh is derived by imposition of a model (infinite-acting radial flow) which may be inappropriate or non-unique. Core laboratory plug data are interpreted following cleaning and restoration to overburden conditions, which may introduce artifacts.


Realisations are equiprobable models which honour the input statistics. The comparison
of modelled (poroperm) data with actual raw data is a good validation of a model
(Figure 32). Plotting the differences (residuals) from model and data is a key statistical
quality check (like examining the residuals following linear regression).
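The validation idea above can be sketched in a few lines: draw the modelled values and compare their summary statistics with those of the input data, checking that the residual differences are small. The data here are hypothetical stand-ins (random draws), not the Figure 32 values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins for real data: porosity measured at wells (input)
# and porosity sampled from a model realisation (output).
well_poro = rng.normal(0.15, 0.055, size=369).clip(0.0, 0.30)
model_poro = rng.normal(0.15, 0.055, size=5000).clip(0.0, 0.30)

def summary(x):
    """Summary statistics used to compare input and modelled distributions."""
    return {"mean": x.mean(), "std": x.std(ddof=1),
            "min": x.min(), "max": x.max()}

s_well, s_model = summary(well_poro), summary(model_poro)

# Residual check: the modelled mean/std should honour the input statistics.
mean_diff = abs(s_well["mean"] - s_model["mean"])
std_diff = abs(s_well["std"] - s_model["std"])
print(f"mean difference: {mean_diff:.4f}, std difference: {std_diff:.4f}")
```

In a real study the second sample would come from the geomodel grid (as in Figure 32, where millions of gridblock values are compared with a few hundred well observations).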

[Figure: porosity histograms (relative frequency vs. porosity, 0.00 to 0.28) for Zone 1, three facies (Background, Channel, Sheets). Left (input data): 369 observations, min = 0, max = 0.29922, mean = 0.15335, st. dev. = 0.057074, skewness = -0.20905. Right (model output): 3917825 observations, min = 0, max = 0.29922, mean = 0.15154, st. dev. = 0.054394, skewness = -0.43086.]

Figure 32 Comparison of input (raw) and output (modeled) porosity data for
validation of the model

Other data sources abound in engineering practice and should, as time and expense
permit, be part of the reservoir model also. The most common of these are the following.

1. Pressure-transient data: In these types of tests, one or more wells are allowed to flow and the pressure at the bottom of the flowing wells is observed as a function of time. In some cases, the pressure is recorded at nonflowing observation wells.
Pressure-transient data have the decided advantage that they capture exactly how a
region of a reservoir is performing. This is because interpreted well-test properties
represent regions in space rather than points. Resolving the disparity in scales between
the regional and point measurements and choosing the appropriate interpretation
model remain significant challenges in using pressure-transient data. However,
this type of data is quite common and may be relatively inexpensive to acquire (in
an offshore situation) making it an important tool in the tool-box. Well tests can
also be used to image the limits of geological objects and help define their scale.

2. Seismic data: The ultimate reservoir characterization tool would be a device that
could image all the relevant petrophysical properties over all the relevant scales and
over a volume larger than interwell spacing. Such a tool eludes us at the moment,
especially if we consider cost and time. However, seismic data come the closest to
the goal in current technology.

There are two basic types of seismic information: three-dimensional, depth-corrected traces “slabbed” from a full cubic volume of a reservoir, and two-dimensional maps of various seismic attributes. Seismic data are generally considered as soft constraints on model building because of the as-yet limited vertical resolution. However, because of the high sampling rate, seismic data can provide excellent lateral constraints on properties away from wells. Seismic data integration into statistical models, mainly through co-kriging and simulated annealing, is becoming common in large projects (MacLeod et al., 1996; Tjolsen et al., 1996)



3. Production data: Like pressure-transient data, production data (rates and amounts
of produced fluids versus time) reflect directly on how a reservoir is performing.
Consequently, such data form the most powerful conditioning data available. Like
the seismic integration, incorporating production data is a subject of active research
(see Datta-Gupta et al., 1995, for use of tracer data) because it is currently very
expensive. The expense derives from the need to run a flow simulation for each
perturbation of the stochastic field. Furthermore, production data will have most
impact on a description only when there is a fairly large quantity of it, and, of course,
we become less and less interested in reservoir description as a field ages (until we
start a new process: waterflood, gas flood, etc.). Nevertheless, simulated annealing offers significant promise in bringing this technology to fruition.


“Geostatistics generates equiprobable representations of what cannot be represented”


It is very unlikely that a geologist or engineer will know exactly what lies between
the wells. Using geostatistical techniques, one can generate many realisations (10,
100, 1000) and it is usually assumed that the truth lies between the extremes of the
realisations. Be very careful not to use too few realisations as a few limited realisations
might be quite different from the average.
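The warning above can be made concrete: with a large ensemble the percentiles of a predicted outcome stabilise, while a handful of realisations can give a misleading picture. A sketch with a hypothetical outcome distribution (the recovery values are invented, standing in for per-realisation flow-simulation results):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical outcome of a flow prediction (e.g. recovery in MMbbl) for each
# of 200 equiprobable realisations; in practice each value would come from a
# flow simulation of one geomodel realisation.
recovery = rng.lognormal(mean=np.log(40.0), sigma=0.25, size=200)

# P90/P50/P10 in the oilfield convention: P90 is the value exceeded by 90%
# of the realisations, i.e. the 10th percentile.
p90, p50, p10 = np.percentile(recovery, [10, 50, 90])
print(f"P90 = {p90:.1f}, P50 = {p50:.1f}, P10 = {p10:.1f} MMbbl")

# With few realisations, the estimate itself is poorly determined - one
# reason not to judge uncertainty from a handful of models.
few = recovery[:10]
print("P50 from only 10 realisations:", round(float(np.median(few)), 1))
```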

A statistical study has been carried out on a carbonate reservoir to model properties
between the wells (Lucia and Fogg, 1990; Fogg et al., 1991). Flow units extend
between wells but no internal structure can be easily recognized or correlated. The
San Andres formation carbonates are very heterogeneous with a CV of 2.0 - 3.5
(Kittridge et al., 1990; Grant et al., 1994). A vertical permeability profile (Figure
33) from a wireline log-derived estimator shows two scales of bedding.



[Figure: permeability (0.01 to 10000 mD, logarithmic scale) vs. depth (ft); Type A and Type B bedding intervals are marked, with a Type B interval near 3560 ft.]

Figure 33 Vertical permeability profile from a wireline log-derived estimator showing two scales of bedding (Types A and B)


A vertical sample semivariogram (Figure 34) shows a spherical variogram model

behaviour out to a range of about 8 ft. This range appears to be between the scales
of the two bedding types (A and B) shown in Figure 33. The spherical model fit to
the sample semivariogram overlooks the hole (at around 24 ft) that may be reflecting
Type B scale cyclicity. The structure of Type A beds may not be reflected in the
sample semivariogram because of the effect of the larger “dominating” Type B cycles.

Horizontal semivariograms were derived from permeability predictions generated

from initial well production data (Figure 35). Because of the sparse data, these
semivariograms alone are indistinguishable. But there is a mapped high-permeability
grainstone trend associated with reef development that is consistent with an anisotropic
autocorrelation structure. The geology is critical to interpreting these semivariograms
(refer to Fogg et al., 1991 for further discussion).

[Figure: normalised semivariance (0 to 2.0) vs. lag (0 to 40 ft); sample points and fitted spherical model.]

Figure 34 Spherical model fitted to a vertical sample semivariogram. The spherical model ignores the hole at 24 ft that indicates cyclicity. From Fogg et al. (1991)

[Figure: semivariance (sq. mD) vs. distance (0 to 3000 ft) in two horizontal directions.]

Figure 35 Sample semivariograms in two horizontal directions for the San Andres
carbonate. Semivariograms with different ranges can express the anisotropy
suggested by the geological model. From Fogg et al. (1991)



Conditional simulations (conditioned on the well data) of the permeability field in the San Andres formation were generated to investigate the influence of infill drilling (Figure 36). In this case, the permeability field was modelled as an autocorrelated
random field using single semivariograms in the vertical and horizontal directions.
Both realizations show the degree of heterogeneity and continuity needed for this
type of application. However, there are still deficiencies in the characterization that
might prove important.

1. The profile in Figure 33 suggests a reservoir dominated by two scales of beds. In the realizations in Figure 36, it is difficult to see that this structure has been reproduced.

2. The autocorrelated random field model does not explicitly represent the baffles caused by the bed bounding surfaces; these may have a different flow response from that of the models in Figure 36. A subsequent study (Grant et al., 1994; Kerans et al., 1994, Figure 37) found these baffles to be important. The CRF model is isotropic, with no preferred flood direction. A subsequent model with baffles showed that there was a preferred flood direction. This latter study confirmed the importance of deterministic surfaces in carbonates.






[Figure: two permeability realizations, height (ft) vs. distance (0 to 1400 ft); permeability classes shaded from above 100.0 mD to below 0.1 mD.]

Figure 36 Two realizations of the spatial distribution of the permeability field for the San Andres generated by theoretical semivariograms fit to the sample semivariograms shown in Figures 34 and 35. The permeability fields are the same in the two realizations at the extreme left and right ends of the model where the conditioning wells are located. From Fogg et al. (1991)


The panels in Figure 36 also illustrate a generic problem with pixel-based stochastic
modelling: there is no good a priori way to impose reservoir architecture. If the
geometry of the beds were determined to be important, other modelling techniques
(e.g., object-based) might have been useful.

The San Andres study illustrates how semivariograms can be used to generate
autocorrelated random fields that are equiprobable and conditioned on “hard” well
data. These realizations can form the basis of further engineering studies; however, it
is unlikely that all 200 realizations will be subjected to full flow simulation. Usually
only the extreme cases or representative cases, often selected by a geologist’s visual
inspection, will be used. In recent years, streamline simulations have made it easier
to simulate ALL the geomodel realizations and to use quantitative methods to select
the extreme and mean cases.
[Figure: cumulative oil production as percent oil in place vs. injected pore volume (0 to 0.8) for left-to-right and right-to-left injection.]

Figure 37 Anisotropy in the simulation of outcrop including deterministic surfaces (From Kerans et al., 1994). The semivariogram with the hole (Figure 34) might have flagged repetition (determinism) as possibly indicating the presence of geological structure


“Geostatistics is the “glue” holding the various subsurface disciplines together. It provides the means of integrating different kinds of data into the construction of 3D representations of heterogeneities”

To build a pixel model using variograms, these functions must be defined from data or estimated from analogues. Geological data (from outcrop), geophysical properties (attribute maps), core data (poroperm plugs) and well test data (boundaries from pressure build-up) can all be used in the model. The model can be conditioned
on data from various sources. A permeability map can be produced from a facies
map conditioned on seismic attributes. Different weighting factors can be put on the
data – strongly favouring the well data when near wells – strongly favouring seismic
data when away from wells. In this way, the models can bring together the different
disciplines and appropriate geomodels (in terms of scenarios considered, model
complexity, number of realizations, etc) are key to a successful geoengineering study.

21/07/16 Institute of Petroleum Engineering, Heriot-Watt University 33

R Geomodelling & Reservoir Management

Geostatistical models have a range of applications:

• Framework for data storage (shared earth model)
• 3D interpolation (populating models)
• Visualisation (communication between disciplines)
• Drilling strategy (number of wells, direction and length of wells)
• Development strategy (well type, completions)
• Quantification of uncertainty (reserves, production, sweep efficiency)

In a study of a mixed aeolian/fluvial reservoir, like the reservoir modelled by Sweet et al. (1996), Seifert et al. (1996) undertook a well placement study for a development strategy (Figures 38, 39). The question they sought to address was as follows: in this depositionally complicated gas field, is there an optimum length and orientation for a horizontal producing well? The question was answered by a modelling study, assuming that a certain proportion of the best facies was needed for a successful well.

Figure 38 Geostatistical model of a mixed fluvial/aeolian reservoir (Figure 5) and the synthetic wells ‘drilled’ at a wide variety of locations, lengths and orientations (from Seifert et al., 1996)




[Figure: sandbody intersection statistics for azimuths A to L. Rank by azimuth: 8, 10, 12, 9, 4, 5, 2, 1, 6, 7, 3, 11; combined rank: 4, 5, 1, 3, 6, 2.]

Figure 39 Results of a geostatistical model of a mixed fluvial/aeolian reservoir showing that the ‘optimum’ orientation will be azimuth G from the frequency of sandbodies intersected and the along-borehole intersections (ABIs) (after Seifert et al., 1996). Note that the chosen azimuth may not have the maximum ABIs, but it has one of the smallest minimums. This means that the risk can be minimised

Many thousand synthetic wells were drilled in various realisations of the model (Figure
38). Each well was analysed for the frequency and number of aeolian sandbody
(assumed to be the best reservoir units) intersections along each borehole (Figure
39). Azimuth G turned out to be oblique to the geological grain. Drilling parallel to the geological grain might have encountered the maximum aeolian sand interval, but it also carried a significant chance of missing the aeolian sandbodies altogether.
This shows how geostatistical models can be used to assess the uncertainty in well
planning in addition to their use in reservoir performance simulation.
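The Seifert-style workflow can be caricatured in a few lines: drill synthetic wells at random positions for several azimuths through a binary sand model, record the fraction of each well in sand, and rank azimuths by their worst case rather than their mean. Everything here (grid, body geometry, azimuths) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical binary sand map (1 = aeolian sand) with an elongate grain:
# bodies stretched in the y direction.
grid = np.zeros((100, 100), dtype=int)
for _ in range(15):
    x0, y0 = rng.integers(0, 90, size=2)
    grid[x0:x0 + 5, y0:y0 + 40] = 1  # bodies long in y

def sand_hit(grid, azimuth_deg, length=60.0, trials=200):
    """Fraction of well length in sand for random horizontal well placements."""
    th = np.radians(azimuth_deg)
    hits = []
    for _ in range(trials):
        x0, y0 = rng.uniform(20, 80, size=2)
        s = np.linspace(0.0, length, 60)
        xi = np.clip((x0 + s * np.cos(th)).astype(int), 0, 99)
        yi = np.clip((y0 + s * np.sin(th)).astype(int), 0, 99)
        hits.append(grid[xi, yi].mean())
    return np.array(hits)

# Rank azimuths by their worst case, not just the mean (as in Figure 39).
for az in (0, 45, 90):
    h = sand_hit(grid, az)
    print(f"azimuth {az:3d}: mean {h.mean():.2f}, min {h.min():.2f}")
```

A well oblique to the grain tends to sample many bodies briefly, whereas a well parallel to it risks missing the bodies altogether, which is the trade-off the text describes.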

9.1 Published Modelling Studies

An increasing number of geomodelling studies are now published. The reader is encouraged to mine these for further reference and guidance.

Sweet et al. (1996) built a model of the Rotliegend Permian aeolian sandstone in
the Hyde Field, Southern North Sea. The field is a producing gas field and three
models were built for the three distinct layers and then recombined to account for the
non-stationarity and geological facies changes between layers. Stochastic modelling of the heterogeneity enabled the authors to produce a significantly improved model.
Eschard et al (1998) used proportion curves to model the Triassic Alluvial Fan System
in the Paris Basin. The Chaunoy Field reservoir facies developed as a response to
rising and falling lacustrine base-level. The authors conclude that the layering is important and that the generation of proportion curves was useful in validating the geological model. However, they also noted that the variogram range selected is
not so sensitive because of the strong interconnectedness between channels. The
combination of geological and geostatistical analysis was thought to result in a
successful characterisation of the Chaunoy Field.



Dehgani et al. (1999) found that the detailed modelling of vuggy carbonates
(dolostones of the Grayburg Formation, Permian) in the McElroy field (Texas)
significantly improved the history-matches as the core data tended to underestimate
the permeability, due to sampling problems. Small scale, high resolution models
were built and upscaled.


Geology provides several insights that are useful for statistical model-building:
categorization of petrophysical dependencies, identification of large-scale trends,
interpretation of statistical measures, and quality-control on generated models. The
first two serve to bring the statistical analysis closer to satisfying the underlying
assumptions. For example, identification of categories and trends, and their subsequent
removal, will bring data sets closer to being Gaussian and/or to being stationary.
The last two are to detect incorrect inferences arising from limited and/or biased
sampling. Consequently, we shall see aspects of these in the following procedures.

The first two steps are common for all procedures.

1. Divide the reservoir into flow units. Flow units are packages of reservoir rocks
with similar petrophysical characteristics—not necessarily uniform or of a
single rock type—that appear in all or most of the wells. This classification will
serve to develop a stratigraphic framework (“stratigraphic coordinates”) for the
reservoir model at the interwell scale.

2. Review the sample petrophysical distributions with respect to geological and statistical representativity.

What follows next depends on the properties within the flow units, the process
to be modelled, and the amount of data available. We presume some degree of
heterogeneity within the flow unit.

3. If there are no data present apart from well data and there is no geologic
interpretation, the best procedure is to simply use a conditional simulation
technique applied directly to the well data (Figure 40). The advantage of this
approach is that it requires minimal data: semivariogram models for the three
orthogonal directions. (Remember, a gridblock representing a single value of
a parameter adds autocorrelation to the field to be simulated.)
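The idea in step 3 can be sketched in two dimensions. This is one simple way (Cholesky factorisation of an anisotropic exponential covariance) to draw an unconditional Gaussian field; the grid size and directional ranges are assumptions, and a real study would use sequential Gaussian simulation with conditioning to the well data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small 2D (x, z) grid; the directional ranges play the role of the
# orthogonal semivariogram ranges in Figure 40 (hypothetical values).
nx, nz = 20, 10
ax, az = 8.0, 2.0  # correlation ranges in x and z (gridblocks)

xs, zs = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
pts = np.column_stack([xs.ravel(), zs.ravel()]).astype(float)

# Anisotropic exponential covariance: distances scaled by the ranges.
d = np.sqrt(((pts[:, None, 0] - pts[None, :, 0]) / ax) ** 2
            + ((pts[:, None, 1] - pts[None, :, 1]) / az) ** 2)
cov = np.exp(-3.0 * d)  # "practical range" convention

# Draw one unconditional realisation via Cholesky factorisation
# (a small jitter keeps the matrix numerically positive definite).
L = np.linalg.cholesky(cov + 1e-8 * np.eye(nx * nz))
field = (L @ rng.standard_normal(nx * nz)).reshape(nx, nz)
print(field.shape)
```

Cholesky-based simulation only scales to small grids (the covariance matrix here is already 200 x 200); production geomodelling packages use sequential or spectral methods for the same task.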


[Figure: schematic 3D grid with property distributions (porosity, permeability) and the orthogonal lags hx, hy and hz.]

Figure 40 Generation of a three-dimensional autocorrelated random field using orthogonal semivariograms. hx, hy and hz are the lags in the coordinate directions



Figure 41 Schematic reservoir model at two hierarchical scales capturing well-determined geological scales at each step. A small-scale (cm to m) semivariogram guides selection of representative structure

Structure can be identified by nuggets, holes, and/or trends in the semivariograms. Lateral dimensions rely on the quality of the geological analogue data.

5. If the property distribution is multimodal, the population in the flow unit can be
split into components and indicator conditional simulation used to generate the
fields. This is useful in fields where the variation between the elements is not
clearly defined and distinct objects cannot be measured. This approach
(Figure 42) yields fields with jigsaw patterns. This was the approach taken by
Rossini et al. (1994).
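The binary "jigsaw" idea of step 5 can be illustrated with a truncated-Gaussian stand-in: threshold an autocorrelated field at a quantile matching a target facies proportion, then assign one permeability population per indicator class. This is not a full sequential indicator simulation, and the proportions and populations are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def smooth(a, w=5):
    """Separable moving-average filter: a cheap stand-in for a properly
    simulated autocorrelated Gaussian field."""
    k = np.ones(w) / w
    a = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, a)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, a)

field = smooth(rng.standard_normal((40, 40)))

# Truncate at the quantile matching a hypothetical 30% "pay" proportion,
# giving the jigsaw-like binary pattern sketched in Figure 42.
target = 0.30
sand = field > np.quantile(field, 1.0 - target)

# Bimodal permeability: one population per indicator class (values invented).
perm = np.where(sand,
                rng.lognormal(np.log(500.0), 0.5, field.shape),
                rng.lognormal(np.log(5.0), 0.5, field.shape))
print(f"pay fraction: {sand.mean():.2f}")
```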




Figure 42 Schematic of a sequential indicator simulation for a binary system. Indicator semivariograms are determined from a binary coding of the formation. hx, hy and hz are the lags in the coordinate directions

6. If the flow unit contains distinctly separate rock types and a PDF can be
determined for the dimensions of the objects, the latter can be distributed in a
matrix until some conditioning parameter is satisfied (e.g., the ratio of sand to
total thickness). This type of object-based modelling lends itself to the
modelling of labyrinth fluvial reservoirs (Figure 43). More sophisticated rules
to follow deterministic stratigraphic trends (e.g., stacking patterns) and interaction
between objects (e.g., erosion or aversion) are available or being developed. A
similar model for stochastic shales would place objects representing the shales
in a sandstone matrix following the same method.
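The object-placement loop of step 6 can be sketched as dropping rectangular bodies into a background grid until a conditioning parameter (here a hypothetical net-to-gross target) is satisfied. Body dimensions are drawn from assumed PDFs; a real model would also honour well intersections and object-interaction rules:

```python
import numpy as np

rng = np.random.default_rng(5)

# 2D cross-section grid; 0 = background (mud), 1 = channel sand.
nx, nz = 200, 50
model = np.zeros((nx, nz), dtype=int)
target_ntg = 0.35  # hypothetical net-to-gross from the wells

# Drop rectangular channel bodies (widths/thicknesses from assumed PDFs)
# until the target net-to-gross is reached.
while model.mean() < target_ntg:
    w = int(rng.uniform(20, 60))    # body width, gridblocks
    t = int(rng.uniform(2, 6))      # body thickness, gridblocks
    x0 = rng.integers(0, nx - w)
    z0 = rng.integers(0, nz - t)
    model[x0:x0 + w, z0:z0 + t] = 1

print(f"net-to-gross achieved: {model.mean():.2f}")
```

The stochastic-shale variant mentioned in the text is the same loop with the codes swapped: low-permeability shale objects placed in a sand background.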


Figure 43 Schematic of object model of a fluvial reservoir. Dimensions of objects are sampled and their locations set by sampling CDF’s

These models are the most realistic of all for handling low net-to-gross fluvial reservoirs;
however, they require a good association between petrophysical properties and the
CDF’s for the geometries. Tyler et al. (1994) give a good example of this application.


If the reservoir contains more than one flow unit, then the procedure must be
repeated for the next zone. In the Page example (Figure 28), the upper layer needed
a different correlation model (generated by a fractal semivariogram) from the lower
layer (spherical semivariogram). In the Hyde Field example (Figure 21), three
sub-models were used.

The modelling of petroleum reservoirs requires an understanding of the heterogeneity, the autocorrelation structure (nested or hierarchical) derived from the geological architecture, and the flow process in order to select the most appropriate technique. As
reservoirs have various correlation structures at various scales, flexibility in modelling
approach must be maintained. The “right” combination of techniques for a Jurassic
reservoir in Indonesia may be entirely inappropriate for a Jurassic North Sea reservoir.
The methods discussed here try to emphasize the need for a flexible toolbox. Further
workflows for geostatistical simulation are given by Deutsch (2002).


The choice of a geostatistical model and its parameters is often guided by subjective considerations (Dubrule, 1998):

• Realisations ‘look’ right.
• We think we know what we do not know; in reality we cannot know what we don’t know.
• We assume the scenario selected for modelling is the right scenario.
• The modelling technique is assumed to be right (otherwise we wouldn’t be using it!)
• The software and time that are available (which is often limited).




Given the following permeability data (same data as was encountered in Chapter 8 of the
Reservoir Concepts course)

Depth     Permeability (mD)
3834.9    105
3835.2    140
3835.5    297
3835.8    236
3836.1    106
3836.4    140
3836.7    157
3837.0    144
3837.3    189
3837.6    111
3837.9    577
3838.2    318
3838.5    nmp
3838.8    52
3839.1    54

An experimental variogram (Gamma) was calculated as follows. Note that the variogram has
been normalized by the variance.

Lag    Gamma
1      0.75
2      0.87
3      1.45
4      1.37
5      0.78
6      1.03
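The tabulated values can be reproduced directly from the permeability data. A sketch of the experimental semivariogram calculation, normalised by the sample variance and skipping pairs that involve the missing ("nmp") sample:

```python
import numpy as np

# Permeability data from the table above (np.nan marks the "nmp" sample).
perm = np.array([105, 140, 297, 236, 106, 140, 157, 144, 189, 111,
                 577, 318, np.nan, 52, 54], dtype=float)

def semivariogram(z, max_lag):
    """Experimental semivariogram, normalised by the sample variance."""
    var = np.nanvar(z, ddof=1)
    gamma = []
    for lag in range(1, max_lag + 1):
        d = z[lag:] - z[:-lag]
        d = d[~np.isnan(d)]          # drop pairs involving the missing value
        gamma.append(0.5 * np.mean(d ** 2) / var)
    return np.array(gamma)

g = semivariogram(perm, 6)
for lag, val in enumerate(g, start=1):
    print(f"lag {lag}: gamma = {val:.2f}")
```

The first values (0.75, 0.87, 1.45, ...) match the table, confirming that the tabulated variogram was computed from these data with this normalisation.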

1. Determine the sill (c), nugget (c0) and range (a) for this variogram.

2. Fit a spherical and an exponential model to these data. (Note that the sill (c) is 1 − c0 on the normalised variogram.)

Spherical model:   γ(h) = c0 + c [ 3h/(2a) − (1/2)(h/a)³ ]   if h ≤ a
                   γ(h) = c0 + c                              if h > a


Exponential model: γ(h) = 0                                   if h = 0
                   γ(h) = c0 + (c − c0) [ 1 − exp(−h/a) ]     if h ≠ 0

3. What other model might be appropriate?
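A sketch of how the two models in Question 2 might be evaluated against the experimental values. The functions implement the equations as printed above (parameter conventions for the exponential model differ between texts); the trial parameters c0, c and a are assumptions a student would refine by eye or least squares:

```python
import numpy as np

def spherical(h, c0, c, a):
    """Spherical semivariogram model, as given in the formula sheet."""
    h = np.asarray(h, dtype=float)
    g = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, g, c0 + c)

def exponential(h, c0, c, a):
    """Exponential semivariogram model, as given in the formula sheet."""
    h = np.asarray(h, dtype=float)
    return np.where(h == 0, 0.0, c0 + (c - c0) * (1.0 - np.exp(-h / a)))

lags = np.array([1, 2, 3, 4, 5, 6])
gamma = np.array([0.75, 0.87, 1.45, 1.37, 0.78, 1.03])

# Trial parameters (hypothetical): nugget c0, sill contribution c = 1 - c0,
# and range a, to be tuned against the experimental points.
c0, c, a = 0.4, 0.6, 2.5
for name, model in (("spherical", spherical), ("exponential", exponential)):
    rmse = np.sqrt(np.mean((model(lags, c0, c, a) - gamma) ** 2))
    print(f"{name}: RMSE = {rmse:.2f}")
```

Neither model can reproduce the dip at lags 5 and 6, which is the clue for Question 3: a hole-effect (periodic) model may be more appropriate for cyclic bedding.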


Six sample 2D “reservoir images” are presented: a single body, a single meandering channel, stacked channels, multiple channels, parallel channels, and shale.

(i) Discuss which stochastic simulation algorithms might be used for modelling these patterns.

(ii) Match six corresponding omni-directional variograms (a-f) with the sample “reservoir
images” (I-VI).

(iii) Which of the patterns (I-VI) would be better modelled with an anisotropic variogram model?

(iv) Can you point out which patterns feature strong (zonal) anisotropy and which feature weaker (geometric) anisotropy?

EXERCISE 3: Object Modelling

Braided Fluvial Reservoir Model

This example is based on the interpretation and modelling of two wells in a braided fluvial reservoir (Toro-Rivera et al., 1994). The reservoir unit is Triassic in age and approximately
90 metres thick in each well. These wells are more than a kilometre apart in the same field.
The exercise is to build an object model for the volume around each well and determine the
effective property (as matched by a well test in each of the wells).

To do this exercise the following steps are considered useful:

1. Determine net:gross for each well (net:gross is a key controlling factor for the model).

2. Determine location and thickness of sand bodies in each well.

3. Review the vertical variograms in the well to confirm spatial model.

4. Consider geometry away from the well location (the aspect ratio).

5. Use analogue geobody geometry data (outcrop, well test) to guide the aspect ratio.

6. Consider assumptions on using averages to estimate effective horizontal permeability. Compare the result with the well test.

7. Consider assumptions for averaging vertical permeability – determine the reservoir-scale kv/kh ratio for both wells. Consider the sweep efficiency.
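The placement loop behind steps 1-5 can be sketched as a minimal Boolean (object) model: rectangular channel bodies with an assumed 1:30 aspect ratio are dropped at random into a 2D cross-section grid until a target net:gross is reached. The grid size, channel thickness, target net:gross and seed below are illustrative assumptions, not values from the exercise.

```python
# Minimal Boolean (object) model sketch: place channel rectangles at random
# positions in a 2D cross-section until the target net:gross is reached.
import random

def object_model(nz=90, nx=300, target_ntg=0.5, thickness=3, aspect=30, seed=1):
    """Fill an nz x nx cross-section grid (1 m cells) with channel objects."""
    rng = random.Random(seed)
    grid = [[0] * nx for _ in range(nz)]
    width = thickness * aspect                 # channel width from the aspect ratio
    while sum(map(sum, grid)) / (nz * nx) < target_ntg:
        z0 = rng.randrange(nz - thickness)     # random vertical position
        x0 = rng.randrange(nx)                 # random lateral position
        for z in range(z0, z0 + thickness):
            for x in range(x0, min(x0 + width, nx)):
                grid[z][x] = 1                 # cell becomes channel sand
    return grid

grid = object_model()
ntg = sum(map(sum, grid)) / (90 * 300)
print(f"net:gross achieved: {ntg:.2f}")
```

In a real workflow the bodies observed in the wells would be conditioned to their logged positions and thicknesses (steps 1-2) before infilling away from the wells.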

21/07/16 Institute of Petroleum Engineering, Heriot-Watt University 41



            Well A      Well B
Count       276         271
Average     456         577
Geometric   32          20
Harmonic    0.2         0.2
Cv          1.9         1.7

Figure A-1: Two wells showing the distribution of permeability on a linear scale

Note that the linear scale is a good scale for considering the distribution of high-permeability zones. The linear scale (Figure A-1) is more appropriate than the logarithmic scale for this purpose, as fluid flow in reservoirs is controlled by permeability, not by the log of permeability.


            Well A      Well B
Count       276         271
Average     456         577
Geometric   32          20
Harmonic    0.2         0.2
Cv          1.9         1.7

Figure A-2: Two wells showing the distribution of permeability on a logarithmic scale


Note that the logarithmic scale is a good scale for considering the net:gross. For this exercise, take 1 mD as the appropriate cut-off for an oil field.

To consider the spatial correlation in the vertical direction permeability semivariograms are
also provided (Figure A-3).

[Figure A-3a: Permeability semivariograms for Well A (top: core data from 1610-1640; bottom: density log data), plotted as γ(|h|) against lag (0-9).]

In each of the wells the core semivariograms show a different nugget from the log data. This is often the case with core data, as there is often little correlation (i.e., a high difference) between adjacent core plugs, usually the result of small-scale heterogeneity. In Well A, there is a short range (1 m) and a stationary (probably exponential) model. In Well B, there is a longer range (4 m) and a clear hole effect. The hole in Well B is confirmed by the variogram of the density log data, whereas the apparent hole in Well A is not present in the log data.


[Figure A-3b: Permeability semivariograms for Well B (top: core data from 1620-1650; bottom: density log data), plotted as γ(|h|) against lag (0-9).]

Consider the textural variations expected in this environment and how you might interpret
the wells in terms of layers or channels and whether the distribution is systematic or
random. The semivariograms hold vital clues.

We have three averages which can be used for estimating effective properties. Their application depends on various assumptions. For single-phase flow in two dimensions:

* Arithmetic average – layered system, flow parallel to the layers
* Geometric average – random system
* Harmonic average – layered system, flow in series across the layers
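The three averages can be sketched for a list of permeabilities (the values below are arbitrary illustration, not the well data; the ordering arithmetic ≥ geometric ≥ harmonic always holds):

```python
# Arithmetic, geometric and harmonic permeability averages for a sample set.
import math

def perm_averages(k):
    n = len(k)
    arithmetic = sum(k) / n                                  # layer-parallel flow
    geometric = math.exp(sum(math.log(v) for v in k) / n)    # random system
    harmonic = n / sum(1.0 / v for v in k)                   # flow in series
    return arithmetic, geometric, harmonic

a, g, h = perm_averages([105, 140, 297, 236, 106])
print(round(a, 1), round(g, 1), round(h, 1))
```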

Additional information is available from a fluvial outcrop analogue (Figure A-4) and a subsurface analogue (Figure A-5). From these you can determine an average aspect ratio of approx. 1:30.


Outcrop Analogues

La Serreta, Tertiary, Ebro Basin, Spain
Distributary fluvial system
Vertically/laterally stacked sandbodies
Medium NTG (35-45%)
Aspect ratio histogram (log-normal; avg. thickness 5.3 m, avg. channel width 125 m, avg. aspect ratio 1:27)

Figure A-4: Outcrop analogue from Spain

Note that the La Serreta outcrop has a lower net:gross than Well A or B. This is because the gross is defined from the WHOLE cliff-face interval. Selecting the top and base of the laterally amalgamated sandstone in the middle of the picture would give a higher net:gross.

Subsurface Analogues

[Figure A-5: Subsurface analogue from the Gulf of Thailand (Zheng, 1997). Channel sandbody width (m) from well-testing results, distinguishing braided systems, composite systems and meandering systems, with faulted anomalies and a well test from the Gulf of Thailand showing linear flow.]




Well A: Count 276, Average 456, Geometric 32, Harmonic 0.2, Cv 1.9
(Clue: think about net:gross and geobody interpretation)

Well B: Count 271, Average 577, Geometric 20, Harmonic 0.2, Cv 1.7
(Clue: think about net:gross and geobody interpretation)

Figure A-6: Template for drawing a model for the region around wells A and B

Sketch the geology away from the wells on the templates provided (Figure A-6) using the locations of the sands, their aspect ratio and the net:gross. Remember that it is highly unlikely that Well A or B is drilled in the centre of all the channels intersected. These are relatively high net:gross sandstones and one can expect the channels to be well connected. Consider the most appropriate average for the resulting model.
The solutions that follow refer to Figures A-7 to A-13.


Toro-Rivera, M., Corbett, P.W.M., and Stewart, G. (1994) Well test interpretation in a heterogeneous braided fluvial reservoir. SPE 28828, Europec, 25-27 October.
Zheng, S-Y. (1997) Well testing and characterisation of meandering fluvial channel reservoirs. Unpublished PhD Thesis, Heriot-Watt University, November 1997, 226p.



Exercise 1 Solutions

1. Determine the sill (c), nugget (c0) and range (a) for this variogram.

Sill (c) = 1 − nugget (c0) = 0.35
Nugget (c0) = 0.65
Range (a) = 2.5 lags (0.75 m)

2. Fit a spherical and an exponential model to these data given

Spherical model:

γ(h) = c0 + c·[3h/(2a) − (1/2)(h/a)³],   if h ≤ a
γ(h) = c0 + c,                           if h > a

Lag (h)   Gamma(h)
0         0.65
1         0.85
2         0.98
3         1.00
4         1.00
5         1.00
6         1.00

Exponential model:

γ(h) = 0,                                if h = 0
γ(h) = c0 + (c − c0)·[1 − exp(−h/a)],    if h ≠ 0

Lag (h)   Gamma(h)
0         0.65
1         0.75
2         0.82
3         0.86
4         0.89
5         0.91
6         0.92
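The fitted models can be evaluated numerically; a minimal sketch with c0 = 0.65, c = 0.35 and a = 2.5 lags. The spherical values reproduce the solution table above; the exponential values depend on how the range parameter is interpreted (effective versus practical range), so they will differ slightly from the printed exponential column:

```python
# Evaluate the fitted spherical and exponential variogram models at the
# solution lags. Only the spherical values are expected to match the table.
import math

C0, C, A = 0.65, 0.35, 2.5   # nugget, sill contribution, range (in lags)

def spherical(h):
    if h > A:
        return C0 + C
    return C0 + C * (3 * h / (2 * A) - 0.5 * (h / A) ** 3)

def exponential(h):
    return C0 + C * (1 - math.exp(-h / A))

for lag in range(1, 7):
    print(lag, round(spherical(lag), 2), round(exponential(lag), 2))
```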










[Plot: experimental variogram with the fitted spherical and exponential models, lags 0-7.]

3. What other model might be appropriate?

Hole effect model

Nugget = 0, range = 0.75 and sill = 1 for the case shown:

γ(h) = c·[1 − sin(πh/a) / (πh/a)]
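The hole-effect model can be sketched as below, written with the cardinal-sine normalisation so that γ(h) → 0 as h → 0 (the exact normalisation intended by the original formula is ambiguous, so this form is an assumption):

```python
# Hole-effect (cardinal-sine) variogram model with the solution parameters:
# nugget 0, sill c = 1, range a = 0.75.
import math

def hole_effect(h, c=1.0, a=0.75):
    if h == 0:
        return 0.0
    x = math.pi * h / a
    return c * (1 - math.sin(x) / x)

print(round(hole_effect(0.75), 2))   # at h = a the model passes through the sill
```

Beyond the range the model oscillates above and below the sill, which is the "hole" signature seen in the experimental variogram.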








[Plot: experimental variogram with the fitted spherical, exponential and hole-effect models, lags 0-7.]



Exercise 2 Solutions

i.) Discuss which stochastic simulation algorithms might be used for modelling these patterns.

For most of these models, object modelling algorithms would be better than pixel models. The background facies would always be the black facies and the white facies would be the modelled objects. The white facies might represent sand (channel models) or shales. Placement rules would vary: in (I) the object seems to be placed in the centre, while in (V) a repulsion model seems to have been used.

ii.) Match six corresponding omni-directional variograms (a-f) with the sample
"reservoir images" (I-VI).






iii.) Which of the patterns (I-VI) would be better modelled with an anisotropic
variogram model.




[Annotated patterns: one pattern is short range and isotropic; three patterns show strong anisotropy.]

iv.) Which patterns feature strong (zonal) anisotropy and which weaker (geometric) anisotropy?

Patterns II, III, IV: zonal anisotropy

Pattern VI: geometric anisotropy

Pattern types I – VI





Omni-directional variograms (a-f)




Exercise 3 Solutions

In these proposed solutions, the model has been simplified and approximated.



Well A: Count 276, Average 456, Geometric 32, Harmonic 0.2, Cv 1.9
High net:gross; clue: aspect ratio of about 1:30
Random system: keff = geometric average = 32 mD

Figure A-7: Solution for Well A. Only small channels are present in the well, suggesting that small channels of limited extent will be present in the region around this well. Because of the random nature of these small channels, the geometric average is the appropriate average to estimate effective permeability.




Well B: Count 271, Average 577, Geometric 20, Harmonic 0.2, Cv 1.7
Clue: aspect ratio of about 1:30
Layered system: keff = arithmetic average = 577 mD

Figure A-8: Solution for Well B. A few large channels are present in the well, suggesting that channels of greater lateral extent will be present in the region around this well. Because of the layered nature of these channels, the arithmetic average is the most appropriate average to estimate effective permeability.

In the above examples, the channels in the model are anchored at their observed locations in the wells. Even if every interpreter of the well data identified the same sands in the same locations, the position of the well within each channel (central, left-lateral or right-lateral) will vary. Away from the well, further channels are needed to maintain the same net:gross over the volume of the model. The thickness of these channels is kept close to the thickness of the observed channels in each well. In this sense the properties – channel thickness, aspect ratio and net:gross ratio – are considered stationary for each well model.

The models are also simplified to 2D across the channel direction (transverse to the flow direction). Despite these simplifications, the models are thought to be useful and illustrative. In a real exercise, a distribution (pdf) of channel thickness and aspect ratio could give higher variability. If the net:gross were lower, the sands would become more disconnected and for this model the effective permeability would be drastically reduced. Since these wells are in fact part of the same field, the question of whether the two wells belong to a single stationary field with more variability (to accommodate both well data sets in one model) would need further consideration.




Well A (alternative, low net:gross): Count 276, Average 456, Geometric 32, Harmonic 0.2, Cv 1.9
Clue: aspect ratio of about 1:30
Disconnected system: keff = harmonic average = 0.2 mD

Figure A-9: In the alternative, lower net:gross case the channels are effectively disconnected and the effective flow is across the inter-channel material, which could be very low – possibly as low as the harmonic average.

With core plug data, consideration has to be given to sample-sufficiency issues. To estimate the arithmetic average within +/-10% of the true arithmetic mean (95% of the time) you require (10·Cv)² samples. For Wells A and B these are 361 and 289 samples respectively; the actual sample counts equate to precisions of about +/-23% and +/-21%. In this example, one could conclude that the core samples do a reasonable job of capturing the variability (assuming that there is no bias and that lower-permeability zones are not systematically missed).
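The arithmetic behind this paragraph can be checked with a short sketch of the N0 = (10·Cv)² rule and the corresponding precision 2·Cv/√N (the formulas as interpreted here; the counts and Cv values are those quoted above):

```python
# Sample-sufficiency check: N0 = (10*Cv)^2 samples are needed for +/-10%
# precision on the arithmetic mean (95% confidence); with N actual samples
# the achievable precision is roughly 2*Cv/sqrt(N).
import math

def n_required(cv):
    return round((10 * cv) ** 2)

def precision_pct(cv, n):
    return 100 * 2 * cv / math.sqrt(n)

print(n_required(1.9), n_required(1.7))                                # 361 289
print(round(precision_pct(1.9, 276)), round(precision_pct(1.7, 271)))  # 23 21
```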

A cross-plot of vertical (kv) versus horizontal (kh) core plug data (Figure A-10) shows that there is a lot of variability at the core plug scale. These plugs are adjacent pairs, and the variability in the ratio reflects a lot of local heterogeneity rather than anisotropy. Considering the models developed above, the layered system (Well B) would have the worst-case vertical permeability (as low as the harmonic average of 0.2 mD). In a random system, the vertical permeability would be closer to the geometric average and the system effectively more isotropic. This example is shown to draw attention to the difficulty of estimating effective vertical permeability because of the differences in scale. The layering (or not) of the system will have the major effect.


[Figure: kv/kh cross-plots for the two wells, with kv and kh both on logarithmic axes from 0.01 to 10000 mD.]

Figure A-10 Core plug kv vs kh ratio (Note: kv on the y axes, kh on the x axes)



Count 276 Low Net:Gross

Average 456
Geometric 32 Clue: Aspect ratio of about 1:30
Harmonic 0.2
Random System: keff = Harmonic average 32mD
Cv 1.9

kv/kh = 1

Figure A-11: Well A. Effective kv vs kh at the formation scale




Well B: Count 271, Average 577, Geometric 20, Harmonic 0.2, Cv 1.7
Clue: aspect ratio of about 1:30
Layered system: kh = arithmetic average = 577 mD; kv = harmonic average = 0.2 mD
kv/kh = 0.004

Figure A-12: Well B effective kv vs kh at the formation scale


The well test data for these two wells are shown in Figure A-13. These data were presented in the statistics chapter. Well A shows the effects of small-scale channels near the well (negative skin) and the low effective permeability of a random system. Well B shows very high permeability, which is essentially the permeability of the highly permeable channels (the arithmetic average when confined to the channel intervals).

         Well A                  Well B
ETR:     Linear flow             Radial flow?
MTR:     Radial flow (44 mD)     Radial flow (1024 mD)
Skin:    Negative skin           Small positive skin
LTR:     OWC effect              Fault?

Figure A-13: Well A and Well B well test interpretation



Aasen, J.O., Silseth, J.K., Holden, L., Omre, H., Halvorsen, K.B. and Hoiberg, J.
(1990) A stochastic reservoir model and its use in evaluation of uncertainties in the
results of recovery processes. In: A. Buller et al. (eds) North Sea oil and gas reservoirs
II. London, Graham and Trotman, 425-436.

Alabert, F.G. and Massonnat, G.J. (1990) Heterogeneity in a complex turbiditic

reservoir: stochastic modelling of facies and petrophysical variability. SPE 23138.

Babadagli, T. (1999) Effect of fractal permeability correlations on waterflooding

performance in carbonate reservoirs. Journal of Petroleum Science and Engineering,
23, 223-238.

Begg, S.H., Kay, A., Gustason, E.R. and Angert, P.F. (1996) Characterization of a
complex fluvial-deltaic reservoir for simulation. SPE Formation Evaluation Sept
1996, 147-153.

Begg, S.H., et al., (1985), Modelling the effects of shales on reservoir performance:
Calculation of effective vertical permeability, SPE 13529.

Bu, T. and Damsleth, E. (1996) Errors and uncertainties in reservoir performance

predictions. SPE Formation Evaluation Sept 1996, 194-200.

Carle, S.F. and Fogg, G.E. (1996) Transition probability-based geostatistics.

Mathematical Geology, 28(4), 453-476.

Chandler, M. A., G. Kocurek, D. J. Goggin, and L. W. Lake, (1989) “Effects of

Stratigraphic Heterogeneity on Permeability in Eolian Sandstone Sequences, Page
Sandstone, Northern Arizona,” American Association of Petroleum Geologists
Bulletin, 73 (1989) 658-668.

Clemetson, R., Hurst, A.R., Knarud, R. and Omre, H. (1990) A computer program
for evaluation of fluvial reservoirs. In: A. Buller et al. (eds) North Sea oil and gas
reservoirs II. London, Graham and Trotman, 373-386.

Cosentino, L., (2001), Integrated Reservoir Studies, Editions Technip, Paris, 310p.

Cox, D.L., Lindquist, S.J., Havholm, K.G. and Srivastava, R.M. (1994) Integrated
modelling for optimum management of a giant gas condensate reservoir, Jurassic
Eolian Nugget Sandstone, Anshutz Ranch East Field, Utah Overthrust (USA). In:
J.M. Yarus and R.L. Chambers (eds) Stochastic Modelling and Geostatistics. AAPG
Computer Applications in Geology No. 3. Chapter 22, 287-320.

Datta-Gupta, A., Lake, L.W., and Pope, G.A., (1995), Characterizing heterogeneous permeable media with spatial statistics and tracer data using sequential simulated annealing, Mathematical Geology, 27(6), 763-787.

Delhomme, A.E.K., and J.F.Giannesini, (1979), New Reservoir Description Techniques

improve Simulation results in Hassi-Massaoud Field, Algeria, SPE 8435.


Dehgani, K., Harris, P.M., Edwards, K.A. and Dees, W.T. (1999) Modelling a vuggy
carbonate reservoir, McElroy Field, West Texas. AAPG Bull, 83(1), 19-42.

Deutsch, C.V., 2002, Geostatistical Reservoir Modelling, Oxford University Press, 376p

Deutsch, C.V., and Journel, A.G., 1992, GSLIB: Geostatistical Software Library
and User’s Guide, New York, Oxford University Press, 340p

Dimitrakopolous, R. and Desbarats, A.J. (1993) Geostatistical modelling of gridblock

permeabilities for 3D reservoir simulators. SPE Reservoir Engineering 8(1), 1239-1246.

Doyen, P.M., 1988, Porosity from seismic data: a geostatistical approach, Geophysics,
53(10), 1263-1275.

Doyen, P.M., Psaila, D.E. and Strandenes, S. (1994) Bayesian sequential indicator
simulation of channel sands from 3D seismic data in the Oseberg Field, Norwegian
North Sea. SPE 28382.

Doyen, P.M., den Boer, L.D., Pillet W.R., Seismic porosity mapping in the Ekofisk
Field using a new form of collocated co-kriging, SPE paper 36498.

Dromgoole, P. and Speers, R. (1997) Geoscore - a method for quantifying uncertainty

in field reserve estimates. Petroleum Geoscience 3(1), 1-12.

Dubrule, O. (1998) Geostatistics in Petroleum Geology. AAPG Continuing Education

Course Note Series #38.

Dubrule, O. et al. (1994) From sedimentology to geostatistical reservoir modelling.

In: K. Helbig (ed.) Modelling the Earth for Oil Exploration. Pergamon, 19-114.

Dubrule, O., Basire, C., Bombarde, S., Samson, Ph., Segonds, D. and Wonham, J.
(1997) Reservoir geology using 3D modelling tools. SPE 38659.

Dubrule, O.R.F., Thibaut, M, Lamy, P. et al. (1998) Geostatistical reservoir

characterization constrained by 3D seismic data. Petroleum Geoscience 4(2), 121-128.

Dubrule, O., (2003), Geostatistics for seismic data integration in earth models, 2003
Distinguished Instructor Short Course notes, SEG and EAGE.

Eisenberg, R.A., Harris, P.M., Grant, C.W. et al. (1994) Modelling reservoir
heterogeneity within outer ramp carbonate facies using an outcrop analog, San Andres
Formation of the Permian Basin. AAPG Bull 78(9), 1337-1359.

Fanchi, J.R., Meng, H.A., Stolz, R.P. et al. (1996) Nash reservoir management study
with stochastic images - a case study. SPE Formation Evaluation 11(3), 155-161.

Fogg, G.E., Lucia, F.J., and Senger, R.K., 1991, Stochastic simulation of interwell-scale
heterogeneity for improved prediction of sweep efficiency in a carbonate reservoir, in
Reservoir Characterisation II, Lake, L.W., Carroll, H.B.Jr. and Wesson, T.C. (Eds.),
Academic Press Inc., New York, 355-381.



Eschard, R., Lemouzy, P., Bacchiana, C, Desaubliaux, Parpant, J., and Smart, B.,
(1998), Combining Sequence stratigraphy, geostatistical simulations, and production
data for modelling a fluvial reservoir in the Chaunoy Field, (Triassic, France), AAPG
Bulletin, 82 (4) 545-568.

Geehan, G.W. (1993) The use of outcrop data and heterogeneity modelling in
development planning. In: R. Eschard and B. Doligez (eds) Subsurface Reservoir
Characterisation from Outcrop Observations. Paris, Technip, 53-64.

Goggin, D. J. (1988), “Geologically Sensible Modelling of the Spatial Distribution

of Permeability in Eolian Deposits: Page Sandstone (Jurassic) Outcrop, Northern
Arizona,” PhD dissertation, The University of Texas at Austin.

Goggin, D. J., M. A. Chandler, G. A. Kocurek, and L. W. Lake, (1992) “Patterns of

Permeability in Eolian Deposits,” SPE Formation Evaluation, 3 297-306.

Goggin, D. J., M. A. Chandler., G. A. Kocurek, and L. W. Lake, (1994) “Permeability

Transects of Eolian Sands and Their Use in Generating Random Permeability Fields,”
SPE Formation Evaluation, 7, 7-16.

Goodyear, S.G., and Gregory, A.T., (1994), Risk Assessment and Management in
IOR projects, SPE 28844, presented at Europec, London, 25-27 Oct.

Grant, C. W., D. J. Goggin, and P. M. Harris, (1994) “Outcrop Analog for Cyclic-
Shelf Reservoirs, San Andres Formation of Permian Basin: Stratigraphic Framework,
Permeability Distribution, Geostatistics, and Fluid-Flow Modelling,” American
Association of Petroleum Geologists Bulletin, 78, 23-54.

Haas, A. and Dubrule, O. (1994) Geostatistical inversion of seismic data. First Break
12(11), Nov. 1994.

Haldorsen, H. H., (1983), Reservoir Characterisation Procedures for Numerical

Simulation, PhD Thesis, 1983.

Haldorsen, H. H., and MacDonald, C.J., (1987), Stochastic modelling of underground

reservoir facies, SPE 16751.

Hatloy, A.S. (1994) Numerical facies modelling combining deterministic and

stochastic methods. In: J.M. Yarus and R.L. Chambers (eds) Stochastic Modelling and
Geostatistics. AAPG Computer Applications in Geology No. 3. Chapter 14, 109-120.

Henriquez, A., Tyler, K. and Hurst, A. (1990) Characterisation of fluvial sedimentology

for reservoir simulation modelling. SPE Formation Evaluation, September, 211-216.

Herweijer, J.C. and Dubrule, O. (1995) Screening of geostatistical reservoir models

with pressure transients. Journal of Petroleum Technology 47(11), 973-979.

Hewett, T.A. (1986) Fractal Distributions of Reservoir Heterogeneity and their

Influence on Fluid Transport, SPE15386.

Hird, K. and Dubrule, O. (1995) Quantification of reservoir connectivity of reservoir

description applications. SPE 30571.

Hu, L.Y., Joseph, Ph. and Dubrule, O. (1992) Random genetic simulation of the
internal geometry of deltaic sandstone bodies. SPE 24714.

Isaaks, E.H., and Srivastava, R.M., (1989), Applied Geostatistics: New York, Oxford
University Press, 561p.

Jeffry, R.W., Stewart, I.C., and Alexander, D.W., 1996, Geostatistical estimation
of depth conversion velocity using well control and gravity data, First Break, 14(8),

Jensen, J.L., Lake, L.W., Corbett, P.W.M., and Goggin, D.J., (2000), Statistics for
Petroleum Engineers and Geoscientists, 2nd Edition, Elsevier, Amsterdam, 338pp.

Journel, A.G., and Huijbregts, C.J., (1978), Mining Geostatistics, Academic Press, 600p

Journel, A., (2003), Multiple-point Geostatistics: A state of the art, Unpublished

Stanford Center for Reservoir Forecasting paper.

Kasap, E. and Lake, L.W. (1990) Calculating the effective permeability tensor of a
grid block. SPE Formation Evaluation, 5, 192-200.

Kerr, D.R., Ye, L.M, Bahar, A. et al. (1999) Glenn Pool Field, Oklahoma: A case of
improved production from a mature reservoir. AAPG Bull 83(1), 19-24.

Kerans, C., Lucia, F.J., and Senger, R.K., (1994) Integrated Characaterization of
Carbonate Ramp Reservoirs using Permian San Andres Formation Outcrop Analogs,
AAPG Bulletin, 78(2), 181-216.

Kerans, C., and Tinker, S., (1997), Sequence stratigraphy and characterisation of
carbonate reservoirs, SEPM short course No. 40, 130p.

Kittridge, M. G., L. W. Lake, F. J. Lucia, and G. E. Fogg, (1990) “Outcrop/Subsurface

Comparisons of Heterogeneity in the San Andres Formation,” SPE Formation
Evaluation, 5, 233-240.

Kongwung, B. and Ronghe, S. (2000) Reservoir identification and characterization

through sequential horizon mapping and geostatistical analysis: a case study from
the Gulf of Thailand. Petroleum Geoscience 6(1), 47-57.


Lake, L.W. and Malik, M.A. (1993) Modelling fluid flow through geologically
realistic media. In: C.P. North and D.J. Prosser (eds) - Characterization of Fluvial
and Aeolian Reservoirs. Geol Soc Spec Publ 73, 367-376.

Lanzarini, W.L., Poletto, C.A., Tavares, G. and Pesco, S. (1997) Stochastic modelling
of geometric objects and reservoir heterogeneities. SPE 38953.

Lia, O., Omre, H., Tjelmeland, H., Holden, L. and Egeland, T. (1997) Uncertainties
in reservoir production forecasts. AAPG Bull 81(5), 775-802.

Lucia, F. J. and G. E. Fogg, (1990) “Geologic/Stochastic Mapping of Heterogeneity

in a Carbonate Reservoir,” Journal of Petroleum Technology, 42, 1298-1303.

MacLeod, M., Behrens, R.A., and Tran, T.T., (1996), Incorporating seismic attribute
maps in 3D reservoir models, SPE 36499, presented at 71st SPE Ann. Tech. Conf. and
Exhibit., Denver, Co., Oct. 6-9.

MacDonald, A.C., Hoye, T.H., Lowry, P., Jacobsen, T., Aasen, J.O. and Grindheim,
A.O. (1992) Stochastic flow unit modelling of a North Sea coastal-deltaic reservoir.
First Break 10(4), April 1992, 124-133.

Meehan, D.N. and Verman, S.K. (1995) Improved reservoir characterization in low-
permeability reservoirs with geostatistical models. SPE Reservoir Engineering 10(3), 157-162.

Omre, H., Tjemland, H., Qi, Y. and Hinderaker, L. (1993) Assessment of uncertainty
in the production characteristics of a sandstone reservoir. In: Linville (ed) Reservoir
Characterization III. Penwell, 556-603.

Ovreberg, O., Damsleth, E. and Haldorsen, H.H. (1992) Putting error bars on reservoir
engineering forecasts. Journal of Petroleum Technology, June 1992, 732-738.

Petit, F.M., Biver, P.Y.A., Calatayud, P.M., Lesueur, J.L. and Alabert, F. (1994) Early
quantification of hydrocarbon in place through geostatistical object modelling and
connectivity computations. SPE 28416.

Roggero, F. (1997) Direct selection of stochastic model realisations constrained to

historical data. SPE 38731.

Rudkiewicz, J.L., Guerillot, D. and Galli, A. (1990) An integrated software for

stochastic modelling of reservoir lithology and properties with an example from the
Yorkshire Middle Jurassic. In: A. Buller et al. (eds) North Sea oil and gas reservoirs
II. London, Graham and Trotman, 399-406.

Schildberg, Y., Poncet, J., Bandiziol, D., Deboaisne, R., Laffont, F. and Vittori, J.
(1997) Integration of geostatistics and well tests to validate a priori geological models
for dynamic simulations: a case study. SPE 38752.

Seifert, D. and Jensen, J.L. (1997) Object and pixel-based reservoir modelling of
a braided fluvial reservoir. In: Pawlowsky-Glahn (ed.) Proceedings of IAMG ‘97,
Barcelona, pp. 719-724.


Seifert, D., Lewis, J.J.M. and Hern, C.Y. (1996) Well placement optimisation and
risking using 3-D stochastic reservoir modelling techniques. SPE 35520.

Singdahlsen, D.S. (1991) Reservoir characterization and geostatistical modelling of

an eolian reservoir for simulation, East Painter Reservoir Field, Wyoming. AAPG
Bull 75(6), 1140-??

Srivastava, M. (1994) An overview of stochastic methods for reservoir characterisation.

In: J.M. Yarus and R.L. Chambers (eds) Stochastic Modelling and Geostatistics.
AAPG Computer Applications in Geology No. 3. Chapter 1, 3-16.

Strebelle, S., Payrazyan, K., and Caers, J., (2002), Modelling of deepwater turbidite
reservoir conditional to seismic data using multiple-point geostatistics, SPE 77425,
presented at SPE Ann Tech Conf and Exhibit, San Antonio.

Sweet, M.L., Blewden, C.H., Carter, A.M. and Mills, C.A. (1996) Modelling
Heterogeneity in a low-permeability gas reservoir using geostatistical techniques,
Hyde Field, Southern North Sea. AAPG Bull 80(11), 1719-1735.

Tinker, S.W. (1996) Building the 3D jigsaw puzzle: applications of sequence

stratigraphy to 3D reservoir characterization, Permian Basin. AAPG Bull 80(4),

Tjolsen, C.B., Johnsen, G., G. Halvorsen, A. Ryseth and E. Damsleth, (1996) Seismic
data can improve stochastic facies modelling, SPEFE, 11, 141-146.

Tyler, K., Henriquez, A. and Svanes, T. (1994) Modelling heterogeneities in fluvial

domains: a review of the influence on production profiles. In: J.M. Yarus and R.L.
Chambers (eds) Stochastic Modelling and Geostatistics. AAPG Computer Applications
in Geology No. 3. Chapter 8, 77-89.

Tyler, K.J., Svanes, T. and Henriquez, A. (1994b) Heterogeneity modelling used

for a production simulation of a fluvial reservoir. SPE Formation Evaluation, June
1994, 85-92.

Vistelius, A.B., (1949), On the question of the mechanism of formation of strata:
Dokl. Akad. Nauk SSSR, v65, 191-194.

Wang, J. and MacDonald, A.C. (1997) Modelling channel architecture in a densely

drilled oilfield in East China. SPE 38678.

Weber, K.J. (1996) Visions in reservoir management - what next? In “TRC Special
Publications of the Japan National Oil Corporation, Technology Research Centre”.

Yarus, J.M., and Chambers, R.L., (1994), Stochastic modelling and geostatistics:
principles, methods, and case studies: AAPG Computer Applications in Geology, 379p.



Applied Geostatistics – Variography and Stochastic Simulations in Reservoir Characterisation – Practical Issues


13.1. Measures of Spatial Correlation

Spatial continuity of data is related to the correlation between samples separated by a given distance. Typically, samples located close to each other in space are likely to have more similar values than samples located further apart.

Spatial continuity describes the similarities/differences in space over increasing distance. This entails comparing pairs of data points separated by a distance h (h is a vector). The smaller the variation in value over larger distances, the more continuous the population and the smoother the function.

Basically, spatial structural analysis, or variography, consists of two main phases:

1. Exploratory variography – estimation and interpretation of spatial continuity measures using the data;

2. Modelling of spatial structures – development of theoretical variograms. The latter usually consists of fitting the experimental variograms, calculated from the data, with theoretical models described by analytical formulas.

Spatial correlation can be quantified by second-order moments such as the covariance or the semivariogram.

• Covariance function C(x,h):

C(x,h) = E{(Z(x) − m(x))(Z(x+h) − m(x+h))}

where E is the expectation operator; in the 2-dimensional case the vector x = (x,y).

Spatial correlation analysis entails, to some extent, an assumption of stationarity. Several levels of stationarity assumption are distinguished:

• Strict stationarity (which is usually impractical and of purely theoretical interest): for any set of n samples at locations xi (i=1,...,n) and any vector h, the joint multi-dimensional distribution function of V(x1), V(x2),..., V(xn) is identical to that of V(x1+h), V(x2+h),..., V(xn+h).

• Second-order stationarity:

1. The mean exists and is constant for any x:

E[V(x)] = m = const;

2. The covariance exists and depends only on the vector h:

Cov(x,x+h) = E[V(x)·V(x+h)] − m² = C(h).

• Intrinsic hypothesis (a weaker assumption):

1. E[V(x+h) − V(x)] = 0, ∀x

2. The variogram depends only on h:

2γ(h) = Var[V(x+h) − V(x)], ∀x

In practice, under the hypothesis of second-order stationarity, C(x,h) = C(h), the empirical estimate of the covariance function (the experimental covariance) is:

C(h) = [1/N(h)] Σ_{i=1}^{N(h)} Z(x_i) Z(x_i + h) − m_{−h} m_{+h}        (1)

where N(h) is the number of pairs separated by the lag vector h, and

m_{+h} = [1/N(h)] Σ_{i=1}^{N(h)} Z(x_i + h),    m_{−h} = [1/N(h)] Σ_{i=1}^{N(h)} Z(x_i)

The experimental (or raw) covariance given by formula (1) is calculated in the 2D case as shown in Figure App. 1. The number of pairs N(h) is obtained by counting, for each data point, the pairs found within the lag spacing Δh (with lag tolerance ±Δh_t), in the direction defined by the azimuth angle s with direction tolerance ±Δs/2 and bandwidth bw. The covariance value can be calculated for any number of lags n covering the data space.


Figure App. 1 2D Variogram lag definitions

In the 3D case the angle tolerance expands from a flat sector to a 3D cone. Thus, for vertical spatial correlation (the direction angle is always 90°), the pairs for each data point are collected according to Figure App. 2.

21/07/16 Institute of Petroleum Engineering, Heriot-Watt University 69


Figure App. 2 Vertical variogram lag definitions in 3D (Deutsch, 2002)

So for the full 3D case the parameters determining the covariance calculation are the following:

• Azimuth angle s
• Azimuth angle tolerance Δs
• Azimuth bandwidth bw
• Dip angle
• Dip angle tolerance
• Dip bandwidth
• Lag distance Δh (can be direction specific)
• Lag tolerance (typically half the lag distance) Δh_t
• Number of lags
• Maximum distance = number of lags × lag distance
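The pair-search geometry described above can be sketched in code. The following is a minimal 2D illustration (all function and parameter names are ours, not from any geostatistical package):

```python
import numpy as np

def directional_pairs(coords, lag, lag_tol, azimuth, az_tol, bandwidth):
    """Return index pairs (i, j) whose separation vector matches the lag
    within the lag tolerance, the azimuth within the angle tolerance and
    the perpendicular offset within the bandwidth (2D, azimuth in degrees
    anticlockwise from the x-axis)."""
    a = np.radians(azimuth)
    u = np.array([np.cos(a), np.sin(a)])      # search direction
    perp_u = np.array([-u[1], u[0]])          # perpendicular to it
    pairs = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = coords[j] - coords[i]
            h = np.linalg.norm(d)
            if h == 0.0 or abs(h - lag) > lag_tol:
                continue                      # outside the lag bin
            ang = np.degrees(np.arccos(np.clip(abs(d @ u) / h, 0.0, 1.0)))
            if ang <= az_tol and abs(d @ perp_u) <= bandwidth:
                pairs.append((i, j))
    return pairs

# Five points: three unit steps along the x-axis qualify at lag 1, azimuth 0
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.1], [0.0, 5.0]])
print(directional_pairs(pts, lag=1.0, lag_tol=0.2, azimuth=0.0,
                        az_tol=22.5, bandwidth=0.5))
```

Each qualifying pair must simultaneously lie within the lag tolerance in distance, within the azimuth tolerance in direction, and within the bandwidth perpendicular to the search direction.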

Another second-order moment commonly used in geostatistics is the semivariogram (or simply the variogram).

The semivariogram/variogram is the basic tool for spatial structural analysis and variography. Under the intrinsic hypothesis of stationarity, the theoretical formula can be expressed as half the variance of the increments:


γ(x,h) = ½ Var{Z(x) − Z(x+h)} = ½ E{[Z(x) − Z(x+h)]²} = γ(h)
Hence, empirical (experimental or raw) semivariogram can be computed as follows:

γ(h) = [1/(2N(h))] Σ_{i=1}^{N(h)} [Z(x_i) − Z(x_i + h)]²
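This estimator is straightforward to compute directly. A minimal 1D sketch with a brute-force pair search (function and variable names are illustrative):

```python
import numpy as np

def experimental_variogram(z, x, lags, lag_tol):
    """Empirical semivariogram: for each lag h, average the squared pair
    differences over all pairs whose separation falls in [h-tol, h+tol],
    and halve it (the 1/(2N(h)) factor)."""
    gamma = []
    for h in lags:
        sq, count = 0.0, 0
        for i in range(len(z)):
            for j in range(i + 1, len(z)):
                if abs(abs(x[i] - x[j]) - h) <= lag_tol:
                    sq += (z[i] - z[j]) ** 2
                    count += 1
        gamma.append(sq / (2.0 * count) if count else np.nan)
    return np.array(gamma)

# Regularly spaced 1D data
x = np.arange(6, dtype=float)
z = np.array([1.0, 2.0, 1.0, 3.0, 2.0, 4.0])
print(experimental_variogram(z, x, lags=[1.0, 2.0], lag_tol=0.4))
```

In 2D or 3D the inner test would be replaced by the directional pair search with azimuth tolerance and bandwidth.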

An example of a variogram plot is presented in Figure App. 3, where each point is labelled with the number of pairs N(h) used for the variogram calculation at that lag distance h. A good illustration of variogram computation is the h-scatter plot: a cross-plot of the paired values separated by the lag distance h (within the lag tolerance). The h-scatter plot clearly shows any correlation of the values within the lag. In the stationary case the smaller lags (Lags 1, 2) feature high correlation between the paired values, while at the larger lags (Lag 8) the values show greater scatter (nugget effect).


Figure App. 3 Directional variogram plot, with the corresponding h-scatter plots of the cadmium data for Lags 1, 2 and 8 (Pannatier, 1996)









Figure App. 4 Top: variogram influenced by an outlier, with the corresponding data post plot showing the pairs to which the outlier contributes. Middle: the lag h-scatter plot with the outlying values marked for removal. Bottom: variogram and h-scatter plot for the data with the outlier (of 5.6) removed.

High-valued data (outliers) can greatly influence the variogram. The data distribution presented in Figure App. 4 has a single maximum value greater than 5. The variogram computed with this value included fluctuates markedly, as illustrated by the h-scatter plot; this is caused by the influence of the pairs that include the maximum value. If we ignore the outlier by removing values greater than 5, the new variogram based on n−1 data displays a more typical pattern of increasing and then levelling off.

Another useful tool to visualise the pairs contributing to the variogram is the variogram cloud (Figure App. 5): the non-averaged (half squared) pair differences are plotted against separation distance. This plot also shows the number of data pairs supporting the variogram in each lag.


Figure App. 5 Variogram cloud (Pannatier, 1996)

The characteristics of a variogram curve can be described in terms of its key components: nugget, sill and range (Figure App. 6).





Figure App. 6 Anisotropic variogram definitions

The nugget is the variogram value at zero distance (obtained by extrapolating the variogram curve back to the origin). It represents the random part of the spatial continuity.

The sill is the difference between the nugget and the value at which the variogram levels off to a constant at some lag distance, provided stationarity holds. The sill represents the correlated part of the spatial continuity. In the stationary, trend-free case, the sum of the nugget and the sill gives the level of the a priori variance (the variance of the global distribution). In the non-stationary case the sill does not exist and the variogram continues to increase, showing correlation at all distances within the data range.

The range is the distance at which the variogram reaches the sill. In the case of anisotropy, the range can change (Range 1, Range 2) with direction (Figures App. 6 and 7).

Figure App. 7 Directional variograms: anisotropic case

Anisotropic spatial correlation is best illustrated using a sector rose or a contour rose diagram (Figure App. 8). Directional variograms are plotted on a rose-type 2D diagram, with the colour reflecting the variogram value for each lag. Variogram contours are obtained by smoothing the rose sectors.

In the isotropic case, the variograms are almost the same in all directions (Figure App. 9). If the range varies with direction while the sill is constant, the anisotropy is called geometric (Figure App. 10); the variogram rose contours then form tight ellipses. If both the range and the sill vary with direction, the anisotropy is called zonal (Figure App. 11); in this case the variogram contours are parallel to and symmetrical about the axis of the longer (discontinuous) correlation direction.

Further examples of variograms corresponding to different geological sedimentary structures are presented in Figure App. 12.

Variograms work only under conditions of stationarity. The variogram, being a squared-difference function, is very sensitive to outliers (anomalously high data values), which may need to be ignored if they are not representative of the population.

There are several other second-order moments besides the variogram that can also describe spatial correlation. These are:


• Semivariogram
• Standardised Variogram
• Covariance
• Correlogram
• Madogram
• Rodogram
• Relative Variograms
• Drift

The Madogram, for example, is based on the absolute value of the differences between
the values in pairs:

M(h) = [1/(2N(h))] Σ_{i=1}^{N(h)} |Z(x_i) − Z(x_i + h)|

The Rodogram is based on the square root of the absolute differences between the paired values:

R(h) = [1/(2N(h))] Σ_{i=1}^{N(h)} |Z(x_i) − Z(x_i + h)|^{1/2}
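Both measures average a transformed pair difference over the lag, so they can share one pair list. A minimal sketch (names are ours; the pair list would come from the lag search described earlier):

```python
import numpy as np

def madogram(z, pairs):
    """M(h): half the mean absolute difference over the pairs in the lag."""
    return np.mean([abs(z[i] - z[j]) for i, j in pairs]) / 2.0

def rodogram(z, pairs):
    """R(h): half the mean square-rooted absolute difference over the pairs."""
    return np.mean([abs(z[i] - z[j]) ** 0.5 for i, j in pairs]) / 2.0

z = np.array([1.0, 2.0, 5.0, 1.0])
pairs = [(0, 1), (1, 2), (2, 3)]   # e.g. all pairs at lag 1
print(madogram(z, pairs), rodogram(z, pairs))
```

Because the differences are not squared, both statistics are less sensitive to outliers than the semivariogram.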


Figure App. 8 2D variogram representation. Bottom: variogram rose


Figure App. 9 2D isotropic experimental (raw) variogram and the fitted theoretical model. Bottom: raw variogram contours


Figure App. 10 Geometric anisotropy: 2D experimental (raw) variogram and the fitted theoretical model. Bottom: raw variogram contours


Figure App. 11 Zonal anisotropy: 2D experimental (raw) variogram and the fitted theoretical model. Bottom: raw variogram contours

When more than one correlated, spatially distributed variable is present, their joint spatial correlation is measured by the cross-covariance:

C_ij(x,h) = E{(Z_i(x) − m_i(x))(Z_j(x+h) − m_j(x+h))}

where E is the expectation operator; in the 2-dimensional case the location vector is x = (x,y).

The corresponding cross-variogram function is defined as follows:

• Theoretical expression for Nv correlated random functions Z_i (i=1,...,Nv):

γ_ij(h) = ½ E{(Z_i(x) − Z_i(x+h))(Z_j(x) − Z_j(x+h))}

• The empirical estimate of the cross-variogram:

γ_ij(h) = [1/(2N(h))] Σ_{α=1}^{N(h)} (Z_i(x_α) − Z_i(x_α + h))(Z_j(x_α) − Z_j(x_α + h))





Figure App. 12 Variograms of geological images: migrating ripples in aeolian sandstone (man-made, in a lab); convoluted and deformed lamination from a fluvial environment; cross-laminations from a deltaic environment (real example) (Deutsch, 2002)

13.2 Handling Trends

Spatial correlation and variogram analysis depend on data stationarity; however, a trend, i.e. a systematic dependence of the data on one (or several) directions, is very common in geological settings. The assumption of stationarity is fundamental in variogram modelling and in further geostatistical estimation (e.g. kriging). The presence of a trend makes the stationarity assumption invalid and significantly affects the correlation structure. An example of a linear trend in 1D data is illustrated in Figure App. 13. The trend results in a continuously increasing variogram, which reflects apparent correlation at all ranges and, consequently, non-stationarity of the mean and variance, restricting the use of, for instance, a kriging model. Removing the trend with a linear model restores stationary behaviour to the residuals within a range of 25 ft.

The presence of trends in data implies non-stationarity, and traditional geostatistical methods (kriging, etc.) cannot be used directly in such cases. Trend removal methods include polynomial, spline and non-linear models. Geostatistical models that account for trend include:


• Kriging with trend (universal kriging)
• Kriging with external drift (external variable)
• Moving window residual kriging (local predictions)
• Intrinsic random functions of order k (higher moments)
• Residual methods with non-linear trend models (Artificial Neural Networks, etc.)
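The simplest of these ideas, removal of a linear trend by least squares before variography, can be sketched as follows (synthetic data; the slope 0.08 and noise level are illustrative assumptions of ours):

```python
import numpy as np

# Synthetic 1D depth profile: linear trend plus noise (illustrative values)
rng = np.random.default_rng(0)
depth = np.linspace(0.0, 35.0, 71)
values = 0.08 * depth + rng.normal(0.0, 0.3, depth.size)

# Fit a linear (degree-1 polynomial) trend by least squares
coeffs = np.polyfit(depth, values, deg=1)
trend = np.polyval(coeffs, depth)

# Residuals should be close to stationary: zero mean, no systematic drift
residuals = values - trend
print(coeffs[0])            # recovered slope, close to the true 0.08
print(residuals.mean())     # essentially zero by construction
```

Variography is then carried out on the residuals, and the trend is added back after estimation or simulation.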

Figure App. 13 Trend removal, 1D case: data with a linear trend and the corresponding variogram

Figure App. 13(b) Data residuals after linear trend removal and the corresponding variogram of the residuals


13.3 Variogram Modelling

Experimental (or raw) variograms are modelled using a range of theoretical functions. Each variogram model function reflects a different spatial correlation feature. A variogram model can be a combination (sum) of several theoretical functions, which is called a nested structure. The most commonly used theoretical functions for modelling a variogram are presented in Figure App. 14.

1. Nugget - a constant non-zero variogram value, corresponding to the absence of correlation. In this case no interpolation method is meaningful. A nugget component is usually included along with other model types. The nugget effect can result from different sources: small-scale variability, measurement errors and positioning errors.

2. Spherical model - features linear behaviour near the origin and reaches its sill with zero derivative. This sill is the statistical (a priori) variance. The random function is continuous but not differentiable.

3. Exponential model - approaches the sill asymptotically, meaning the correlation becomes negligibly small rather than vanishing at a finite distance. The variogram reaches 95% of the sill (c + c0) at the effective correlation distance a.

4. Gaussian model - represents very smooth behaviour of the function at short distances. If it exhibits a nugget effect, this usually reflects measurement errors.

5. Power model - reflects a non-stationary increase of the variogram, i.e. correlation at all scales; characterises trends or fractal-type behaviour.

6. Hole effect model - represents periodic structures (e.g. objects); acts in one direction only.

7. Damped hole effect model - a product of the exponential covariance and the hole effect; more common than the pure hole effect.
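The most common model shapes above can be written down directly. A minimal sketch using the convention that the exponential and Gaussian models reach ~95% of the sill at the practical range a (function names are ours; strictly γ(0) = 0, while this sketch returns the nugget for all h):

```python
import numpy as np

def spherical(h, sill, a):
    """Spherical model: linear near the origin, reaches the sill exactly at range a."""
    h = np.asarray(h, dtype=float)
    return np.where(h < a, sill * (1.5 * h / a - 0.5 * (h / a) ** 3), sill)

def exponential(h, sill, a):
    """Exponential model: reaches ~95% of the sill at the practical range a."""
    return sill * (1.0 - np.exp(-3.0 * np.asarray(h, dtype=float) / a))

def gaussian(h, sill, a):
    """Gaussian model: parabolic, very smooth behaviour near the origin."""
    return sill * (1.0 - np.exp(-3.0 * (np.asarray(h, dtype=float) / a) ** 2))

def nested(h, nugget, structures):
    """Nested model: nugget plus a sum of basic structures with positive sills."""
    return nugget + sum(model(h, sill, a) for model, sill, a in structures)

h = np.array([0.0, 500.0, 1000.0, 5000.0])
print(nested(h, nugget=0.1, structures=[(spherical, 0.6, 1000.0),
                                        (exponential, 0.3, 4000.0)]))
```

Summing structures with positive sills keeps the combined model positive definite, which is what makes nested models valid.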

Nested variogram structures. The variogram can reveal nested structures: hierarchical structures, each characterised by its own range and sometimes its own sill. In this case the variogram can be modelled as a sum of theoretical variograms with positive coefficients; the resulting variogram is positive definite as long as the individual models are positive definite. In geostatistical jargon this is the so-called "nested" variogram model:

γ_nested(h) = Σ_l λ_l γ_l(h),    λ_l ≥ 0

Another useful feature of the variogram is its symmetry:

γ(h) = γ(−h)

The variogram and the covariance are related as follows:

γ(h) = C(0) − C(h)

where C(0) = σ² is the global variance.


Figure App.14 Theoretical variogram model types


Figure App. 15 Example of fitting a 3D theoretical variogram model in Petrel. Top: minor direction XY. Bottom: vertical direction Z

Examples of variogram fitting using a commercial interactive graphical user interface are presented in Figure App. 15. The display shows, simultaneously, the experimental variogram computed from the real data (right window) and the theoretical variogram function (blue solid line) that best fits the sample variogram. The histogram shows the number of pairs in each lag from which the variogram is calculated. Information on the directional sector from which the variogram is computed is shown in the data location map (left window).



Major advantages of simulations:

• Provide multiple realisations of a function value at a given location
• Honour the sample data (conditional simulations)
• Reproduce statistical moments (variogram, mean, histogram)
• Model variability of the data

Interpolation vs. Stochastic Simulation

The main objective of interpolation is to provide the "best" local estimate z*(u) of each unsampled value z(u), without specific regard to the resulting spatial statistics of the estimates. In simulation, the resulting global features and statistics of the simulated values take precedence over local accuracy.

Stochastic simulation is the process of preparing alternative, equally probable, high-resolution models z_l(u) of the spatial distribution of z(u), each of which is a "good" representation of reality in some global sense.

The differences between these alternative models, or realisations, provide a measure of joint spatial uncertainty.

Stochastic Simulation Models:

• Gaussian-based (e.g. in petrophysical modelling)
• Indicator-based (e.g. in facies modelling)
• Annealing
• Boolean algorithms (object modelling)

14.1 Sequential Stochastic Simulations

Select a realisation from the conditional distribution function:

F(x1,...,xN; z1,...,zN |(n)) = Prob{Z(x1) ≤ z1,...,Z(xN) ≤ zN |(n)}

Decomposed into (after Chiles & Delfiner, 1999):

F(x1,...,xN; z1,...,zN |(n)) = F(xN; zN |(n+N−1)) × F(xN−1; zN−1 |(n+N−2)) × ... × F(x1; z1 |(n))

where F(xN; zN |(n+N−1)) is the conditional distribution function of Z(xN), conditioned on the n samples and the N−1 previously simulated values Z(xj) = z(xj), j = 1,...,N−1.

Using this factorisation, the random vector Z can be simulated sequentially by randomly selecting Z_i from the conditional distribution Prob{Z_i < z_i | z_1, z_2, ..., z_{i−1}} for i = M+1,..., N, and including the outcome z_i in the conditioning data set for the next step.

This procedure of decomposing a joint pdf into a product of conditional pdfs is very general and can be used for spatial random functions as well (recall that a spatial random function is a collection of random variables). It makes possible the construction of both non-conditional (M=0) and conditional (M>0) simulations. The same procedure can be applied to the co-simulation of several non-independent random functions. It produces simulations that match not only the covariance but also the spatial distribution. In general, the conditional distributions are not known; but for a Gaussian random function with known mean, the conditional distribution is Gaussian, with mean and variance obtained from simple kriging.

The algorithm of sequential simulation is schematically presented in Figure App. 16. It proceeds along the following steps:

1. Define a random path through all nodes of the estimation grid, visiting each node just once.

2. Select a node from the random path. Using the original data at the start (and subsequently all simulated values as well), make a kriging estimate with its uncertainty at the selected node.

3. Construct a local probability distribution at the selected node location.

4. Draw a random sample from the local cumulative distribution function. This is the simulated value, which becomes a new data point.

5. Having added the new data point, return to step 2 and continue until all nodes have been modelled.
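Steps 1-5 can be sketched for a 1D Gaussian variable with simple kriging (a toy illustration; the exponential covariance, its range and the data values are assumptions of ours):

```python
import numpy as np

def sk_mean_var(x0, xd, zd, cov, sill):
    """Simple kriging with zero mean: solve C w = c0 and return the
    estimate and kriging variance (the 'estimate with uncertainty')."""
    C = cov(np.abs(xd[:, None] - xd[None, :]))
    c0 = cov(np.abs(xd - x0))
    w = np.linalg.solve(C, c0)
    return w @ zd, sill - w @ c0

def sequential_simulation(xgrid, xd, zd, cov, sill, rng):
    """Steps 1-5: random path, krige, build the local (Gaussian) pdf,
    draw from it, and condition on the drawn value from then on."""
    xd, zd = list(xd), list(zd)
    sim = {}
    for x0 in rng.permutation(xgrid):                                  # step 1
        m, v = sk_mean_var(x0, np.array(xd), np.array(zd), cov, sill)  # step 2
        z = rng.normal(m, np.sqrt(max(v, 0.0)))                        # steps 3-4
        xd.append(float(x0)); zd.append(z)                             # step 5
        sim[float(x0)] = z
    return sim

cov = lambda h: 1.0 * np.exp(-3.0 * h / 10.0)  # exponential covariance, range 10
rng = np.random.default_rng(42)
sim = sequential_simulation(np.array([2.0, 4.0, 6.0]), xd=[0.0, 10.0],
                            zd=[1.0, -1.0], cov=cov, sill=1.0, rng=rng)
print(sim)
```

Re-running with a different random seed gives another, equally probable realisation conditioned on the same two data points.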

[Flow chart: choose a random location; estimate the value and its uncertainty (e.g. 30.1 ± 9); derive the local PDF; sample from the PDF; use the simulated value in the next steps.]

Figure App. 16 Sequential stochastic simulation algorithm


14.2 Sequential Indicator Simulations

The Sequential Indicator Simulation (SIS) algorithm can be applied to either categorical data (e.g. facies) or continuous data. Let us consider the simulation of categorical variables (Deutsch and Journel, 1998). By definition, a categorical spatial function consists of K mutually exclusive categories s_k, k = 1,...,K; at any location only one class can be present. Let I(x; s_k) be the indicator of category s_k, set to 1 if x ∈ s_k and zero otherwise. In practice, facies indicator variables can be constructed from log data using hard pre-defined cut-offs to distinguish between the facies (see Figure App. 17).

Direct kriging of the indicator variable I(x; s_k) provides an estimate/model of the probability that s_k prevails at location x:

Prob*{I(x; s_k) = 1 | (n)} = p_k + Σ_{α=1}^{n} λ_α [I(x_α; s_k) − p_k]

where p_k = E{I(x; s_k)} ∈ [0,1] is the marginal frequency of category s_k, inferred e.g. from the declustered proportion of data of type s_k (see the Annex for declustering details). The weights λ_α are given by the simple indicator kriging equations, using the indicator covariances of the corresponding classes.

When the average proportions vary locally, one can explicitly provide the simple
indicator kriging with smoothly varying local proportions.

The flowpath of sequential simulation of categorical variables, as implemented in GSLIB (see Deutsch and Journel, 1998), is as follows (illustrated in Figure App. 18):

1. Define a random path through all nodes of the estimation grid, visiting each node once.

2. Select a node from the random path, say at location x, and retain the conditioning data: the original data together with any previously simulated indicator values for the categories s_k.

3. At the node, perform indicator kriging followed by order relation correction to obtain the K estimated probabilities p_k(x|(n)), k = 1,...,K.

4. Define an ordering of the K categories, say 1,...,K. This ordering defines a cdf- (cumulative distribution function) type scaling of the probability interval [0,1] into K intervals.

5. Draw a random number p uniformly in [0,1]. The interval in which p falls determines the simulated category at location x.

6. The simulated value becomes an additional data point; return to step 2 and continue until all nodes have been modelled.
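Steps 3-5, a crude order relation correction followed by the cdf-type scaling of [0,1] and the categorical draw, can be sketched as (function name is ours):

```python
import numpy as np

def draw_category(probs, p):
    """Order relation correction (clip to [0,1], renormalise to sum 1),
    cdf-type scaling of [0,1] into K intervals, then return the category
    whose interval contains the uniform random number p."""
    q = np.clip(np.asarray(probs, dtype=float), 0.0, 1.0)
    q = q / q.sum()
    cdf = np.cumsum(q)
    return int(np.searchsorted(cdf, p))

# Three facies with kriged probabilities 0.5, 0.3, 0.2:
# p in [0, 0.5] -> facies 0, (0.5, 0.8] -> facies 1, (0.8, 1] -> facies 2
print(draw_category([0.5, 0.3, 0.2], 0.65))
```

In a full SIS loop, p would be drawn uniformly in [0,1] and the resulting category added to the conditioning data before the next node is visited.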


Thus, the indicator-based simulation algorithm can be viewed as a two-step procedure:

1) Simulate the class value;

2) Draw a simulated value from that class using a class distribution model (e.g. uniform, power, etc.).

Consequently, indicator simulations guarantee approximate reproduction only of the K class proportions and the corresponding indicator semivariograms, not reproduction of the cdf and semivariogram of the original continuous z-values. The actual approximation of one-point and two-point z-statistics by a sequential indicator realisation therefore depends on several factors: the number of thresholds, the information accounted for when performing indicator kriging, and the interpolation/extrapolation models used for increasing the resolution of the ccdf (conditional cumulative distribution function).

A typical indicator simulation result is a realisation of the facies distribution, as illustrated in Figure App. 19. A set of such equally probable realisations can be post-processed further to obtain statistical inferences, or used directly in further simulation steps as independent scenarios.

Figure App. 17 Indicator transformation with a single cut-off


[Flow chart: categorical data of types 0 and 1 with their global proportions; indicator kriging gives a local probability estimate (normalised to sum to 1); the pdf is sampled at random to get the corresponding category value; the simulated value is used in the next steps.]

Figure App. 18 Sequential Indicator simulation algorithm

Figure App. 19 Facies distribution (2D slice) from Sequential Indicator simulations


14.3 Sequential Gaussian Simulations

Sequential modelling is a general approach to conditional stochastic simulation. Gaussian random function models are widely used in statistics and simulation due to their analytical simplicity. In this discussion of sequential simulations we shall limit ourselves to Sequential Gaussian Simulation.

Sequential Gaussian simulation methodology consists of several steps:

1. Determine the univariate cdf F_Z(z) representative of the entire study area, not only of the z-sample data available. Declustering may be needed: if the original data are clustered, the declustered sample cdf should be used for both the normal score transform and the back transform (see Annex). As a consequence, the unweighted mean of the normal score data is not zero, nor is the variance one; in this case the normal score covariance model should first be fitted to these data and then renormalised to unit variance.

2. Using the cdf F_Z(z), perform the normal score transform of the z-data into y-data with a standard normal cdf, as illustrated in Figure App. 20.
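The normal score transform and its back transform can be sketched with a rank-based mapping (a minimal illustration using the standard library NormalDist; it ignores ties and declustering weights):

```python
import numpy as np
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution

def normal_score(z):
    """Rank-based normal score transform: map each z to the standard
    normal quantile of its adjusted cumulative frequency (rank+0.5)/n."""
    z = np.asarray(z, dtype=float)
    ranks = np.argsort(np.argsort(z))          # 0..n-1 (ties not handled)
    p = (ranks + 0.5) / z.size                 # strictly inside (0, 1)
    return np.array([_N.inv_cdf(pi) for pi in p])

def back_transform(y, z_sorted):
    """Back transform by quantile lookup: the Gaussian cdf value of y is
    mapped to the corresponding quantile of the original distribution."""
    p = np.array([_N.cdf(yi) for yi in np.atleast_1d(y)])
    return np.quantile(z_sorted, p)

z = np.array([3.1, 0.2, 7.5, 1.4, 2.2])
y = normal_score(z)                    # zero-mean, symmetric normal scores
print(y)
print(back_transform(y, np.sort(z)))   # values back within the original range
```

The back transform interpolates between sample quantiles, so simulated normal scores always map back into the range of the original data unless tail extrapolation is added.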

Figure App. 20 Normal score transform, y = φ⁻¹(z) (Deutsch & Journel, 1998)

3. Check for bivariate normality of the normal score values. There are different ways to check for bivariate normality of a data set whose histogram is already normal. The most relevant check directly verifies that the experimental bivariate cdf of the data pairs is indeed standard bivariate normal with the given covariance function; an analytical relation links this covariance with any standard bivariate normal cdf value (see Deutsch and Journel, 1998).

4. Assuming a multivariate Gaussian random function model for the normal score transformed variable, the local conditional distribution is normal, with mean and variance obtained by simple kriging. Stationarity requires that simple kriging (SK) with zero mean be used. If there are enough conditioning data to consider inference of a non-stationary random function model, it is possible to use moving window estimation with ordinary kriging (OK) and re-estimation of the mean; but in any case the SK variance should be used as the variance of the Gaussian conditional cumulative distribution function. If there are enough conditioning data it may also be possible to keep the trend as it is.


5. The flowpath of sequential Gaussian simulation, schematically demonstrated in Figure App. 21, is:

a. Define a random path through all nodes of the estimation grid, passing through each node once. At each node x, retain a specified number of neighbouring conditioning data, including both original y-data and previously simulated grid node y-values.

b. Use simple kriging (SK) with the normal score variogram model to determine the parameters (mean and variance) of the ccdf of the random function Y(x) at location x.

c. Draw a simulated value y_l(x) from that ccdf.

d. Add the simulated value y_l(x) to the data set.

e. Return to step b and compute the next simulated value, until all nodes are simulated.

6. Back transform the simulated normal values y_l(x) into simulated values z_l(x) of the original variable.

Gaussian models are theoretically consistent models. The Gaussian approach is also related to maximum entropy and, correspondingly, to maximum "disorder" in the data. It is perhaps not the best choice when spatial correlations between extremes are of special interest; one alternative is a nonparametric model such as indicator-based simulation.


[Flow chart: transform the data to a normal distribution; compute the kriging estimate and error; build a local normal distribution with the estimated mean and variance; sample from the Gaussian PDF; use the simulated value in the next predictions; back transform the simulated values.]

Figure App. 21 Sequential Gaussian simulation algorithm

Sequential Gaussian simulation can generate a set of equally probable realisations of a 3D field, such as a petrophysical property (e.g. porosity). Several realisations are presented in Figure App. 22; although different, the realisations share the same global distribution and spatial correlation. The differences between the realisations characterise the variability and uncertainty in the model. Further post-processing of the realisations can be carried out to obtain statistical inferences such as the spatial distribution, mean, variance, p-quantiles, etc.


Figure App. 22 Simulated Sequential Gaussian realisations of porosity (2D slices)

Differences between the simulated realisations can also be quantified using variograms. A variogram is computed for each simulated petrophysical property field and compared with the one based on the initial data (Figure App. 23). The spatial correlation from the simulations is expected to exhibit the same features as that of the initial data, but with some scatter due to stochastic variation. Note that Gaussian simulation is a maximum entropy algorithm, which leads to maximum differences (disorder) between the realisations.






Figure App. 23 Variograms of the simulated realisations of porosity vs. the "truth case" variogram

Figure App. 24 Full oil production totals resulting from 50 simulated realisations of the petrophysical properties vs. the "truth case" production scenario

When the different simulated petrophysical properties are fed into a dynamic simulator, the modelled outputs vary from realisation to realisation, as shown in Figure App. 24, where 50 cumulative oil production curves are plotted. A histogram of the cumulative oil production over the realisations is shown in Figure App. 25, reflecting the range and uncertainty that come from the modelling, which is valuable in reservoir management.





Figure App. 25 Distribution of the final full oil production total resulting from 50 simulated realisations of the petrophysical properties vs. the "truth case" production solution

Annex. Declustering

Clustering (preferential sampling) of monitoring networks has a significant influence on spatial data analysis and modelling. A simple example of how preferential sampling influences the estimation of the mean value of a 1-dimensional function is presented in Figure App. 26. The true mean value of the function is 0.5. The function was sampled twice: 1) with preferential (clustered) sampling in the regions of high values, and 2) with preferential sampling in the regions of low values. In the former case the calculated mean value is overestimated (1.41); in the latter it is underestimated (−0.5). The same effect is observed when estimating the variance and, more generally, histograms/distribution functions. Clustering also influences structural analysis, i.e. spatial correlation analysis (variography), which is a central part of geostatistics.

Figure App. 26 1-dimensional example of preferential sampling. True mean value = 0.5 (median = 0.49, min = −5.8, max = 5.3, variance = 4.65); overestimated mean with preferential sampling in high-level regions (filled circles) = 1.41; underestimated mean with preferential sampling in low-level regions (open circles) = −0.5

Thus, as a result of clustered monitoring networks, the collected data sets are not
“representative” - they do not reflect the “true patterns”. The objective of declustering
is to recover the representative part of the information, taking into account both the
clustering effect and the preferential sampling. The most straightforward way to do
this is to apply a weighting procedure to the raw data: the raw data are multiplied by
weights, and the weighted data are then used as a “representative” data set. For example,
the declustered mean and variance can be estimated as follows:
    Zm = ∑ᵢ₌₁ⁿ Zᵢ ωᵢ

    Var(Z) = ∑ᵢ₌₁ⁿ (Zᵢ − Zm)² ωᵢ

where ωᵢ are the declustering weights, normalised so that they sum to one.

Optimal declustering weights are defined as the weights that provide the most
representative histogram (Journel and Deutsch, 1998).
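The two estimators above can be sketched in a few lines of Python; the data values and weights here are invented for illustration.

```python
def declustered_stats(z, w):
    """Declustered mean and variance: the weights are normalised to
    sum to one, then applied to the weighted-mean and weighted-variance
    formulas above."""
    s = sum(w)
    w = [wi / s for wi in w]                        # sum(w) == 1
    z_m = sum(zi * wi for zi, wi in zip(z, w))      # declustered mean
    var = sum((zi - z_m) ** 2 * wi for zi, wi in zip(z, w))
    return z_m, var

# Three near-duplicate clustered high values plus one isolated low value;
# the isolated sample is up-weighted so each region counts equally.
mean, var = declustered_stats([2.0, 2.1, 1.9, 0.0], [1.0, 1.0, 1.0, 3.0])
```

With equal weights the estimators reduce to the ordinary (naive) mean and variance.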

There are several approaches to calculate these weights.

1. Random declustering.
2. Cell-declustering
3. Voronoi polygons
4. Kriging weights

The cell-declustering method was proposed by Journel (1983). The idea is to cover
the spatial domain with a regular grid, trying different cell sizes, and to apply
equal weighting within each grid cell, followed by averaging of the cell means (see
Figure App. 27). Equal weights are assigned to all data within a cell, inversely
proportional to the number of data it contains. The overall mean depends on the cell
size: the minimum of the mean corresponds to the optimal cell size when preferential
sampling is in the high-value regions, and the maximum otherwise. This is a fast and
efficient method that uses all the data.
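As a minimal sketch of the idea (the coordinates and cell size are invented, and no search over cell sizes is performed here):

```python
from collections import Counter

def cell_decluster_weights(xy, cell):
    """Cell-declustering weights for 2-D sample coordinates xy:
    each sample is weighted by 1/(number of samples in its grid cell),
    and the weights are then normalised to sum to one."""
    cells = [(int(x // cell), int(y // cell)) for x, y in xy]
    counts = Counter(cells)
    raw = [1.0 / counts[c] for c in cells]
    s = sum(raw)
    return [r / s for r in raw]

# Three clustered samples sharing one cell, plus one isolated sample.
w = cell_decluster_weights([(0.1, 0.1), (0.2, 0.3), (0.3, 0.2), (5.0, 5.0)],
                           cell=1.0)
```

In practice the declustered mean is computed for a range of cell sizes, and the size that minimises (or maximises) the mean is selected, as described above.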


Figure App. 27 Principle of cell declustering.



REFERENCES

1. Chilès, J.-P. and Delfiner, P. (1999). Geostatistics: Modeling Spatial Uncertainty. John Wiley & Sons Inc.

2. Deutsch, C.V. (2002). Geostatistical Reservoir Modeling. Oxford University Press.

3. Deutsch, C.V. and Journel, A.G. (1998). GSLIB: Geostatistical Software Library and User’s Guide. New York, Oxford University Press.

4. Journel, A.G. (1983). Nonparametric estimation of spatial distributions. Mathematical Geology, V.15, pp. 445-468.

5. Goovaerts, P. (1997). Geostatistics for Natural Resources Evaluation. New York, Oxford University Press.

6. Kanevski, M. and Maignan, M. (2004). Analysis and Modelling of Spatial Environmental Data. Dekker/PPUR, 304 pages, CD-ROM included.

7. Pannatier, Y. (1996). VARIOWIN: Software for Spatial Data Analysis in 2D. Springer-Verlag, New York, NY.


Geomodelling Workflow T W O


1. INTRODUCTION
2. REASONS FOR GEOMODELLING AND RESERVOIR CHARACTERIZATION
4. HETEROGENEITIES, MEASUREMENTS AND
5. GEOMODELLING WORKFLOW
5.1 Introduction
5.2 Data Input
6. STRUCTURAL MODELLING
6.1 Seismic Interpretation
6.2 Tectonics and Structural Style
7.1 Sequential Stratigraphic Correlations
7.2 Flow Units
7.3 Correlation Techniques
8.1 Fault Representation
9. PETROPHYSICAL GROUPS AND ROCK TYPING
12.1 Depositional Environment
12.2 Facies Proportions
13.1 Pixel Based Models
13.2 Object Based Models
14.1 Net Pay Cut-Offs
14.2 Formation Evaluation and QC
14.3 Porosity and Permeability Law
14.4 Net to Gross and Porosity Modelling
14.5 Permeability Predictor and Horizontal Permeability Modelling
14.6 Vertical Permeability Modelling
14.7 Reservoir Characterisation From Seismic
14.8 Water Saturation Modelling
15.1 Variables and Evolution of Variability
15.2 Variograms and Kriging
16.1 Sequential Indicator Simulation (SIS)
16.2 Sequential Gaussian Simulation (SGS)
16.3 Sequential Truncated Gaussian Simulation
16.4 Collocated Kriging
16.5 Multi-Point Simulation
18. UPSCALING
18.1 Averaging and Flow Based Tensor Upscaling Technique
18.2 The Geopseudo Method
19. DYNAMIC SIMULATION
19.1 Finite Difference and Streamline Modelling
19.2 Single and Dual Porosity Modelling
19.3 Dynamic Simulator Input
19.4 Dynamic Simulator Output and Results



Having worked through this chapter the student should be able to:

• Understand the main reasons for and objectives of geomodelling.

• Understand the workflow involved in constructing a geomodel (from data input to dynamic simulation).

• Understand data integration and the fundamentals of each step in the geomodelling workflow.

• Apply data analysis and geostatistics in constructing a geomodel.

• Appreciate the uncertainty in geomodelling.

• Understand the interface between geomodelling and flow simulation.



This chapter concentrates on the workflow of geomodelling. Since the 1990s,
commercial computer-based geomodelling packages have become widely available for
constructing geomodels, and they contain built-in suites of powerful analytical,
geostatistical and visualisation tools to assist in the construction. The intention
in this chapter is to ensure that students understand the fundamentals of each step
of the workflow, rather than to serve as a substitute for software user manuals.

Keeping in mind the eight cardinal rules of geomodelling will help ensure the success
of any geomodelling exercise.

1) The model must be coherent with the structural framework, the geology (sedimentology, diagenesis etc.) and the dynamic data, and be suited to address the key objectives of the modelling.

2) Capture the key heterogeneities.

3) Establish the key objectives of the modelling and design the model to meet these (fit for purpose).

4) The grid size must be linked to the required degree of detail one is trying to capture and the problems that the modelling is addressing.

5) The complexity of the model should be linked to the amount of data and the degree of heterogeneity in the reservoir.

6) The model cannot be a precise representation of the reservoir, but it should be representative of it.

7) Upscaling always degrades input data; therefore avoid very fine geological grids, which subsequently need to be upscaled to a manageable dynamic grid.

8) Ensure that your model accurately reproduces the dynamic results (production rates and well test results).

To ensure that the student grasps the key aspects of geomodelling, the chapter is
complemented by a tutorial run in parallel to this course, where the student will build
a simple geomodel and test it in a flow simulator.



In its simplest definition, geomodelling is the spatial representation of porosity and
permeability in a reservoir, ensuring that all reservoir characteristics, including
heterogeneities and connectivity, are accurately represented. The original purpose
of a geomodel was to serve as input to a flow simulator for assessing reservoir
flow behaviour. However, with the advent of greater computing power and graphic
capabilities, geomodels are now routinely constructed to gain better geological
insight into reservoirs. Geomodels can also be used for computing OOIP or IGIP, but
this is not their primary use, since OOIP and IGIP can in most cases be estimated
quite accurately using much simpler techniques.

Because geomodels are used as the primary input for flow simulators, it is imperative
that all heterogeneities and reservoir characteristics likely to impact flow are accurately
represented or captured in the geomodel. Geomodelling and flow simulation are
therefore key tools in determining the optimal development of a reservoir and its
management, in order to recover the maximum amount of hydrocarbons by the most
efficient, safe and economic means. Flow simulation will allow production profiles
to be modelled and assess the impact of key uncertainties and reservoir management
strategy using “what if” scenarios. Geomodelling and flow simulation are used in
all stages of field development and management in order to obtain the following:

Initial Field Development

• Create Base Case
• Estimates of OOIP
• Field development strategy
• Optimal injector & producer numbers and emplacement
• Sector or phenomenological models
• Completion strategy
• Production strategy
• Production profiles (oil, gas and water)
• Estimates of Reserves
• Input for economic evaluation
• Identify and quantify key uncertainties
• Reservoir management

Producing and Mature Field

• Improve & update Base Case
• Better integration of well tests, lab measurements etc
• History matching (production profiles, GOR, pressures, etc)
• Identify zones of bypass or attic hydrocarbons
• Optimise production and recover bypass or attic hydrocarbons
• Variation in support via injection (water, miscible/ immiscible gas, WAG)
• Better understanding of the field
• Additional wells and/ or modify well patterns
• Well design (vertical versus horizontal, completions)
• Update production profiles and economics
• Identify areas of possible additional work


Fluid flow through a porous medium is governed by three fundamental laws of physics:

i) Conservation of Mass (mass is neither created nor destroyed)
ii) Conservation of Momentum (the rate of change of momentum of a fluid depends
on the force applied)
iii) Conservation of Energy (energy is neither created nor destroyed)

Mathematically, computing fluid flow in a porous medium requires solving a set of

partial differential equations, solving for:
i) Conservation of Mass:

    div(ρV) + q = ∂(φρ)/∂t

ii) Darcy’s Law:

    V = −(K/μ)(∇P − ρg)

There are no analytical solutions to these partial differential equations, but they can
be solved by a numerical approach.

To do this (Figure 1), the equations must be:

• Discretised by dividing space into grid cells.
• Solved incrementally in time steps.

For a cell (i), conservation of mass:

    ∑ₑ Q_i,e + q_i = Δm_i/Δt = V_i · Δ(φρ)_i/Δt

The Darcy equation gives the flux between cells:

    Q_i,e = (KS/L)_i,e · (k_r ρ/μ)_i · (P_i − P_e + ρg ΔZ_i,e)

Figure 1 Principles of fluid flow in a gridded system.

The discrete equations are then solved sequentially in time steps for each grid cell.
In multiphase fluid flow there are three types of flow mechanism. These are:
• Viscous Flow
• Capillary Imbibition
• Gravity


All three are solved in the Darcy equation and modelled in a flow simulator.

The oil rate can be written in fractional-flow form, with the viscous, capillary and
gravity forces appearing as separate terms:

    Q_o = (Q_t + K·A·(k_rw/μ_w)·[∂P_c/∂x − Δρ·g·sinθ / 1.0133×10⁶]) / (1 + (k_rw/μ_w)·(μ_o/k_ro))

Here the Q_t term represents the viscous forces, the ∂P_c/∂x term the capillary forces
and the Δρ·g·sinθ term the gravity forces (the 1.0133×10⁶ factor is a unit conversion).
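A hedged sketch of evaluating the fractional-flow expression above, here with the capillary and gravity terms set to zero (horizontal flow, negligible ∂P_c/∂x); all input values are invented for illustration.

```python
def oil_rate(q_t, krw, kro, mu_w, mu_o, K=0.0, A=0.0,
             dpc_dx=0.0, gravity=0.0):
    """Q_o = (Q_t + K*A*(krw/mu_w)*(dPc/dx - gravity term)) /
             (1 + (krw/mu_w)*(mu_o/kro)), as in the equation above."""
    num = q_t + K * A * (krw / mu_w) * (dpc_dx - gravity)
    den = 1.0 + (krw / mu_w) * (mu_o / kro)
    return num / den

# Equal relative permeabilities and oil twice as viscous as water, so
# the more mobile water takes two-thirds of the total rate and oil
# one-third.
q_o = oil_rate(q_t=100.0, krw=0.3, kro=0.3, mu_w=1.0, mu_o=2.0)
```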


Heterogeneities are spatial variations of reservoir characteristics and petrophysical

properties that affect fluid flow. Heterogeneities and spatial variation in properties
are controlled by geological variations both in space and time.

Heterogeneities occur over a very wide range of scales, from the micron scale to the
kilometre scale, as shown in Figures 2 and 3.

1 Sealing fault
Partially sealing fault
Non-sealing fault

2 Sedimentary units

3 Permeability zonation
in a sedimentary unit

4 Baffles in a sedimentary unit

5 Cross lamination/ bedding

6 Micro-heterogeneities

7 Fractures (open or closed)

(Weber, 1986)

Figure 2 Heterogeneities and scale.

Figure 3 Heterogeneities and scale.

Heterogeneities at all scales have some impact on flow. The difficulty is estimating
how the heterogeneities at the varying scales sum up to affect flow at the field scale,
and the ideal gridding size of the reservoir needed to capture the heterogeneities
which have the greatest impact on flow.

Quantifying heterogeneities is restricted by the fact that measurements made in the

subsurface reservoir are limited to two main scales:
• Wells
• Whole field (seismic, but also dynamic data such as production history, well tests etc.)

For each scale of measurements you must remember:

• Radius of investigation
• Resolution of the tool and its uncertainty (accuracy and repeatability)
• For Well measurements:
Limited radius of investigation
Good vertical resolution
How representative is the measurement


• For Surface Measurements (Seismic)

Wide radius of investigation
Low vertical resolution

It is therefore important when modelling to remember the scale of the measurements
and the resolution of the parameters used as input for building a geomodel. Figure
4 illustrates the typical scale and resolution of the measurements used in modelling
and the orders of magnitude difference between them.

Figure 4 Resolution and scale of the different types of information and data.

There are six main types of heterogeneities which affect horizontal and vertical
continuity in a reservoir:

Horizontal Continuity:
• Lateral continuity of reservoir/ flow units (facies changes, faults etc)
• Horizontal anisotropy of matrix properties
• High permeability horizontal drains in the matrix
• Fractures

Vertical Continuity:
• Vertical anisotropy of matrix properties
• Vertical barriers or baffles
• Fractures


To a certain extent, well tests, outcrop analogues and greater numbers of wells
(especially if closely spaced) may fill some information gaps. However, the key
to closing the gaps and populating geomodels with representative petrophysical
properties remains conceptual models based on sound geology (sedimentary
environment, tectonics and diagenesis).

The art of geomodelling is, in essence, the allocation of the correct petrophysical
properties and reservoir characteristics to each individual cell in the model, using
the wells as anchor points and interpolating between them (Figure 5). This exercise
must integrate all available data and ensure that the model is internally coherent
between the static and dynamic data.
Figure 5 Geomodelling: assign petrophysical properties to individual grid cells at
the well locations, based on integrated well data (logs, core and well tests: Kx, Ky,
Kz, kr, Pc), then extrapolate the petrophysical properties in 3D into the inter-well
space.


5.1 Introduction
The geomodelling workflow is illustrated in Figure 6 and entails the following main stages:
• Structural Modelling (horizons and fault mapping)
• Reservoir correlation and zonation
• Cell gridding (orientation, size)
• Facies modelling
• Petrophysical modelling (NTG, Phi, Kh, Kv, Sw)
• Upscaling and transfer to flow simulator


Figure 6 Geomodelling workflow:
• Structural framework building: top and intra-reservoir mapping (wells & seismic); fault interpretation and modelling; tie to wells (correlations & layering).
• Reservoir correlation & zonation: layering.
• Grid design: cell gridding.
• Facies modelling: lithofacies from core data; log typing and rock type at wells; stochastic approach to populate between wells; conditioning to geological concepts, seismic etc.
• Petrophysical property modelling: NTG, Phi, Kh, Kv/Kh; data analysis; permeability predictor; stochastic approach to populating; rock type.
• Upscaling & export to the dynamic simulator.

When constructing a geomodel the following key points must be addressed:

• Geometry of the reservoir (trap, closure, compartments etc)
• Distribution of hydrocarbons (Gas cap, OWC, aquifer)
• Main fluid flow directions
• Spatial arrangement of principal drains
• Spatial arrangement of barriers and baffles
• Spatial porosity and permeability distribution
• Relationship between geology and distribution of petrophysical properties
• Boundary Conditions

5.2 Data Input

Before starting any geomodelling exercise, the compilation and QC of the input
data must be carried out. The data sources are twofold: static and dynamic. Both
must be used and integrated in the construction of the geomodel, ensuring coherence
between the two datasets.

In the case of static data, one commonly refers to hard and soft data, the former
being for the most part direct measurements or computations from direct measurements,
while soft data refers to inferred information which may be more tenuous and/or
subjective, such as a geological model or seismic attributes. The various data types,
together with their usage and/or necessary QC, are summarised in Table 1 below.


Seismic (2D and/or 3D) - Structural maps and faults.
- Quality, processing, static corrections, multiples?
Well seismic (VSP) - Velocity model, seismic calibration
Checkshots - Synthetic seismograms
Velocity calibration - Depth conversion
Wells - Coordinates, KB, deviation surveys etc
- Well location in the field, TD etc
Cores & core description/analysis - Facies description, depositional environment,
- Core Analysis (plug poro-perm & SCAL)
- Mineralogy
- Hydrocarbon shows
- Ensure depth shifting onto wireline logs
Mudlogs - Rock description and mineralogy
- Hydrocarbon shows and kicks
- Mud losses, pore pressure, etc.
Sequence stratigraphic interpretation - Correlation and Zonation
- Biostratigraphy and/or Chemostrat
Wireline logs - Petrophysical properties, log typing
- Formation evaluation & Sedimentary environments
Dipmeter and Image Logs - Structural interpretation and coherence with
structural maps
- Fault identification
- Fractures
- Sedimentary features and environments
Geological Model - Depositional environment, Tectonic history,
diagenetic evolution etc
- Core description and wireline information
Seismic Attributes - Inversion, Attributes, AVO etc
- Proper calibration to geology?

Well Tests, - Permeability height, barriers, connectivity, fluid
type, flowrates, reservoir pressure
PLT’s - Producing intervals and contribution
RFT - Reservoir pressure, pressure gradient, OWC and GOC
Production History - Reservoir behaviour (pressure changes), material
balance, production profiles, water-cut, GOR
- Calibration for static and dynamic model.
Fluid Samples - PVT data for hydrocarbons.
- Formation water sample

Table 1 Data source for geomodelling.

The data used in the modelling must always be QC’ed and validated, and its internal
coherence checked, prior to building a static or dynamic model. For example, does
the OWC seen on the RFT coincide with the formation evaluation, the mapped structural
closure and the well tests?

Typically, in the case of well data, a composite log montage showing: wireline logs,
formation evaluation results, core intervals and description, core analysis results
(plug poro-perm results), perforation intervals, formation test intervals, stratigraphic
and/ or seismic markers, pressure data, biostrat zonation etc, would be generated for
each well, to allow a full QC.



6.1 Seismic Interpretation

Structural interpretation in the subsurface is for most part based on seismic interpretation,
although onshore, surface geology is sometimes used or integrated with the seismic to
map the reservoir structure. Seismic markers or events are tied to well markers using
synthetic seismograms. The reliability of the picking is subject to its coherence and
amplitude strength on the seismic, which in turn depends on the impedance contrast at
the marker. Weak or highly variable impedance contrasts, together with other factors
such as tuning effects, or correlating events across major faults, all contribute to
uncertainty in the seismic picking and the resultant TWT mapping. Imaging remains
one of the key objectives in seismic, and with the advent of 3D seismic over the past
20 years, huge improvements in imaging and resolution compared to standard 2D
acquisition have been achieved. Still, imaging quality remains largely dependent on
the acquisition and processing parameters used on the data. However, geomodelling
requires depth maps, which means the TWT maps generated from seismic must be
depth converted, bringing another level of uncertainty above that attributed
directly to the seismic interpretation itself.

As a general rule, depth conversion is done using interval velocities between seismic
markers (Figure 7). The wells themselves are the primary source of interval velocities
and act as anchor points in depth conversion. Velocities in rocks vary widely depending
not only on their mineral constituents (Figure 8), but also on their depth of burial,
porosity and the type of fluid in the pore space. This means velocity models can vary
quite significantly from well to well and in the space between them. Velocities used
in depth conversion come from acoustic logs, well check-shot surveys, as well as
stacking and migration velocities computed during seismic processing.
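A minimal layer-cake depth-conversion sketch, assuming constant interval velocities between markers; the TWT picks and velocities below are invented for illustration.

```python
def depth_at_base(twt_picks, v_int):
    """Layer-cake depth conversion: each layer contributes
    v_interval * (delta TWT / 2), the factor of 2 converting
    two-way time (s) to one-way travel time."""
    depth = 0.0
    for (t_top, t_base), v in zip(zip(twt_picks, twt_picks[1:]), v_int):
        depth += v * (t_base - t_top) / 2.0
    return depth

# TWT picks (s) at datum, top reservoir and base reservoir, with the
# interval velocities (m/s) of the two layers between them.
z = depth_at_base([0.0, 1.0, 1.5], [2000.0, 3000.0])
```

At the wells the computed depths are compared with the drilled marker depths, and the velocity model is adjusted so that the wells act as anchor points.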


Figure 7 Seismic depth conversion. Interval velocities are often adjusted for
compaction, and the maps are ground-truthed at the well locations, so the number and
spatial distribution of wells reduces the uncertainty. Depth conversion rotates fault
planes; validate the fault-plane dips against dipmeter logs.


Figure 8 Depth conversion and rock velocities. P-wave velocities for common
lithologies span a wide range: from a few hundred m/s in alluvium and dry sand,
through roughly 1500-2000 m/s in wet and argillaceous sands and 3200-3600 m/s in
porous limestone, to as much as 7000 m/s in tight limestone and 3300-5500 m/s in
compact clay and shale.

It is important to remember that depth conversion causes migration effects in the
seismic image, with the structural dips of horizons and fault planes likely to rotate
after depth conversion. This means that the structural dips from the horizon map,
and the fault-plane dips where wells pass through fault planes, must be coherent
with the dipmeter data of the wells.

Figure 9 schematically illustrates the range of structural uncertainty due to the seismic
picking and/or depth conversion error and how this affects the bulk rock volume
(BRV) computed from that structure.


Figure 9 Uncertainty in structural interpretation from seismic: uncertainty in both
the seismic interpretation and the depth conversion defines an uncertainty envelope
around the reference map (including fault-position uncertainty), and a distribution
of BRV is obtained for realizations drawn within that envelope.

Seismic interpretation and mapping must not only be coherent with the geological
setting (extensional versus compressional regime) but with all available information.
For example, mapped closure or spill point must be consistent with the OWC identified
from formation evaluation or RFT data at the wells. Sequence boundaries being time
lines, these must equate with mappable seismic markers and in some cases are key
inputs for correlations between wells.

Besides structural imaging, seismic can give additional information on facies,

porosity, fluid type and even fluid contacts (OWC, GOC and GWC) from DHI (direct
hydrocarbon indicators) such as flat spots.

Velocity anisotropy detected during seismic acquisition can sometimes be used to
determine the preferred fracture orientation in fractured reservoirs.

6.2 Tectonics and Structural Style

The seismic interpretation of a hydrocarbon accumulation must by definition reflect
some kind of trap mechanism and structural style. These include: anticline, faulted
anticline, tilted fault blocks, fault traps (Figure 10) or combined traps. Stratigraphic
traps are typically more difficult to identify from seismic, unless they include direct
hydrocarbon indicators.


Figure 10 Fault traps and closures (after Bailey and Stoneley, 1981). Closure depends
on whether the beds dip with or against the fault and on whether the throw is greater
or smaller than the bed thickness, giving unlimited, limited or no closure in the
different cases. Assumptions: shale against sand is sealing; sand against sand is
non-sealing.

The interpretation of structural style must be coherent with the known regional
tectonic regime of the basin. Extensional basins are characterised by normal faults
and tilted fault blocks, while reverse faults occur in compressional regimes and flower
structures typically in wrench systems.

Fault throw interpretation is important as it affects reservoir juxtaposition and
therefore flow across the fault. Fault behaviour ranges (in a continuum) from acting
as an impermeable barrier at one extreme (reservoir sections faulted against
non-permeable intervals) to being completely transparent, or even enhancing
permeability, at the other. Transmissibility across a fault can be estimated using
an Allen diagram (Figure 11), where the juxtaposition of permeable and non-permeable
intervals is mapped across the fault surface, while techniques such as the Shale
Gouge Ratio (SGR) or Shale Smear Factor (SSF) can be used to compute the loss of
transmissibility across a fault as a function of both the vertical displacement of
the fault and the shale content in the reservoir.
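A hedged sketch of one common definition of the Shale Gouge Ratio, the throw-weighted average shale fraction of the beds that have slipped past a point on the fault; the bed thicknesses and Vsh values below are invented.

```python
def shale_gouge_ratio(beds, throw):
    """SGR (%) = 100 * sum(Vsh_i * dz_i) / throw, summed over the
    interval (of thickness equal to the throw) that has slipped past
    the point of interest on the fault."""
    assert abs(sum(dz for dz, _ in beds) - throw) < 1e-9
    return 100.0 * sum(dz * vsh for dz, vsh in beds) / throw

# 50 m of throw made up of 20 m clean sand (Vsh = 0.1),
# 10 m shale (Vsh = 0.9) and 20 m silty sand (Vsh = 0.4).
sgr = shale_gouge_ratio([(20.0, 0.1), (10.0, 0.9), (20.0, 0.4)], throw=50.0)
```

Values of SGR above roughly 15-20% are commonly quoted as indicating significant sealing potential, though any cut-off should be calibrated locally.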

The sealing capacity of a fault can, however, be determined from well tests (Figure
12). Differing OWCs in adjoining fault compartments are indicative of sealing faults,
and differing pressure responses in adjoining fault compartments are likewise
indicative of sealing or partially sealing faults.


Figure 11 Fault sealing and the Allen diagram (Badleys, 2004). The Allen diagram
identifies reservoir-on-reservoir connections across a fault and assumes some
transmissibility where there is sand-on-sand contact. Clay smearing and sealing are
quantified using the Shale Smear Factor and the Shale Gouge Ratio. Dynamic modellers
allow transmissibility multipliers between cells to model barriers or baffles.

Figure 12 Fault sealing and transmissibility barriers - recognition by well testing:
a partially communicating fault inhibits lateral flow, while a sealing fault prevents
it entirely; each leaves a characteristic signature in the pressure response of the
active well.


Remember that sub-seismic faults (faults below seismic resolution) are also likely
to be present in faulted reservoirs. These faults can, however, sometimes be identified
directly in wells from cores and/or image and dipmeter logs (Figures 13 and 14).

Faults and compartmentalisation in a field have a major impact on flow and therefore
on its development and management:
• Limits of reserves (fault bounded structures)
• Connectivity and therefore production management
• Number of wells (producers & injectors)
• Sweep efficiency & pressure maintenance
• Secondary gas cap
Figure 13 Dipmeter and image logs - fault recognition (Lawrence et al., 2002).
Detail of fault zones from dipmeter and image logs: an example of seismic-scale
normal faults with throws of 20-60 m (45-60 m of section cut out at the wells), often
accompanied by bedding rotation and damage zones near the fault plane. Major faults
commonly show up as zones of bad hole. Note that dipmeter and image logs are often
the only tools able to detect sub-seismic faults.


Figure 14 Sub-seismic fault recognition from dipmeter logs (Cowan et al., 1993).
The fault plane dip is picked on the dip-orientation track, with drag on the hanging
wall and on the footwall visible in the dip patterns on either side of the fault zone.

In faulted reservoirs, great care must be taken where sections are repeated
(compressional settings such as thrusts) or lost (extensional settings such as
pull-apart basins) as a result of faulting. If these are not accounted for, reservoir
sections may wrongly be inferred to thicken or thin between wells.

This is illustrated in Figure 15 where Well B has a loss of section due to a normal
fault. Were it not for the fact that a fault was identified both on seismic and from the
dipmeter log in the well, the reservoir section would have been modelled as thinning
towards Well B, rather than being essentially isopach between wells A and C.


Figure 15 Faulting and loss of section: on the neutron/density correlation between
Wells A, B and C, Well B shows an apparent thinning of the bed due to faulting.


Figure 16 Correlation - changes in thickness from slumping. Slumping is identified
from dipmeter and image logs; the example is from a shale section but is equally
valid for sandstones. Slumping can explain abnormal bed-thickness variations and may
explain anomalous flow behaviour.

Note also that slumping, such as that identified from image and dipmeter logs in the
example in Figure 16, may result in an apparent thinning or thickening of the section
at a well, which may be a localised feature rather than a general trend.

On a final note regarding faults and tectonic regime interpretation: be careful.
Figure 17 shows how a repeat section may be encountered in a deviated well drilled
in an extensional setting with normal faults, while Figure 18 illustrates how the
same structural map may represent quite different tectonic regimes.


Figure 17 Normal faulting and repeat section: a repeat section does NOT always imply
thrust or reverse faults.

Figure 18 Tectonic history: the same structural map can result from right strike-slip,
a normal fault with erosion, a reverse fault with erosion, or left strike-slip with a
normal fault.

Faults and fractures are closely associated. Fractures are a fundamental consequence
of rock deformation (post-yield) controlled by the tectonic regime. Fractures are key
contributors to flow in many reservoirs, such as many of the supergiant carbonate fields
in the Middle East. Fracture distribution is typically stratabound within mechanical
units of competent rock separated by more ductile incompetent rocks such as claystones
and shales. Fracture density and distribution depends on several factors:

• Position on fold (curvature, hinges, etc.)

• Lithology type and rheology
• Porosity
• Diagenesis (including dolomitization)
• Bed thickness
• Disseminated and layered clays and shales
• Confining pressure

Flow in individual fractures is controlled by:

• Aperture
• Present-day stress conditions
• Plugging of fractures
• Interconnectivity of fractures

There are several ways of detecting fractures and identifying fractured reservoirs in
the subsurface. These are:
• Pressure build-up from formation test (derivative, negative skin)
• Production logs (PLT)
• Temperature (injection)
• Chemical tracers
• Fractures in Cores (not always good in vertical cores)
• Image logs (FMI, FMS, UBI, etc.)
• Wireline logs (caliper, resistivity (MSFL), microlog, sonic)
• Mudlogging (ROP, mud loss)
• Well Seismic - VSP (Velocity anisotropy)

The static and dynamic modelling of fractured reservoirs is, however, a specialised
subject and beyond the scope of this module.


Wireline logs, complemented by core and cuttings descriptions, are the main data
for correlating reservoirs between wells. Log motifs and biostratigraphic data are
the principal means of correlating wells. Where the biostratigraphy is barren,
chemostratigraphy, which identifies correlatable signatures from heavy-mineral
concentrations or rare-earth-element concentrations and assemblages, can sometimes
be used to refine reservoir unit correlations.

Reservoir correlation and zonation can be a relatively simple exercise in the case of
layer cake reservoirs but can be difficult depending on the complexity of the reservoir
architecture, where rapid vertical and horizontal facies changes mean that correlation
is difficult even between numerous and closely spaced wells. Scarcity of wells and wide well spacing may also increase uncertainty in correlations. The drilling of highly deviated
or horizontal wells adds a further complexity, as log response may be affected and
log correlation may in some cases become impossible. A geological model or some
conceptual model of the facies distribution possibly integrated with seismic may be
required to properly correlate such wells.

There are two main types of correlations: Sequence Stratigraphic and Flow Unit correlations.

Geomodelling Workflow T W O

7.1 Sequential Stratigraphic Correlations

Sequence stratigraphic boundaries have the following characteristics:
• Correlate along time lines
• Tie with seismic markers
• Do not necessarily correlate with Flow Units.

Key points about a Sequence Boundary (Wagoner et al., 1990) are:

• Widespread surface that separates all the rocks above the boundary from rocks below. Represents an instant in time over the entire surface (time stratigraphic boundary)
• Forms independently of sediment supply
• Commonly marked by significant erosion. Coincides with the Maximum Flooding Surface (MFS) where there is very slow or non-deposition
• Distinct break in deposition and basinward shift across the Sequence Boundary

Sequential correlation begins by identifying the Sequence and Parasequence boundaries within the reservoir section. Since these are time lines, they coincide with seismic markers and as such should be coherent with seismic interpretations.
Coals, evaporites and radioactive shales are very good and laterally persistent regional
sequence boundary markers, bounding differing sedimentary packages as illustrated
on Figures 19 and 20.
Figure 19 Correlations in complex sedimentary facies assemblages (Mutti and Normark, 1987): identify Sequence Boundaries (MFS), then subdivide further into Parasequences and Flow Units.



Figure 20 Sequential correlations: correlation of wells in a complex deltaic reservoir; coals provide the best sequence markers (after Rider and Laurier, 1979).

However, as mentioned previously, facies assemblages in jigsaw or labyrinth reservoir architectures can sometimes make correlation very difficult (Figure 21). Some helpful correlation techniques are discussed later in this section.
Correlations and Architecture

A. Layer-cake reservoir type
a) Distinct layering with marked continuity and gradual thickness variation
b) Layers represent sands deposited in the same environment of deposition
c) Excellent log correlation showing gradual lateral changes in thickness and properties

B. Jigsaw puzzle reservoir type
a) Different sand bodies fitting together without major gaps. Occasional low permeable zones can occur locally between adjacent or superimposed sand bodies
b) Reservoir architecture determination requires detailed sedimentological analysis
c) Although the sand/shale ratio is high, correlation may be difficult without detailed facies interpretation

C. Labyrinth reservoir type
a) Complex arrangement of sand bodies and lenses, often appearing discontinuous in sections
b) In 3D, interconnections exist locally, but in part only via thin low permeable sheet sands
c) Difficult log correlation even when well spacing is 400 to 600 m

Approximate average data density required for deterministic correlation of major sand units:

Reservoir type   Well pattern   Spacing, m   Wells/km2
Layer-cake       Rectangular    1000         1
                 Triangular     1200         0.8
                 Random         -            1.3
Jigsaw puzzle    Rectangular    600          3
                 Triangular     800          2
                 Random         -            4
Labyrinth       Rectangular    200          25
                 Triangular     300          13
                 Random         -            32

(After Selley, 1998.)

Figure 21 Correlations and architecture.
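The well densities quoted in Figure 21 follow from simple pattern geometry: roughly one well per unit cell, where the cell area is s^2 for a rectangular (square) pattern and (sqrt(3)/2)s^2 for a triangular one. A minimal sketch (the function name is illustrative):

```python
import math

def wells_per_km2(spacing_m, pattern="rectangular"):
    """Approximate well density for a regular drilling pattern.

    One well is assumed per unit cell: area = s^2 for a rectangular
    (square) pattern, (sqrt(3)/2) * s^2 for a triangular pattern.
    """
    s_km = spacing_m / 1000.0
    if pattern == "rectangular":
        area = s_km ** 2
    elif pattern == "triangular":
        area = (math.sqrt(3) / 2.0) * s_km ** 2
    else:
        raise ValueError(f"unknown pattern: {pattern}")
    return 1.0 / area

# Reproduces the orders of magnitude in Figure 21:
print(round(wells_per_km2(1000, "rectangular"), 1))  # ~1 well/km2 (layer-cake)
print(round(wells_per_km2(1200, "triangular"), 1))   # ~0.8 well/km2
print(round(wells_per_km2(200, "rectangular"), 1))   # ~25 wells/km2 (labyrinth)
```

This reproduces the tabulated densities to rounding, which is why the labyrinth type demands roughly an order of magnitude more wells than the layer-cake type for deterministic correlation.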


7.2 Flow Units

Flow Units are defined as:
• Correlatable continuous porous and permeable intervals (linked to lithofacies)
• Hydraulically connected intervals
• Correlatable non-reservoir (barrier) intervals

In this type of correlation, the notion of time stratigraphic boundaries is not considered; rather, reservoir sections with similar flow properties (or lack thereof in the case of barriers) are correlated. Flow Unit correlations can and do cut across time lines, as illustrated on Figures 22 and 23.

Figure 22 Correlations - sequence stratigraphy versus flow units - progradational parasequence set: chronostratigraphic versus lithostratigraphic correlation of coastal-plain sandstones and mudstones, shallow-marine sandstones and shelf mudstones across wells A to D (datum: top of marine sandstone; parasequences numbered 1 to 4) (Wagoner et al., 1990).


Figure 23 Correlations - sequence stratigraphy versus flow units - retrogradational parasequence set: chronostratigraphic versus lithostratigraphic correlation (datum: top of sandstone) of coastal-plain sandstones and mudstones, shallow-marine sandstones and shelf mudstones across wells A to D (Wagoner et al., 1990).

In the Progradational example (Figure 22), it is clear that even though the correlations
using Sequence Boundaries or Flow Units are very different, the computed OOIP from
both models would be much the same. Similarly, the connectivity of the reservoir is
essentially the same between both correlation models, except for some minor isolated
reservoirs, and flow simulation (and therefore reserves) would likewise be similar
when computed from either model.

However, in the case of the retrogradational model (Figure 23), although the OOIP would be very similar for both the Sequence Stratigraphic and Flow Unit models, the reservoir connectivity and flow behaviour would be very different. As such, reserves estimates would vary significantly between the two models.

7.3 Correlation Techniques

A useful technique for correlating clastic reservoirs from logs consists of removing all high energy intervals (sandstones) from the log section, leaving only the shale sections. Since shales are deposited in low energy environments, where clays fall out of suspension over wide areas of basin floors, they are correlatable over relatively large distances. Figures 24 and 25 are two such examples from North Sea fields where the technique was successfully applied.
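On digital logs, the same idea can be mimicked by masking out the sand intervals with a gamma-ray cut-off before correlating. A minimal sketch, assuming a simple GR cut-off of 75 API (both the cut-off and the sample values are illustrative):

```python
import numpy as np

def shale_bands(gr, gr_cutoff=75.0):
    """Return a masked GR curve keeping only the shale intervals.

    Samples with GR below the cut-off (clean, high-energy sands) are
    set to NaN, so that only the laterally persistent shale bands
    remain for well-to-well correlation.
    """
    gr = np.asarray(gr, dtype=float)
    return np.where(gr >= gr_cutoff, gr, np.nan)

gr_log = [30, 35, 90, 95, 88, 40, 32, 100, 92]
print(shale_bands(gr_log))
# Sand samples become NaN; the two shale bands (90-95-88 and 100-92) survive.
```

Once the shale bands are correlated between wells, the sandstone intervals are restored, as in Figures 24 and 25.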


Figure 24 Correlations - shale band correlations (example 1): remove sandstones and correlate shale bands, then restore sandstones in the correlation (Hatton et al., 1992).


Figure 25 Correlations - shale band correlations (example 2): remove sandstones and correlate shale bands, then restore sandstones in the correlation (Hatton et al., 1992).

When correlations are detailed with numerous markers picked, it is important to check that these correlations are coherent between wells. Generating a Line of Correlation (LOC) diagram is a useful tool for doing this. The technique compares all correlations relative to a reference well. The marker depths of that reference well are cross-plotted against themselves to give a straight line. The marker depths of all other wells are in turn cross-plotted against the reference well (Figure 26). In wells where correlations reflect isopach layer-cake successions, the trends on the LOC will be parallel (Wells A and B, Figure 26). In the case of an isopach correlation but with a missing section due to a fault, the trend on the LOC will be parallel but with an offset (Well C). If bed thickness decreases, the slope of the LOC trend will decrease (Well D), and it will increase if there is a thickening. In the case of gradually increasing thickness with depth (Well E), the trend will gradually deviate from the reference trend to give a curved LOC, as shown on Figure 26.
Figure 26 Correlation tools - Line of Correlation diagram (LOC). Taking a reference well (Well A) and cross-plotting tops (depth TVDSS) against this reference gives the LOC for the other wells: constant thickness (Wells A and B), loss of section (Well C), constant thinning (Well D), increasing thickening (Well E).
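The LOC diagnostics above reduce to comparing interval thicknesses between each well and the reference well: a slope near 1 indicates constant thickness, an offset at constant slope suggests missing section (a fault), and a slope below or above 1 indicates thinning or thickening. A minimal sketch with hypothetical marker depths:

```python
def loc_slopes(ref_tops, well_tops):
    """Interval-by-interval LOC slope of a well against a reference well.

    slope ~ 1 : constant thickness (layer-cake)
    slope < 1 : thinning; slope > 1 : thickening
    A depth jump at constant slope suggests missing section (fault).
    """
    slopes = []
    for i in range(1, len(ref_tops)):
        d_ref = ref_tops[i] - ref_tops[i - 1]
        d_well = well_tops[i] - well_tops[i - 1]
        slopes.append(d_well / d_ref)
    return slopes

# Hypothetical marker tops (m TVDSS) in the reference well and a thinning well:
ref = [1750, 1850, 1950, 2050]
well_d = [1760, 1810, 1860, 1910]   # every interval half as thick
print(loc_slopes(ref, well_d))       # [0.5, 0.5, 0.5] -> uniform thinning
```

Plotting the cumulative tops rather than the interval slopes reproduces the curved trend of Well E for gradually changing thickness.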

Varying reservoir pressures and/or gradients can also be used to correlate reservoir units. Figure 27 shows such an example, where two vertical barriers were correlated in a field solely on the basis of vertical pressure changes.
Figure 27 Zonation and reservoir barriers or baffles. Pressure versus depth plots for Wells A, B and C (OWC marked):
• Oil field with 3 major fault compartments, but field hydraulically in equilibrium
• Northern sector put on production
• 5 years later wells B and C are drilled
• Change of pressure shows compartmentalisation
• Vertical barriers / baffles

Once the main reservoir correlations are picked, the reservoir is zoned in order to
capture subtler reservoir characteristics, such as high permeability or tight-cemented
intervals. This may include zones of enhanced secondary porosity and permeability, or zonation governed by the degree of dolomitisation, depending on the impact dolomitisation has on poroperm properties. In other words, the correlation is refined into zones with different poroperm and flow characteristics compared to the units above and below.

Figure 28 shows a reservoir section where a modified Lorenz plot highlights high and low permeability intervals and has been used for flow unit zonation.

Figure 28 Reservoir zonation using a modified Lorenz plot: reservoir zonation controlled by permeability (tracks: GR, core porosity, core permeability, blocked Phi, facies, flow unit zonation).


Highly deviated and horizontal wells are common these days and their correlation is not always easy. It is very common for deviated/horizontal wells to be projected back as TVD vertical wells to facilitate correlation. However, this is not always possible in horizontal wells, since the horizontal section, sometimes hundreds of metres in length, will be projected over less than one metre of true vertical section, or the section intersected may not be representative of the point where it is being projected back. Furthermore, horizontal wells may, in the case of dipping beds, move up section and emerge back at the top of a reservoir. Yet horizontal wells are often key producers and must be properly correlated along their entire horizontal section, or else the model may be seriously flawed.


Armed with a structural interpretation and reservoir zonation for the field, the next step is creating a grid, which will be used to construct the geomodel and populate it with the reservoir properties. Since the key objective of a geomodel is to capture heterogeneities that significantly impact flow, the grid orientation and cell size must be chosen such that these heterogeneities can be effectively modelled. Although modern geomodelling software allows models with up to 50 million cells, it must be remembered that dynamic simulation grids are limited to between 200,000 and 500,000 cells, depending on the reservoir complexity. Building very detailed fine grid geomodels to capture a high level of heterogeneity detail will require upscaling to a coarser flow simulation grid, with the inevitable and often unquantifiable degradation of the fine grid data.

The geomodel and grid must be made fit for purpose and ideally, geomodels should
be constructed using a grid size that will minimize or not require upscaling when
used as input to a flow simulation.

At the moment most commercial geomodellers use orthogonal corner point geometry to generate grids, but limited PEBI gridding (hexagonal shaped cell grids) now exists and will become more widely available in the near future.

Key points and considerations when gridding are listed below:

• Field geometry (structure and fault boundaries)
• Geological trends
• Orientation of grid along main heterogeneity and/or flow directions
• Numerical dispersion and grid orientation effects (linked to flow direction, grid size and grid orientation)
• Dynamic simulator (problems of convergence)
• Model only the faults which are known or believed to have an effect on volumes and/or flow performance
• Other heterogeneities
• Capture key flow units and main reservoir drains
• Fractures and enhanced flow directions
• Drains and barriers
• Drainage
• Aquifer
• Production mechanism
• Pressure and saturation distribution
• Computation time
• Grid size linked to degree and size of key heterogeneities
• Maximum number of cells allowed (computing limitations)
• Gridding as regular and orthogonal as possible
• Grid size changes should ideally be less than a factor of 2 in either the ∆Xi+1/∆Xi or ∆Yi+1/∆Yi direction
• Always try to have more than 2 grid cells between wells
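The grid-size-change rule above (adjacent cell sizes differing by less than a factor of 2) is easy to screen automatically. A minimal sketch over a one-dimensional vector of cell sizes (the spacing values are illustrative):

```python
def check_spacing_ratio(spacings, max_ratio=2.0):
    """Flag adjacent grid cells whose size ratio exceeds max_ratio.

    Returns a list of (index, ratio) pairs violating the rule that
    dX[i+1]/dX[i] (or its inverse) should stay below max_ratio.
    """
    bad = []
    for i in range(len(spacings) - 1):
        ratio = max(spacings[i + 1] / spacings[i],
                    spacings[i] / spacings[i + 1])
        if ratio >= max_ratio:
            bad.append((i, ratio))
    return bad

dx = [100, 120, 150, 400, 380]   # metres; 150 -> 400 jumps by ~2.7x
print(check_spacing_ratio(dx))   # flags the transition at index 2
```

The same check would be applied along both the I and J directions of the grid.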

This is schematically illustrated on Figures 29 and 30.

Figure 29 Grid orientation and X versus Y axis length of cells: sediment source, open fractures, active faults and the main structural axis all influence grid orientation.

Figure 30 Gridding: identifying the main flow units and fluid flow directions for each reservoir unit is important at the onset of the modelling (section through channel complex; plan of channel system).


Besides the reservoir zonation discussed in the previous section, the zones can be
further subdivided vertically in the grid by what is commonly referred to as layering.
Layering can vary between different reservoir zones in a geomodel, depending on
the complexity of the reservoir morphology and properties that we are attempting
to capture for each zone.

Key geological and reservoir considerations when layering the grid are:
• The number of grid cells (layering greatly impacts the number of cells)
• The size of your model (cells or physical dimensions)
• Reservoir correlation and zonation
• Objectives of the model (full field versus sector or phenomenological)
• Complexity of the field
• Fluid being produced (oil/gas)
• Production mechanisms (depletion and sweep)

You may need to adapt your layering to accommodate and capture certain reservoir
anomalies, well behaviours or just practical reasons linked to your wells or completions.
Typical reasons why you may adapt your layering are:
• Displacement fronts (sweep efficiency)
• Water coning or gas cusping
• High permeability drains
• Baffles or vertical barriers
• Perforation intervals for production/injection, completions and even DSTs
• Production logging results (PLT)
• Accommodating horizontal wells

As shown in Figure 30, there is great flexibility in the way a reservoir section can be
layered and vertically gridded. It is clear that all the above conditions cannot possibly
be satisfied in the gridding of a reservoir. Compromise will therefore be necessary
to favour the criteria considered most important in the model.

8.1 Fault Representation

In a geomodel, faults are represented as surfaces cutting across the grid and subdividing
it into compartments or segments. There are three ways faults can be represented
(Figure 31).
Figure 31 Fault representation in gridding: oblique, along the I or J direction, and zigzag (oblique but along cell margins).


1) Oblique
The grid cells terminate in a non-orthogonal fashion along the fault-plane. This
generates complex grid cell shapes along the fault trace, which can adversely
affect flow computation along and across the fault-plane.

2) Along I or J direction
The grid cells are aligned along the fault-planes to keep the cell grids along the
fault orthogonal.

3) Zigzag
The fault-plane is bent along the edges of grid cells in a zigzag pattern in order
to keep the grid cells orthogonal along the fault trace.

The fault-plane representations in Figure 31 are equally applicable to the vertical plane, but not in all commercial geomodellers.

Finally, gridding may be locally refined if a particular property needs to be modelled. For instance, fine grids or radial grids can be created around the wellbore in order to better model flow behaviour or reproduce well test results. Examples of grid refinements are illustrated on Figure 32.

Figure 32 Hybrid grids: the grid system or refinements depend on what is required from the model (variable size cartesian grid, local grid refinement, radial grid around the well bore).


Cross-plots of core-plug porosity and permeability measurements will in many cases fall into Petrophysical Groups associated with lithofacies, which occupy defined areas on the porosity-permeability cross-plot as illustrated on Figure 33. The linking of Petrophysical Groups to a given set of relative permeability and capillary pressure curves is known as Petrophysical Rock Typing (Figure 34). Note that Petrophysical Groups and Petrophysical Rock Typing are not formal nomenclature and usage may vary between authors and companies.


Figure 33 Petrophysical groups: sedimentological and lithofacies analysis relates lithofacies (F_01 to F_06) to petrophysical groups (Group_01 to Group_04), each characterised by a defined area on the porosity (%) versus permeability (md) cross-plot.

Figure 34 Petrophysical rock typing: relative permeability (kro and krw versus Sw, water saturation as a fraction from 0 to 1.0) and capillary pressure (Pc) curves are associated with each petrophysical group.



Using core data, cuttings and wireline logs, lithofacies can be identified, each lithofacies
typically falling within a Petrophysical Group as illustrated on Figure 33.
Examples of Lithofacies are:
• Coarse pebbly sandstone
• Sorted cross-bedded sandstone
• Massive well sorted sandstone
• Upward fining laminated sandstone
• Fine-grained argillaceous sandstone

The lithofacies identified from core and wireline data now need to be extrapolated to the uncored sections of the well and to other non-cored wells in the field. This is done by establishing lithofacies predictors from wireline logs, a process known as Log Typing. Lithofacies based on Log Typing are commonly referred to as Electrofacies, in order to differentiate them from those determined from core data. Again, Log Typing and Electrofacies are not formal terms and usage will vary between workers and companies.

Typical examples of lithofacies discriminators from wireline logs are:

• Cut-offs on logs: GR, SP, Pef or combinations thereof
• Density-Neutron cross plots. Possibly with a third axis (Figure 35)
• Histograms or combinations thereof
• Mineralogy proportions (Quartz, Shale, Limestone, Dolomite)
• Other wireline cross-plots
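A cut-off based discriminator of the kind listed above can be sketched as a rule-based classifier. All cut-off values and facies names below are hypothetical; real cut-offs are calibrated against cored wells during Log Typing:

```python
def electrofacies(rhob, nphi, gr):
    """Illustrative rule-based electrofacies from RHOB, NPHI and GR.

    All cut-offs are hypothetical; in practice they are calibrated
    well by well against core-described lithofacies (Log Typing).
    """
    if gr > 75:                                  # GR cut-off for shale
        return "shale"
    # Density porosity assuming a 2.65 g/cc quartz matrix, 1.0 g/cc fluid:
    phi_density = (2.65 - rhob) / 1.65
    separation = nphi - phi_density              # neutron-density separation
    if separation > 0.06:                        # clay-bound water effect
        return "argillaceous sandstone"
    if rhob < 2.35:
        return "porous sandstone"
    return "tight/cemented sandstone"

print(electrofacies(rhob=2.25, nphi=0.24, gr=35))   # porous sandstone
print(electrofacies(rhob=2.55, nphi=0.10, gr=40))   # tight/cemented sandstone
```

In practice such rules are often replaced by statistical classifiers trained on the cored intervals, but the principle of discriminating facies from log-space cut-offs is the same.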

Figure 35 Lithofacies discriminator: bulk density versus neutron porosity cross-plot used to discriminate lithofacies.



As mentioned earlier, geomodelling entails allocating a single value representing lithofacies and petrophysical properties (NTG, porosity, Kh, Kv) to each grid cell of a model. The wells act as anchor points from which properties are interpolated in the interwell space (Figure 5). The values allocated at the cells intersected by the wells are computed from wireline logs, and the process is called Log Blocking.

Figure 36 schematically illustrates a single cell intersected by a well. Since only a single value can be attributed to a cell for a given property, the facies allocation will be based on the predominant facies, which in this example is sandstone (40%). The net to gross allocated to the cell will be computed from the thickness above the porosity and Vshale cut-offs (Section 13.1) divided by the gross interval. Average porosity can be computed by averaging the porosity above the cut-off, or can be biased so as to be computed solely from the predominant facies allocated to the cell (Figure 36). Averages can of course be computed as arithmetic, geometric, harmonic or RMS depending on the property being calculated.


Figure 36 Log blocking computation at well: the predominant facies over the cell height is attributed to the cell.
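The blocking logic described above (predominant facies, cut-off based net to gross, and a porosity average biased to the predominant facies) can be sketched as follows. The sample values are hypothetical, and bad-hole samples are represented as NaN so they drop out of the computation:

```python
import numpy as np
from collections import Counter

def block_cell(facies, phi, phi_cutoff=0.08):
    """Block the log samples in one grid cell to single values.

    - Facies: the predominant (most frequent) facies code.
    - NTG: fraction of valid samples above the porosity cut-off.
    - Phi: arithmetic average of net samples, biased to the
      predominant facies; NaN samples (e.g. bad hole) are dropped.
    """
    facies = np.asarray(facies)
    phi = np.asarray(phi, dtype=float)
    valid = ~np.isnan(phi)
    facies, phi = facies[valid], phi[valid]

    main_facies = Counter(facies.tolist()).most_common(1)[0][0]
    net = phi >= phi_cutoff
    ntg = net.sum() / len(phi)
    biased = net & (facies == main_facies)
    phi_avg = phi[biased].mean() if biased.any() else 0.0
    return main_facies, ntg, phi_avg

f = ["sand", "sand", "shale", "sand", "shale"]
p = [0.22, 0.18, 0.03, 0.20, np.nan]          # last sample is bad hole
mf, ntg, phi_avg = block_cell(f, p)
print(mf, ntg, round(phi_avg, 3))             # sand 0.75 0.2
```

For permeability, the arithmetic mean would typically be replaced by a geometric or harmonic average, depending on the flow direction the blocked value is meant to represent.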


Figure 37 Log blocking example of E_facies, Phie, NtG and Kh: GR, Phie (0-30%), NtG (0-1) and Kh (0.01-1000 md) curves upscaled per reservoir layer, with Phie and Kh averages computed both unbiased and biased for lithology; note the high Kh streak.

Figure 37 shows a real well Log Blocking example.

Note that the values allocated to the cells by the well control become anchor points, and it is important to ensure that these are real and representative. In the case of bad-hole sections (Figure 38), the computed petrophysical properties are not representative over these intervals and must be excluded from the log blocking, as shown on Figure 39. If the amount of bad-hole section is excessive, the well may become non-representative over the affected intervals and should be ignored there.


Figure 38 Example of bad hole sections: measurements in bad-hole sections below top reservoir must be excluded from facies and petrophysical computations.

Figure 39 Log-blocking computation and bad-hole section at well:
• In the above example, if bad hole is ignored, the predominant facies may be wrongly attributed to dolostone, and petrophysical parameters in the sandstones would be wrongly biased towards higher values
• Bad-hole sections are treated as null values
• Wells with excessive bad hole may not be representative for the zones where these occur
• Either exclude them and allow the model to stochastically generate facies and petrophysical parameters, or fix the facies and/or petrophysical parameters


Care must also be taken when using wells with reduced or missing sections due to faulting or partial penetration. The reduced intersected section may not be representative of the entire interval and should be ignored. Also remember that wells intersecting a fault, or in close proximity to a fault, may be affected by diagenesis and may therefore have petrophysical properties not representative of the reservoir away from the fault (Figure 40).

Figure 40 Modelling of facies and petrophysical properties: diagenesis along the fault may make Well A unrepresentative of NTG, Phi and Kh away from the fault. The fractional volumes of the various lithofacies in the model need to be estimated, usually per interval; beware of wells with sections missing, and consider whether wells intersecting faults are representative (diagenesis?).
Figure 41 Log blocking - upscale of log data to cell scale: attribution of a single log value to a cell. With a complex well trajectory intersecting the 3D grid cells, properties attributed to cells a) and b) are based on very limited well log; values calculated in cell c) may instead be attributed to a) and b).


Finally, in the case of deviated wells, where a well may only intersect a small fraction of a cell (Figure 41), these cells may be ignored in order to avoid allocating non-representative values based on limited well data. Geomodelling software packages have different Log Blocking methods to deal with such problems, as shown on Figure 41.


12.1 Depositional Environment

The spatial distribution and geometry of lithofacies depends largely on their depositional
environment as illustrated on Figure 42. Establishing the depositional environment
of the reservoir section is therefore one of the key tasks in geomodelling, as it will
allow the geometry and spatial arrangements of the lithofacies to be predicted. It is
evident from this that the interpreted depositional environment will hugely affect
how porosity and permeability are distributed in the geomodel.
Figure 42 Depositional model and lithofacies from core (modified from Selley, 1976).
Active channel and levee sequence lithofacies: i) coarse to pebbly sandstone; ii) medium grained mega-rippled trough cross-bedded sandstone; iii) cross-bedded silty to medium grained sandstone.
Abandoned and overbank channel sequence lithofacies: iv) thin bedded silty to fine grained sandstone; v) floodplain shales, silts and coals.

The identification of depositional environments is based on the recognition of diagnostic features from a combination of wireline log motifs, cuttings descriptions and of course core data, where detailed sedimentary features can be identified (Figure 43).


FACIES - Diagnostics

Depositional settings illustrated in the figure include tidal channels and sand wave/regressive barrier bars, submarine channels (turbidite and grain flow fills) and prograding submarine fans, and fluvial or deltaic distributary channels with crevasse splays; diagnostic constituents include glauconite, shell debris, carbonaceous detritus and mica.

Facies types and depositional environments:
I Sub-wave base slope deposits ("deep marine")
II Above sub-wave base slope deposits ("shallow marine")
III Fluvial deposits

Diagnostic features (marked against the facies types in the original figure): 1 scattered pebbles in massive sandstones; 2 climbing ripples; 3 graded bedding; 4 penecontemporaneous faulting; 5 glauconite; 6 fossil shells; 7 lensoid ripples; 8 bioturbation; 9 mottled claystones (caliche); 10 cross-bedding; 11 red colour.

(Selley, 1998)

Figure 43 Facies - diagnostic features.


Figure 44 Lithofacies variation within the same sedimentary environment: vertical profile of a gravelly braided stream deposit (Busch, 1985). Wide variation in composition (minerals and lithics) gives varying log response; coring is needed, supplemented by mudlogs, cuttings descriptions and sidewall cores.

However, one must be aware that diagnostic log motifs may in some cases be variable within the same depositional environment. Figure 44 illustrates how gravel lithofacies and log motifs within a braided stream deposit can vary significantly simply as a result of gravel composition (variations in lithic fragments: shales, sandstones, igneous rocks, feldspathic rocks, coal fragments, evaporites, etc.). Core data may in such cases be the only means of arriving at a proper interpretation.

However, log motifs can in most cases be used to interpret depositional environment.
Figures 45 to 48 illustrate diagnostic log motifs and major reservoir characteristics of
several depositional environments ranging from braided and meandering channels to
delta lobes and distributary mouth bars. Note that not only are log motifs important,
but the interpretation must also consider the lateral and vertical relationship between
the different sub-environments. Remember that certain geometries can be recognized
from seismic and if available, seismic must be integrated into the interpretation.

However from Figures 47 and 48, it is evident that without additional information,
it may be impossible to differentiate between the delta lobe and the distributary
mouth-bar. In this case however, the two environments are likely to have very similar
lithofacies distribution and would therefore not adversely affect the geomodel and
reservoir prediction whichever depositional model is adopted.


Figure 45 Diagnostic depositional environment from wireline logs - braided channel system: summary diagram illustrating the major characteristics of braided channel deposits (Coleman and Prior, 1980).

Figure 46 Diagnostic depositional environment from wireline logs - meander point bar system: summary diagram illustrating the major characteristics of meandering point-bar deposits; use diagnostic log curves, lateral and vertical relationships, and seismic (Coleman and Prior, 1980).


Figure 47 Diagnostic depositional environment from wireline logs - delta lobe: summary diagram illustrating the major characteristics of lacustrine delta-fill deposits in the upper delta plain (Coleman and Prior, 1980).

Figure 48 Diagnostic depositional environment from wireline logs - distributary mouth bar: summary diagram illustrating the major characteristics of distributary-mouth bar deposits in the subaqueous delta plain; note the similarity with the delta lobe. Use diagnostic log curves, lateral and vertical relationships, and seismic (Coleman and Prior, 1980).


As mentioned above, log motifs on their own may therefore not be sufficient for a clear diagnostic, requiring additional information to determine the depositional environment. For example, the log motifs on Figures 49 and 50 display typical upward coarsening cycles, one from a wave dominated delta and the second from a shoreface (beach) environment.
Figure 49 Parasequences and diagnostics from logs: stratal characteristics of an upward-coarsening parasequence in a beach environment. A parasequence is a relatively conformable succession of genetically related beds or bedsets bounded by marine flooding surfaces and their correlative surfaces (Wagoner et al., 1990).

Figure 50 Parasequences and diagnostics from logs: stratal characteristics of an upward-coarsening parasequence in a wave dominated deltaic environment (Wagoner et al., 1990).


Discriminating between these two depositional environments even with detailed core information may still be quite difficult, and may require seismic or regional information to decide. Unlike the previous example in Figures 47 and 49, the choice of environments in this case would have a major impact on the geometry of the reservoir model, as shown on Figures 51 and 52.

Wave-dominated delta: upward-coarsening grain-size profile (GR scale 0-150); delta-mouth bars with lenticular geometry.

Figure 51 Parasequences, diagnostics from logs and reservoir geometry.

Beach - barrier shoreline: upward-coarsening grain-size profile (GR scale 0-150); tabular and linear geometry. Different interpretations have a major impact on reservoir morphology, extension and geometry.

Figure 52 Parasequences, diagnostics from logs and reservoir geometry.


Remember that all interpretations must integrate all available information, and all
interpretations must be internally coherent.

In the example on Figure 53, we have a shoreline depositional setting and, as can be seen from that example, wells A and B are likely to correlate very well even if quite distant from each other, while the much closer wells B and C are likely to have different lithofacies assemblages and be difficult to correlate. Uncertainty in the interpretation and geological model is often a function of the number of wells, their spacing and their spatial distribution.

Map view of the shoreline setting: lagoonal/fluviatile, upper shoreface and lower shoreface belts (scale bar 1 km), with wells A, B and C.

Figure 53 Depositional environment and paleogeography

• Depositional environment is interpreted from diagnostic features and the vertical and lateral association (assemblage) of lithofacies.
• Good interpretation depends on well distribution, well number and spacing.
• The fewer the wells or the wider the spacing, the higher the uncertainty in the interpretation.
• Spatial distribution is also a factor of uncertainty: for example, wells A and B are likely to correlate better than wells B and C.
• Usually, as the number of wells increases, the need for a geological model decreases; a simple gridding and contouring approach may be used.

Note that trends such as palaeo-shoreline or channel orientations in a field can come from several sources, such as regional geology and dipmeter data (Figure 54). Seismic can help discriminate between different reservoir architectures such as isolated channels or channelised lobes (Figure 55).


The dipmeter may be very useful in giving the orientation of channels or beach axes (Cowan et al., 1993).

Figure 54 Facies recognition using the dipmeter

The interpretation has a major impact on pore and heterogeneity distribution: channel-levee complex versus channelised lobe complex. Schematic cross-sections show the reservoir architecture and depositional elements (Lawrence et al., 2002).

Figure 55 Depositional setting and spatial distributions of reservoirs.

Figure 56 is another example, where we have a channelised sequence identified from

seismic. Here, knowing whether this is a turbidite (marine) or a fluvial (continental)
channel complex will give you additional information, with turbidite channels having
levees with significantly higher percentages of sandy facies compared to the much
siltier and argillaceous levees in the fluvial channel system.

Diagnostic logs and geometry - limitations: two possible interpretations of a channel feature seen on seismic. A submarine turbidite channel has a main channel with sandy levees and overbanks, while a fluvial channel has silty levees; there are important reservoir implications depending on the selected model.

Figure 56 Seismic, reservoir geometry and reservoir characteristics

12.2 Facies Proportions

We have so far discussed the interpretation of lithofacies and their conversion to Electrofacies in all wells. We also discussed how Electrofacies and reservoir properties can be allocated to grid cells using Log Blocking. The review of depositional environments has given a first insight into interpolating Electrofacies (and therefore reservoir properties) in the interwell space. However, before doing this, we first need to decide the percentages of lithofacies that must be attributed to the geomodel and how these are distributed in the various reservoir zones.

The first estimate will come from the wells themselves, where the weighted average percentages of Electrofacies computed from the wells may be representative of the field as a whole. Figure 57 shows an example where the weighted average percentages of shale and sandstone for each layer have been computed from a number of wells. If we are satisfied that these percentages are representative of the entire field, the geomodel will be populated with the same percentages, as illustrated on Figure 57. However, if the wells were drilled in an area of the field where one or more facies are over or under represented, the percentages can be modified accordingly, either over the entire section or selectively for any given layer(s).


Facies Modelling
• The proportion of each facies is computed for each layer, on the basis of averages from selected wells.
• From the facies proportions, each facies is given the same probability value as its proportion for each layer. If, for a given layer or group of layers, these proportions and probabilities are not judged to be valid, they can be modified on the probability curve to something believed to be closer to reality.
• Spatial correlations will ensure that the various facies have the wanted geometry.
• Facies distribution can be further conditioned by trend maps and co-simulation or co-kriging, e.g. a correlation between the reservoir isopach and the sandstone thickness.

Figure 57 Facies proportion modelling
Facies Modelling
• Need to estimate the fractional volumes of the various lithofacies in the model.
• Percentage volumes are computed from wells: are the wells distributed so as to be globally and spatially representative of the reservoir interval?
In growth-fault settings:
• Wells on the upthrown side of the structure are often not representative of facies volume, NTG, Phi and K for the downthrown side.
• Model the properties independently for each segment.
In this example:
• Are all wells on the upthrown side representative of that segment?
• Are only three wells on the downthrown side representative of that segment?
• Consider a declustering approach.
The sketch map shows well locations on the upthrown and downthrown fault blocks (segments A and B): different geology?

Figure 58 Facies proportion modelling

For example, in a field cut by a major growth fault such as schematically represented in
Figure 58, it is very likely that the geology and lithofacies on the upthrown side of the
fault will be different from that on the downthrown side. As such, only the lithofacies
and percentages from wells on the upthrown side of the fault should be considered in
populating that segment of the field, and only the wells on the downthrown side of


the fault considered as representative of the southern segment of the field. Note that in cases where there is an insufficient number of wells to give a representative facies percentage, analogues can be used to get better estimates.

Remember that facies proportions for each zone and layer are computed directly from averaging the proportions seen in all selected wells. In cases where wells are closely grouped together, percentages will be strongly biased towards the well clusters, which may not be representative of the whole field, as illustrated by the upthrown segment in the example in Figure 58. Techniques such as the appropriately named declustering may need to be applied to remove this bias.
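As an illustration of the declustering idea (a minimal sketch, not any particular software's algorithm; the cell size and coordinates are invented), cell declustering overlays a grid and makes wells that share a cell share that cell's weight, so a tight cluster of wells no longer dominates the averages:

```python
# Hypothetical sketch of cell declustering: wells falling in the same
# grid cell split that cell's weight, reducing the bias of well clusters.
from collections import Counter

def decluster_weights(xy, cell=1000.0):
    """xy: list of (x, y) well locations; cell: cell size in metres.
    Returns weights summing to 1, lower for clustered wells."""
    cells = [(int(x // cell), int(y // cell)) for x, y in xy]
    counts = Counter(cells)
    n_occupied = len(counts)
    # Each occupied cell gets equal weight, split among its wells
    return [1.0 / (n_occupied * counts[c]) for c in cells]

# Three clustered wells in one cell, one isolated well far away
xy = [(100, 100), (200, 150), (150, 300), (5000, 5000)]
w = decluster_weights(xy, cell=1000.0)
# Each clustered well gets 1/6; the isolated well gets 1/2
```

Facies proportions (or property averages) are then computed with these weights instead of equal well weights.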

Having established the percentages of facies to be modelled, the way these are to be
interpolated between wells and distributed in space in the geomodel requires applying
geostatistical modelling techniques which are discussed next.


13.1 Pixel Based Models

Pixel based models are built and populated with facies using correlation structures
determined by variograms. Each grid cell is treated as a pixel and the variogram
determines how the facies will be distributed and clustered in 3D space. Each facies
will have its own variogram, so that each facies has its own 3D distribution in space.

The variograms that produce the wanted distribution and clustering in 3D space are defined by specific parameters: the variogram model (spherical, exponential, Gaussian), and the nugget, sill and correlation lengths. (Variograms are covered in Section 15.) The variogram parameters are determined from several sources: typically from well data, if sufficient well data exist for the variogram to be statistically significant (which is often not the case). More commonly, facies models are based
on statistics taken from outcrop analogues (See AAPG Studies in Geology, No 50)
and/ or published data that offer many different relationships such as those shown on
Figures 59 and 60, which can be used for modelling different facies assemblages.
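To make the role of these parameters concrete, here is a minimal sketch of the spherical variogram model (the nugget, sill and range values are illustrative, not recommendations): semivariance rises from the nugget at short lags towards the sill, which it reaches at the correlation length (range).

```python
# Hypothetical sketch of the spherical variogram model used in
# pixel-based modelling: gamma(h) climbs from nugget to sill at the range.
def spherical_variogram(h, nugget=0.1, sill=1.0, rng=500.0):
    """Semivariance at lag distance h (same length units as rng)."""
    if h <= 0.0:
        return 0.0                       # zero lag: no variance
    if h >= rng:
        return sill                      # beyond the range: uncorrelated
    r = h / rng
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

# Semivariance grows with lag and plateaus at the sill beyond the range
g_near = spherical_variogram(100.0)
g_far = spherical_variogram(800.0)
```

A larger range produces larger, more continuous facies bodies; a larger nugget produces noisier, more fragmented ones.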

When modelling complex facies associations in a reservoir with little well control, and/or where it is difficult to correlate reservoir and sealing units between wells (rapid spatial variability in the X, Y and Z planes), or where interpolation of reservoir and sealing facies between wells may be difficult or impossible based on the existing well data, we can revert to analogues to guide the modelling. Analogues are
based on detailed studies of modern depositional environments (present day fluvial,
deltaic or beach environments) or ancient sedimentary sequences exposed at surface.
These analogues will give key insight on how, for a given depositional setting, various
facies are stacked in 3D space, their continuity (vertical and horizontal) or lack
thereof, their proportions (percentages), their size (length, width and height), their
azimuth, and how they vary from proximal to distal settings. All that information can
be used as input, besides the well control, into the modelling exercise if we believe
that our reservoir is comparable to some present-day or ancient outcrop analogue. For example, if we have a reservoir consisting of a braided fluvial system, with many different facies ranging from very high permeability streaks to silty overbank deposits and thin shale intercalations, their vertical and horizontal stacking will greatly affect the
flow behaviour of the reservoir; yet even with a large amount of well control, only an analogue study can guide the accurate modelling of the facies. Looking at
turbidite and fluvial system analogues will be the core objective of your field trip to
the Southern Pyrenees at the start of term 3.



Length distributions of shale intercalations for different depositional environments (deltaic barrier; delta fringe and delta plain; distributary channel; point bars), plotted as the percentage longer than a given length of shale intercalation (ft, 0-2000).

Figure 59 Shale length versus depositional environment (Weber, K.J., 1986)

Variograms are discussed further in Section 15

Cross-plot of thickness (m, 0-15) versus width (m, 10-10,000, log scale) for channel (aspect ratio about 1:50) and channel-belt (about 1:100) sandbodies.

Figure 60 Thickness versus width for channel and channel-belt sandbodies in the Escanilla Formation at Olsón (from Dreyer et al., 1993).


When using pixel-based facies models, we require the following parameters: i) facies proportions (as discussed in the previous section); ii) variograms for the different facies; and iii) the facies logs at the wells, to tie the facies to the wells in our model. Incoherence between these parameters (for example, using variograms which create geometries that cannot honour the wells) means the simulation will fail to "converge" towards the correct solution, and the geomodelling software will compensate by modifying the facies proportions, which may end up significantly different from those wanted. Although some variation is expected from one realisation to the next, the proportions should cluster close to those we want, else a major error could be introduced in the model.
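This convergence check is easy to automate. The sketch below is hypothetical (the grid, facies names and 5% tolerance are invented for illustration): it compares the realised proportions in a finished realisation against the targets and flags any facies that has drifted too far.

```python
# Hypothetical convergence QC sketch: compare realised facies proportions
# in a simulated model against the targets and flag large drift.
def proportion_drift(facies_grid, targets, tol=0.05):
    """facies_grid: flat list of facies codes, one per cell;
    targets: {facies: target fraction}.
    Returns {facies: (realised fraction, within tolerance?)}."""
    n = len(facies_grid)
    out = {}
    for facies, target in targets.items():
        realised = sum(1 for c in facies_grid if c == facies) / n
        out[facies] = (realised, abs(realised - target) <= tol)
    return out

grid = ["sand"] * 55 + ["shale"] * 45          # a toy 100-cell realisation
qc = proportion_drift(grid, {"sand": 0.60, "shale": 0.40}, tol=0.05)
# Sand realised at 0.55 is within 0.05 of the 0.60 target
```

Running such a check over all realisations helps catch the proportion drift described above before the model is taken forward.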

13.2 Object Based Models

Object Based Models are built on the superposition of reservoir geometries associated
with a given set of facies. The types of geometries used are illustrated on Figure 61.
The parameters of each geobody (length, width, height and azimuth) can be either
fixed or given various ranges based on probability functions. The geometries available
in commercial geomodellers are becoming ever more sophisticated, especially for channel complexes. Modelling channel systems, for example, allows a whole range of possibilities not only in terms of sinuosity (amplitude and wavelength), channel height and width-to-height ratios, but also options for modelling overbank facies such as levees, crevasse splays, etc. Placement rules allow hierarchical modelling, whereby facies are modelled chronologically, one after the other in any preferred order, such that later modelled facies can erode into those modelled before them.
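The idea of drawing geobody parameters from probability functions can be sketched as follows. This is a hypothetical illustration (the triangular distributions, units and numeric ranges are invented, not field-specific recommendations): each sampled channel gets a width, height, azimuth and sinuosity amplitude drawn from its distribution.

```python
# Hypothetical sketch: drawing channel geobody dimensions from triangular
# probability distributions, as in object-based modelling parameter setup.
import random

def sample_channel(rng):
    """Sample one channel geobody's parameters (units are illustrative)."""
    width = rng.triangular(100.0, 800.0, 300.0)       # m: min, max, mode
    w_to_h = rng.triangular(30.0, 80.0, 50.0)         # width/height ratio
    return {
        "width_m": width,
        "height_m": width / w_to_h,
        "azimuth_deg": rng.gauss(45.0, 10.0),         # mean trend and spread
        "amplitude_m": rng.triangular(200.0, 1500.0, 600.0),  # sinuosity
    }

rng = random.Random(42)                               # reproducible draws
bodies = [sample_channel(rng) for _ in range(100)]
# All sampled widths fall within the specified triangular bounds
```

Fixing the seed gives one realisation; changing it samples another equally probable set of geobodies.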
Various geometries available in object-based modelling (Lanzarini et al., 1997).

Figure 61 Object based modelling


The key requirements in object modelling are the selection of the geobodies (Figure 61), their orientation (azimuth), and their sizes in the X, Y and Z directions. Note that, unlike pixel-based modelling, there are no variograms in object modelling. The selection of geobodies and their parameters is determined from facies interpretations based on well data (cores, wireline log patterns, etc.) and possibly high resolution 3D seismic if available. The parameters (percentages, size, orientation, etc.) for the geobodies, as in pixel-based modelling, are typically based on outcrop analogues. Finally, the placement rules are equally applicable to object-based modelling.

Object based modelling begins by placing the geobodies in their correct position and correct intersection at the well bore (Figure 62). Modelling is then extended by generating geobodies in the interwell area until the required proportions have been reached. Because of the placement rules, geobodies are modelled in a pre-selected order to allow (if wanted) one type of geobody to cut or erode into another (Figure 62).
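The placement loop just described can be sketched as a simple rejection procedure. This is a deliberately simplified, hypothetical sketch (it ignores body overlap and well conditioning details, and the domain size and body area are invented): bodies are proposed at random, rejected if they conflict with well data, and added until the target net-to-gross is reached.

```python
# Hypothetical sketch of object placement: bodies are added at random in
# the interwell space until the target net-to-gross (N:G) is reached,
# rejecting any body that would contradict a well observation.
import random

def place_bodies(target_ntg, area, body_area, conflicts, rng, max_tries=10000):
    """area: model area; body_area: area of one geobody;
    conflicts(x, y): True if a body centred there contradicts a well."""
    placed, covered = [], 0.0
    for _ in range(max_tries):
        if covered / area >= target_ntg:
            break                          # target proportion reached
        x, y = rng.uniform(0, 1000), rng.uniform(0, 1000)
        if conflicts(x, y):
            continue                       # reject: would violate well data
        placed.append((x, y))
        covered += body_area               # simplistic: ignores overlap
    return placed, covered / area

rng = random.Random(0)
no_conflict = lambda x, y: False           # toy case: no wells to honour
bodies, ntg = place_bodies(0.3, 1_000_000.0, 20_000.0, no_conflict, rng)
# Stops once the realised N:G reaches the 0.3 target
```

The `max_tries` cap mirrors the convergence problem discussed below: if the rejection rate is high, the target proportions may never be reached.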

Conditioning data: the well data are honoured by randomly locating bodies so that they coincide with the wells. Interwell sandbodies are then added until the N:G is reached (conflicting ones are removed).

Figure 62 Placement of geobodies according to well data and in interwell area.

Conditioning of the placement rules is possible with some geomodellers, such that
modelled objects can cluster or alternatively anti-cluster (Figure 63). Note that the
problem of convergence also exists in object based modelling and may be more
problematic than in pixel based modelling, because it may be difficult to get the
selected bodies and their given sizes to honour the wells and the required facies
proportions. This is all the more so as the number of wells increases.

The verification and QC of convergence in facies modelling is a critical step in geomodelling, failing which the model may be badly flawed or erroneous.




(Clemetson et al, 1990)

Figure 63 Conditioning placement rule in object modelling

With the advent of 3D seismic and improved imaging, most geomodellers now have
the possibility of modelling objects directly from correlated seismic attributes. Figure
64 is an example where a high amplitude event in a seismic cube (possibly correlated
with some lithofacies) is captured as a 3D object.

High amplitude event captured as object (Petrel, 2003).

Figure 64 Object modelling extracted from seismic



14.1 Net Pay Cut-Offs

Before discussing Petrophysical Modelling, it is important to consider the notion of cut-offs in reservoir characterisation. The selection of a cut-off allows the determination of net-pay. Net-pay is that part of the reservoir remaining after removing the volume of rock that does not contribute to reserves; by definition, therefore, net-pay occurs above the FWL or OWC of a reservoir. The volume of rock that does not contribute to reserves consists of non-depleting rocks, and includes shales and reservoirs whose poroperm properties do not permit hydrocarbons to be produced in commercially significant amounts. This brings the factor of commerciality, and not just technical criteria, into the selection of cut-offs. For example, an onshore well with an oil production of 100 bbl/day may be deemed commercial and will therefore have net-pay, while the same well in an offshore setting would be non-commercial, have zero net-pay, and therefore a different cut-off.

In its simplest form, net-pay can be defined as those portions of a reservoir that contain
commercially producible hydrocarbons. There is therefore:
• Notion of moveable hydrocarbons
• Notion of commercial rather than technical limitation

The commercial factor means that cut-offs are subjective and may vary from one company to the next, and often from one regulatory or governmental body to another. The SPE Applied Technology Workshop (28-29 September 2000) in Dallas, which looked specifically at the problem of cut-offs, concluded:
• Net-pay (h) is one of the most important parameters used in geological mapping
and reservoir calculations.
• Its applications include: volumetric estimates of in-place hydrocarbons, reservoir simulation, well test interpretation, fluid injection analysis, flow rate estimates, geological modelling, well completions, stimulation design, and equity determination.
• Unfortunately, the petrophysical, geological, and petroleum engineering literature
provide very limited insight and guidance into how net-pay should be defined
and computed and each application may involve different criteria.
• Net-pay means different things to different people.
• Current industry net-pay cut-offs are largely subjective, object oriented, and
based on local experience.
• There is no systematic industry guideline for selecting net-pay cut-off criteria.
• When discussing the concept of net-pay, a clear understanding of net-pay
definitions and the basis upon which it is calculated should be clarified (one
person’s definition of net-pay may not agree with that of another person).
• Net-pay determination is impacted by Darcy's law and must consider such items as fluid mobility, reservoir pressure (gradient), reservoir drive mechanism (primary, secondary, or tertiary), and wellbore skin (stimulation effects).

In selecting a cut-off, the following must be considered:

• The cut-off should give the volumes of rock, which effectively contributes to
production. The risk is that we may remove some rock volume, which may
undergo some depletion and therefore contribute to production over the life of
the field.


• The cut-off should give net-pay heights which are coherent with the flow intervals measured in actual well tests. Here again there is a similar risk that we may remove some rock volume which failed to produce at the wellbore but does undergo some depletion and therefore contributes to production.
• Note that in well tests, especially if these are short, the well may not have fully cleaned up by the time the test is completed (variable amounts of formation damage or skin factors), and as such some intervals which failed to flow during the test could well have done so under normal conditions. The well in Figure 88 is such an example, where some zones which on poro-perm properties were expected to flow failed to do so during testing, as shown by the PLT log.
• Invasion profiles on resistivity logs, RFT’s and mud cake will all indicate
permeability, and as such, the selected cut-offs should be coherent with these
permeability indicators.
• Beware of thin beds within a shale rich interval. These may be masked on the
wireline logs due to limited vertical resolution, but may in fact contain some
very good producing horizons.
• If producing thin beds are proved to exist in such a shale-rich setting (Figure 65), the net-pay can be approximated by:
Net Height = Gross Height × (1 − Vsh_average)
Phi_average in net-pay = Phi_average in gross height / (1 − Vsh_average in gross height)
• Avoid cut-offs on water saturation.
• Cut-offs are often a major uncertainty in reservoir modelling and sensitivities
are necessary to determine their impact on hydrocarbons in place and reserves.
• The choice of a cut-off will impact both the NTG of the reservoir and the average
porosity in the net-pay
• Remember that cut-offs are based on moveable hydrocarbons and are therefore
a function of permeability and fluid mobility. Cut-offs will therefore vary for
different fluids.
In thinly interbedded/laminated sequences, thin beds with sometimes very good permeability are not detected by wireline logs (high Vshale) and are excluded from net pay. In thin beds (shale/sandstone) the net height can be approximated by Net Height = Gross Height × (1 − Vshale_average), and the porosity in the clean sandstone as Phi = Phi_average / (1 − Vsh_average).

Figure 65 Cut-offs in thin beds within shaly sequence
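The thin-bed approximation above is easily applied in code. This sketch simply implements the two formulas, with an invented 20 m interval as the worked example:

```python
# Sketch of the thin-bed approximation: net height and clean-sand porosity
# from gross height, average Vshale and average porosity over the interval.
def thin_bed_net(gross_h, vsh_avg, phi_avg):
    """Returns (net_height, phi_in_net_pay) for a laminated interval."""
    net_h = gross_h * (1.0 - vsh_avg)          # Net = Gross * (1 - Vsh)
    phi_net = phi_avg / (1.0 - vsh_avg)        # Phi_net = Phi_avg / (1 - Vsh)
    return net_h, phi_net

# Hypothetical interval: 20 m gross, 40% average Vshale, 12% average porosity
net_h, phi_net = thin_bed_net(20.0, 0.40, 0.12)
# net_h = 12.0 m of net sand; phi_net ≈ 0.20 in the clean sandstone laminae
```

Note how the clean-sand porosity comes out higher than the interval average, since the measured average has been diluted by the shale laminae.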


Since net-pay is based on moveable hydrocarbons, cut-offs must by definition be based on a permeability threshold, which will vary according to the fluid mobility; this in turn depends on the fluid phase (gas or liquid) and its viscosity.

It is relatively easy to compute porosity directly from well logs, and since there is
a correlation between porosity and permeability, the permeability threshold can be
converted to a porosity cut-off using a porosity - permeability cross plot (Figure 66)
and applied readily to all wells. In the example on Figure 66, a porosity permeability
function is shown, together with different cut-offs. These may be for fluids with
varying viscosity or the different permeability thresholds could be part of a sensitivity
analysis on the same fluid (Min, ML, Max). There may on the other hand be some
uncertainty on the porosity - permeability function itself. So for a given permeability
cut-off, different porosity - permeability functions will yield different porosity cut-offs
(Figure 67). These functions may represent Min, ML and Max scenarios.
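The conversion of a permeability cut-off to a porosity cut-off can be sketched as follows. This is a hypothetical illustration (the synthetic plug data and the log-linear form log10(k) = a + b·phi are assumed for the example; real data would be fitted per lithofacies): fit the porosity-permeability function to core plugs, then invert it at the chosen permeability cut-off.

```python
# Hypothetical sketch: fit a log-linear porosity-permeability function to
# core plugs, then invert it to convert a permeability cut-off (md) into
# the equivalent porosity cut-off (fraction).
import math

def fit_phi_k(phi, k):
    """Least-squares fit of log10(k) = a + b*phi; returns (a, b)."""
    n = len(phi)
    y = [math.log10(v) for v in k]
    mx, my = sum(phi) / n, sum(y) / n
    b = sum((p - mx) * (v - my) for p, v in zip(phi, y)) / \
        sum((p - mx) ** 2 for p in phi)
    return my - b * mx, b

def phi_cutoff(a, b, k_cut):
    """Porosity at which the fitted trend crosses the permeability cut-off."""
    return (math.log10(k_cut) - a) / b

# Synthetic plugs lying exactly on log10(k) = -3 + 20*phi
phi = [0.05, 0.10, 0.15, 0.20, 0.25]
k = [10 ** (-3 + 20 * p) for p in phi]
a, b = fit_phi_k(phi, k)
# On this trend, a 1 md cut-off maps to a porosity cut-off of 0.15
```

Repeating the inversion with the Min/ML/Max permeability thresholds, or with alternative fitted functions, gives the range of porosity cut-offs used in the sensitivity analysis.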
The permeability cut-off is subject to hydrocarbon mobility (type and viscosity). Rules of thumb: 1.0 md for 30-35 API oil; 0.5 to 0.1 md for dry gas. Use the permeability cut-off to determine the equivalent porosity. The cross-plot shows permeability (md) versus helium porosity (%) by grain-size class for Triassic sandstones of the Sherwood Sandstone Group, Wytch Farm Field, Hampshire Basin, Southern England (Hogg and others, 1996); cut-off lines are drawn at 4 md, 1 md and 0.5 md, with corresponding porosity cut-offs marked at 5%, 9% and 11%.

Figure 66 Porosity - permeability cut-offs


One may wish to keep the permeability cut-off fixed, but select varying Phi-K relationships to determine a range of porosity cut-offs. Same data set as Figure 66 (Hogg and others, 1996); for a fixed 1 md cut-off, the different functions yield porosity cut-offs marked at 5% and 10%.

Figure 67 Porosity - permeability cut-offs

Remember that porosity - permeability functions will be different for different

lithofacies as they will coincide with different Petrophysical Groups. As such porosity
cut-offs may vary between lithofacies (Figure 68).


Different cut-offs may be selected for different lithofacies. Same data set (Hogg and others, 1996); here the porosity cut-offs marked on the plot are 3% and 9% for the different lithofacies.

Figure 68 Porosity - permeability cut-offs: Lithofacies.

Porosity is used as a cut-off because of its link to permeability; in other words, the porosity cut-off is simply a converted permeability cut-off (Figures 66 and 67). To determine which volume of the reservoir can be considered as shale, we use the Vshale property computed from logs (and calibrated to core data if possible). Cross-plots of computed Vshale versus core permeability are sometimes used to determine a Vshale cut-off. If this is not possible, Vshale is plotted against computed porosity to determine the maximum acceptable Vshale for reservoirs above the porosity cut-off.

On the basis of our established porosity and Vshale cut-offs, we can now compute a new property called NTG, where NTG equals 1 if above the cut-offs and zero if below.
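The NTG flag just described can be sketched per cell as follows (the cut-off values and the toy cells are invented for illustration; in practice they come from the cut-off analysis above):

```python
# Hypothetical sketch: per-cell NTG flag from porosity and Vshale cut-offs
# (NTG = 1 if the cell passes both cut-offs, 0 otherwise).
def ntg_flag(phi, vsh, phi_cut=0.09, vsh_cut=0.40):
    """1 if the cell passes both cut-offs, else 0."""
    return 1 if (phi >= phi_cut and vsh <= vsh_cut) else 0

cells = [(0.12, 0.20), (0.05, 0.10), (0.15, 0.55)]   # (phi, vsh) per cell
flags = [ntg_flag(p, v) for p, v in cells]
ntg = sum(flags) / len(flags)
# flags == [1, 0, 0]; the interval NTG is 1/3
```

Averaging the flags over an interval or zone gives its net-to-gross ratio.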


14.2 Formation Evaluation and QC

Before Petrophysical Modelling can start, the formation evaluation from which the Petrophysical Properties are derived must be QC'ed and validated for all wells. Figure 69 and the list below give a non-exhaustive checklist for the QC of formation evaluation:

• Core properly depth shifted onto logs

• Is log quality acceptable (not adversely affected by borehole conditions such as
washouts and well rugosity)
• Correct environmental corrections carried out
• Mineralogy and Vshale computation (coherent with core and cuttings descriptions)
• Erroneous mineralogy and Vshale will affect porosity calculations.
• Porosity computations (effective porosity).
• How good is the correlation between core and computed porosity? This impacts the permeability predictor. Computed porosity must be calibrated to core data, because permeability predictors are based on core plug porosity-permeability cross-plots, and either under- or overestimating porosities from logs (compared to the core data) will affect the computed permeabilities accordingly.
• Are parameters in porosity computations coherent with mineralogy?
• Are porosities corrected for shale?
• Water saturations (erroneous porosities will affect water saturations).
• Are water saturations and Sw/Sxo (moveable oil) coherent with the OWC from mapping and/or RFT?
• Are net-pay and the computed NTG from the selected cut-offs coherent with dynamic data (well tests and PLT)?

Annotations on the example in Figure 69 include:
• Is the core on depth? What shift should be applied?
• Does the top reservoir tie with the structural map?
• Look for moveable oil: remember that if Sxo = Sw there is no moveable oil, even if Sw is low. Is this consistent with the formation test?
• Do logs indicate invasion in the high permeability cored section?
• How well does core plug porosity correlate with computed porosity?
• Are DST results consistent with the formation evaluation? Consistent with closure on the structural map? Consistent with RFT pressures?
• Is GR a good estimator of Vshale? If not, why not, and what are the alternatives?
• Do not use bad-hole sections!

Figure 69 Formation Evaluation.
Figure 69 Formation Evaluation.


Cross-plot of computed porosity versus core porosity (0-35%) for two different porosity computational models (Model 1 and Model 2); both models underestimate porosity compared to the core data.

Figure 70 Computed porosity versus core porosity

14.3 Porosity and Permeability Law

As mentioned previously, one of the key objectives in geomodelling is the correct representation, or modelling, of porosity and permeability in space. This requires analysis of the core porosity and permeability in order to establish a function relating these two parameters. This will in turn allow a permeability predictor to be generated, which can be used for computing and populating permeability in the geomodel. Each core plug and its measurements are unique, yet we must pool samples together before calculating any statistics or building a reservoir model (Deutsch, 2002).

The permeability of a rock is a function of its pore geometry, i.e. the pore space or pore diameter, the interconnectivity of pores via pore throats, and the diameter of the pore throats themselves. This in turn depends on the mineralogy and texture (grain size, sphericity, sorting and packing) of the reservoir rock. Figure 71 shows three cross-plots of permeability versus porosity, pore diameter and pore-throat diameter for the same set of core plugs. Not surprisingly, all three plots show a clear correlation with permeability, with the best correlation and linearity between permeability and pore-throat diameter.
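Once a porosity-permeability function has been calibrated on core, the permeability predictor is simply that function applied to the log-derived porosities, cell by cell. A minimal hypothetical sketch (the coefficients and porosity values are invented; a log-linear form is assumed):

```python
# Hypothetical sketch of a permeability predictor: apply a core-calibrated
# log-linear phi-k function to log-derived porosities, cell by cell.
def make_k_predictor(a, b):
    """Given log10(k) = a + b*phi from core calibration, return k(phi)."""
    return lambda phi: 10.0 ** (a + b * phi)

k_of_phi = make_k_predictor(a=-3.0, b=20.0)   # illustrative coefficients
log_phi = [0.08, 0.14, 0.21]                  # porosities from wireline logs
k_cells = [k_of_phi(p) for p in log_phi]
# Permeability rises steeply with porosity on this log-linear trend
```

Because the relationship is exponential in porosity, small porosity errors translate into large permeability errors, which is why the porosity calibration against core (Section 14.2) matters so much.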


Porosity-permeability law from core plugs (Warren and Pulham, 2001): quartz arenites from three formations. Cross-plots of permeability (md, 1-10,000, log scale) against porosity (%), median pore diameter (µm) and median pore-throat diameter × porosity (µm; m = 2.0266, R² = 0.9599). There is a trend between permeability and porosity, but the correlation is better with pore diameter.

Figure 71 Porosity permeability law from coreplugs

However, pore-throat diameter is difficult to compute from core plugs and impossible to measure from wireline logs. Porosity, meanwhile, can be readily measured in core plugs and computed from wireline logs. Modelling porosity distribution in space is also far easier than modelling pore-throat diameter, and if a relationship between porosity and permeability can be established, permeability can be modelled as a function of porosity.

Before making a permeability predictor, it is important to have a good understanding

of porosity - permeability trends and the factors that affect such a trend.

A very useful catalogue of porosity and permeability cross-plots from core plugs in siliciclastic rocks exists on the U.S. Geological Survey website (Open-File Report of/2003/ofr-03-420/), authored by Philip H. Nelson and Joyce E. Kibler (USGS). The bulk of the porosity-permeability plots used in the ensuing figures were extracted from the above-referenced dataset.

Figure 72 is a generalised porosity permeability trend showing the positive correlation

between the two properties. Compaction and chemical alteration change the dimensions
of pores and porethroats effectively reducing permeability.

However, as shown on Figures 72 and 73, there are many factors, that cause movement
along the porosity - permeability trend or away from the “normal” compaction curve.
These factors are:
• Compaction
• Grainsize
• Sorting
• Composition
• Cementation and types of Cement
• Dissolution
• Diagenesis
• Clay content
• Deformation and Fractures


Schematic cross-plot of permeability (md, log scale) versus porosity (%, 0-40) showing the general relationship between Phi and Kh, with trends for grain size (coarse to fine), sorting, compaction, and quartz versus lithic/feldspar and clay content. All of the factors listed above cause divergence of the Phi-K trend away from the "normal" compaction curve.

Figure 72 General porosity - permeability compaction trend.
Figure 72 General porosity - permeability compaction trend.


Generalised porosity versus permeability trend (Selley, 1988): the relationship between porosity and permeability for the different types of pore systems. Note that fracturing will enhance permeability dramatically for any type of reservoir.

Figure 73 Generalised porosity vs permeability trend

It is also important to remember that when considering porosity-permeability
cross-plots, we are dealing with interconnected porosity or effective porosity.
Neutron and density logs read total porosity, while sonic porosity reads effective
porosity, and there may be significant differences between the two.

Because we use mercury injection or helium gas to measure porosity and permeability
in core plugs, we assume that it is effective porosity that is measured. Core plug
measurements are our calibration points for establishing a porosity-permeability
function. However, these are normally measured at surface conditions, after drying
etc., which is not the same as at reservoir conditions. Corrections to permeability
can be made (Klinkenberg), or it can be calibrated to reservoir conditions from
permeability measurements made under confining pressure. It is therefore important
to consider whether a compaction correction needs to be applied to the core poroperm data.
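The Klinkenberg correction mentioned above divides out the gas-slippage inflation of a gas-measured permeability. A minimal sketch, with illustrative (not measured) values for the slippage factor and mean pore pressure:

```python
def klinkenberg_corrected(k_gas_md, b_psi, p_mean_psi):
    """Equivalent-liquid permeability from a gas measurement.

    Gas slippage inflates the measured value:
        k_gas = k_liquid * (1 + b / P_mean)
    so the correction divides the slippage term back out.
    b (the slippage factor) and P_mean are sample/test specific.
    """
    return k_gas_md / (1.0 + b_psi / p_mean_psi)

# Illustrative numbers only: 1.2 md measured with gas at a mean pore
# pressure of 30 psi, with a slippage factor b = 6 psi.
print(round(klinkenberg_corrected(1.2, 6.0, 30.0), 2))  # -> 1.0
```

The correction matters most at low pressures and in tight rock, where slippage is largest.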

Figure 74 shows changing permeability measurements with increasing confining
pressures. Note that in this example the greatest apparent changes come from the
lower porosity and permeability plugs. This may be a simple artefact, as permeability
measurements below 0.1 md become less reliable, with low repeatability.


[Figure residue: permeability (md) vs porosity (%) — Data Set 33; Keighin and others,
1989; Almond Formation, Greater Green River Basin. Effect of confining pressure on
17 samples, measured at 250 psi, 2250-9000 psi and 4750-5450 psi.]

• Lab results indicate that increasing confining pressure affects porosity more than permeability in rocks with Kh > 1 md
• Beware of using trends established in very low permeability rocks

Figure 74 Porosity - permeability: confining pressure

Because the porosity-permeability function established from core plugs will have to
be extended to non-cored sections, where porosity has been computed from wireline
logs, it is important that the computed porosity is well calibrated to the core data
(properly corrected for Vshale, for example). Alternatively, the uncertainty or error
in the computed porosity must be quantified using core porosity versus computed
porosity cross-plots (Figure 70), since over- or under-estimated porosities will
impact the predicted permeability accordingly.

We will now review some of the factors that affect porosity and permeability. The
effects from grain size and texture have already been covered in detail elsewhere
in this course (Reservoir Concepts and Reservoir Sedimentology Modules) and the
student should refer to those modules for more information.


Porosity – Permeability: Composition & Mineralogy

[Figure residue: permeability (md) vs porosity (%) by sandstone classification
(Nelson, 2004), including feldspathic litharenite and poorly consolidated sandstone.
Annotations: high primary quartz content = good Kh even at relatively low Phi; mixed
clastics = good Kh only at high Phi, i.e. quartz arenites typically have better Kh
than subarkoses or sublitharenites at the same porosity.]

Figure 75 Porosity - permeability trend: composition and mineralogy

Figure 75 shows a porosity-permeability cross-plot for sandstones of differing
composition (quartzarenites to arkoses) as well as poorly consolidated sandstones.
The clean sandstones (quartzarenites) display the best permeabilities, even at
relatively low porosities. As the sandstones become progressively more arkosic or
lithic, permeability becomes comparatively poorer, with the better permeabilities
restricted to significantly higher porosities compared to the quartzarenites. The
unconsolidated sediments meanwhile show significantly higher porosities than the
cemented sandstones, but permeability trends not unlike those of the cemented rocks.
This again highlights that porethroat size, rather than porosity, is the key
controlling factor for permeability.

Figure 76 shows a porosity-permeability cross-plot with three Petrophysical Groups
correlating with different depositional environments. The three depositional
environments have overlapping porosity ranges but significantly different permeability
ranges, which will give each Petrophysical Group a different porosity-permeability
function.

[Figure residue: permeability (md) vs porosity (%) — Atkinson and others, 1990;
Permian-Triassic sandstones, Ivishak Formation, Sadlerochit Group, Prudhoe Bay Field,
Alaska. Fluvial settings: mid-braided stream (conglomerate), mid-distal braided
stream, distal fluvial and floodplain.]
• Depositional settings have differing energy environments
• This impacts the texture of the rock (grain size, sorting etc.)
• Therefore it impacts the Phi-log(K) relationship
• Same porosity range but varying permeability range

Figure 76 Porosity - permeability trend: depositional environment

Figure 77, on the other hand, shows how a single porosity-permeability function,
albeit non-linear, may be used for three different Petrophysical Groups correlating
with different depositional environments.


Porosity – Permeability: Depositional Environment

[Figure residue: permeability (md) vs porosity (%) — Data Set 35; Kerr and others,
1989; 'Bartlesville sandstone', Glenn Pool Field, Northeastern Oklahoma Platform.
DGI = 'discrete genetic intervals', equivalent to parasequences.]

DGI     Facies                    Grain size
B, C    Meandering channel fill   fine to medium
D, E    Splay                     fine to medium
F       Braided channel           medium to coarse

• In this example, lithofacies in each depositional environment have the same Phi-Kh relationship but fall in different Petrophysical Groups
• Some overlap in porosity range but varying permeability range

Figure 77 Porosity - permeability trend: depositional environment

Diagenesis also has a major impact on porosity-permeability trends. Although
diagenesis generally reduces porosity (enhancing compaction and pore plugging by
mineral precipitation), in some cases it may enhance porosity (secondary porosity)
and permeability through leaching.

Figure 78 is a schematic flowchart of the diagenetic pathways for clastics and
carbonates. The degree of diagenesis is a function of the pressure-temperature regime,
with dissolution taking place when temperature and pressure increase, and mineral
precipitation and loss of poroperm quality occurring when the reverse takes place.
However, the aquifer (mineral solution concentration and strength) also impacts the
scale of diagenesis, as will the timing of hydrocarbon emplacement.


Diagenesis – Clastics and Carbonates





• Diagenesis usually reduces Phi and K, but leaching may enhance them.
• Aquifer flow is needed. Temperature-pressure changes during burial or deformation may increase dissolution, or the precipitation of cements or formation of clays.
• Hydrocarbon emplacement reduces diagenesis

(Selley, 1998)

Figure 78 Diagenesis - clastics and carbonates


Figure 79 shows a single reservoir lithofacies split into three Petrophysical Groups
on the basis of different types of cement.

Diagenetic influence may in certain cases be the best discriminator for Petrophysical
Groups.

[Figure residue: permeability (md) vs porosity (%) — Data Set 36; Langford and others,
1994; Oligocene delta-front sandstones, Vicksburg Formation, McAllen Ranch Field,
Texas Gulf Coast. Facies shown: chlorite-cemented, calcite-cemented, and quartz
overgrowth-cemented.]

Figure 79 Porosity - permeability: Petrophysical Groups defined by cement type

Dolomitization is another form of diagenetic alteration. Dolomitization occurs when
limestones come into contact with brines rich in magnesium ions, which substitute
for the calcium ions in the limestone through the following chemical reaction:

2 CaCO3 + Mg++ → CaMg(CO3)2 + Ca++

Dolomite being denser than limestone, dolomitization effectively reduces the limestone
matrix volume and therefore increases porosity. In theory, dolomitization may reduce
the limestone matrix volume by as much as 12.5%.
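The quoted maximum can be checked with a back-of-envelope molar-volume calculation. A sketch using approximate handbook molar volumes for calcite and dolomite (the volume values are assumptions, not from this text):

```python
# Back-of-envelope check of the matrix shrinkage from dolomitization:
# 2 CaCO3 + Mg++ -> CaMg(CO3)2 + Ca++, i.e. two moles of calcite are
# replaced by one mole of dolomite.
# Molar volumes are approximate handbook values (cm^3/mol), not from the text:
V_CALCITE = 36.93   # CaCO3
V_DOLOMITE = 64.34  # CaMg(CO3)2

shrinkage = 1.0 - V_DOLOMITE / (2.0 * V_CALCITE)
print(f"matrix volume reduction ~ {shrinkage:.1%}")  # ~12.9%, close to the quoted 12.5%
```

The exact percentage depends on the molar volumes assumed, which is why quoted figures vary slightly around 12-13%.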

14.4 Net to Gross and Porosity Modelling

Having computed Net to Gross ratios (NTG) and porosity in the wells, and upscaled
these by averaging in Log Blocking, these reservoir properties can now be modelled in
the interwell space. In the case of porosity, it is simple to generate porosity histograms


either directly from the porosity curve computed at the well or from Blocked data,
for each reservoir lithofacies (Petrophysical Group) and per reservoir zone. In doing
this, always check whether porosity distributions vary above and below the OWC.
This is quite common, as diagenesis and porosity reduction commonly occur in the
water leg, while the presence of hydrocarbons above the OWC reduces diagenesis
and preserves better poroperm properties. If there is a difference, porosities will
have to be modelled separately above and below the OWC.

All cells with non-reservoir lithofacies (shales for example) should by default be
attributed zero porosity, zero NTG and zero permeability. In cells with reservoir
facies, the average porosity must be computed using the net reservoir (above porosity
cut-off) of that facies only. The porosity distribution used to populate the reservoir
must be based solely on the intervals above the Vshale and porosity cut-offs. Each
lithofacies (Petrophysical Group) must have its own porosity distribution.
Modelling NTG is quite different to modelling porosity. Firstly, the input curve from
the log is discrete (1 or 0), unlike porosity, which is a continuous variable. There
are two main ways to model NTG:

1. We explicitly model the reservoir and non-reservoir facies in proportions which
honour the overall NTG of the reservoir. Imagine we have a reservoir with only two
lithotypes, Shale (non-reservoir) and Sand (reservoir), in a ratio of 40% Shale to
60% Sand. If we explicitly model the shales and sands such that, by volume, 40% of
the model contains cells with shale (non-reservoir, therefore zero NTG) and 60%
cells with sand (reservoir, NTG = 1), we have effectively modelled NTG explicitly
in our reservoir. If, however, the sand has a further porosity cut-off, which
effectively removes a fraction of the reservoir below this porosity, an NTG will need
to be modelled in the reservoir cells to account for this reduction of reservoir
volume. Otherwise the net reservoir volume will be overestimated in the model.

2. The case where we cannot explicitly model the non-reservoirs. Imagine a reservoir
with the same two lithotypes as above, Shale (non-reservoir) and Sand (reservoir),
in a 40%/60% distribution. In this scenario, however, we cannot explicitly model the
shales (partially or in their entirety) because they occur predominantly as thin
interbeds within the sand, well below any cell resolution. For the fraction of shale
we cannot explicitly model, we have to include it within the sand, where we reduce
NTG to account for the fraction of shale occurring as interbeds. In other words, the
NTG in the sand cells must account for both the porosity cut-off and the shale
volume. As such, the model will display a larger number of cells with sand but,
as a whole, the reservoir will still have an overall NTG of no more than 60%.
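The arithmetic of case 2 can be sketched as follows; the split between explicitly modelled shale and unresolvable interbeds is hypothetical:

```python
def sand_cell_ntg(shale_interbed_frac, below_cutoff_frac):
    """NTG assigned to cells flagged as sand when thin shale interbeds and
    a porosity cut-off cannot be modelled explicitly (case 2)."""
    return (1.0 - shale_interbed_frac) * (1.0 - below_cutoff_frac)

# Hypothetical split: the reservoir is 60% sand / 40% shale overall, but
# only half the shale (20% of gross) is thick enough to model explicitly;
# the other 20% sits as unresolvable interbeds inside 'sand' cells.
gross_sand_cells = 1.0 - 0.20              # 80% of cells are flagged sand
interbed_frac = 0.20 / gross_sand_cells    # 0.25 of each sand cell is shale
ntg = sand_cell_ntg(interbed_frac, 0.0)    # no porosity cut-off applied here

print(f"sand-cell NTG = {ntg:.2f}, overall NTG = {gross_sand_cells * ntg:.2f}")
```

The model shows more sand cells than case 1 would, but the reduced in-cell NTG keeps the overall net fraction at 60%.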

As for porosity, NTG histograms can be generated from the NTG Blocked data
for each reservoir lithofacies (Petrophysical Group) and per reservoir zone. NTG
modelled between wells is typically controlled by facies distribution (using the same
variograms as for facies modelling), although this can be overprinted by diagenetic
constraints independent of facies that may have affected NTG (depth of burial,
uplift, tilting etc.). So NTG (and other properties, for that matter) may be affected
by geological trends other than facies, which we must incorporate in our modelling to
accurately distribute the properties in the interwell volume. This is commonly
referred to as "Property Conditioning".


The conditioning of trends is illustrated by an example in Figure 80. Several wells
have been drilled in a growth fault setting, with the reservoir section thickening
into the growth faults. It is quite common for such thickening to affect facies
distribution, NTG and porosity trends and distributions. In the example in Figure 80,
the reservoir layers were grouped into Genetic Units and their NTG plotted against
gross thickness (Figure 81). In this case a clear correlation is evident between NTG
and gross reservoir thickness. As such, NTG modelling could be conditioned to isopach maps.
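A simple way to establish such a trend is an ordinary least-squares fit of blocked NTG against gross thickness, which can then be evaluated on an isopach map. The well data below are illustrative, not the Figure 81 values:

```python
# Least-squares trend of blocked NTG against gross thickness.
# Well data below are illustrative, not the Figure 81 values.
thk = [15.0, 22.0, 30.0, 41.0, 55.0, 70.0]   # gross thickness (m)
ntg = [0.12, 0.20, 0.28, 0.38, 0.50, 0.62]   # blocked NTG

n = len(thk)
mx, my = sum(thk) / n, sum(ntg) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(thk, ntg))
         / sum((x - mx) ** 2 for x in thk))
intercept = my - slope * mx

def ntg_from_isopach(gross_thk_m):
    """Evaluate the fitted trend (clamped to [0, 1]) on an isopach value."""
    return max(0.0, min(1.0, intercept + slope * gross_thk_m))

print(round(ntg_from_isopach(40.0), 2))  # -> 0.36
```

In a geomodelling package the same idea is usually applied by supplying the trend as a secondary 2D property rather than as an explicit function.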

[Figure residue: schematic cross-section of genetic units in a growth fault setting,
asking whether there are any trends or changes in facies, NTG or porosity with
thickening and depth.]

Figure 80 Genetic units, thickness and property variations

[Figure residue: cross-plot of gross E-facies sandstone/gross thickness (NTG,
~0.1-0.6) against gross thickness (0-80 m), showing a positive correlation.]

Verify whether NTG is associated with a geological trend. Here there is a correlation
between NTG and gross thickness, so NTG can be conditioned to gross thickness.

Figure 81 Conditioning of trends


It is important when building a model to search for and establish geologically
coherent correlations and trends for reservoir properties, to ensure an accurate
representation of these properties in the model.

Another example of trend conditioning is the case where porosity and/or NTG are
known to be affected by diagenesis. In that case, assuming a trend map has been
established (degree of dolomitization, for example), the porosity and NTG should be
conditioned to the dolomitization trend map of the reservoir.

Many geologically coherent trends can often be established by careful analysis.
For example, Figure 82 is a schematic cross-section through a typical Cretaceous
carbonate play, offshore West Africa. Rafting on top of the basal salt has created
large-scale growth faulting as shown. In the deepest areas of the downthrown side of
the growth faults, greater water depths and lower energy are likely to result in
higher micrite content compared to shallower areas on the upthrown side. Rapid
subsidence and greater compaction mean poorer porosity development on the downthrown
side of the growth faults.
Offshore West Africa – Cretaceous Carbonate Reservoir

[Figure residue: schematic section of growth faulting over basal salt. Labels:
brackish water from salt driving dolomitization; compaction; diagenesis and
dolomitization; fractures; subaerial exposure; water circulation; resulting
φ/NTG/Kh trends. Theme: growth faulting, diagenesis, lithofacies and petrophysical
trends.]

Figure 82 Offshore West Africa - Cretaceous carbonate reservoir

On the upthrown side of the growth faults, however, shallower water conditions and
higher energy mean better porosity development. In addition, better groundwater
circulation, together with the abundant salt in the fault plane, means the groundwater
is likely to be contaminated by salt to create brines, resulting in extensive
dolomitization and improved porosity. As such, porosity, NTG and permeability are
likely to have trends as illustrated in Figure 82.

14.5 Permeability Predictor and Horizontal Permeability Modelling

A permeability predictor is typically a porosity-permeability log function, established
from a correlation between core porosity and permeability data as shown in the
equation below.


Log K = a·Φeff + b

However the relationship between porosity and permeability can be more complex,
and sometimes better described by a polynomial or an exponential function.
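Fitting such a log-linear predictor is a small least-squares problem in log10(K). A sketch with made-up calibration points (the core values below are illustrative):

```python
import math

# Core calibration points (illustrative): (porosity fraction, permeability in md)
core = [(0.08, 0.5), (0.12, 3.0), (0.16, 20.0), (0.20, 120.0), (0.24, 800.0)]

# Least-squares fit of log10(K) = a*phi + b.
xs = [phi for phi, _ in core]
ys = [math.log10(k) for _, k in core]
n = len(core)
mx, my = sum(xs) / n, sum(ys) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx

def predict_k(phi):
    """Permeability predictor applied to a log-derived porosity (fraction)."""
    return 10.0 ** (a * phi + b)

print(f"K(16% porosity) ~ {predict_k(0.16):.1f} md")
```

A polynomial or piecewise fit (as in the Fontainebleau example below 9% and above 9% porosity) follows the same pattern, just with different basis terms or data splits.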

Figure 83 shows two such functions for the Fontainebleau Sandstone: one correlation
for sandstones below 9% porosity and a second for porosities above 9%.
Permeability Predictor

[Figure residue: permeability (md) vs porosity (%) for the Fontainebleau Sandstone —
Data Set 7; Bourbie and Zinszner, 1985; Oligocene (eolian and marine beach)
Fontainebleau Sand, Paris Basin.]

Figure 83 Permeability predictor

The Fontainebleau Sandstone is quite exceptional in that there is almost no scatter
of permeability about the porosity trend, and permeabilities predicted from the two
functions should be very close to the true permeabilities. The porosity-permeability
cross-plot in Figure 84 is a more typical example: although there is a very good
correlation between porosity and permeability, with a correlation coefficient close
to 1, there is significant permeability scatter around the porosity trend. The
function in that example would predict a permeability of 2 md at 10% porosity, when
the core data indicate that permeabilities could in fact be as low as 0.25 md and
as high as 30 md around that porosity value. In other words, for any given porosity


value, there is a distribution of permeabilities that can vary by one or more orders
of magnitude, as illustrated in Figure 84. Co-simulation of porosity and permeability
may (in this example) be a better option, in that it would allow the simulation, for
a given porosity, to pick a permeability value from a distribution. Co-simulation is
discussed in Section 16.4.
Permeability Predictor

[Figure residue: permeability (md) vs porosity (%) — Bowker and Jackson, 1989;
Weber Sandstone, Rangely Field, Colorado. Inverse slope = 3.3%; intercept porosity
at 1 md = 9%. Facies: cross-laminated (eolian), massive (bioturbated, eolian),
arkosic (fluvial).]

Although there is a very good correlation coefficient between Phi and Kh, the
predictor would compute a permeability of 2 md at 10% porosity when it could be
between 0.25 md and 30 md due to scatter. It is better to co-simulate Phie and Kh.

Figure 84 Permeability predictor

In another example, illustrated in Figure 85, no function can be established between
porosity and permeability. In that case, we can assume that porosity and permeability
are independent variables and model them independently of each other.


No trend in this high-Phi, high-Kh reservoir: use independent distributions.

[Figure residue: permeability (md) vs porosity (%) — Data Set 53; Reedy and Pepper,
1996; Pleistocene and Pliocene, Green Canyon 205 Unit, Gulf of Mexico, offshore
Louisiana.]

Figure 85 Porosity - permeability prediction

Another technique used for predicting and modelling permeability is Hydraulic (flow)
Units (HU). An HU is defined as the representative elementary volume (REV) of a
reservoir with petrophysical properties and fluid flow characteristics that are
predictably different from other parts of the reservoir. For a detailed explanation
of HU the student is referred to Amaefule et al. (1993). In summary, the technique
is based on the principle that permeability, and therefore hydraulic properties, are
a function of porethroat geometry. The HU technique devised by Amaefule and his
colleagues is based on a modified Kozeny-Carman equation which models a reservoir
as a bundle of capillary tubes, with porosity, permeability and capillary pressure
a function of the mean hydraulic radius (rmh) of the tubes. The first step in HU
modelling is to determine the Flow Zone Indicator (FZI) from the core plug
porosity-permeability data, using the following equation:


FZI = RQI / Φz        (1)

where:
RQI = 0.0314 √(K/φ) = Reservoir Quality Index (K in md, φ a fraction)
Φz = φ / (1 − φ) = Normalized Porosity Index

Samples with the same FZI will lie on a straight line with unit slope on a log-log
plot of RQI versus Φz and have similar porethroat attributes, equating to a hydraulic
unit. Samples with different FZI will lie on separate parallel lines on the RQI
versus Φz log-log plot.

By re-arranging the FZI equation, lines of constant FZI or HU can be determined and
plotted on a porosity-permeability plot:

K = φ × [ FZI × φ/(1 − φ) / 0.0314 ]²        (2)

Figure 86 shows a porosity-permeability cross-plot for a reservoir with no apparent
porosity-permeability correlation for the different facies. There is, however, a good
correlation on the basis of Hydraulic Units.
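Equations (1) and (2) can be coded directly; the plug values below are illustrative:

```python
import math

def fzi(k_md, phi):
    """Flow Zone Indicator from equation (1):
    RQI = 0.0314 * sqrt(k/phi), Phi_z = phi/(1-phi), FZI = RQI/Phi_z
    (k in md, phi as a fraction)."""
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

def k_from_fzi(fzi_val, phi):
    """Equation (2) rearranged: k = phi * (FZI * phi/(1-phi) / 0.0314)**2."""
    return phi * (fzi_val * phi / (1.0 - phi) / 0.0314) ** 2

# Plugs on the same constant-FZI line belong to the same hydraulic unit
# (illustrative values):
f = fzi(100.0, 0.20)
print(f"FZI = {f:.2f}; round-trip k = {k_from_fzi(f, 0.20):.1f} md")
```

Plugs are then grouped into HUs by clustering their FZI values, and equation (2) maps each HU back to a porosity-permeability curve.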

[Figure residue: three porosity-permeability cross-plots (permeability 0.001-10,000
md vs effective porosity 0-35%): raw core data, core data discriminated by facies,
and core data discriminated by HU (Hunter, Williams & Straub, 2002).]

Figure 86 Permeability prediction using flow units


As for the conversion of lithofacies into electrofacies and ultimately Petrophysical
Groups using Log Typing (Section 10), Log Typing of the HU is also necessary for
extrapolating them to uncored sections and to all non-cored wells in the rest of the
field.

Having established permeability predictors, these can then be applied to all the wells
in the field. Results should always be carefully QC’ed and validated before using the
computed permeabilities in the geomodel. To start, it is important that the computed
porosities used as input in the permeability predictor are properly calibrated to the
core porosities. Failing that, the permeabilities derived from the permeability predictor
will be erroneous. QC can be carried out by simple graphic overlays of the core and
computed poroperms to give a quick qualitative comparison between the core and
computed poroperms. A cross-plot of core versus computed porosities (Section 14.2,
Figure 70) will give a quantitative comparison, which can be used as a correction
factor to ensure that the computed permeabilities tie with the core data.

Another simple QC technique is illustrated in Figure 87. The ranges of core
permeabilities for different lithofacies (Petrophysical Groups) are plotted against
the range of modelled permeabilities for each modelled electrofacies. In this example,
we can see an excellent correlation between the permeability ranges from the core and
those computed by the permeability predictor. This technique needs, however, to be
complemented by a comparison of the permeability histograms of each lithofacies and
its equivalent electrofacies, to ensure that not only the same range but also the
same permeability distribution has been computed compared to the cored lithofacies.

[Figure residue: box plots of core plug permeability (K plugs) against lithofacies
and of modelled permeability (K modelled) against electrofacies (classes 0-12,
permeability 0.01-1000 md). Annotation: plotting core K and computed log K against
lithofacies and electrofacies shows the same range of permeabilities between core
and modelled values for each litho/electrofacies.]

Figure 87 K-Predictor: comparison K_plugs vs. K_modelled

However, the key validity check for the modelled permeabilities comes from well test
data. Figure 88 is a well log display showing a single test interval with a number of
perforation intervals and PLT results. The PLT results are shown as the percentage
contribution of each perforated interval to the total flow. Because a well test gives
the permeability-height product, or KH, estimating the average permeability correctly
from a well test depends on an accurate estimate of the contributing height.
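The sensitivity of the test-derived average permeability to the assumed height is simple to demonstrate with the KH product from the Figure 88 example:

```python
# KH-product arithmetic: a DST yields K.H, so the average K scales
# inversely with whichever height is assumed to be contributing.
kh = 1020.0  # mD.m, the KH product from the DST in the Figure 88 example

for h_m in (37.0, 57.0):  # PLT producing height vs cut-off-derived height
    print(f"H = {h_m:.0f} m  ->  K = {kh / h_m:.1f} mD")
```

With the PLT height (37 m) the same KH gives an average near 27.6 mD; assuming the cut-off height (57 m) instead drops it to about 17.9 mD, showing how an error in H propagates directly into K.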

Permeability – Calibration against Well Test

[Figure residue: well log display of a formation test interval showing predominant
facies and the percentage flow contribution of each perforated interval from the PLT
(0-58%). KH DST: 1020 mD.m; KH estimator: 1280 mD.m; H* DST (producing interval from
PLT): 37 m; H from cut-offs: 57 m; K DST: 27.4 mD; K estimator: 22.4 mD. A well test
gives K.H, so an error in H can give a significant difference in K between the test
and the estimator; in this example H from cut-offs was estimated at 57 m.]

Figure 88 Permeability - calibration against well test

In the example in Figure 88, although the net height seen by the PLT is different
from that estimated by the cut-off, the KH and average permeability based on the
permeability predictor and the well test closely match, validating the permeability
predictor.

Remember too that in a well test, reservoir conditions are seldom if ever single
phase. Assuming we are dealing with a virgin reservoir, water saturations will
typically be at irreducible water saturation (Swi), which means that in an oil
reservoir, for example, the measured permeability will be Kabsolute × Kro (the
relative permeability to oil in the presence of irreducible water). Since Kro is
usually less than 1, the permeability measured in a well test will therefore be
lower than Kabsolute.

In the absence of core plug poroperm measurements, there are several empirical
equations which can be used to compute permeability. Students are referred to
Nelson P.H. (1994) for a more detailed review of empirical permeability equations.
The main equations are listed in Table 2:


Class                 Author               Equation
Grain-based models    Berg                 k = 80.8 e^(-1.385p) D^2 φ^5.1
                      Van Baaren           k = 10.0 C^(-3.64) Dd^2 φ^(3.64+m)
Surface area models   Timur                k = 0.136 Swi^(-2) φ^4.4
                      Sen et al.           k = 0.794 T1^2.15 φ^(m+2.15)
                      Coates (NMR)         k = c (Vffi/Vbvi)^2 φ^4
Pore size models      Kozeny-Carman        k = 400 Rh^2 φ^m
                      Winland              k = 49.4 R35^2 φ^1.47
                      Katz and Thompson    k = 17.9 Rc^2 φ^m
                      Warren and Pulham    k = 141.2 Rm^2 φ^2

Table 2 Empirical functions to compute permeability

These equations are based on three main models:

• Grain-based Models
Based on grain size and sorting as measured from sieve analysis. Assumes these
parameters control pore size and distribution.
• Surface Area Models
Assume that permeability is impaired by large surface areas, e.g. clay minerals
growing on pore-wall surfaces. Magnetic resonance interpretation of permeability is
based on this model.
• Pore Size Models
Based on pore size measurements from capillary pressure curves in the lab. The
threshold (percolation) pressure and shape of the curve are related to permeability.
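As an example, the Timur relation from Table 2 can be coded directly. The unit convention here is an assumption: this correlation is usually quoted with porosity and Swi in percent and k in md:

```python
def timur_k(phi_pct, swi_pct):
    """Timur empirical permeability from Table 2:
    k = 0.136 * phi^4.4 / Swi^2, with porosity and irreducible water
    saturation entered in percent (the usual convention) and k in md."""
    return 0.136 * phi_pct ** 4.4 / swi_pct ** 2

# Illustrative inputs: 20% porosity at 25% irreducible water saturation.
k = timur_k(20.0, 25.0)
print(f"{k:.0f} md")
```

As with all the Table 2 relations, the coefficients are calibration constants, so any such equation should be checked against local core data before use.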

The modelling of permeability in the interwell space first entails Log Blocking of
the well data (Section 11.0). In the case of horizontal permeability (Kh), the
arithmetic average can be used to compute Kh in the blocked cell (Figure 89). This
same example also highlights the importance of the reservoir layering and the major
impact it can have on permeability modelling. We see, for example, that in the delta
front layer there is a very high permeability streak (drain) at the top of the
interval (Figure 89), around four orders of magnitude greater than in the rest of the
interval. The average permeability calculated in this layer is representative of
neither the drain nor the lower permeability reservoir below it, and will therefore
fail to represent the real flow behaviour in that interval of the reservoir. A
refined zonation, with the drain interval zoned separately, would solve this problem.
Likewise, in the estuarine bar layer of the same example, the very low permeabilities
(less than 0.01 md) are likely to be below the cut-off and should not be included in
computing the average Kh of the cell. Note, however, that this very low permeability
interval will greatly impact the vertical permeability and must be included when
computing vertical permeability (Kv).
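The contrast between arithmetic (layer-parallel) and harmonic (layer-series) averaging explains why a drain dominates Kh but not Kv. A sketch with illustrative sample values:

```python
# Upscaling a layer containing a thin high-permeability drain
# (illustrative values): nine 1 m samples at 1 md plus one at 10,000 md.
samples_md = [1.0] * 9 + [10_000.0]

# Arithmetic average: flow parallel to layering (horizontal, Kh).
arith = sum(samples_md) / len(samples_md)
# Harmonic average: flow in series across layering (vertical, Kv).
harm = len(samples_md) / sum(1.0 / k for k in samples_md)

print(f"arithmetic (Kh) = {arith:.1f} md")  # ~1000.9: dominated by the drain
print(f"harmonic (Kv)   = {harm:.2f} md")   # ~1.11: dominated by the tight rock
```

Neither single number represents the layer's flow behaviour well, which is the argument for zoning the drain separately.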

Modelled Permeability for Varying Facies

[Figure residue: depth plot of ln(permeability), with mean and standard deviation,
through an upward-coarsening deltaic sequence (prodelta with HCS, distal delta lobe,
delta front, estuarine bar) (Ringrose, 2003). Annotations: a very high permeability
streak (drain) gives a computed average representative of neither the high nor the
intermediate Kh; the low Kh values in the estuarine bar are below the cut-off
(non-reservoir) and should not be included when computing Kh — compute Kh in net
reservoir only — although this low-Kh interval will have a big impact on the Kv/Kh
of the reservoir.]

Figure 89 Modelled - permeability for varying facies

As for other reservoir properties, permeability histograms for input to the interwell
modelling can be generated either directly from the permeability curves computed at
the well by the permeability predictor or from the upscaled permeability (Blocked
data). These distributions are generated for each reservoir lithofacies
(Petrophysical Groups) and per reservoir zone. Non-reservoir facies such as shale
will be given zero Kh. Since there is a dependence between porosity and permeability,
horizontal permeability is typically modelled with the same variograms used for
populating porosity, and by collocated cokriging with porosity (see Section 16.4 on
collocated cokriging).

14.6 Vertical Permeability Modelling

Vertical permeability is an equally important parameter for assessing the flow
behaviour of a reservoir, and in most cases it differs significantly from horizontal
permeability. There are several methods for computing vertical permeability at the
wells and modelling it, as illustrated in Figure 90.


Kv Predictor: Kv/Kh — Comparison of K Plugs vs K Modelled

[Figure residue: core Kv vs core Kh cross-plot with a Kv/Kh = 0.04 trend (well
NKSM1); Kv (mD) vs Kh (mD) cross-plot, 0.01-10,000 md; histogram of Kv/Kh (0-1) for
lithofacies 1-3.]

Method (1): For each Petrophysical Group (lithofacies), establish Kv/Kh from core
measurements (in this example Kv/Kh = 0.6) and compute a Kv curve from the computed Kh.

Method (2): Compute the geometric or harmonic mean of Kh in each cell at the wells to
estimate Kv, cross-plot it against the arithmetic mean of Kh for each Petrophysical
Group, and establish a Kv/Kh ratio for each Petrophysical Group.

Method (3): Draw a histogram of Kv/Kh using log-blocked values of the arithmetic mean
Kh and the harmonic mean Kh (estimate of Kv), for each Petrophysical Group (lithofacies).
Figure 90 Vertical permeability prediction

• Method 1
If you have representative and statistically sufficient vertical permeability
measurements (quite rare in reality) from core plugs, a Kv/Kh ratio can be
established from a cross-plot with the horizontal permeability measurements
(Figure 90). If there is sufficient core plug data, this Kv/Kh ratio can be established
for different Lithofacies or Petrophysical Groups. Multiplication of the Kh


curve (computed from the permeability predictor) by the derived Kv/Kh ratio will
allow a Kv curve to be computed at the wells. Note that in core analysis, sampling
is usually biased towards shale-free intervals, avoiding non-reservoir intervals or
intervals with extremely low permeabilities, which typically have the main impact
on Kv. As such, Kv measured on cores is often optimistic, and Kv/Kh ratios
established from cross-plots between core Kv and Kh must be seen as a best-case
scenario.

Using this established Kv/Kh ratio (for each Lithofacies), we can derive a Kv
curve in the wells and upscale these in Log Blocking. Kv can now be modelled
between wells for each lithofacies using the same variograms used for populating
Kh together with collocated kriging to Kh (See section 16.4). Cells with non-
reservoir facies will be attributed a Kv of zero.

• Method 2
Using Log Blocking (Section 11.0), a Kv average is computed from the harmonic
average of Kh curve generated by the Permeability Predictor. The blocked Kv’s
are then cross-plotted against the blocked Kh to give a “blocked” Kv/Kh ratio for
each of the Lithofacies in the model. This ratio is then applied to the modelled
Kh values over the entire geomodel. Once again, as for Method 1, the Kv/Kh
ratio must be considered a best-case scenario.

• Method 3
As for Method 2, using Log Blocking, compute facies biased Kh and Kv by
arithmetic and harmonic averaging respectively. Generate histograms of Kv/
Kh for each Lithofacies and co-simulate Kv/Kh and Kh. Generate Kv in the
geomodel by multiplying the two properties.

• Method 4
We know that for clastics, porosity is a function of grain size, with typically the
coarsest and cleanest sandstones displaying the highest porosities, and the low
porosity sands having the smallest grainsize and greatest shaliness. This explains
the decreasing permeabilities in low porosity sands. However, because shales
tend to be deposited horizontally, often as thin, but extensive sheets, these will
act as vertical barriers or baffles, greatly affecting Kv. Modelling Vshale in
the reservoirs can be used for refining the Kv modelled in Methods 1 to 3, by
accounting for the shale fraction in the sands and their impact on Kv. Accounting
for Vshale on Kv can be done by using a simple formula (or variations thereof) such as:

Kv(corrected) = (1 - Vshale) * Kv
where Kv is the vertical permeability modelled in Methods 1 to 3 above. Kv is
unchanged when Vshale is 0, but decreases as Vshale increases.

It is also possible to model Kv directly from the modelled Kh and Vshale, using
similar formula to the one above:

Kv = (1 - Vshale) * (Kv/Kh) * Khmodelled
where Kv/Kh is the constant ratio established in Methods 1 or 2 above and
Khmodelled is the modelled horizontal permeability. Maximum Kv occurs when
Vshale is 0, and Kv decreases as Vshale increases.
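The second formula can be sketched directly (all values below are hypothetical; the constant Kv/Kh ratio is assumed to come from Method 1 or 2):

```python
import numpy as np

# Hypothetical modelled properties in three cells.
vshale = np.array([0.0, 0.3, 0.8])          # shale volume fraction
kh_modelled = np.array([200.0, 50.0, 5.0])  # modelled Kh (mD)
kv_kh = 0.4                                 # ratio from Method 1 or 2

# Kv from modelled Kh, degraded by the shale fraction.
kv = (1.0 - vshale) * kv_kh * kh_modelled
```

As the text states, Kv is maximal where Vshale is 0 and falls off as shaliness increases.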

Geomodelling Workflow T W O

For flow simulation sensitivities, vertical transmissibility "multipliers" could also
be applied between layers to modify modelled Kv values if necessary.

Ultimately the choice in modelling permeability (Kh and/or Kv) depends on the
data available and the modeller's preferences. However, the modelled permeabilities
should be coherent with the core data, the geological model and especially the
dynamic data when available. For example, the wells in the model should have
flowrates matching those measured during well tests.

14.7 Reservoir Characterisation From Seismic

The link between seismic and petrophysical parameters is established through attribute
analysis and calibrated from correlations between seismic attribute and petrophysical
properties at the well. Figure 91 shows such an example where a correlation between
impedance and porosity has been established and an impedance map extracted from
the seismic. This impedance attribute map can then be used through co-kriging to
map porosity.
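The well-calibration step can be sketched as a simple least-squares fit (the impedance and porosity values below are hypothetical, and a linear impedance-porosity relation is an assumption):

```python
import numpy as np

# Hypothetical well calibration: acoustic impedance vs. porosity pairs.
imp = np.array([6.0e6, 6.5e6, 7.0e6, 7.5e6, 8.0e6])  # impedance
phi = np.array([0.28, 0.24, 0.20, 0.16, 0.12])       # porosity (fraction)

# Least-squares line phi = a*imp + b, fitted at the wells ...
a, b = np.polyfit(imp, phi, 1)

# ... then applied to an impedance value extracted from the seismic map.
phi_pred = a * 6.8e6 + b
```

In a real study the scatter around this line (and hence the correlation coefficient) is what controls how tightly the attribute constrains porosity in the co-kriging step.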

(Figure: from impedance to petrophysical properties; a seismic impedance map, an impedance vs. porosity cross-plot in wells, and the resulting collocated porosity map. The approach applies to other attributes, and its validity depends on good calibration.)

Figure 91 Seismic attributes: from impedance to petrophysical properties.

Seismic inversion is also commonly used to interpret the geology and petrophysical
properties. The principle is simple. When an energy pulse in the form of an acoustic
wave is sent into the subsurface, the reflected energy (and therefore the seismic
image) we record at surface is directly dependent on the acoustic impedance of the
layers making up the subsurface (Figure 92). Assuming we have a good idea of the
wavelet, we can then invert the seismic with this wavelet and obtain an impedance
contrast volume. Assuming we know the first-order reasons for the impedance
contrast (well calibration), we can model the petrophysical properties of the
subsurface.


(Figure: an energy pulse travels into the earth, and the recorded seismic response reflects the acoustic impedance contrasts of the subsurface layers.)

Figure 92 Principle of acoustic impedance.

Finally, a word of caution on using seismic attributes. Remember that a relationship
between seismic attributes and geology can have several non-unique solutions, and
you therefore need good calibration to ensure the validity of your seismic-attribute-
to-geology model.

14.8 Water Saturation Modelling

Modelling water saturation (Sw) is different from the other properties in that it is
dependent both on the reservoir properties (wettability, pore size and pore geometry,
i.e. porosity and permeability) and on the height above the Free Water Level
(FWL). Sw below the FWL is 1, but above the FWL, in the hydrocarbon interval,
Sw is dependent on the Capillary Pressure (Pc) of the reservoir. Capillary effects,
and Pc in particular, are linked to the rock type and its poroperm properties. This
means that the height of the transition zone separating the FWL and the OWC (the
height at which we reach irreducible water saturation (Swi)) increases as Pc
increases, which takes place as poroperm properties degrade.

So for each Lithofacies or Petrophysical Group in a model, we have to convert the
Pc versus Sw curve into a saturation height curve to model the transition zone
between the FWL and the OWC. This is done using the simple formula below:

h = Pc / ((ρw − ρo) g)

where h is the height above the FWL in metres, Pc is the capillary pressure (in
pascals; multiply by 10^5 if Pc is given in bars), ρw and ρo are the water and oil
densities in kg/m3, and g is the gravitational acceleration of 9.81 m/s2.

Swi is reached at the OWC where it remains constant above that level. The saturation
height curve can be entered directly into the model as a function to model Sw.
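The conversion can be sketched as a small helper (the default densities are hypothetical example values; the 1e5 factor converts Pc from bars to pascals):

```python
# Height above the FWL from capillary pressure.
# Assumptions: Pc supplied in bar, densities in kg/m3, g in m/s2.
def height_above_fwl(pc_bar, rho_w=1050.0, rho_o=800.0, g=9.81):
    return pc_bar * 1e5 / ((rho_w - rho_o) * g)

h = height_above_fwl(0.5)  # height (m) for a 0.5 bar capillary pressure
```

Evaluating this for the Pc range of each Petrophysical Group gives the saturation height curve entered into the model.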

Note that each Petrophysical Group will have its own saturation versus height
curve as illustrated on Figure 93. This figure illustrates how the transition zone and
irreducible water saturation (Swi) progressively decrease with improving poroperm
properties.

(Figure: height above the FWL in metres plotted against Sw in %, with the curves shifting as porosity and permeability increase.)

Figure 93 Saturation versus height curves for varying Petrophysical Groups

Above the OWC, Sw can either be modelled as a constant Swi depending on its
Petrophysical Group, or if we have reliable computed Sw curves from the well logs
and Sw distribution histograms, we can model Sw with collocated kriging to porosity.
This will ensure higher Sw’s modelled in lower poroperm reservoirs and vice versa
in good poroperm reservoirs.

In rocks with very good to excellent poroperm properties (25% porosity and
permeabilities of a hundred millidarcies or more), the transition zone may be small
and is sometimes ignored. In such cases, Sw is modelled solely on the basis of the
distribution of Swi.



The fundamentals of geostatistics have already been covered in depth elsewhere
in this course (Reservoir Concepts Module), so this section only aims to be a brief
summary of key fundamental points in geostatistics, before discussing the role of
geostatistical simulation techniques in creating geomodels.

In broad terms geostatistics can be defined as: "the branch of statistical sciences that
studies spatial/temporal phenomena and capitalizes on spatial relationships to model
possible values of variable(s) at unobserved, unsampled locations" (Caers, 2005).

15.1 Variables and Evolution of Variability

As the name implies, a variable is some population whose characteristics are non-
uniform and vary within certain ranges. If we have a good understanding of the
variability of a population, we can use that information to make predictions about its
behaviour. Some populations have variabilities that are fully predictable and considered
as deterministic variables (Figure 94). An example of a deterministic variable is the
number of days in a month. Because we know exactly how these vary, it is possible to
predict the exact number of days of a month no matter how far forward or backward
in time we may wish to go. We could likewise predict the exact day of the week no
matter what. However, most variables are stochastic, i.e. they have a degree of
uncertainty, as illustrated again on Figure 94. If we consider a variable like monthly
rainfall for instance, we would need a sufficiently large sample size from that
population to make any reliable predictions about it. These predictions however will
never be better than an estimate qualified by some degree of probability.

(Figure: "Days in a month" over years n and n+1 can be predicted with certainty; "Monthly rainfall" is stochastic, and measurements from the population are necessary to statistically quantify and predict the changes of the variable.)

Figure 94 Deterministic and stochastic variables


The study of a single variable is known as univariate analysis. This typically entails
trying to determine the distribution of the population, which involves taking a
sample whose distribution is believed to be representative of that population.
Variability is displayed as a histogram or a frequency distribution (Figure 95).

These distributions will have different shapes (Normal or Gaussian, Log Normal
and many others) that can be mathematically described with their own specific
characteristics such as mean, standard deviation, variance, coefficient of variation,
skewness, median, mode etc. Any distribution can readily be transformed into a
probability density function (pdf), which is a key tool in determining the expected
value of a variable.
(Figure: univariate analysis; a histogram or frequency distribution f(v) and its cumulative curve. Distributions can be defined in terms of: type of distribution (Gaussian to Uniform); mean (arithmetic, geometric or harmonic), median and mode; variance, standard deviation and coefficient of variation (CV); skewness, percentiles etc.)

Figure 95 Univariate analysis

It must be noted however that variables are not always independent (average rainfall
and seasons for example). Determining how one variable relates to another is called
bivariate analysis.

Porosity and permeability are a typical bivariate pair as illustrated on Figure 96.
Computing statistical parameters such as regression coefficients and covariance
allows us to quantify the degree to which two variables are related. The problem
with covariance is that it is not dimensionless, making comparison between different
pairs of variables difficult.

(Figure: bivariate analysis searches for a function relating two variables using regression and covariance, illustrated by a porosity (%) vs. permeability (md) cross-plot for the Pennsylvanian-Permian Weber Sandstone, Rangely Field, Colorado (Bowker and Jackson, 1989), with an inverse slope of 3.3% and an intercept porosity at 1 md of 9.4%. If the covariance is >> 0 or << 0 there is a strong correlation between the two variables; if it is ~ 0 the variables are independent. Covariance is not dimensionless and is sensitive to scale.)

Figure 96 Bivariate analysis

The dimensionless Coefficient of Correlation (Figure 97) is a better parameter
for comparing correlations between pairs of variables. The Coefficient of Correlation,
as shown on Figure 97, can be positive (e.g. pressure and temperature) or negative
(e.g. pressure and altitude) and ranges from -1 to +1. The closer the Coefficient of
Correlation is to zero, the more independent the two variables are from each other.
A coefficient of correlation of 1 or -1 represents perfect correlation.
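Both measures can be computed directly (the poro-perm pairs below are hypothetical; the comparison against log-permeability is an added illustration, since permeability is usually correlated in log space):

```python
import numpy as np

v = np.array([0.10, 0.15, 0.20, 0.25, 0.30])  # e.g. porosity (fraction)
w = np.array([1.0, 4.0, 15.0, 60.0, 250.0])   # e.g. permeability (mD)

# Covariance: (1/N) * sum((v_i - m_v) * (w_i - m_w)), population form.
cov = np.mean((v - v.mean()) * (w - w.mean()))

# Dimensionless coefficient of correlation.
rho = cov / (v.std() * w.std())

# Correlating against log-permeability is usually much stronger.
rho_log = np.corrcoef(v, np.log10(w))[0, 1]
```

The covariance here carries the units of both variables, whereas rho and rho_log are directly comparable across variable pairs, which is the point made in the text.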


The Coefficient of Correlation (ρ) gives a dimensionless measure of the regression
between two variables:

Covariance = (1/N) Σ (i = 1 to N) (vi − mv)(wi − mw)

ρ = Covariance / (σv σw), with −1 ≤ ρ ≤ 1

(Figure: scatter plots for ρ = −0.7, ρ = 0 and ρ = +0.7.)

Figure 97 Bivariate analysis

Knowledge of the univariate properties of a population, or more commonly of a
sample of that population, is unlikely on its own to tell you about its spatial
continuity or behaviour between sample points. If you take the six sample points
shown on Figure 98, these could come from a population with an erratic variation
in space between sample points, or from a population with perfectly continuous
variability (Figure 98).


(Figure: the same six sample points can belong to a very erratic or a perfectly continuous variable. Sampling of a variable on its own is not necessarily sufficient to describe its spatial continuity; we need to statistically quantify it.)

Figure 98 Spatial continuity of a variable.

How do we represent or model the spatial continuity of a variable? This is typically
where kriging and variograms come into play.

15.2 Variograms and Kriging

In the previous section, we looked at different property distributions in a population or
sample points but have not considered the spatial relationship between values in that
population. The non-randomness of reservoir properties such as porosity, permeability
etc. implies some form of spatial relationship, with values measured close together
more likely to be similar than those measured further apart. If we want to accurately
model such properties in 3D, we need to quantify that spatial relationship, typically
through a geological continuity model. The simplest quantification of the spatial
relationship comes from evaluating a correlation coefficient between a datum value z
at some location u (x,y,z) and any others measured some distance h away. This
correlation for various h distances is what defines a variogram. It is this variogram
and its quantification of spatial correlation that allows us to estimate unsampled
values in 3D on the basis of neighbouring sample values.

This can be written using the following equation:

z*(u) = Σ (α = 1 to n) λα z(uα)



z*(u) = estimate of property z at location u

z(uα) = value of property z at sampled location uα

λα = kriging weight

This means the estimate at the unsampled location [z*(u)] is a combination of n
related z(uα) data points, each data point having some weighting factor λα.

For this computation to have any meaning, the data must in statistical term display
stationarity. This means the data points that are used in computing our estimate
must all have similar statistical properties. For instance, in a reservoir, it would be
nonsense to compute estimates in some interval using sample points that came from
some underlying zone with different statistical properties to the interval where we
are computing our estimates.

What is this λα kriging weight factor and how is it determined?

Two key factors are taken into account to compute λα:

1. The closeness of the sample points relative to the unsampled point. However
because of spatial correlation, this does not necessarily equate to some inverse
distance factor. This is illustrated on Figure 99. In that example we note a
northeast-southwest continuity. Sample z(u1), although closest to unsampled
point z*(u) should carry far less weight than sample points z(u2), z(u3) and z(u4)
in computing point z*(u) even though these are much further away from z*(u).

Figure 99 Map of some property with an underlying spatial continuity. We note a distinct
Northeast to Southwest spatial correlation of the property.
• Sample points z(u2), z(u3) and z(u4) are clearly more relevant to estimating z*(u) than
sample point z(u1)
• Sample points z(u2) and z(u3) being very close together have a high redundancy


2. Data redundancy. Taking the example on Figure 99, sample points z(u2) and
z(u3) being close to each other, correlate strongly and therefore have a high
redundancy when computing z*(u). These two sample points should therefore
have a combined weight equivalent to point z(u4).

This is illustrated in Figure 100, where we can compare a weighting based purely
on inverse distance with kriging.

(Figure: inverse distance weighting assigns λ1 = λ2 = λ3 = 1/3; kriging assigns λ1 = 1/4, λ2 = 1/4 and λ3 = 1/2.)

Figure 100 Kriging. Example of data redundancy where λ2 and λ1 are given a combined
weight equivalent to λ3.

The geostatistical method that uses a generalised least-squares regression to
compute z*(u), together with weighting factors (λα) based on spatial correlation and
redundancy, is known as kriging.
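The effect of closeness and redundancy on the weights can be demonstrated by solving a small ordinary-kriging system (the exponential covariance model, its range and all coordinates below are assumptions for illustration, echoing the clustered-pair configuration of Figure 100):

```python
import numpy as np

# Hypothetical isotropic exponential covariance with a 100 m practical range.
def cov(h, sill=1.0, rng=100.0):
    return sill * np.exp(-3.0 * h / rng)

# Two clustered samples (u1, u2) and one isolated sample (u3),
# all roughly 50 m from the estimation point u0 at the origin.
pts = np.array([[50.0, 5.0], [50.0, -5.0], [-50.0, 0.0]])
u0 = np.array([0.0, 0.0])

n = len(pts)
# Left-hand side: sample-to-sample covariances, bordered by the
# unbiasedness (sum of weights = 1) row and column.
A = np.ones((n + 1, n + 1))
A[n, n] = 0.0
for i in range(n):
    for j in range(n):
        A[i, j] = cov(np.linalg.norm(pts[i] - pts[j]))

# Right-hand side: sample-to-target covariances plus the constraint.
b = np.ones(n + 1)
for i in range(n):
    b[i] = cov(np.linalg.norm(pts[i] - u0))

lam = np.linalg.solve(A, b)[:n]  # the kriging weights
```

The clustered pair ends up sharing weight while the isolated sample keeps more, exactly the redundancy behaviour described above; inverse distance would have given all three points nearly equal weight.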

A variogram is a statistical technique that describes the spatial variation of a variable
(a petrophysical property for example). It assumes that closely spaced samples are
likely to have some correlation compared to more widely spaced samples. What the
variogram establishes is the distance between sample points beyond which there is
no correlation between them. This correlation distance can be, and often is,
anisotropic in the X, Y and Z directions. A key prerequisite for a variogram is
stationarity of the data from which it is determined (see Chapter 1). This means
that the local mean must be the same as the global mean.

A variogram is a plot of variability, in terms of semi-variance, against separation
distance. It pairs each sample point with a number of other data points at
progressively increasing separation distances known as lag distances. This is
illustrated on Figure 101, showing the computation of the normalised semivariance
for only 2 data points and their paired data points in lag 4. The variogram
calculates the degree of dissimilarity between data pairs with progressively
increasing separation distances. The orientation, the number of lags and the search
angle are at the discretion of the interpreter. As lag distance increases, the variogram
converges towards a semi-variance value of 1 (Figure 102). The distance at which
this occurs is known as the range and marks the distance beyond which there is no
correlation between pairs. Note that with anisotropic correlation the variogram can
and will change according to the orientation angle.
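The pairing-and-averaging just described reduces, for a 1-D sequence with unit lag spacing, to a few lines (the data below are hypothetical):

```python
import numpy as np

# A hypothetical regularly sampled 1-D property log.
z = np.array([10.0, 12.0, 11.0, 14.0, 13.0, 15.0, 14.0, 16.0])

# Experimental semivariogram: gamma(h) = (1 / 2N(h)) * sum (z_i - z_{i+h})^2,
# averaged over all N(h) pairs separated by lag h.
def semivariogram(z, max_lag):
    gamma = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]                 # all pairs at this lag
        gamma.append(0.5 * np.mean(d ** 2))
    return np.array(gamma)

g = semivariogram(z, 3)
```

In 2-D or 3-D the same computation is performed within a direction and angle tolerance, which is what introduces the interpreter's choices mentioned above.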


(Figure, illustrated for lag 4: start at a given node and compare its value to all nodes which fall within the lag and angle tolerance; then move to the next node and repeat the computation until all nodes have been computed.)

Figure 101 Calculating experimental variograms

(Figure: the variogram γ(h) plotted against lag distance h, repeated for all nodes and all lags; the curve rises to the sill, beyond which there is no correlation.)

Figure 102 Calculating experimental variograms

Three main variogram models are used in stochastic modelling. These are: Gaussian,
Spherical and Exponential (Figure 103).
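The three models can be written down directly; the sill-normalised forms below use the common "practical range" scaling (the factor 3 in the exponential and Gaussian forms), which is one convention among several:

```python
import numpy as np

# Spherical model: reaches the sill exactly at the range a.
def spherical(h, a):
    h = np.minimum(np.asarray(h, dtype=float) / a, 1.0)
    return 1.5 * h - 0.5 * h ** 3

# Exponential model: approaches the sill asymptotically;
# the practical range a is where it reaches ~95% of the sill.
def exponential(h, a):
    return 1.0 - np.exp(-3.0 * np.asarray(h, dtype=float) / a)

# Gaussian model: parabolic near the origin, i.e. very smooth
# short-range behaviour.
def gaussian(h, a):
    return 1.0 - np.exp(-3.0 * (np.asarray(h, dtype=float) / a) ** 2)
```

The near-origin behaviour is the practical difference: the Gaussian model rises slowest at short lags (smoothest fields), the exponential fastest (most erratic), which is exactly the progression seen later in Figure 107.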


(Figure: normalised semivariance against sample separation (lag h, in metres) for the three model types, each reaching the sill at its range.)

Figure 103 Variogram model types

Figure 104 schematically illustrates the evolution of variability and variogram
shape with scale.

(Figure, panels a to d: (a) a mountain range at the 500 km scale shows total randomness, giving a pure nugget variogram; (b) mountain peaks at the 50 km scale are globally random but correlated over short distances, giving a spherical variogram; (c) a mountain side at the 5 km scale shows some organisation with some noise, giving a linear variogram with a nugget; (d) at the 50 m scale the organisation is completely predictable, giving a parabolic variogram.)

Figure 104 Evolution of variogram shape and scale

Take a mountain range several hundreds of kilometres across as an analogue (Figure
104-a). At that scale, it is unlikely that any correlation can be established, i.e. there
is total randomness between points and the variogram will display a pure nugget
effect. Zooming closer (Figure 104-b), short range correlation is now evident from
the variogram. Closer still (Figure 104-c), where we are on the flank of a mountain,
the variogram now displays a linear trend and fails to converge on a normalised
semivariance of 1, reflecting the non-stationarity of the data at that scale. Removing
this trend (the slope of the flank) would yield a variogram like Figure 104-b. Finally,
at the closest range (Figure 104-d), there is no randomness at all (fully correlated
and predictable) and we get a parabolic variogram. For this last case, there would be
no requirement for stochastic modelling, and a deterministic model could be used
instead.

In kriging, properties between sample points are modelled with a spatial variability
defined by a variogram. This is illustrated on Figures 105 and 106, where the spatial
variability between two sample points is modelled. In Figure 105, isotropic variograms
with progressively longer ranges are used, while on Figure 106 we have the same
two sample points, but anisotropic variograms with progressively longer ranges are
used for kriging. From these two examples we can see how kriging with different
variograms affects the spatial distribution of the estimates between the sample points.
(Figure: kriging between two sample points with isotropic Gaussian variograms of ranges 100 m, 250 m, 500 m and 1000 m.)

Figure 105 Kriging with Gaussian variogram, isotropic


(Figure: kriging with anisotropic Gaussian variograms of ranges 100 m × 25 m, 250 m × 100 m, 500 m × 125 m and 1000 m × 250 m, all oriented at 90°.)

Figure 106 Kriging with Gaussian variogram, anisotropic

Figure 107, is another display of kriging results using different variogram models,
nuggets and ranges. Note that exactly the same data set is used in all cases.


(Figure: kriging of the same well data (mean 20, σ = 4) in all cases; with Gaussian, Spherical and Exponential variogram models; with ranges of 10 m, 50 m and 400 m; and with no nugget, a slight nugget and a pure nugget effect.)

Figure 107 Variogram effect, same data (mean = 20, σ = 4)

From the examples on Figure 107, we note how local variation in the spatial distribution
increases progressively as the variogram model goes from Gaussian to Exponential
or as range decreases, or the nugget effect increases. This is because in all cases,
the correlation length is effectively decreased, allowing wider contrasting values to
be modelled in close proximity to each other (checkerboard pattern). This may not
be realistic for certain variables such as porosity or permeability, which are likely
to have a smoother distribution from high to low values.

Figure 108 illustrates different types of kriging used to map seven sample points
from a twin population of channel and inter-channel lithofacies. The results from
normal kriging using isotropic and anisotropic variograms are shown, with the
anisotropic kriging probably giving a better match to the original population. Also
displayed is kriging with an external drift, and collocated kriging where the spatial
distribution of the channel and inter-channel lithofacies is conditioned to that of a
secondary variable.


(Figure: types of kriging applied to the same sample points from a population of channels and overbanks; isotropic variogram, anisotropic variogram, external drift/trend, and collocated kriging to a primary variable.)

Figure 108 Types of kriging

Collocated kriging is discussed in Section 16.4.


Kriging is fine for modelling if the sampled properties are representative of the
population such as the continuous variable shown on Figure 98. If however properties
have a complex spatial distribution and are not fully represented by the samples,
Stochastic Simulation can be used. In stochastic simulation, in addition to kriging
using sample values and their variograms, the population’s property distribution
is also used as input. Properties modelled that way look more realistic than when
using deterministic methods or by simple kriging, and will honour the population’s
distribution better. However be aware that this is a stochastic modelling and that each
run will generate a new, but equiprobable output, which may look quite different in
the interwell space from one run to the next. In other words, do not drill a “high”
based on the outcome of a single stochastic modelling.

Sequential Stochastic Simulation, as the name implies, is the sequential modelling of
values at each node of an estimation grid using stochastic simulation (see Chapter 1,
Appendix section), until values have been modelled at all nodes of the grid.

There are two main types of Sequential Stochastic Simulation: Sequential Indicator
Simulation (SIS), which simulates discrete variables such as facies and Sequential
Gaussian Simulation (SGS), which simulates continuous variables such as porosity
or permeability.


16.1 Sequential Indicator Simulation (SIS)

Figure 109 (from Chapter 1, Appendix 1) schematically represents Sequential
Stochastic Simulation.

(Figure: categorical data of types 0 and 1 with a known global proportion; indicator kriging gives a local probability estimate normalised to sum to 1, here 0.8 and 0.2; the local pdf is sampled at random to get the corresponding category value; the simulated value is then used in the next step.)

Figure 109 Sequential indicator simulation algorithm

In the example on Figure 109, we are considering only two variables (shale and sand).
These are converted into category 0 and 1 (There could of course be more than two
categories). Assume a global proportion of 50% for each facies (category), and also
assume that we know the variograms describing each variable’s spatial distribution.
There are three sample points with either category 1 or 0 as shown on Figure 109.
A node or cell is selected at random and by indicator kriging the local probability of
each indicator is estimated at that cell and normalised to give a total local probability
of 1. This turns out as 0.8 for Indicator 1 and 0.2 for Indicator 0 in the example. A
local cdf is generated and sampled, giving 80% probability of selecting indicator 1
and 20% indicator 0. Once the indicator is selected, this node becomes a new data


point and is included in the computation on the next random cell, where the process
is repeated. This cycle is continued until the entire estimation grid is simulated.

It is evident from this that the final outcome will differ from one SIS simulation
to the next but remain equiprobable, since each simulation honours the sample
points, variograms and global proportions.
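The cdf-sampling step at the heart of this loop can be sketched for a single node (the local probabilities below are assumed values, standing in for an indicator-kriging estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

# One node of an SIS-style loop: indicator kriging is assumed to have
# produced local probabilities for categories 0 and 1, normalised to sum to 1.
p = np.array([0.2, 0.8])                  # P(category 0), P(category 1)

# Sample the local cdf with a uniform random number to pick a category.
u = rng.random()
category = int(np.searchsorted(np.cumsum(p), u))

# Repeating the draw over many nodes reproduces the target proportions.
draws = np.searchsorted(np.cumsum(p), rng.random(10_000))
frac_ones = draws.mean()
```

In a full SIS run the simulated category at each node would be added to the conditioning data before the next random node is visited, which is what makes the procedure sequential.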

16.2 Sequential Gaussian Simulation (SGS)

The Sequential Gaussian Simulation (SGS) method is somewhat different to SIS
as schematically illustrated on Figures 110 and 111 (from Chapter 1, Appendix 1).

(Figure: the normal score transform maps a value z through its cumulative frequency to the equivalent quantile y of the standard normal distribution, y = G⁻¹(F(z)).)

Figure 110 Normal score transform (Deutsch and Journel, 1998)


(Figure, steps: transform the data to a normal distribution; compute the kriging estimate and error at a node; build a local normal distribution with the estimated mean and variance to derive the local pdf; sample from this Gaussian pdf; use the simulated value in the next predictions; finally, back-transform the simulated values.)

Figure 111 Sequential Gaussian simulation algorithm

The first step in SGS simulation is transforming the global probability distribution
function of the variable into a normal distribution with mean = 0 and variance = 1
using a “normal score transform” (Chapter 1 - Appendix) as illustrated on Figure
110. The sample data are also converted to normally distributed data, N(0,1), as
illustrated on Figure 111. A node or cell is selected at random and the mean computed
by kriging. Using the kriging error as variance we can construct a local pdf for
that node or cell. The local pdf is then sampled (Figure 111) and the selected value
becomes a new data point, which is used when the process is repeated in computing
a value for the next random cell. This cycle is continued until the entire estimation
grid is simulated. Once the simulation is completed the normalised values are back
transformed to their real values.

As for the SIS simulation, the outcome will be different (but equiprobable) from one
SGS simulation to the next.
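The normal score transform and its back-transform can be sketched with the standard library's NormalDist (the rank-based plotting positions used here are one common convention; the data are hypothetical):

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, variance 1

# Normal score transform: rank the data and map each rank's cumulative
# frequency through the inverse standard-normal cdf.
z = np.array([3.0, 8.0, 1.0, 15.0, 6.0])
ranks = z.argsort().argsort()               # ranks 0 .. n-1
p = (ranks + 0.5) / len(z)                  # plotting positions in (0, 1)
y = np.array([nd.inv_cdf(pi) for pi in p])  # normal scores, ~N(0, 1)

# Back transform: interpolated quantile lookup against the sorted data,
# restoring the original distribution's shape.
def back_transform(ys):
    return np.interp(ys, np.sort(y), np.sort(z))
```

Simulated values drawn in normal-score space during SGS are passed through a back transform like this at the end of the run, as the final step in Figure 111 indicates.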


Figure 112 illustrates an example of multiple Gaussian simulation.

(Figure: porosity distributions for channels and overbanks and for channels alone, each with its own histogram, used to populate the corresponding regions of the grid.)

Figure 112 Multiple Gaussian simulation

This simulation allows a property that has different distributions for different spaces
of the grid to be modelled selectively for their given space. In the example, we have
channels with relatively high porosities and inter-channel facies with poorer porosities
as shown by their respective distributions on Figure 112. Both these distributions
can have differing variograms that will control how the porosities cluster in the two
environments. Taking the facies map generated by collocated kriging in our previous
example in Figure 108, the regions interpreted as channels (grey zones) will be
populated with the higher porosity distribution and the regions interpreted as inter-
channels will be populated with the lower porosity distribution.

16.3 Sequential Truncated Gaussian Simulation (STGS)

STGS simulation allows discrete parameters (Facies for example) to be modelled
from a continuous SGS simulation. The technique is exactly the same as for the SGS
simulation, except that the normalised distribution is arbitrarily divided into regions,
each region being allocated a discrete value. For example, converting the normalised
input distribution into a cdf, we could allocate discrete values from 1 to 5 for
modelled points falling between percentile points P0 to P20, P20 to P40, P40 to
P60 etc. In other words, the modelled points falling within these percentiles will be
converted to a discrete value depending on the selected truncation of the normalised
Gaussian distribution.

16.4 Collocated Kriging

In the SIS and SGS examples discussed in the previous sections, we modelled single
independent variables in each case. However if two variables are correlated, and are
assumed to have the same spatial distribution we can use collocated kriging to model
and condition the one variable against the other.

Assume that you have two variables with some correlation between the two. If you
know the spatial distribution of both variables and the cross-covariance between
the two, you can model one variable and its spatial distribution by co-kriging and
capture the cross-correlation that relates the two variables. Collocated co-kriging
is a reduced form of co-kriging. In reality, co-kriging and collocated co-kriging are
difficult to compute, so the process is further reduced to collocated kriging, which is
the technique widely used in geomodelling.

In collocated kriging the spatial distribution of the secondary variable is conditioned
to the spatial distribution of a correlated and previously modelled primary variable.

To illustrate collocated kriging, let's use a simple example. Take two correlated
variables such as porosity and seismic acoustic impedance (Figure 91). A correlation
coefficient has been established between the two variables from a cross plot, as shown
on Figure 91. If the spatial distribution of both variables is assumed to be the same,
it is possible to model porosity (the secondary variable) from the modelled spatial
distribution of the acoustic impedance (the primary variable), using the correlation
coefficient linking the two variables. Note that in this example the spatial distribution
of the acoustic impedance is explicitly determined from attribute analysis of a 3D
seismic cube, but it could just as easily have come from a variable modelled from
a variogram.

In the actual collocated-kriging computation, the primary variable (acoustic impedance)
is already modelled in space. The distributions of the two correlated variables are
normalised to N(0,1) (Figure 113).

Figure 113 Collocated kriging: normalised distributions (μ = 0, σ = 1) of the primary
and secondary variables, with sampling regions shown for correlation coefficients
ρ = 1, 0.8, 0.5 and 0


The normalised value from each cell of the primary variable (acoustic impedance) is
used to determine the value of the secondary variable in that same cell. Let's assume
that for a given cell, the value of the primary variable is the circled value lying
somewhere on its normalised distribution (Figure 113). If the correlation coefficient
between the primary and secondary variables is 1, the value from the first variable
would map exactly onto an equivalent normalised value of the secondary variable.
In other words, all cells with the same normalised acoustic impedance value would
yield exactly the same normalised porosity value. If the correlation coefficient is
lower than 1, the normalised porosity value will be sampled from a wider region of
the normalised pdf curve of the secondary variable, as illustrated in Figure 113. That
sampling region progressively widens as the correlation coefficient approaches 0. At
a correlation coefficient of zero there is no correlation, and the porosity can be
sampled from anywhere over the entire normalised distribution.
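The sampling behaviour described above can be sketched with the standard bivariate-Gaussian conditional draw: given a normalised primary value z, the secondary value is drawn as ρz plus noise of variance 1 − ρ². This is a simplified stand-in for the full collocated kriging computation, and the field, seed and correlation values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_secondary(primary_n, rho, rng):
    """Draw a normalised secondary value conditioned on the normalised
    primary value, under a bivariate-Gaussian model with correlation rho.
    With rho = 1 the mapping is exact; with rho = 0 the draw is unconstrained."""
    noise = rng.standard_normal(primary_n.shape)
    return rho * primary_n + np.sqrt(1.0 - rho**2) * noise

# Normalised acoustic-impedance field (primary variable), one value per cell
impedance_n = rng.standard_normal(10_000)

exact = sample_secondary(impedance_n, rho=1.0, rng=rng)   # one-to-one mapping
loose = sample_secondary(impedance_n, rho=0.5, rng=rng)   # wider sampling region
free  = sample_secondary(impedance_n, rho=0.0, rng=rng)   # ignores the primary

# The realised correlations reproduce the target coefficients
r_exact = np.corrcoef(impedance_n, exact)[0, 1]
r_loose = np.corrcoef(impedance_n, loose)[0, 1]
r_free  = np.corrcoef(impedance_n, free)[0, 1]
```

As the correlation coefficient falls, the spread of secondary values for a given primary value widens, which is exactly the behaviour sketched in Figure 113.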

Collocated kriging is important when modelling correlated variables such as porosity
and permeability, for instance. It is important to remember that in collocated kriging
the spatial distribution of the primary and secondary variables must be the same. When
modelling permeability by collocated kriging with porosity, the positive correlation
between the two variables means that for high porosity values, permeabilities will be
sampled from the high side of the permeability distribution, and vice versa for low
porosities. Finally, note that negative correlations can similarly be modelled by
collocated kriging (Vshale and NTG for example).

16.5 Multi-Point Simulation

This is a relatively new technique, described in Chapter 1 of this module. In summary,
Multi-Point Simulations use training images rather than variograms to model the
spatial distribution of variables. This technique is not yet available in commercial
geomodellers but, when it becomes available, may well supersede the classical pixel-
or object-based sequential simulation techniques used today.
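The idea of replacing the variogram with statistics scanned from a training image can be illustrated at toy scale. The tiny binary image and 2x2 template below are purely hypothetical; real MPS algorithms use much larger images and templates, but the principle of counting multi-point pattern frequencies is the same.

```python
from collections import Counter

import numpy as np

# A tiny binary training image (1 = channel sand, 0 = background) -
# purely illustrative, not a realistic facies image.
ti = np.array([
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
])

# Scan every 2x2 window and count how often each pattern occurs.
# MPS algorithms use such multi-point frequencies (for larger templates)
# instead of a two-point variogram.
patterns = Counter()
rows, cols = ti.shape
for i in range(rows - 1):
    for j in range(cols - 1):
        patterns[tuple(ti[i:i + 2, j:j + 2].ravel())] += 1

total = sum(patterns.values())  # (rows - 1) * (cols - 1) = 12 windows
```

During simulation, a cell value is drawn from the conditional frequencies of the patterns that match the already-simulated neighbourhood, so the realisation reproduces the geometry of the training image rather than just its two-point statistics.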


17 Uncertainty

Uncertainty is a function of what we know about the reservoir: the less we know, the
greater the uncertainty. Reservoir modelling is never more than an approximation of
reality, and in constructing a geomodel there will always be uncertainty as to whether
the model has effectively captured the main reservoir characteristics that most
influence flow behaviour, and whether the model is a true representation of the
reservoir, capable of accurately predicting hydrocarbons in place, flow behaviour
and reserves.

Geological uncertainty assessment is therefore an important part of any geomodelling
exercise. Cosentino (2001) has identified four main areas of uncertainty associated
with geomodelling. These are:

1. Data quality and interpretation

All data and interpretations have inherent errors. This includes errors in the acquisition
and measurement of samples, erroneous seismic interpretations and/or poor formation
evaluation computations. The assumption in geomodelling is that our input data are
correct. Quantifying the error from data quality and interpretation is difficult and
sometimes impossible. As in all analytical work, any interpretation should be
corroborated by as many (independent) sources as possible, integrate all available
information and be internally coherent. All data and interpretations should therefore
be validated as far as possible before starting a geomodelling project.

2. Structural and Stratigraphic interpretation

This uncertainty is related mainly to seismic picking, depth conversion and reservoir
zonation. This has been extensively treated in this chapter, including the means of
quantifying this uncertainty where possible and ways of reducing it.

In summary, this uncertainty relates to reservoir zonation and the 3D geometry of the
reservoir: its structural style, faulting etc.

Reservoir zonation can be in terms of sequence stratigraphic and/or flow unit
correlations. Wrong correlations will yield erroneous reservoir connectivity models.
Correlations should be coherent with the geological model, and sequence stratigraphic
correlations should tie with the seismic.

Structural interpretation in the subsurface is for the most part based on the interpretation
of seismic data. Imaging quality depends on whether 2D or 3D seismic is available
and on its quality, which includes frequency content (impacting resolution) as well
as the acquisition and processing parameters used on the data.

Good ties between seismic markers and the wells are established via synthetic
seismograms; these also depend on the quality of the well correlations. Weak or
highly variable impedance contrasts, together with other factors such as tuning effects
or correlating events across major faults, all contribute to uncertainty in the seismic
picking and the resultant TWT mapping.

Depth conversion is also a major uncertainty. Seismic interpretation and mapping must
be coherent with the geological setting (extensional versus compressional regime)
and all available information. For example, mapped closures or spill points must be
consistent with the OWC seen in the wells.

3. Stochastic model and parameters

The choice of pixel-based rather than object-based or even multi-point stochastic
modelling in the construction of a model will in itself give three different outcomes.
This uncertainty is extended further when the choice of parameters used in modelling
is considered. For pixel-based modelling this includes the selected variogram model,
variogram ranges, nuggets, orientation etc., while in object-based modelling it includes
the type of geobody selected, sizes, orientation etc.

The construction of several models, using different parameters representing what is
believed to be optimistic, pessimistic and most likely scenarios, is useful for estimating
the uncertainty from the stochastic model and parameters.

4. Equiprobable realizations
This is the uncertainty linked to the fact that geomodelling is a stochastic exercise.
Stochastic modelling will yield a series of different, but equiprobable outcomes, all
of which will for example give a different hydrocarbon in place estimate.


The way to estimate the impact of this uncertainty is to run multiple realizations,
in each case compute some key parameter (typically hydrocarbons in place, either
STOOIP or GIIP) from each outcome, and plot these as histograms. The range of this
histogram quantifies the uncertainty stemming from the randomness of the simulation.
Some geomodelling packages in fact allow you to run many simulations and generate
histograms of the computed hydrocarbons in place. These histograms can be used to
determine P10, P50 and P90 cases for STOOIP and GIIP. Testing each outcome in a
dynamic simulator to get a range of reserves is also possible, but can be prohibitively
time consuming in terms of CPU time. It is easier to select models close to the P10,
P50 and P90 cases and use them as input to a dynamic simulator to assess the
reserves uncertainty.
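The workflow above - many realizations, one key volume per realization, percentiles from the resulting histogram - can be sketched as follows. The STOOIP values are synthetic stand-ins drawn from a lognormal distribution purely for illustration, and the percentile convention shown (P90 as the conservative case) is the common petroleum usage.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Stand-in for STOOIP (MMstb) computed from, say, 500 equiprobable
# realizations - here drawn from a lognormal for illustration only.
stooip = rng.lognormal(mean=np.log(200.0), sigma=0.2, size=500)

# Industry convention: P90 is the conservative case (90% probability of
# at least this volume), P10 the optimistic case.
p90, p50, p10 = np.percentile(stooip, [10, 50, 90])

spread = p10 - p90  # width of the uncertainty range
```

The realizations closest to the P90, P50 and P10 volumes are then natural candidates to carry forward into dynamic simulation.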

Treating uncertainty in reservoir modelling is a huge field of research (including at
Heriot-Watt University). There are presently two main approaches to modelling
uncertainty:

1. Top-down approach, where we begin with a coarse and relatively simple model,
and add detail and complexity as required.

2. Bottom-up approach, where we look in great detail at very small sectors of the
model, hoping to capture and address all geological uncertainties.

Which approach is best is impossible to say at present. Suffice to say that when
constructing a model, all key uncertainties should be identified and ranked. The key
uncertainties should be assessed and if possible their impact quantified (typically by
running sensitivities). Assessing uncertainties in a model and their impact is a key
objective in the construction of any model.


18 Upscaling

Dynamic simulations are limited in terms of the number of cells that can be modelled.
Although this limit has steadily increased in line with computing power, dynamic
simulation models are still limited to around 200,000 cells, going up to 500,000 cells
depending on the complexity of the model and the dynamic simulation method used.
Geomodels on the other hand are far less restricted, with models of up to 50 million
cells reported. These geomodels cannot be dynamically tested without first reducing
the number of cells to a manageable size. Transforming a fine grid to a coarser grid
to reduce the number of active cells is known as upscaling. This is illustrated on
Figure 114.



Figure 114 Upscaling

Upscaling always degrades a model, as averaging may remove key details which
greatly affect reservoir flow behaviour. Upscaling may cause loss of coherence in the
geological model, and loss of coherence between the static and dynamic models
during history matching. If possible, avoid upscaling by designing your model and
grid size such that upscaling will not be required.

Upscaling along the main axis of flow minimises the degradation of the input
parameters.

18.1 Averaging and Flow Based Tensor Upscaling Technique

Upscaling entails averaging fine grid data into coarser cells, ensuring that the properties
modelled in the fine grid are preserved and effectively reproduced in the coarse grid
model, such that flow behaviour is retained from the fine to the coarse grid. For many
properties, simple averaging techniques such as arithmetic, geometric, harmonic or
RMS averages can be used. Averages should be weighted by the fine grid cell volume
(most commercial geomodellers do that automatically), but the weighting may need
to be extended to include additional properties. For example, in upscaling porosity
the porosities must be weighted not only for cell volume but also for NTG, and in
the case of saturation, upscaling should be weighted for cell volume, NTG and porosity.
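The weighting rules above can be sketched numerically. The four fine cells and their property values below are illustrative assumptions only; commercial geomodellers apply these weightings cell by cell automatically.

```python
import numpy as np

# Fine-grid properties for the cells that collapse into one coarse cell
# (illustrative numbers only)
bulk_vol = np.array([1.0, 1.0, 2.0, 2.0])    # cell bulk volumes
ntg      = np.array([0.8, 0.5, 0.9, 0.2])    # net-to-gross
phi      = np.array([0.20, 0.15, 0.25, 0.10])  # porosity
sw       = np.array([0.30, 0.60, 0.25, 0.80])  # water saturation

# NTG is averaged weighting by bulk volume
ntg_up = np.sum(ntg * bulk_vol) / np.sum(bulk_vol)

# Porosity is averaged weighting by net rock volume (bulk volume x NTG)
net_vol = bulk_vol * ntg
phi_up = np.sum(phi * net_vol) / np.sum(net_vol)

# Water saturation is averaged weighting by pore volume
# (bulk volume x NTG x porosity)
pore_vol = net_vol * phi
sw_up = np.sum(sw * pore_vol) / np.sum(pore_vol)
```

Each average conserves the underlying volume: net rock volume for NTG, pore volume for porosity, and water volume for saturation.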

Upscaling permeability is however more complex, precisely because permeability is
sensitive to the scale of measurement and because of its directional components in
the X, Y and Z directions. Horizontal permeability (Kh) in the X and Y directions
can be upscaled using arithmetic averaging, while vertical permeability (Kv) can be
upscaled using harmonic or geometric averaging. However, the technique called Flow
Based Tensor (FBT) upscaling is more commonly used to upscale permeabilities.
In FBT upscaling, it is assumed that for a given pressure gradient, flow in the X,
Y and Z directions of a coarse cell, will be the same as the flow from the smaller
cells included in the coarser cell. Two computations are involved in FBT upscaling.
First, the pressure field is computed across the cluster of fine grid cells that are to be
upscaled into the coarser cell, before the resultant flow due to the pressure field can
be computed. This is carried out in the X, Y and Z (or I, J and K) directions. From
the flow and pressure gradient, the X, Y and Z permeabilities for each upscaled cell
can be determined. Full tensor computation allows an additional output of diagonal
permeability tensors in the IJ, IK and JK directions to be computed.


The boundary conditions set in FBT upscaling are important in computing the pressure
and flow fields. Typically, there are two boundary conditions:

Open boundary conditions:

In open flow boundary conditions, flow is permitted through all cell sides and the
pressure is varied linearly along the sides of the model (Figure 115).

Closed boundary condition:

In closed boundary conditions, a constant pressure is applied to two opposite cell
block sides in which the fields are to be measured, while all the remaining boundaries
are closed (Figure 115). This method is suitable when little cross flow is believed to
exist between layers.


Open boundary conditions Closed boundary conditions

Figure 115 Flow based tensor upscaling - boundary conditions

Remember that the effective permeability calculated using open boundary conditions
is always greater than or equal to the effective permeability calculated with closed
boundary conditions.
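The bounding behaviour has a simple one-dimensional analogue: flow along the layers averages arithmetically (an upper bound), while flow across the layers averages harmonically (a lower bound), and any flow-based upscaled value lies between the two. This sketch uses hypothetical layer permeabilities and is an illustration of the bounds, not the full tensor computation.

```python
import numpy as np

# Layer permeabilities (mD) within one coarse cell - illustrative values
k = np.array([500.0, 50.0, 5.0, 100.0])

# Flow parallel to the layers behaves like resistors in parallel:
# arithmetic average (upper bound, analogous to open boundary results)
k_arith = k.mean()

# Flow across the layers behaves like resistors in series:
# harmonic average (lower bound, analogous to closed boundary results)
k_harm = len(k) / np.sum(1.0 / k)
```

Note how a single low-permeability layer drags the harmonic average down towards it, which is why thin barriers dominate vertical upscaled permeability.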

18.2 The Geopseudo Method

In the upscaling discussion above, we have looked only at the upscaling of static
properties. However reservoir engineering parameters such as relative permeability
(Kr) and capillary pressures (Pc) may also need to be upscaled. This is harder to
compute, yet these have a key bearing on the flow simulation.

One way round this problem is Pseudoisation, which is simply the application of the
Kyte and Berry method to compute pseudo Kr and Pc. Pseudoisation applications
are readily available on most commercial dynamic simulators.

Pseudoisation is illustrated in the example on Figure 116, where sector models of
sedimentary structures are sequentially upscaled to geologically significant length
scales. The objective is to determine the Kr and Pc curves to be used in the full-field
simulation of a massive cross-bedded reservoir sequence. Small-scale data (core
plug poroperm and SCAL) are available. The upscaling exercise starts by modelling
representative tabular cross-bedded units (Figure 116) to estimate Kr and Pc by
pseudoisation for cells around 2m x 0.5m in size. A larger scale sector of the reservoir
(~20m) is next constructed and the Kr and Pc values determined in the previous
pseudoisation are used as input in this model to compute upscaled Kr and Pc for
cells around 20m x 5m. Further upscaling can be carried out until the full-field grid
size is reached.


Figure 116 The pseudoisation method (GUP project, Heriot-Watt): create models of
sedimentary structures and upscale at geologically significant length scales

Finally, to verify that upscaling has not adversely affected the flow behaviour of
the fine grid reservoir, it is necessary to test the upscaled model by comparing flow
simulation results from representative fine-scale sector models (assumed truth case)
and their upscaled equivalents. This is time consuming, but is the only way to properly
validate an upscaled model.


19 Dynamic Simulation

Having constructed a geomodel with an acceptable number of grid cells, either
directly or after upscaling, it can now be loaded into a dynamic simulator to study
its flow behaviour.

There are two simulation approaches we can use for multiphase flow modelling.
These are:

Black Oil model, where we consider:

• Reservoir fluids limited to 3 components: water, oil and gas
• Properties at reservoir and at standard atmospheric conditions
• Composition does not change during production, only properties such as Bo

Compositional model, where we consider:

• Hydrocarbons defined by their composition (CH4, C2H6, ...) or pseudo-components
(groupings of components - eg: H2S, N2C1, C2C3C4, C5C6, C7+)


• An equation of state defines the thermodynamic equilibrium (eg the Peng-Robinson
equation)

• The composition of the oil and gas phases varies with time during the simulation

19.1 Finite Difference and Streamline Modelling

In terms of computing fluid flow in the reservoir, we can use Finite Difference (FD)
numerical methods, such as Eclipse, or the Streamline method. The main feature
of streamline simulation is the decoupling of flow from the pressure solution. For
this, streamline simulation computes a pressure field based on the input grid
parameters, and computes a (conceptually) separate dynamic streamline grid, which
is independent of the underlying pressure grid. Fluid movement takes place along
the streamlines. Streamline simulations are 10 to 100 times faster to compute than
equivalent FD simulations and can therefore test larger and more complex geological
grids, reducing the need for upscaling. However, streamline simulation is best suited
to viscous-dominated flow, where capillary or gravity flow components are less
significant. If these are significant, the streamline computation becomes more complex
and the speed advantage of streamlines is lost.

In other words, for modelling more complex fluid flow you should use FD modelling.

19.2 Single and Dual Porosity Modelling

Finally, there are three ways a reservoir can be handled in dynamic simulations. These
are summarised below and schematically illustrated on Figure 117:

Figure 117 Dynamic simulators: single porosity (flow between matrix (M) cells);
dual porosity (flow from matrix (M) cells into fracture (F) cells and along the
fractures); dual porosity - dual permeability (flow in both the matrix and
fracture grids)

Single Porosity Model

This model assumes that fluid flows from cell to neighbouring cell and that flow is
controlled by the matrix properties in these cells.


Dual Porosity Model

This model is used in fractured reservoirs, where flow is controlled essentially
by fractures. It assumes that matrix poroperm properties are too low for fluid to
move from cell to cell through the matrix; instead, fluid moves from the matrix into
fractures and thereafter flows (towards the producing well, for instance) solely along
the fracture network. Conceptually, the modelling involves two grids of identical
geometry, one holding the matrix properties and the other the fracture properties.
The fracture grid typically has lower porosities and higher permeabilities, hence the
name dual porosity modelling. This modelling requires an additional parameter,
called the Sigma factor, which relates matrix-to-fracture fluid transfer. This is a
difficult parameter to estimate and one of the main uncertainties in dual porosity
modelling of fractured reservoirs.

Note that this modelling technique requires twice as many grid cells as for single porosity
modelling, and therefore reduces the number of possible active cells accordingly.
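The role of the Sigma factor described above can be sketched with a Warren-Root style transfer term, in which the matrix-to-fracture exchange rate per unit bulk volume is directly proportional to Sigma. All numbers below are illustrative assumptions, not field values, and the formulation is a simplified single-phase sketch.

```python
# Illustrative Warren-Root style matrix -> fracture transfer term.
sigma = 0.04        # shape (Sigma) factor, 1/m^2 - the key uncertain input
k_matrix = 1e-15    # matrix permeability, m^2 (about 1 mD)
mu = 1e-3           # fluid viscosity, Pa.s
p_matrix = 2.0e7    # matrix pressure, Pa
p_fracture = 1.8e7  # fracture pressure, Pa

# Transfer rate per unit bulk volume (1/s): proportional to Sigma, so any
# error in the Sigma estimate maps directly into the simulated
# matrix-fracture exchange.
q = sigma * (k_matrix / mu) * (p_matrix - p_fracture)
```

Because q scales linearly with Sigma, an order-of-magnitude uncertainty in the Sigma factor produces an order-of-magnitude uncertainty in the matrix drainage rate, which is why it dominates dual porosity history matching.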

Dual Porosity - Dual Permeability Model

This modelling technique is the same as the dual porosity model, but has the added
possibility of modelling fluid movement between the matrix cells as illustrated on
Figure 117. This model gives greater simulation flexibility, but still requires the use
of a Sigma factor and all the uncertainties that this entails.

The grid cell restrictions of the dual porosity model also apply here, but with the added
problem that flow is now modelled both in the matrix and fracture grids, meaning
significantly longer CPU times.

19.3 Dynamic Simulator Input

Besides the static reservoir properties modelled from the geomodelling, additional
reservoir engineering properties are required as input in order to run a flow simulation.
These properties fall into three main classes:

1) Rock, Fluid and Saturation Functions

Rock compressibility is the only additional parameter required for the reservoir rock
(besides its static properties). Fluid properties are the PVT properties of the reservoir
fluids as listed in Table 3. Saturation functions are the associated pairs of relative
permeability (Kr) and capillary pressure (Pc) curves (Table 3).


Fluid properties
Bo    Oil FVF vs. pressure
Bw    Water FVF vs. pressure
Bg    Gas FVF vs. pressure
ρo    Oil density at standard conditions
ρw    Water density at standard conditions
ρg    Gas density at standard conditions
Rs    Gas in solution vs. pressure
μo    Oil viscosity vs. pressure
μg    Gas viscosity vs. pressure
μw    Water viscosity vs. pressure
Co    Oil compressibility
Cw    Water compressibility
Saturation functions
Pcwo vs. Sw    Water-oil capillary pressure (drainage and imbibition)
Pcgo vs. Sg    Gas-oil capillary pressure (drainage and imbibition)
Kro, Krw vs. Sw    Oil and water relative permeability functions (drainage and imbibition)
Kro, Krg vs. So    Oil and gas relative permeability functions (drainage and imbibition)
Kro, Krg, Krw    Three-phase oil, gas and water relative permeability functions

Table 3 Fluid properties and saturation functions as input parameters (Cosentino, 2001)

Note that there will be as many pairs of Kr and Pc curves as there are Petrophysical
Rock Types (Section 9) identified in the reservoir. In all cases, the saturation end
points Swir (irreducible water saturation), Sor (residual oil saturation) and Sgr
(residual gas saturation) must also be defined for each pair of Kr and Pc curves.
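As an illustration of saturation functions honouring their end points, a Corey-type parameterisation is commonly used for relative permeability. The end points, maximum values and exponents below are hypothetical and not tied to any rock type in the text; in practice each Petrophysical Rock Type gets its own fitted curve pair.

```python
import numpy as np

def corey_krw(sw, swir=0.2, sor=0.25, krw_max=0.4, n=3.0):
    """Illustrative Corey-type water relative permeability, zero at Swir
    and reaching krw_max at Sw = 1 - Sor. Parameter values are hypothetical."""
    swn = np.clip((sw - swir) / (1.0 - swir - sor), 0.0, 1.0)
    return krw_max * swn**n

def corey_kro(sw, swir=0.2, sor=0.25, kro_max=0.9, n=2.0):
    """Illustrative Corey-type oil relative permeability, zero at Sw = 1 - Sor
    and reaching kro_max at Swir."""
    son = np.clip((1.0 - sw - sor) / (1.0 - swir - sor), 0.0, 1.0)
    return kro_max * son**n

sw = np.linspace(0.0, 1.0, 101)
krw, kro = corey_krw(sw), corey_kro(sw)  # one curve pair for one rock type
```

The clipping enforces the end-point behaviour: water does not flow below Swir, and oil does not flow once only residual oil (Sor) remains.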

2) Field production and well constraints

These are the limiting conditions applied either to the field as a whole or to individual
injection or production wells. These constraining limits may be set for history
matching purposes in the case of fields with a production history, or because of
limiting constraints such as downhole limitations (borehole diameter etc) or surface
facilities limitations. Surface limitations include, amongst others: pipeline size; access
restrictions to pipelines; water and oil treatment limitations; injection capacity; and
finally market or regulatory constraints.

Constraints may also be set because of the development scenarios being modelled,
or the reservoir management strategy envisaged. Examples are: the use of horizontal
or vertical wells or a combination thereof; production support by water or gas
injection; reservoir pressure maintenance above the bubble or dew point (in the case
of a condensate reservoir). All of the above will mean setting constraints on the
wells and production.


The key field and well constraints are listed in Table 4 below.

Field/group production and injection constraints

Max oil production
Max water production
Max water injection rate
Max water injection pressure
Min average reservoir pressure
Separator pressure
Well production and injection constraints
Max total liquid rate
Min and max oil production rate
Min and max water injection rate
Min bottom hole pressure
Max water injection pressure
Well head flowing pressure

Table 4 Reservoir and well constraints (Cosentino, 2001)

3) Initial Conditions
The last reservoir engineering parameters required as input for dynamic simulation
are the initial reservoir conditions. These include the initial reservoir pressure and
the OWC, GWC and/or GOC.

The incremental time steps of the simulation may also be considered part of the initial
conditions, as is the decision to model the reservoir with an active or inactive aquifer.

19.4 Dynamic Simulator Output and Results

Dynamic modelling, coupled to economic assessment, is the primary tool for estimating
recoverable reserves and the economic potential of a field.

A multitude of production outputs can be computed from dynamic simulation, including
the cumulative production of oil and gas, which constitutes the field's reserves.

The simulator models the same production parameters commonly recorded during
production (Figure 118). Typically, output parameters such as cumulative oil, gas
and water production, reservoir pressure changes, and oil, gas and water production
rates are computed at each time increment of the simulation and can readily be
plotted against time for comparison and evaluation. Modern flow simulators also
have powerful 3D visualisation graphics that can be used as analytical tools to study
the evolution of reservoir conditions (saturation, reservoir pressure etc) in time and
space. This permits the visualisation in space and time of displacement fronts, sweep
efficiency etc.


Figure 118 Typical field production profile: oil production rate, water production
(BSW), producing gas/oil ratio and reservoir pressure versus time

Because simulations are run over pre-determined time spans, they are instrumental
in predicting key parameters such as future production (oil, water and gas) for a
field, and are used as primary input for economic analyses.

In cases where the field has been on production for a length of time, the results from
the simulation can be directly compared to the real field production data. This is called
history matching. In case of mismatches, the reservoir parameters can be modified
or adjusted until the predicted and actual production histories of the field give a good
match. Any adjustments to the reservoir parameters made to obtain a good history
match should be internally coherent, as history matching has non-unique solutions
and a match may eventually be reached, but for the wrong reasons.

Dynamic simulations are also useful for testing different development scenarios or
management strategies. The outputs can be compared and ranked on any basis we
may wish, such as:
• Economic criteria (maximum production in the shortest time - Scenario 1, Figure 119)
• Maximum reserves recovery (not time critical - Scenario 2, Figure 119)
• Minimum environmental impact (fewer wells, re-injection of produced water
and/or gas)
• Testing reservoir management ideas for improved recovery (IOR - Figure 119)
• Comparing results from several equiprobable geomodels to assess modelling
uncertainty
• Comparing results between Max, ML and Min modelling scenarios (oil production
- Figure 119)

Note that for any given oil field, production will at some stage fall below some economic
threshold, at which point the field becomes uneconomic to produce. This threshold
will vary according to both the oil price and financial constraints (Capex, Opex, tax
regime etc). Figure 119 shows how for Scenario 1 the field becomes uneconomical
from 1997, while this date is shifted to 2003 for Scenario 2. The same figure also
illustrates how gas injection from 1989 is predicted to significantly improve recovery
and increase the economic life of the field to beyond 2002.
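The economic cutoff logic can be sketched as a simple screening of a forecast rate profile against an assumed threshold. The decline profiles, threshold and dates below are hypothetical, loosely echoing the Scenario 1 / Scenario 2 comparison in the text rather than reproducing it.

```python
def abandonment_year(years, rates, economic_limit):
    """Return the first year whose rate falls below the economic limit,
    or None if the field stays economic over the whole forecast."""
    for year, rate in zip(years, rates):
        if rate < economic_limit:
            return year
    return None

years = list(range(1990, 2006))
scenario_1 = [12.0 * 0.80**i for i in range(len(years))]  # steep decline
scenario_2 = [12.0 * 0.88**i for i in range(len(years))]  # slower decline (eg gas injection)

limit = 2.5  # assumed economic rate threshold (arbitrary units)
cutoff_1 = abandonment_year(years, scenario_1, limit)
cutoff_2 = abandonment_year(years, scenario_2, limit)
```

The slower decline pushes the cutoff year later, extending the economic life of the field in the same way the gas-injection scenario does in Figure 119.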
Figure 119 Dynamic simulators output: cumulative oil production (recoverable
reserves) and oil production rates for Scenario 1 and Scenario 2 (gas injection),
1980-2002

Figure 120 shows a modelled comparison between a single horizontal producer and
5 vertical producers. It is clear that the horizontal well shows a significant improvement
in recovery.

Figure 121 shows the results from a simulation of a fractured reservoir. Oil recovery
is shown to increase with water injection until injection rates reach around
200 m3/day/well. Beyond that rate, recovery decreases significantly due to early water
breakthrough and increased by-passed oil. This would naturally adversely impact
the field economics.
Figure 120 Recovery - horizontal versus vertical producers: cumulative oil over 20
years gives a 27.6% recovery factor for 1 horizontal production well versus 21.5%
for 5 vertical production wells

Figure 121 Injection rates versus recovery: recovery (%) versus injection rate
(Sm3/day/well) for a 20-year production scenario

In conclusion, the possible uses of geomodelling, coupled to dynamic flow simulation,
are limited only by the imagination of the geoengineer and the reservoir management
team in solving the problems with which they are faced.


Example 1

Examine the structure of geoscience and engineering in your organisation or within
your education. What positive and negative aspects contribute to the integration of
geoscience and engineering and promote teamwork?

Example 2

Describe the attributes of teams that you have experienced which

1) have been successful

2) have failed

How far are internal structures and communications responsible?


References

1. Amaefule, J.O., Altunbay, M., Tiab, D., Kersey, D.G., and Keelan, D.K., 1993.
Enhanced reservoir description: using core and log data to identify hydraulic
(flow) units and predict permeability in uncored intervals/wells. SPE paper
26436, 205-220.
2. Busch, D.A. et al., 1985. Exploration Methods of Sandstone Reservoirs. OGCI
3. Caers, J., 2005. Petroleum Geostatistics. SPE
4. Chidsey, T.C., Adams, R.D., Morris, T.H., 2004. Regional to wellbore analog
for fluvial-deltaic reservoir modelling: The Ferron Sandstone of Utah. AAPG
Studies in Geology, No. 50
5. Cowan G., et al., 1993. The use of dipmeter logs in the structural interpretation
and palaeocurrent analysis of Morecambe Fields, East Irish Sea Basin. In: J.R.
Parker, ed., Petroleum Geology of Northwest Europe: Proceedings of the 4th
Conference, The Geological Society, London, p. 867-882
6. Deutsch, C, 2002. Geostatistical Reservoir Modeling. Oxford University Press
7. Dubrule, O, 1998. Geostatistics in Petroleum Geology. AAPG - Continuing
Education Course Note Series # 38
8. Hatton, I.R. et al, 1992. Techniques and applications of petrophysical correlation
in submarine fan environments, early Tertiary sequence, North Sea. In: Geological
Applications of Wireline Logs II (A. Hurst, C.M Griffiths & P.F.
Worthington, eds.) Geological Society, London, Special Publication, 65, 21-30
9. Horne R. N, 1995, Modern Well Test Analysis, Second Edition (2000).
Petroway Inc
10. Lawrence D.A., 2002, Net sand analysis in thinly bedded turbidite reservoirs
- case study integrating acoustic images, dipmeters and core data. Paper HH,
SPWLA 43rd Annual Logging Symposium, June 2-5, 14 pp
11. Nelson P.H, 2004, Rock Evolution on the Permeability-Porosity Plane: Data
Sets and Models. AAPG Hedberg Conference, Austin, 8-11 February 2004.
12. Nelson P.H, 1994, Permeability-Porosity Relationships in Sedimentary Rocks,
Log Analyst, p38-64
13. Reading H.G (editor), 1978. Sedimentary Environments and Facies, Second
Edition (1986). Blackwell Scientific Publications


14. Ringrose, P, 2003, Geological Reservoir Modelling Course, Statoil visiting

15. Scholle P.A et al, 1982, Sandstone Depositional Environments. AAPG, Memoir 31
16. Selley R. C. , 1985, Elements of Petroleum Geology, Second Edition (1998),
Academic Press
17. Serra O., 1985, Sedimentary Environments from wireline logs, Schlumberger
18. Van Wagoner, J.C. et al., 1990. Siliciclastic Sequence Stratigraphy in Well Logs,
Cores and Outcrops. AAPG Methods in Exploration Series, No. 7
19. Speers R et al, 1992. Managing Uncertainty in Oilfield Reserves. Schlumberger
Middle East Well Evaluation Review, vol 12
20. Snyder, R.H.:”A Review of the Concepts and Methodology of Determining
‘Net Pay’,” SPE 3609 presented at the 1971 SPE Annual Meeting, New
Orleans, Oct. 3-6
21. Vavra, C.L., Kaldi, J.G., and Sneider, R.M.: “Geological Applications of
Capillary Pressure: A Review,” AAPG Bulletin (June 1992) 76, No. 6, 840-50
22. Cobb, W.M. and Marek, F.J.: “Net Pay Determination for Primary and
Waterflood Depletion Mechanisms,” SPE 48952 presented at the 1998 SPE
Annual Technical Conference and Exhibition, New Orleans, Sept. 27-30
23. SPE/WPC Reserves Definitions Approved JPT (May 1997) 527.


To be distributed during the course.

Data Integration T H R E E









5.1. Norwegian Case Study
5.2. UK Case Study
5.3. Summary of Well Test and Core









R Carbonate Geomodelling


Having worked through this chapter the students will understand:

• Fundamental concepts for integration of data.

• Aspects of permeability - porosity relationships.

• Challenges of dynamic and static integration.

• Role of geochemistry in reservoir description.



Data integration in the petroleum industry is a major challenge. It is not something

that naturally happens - in fact the development of specialists in sedimentology,
geochemistry, geophysics and simulation acts against integration. Specialists might
easily be totally unaware of the contribution the other scientists could make to address
their respective problems. Indeed, it may be that each is unaware of the other, and
unaware of the science! It is in this aspect of integration across disciplines that the
geoengineer is most able to contribute to the petroleum industry.

Data integration is a key aspect of reservoir management. “The best way to identify
and quantify rock framework and pore space variations is through the deliberate and
integrated use of engineering and earth-science (geoscience) technology” (Harris and
Hewitt, 1977). Understanding and awareness of other technologies will promote the
free exchange of ideas. Integration helps reduce uncertainty.

In the following sections we examine some of the issues in data integration in the
areas of petrophysics, geochemistry, seismics and geostatistics. The review is not
by any means exhaustive, but serves to show that cross-disciplinary integration does
not follow a universal recipe. On the contrary, the solution to a reservoir problem
often lies in a unique blend of disciplines. The challenge for the geoengineer is to
make sure the appropriate integration happens - by developing recipes where they
don't exist. In each case study presented - each acting as a model recipe - the value of
integration speaks for itself: can industry afford not to integrate data? "Integrated is
always better than disintegrated" even if the word is rather fashionable, writes Luca
Cosentino in the preface to a book devoted to this subject (Cosentino, 2001).


Data integration is usually through models (e.g., linear regression models, reservoir
simulation models, etc.). There are a number of fundamental issues in modelling
that have bearing on data integration.

Support and stationarity. Geostatistical measures and modelling techniques assume
that the support and stationarity of the data are well understood. Data are said to have
appropriate support when small changes of volume and location don't produce large
changes in the measurement (Bear, 1972; Haldorsen, 1986; Anguy et al., 1994). The
concept of the Representative Elementary Volume has been applied at the pore scale,
lying above the threshold between the microscopic (where individual grains affect
the measurement) and the macroscopic (where the rock behaves as an effective
porous media) (Figure 1). Other representative elements can be determined at the
laminaset, bedset and parasequence set scales.

Figure 1 Definition sketch of a porous media (after Bear, 1972 and Haldorsen, 1986)

Figure 2 Permeability profiles on the four sides of a small sandstone block measured by
small and large probe permeameter compared with measurements on cubes (Corbett et
al, 1999)

In Figure 2, an experiment has been conducted to consider the upscaling of probe

permeameter data to a cubic Hassler cell. Because the rock (sandstone) is relatively
homogeneous the upscaled measurement is a simple (arithmetic) average of the small
scale measurements. The fine scale probe data show that permeability is relatively
insensitive to volume scale or location. These data are appropriate for modelling and
simulation as they represent the effective properties of the porous media.


Figure 3 Repeat of the experiment described in Figure 2 for a carbonate block. In this case
the different scales of measurement give different values of permeability (Corbett et al, 1999)

The experiment described above has been repeated in a very heterogeneous carbonate
sample (Figure 3). This time the probe measurements and Hassler cube measurements
are not comparable, with the larger-scale measurements "observing" both lower and
higher permeabilities. This is clearly a sample with a sample support problem.
There is little confidence in the data: it is not clear whether a systematic averaging
procedure can be used, and there is a question mark over the data values themselves
because the rock is not an effective porous media at this scale. The devices are
responding to local disconnected vugs. These data should not be used for modelling
without careful further analysis.

A comparison of upscaled (averaged) probe data and the cube shows that the appropriate
upscaling average is between harmonic and geometric.
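The three classical averages are easy to compute side by side for any permeability profile. A minimal sketch (the permeability values below are illustrative, not the measured probe data):

```python
import math

def arithmetic(perms):
    # Upper bound: appropriate for flow along continuous layers
    return sum(perms) / len(perms)

def geometric(perms):
    # Appropriate for a random (uncorrelated) permeability field
    return math.exp(sum(math.log(k) for k in perms) / len(perms))

def harmonic(perms):
    # Lower bound: appropriate for flow in series across layers
    return len(perms) / sum(1.0 / k for k in perms)

perms_mD = [12.0, 150.0, 860.0, 95.0, 2400.0]  # hypothetical probe values
print(arithmetic(perms_mD), geometric(perms_mD), harmonic(perms_mD))
```

For any data set, harmonic ≤ geometric ≤ arithmetic; the carbonate result in Figure 4 (effective value between harmonic and geometric) shows that the correct average cannot be chosen a priori without knowing the permeability architecture.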


Figure 4 Ultimate test of upscaling from probe to cubes for the limestone experiment
shown in Figure 3: arithmetic, geometric and harmonic averages of the probe data are
each plotted against the cube permeability (Kcube). How do you determine the effective
property? (Corbett et al, 1999)

We have encountered averaging as an upscaling technique; however, the discussion
of sample support shows that there may not be any simple averaging technique
(and certainly not a general one for porous media). In petrophysics, one is often
faced with the need to correlate between different petrophysical properties. We have
distinguished between cross-scaling and upscaling (Figure 5) to clarify the various
issues involved. Cross-scaling should ideally be carried out on consistent, isotropic,
homogeneous samples (or at least on some representative elementary scale).

Figure 5 Definition of upscaling and cross-scaling (Corbett et al., 1998). a: Up-scaling
spans measurement volumes from probe (~10⁻⁷ m³) through plug (~10⁻⁵ m³) and
MDT/RFT (~10² m³) to DST (~10⁶ m³). b: Cross-scaling correlates different properties
(e.g., plug permeability vs. log density) at comparable volumes.



Nelson (1994) reviews porosity-permeability relationships for a range of sands, clays
and sandstones. Examples of linear relationships on log(k) vs. φ plots are compared
with examples where there is little relationship. Examples of clastics and carbonates are
given. Clustering the data by facies, grain size or clay content can reduce the scatter to
manageable subsets, providing these can be recognised on the logs. Several empirical
models and log-based predictors are given. A summary sketch for the impact of grain
size, sorting, clay and interstitial cements upon poroperm trends is given in Figures 6 and 7.

Figure 6 A summary sketch for the impact of grain size, sorting, clay and interstitial
cements upon poroperm trends (Nelson, 1994)






Figure 7 Berg's theoretical model for poroperm relationships incorporating grain size
(lines are lines of increasing grain size). Case shown is for a well sorted sand


As a summary for practical applications, Nelson (1994) offers the following:

1. In the absence of well logs and core, look for a suitable petrophysical analogue

2. When porosity and grain size estimates are available use Berg’s theoretical
model (Figure 7).

3. With porosity and water saturation, permeability can be estimated from Timur's
equation (k = aφᵇ/Swc², where a and b are constants and Swc is the connate
water saturation).

4. Where core data are abundant generate an empirical predictor.

5. Use pore dimensions from capillary pressure to determine permeability (this

can possibly be done from cuttings).
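Item 3 above (Timur-style prediction) is straightforward to apply once the constants have been calibrated; a sketch, in which the values of a and b are purely illustrative placeholders, not Timur's published coefficients:

```python
def timur_permeability(phi, swc, a=10000.0, b=4.4):
    """Timur-style estimate k = a * phi**b / Swc**2 (k in mD).
    phi and swc are fractions; a and b are illustrative placeholders
    that must be calibrated against core data for a given field."""
    return a * phi ** b / swc ** 2

# Higher porosity, or lower connate water saturation, gives higher k
print(timur_permeability(0.25, 0.20))
```

Note the strong sensitivity to Swc: halving the connate water saturation quadruples the predicted permeability, whatever the calibration constants.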

Permeability prediction in the subsurface is often a critical part of any field description,
one that is often glossed over in the geostatistical simulations of plug data - issues to
remember are representivity and upscaling.

Under certain circumstances it is useful to be able to predict permeability whilst drilling.
In the case of a horizontal well, knowing the permeability for each foot drilled can
be used to judge the appropriate length of horizontal section. Permeability whilst
drilling has been required in the offshore part of the Wytch Farm Field (S. England)
and has been achieved by using density logging-whilst-drilling (LWD) and estimation
of average grain size (from sieve analysis of ditch cuttings) (Figure 8). Because of the
poorly sorted nature of the braided fluvial Sherwood Sandstone (Triassic), porosity
alone (from the density log) is unable to predict permeability. This technique
is able to optimise perforation strategy by quantifying the most productive intervals.

Figure 8 Poroperm relationships for various grain size classes (Harrison, 1994)


Models for cementation can be used to examine the poroperm relationships in response
to diagenesis. Quartz overgrowths reduce the smaller pore throats more than the
larger pore throats and this is reflected in the resulting poroperm curves (Figure 9).
Combining grain size variations and diagenetic modification can lead to a large scatter
in the poroperm data (e.g. braided fluvial reservoirs).




Figure 9 Poroperm relationship for a simple quartz overgrowth model (Bryant et al.)
When log-based predictors are being used one also has to consider the effects of upscaling
(plug - log response) and cross-scaling (perm - density). The permeability prediction
in a braided fluvial reservoir was improved when upscaled probe data (6-inch running
window) were compared to 6-inch resolution microresistivity data (Ball et al., 1994). In
this reservoir, there was a poor relationship between plug permeability and wireline
density (Figure 10). The relationship between resistivity and permeability is supported
by laboratory measurements (Figure 11).

Any model for permeability prediction (such as an empirical model of correlation

between resistivity logs and core permeability) should be calibrated against some
dynamic data, such as that provided by production logging data (Figure 12).


Figure 10 Cross-scaling and up-scaling for permeability prediction. Top: benefit of
a high resolution log with a small volume of investigation (vs. the traditional low
resolution log and core plug); Bottom left: upscaled probe permeability vs. wireline
microresistivity (MSFL), R² = 0.81; Bottom right: core plug permeability vs. wireline
density, R² = 0.54; key data points are numbered for comparison (after Ball et al., 1994)

Figure 11 Laboratory correlation between resistivity and permeability: log k = 9.0617 -
5.7724 log F (R² = 0.902), where F is the formation resistivity factor (Jackson et al.)



Figure 12 Calibration of permeability prediction with production logging data (PLT)
(from Ball et al, 1994)

A later study (Thomas et al, 1996) showed how probe data could be upscaled to
various scales by using arithmetic average for horizontal and harmonic average for
vertical permeability for comparison with MDT measurements (Figures 13 & 14).

In Figure 13, the MDT-determined kv (MDT kv) and MDT-determined kh (MDT kh) are
shown for the interval of the MDT measurement (4238 - 4254 ft). In this same interval there
are few kv plug measurements and each is significantly higher than the MDT estimate.
The (horizontal) probe permeability measurements detect a low permeability interval
at 4243.5 ft. This interval will control the vertical permeability as shown by the MDT kv.
The probe detects horizontal permeabilities similar to the MDT kh.


Figure 13 Integration between MDT and geology, Sherwood Sandstone, Irish Sea Basin:
horizontal and vertical plug permeabilities, probe permeameter profile and well test
(WT) kh and kv shown against a logged section of clean sands and clay drapes (Thomas
et al, 1998)

A running harmonic average of the probe measurements is used to estimate vertical
permeability. A running arithmetic average is used to estimate horizontal permeability.
The window for the running average is expanded according to the measurement
interval to give estimates at different scales (Figure 14).
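The running-window procedure can be sketched as follows (regular sample spacing is assumed, and the window length in samples is illustrative):

```python
def running_average(perms, window, kind="arithmetic"):
    """Centred running-window average of a regularly sampled permeability
    profile. A harmonic window approximates vertical (series) flow across
    layers; an arithmetic window approximates horizontal (parallel) flow."""
    half = window // 2
    out = []
    for i in range(len(perms)):
        w = perms[max(0, i - half): i + half + 1]
        if kind == "harmonic":
            out.append(len(w) / sum(1.0 / k for k in w))
        else:
            out.append(sum(w) / len(w))
    return out
```

Expanding the window moves the estimates from bed scale towards bedset scale, as in Figure 14; note how a single thin low-permeability sample drags the harmonic (kv) estimate down far more than the arithmetic (kh) one.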


Figure 14 Permeability anisotropy (kv/kh) predictor vs. MDT: windowed probe
estimates and their average over measurement intervals from bed scale to bedset scale
(Thomas et al, 1996)

The MDT measurement falls within the upscaled envelope of probe measurements
and is close to the effective kv/kh for the interval.


Sedimentologists cluster sediments with textural variations into grain size and sorting
classes. Porosity and permeability are driven by variations in grain size and sorting.
Therefore it should be possible to cluster porosity and permeability into classes in
a similar way.

A Hydraulic (Flow) Unit (HU) was defined as the representative elementary volume
(REV) of the total reservoir rock within which the geological and petrophysical properties
that affect fluid flow are internally consistent and predictably different from the properties
of other rock volumes (Amaefule et al., 1993).

The HU's for a hydrocarbon reservoir can be determined from core analysis data (k
& φ). This technique was introduced by Amaefule et al. (1993), and involves
calculating the flow zone indicator (FZI) from the pore volume to solid volume ratio
(φz) and reservoir quality index (RQI) through Equation 1.

	FZI = RQI / φz = [0.0314 √(k/φ)] / [φ / (1 - φ)]	(1)

where k is in mD and φ is a fraction.

From FZI values, samples can be classified into different HU's. Samples with similar
FZI values belong to the same HU (Mohammed, 2002; Mohammed and Corbett, 2002).
The permeability and porosity data have been classified into seven distinct HU's with
different hydraulic properties (Figure 15) in a well from a reservoir.



Figure 15 A k-phi cross plot showing Hydraulic Units for routine plugs (left) compared
with a single empirical relationship (right)

The Hydraulic Unit approach is a ‘rock typing’ approach to clustering core plug data
(Unit here is a unit in porosity-permeability space - not one with physical dimensions).
Other rock typing approaches have also been proposed. Winland (this method is
attributed to Dale Winland of Amoco, but has never been published although it is
discussed by Spearing et al., 2001) established an empirical relationship between
porosity, permeability, and pore throat radius from mercury injection capillary pressure
(MICP) measurements in order to obtain net pay cut-off values in some clastic
reservoirs. Winland rock typing is based on samples with similar R35 belonging to the
same rock type. Essentially, Winland rock typing and HU rock typing give a consistent
breakdown of the porosity-permeability data, and an R35 value can be determined for
the same rock types as determined by an FZI value, and vice versa.
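Winland's relationship is commonly reported in the rock-typing literature as log R35 = 0.732 + 0.588 log k - 0.864 log φ, with k in mD, φ in percent and R35 in microns; a sketch on that assumption (the form is not given in this text and should be checked against Spearing et al., 2001):

```python
import math

def winland_r35(k_mD, phi_percent):
    """Winland R35 pore-throat radius (microns):
    log10(R35) = 0.732 + 0.588*log10(k) - 0.864*log10(phi),
    with k in mD and phi in percent (commonly reported form)."""
    return 10.0 ** (0.732
                    + 0.588 * math.log10(k_mD)
                    - 0.864 * math.log10(phi_percent))
```

R35 increases with permeability and, at fixed permeability, decreases with porosity, so iso-R35 lines sweep across the poroperm plot much like iso-FZI lines.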

HU classes are strongly texturally controlled (Mohammed, 2002; Corbett et al, 2003).
Correlations were observed between grain size, sorting and HU. HU's therefore
represent a fundamental link with the depositional texture. Sedimentologists classify
sands by grain size classes, which are clusters based on a systematic series of median
grain diameters.

In two field studies, different numbers of HU's, with different FZI values, occurred
in separate wells. However, the numbers of HU's seen in these fields did not vary
greatly and the differences between HU's were often small. It was possible in each case
to develop a small number of HU's for each field. It becomes important to consider
how separate the HU's should be.

As permeability variation within an order of magnitude is relatively small in the

context of a reservoir and as grain size classes are limited in number, 10 classes have
been initially considered.

For a given porosity, the permeability can be calculated by rearranging Equation 1
to give Equation 2, as follows,


	k = φ [FZI × (φ / (1 - φ)) / 0.0314]²	(2)

and using this equation, lines for constant FZI can be determined. Selecting a systematic
series of FZI values allows the determination of HU boundaries to define 10 porosity-
permeability elements (termed Global Hydraulic Elements). The definition of these
boundaries is arbitrarily chosen in order to split a wide range of possible combinations of
porosity and permeability into a manageable number of Hydraulic Elements (Table 1).


FZI	GHE		FZI	GHE
48	10		1.5	5
24	9		0.75	4
12	8		0.375	3
6	7		0.1875	2
3	6		0.0938	1

Table 1 Hydraulic Unit boundaries (shown as FZI values) for 10 Global Hydraulic
Elements

The Global Hydraulic Element approach is being taken in a number of modelling
studies, reducing the complexity of the porosity-permeability modelling. Four GHE's
were recognised in a Siberian field (Field K, Figure 16). These GHE's occur
systematically in the coarsening-up tidal sand body. The lateral variation of the
elements between wells was considered before building a reduced simulation model
(i.e., a deterministic object approach to modelling, rather than a full pixel petrophysical
simulation), which was used to generate synthetic well test responses for well test design.
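Equation 1 and the Table 1 boundaries translate directly into a plug classifier; a minimal sketch (boundary values taken from Table 1):

```python
import math

# FZI lower bounds for GHE 1..10 (Table 1)
GHE_BOUNDS = [0.0938, 0.1875, 0.375, 0.75, 1.5, 3.0, 6.0, 12.0, 24.0, 48.0]

def fzi(k_mD, phi):
    """Flow zone indicator (Equation 1): RQI / phi_z,
    with RQI = 0.0314*sqrt(k/phi) and phi_z = phi/(1 - phi)."""
    rqi = 0.0314 * math.sqrt(k_mD / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

def ghe(k_mD, phi):
    """Return the Global Hydraulic Element (1-10) for a plug,
    or 0 if FZI falls below the lowest Table 1 boundary."""
    value = fzi(k_mD, phi)
    element = 0
    for i, bound in enumerate(GHE_BOUNDS, start=1):
        if value >= bound:
            element = i
    return element
```

Because the boundaries are fixed in advance, plugs from different wells (or fields) classified this way land in directly comparable elements, which is the point of the "global" template.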

Figure 16 Field K k-phi cross plot showing Hydraulic Elements for routine plugs; the
lines represent the lower FZI bounds of the Global Hydraulic Elements (GHE1-GHE10)
and four elements (GHE3-GHE6) are recognised in the field data


The Global Hydraulic Element approach uses specific HU’s (FZI values) as the
boundaries between classes, in a similar way that certain median grain sizes are used
in sedimentology. The GHE approach provided a useful reduced (in complexity)
simulation model for engineering studies and a link between the geology and the model.

Ellabad (2003) showed the link between the Lorenz plot and Hydraulic Units for a
heterogeneous North African fluvial reservoir (Lc = 0.74). The flow into the well
shown by the plot is dominated by one of the hydraulic units (HU 1 in this case).
The modified Lorenz plot and the PLT clearly show the influence of this HU on the
inflow performance (Figure 17). When 70% of the flow comes through a single thin
zone, there is always scope for confusion with a fractured reservoir, though in this
case there are plug measurements supporting matrix flow (and fractures are not
usually measured in core plugs).
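The Lorenz coefficient quoted above (Lc = 0.74) follows from the standard construction: order intervals by flow speed (k/φ), plot cumulative flow capacity (kh) against cumulative storage capacity (φh), and take twice the area between the curve and the diagonal. A sketch under that definition:

```python
def lorenz_coefficient(k, phi, h=None):
    """Lorenz coefficient (0 = homogeneous, -> 1 = extremely heterogeneous)
    from interval permeability, porosity and (optionally) thickness."""
    n = len(k)
    if h is None:
        h = [1.0] * n  # equal-thickness intervals
    order = sorted(range(n), key=lambda i: k[i] / phi[i], reverse=True)
    total_kh = sum(k[i] * h[i] for i in order)
    total_ph = sum(phi[i] * h[i] for i in order)
    x, y = [0.0], [0.0]
    for i in order:
        x.append(x[-1] + phi[i] * h[i] / total_ph)  # storage capacity
        y.append(y[-1] + k[i] * h[i] / total_kh)    # flow capacity
    # trapezoidal area under the curve, minus the 0.5 under the diagonal
    area = sum((x[j] - x[j - 1]) * (y[j] + y[j - 1]) / 2
               for j in range(1, len(x)))
    return 2.0 * (area - 0.5)
```

A homogeneous profile gives Lc = 0; a profile where one thin interval carries nearly all the flow capacity, as in this well, pushes Lc well above 0.5.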

Figure 17 Hydraulic units in a North African field showing the dominance of one HU on
flow. Top left: poroperm plot showing the various HU's. Bottom left: Lorenz plot showing
that 70% of the flow capacity in this well is from HU1. Right: PLT correlated with modified
Lorenz plot showing the location of HU1 in a single perforated zone (Ellabad, 2003)


Well testing provides a unique insight into the effective in-situ permeability of
reservoirs. As a cross-check on the veracity of the interpretation the permeabilities
are usually compared with core plug data, when available. However, the plug data
need to be upscaled in order to be comparable with the scale of measurement of the
well test. We have investigated the comparison of well test data and core data in


two fluvial case studies, where the reservoirs are characteristically heterogeneous
because of the poorly sorted nature of the sediments. The pitfalls in applying simple
averaging for upscaling are explored.

Permeability measurements from core plugs and well tests are derived from
interpretations of pressure and rate data under some flow assumptions. Core data are
usually derived from assumed linear flow in small core plug samples. The well test
response is derived from a radial flow assumption over a significantly larger volume.
There are theoretical and/or statistical methods for the integration of core and test
data (Oliver, 1990; Deutsch, 1992; Desbarats, 1994). However, this contribution
illustrates an alternative pragmatic approach. This subject is also addressed in a case
study from a North Sea Jurassic reservoir (Zheng et al., 2000).

In wells where both coring and well testing have been undertaken across the same
intervals, the opportunity arises for comparison of the two types of permeability
measurement at different scales. Comparisons have to be made on the basis of some
assumptions:

1. Representivity and upscaling. The representivity of the core within the volume of
investigation of the well test and the representivity of the core plug samples
of the well bore. The appropriate upscaling or averaging of the plug data for
the well test volume. Often, the assumption will be made that the geology is
layered (requiring the arithmetic average of plug data) or random (requiring
the geometric average).

2. Downhole vs. surface conditions. Where stressed measurements of permeability

on plugs are taken these are generally found to be less than those taken at
ambient conditions.

3. Relative permeability effects. In well testing usually only one phase is flowing
for the length of time of the test.

It is the first problem of comparison that this contribution addresses. For the purposes
of the two case studies presented here, the stress effects and relative permeability effects
have been assumed to be of less significance (the end point oil relative permeability
in the UK data set is 80% of the absolute perm) than the effects of representivity
and upscaling.

We find that the comparison between plug and well test is meaningless if:

1. The petrophysical description of the core is insufficient

2. The geological controls on permeability are ignored

3. The geological architecture over the volume of investigation is not taken into
account.


This is fairly obvious, but you'd be surprised how few published examples there are
of comparisons between well test and core data. The common factor linking the
case studies presented here is that both are fluvial reservoirs. Fluvial reservoirs,
whether high or low net:gross, are characteristically heterogeneous because of the
poorly sorted nature of the sediments in a high energy depositional environment
(Brayshaw et al., 1995).
The effective permeability (i.e., the permeability that a grid block of a certain volume
will have) at various scales for use in reservoir simulations is a critical issue in such
reservoirs. It is no coincidence that the integration of petrophysical and well test data
provides the greatest challenge - and potentially the greatest reward - in understanding
such reservoirs.

5.1. Norwegian Case Study

The first case study comes from a well in the northern North Sea (Tampen Spur
area, Offshore Norway). The reservoir unit tested is a braided fluvial channel of
Lower Jurassic age. In this example, the well test permeability was interpreted to be
significantly lower than the core plug permeabilities (Figure 18), prompting a concern
over the effective permeability of the interval.


Figure 18 Plug perms over the Norwegian test interval

The core plugs taken at a regular 0.25m spacing show the interval to be heterogeneous,
with permeabilities in the tested interval ranging from 10mD to 10000mD. The
arithmetic average of the core plugs is 963mD, the geometric average 254mD and the
coefficient of variation 1.9. Using the No concept of Hurst and Rosvoll (1991) for
the number of samples and the level of variability suggests that the "true" arithmetic
average lies between 318 and 1608mD. Because of the low number of samples for
this level of variability, there is great uncertainty in the arithmetic average (and other
statistics) derived from the data.
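The No criterion is commonly expressed as No = (10Cv)², with the 95% fractional tolerance on the arithmetic mean approximately 2Cv/√N. A sketch using those expressions (the plug count of 32 is inferred here from the 0.25m spacing over the tested interval, and is not stated in the text):

```python
import math

def n_zero(cv):
    # Samples needed to know the mean to within +/-20% (~95% confidence)
    return (10.0 * cv) ** 2

def mean_tolerance(cv, n):
    # Approximate 95% fractional tolerance on the arithmetic mean
    return 2.0 * cv / math.sqrt(n)

cv, n, mean_mD = 1.9, 32, 963.0   # n = 32 plugs assumed (0.25 m spacing)
tol = mean_tolerance(cv, n)
print(n_zero(cv), mean_mD * (1 - tol), mean_mD * (1 + tol))
```

With these inputs the bounds come out close to the 318-1608mD range quoted above, while No of several hundred samples would be needed for a ±20% estimate at this variability.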


The derivative of the pressure response from the well test (Figure 19) shows radial
flow in the Middle Time Region (MTR) from which a well test permeability of 345mD
with a skin of -1.67 was derived. The radial flow becomes linear with time and
such a flow regime is to be expected from a channel sandstone where the “parallel”
boundaries for the channel give rise to a 1/2 slope on the derivative of pressure plot.
In the Late Time Region (LTR), a bilinear flow regime is seen (1/4 slope on the
derivative) suggesting a reduction in permeability at some greater distance from the
well (Figure 19).

Figure 19 Log-log plot of pressure and derivative for the Norwegian test

The well test permeability of 345mD is within the error bands of the arithmetic average
permeability and could be indicating layer-parallel flow. Alternatively, the well test
average is closer to the geometric sample average, which could indicate a random
permeability distribution. The higher permeabilities towards the base of the channel
sandstone are consistent with simple models for channel fill (i.e., fining upward)
and suggest a geological control. Additional data acquisition revealed further
permeability structure (Figure 20). The cores exhibit very marked cross-bedding with
large grain size variations. These result in marked permeability contrasts at the
lamina scale as measured by the probe permeameter (Brendsdal and Halvorsen, 1993).
The probe data show dramatic variation (Figure 20) and reveal additional high
(>10000mD) and low (<10mD) permeability zones that were not detected by the
plugs. The arithmetic average of the probe data is 1038mD, the geometric average
306mD and the coefficient of variation 1.85. The number of measurements exceeds
the No criterion and the true mean lies within ±20% of 1038mD (Corbett and Jensen,
1992). Clearly, the well test permeability is significantly less than the arithmetic
mean of the interval.




Figure 20 Probe data across the Norwegian test interval

The effects of the lamination were then considered. The presence of several high
permeability laminations intersecting the well bore (solid arrows in Figure 20) could be
responsible for the negative skin observed. Commonly a phenomenon associated with
fractures, in this example the "geoskin" appears to be derived from depositionally-
controlled permeability contrasts. The laminae, related to cross-bedding, are unlikely
to be laterally extensive away from the well bore. Flow in the immediate well bore
region is therefore enhanced by the presence of high permeability laminae.

Within the formation, away from the wellbore, the effective (single phase) flow should
approximate the harmonic average of the laminae, as flow will be across the laminae.
This configuration was considered for a four-layer model in which the permeabilities
(derived by the harmonic average within the layers) were 651, 168, 297 and 1169mD,
respectively. The arithmetic average for this layered model is 571mD. The well test
permeability is still significantly less than this, suggesting the laminae are not
alone responsible for the reduced effective permeability.
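The layered-model arithmetic combination quoted above can be verified directly (layer values from the text; equal layer thicknesses assumed):

```python
# Harmonic averages within each of the four layers (mD), from the text
layer_perms_mD = [651.0, 168.0, 297.0, 1169.0]

# Equal-thickness layers in parallel: arithmetic average of the layer values
k_layered = sum(layer_perms_mD) / len(layer_perms_mD)
print(k_layered)  # 571.25 mD, the ~571 mD quoted above
```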

The low permeability streaks shown by the open arrows in Figure 20 are associated
with the reduced bounding surface or “shale drape” permeabilities between the sets.
Shale drapes commonly occur between the architectural elements within braided
fluvial reservoirs (Høimyr et al., 1993). To simulate the effects of the low permeability
draping network a simple numerical model was constructed based on an orthogonal
network separating blocks of different permeability (Figure 21). Whilst clearly a
simplification, the model showed that the presence of the network could be responsible
for the remaining permeability reduction. For a range of shale permeabilities (1-50mD)
and spacings (40-80m), that are “realistic” given the probe and analogue data, the
model produces permeabilities close to the well test permeability (330-375mD).


Figure 21 Numerical model to study the effect of shale drapes: four layers of "matrix"
permeability (651, 168, 297 and 1169mD) separated by "shale barrier" permeabilities,
with no crossflow between layers, flowing from injection to production (grid 50 × 1m
cells laterally by 8 × 1m cells vertically)

5.2. UK Case Study

The second case study comes from the offshore extension of the Wytch Farm Oilfield
on the south coast of the UK. The reservoir is a braided fluvial sandstone of Triassic
age. This study has been described in some detail elsewhere (Toro-Rivera et al., 1994)
and only the most relevant aspects to this review are developed here. In this case study,
two tests from two wells in the same reservoir unit appeared to have (for one well)
significantly lower and (in the other) greater permeability than the arithmetic average.

The well test permeabilities had, historically, generally matched the arithmetic
average of the plug data and the well tests were initially thought to be showing some
mechanical alteration. The permeability distributions and mean permeabilities in the
two wells are, nevertheless, very similar (Figure 22).



Figure 22 Permeability histograms for the two Wytch Farm wells. Well A: arithmetic
average 400mD, geometric average 43mD, harmonic average 0.22mD, Cv 1.52. Well B:
arithmetic average 625mD, geometric average 19.8mD, harmonic average 0.19mD, Cv
1.71 (from Toro-Rivera et al., 1994)

For Well A, inspection of the core data (Figure 23) shows the presence of several
relatively thin, high permeability intervals. The intervals are considered to be relatively
minor channels of limited extent (based on unpublished interpretations by Mckie
and Little of Badley-Ashton). The well test build-up data (Figure 24) show negative
skin (from the highly permeable channels), early linear (channel) flow and radial flow
(with the "effective permeability" of the combined channel/inter-channel reservoir),
all consistent with the minor channel model. The radial flow permeability of 44mD
is close to the geometric average, suggesting that the system behaves as if the
permeabilities were randomly distributed. In this example, the late time increase in
permeability, seen as a downturn in the derivative, can be explained by the oil-water
contact, which is not far below the tested interval.
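The averages and coefficient of variation used throughout this comparison can be computed as in the sketch below; the sample values are illustrative, not the Wytch Farm data:

```python
import math

def perm_averages(perms):
    """Arithmetic, geometric and harmonic averages of a permeability sample,
    plus the coefficient of variation Cv = (sample std dev) / mean."""
    n = len(perms)
    arith = sum(perms) / n
    geom = math.exp(sum(math.log(k) for k in perms) / n)
    harm = n / sum(1.0 / k for k in perms)
    var = sum((k - arith) ** 2 for k in perms) / (n - 1)
    cv = math.sqrt(var) / arith
    return arith, geom, harm, cv

# Illustrative heterogeneous sample (mD); note arith >= geom >= harm always holds
sample = [0.5, 2.0, 10.0, 150.0, 900.0]
a, g, h, cv = perm_averages(sample)
print(f"arith {a:.1f}  geom {g:.1f}  harm {h:.2f}  Cv {cv:.2f}")
```

A randomly arranged permeability field behaves close to the geometric average, layered flow along beds close to the arithmetic, and flow across beds close to the harmonic, which is why the flow regime must be identified before choosing the comparison average.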


Figure 23 Permeability distribution in Well A

Figure 24 Well test interpretation (pressure and derivative vs. time function) for Well A

In Well B, the core data show a few relatively thick, high permeability major channels
(Figure 25). In this well, the flow is likely to be dominated by these channels. The
well test permeability (1024mD) from the radial flow region (Figure 26) is close to
the arithmetic average (911mD) of the channel intervals. Unfortunately, there is no
production log available to confirm that the channels alone were flowing; however,
this would explain a test permeability higher than the core plug arithmetic average.
The late time decrease in permeability (upturn in the derivative) is thought to be due
to a fault mapped from seismic data at the appropriate distance.


Figure 25 Permeability distribution in Well B

Figure 26 Well test interpretation (pressure and derivative vs. time function) for Well B

This study shows that the effective permeability of a reservoir in the well test volume
of investigation is dependent on the medium scale (bed) architecture, even when the
total permeability field appears relatively stationary (i.e., the mean and variance for
the reservoir are constant).

5.3. Summary of Well Test and Core Comparisons

1. A simple comparison of core plug averages with well test interpretations can be
very misleading for several reasons. We wish to emphasise them here:

• Traditional core plugs fail to sufficiently characterise the permeability of relatively
short heterogeneous intervals that are commonly found in fluvial reservoirs. The
N0 concept can be used to give a spread of core average values to account for
small sample numbers (Figures 27 & 28 from Zheng et al., 2000).

• If the arithmetic or geometric average of the plug data is to be assumed to be the
appropriate average for comparison of well test and core permeabilities, the spatial
organisation of permeability determined by the geological architecture must be
taken into account. The architecture may change radically within a reservoir. In
channelised fluvial reservoirs one can expect the sand to thicken or thin away from
the wells, and this explains why core plug averages can be higher or lower than well
test permeabilities in the same reservoir unit! (Zheng et al., 2000).


• The flow regimes identified (including skin) should be rationalised with respect
to the interpreted geology at the well and in the radius of investigation to confirm
or cast doubt on the pressure interpretations

• With downhole shut-in, the Early Time Region can be interpreted providing
additional geological information on a scale more readily comparable with the
core data (Toro-Rivera et al., 1996)

Figure 27 Lorenz Plot and modified Lorenz Plot (cumulative flow capacity, k*h, vs.
storage capacity, φ*h, for data ordered and not ordered) showing high permeability zone
in centre of a channel sandstone, Ness Formation (Zheng et al., 2000)

Figure 28 Comparison between well test and core plug permeabilities for 3 wells in a
North Sea fluvial reservoir (original and new test interpretations, with 95% confidence
estimates). Error bars in core estimates derive from small sample size, and in the well
test from uncertainty in flowing interval, h (Zheng et al., 2000)


• The effective flowing interval(s) in heterogeneous reservoirs is not necessarily
the whole perforated interval (this should be confirmed by a production log and
modified Lorenz Plot). Uncertainty in well test permeability can incorporate
uncertainty in flowing interval (as in Figure 28 from Zheng et al., 2000).

2. Numerical models can be very useful in well test interpretation. The approach
taken here, further developed with additional focussed analogue and petrophysical
data integrated by geostatistical models, will enable greater understanding of well
test data (see Bourgeois et al., 1993 for a turbidite example and De Rooij et al.,
2003, for a meander loop example).

3. A coherent interpretation of geology, petrophysics and well test data can add
confidence to the description of complex reservoirs, compared with interpretations
of the same data in isolation.


Refer to Smalley and England (1994) and Larter et al., (1994) for a review of the
uses of geochemistry in reservoir engineering. Geochemists are specialists in rock-
fluid interaction and their skills should be used more in data integration and reservoir
management. This area of reservoir description is very much in its infancy but holds
out promise for the future in a number of areas:

• Small-scale distribution of hydrocarbon species in reservoirs (wettability) (Figure 29)

• Fault compartmentalisation (isotopes/oil compositional heterogeneity) (Figures 30, 31)

• Tar mats (in-situ permeability baffles/barriers) (Figure 32)


Figure 29 Comparison of petrophysical data (probe permeability, mD) and geochemical
parameters (S1/(S1+S2) and Tmax, °C). The full significance of these relationships is
not known, but they are thought to be due to wettability changes (Larter et al., 1994)




Figure 30 Barriers identified by changes in isotopic composition match up with producing
pressure boundaries (Smalley and England, 1994)


Figure 31 Oil composition variation (bubble point pressures for Forties and SE Forties
oils) revealing reservoir compartments (Smalley and England, 1994)


Figure 32 Probe (uncleaned core) and plug (cleaned) permeability differences against
core depth reveal in-situ tar mats (Larter et al., 1994)


Time lapse seismic monitoring (i.e., repeating the survey or parts of the survey, 2-D or
3-D, after a period of time, sometimes known as 4-D seismic in the case of repeat 3-D
surveys) can be used as an important control for reservoir management. In the case
of the Heimdal Field (Figure 33, Norwegian North Sea) 11 lines costing $0.6million
were acquired (Figure 34) to monitor suspected water encroachment (Figures 35,
36) in the northern part of the field (Grinde, et al., 1994). Water encroachment into
the Palaeocene gas reservoirs can have a significant impact on the drainage of a gas
field. Gas depletion in the presence of a strong water drive invariably results in lower
recovery, with wells “prematurely” watering out. AVO (amplitude-versus-offset)
seismic processing and inversion was used to determine the gas-liquid contact (GLC).
A difference image was generated from the initial and repeat surveys and used to
determine the rise in the water table. This rise was seen to be occurring in a northern
accumulation without separate withdrawal points. Good drainage was shown to be
occurring across the field using a history matched model (Figures 37, 38), obviating
the need for an additional well (at $16million!). Whilst the relatively shallow, high
porosity Tertiary sands lend themselves to seismic monitoring, the techniques have
been used on other North Sea fields with varying success. The combined effort of the
team of geophysicists, geologists and reservoir engineers was a key factor in the success
of the project (and a conclusion of the paper!!). That people needed to stress the
importance of teamwork in 1994 shows that nothing had really changed since Harris
and Hewitt's paper in 1977!! This is a practical example of the integration of
geophysics and reservoir engineering.


Figure 33 Location map and reservoir/aquifer grid for the Heimdal Field (Grinde et al., 1994)



Figure 34 Location of time lapse seismic grid, Heimdal Field (Grinde et al., 1994)



Figure 35 Pre-seismic model prediction (gas column, m), Heimdal Field (Grinde et al., 1994)


Figure 36 Model for trapped gas, Heimdal Field (Grinde et al., 1994)

Figure 37 Heimdal Field gas and aquifer pressure history match (model aquifer and
model pressure, in bar) (Grinde et al., 1994)





Figure 38 Comparison of water rise match between seismic and reservoir model
(positive = model rise > monitoring rise), Heimdal Field. Seismic shows area to north
has been produced (Grinde et al., 1994)

Tjolsen et al. (1998) report on a study of the Ness Formation in the Oseberg Field.
In this study (Figure 39) acoustic impedance is used to predict sand proportions
and greatly reduce the spread in simulation models.


Figure 39 Distribution of high (H) and low (L) impedance in the Upper Ness in Oseberg
Field (Tjolsen et al., 1995). Note both production (•) and injection (º) wells are located
close to areas of low impedance (high net:gross)



Figure 40 Cross-section through the Ness Formation (Upper Ness and Lower Ness) in
the Oseberg Field (Tjolsen et al., 1995)



Figure 41 Channel deposit thickness (m) in the Ness Formation (Tjolsen et al., 1995)




Figure 42a Using impedance to discriminate reservoir and non-reservoir sand (Tjolsen
et al., 1995). Figure 42b Using impedance to discriminate reservoir and non-reservoir
(channel proportion) in the Ness Formation, Oseberg Field (Tjolsen et al., 1995)


Figure 43 Upscaling facies changes to seismic response at Wells 2, 4 and 3 in the Ness
Formation, Oseberg Field (Tjolsen et al., 1995)

Ten unconditional stochastic realizations were upscaled and simulated. The range
of outcomes is shown in Figure 44. A further 10 runs were conditioned on the
seismic data. The plateau length was significantly longer for the seismic-conditioned
simulations. The spread in total production and plateau height was also reduced.

Figure 44 Reservoir prediction (cumulative oil and oil rate) with (+) and without (−)
seismic, Ness Formation, Oseberg Field (Tjolsen et al., 1995)



There are numerous studies of the use of geostatistics in reservoir studies. As an example,
Rossini et al. (1994) describe a case study in which geostatistics have been used to honour
the petrophysical heterogeneity whilst minimising the history matching process. The
study shows how the static and dynamic data are integrated. The reservoir comprises a
transitional sand - dolomite lithology in three distinct flow units (Figure 45). The PEF
(Photoelectric Effect log) was used to discriminate between dolomite and sand, the poroperm
characteristics for the facies being quite different (Figure 46).

Figure 45 Geological cross section of the field showing three well-defined layercake flow
units (Layers 1–3) with internal heterogeneities (Rossini et al., 1994)

Figure 46 Porosity-permeability crossplots for the sandy and dolomitic facies (Rossini
et al., 1994). Sandy facies: 65 data; porosity mean 11.4%, variance 46.2; permeability
mean 230 mD, variance 30580; Cv = 0.76. Dolomitic facies: 45 data; porosity mean 11%,
variance 60.9; permeability mean 20 mD, variance 3878; Cv = 3.1. Note that the
dolomite facies is severely undersampled


A two-stage geostatistical simulation was carried out using the appropriate variograms
(Figure 47):

• geostatistical (sequential indicator) simulation of the facies (sand and dolomite)
in 3-D to preserve the lithological architecture,

• geostatistical (conditional sequential gaussian) simulation of the porosity
distributions within facies, to honour the petrophysical description of each facies.

Permeabilities were assigned using Monte Carlo sampling from the discrete porosity
classes identified on the poroperm crossplots (Figure 46).
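The Monte Carlo permeability assignment described above can be sketched as follows; the porosity classes and log-normal parameters here are invented for illustration and are not those of Rossini et al. (1994):

```python
import math
import random

# Hypothetical porosity classes (%) with a log10(k) mean and spread per class,
# standing in for the clouds picked off a poroperm crossplot
classes = [
    ((0.0, 6.0), -1.0, 0.5),
    ((6.0, 11.0), 0.5, 0.5),
    ((11.0, 17.0), 1.5, 0.4),
    ((17.0, 40.0), 2.3, 0.3),
]

def draw_perm(phi, rng):
    """Monte Carlo draw of permeability (mD) for a simulated porosity value:
    find the porosity class, then sample log-normally within it."""
    for (lo, hi), mu, sigma in classes:
        if lo <= phi < hi:
            return 10 ** rng.gauss(mu, sigma)
    raise ValueError("porosity outside class range")

rng = random.Random(42)
perms = [draw_perm(12.5, rng) for _ in range(1000)]
```

Each simulated porosity cell thus receives a permeability that honours the scatter of its own facies class rather than a single regression line.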

Figure 47 Vertical and horizontal variograms of (a) facies indicators (b) sandy facies
porosity (c) dolomite facies porosity (Rossini et al., 1994). (a) nugget 0.015, spherical
structure of sill 0.15, ranges 6 m (vertical) and 500 m (horizontal); (b) nugget 0.03,
exponential structure of sill 0.97, ranges 56 m and 850 m; (c) nugget 0.03, Gaussian
structure of sill 0.96, ranges 10 m and 650 m
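The nugget-plus-structure variogram models of Figure 47 can be evaluated with a short sketch; the spherical, exponential and Gaussian forms below use the common "practical range" conventions, which may differ in detail from the package used in the study:

```python
import math

def variogram(h, nugget, sill, a, model):
    """Semivariance gamma(h) for a nugget plus one structure of range a."""
    if h == 0:
        return 0.0                         # gamma(0) = 0 by definition
    if model == "spherical":
        s = 1.0 if h >= a else 1.5 * h / a - 0.5 * (h / a) ** 3
    elif model == "exponential":
        s = 1.0 - math.exp(-3.0 * h / a)   # practical range convention
    elif model == "gaussian":
        s = 1.0 - math.exp(-3.0 * (h / a) ** 2)
    else:
        raise ValueError(model)
    return nugget + sill * s

# Facies indicator model, horizontal: nugget 0.015, spherical sill 0.15, range 500 m
print(variogram(250.0, 0.015, 0.15, 500.0, "spherical"))
```

At the range the spherical model reaches its full sill (nugget + sill = 0.165 in this case), whereas the exponential and Gaussian structures approach it asymptotically.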


The three flow units were generated separately, but treated as one for the purposes
of flow simulation. The petrophysical models were 1.6 million grid blocks (large
for the mid 90's), and these had to be upscaled for the purposes of flow simulation.
Porosities were upscaled using a weighted arithmetic average (N.B., resulting pore
volumes were checked for fine and coarse grids). Permeabilities were upscaled from
the fine grid block transmissibilities using combined arithmetic and harmonic averages.
The upscaled models were then history matched with RFT pressures, GOR and water
cut. The latter two proved the most discriminating and enabled an appropriate static
model to be selected for further reservoir management studies (Figure 48). In this
way, geostatistical techniques have been used to provide a more reliable fluid-flow
model. Similar techniques were also used by Damsleth et al. (1992).
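The combined arithmetic and harmonic averaging mentioned above can be sketched as the classical bounding procedure: harmonic averaging along the flow direction followed by arithmetic averaging across it gives a lower bound on block permeability, and the reverse order an upper bound. The 2 × 2 grid here is illustrative only:

```python
def harmonic(vals):
    return len(vals) / sum(1.0 / v for v in vals)

def arithmetic(vals):
    return sum(vals) / len(vals)

def upscale_bounds(grid):
    """grid[row][col] of cell permeabilities; flow is along the rows.
    Returns (lower, upper) bounds on the upscaled block permeability."""
    # harmonic in series (along flow), then arithmetic in parallel: lower bound
    lower = arithmetic([harmonic(row) for row in grid])
    # arithmetic within each transverse slab, then harmonic in series: upper bound
    cols = list(zip(*grid))
    upper = harmonic([arithmetic(col) for col in cols])
    return lower, upper

# One low-permeability cell (1 mD) in an otherwise 100 mD block
grid = [[100.0, 1.0], [100.0, 100.0]]
lo, hi = upscale_bounds(grid)
```

The true effective permeability lies between the two bounds, which is why combined averaging gives a more defensible coarse-grid value than either simple average alone.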



Figure 48 Ranking of realizations (1–10) based on the number of wells in the model that
matched real production characteristics (Realization 9 being the best model)

Geostatistics provides a tool for the integration of geology, petrophysics and reservoir
engineering. There are more examples of geostatistical models used in the integration
of geoscience and engineering in Yarus and Chambers (1996).


The Shared Earth Model has been applied to a single vendor's platform (Consentino,
2001) or an integrated database (Fanchi, 2003). The lack of fully interoperable
software systems and completely integrated platforms will require a continuation
of data transfer from one piece of software to another, requiring the geoengineer to
be computer literate or part of a team that includes a computer programmer, for the
foreseeable future.




An example of an integrated geoengineering study was given in a paper modelling
the well test responses in a braided fluvial reservoir (Corbett et al., 2005). Patches
of coarse and/or well sorted, high permeability sand are often present in braided
fluvial systems and are identified as the preserved secondary channels of limited
lateral extent. These patches produce restricted 'point' entry points for flow into the
well bore in high net to gross braided fluvial reservoirs. This study was undertaken
to examine the dynamic effects of these patches on the well test response.

In order to undertake this project we build a geological model and integrate
petrophysical measurements into a numerical model. This is then simulated and
validated before providing an engineering implementation in well test interpretation.
Using the well test can also provide validation of the geological model and improve
the geological understanding. This logical sequence with feedback follows a
geoengineering workflow (Figure 49).

Characteristics of Braided Fluvial Reservoirs

The distribution of patches of relatively high reservoir quality reservoir sandstones in
a matrix of lower reservoir quality sandstone creates a double matrix porosity (also
known as a Dual Permeability) reservoir. The Lorenz plot can be used to diagnose
this sort of reservoir type. Commonly, the flow in the well bore will be coming from
a small proportion of the net sandstone interval, as defined by a permeability, rather
than a porosity, cut-off. The stratigraphically-ordered Lorenz plot has been used to
identify the flowing elements in cored intervals and these will correlate well with
production logs in wells that are not fractured.
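The Lorenz construction described above sorts intervals by k/φ and accumulates flow capacity (k·h) against storage capacity (φ·h); the sketch below computes the curve and the Lorenz coefficient for an invented three-layer interval:

```python
def lorenz(phis, perms, hs):
    """Cumulative flow capacity (k*h) vs storage capacity (phi*h),
    with layers sorted by k/phi so the best flow units plot first."""
    layers = sorted(zip(perms, phis, hs), key=lambda t: t[0] / t[1], reverse=True)
    kh_tot = sum(k * h for k, p, h in layers)
    ph_tot = sum(p * h for k, p, h in layers)
    sc, fc = [0.0], [0.0]
    kh = ph = 0.0
    for k, p, h in layers:
        kh += k * h
        ph += p * h
        fc.append(kh / kh_tot)
        sc.append(ph / ph_tot)
    return sc, fc

# Illustrative interval: porosity (fraction), permeability (mD), thickness (m)
sc, fc = lorenz([0.15, 0.12, 0.14], [100.0, 3.0, 5.0], [2.0, 10.0, 5.0])

# Lorenz coefficient: twice the area between the curve and the diagonal
lc = 2.0 * (sum((sc[i + 1] - sc[i]) * (fc[i + 1] + fc[i]) / 2.0
                for i in range(len(sc) - 1)) - 0.5)
```

A homogeneous interval plots on the diagonal (lc near 0); the more the curve bows above it, the more the flow is carried by a small fraction of the storage, as in the double matrix porosity reservoirs discussed here.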

Braided fluvial reservoir deposits are formed as the coarser grained, higher energy
part of fluvial systems. These reservoirs have a characteristic patchy distribution of
varying grain size and sorting in outcrop sections (Figure 50). Recent flume tank
experiments have shown these elements to be deposited in secondary channels within
the fluvial system (Figure 51). The geometry of the secondary channels results in
randomly distributed 3-D patches.

Geological model of a high net:gross braided fluvial system

A numerical petrophysical model, to represent this geological distribution, is
built using a correlated random stochastic model with varying correlation lengths
(Figure 52) to distribute the hydraulic units which are then populated by appropriate
petrophysical properties. In the numerical model, the porosity has been kept constant
and the permeability has been allowed to vary according to the distribution of rock
types. Figure 52 shows one realisation of a stochastic permeability model. There are
possible geological arguments for making these patches longer in the downstream
direction but as the direction can radiate over a wide range in braided fluvial systems,
the approximation of an isotropic areal variogram model is considered sufficient.
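One simple way to generate an isotropic correlated random field of the kind described above is to smooth white noise with a moving-average window, the window width setting the correlation length; this is a sketch of the idea only, not the algorithm used in the study:

```python
import random

def correlated_field(nx, ny, half_width, seed=0):
    """Smooth Gaussian white noise with a square moving-average window;
    the window half-width controls the correlation length of the field."""
    rng = random.Random(seed)
    noise = [[rng.gauss(0.0, 1.0) for _ in range(nx)] for _ in range(ny)]
    field = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            vals = [noise[jj][ii]
                    for jj in range(max(0, j - half_width), min(ny, j + half_width + 1))
                    for ii in range(max(0, i - half_width), min(nx, i + half_width + 1))]
            field[j][i] = sum(vals) / len(vals)
    return field

field = correlated_field(40, 40, 3, seed=1)
# thresholding the field would then assign the cells to discrete rock types
```

Larger windows produce larger patches; thresholding the smoothed field into classes gives the patchy distribution of hydraulic units that the study populates with petrophysical properties.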

Production log data (PLT) from these braided fluvial systems often shows point source
entry points for fluid (Figure 53). These high permeability intervals correlate with the
best reservoir property core plugs, classed as hydraulic unit 1 (HU1) in this example.


The Lorenz plot for the interval shows that HU1 contains 70% of the transmissivity in
the well in only 15% of the porosity – this PLT response and degree of heterogeneity
is often observed in braided fluvial reservoirs.

In the reservoirs in North Africa that were the basis of this study the porosity is
moderate (10-15%). The background average permeability was around 3mD with the
best permeability zones approaching and sometimes exceeding 100mD. The Lorenz
Plot (Figure 54) shows clear double matrix porosity behaviour.

Numerical well test model: The geostatistical models (an example shown in Figure 52)
were incorporated into a black oil simulator. A single well was placed in the centre
of the model with local grid refinement. Numerical artefacts can often occur in
simulation models when a radial local grid refinement around the well is nested
within a cartesian grid. For this reason the model described here uses a cartesian
grid throughout. Careful review of simulation models has eliminated any significant
numerical artefact in the middle time region. However, numerical artefacts may still
be present in the early time period. Because the phenomenon we are describing occurs
in some realizations but not in others (Figure 55), whilst the grid remains constant,
we are confident that the phenomenon that we observe is free of numerical artefact.

Geotype curves for the braided fluvial model

A limited number of cases were simulated in this study. Five stochastic realizations
for each of three varying correlation length models were run. The case illustrated in
Figure 52 is for a medium correlation length (MCL) model of approximately 200 feet.
Pressure drawdown and buildup were simulated and the buildups interpreted as in a
real well test. A well-defined hump in the middle time region is seen on two of the
five realizations for this correlation length. Shorter and longer correlations were also
simulated. For shorter correlation lengths the hump response was more common. As
the correlation length is extended the phenomenon becomes rare and eventually
disappears. These observations were made on a relatively low permeability model.
These responses would have to be scaled for higher permeability systems.

Geochoke phenomenon: The restriction of flow for a short period of time represents
depletion of the high permeability zones connected to the well, and the delay in
recharging from other patches away from the well. This model requires there to be a
fairly high density of patches and for some to be intersected by the well. This is why
only some numerical realisations show the hump, and to differing degrees.

Field validation: We have identified a humped middle time response in a number of
field examples. The examples in Figures 56 and 57 have been interpreted by various
combinations of radial composite and fault models. We suggest that these data show
the geochoke phenomenon, for which there is not currently an easy method to apply
either an analytical solution or a type curve match.

Figure 56 shows a derivative from a well test in a North African reservoir in a braided
fluvial environment. The mapped faults in this field were at a distance from the well
that was outside the radius of investigation. Clearly sub-seismic faulting could be
invoked. The geochoke response could be reasonably modelled by an analytical
radial composite model with a lower permeability ring around the well. Simple


faulting would not give this response. In the full field simulation model more
extensive channels had been modelled and low permeability regions around the well
were needed to improve the match. It was beyond the scope of this work to rebuild
the reservoir model. This well test interpretation suggests that a model should be
updated to include short correlation length patchy high permeability sands within
the more extensive channels.

Geotype catalogue for double matrix porosity reservoirs

The geochoke response is seen in double matrix (also called dual permeability by the
well testing community) reservoirs when the correlation length is short (say up to
10% of the depth of investigation, or 50-500ft) and the proportion is high and areally
reasonably consistent (certainly more than 10%). As the correlation length in vertical
and horizontal orientations becomes larger this response disappears systematically,
as shown in the geotype catalogue (Figure 57) which is constructed around an
architectural matrix for double matrix porosity reservoirs. A catalogue encourages
the development of type curves for geological phenomena in a systematic framework.
In most geological situations a particular situation around a well and its resulting well
test buildup response are a member of a family of curves best captured by using a
systematic range of geostatistical and petrophysical model parameters.

Sub-seismic faults and their interpretation in Braided Fluvial Systems

With knowledge of the existence of the geochoke model, the well test interpreter should
be very careful not to misinterpret the phenomenon as a fault response, particularly
in braided fluvial systems. It is often said that sub-seismic faults interpreted in
braided fluvial systems have to be 'removed' during later production as they are not
seen in later test interpretations or during production. These "faults", we believe,
can be misinterpreted geochoke phenomena. Other high net:gross environments
with significant petrophysical heterogeneity (e.g., mixed fluvial-aeolian, deep-water
channelised systems) may also have property variations and architectural
arrangements that will cause a similar buildup pressure response. The Lorenz plot
is a useful screening tool to identify whether the contrasts in permeability exist to
set up the right conditions for the choking phenomena. Production log data are also
a useful indicator of point source entry points.

Geotype catalogue for single matrix porosity reservoirs

A similar matrix with signature geotype curves for single matrix reservoirs has also
been developed (Figure 58, Corbett, 2009). In these systems the behaviour switches
from commingled reservoirs to cross-flow reservoirs (by connection) when the
correlation length of the shales is short.






Figure 49: Petroleum geoengineering workflow and example. Concepts (sedimentology,
sequence stratigraphy, structure; seismic and log correlation, mapping) feed
measurement (geophysics, core and log petrophysics; log interpretation, geostatistics;
note sample volume), which feeds the geomodel (3-D geological model, stress model)
and simulation (finite difference, finite element, flow and stress; note computer size),
followed by sensitivity analysis and data quality checks.

Figure 50: A view of a typical braided fluvial system at outcrop (Eocene Escanilla
Formation, N. Spain). The figure on the left identifies the coarser grained elements;
the figure on the right, the finer. It is likely that the better sorted finer elements will
have a significantly different permeability from the poorly sorted coarser elements.


Note that there are no obvious flow barriers between the elements to stop crossflow.

Figure 51: The result of a laboratory analogue study of a braided fluvial system.
The high permeability class units in D relate to the secondary channel fills in C that
have been mapped in cross sections (A and B) through modern sediments deposited
in a flume tank (from Moreton et al., 2002)

Figure 52: Isotropic correlated random field to capture the distribution of high
permeability patches (the rock type to the left of the scale bar - which shows four
rock types with reducing properties from left to right).


Figure 53: Point inflow from thin high permeability (HU1) interval in a braided fluvial
reservoir. The production log (PLT) inflow (solid line) is predicted by the cumulative
core permeability (flow capacity, kh). Note that the inflow corresponds to plugs from
hydraulic unit 1 (HU1), the rock type with the best reservoir properties.

Figure 54: Lorenz Plot for a braided fluvial reservoir. The best permeability units
(HU1) contain 70% of the k*h (transmissivity) but only 15% of the φ*h (storage).
Braided fluvial reservoirs contain double matrix porosity - part of the matrix is
transmissivity-dominated (in this case with only 15% of the pore volume) and part
is storage-dominated (85% of the pore volume).



Figure 55: Numerical simulations (ΔP/ΔQ, psi/STB/day, vs. equivalent time, hours)
from five realizations (MCL1-5) of the model shown in Figure 52. The hump in the
middle time regime is obvious in only two of the realisations. This result suggests
that the geochoke phenomenon will occur only for particular arrangements of the
high permeability patches around the well. It also shows that the effect can be quite
subtle. Upper set of curves show pressure buildup, lower set show derivative.
Figure 56: Field example of a pronounced humped middle time region (ΔP/ΔQ,
psi/STB/day, vs. elapsed time, hours) in a well test buildup from a braided fluvial
reservoir in North Africa. Upper curve is pressure buildup and lower curve is the
derivative.


Figure 57: The geochoke response occurs in double matrix porosity systems with
short lateral and vertical correlation lengths. The Lorenz plot on the left defines
the transmissive and storage elements. The matrix in the centre shows decreasing
correlations in the vertical and horizontal correlation lengths from top left to bottom
right (based on the Tyler and Finley 1991 architectural matrix). The various derivative
responses relate to well test characteristics for various scenarios.



Figure 58: The geochoke response occurs in single matrix porosity systems with
short lateral and vertical correlation lengths.



Given the following permeability data (same data as was encountered in Chapter 8
of the Reservoir Concepts course)

Depth Porosity Permeability(mD)

3834.9 23.80 105
3835.2 24.80 140
3835.5 27.40 297
3835.8 26.40 236
3836.1 23.60 106
3836.4 24.60 140
3836.7 24.20 157
3837 24.80 144
3837.3 26.00 189
3837.6 24.80 111
3837.9 29.40 577
3838.2 27.60 318
3838.5 14.60 nmp
3838.8 22.50 52
3839.1 22.10 54

1. Crossplot data and fit a regression line that you might want to use for permeability
prediction. (i) Predict the missing value “nmp”. (ii) Predict a cut-off porosity for
a) an oil reservoir and b) a gas field

2. Plot the data on a Global Hydraulic Elements plot. How many GHEs are present
and how does this relate to the heterogeneity of the interval (as calculated in Chapter
8 of Reservoir Concepts)?

Note GHEs are defined by the equation:

K = φ × [FZI × (φ / (1 − φ)) / 0.0314]²  or  FZI = RQI / φz = [0.0314 × √(K/φ)] / [φ / (1 − φ)]

(with K in mD and φ as a fraction)

For the following lower bound values of FZI (FZI lower bound → GHE):

FZI     GHE     FZI     GHE
48      10      1.5     5
24      9       0.75    4
12      8       0.375   3
6       7       0.1875  2
3       6       0.0938  1

3. Fit HU(s) lines to clusters. Estimate HU values by reference to the GHE plot.



Given the following permeability data (same data as was encountered in Chapter 8
of the Reservoir Concepts course)

Depth Porosity Permeability(mD)

3834.9 23.80 105
3835.2 24.80 140
3835.5 27.40 297
3835.8 26.40 236
3836.1 23.60 106
3836.4 24.60 140
3836.7 24.20 157
3837 24.80 144
3837.3 26.00 189
3837.6 24.80 111
3837.9 29.40 577
3838.2 27.60 318
3838.5 14.60 nmp
3838.8 22.50 52
3839.1 22.10 54

1. Crossplot data and fit a regression line that you might want to use for permeability
prediction. Predict the missing value “nmp”

Linear fit: note the high coefficient of determination, but the expression can't be used
to predict the permeability of the 14.6% porosity point as this would give negative
permeability! Also note that the residuals would be non-linear.

Cross plot (figure): permeability (mD) versus porosity (%), with linear fit
y = 63.511x - 1409.2, R² = 0.8785.


Log-linear fit (figure): log(k, mD) versus porosity (%), with fit
y = 0.1373x - 1.2691, R² = 0.9557. Note the higher coefficient of determination.

The expression log(k) = 0.1373φ - 1.2691 predicts:

5.1 mD at φ = 14.6%
9.2% φ at 1 mD
1.96% φ at 0.1 mD
-5.3% φ at 0.01 mD

Note this expression predicts a negative porosity at 0.01 mD, which is impossible.
Many relationships are also non-linear in log(k) - linear(φ) space. This non-linearity
is also suggested by the HFU relationships.
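The log-linear fit quoted above can be reproduced with a least-squares sketch. The coefficients may differ slightly from the quoted 0.1373 and -1.2691 depending on rounding of the log values, so this is illustrative rather than definitive:

```python
import numpy as np

# Porosity (%) and permeability (mD) from the table; the 14.6% "nmp" sample
# is excluded from the fit because its permeability is the unknown.
phi = np.array([23.8, 24.8, 27.4, 26.4, 23.6, 24.6, 24.2,
                24.8, 26.0, 24.8, 29.4, 27.6, 22.5, 22.1])
k = np.array([105, 140, 297, 236, 106, 140, 157,
              144, 189, 111, 577, 318, 52, 54], dtype=float)

# Least-squares fit of log10(k) = a*phi + b
a, b = np.polyfit(phi, np.log10(k), 1)

def predict_k(phi_pct):
    """Permeability (mD) predicted by the log-linear regression."""
    return 10 ** (a * phi_pct + b)

print(f"log10(k) = {a:.4f}*phi {b:+.4f}")
print(f"k at 14.6% porosity: {predict_k(14.6):.1f} mD")  # ~5 mD, the "nmp" value
```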

2. Plot the data on a Global Hydraulic Elements (GHE) plot.

Note: GHEs are defined by the equation

k = φ [ FZI × φz / 0.0314 ]²

where φz = φ/(1 - φ) (k in mD, φ a fraction).

For the following lower-bound values of FZI:

GHE   FZI lower bound
10    48
9     24
8     12
7     6
6     3
5     1.5
4     0.75
3     0.375
2     0.1875
1     0.0938


Global Hydraulic Elements plot (figure): permeability (mD) versus porosity (decimal).

The data lie in just two GHEs. The Cv is 0.7. Data lying in one GHE would be homogeneous.

3. Fit hydraulic unit (HU) lines to the clusters.

Estimate HU clusters and values from the GHE plot above.

Values of FZI: 1.6, 2.75, 2.9 and 3.1.
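The quoted Cv can be checked directly. The sketch below uses the 14 measured permeabilities with the population standard deviation; it is an illustration, not the Chapter 8 calculation itself:

```python
import numpy as np

# The 14 measured permeabilities (mD); the "nmp" sample is omitted
k = np.array([105, 140, 297, 236, 106, 140, 157,
              144, 189, 111, 577, 318, 52, 54], dtype=float)

cv = np.std(k) / np.mean(k)   # coefficient of variation, Cv = sigma / mean
print(round(cv, 2))           # ~0.71, consistent with the quoted Cv of 0.7
```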



Amaefule, J.O., Altunbay, M., Tiab, D., and Kersey, D.G., 1993, Enhanced Reservoir
Description: Using core and log data to identify hydraulic (flow) units and predict
permeability in uncored intervals/wells SPE 26436 68th Ann. Conf. And Exhibit.,
Houston, Texas, Oct 3-6

Anguy, Y., R. Ehrlich, C.M. Prince, V.L. Riggert, D. Bernard, 1994, The sample
support problem for permeability assessment in sandstone reservoirs, in J.M. Yarus
and R. Chambers (eds.) Stochastic modelling and geostatistics. AAPG Comp. Appl.
in Geology, 3, 37-54.

Ball, L. D., Corbett, P.W.M., Jensen, J.L., and Lewis, J.J.M.L., 1994, The role of
geology in the behaviour and choice of permeability predictors, SPE 28447, presented
at 69th Annual Technical Conference and Exhibition, 25-28th September.

Bear, J., 1972, Dynamics of Fluids in Porous Media, Amsterdam: Elsevier Scientific
Publishing Co.

Bourgeois, M.J., Daviau, F.H., Boutard de la Combe, J-L., 1993. Pressure behaviour
in finite channel levee complexes, SPE paper 26461, presented at the 1993 SPE
Annual Conference, Houston, October 3-6.

Brayshaw, A.C., Davies, G.W., and Corbett, P.W.M., 1995, Depositional controls on
primary permeability and porosity at the bedform scale in fluvial reservoir sandstones,
in M.Dawson (Ed.), Advances in Fluvial Dynamics and Stratigraphy, 373-394.

Brendsdal, A., and Halvorsen, C., 1993, Quantification of permeability variations
across thin laminae in cross-bedded sandstone, in P.F. Worthington (Ed.), Advances
in Core Evaluation and Accuracy and Prediction III, Gordon and Breach Science
Publishers, London, 25-42.

Bryant, S., Cade, C., and Mellor, D., 1993, Permeability prediction from geologic
models, AAPG Bulletin, 77, 1338-1350.

Corbett, P.W.M., and Jensen, J.L., 1992, Estimating the mean permeability: How many
measurements do you need?, First Break, 10, 89-94.

Corbett, P.W.M., Jensen, J.L., and Sorbie, K.S.S., 1998, A review of Upscaling and
Cross-Scaling issues in Core and Log Data for Interpretation and Prediction, in
Core-Log Integration P.Harvey and M.Lovell, (Eds.), Geol. Soc. Spec. Publ., 136, 9-16.

Corbett, P.W.M., Anggraeni, S., and Bowen, D., 1999, The use of the probe
permeameter in carbonates - Addressing the problems of permeability support and
stationarity, The Log Analyst, 40, 316-326.

Corbett, P.W.M., 2009, Petroleum Geoengineering - Integration of Static and Dynamic
Models, SEG/EAGE DISC Course notes, 100p.


Corbett, P.W.M., Ellabad, Y., Egert, J.I.K., and Zheng, S., 2005, The geochoke well
test response in a catalogue of systematic geotype curves, SPE 93992, presented at
SPE EUROPEC, Madrid, Spain, 13-16 June.

Cosentino, L., 2001, Integrated Reservoir Studies, Editions Technip, Paris, 310p.

Damsleth, E., et al., 1992, A two-stage stochastic model applied to a North Sea
reservoir, JPT, April

De Rooij, M., Corbett, P.W.M., Barens, L., 2002, Point Bar geometry, connectivity
and well test signatures, First Break, 20, December.

Desbarats, A.J, 1994, Spatial averaging of hydraulic conductivity under radial flow
conditions, Mathematical Geology, 26, 1-21.

Deutsch, C.V., 1992, Annealing techniques applied to reservoir modeling and the
integration of geological and engineering (well test) data, PhD Thesis, Stanford, Ca.

Grinde, P., Blanche, J.P., and Schnapper, D.B., 1994, Low-cost integrated teamwork
and seismic monitoring improved reservoir management of Norwegian gas reservoir
with active water drive, SPE 28876, presented at Europec, 25-27 October.

Harris, D.G., and Hewitt, C.H., 1977, Synergism in Reservoir Management - the
geologic perspective, JPT, July, 761-770.

Harrison, P.F., 1994, Wytch Farm: Horizontal well application. Paper presented at
3rd Horizontal well Technical Forum, 18-19 August, Heriot-Watt University.

Høimyr, Ø., Kleppe, A., and Nystuen, J.P., 1993, Effects of heterogeneities in a
braided stream channel sandbody on the simulation of oil recovery: a case study
from the Lower Jurassic Statfjord Formation, Snorre Field, North Sea, in M.Ashton
(Ed.), 1993, Advances in Reservoir Geology, Geological Society Special Publication,
69, 105-134.

Hurst, A., and Rosvoll, K.J., 1991, Permeability variations in sandstones and their
relationship to sedimentary structures, in L.W. Lake, H.B. Carroll Jr., and T.C. Wesson
(Eds.), Reservoir Characterization II, Orlando: Academic Press, 166-196.

Larter, S.R., Aplin, A.C., Corbett, P.W.M., Ementon, N., Chen, M., Taylor, P.N., 1994,
Reservoir geochemistry: A link between reservoir geology and engineering, SPE
28849, presented at Europec.

Mohammed, K., Corbett, P.W.M., Bowen, D., Gardiner, A.W., and Buckman, J.,
2002, Solution seams in the Mamuniyat Formation El-Sharara-A Field, SW Libya,
Impact on Reservoir Performance, Journal of Petroleum Geology 25(3), 281-296.

Mohammed, K., and Corbett, P.W.M., How many relative permeability samples do
you need? A case study from a North African Reservoir, SCA2002-03, Monterey,


Moreton, D.J., Ashworth, P.J., and Best, J.L., 2002, The physical scale modeling
of braided alluvial architecture and estimation of subsurface permeability, Basin
Research, 14, 265-285.

Nelson, P.H., 1994, Permeability-porosity relationships in sedimentary rocks, The
Log Analyst, May-June, 38-62.

Oliver, D.S., 1990, The averaging process in permeability estimation from well test
data, SPEFE, September, 319-324.

Rossini, C, Brega, F., Piro, L., Rovellini, M., and Spotti, G., 1994, Combined
geostatistical and dynamic simulations for developing a reservoir management
strategy: A case history, JPT, November, 979-985.

Smalley, P.C., and England, W.A., 1994, Reservoir compartmentalisation assessed
with fluid compositional data, SPERE, August, 175-180.

Tjolsen, C.B., Johnsen, G., Halvoren, A., Ryseth, A., and Damsleth, E., 1995, Seismic
data can improve the stochastic facies model significantly, SPE 30567, presented at
SPE Annual Technical Conference and Exhibition, Dallas, 22-25 October.

Toro-Rivera, M.L.E., P.W.M. Corbett and G. Stewart, 1994, Well test interpretation
in a heterogeneous braided fluvial reservoir, SPE 28828, presented at Europec,
25-27th October.

Tyler, N., and Finley, R.J., 1991, Architectural controls on the recovery of hydrocarbons
from sandstone reservoirs, SEPM Concepts in Sedimentology and Palaeontology,
3, 3-7.

Yarus, J.M., and Chambers, R.L., 1994, Stochastic Modelling and Geostatistics,
Principles, Methods and Case Studies, AAPG Computer Applications in Geology,
3, 379pp.

Zheng, S., Corbett, P.W.M., Ryseth, A., and Stewart, G., 2000, Uncertainty in well
test and core permeability analysis: A case study in fluvial channel reservoir, Northern
North Sea, Norway, AAPG Bulletin, 84(12), 1929-1954.

Reservoir Management F O U R








6.1. Residual Oil
6.2. By-passed Oil
6.3. Attic Oil
6.4. Trapped Gas
6.5. Stranded Gas
6.6. Areal and Vertical Sweep Efficiency


8.1. Water Shut-off
8.2. Improved Oil Recovery (IOR) and Enhanced
Oil Recovery (EOR)
8.3. Infill Drilling
8.4. Fraccing




Geomodelling & Reservoir Management


Having worked through this chapter the students will develop understanding of:

• Reservoir management strategies.

• Reservoir management case studies.

• Habitat of remaining oil and gas.

• Reservoir management techniques.



Sound reservoir management practice relies on the use of available resources (human,
technological and financial) to maximise profits from a reservoir by optimising recovery
whilst minimising capital investment and operating expenses (Satter et al., 1994;
Satter and Thakur, 1994). Reservoir Management can be reactive or proactive - it
involves making choices - let it happen or make it happen. There are many definitions
of reservoir management but improving recovery, minimising expenditure, prolonging
field life and the management of resources are usually involved.


Reservoir management is not synonymous with reservoir engineering and/or reservoir
geoscience. Integration and synergy from the earliest opportunity (discovery) are
a fundamental aspect of reservoir management (Sneider, 1999, 2000a, 2000b).
Experience shows that a fair amount of persistence is required for true integration,
along with cross-training and organisational changes.

Figure 1 Definition of synergy: the output of a synergistic team (Geol + Geoph + Eng)
is larger than the sum of the outputs of the individuals


In the UK, the operator is expected to deliver a Field Management Plan to the DTI
(Owens, 1998) which sets out clearly:

The Reservoir Management Strategy - detailing the principles and objectives that
the operator will hold when making field management decisions and conducting
field operations, and;

The Reservoir Monitoring Plan - describing the data gathering and analysis
proposed to resolve existing uncertainties and understand dynamic performance
during development drilling and subsequent production phases



Reservoir management (just like any other form of management) is simply about
following a systematic strategy (Satter and Thakur, 1994):

Developing a plan,
Implementing the plan,
Monitoring the plan, and,
Evaluating the results.

The integration of data at all stages is key to successful reservoir management and
should be considered a dynamic process.


An enormous amount of data is collected during a field's life from discovery to
abandonment. An efficient data storage and retrieval system is a fundamental part
of reservoir management (Satter and Thakur, 1994). As the geoengineer will be
saddled (at least in the early part of their career) with a system already in place that
is a major problem to change, databases are difficult to generalise about. Up to 70%
of the geoengineer's active working day is spent in data retrieval. A standard computer
software platform (through POSC - the Petrotechnical Open Software Corporation) and
modern compatible machines are helping the establishment of more user-friendly,
accessible databases. 3-D modelling packages are increasingly being used for data
storage - the "Shared Earth Model" (Gawith and Gutteridge, 1996). Outsourcing of
databasing is also occurring - however, a good, complete, readily accessible database
is what makes an oil company, and many developments in this area might stay in-house.

Before one can implement a new reservoir management scheme, one needs to identify
the remaining oil and gas. The location of remaining oil and gas was well described in
a study on the Forties Field by Brand et al. (1996; Figure 2). Determining the remaining
oil requires evaluation of available infill well locations with production logs and RFTs.
Spaak et al. (1999) identify interesting and very subtle barriers in the Fulmar Field (Figure
3). These barriers are likely to be thin flooding-surface shales in a shallow marine
environment. Remaining oil and gas can be classified under the definitions that follow.


(Figure annotations: attic oil; channel margin sands; stratigraphic by-passed oil.)
Figure 2 Forties field - habitat of remaining oil (from Brand et al., 1996)




(psi) (TVD ft)

4750 4850 4950
Strat-trapped residual oil

9900 beneath flooding surface


GH .1 1 MPH log 100 45 NPH -.15 0 GHPC .5
0 150 .1 1 DPH log 100 1.35 RHCB 205 0 POR .5

10000 Kimmeridge



10500 LYDELL

Flooding Event

Shoreface USK


High oil saturation in 11000

10600 prograding shoreface

succession (top USK)
below base Lydell
flooding event

Smith 10800

Figure 3 Fulmar Field - shoreface reservoir (from Spaak et al., 1999). Pressure
discontinuities and residual oil trapped by subtle shale breaks (by-passed and possibly
attic oil)


6.1. Residual Oil

Residual oil is trapped within the pore space after a waterflood. The residual oil
saturation (Sor) is normally less than 20%. The oil is physically located in the centre
of large pores in a water-wet rock. The residual oil saturation often correlates with
rock quality (root k/phi), being lower in the better quality rocks. Residual oil can be
displaced by increasing the velocity of the flow (often impractical) or changing the
fluid characteristics (steam flooding) or wetting characteristics (adding surfactant).

6.2. By-passed Oil

Oil is by-passed because of stratigraphic trapping behind the water flood (channel
boundaries, shales) or due to structural compartmentalisation (subseismic faults).
In a fractured reservoir, oil can be trapped in the matrix by water bypassing in the
conductive fractures.

6.3. Attic Oil

Oil remaining updip (and often out of reach) as a result of gravity above the highest
perforations - usually in the crestal area of fault blocks. Attic oil can be displaced
by a lighter fluid - gas - or by relocating wells closer to the crest (by side-tracking,
possibly with coiled tubing, or infill drilling)

6.4. Trapped Gas

Trapped gas refers to gas trapped in the pore space behind moving water (usually
mobile aquifer). This gas is often not recoverable, unless the reservoir pressure can
be significantly reduced.

Trapped gas can also refer to gas trapped at the front of an advancing oil bank in front
of a waterflood. Repressuring in the latter case will encourage the gas to redissolve
in the oil.

6.5. Stranded Gas

Gas not connected to the producing facilities (in structural or stratigraphic
compartments). This gas can only be recovered by new wells.

6.6. Areal and Vertical Sweep Efficiency

Areal and/or vertical sweep efficiency are essentially areal or vertical recovery
factors due to a drainage mechanism such as water flood or gas displacement. Sweep
efficiency is controlled by wettability (mixed wet systems being the best for recovery),
interfacial tension, viscosity ratio, gravity forces, dip, heterogeneity, waterflood
pattern, mobility ratio, ordering of layers, gravity segregation. For vertical sweep
efficiency, the Lorenz and Modified Lorenz plots are useful screening tools (Figure
4). For areal sweep efficiency, heterogeneity measures (Dykstra-Parsons Coefficient,
Lorenz Coefficient, Coefficient of Variation) are useful indicators; high heterogeneity
results in poor sweep efficiency.
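The Lorenz plot and Lorenz coefficient mentioned above can be sketched as follows. This is a minimal illustration assuming layer values of k and φ with optional thicknesses; it is not tied to any particular software:

```python
import numpy as np

def lorenz_curve(k, phi, h=None):
    """Cumulative flow capacity (kh) vs storage capacity (phi*h),
    with layers ordered by decreasing k/phi (the classic Lorenz plot)."""
    k, phi = np.asarray(k, float), np.asarray(phi, float)
    h = np.ones_like(k) if h is None else np.asarray(h, float)
    order = np.argsort(-(k / phi))                    # fastest layers first
    kh, ph = k[order] * h[order], phi[order] * h[order]
    F = np.insert(np.cumsum(kh) / kh.sum(), 0, 0.0)   # flow capacity
    C = np.insert(np.cumsum(ph) / ph.sum(), 0, 0.0)   # storage capacity
    return C, F

def lorenz_coefficient(k, phi, h=None):
    """Lc = twice the area between the Lorenz curve and the diagonal:
    0 for a homogeneous stack, approaching 1 for extreme heterogeneity."""
    C, F = lorenz_curve(k, phi, h)
    area = float(np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(C)))  # trapezoid rule
    return 2.0 * (area - 0.5)

# A homogeneous stack gives Lc = 0; a strongly layered stack gives a high Lc
print(round(abs(lorenz_coefficient([100, 100, 100], [0.2, 0.2, 0.2])), 3))
print(round(lorenz_coefficient([1000, 100, 10, 1], [0.2] * 4), 2))
```

Keeping the layers in depth order instead of sorting them gives the Modified Lorenz plot, whose inflections highlight candidate flow units and thief zones.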


(Figure: producer-injector cross-sections with corresponding Lorenz and modified
Lorenz plots (cumulative kh versus φh) for five cases: very long vertical and horizontal
correlation lengths; long vertical and horizontal correlation lengths, fining up
(reduced sweep; water shut-off/plug back); long vertical and horizontal correlation
lengths, coarsening up (improved sweep); short vertical and long horizontal
correlation lengths (very reduced sweep; water shut-off after breakthrough); short
vertical and horizontal correlation lengths (improved sweep).)

Figure 4 Schematic sweep characteristics defined by Lorenz plot, modified Lorenz plot
and correlation lengths



Water influx into wells calls for a range of diagnoses and treatment options.
Arnold et al. (Schlumberger Oilfield Review, Summer 2004) ranked various water
influx scenarios by complexity of treatment. We repeat those here.

(1) Tubing, Casing or Packer Leak (Figure 5a)

Diagnosis by production logs, treatment by squeeze or mechanical shut-off

(2) Flow behind Casing (Figure 5b)

Temperature survey, cased hole logs, treat with shut-off fluids

(3) Oil/water contact influx (Figure 5c)

Happens with low (1mD) vertical permeability – at higher (1D) vertical
permeability coning occurs. In vertical wells – plugging back is an option.

(4) High Permeability Layer (without cross flow) (Figure 5d)

With no crossflow, this zone can be shut off with fluids or mechanically. A layer
of limited extent will deplete, but if it is connected between wells it will channel
water from injector to producer.

(5) Conductive Fractures/Faults between Injector and Producer (Figure 5e)

Transient testing and tracers can be used to detect this. Shutting off the injector
may encourage oil production from a different part of the network.

(6) Conductive Fractures/Faults to the Water Zone (Figure 5f)

Treat with shut-off fluids

(7) Coning (usually water, upward) or Cusping (usually gas, downward) (Figure 5g)

Emplacement of a fluid/gel "pancake" extending some 50 ft might stop this. High-angle
wells are effective in high vertical permeability reservoirs.

(8) Poor Areal Sweep (Figure 5h)

Results from lateral heterogeneity (channelling). Treated by lateral wells or
infill drilling. Monitor with 4D seismic.

(9) Gravity-Segregated Layer (Figure 5i)

In thick layers with high vertical permeability, or with high permeability at the base
(at the top in the case of gas injection), water will slump. Shut off the lower
perforations. Gel treatment to stop coning, foam injection and water-alternating-gas
(WAG) might all be effective.

(10) High Permeability Layer with Crossflow (Figure 5j)

Very difficult to treat as fluid will by-pass any remediation treatment.


Figure 5a-j



Aadland et al. (1994) review the reservoir management of Statfjord Field (Figure 6a).
The team developed a plan to maintain the well production potential through high well
activity (Figure 6b). The plan is implemented by drilling high-angle and horizontal
wells towards the flanks of the field to drain attic oil. Reservoir simulation and
well studies address the long and short term monitoring and evaluation of the plan.
Other recovery mechanisms (WAG - Water Alternating Gas - injection, polymer
or surfactant flooding) have been evaluated, along with other business opportunities
(satellite fields, gas storage) and quantification of remaining oil saturation
(cased-hole logging, sponge coring, resistivity logging), all indicating that the
management team on the Statfjord Field are taking a very proactive role, ensuring
that the asset continues to perform for years to come.

(Figure annotations: Brent attic oil; structural by-passed oil; stratigraphic
by-passed oil; rim oil; remaining oil locations; scale bar 500 m.)

Figure 6a Statfjord Field - Remaining oil

(Figure annotations: new completions; high-angle wells; infill wells; horizontal
wells; extended reach drilling (ERD); remaining oil locations; scale bar 500 m.)

Figure 6b Statfjord Field - New opportunities


Mijnssen and Maskall (1994) also describe a proactive hunt for the remaining gas
in the Leman Field. The objective of the plan is to locate remaining producible
gas, and it is achieved by (re-)analysis of cores and logs. The study involved a
reclassification of lithofacies, a petrographic study and petrophysical analysis. The
authors of this integrated study conclude that horizontal wells drilled parallel to
the palaeowind direction in the aeolian sandstones are optimal (Figure 7). In general
there are a number of opportunities to access remaining gas in the Rotliegend sandstone
(Figure 8).

(Figure annotations: main wind direction; 0-1 km scale; dune (kD) and interdune (kI)
sands; kD/kI = 2-12; Kll = 20-75; Kll = permeability parallel to lamination;
Kl = permeability perpendicular to lamination; arrows indicate main inflow direction.)

Figure 7 Optimum horizontal well trajectory in aeolian sandstones
(Mijnssen and Maskall, 1994)

(Figure annotations: typical Rotliegendes reservoir section; by-passed gas; faults;
top reservoir; water; trapped gas; horizontal well/multilateral opportunities;
fraccing; 100 m scale.)

Figure 8 Typical Rotliegend reservoir section


Re-evaluation of data, brought on by new seismic, new wells or the requirement for
a new reservoir model, is often needed during field life. Currie et al. (1994)
describe such a reinterpretation of the Rob Roy Field. The production history could
not be matched by the existing reservoir model; however, tweaking the pore volume
multipliers suggested a 32% increase in oil volume. Geoscience re-interpretation
led to a model with 30% extra oil-in-place that was matched relatively easily.

An integrated re-interpretation of the dynamic and static data was required to provide
the framework for the management of the Ness reservoir in Brent Field (Bryant and
Livera, 1991). Mapping of individual genetic units was needed to identify accurately
the original oil-in-place and to efficiently manage continued production (monitor
sweep and identify by-passed oil) (Figure 9).
(Figure: (a) Broom, Rannoch and Etive formations; (b) Ness Formation. Annotations:
PLT; pay; corrected to datum; timeline 1975-1990; oil production, water injection,
gas injection.)

Figure 9 Brent field reservoir monitoring, initial and changing conditions of fluids and
perforations (Bryant and Livera, 1991)

In 1975, initial completions in the lower Brent Group were in the Etive and Rannoch
Formations (Figure 9(a)). By 1990, production was coming only from the lower Rannoch,
the rest having watered out (Figure 9(b)). In the Ness Formation, production was
originally from sand 4 and later from sand 3 only.

In a review of Ninian Field reservoir management (Pressney, 1993) the development
strategy was outlined:

• Downdip water injection using dip to improve sweep efficiency. Maximising
water injection maximises oil production and reserves.

• Reservoir pressure maintained at a level which 100% watercut wells will flow to
surface by voidage balanced waterflood.

It was observed that Ninian’s three platforms could have been replaced with two with
the advances in drilling reach.


Continuous field monitoring in Ninian is achieved by a range of techniques, as follows:

• Measurement of bottom-hole pressures at shut-in or workover.

• Transient pressure measurements during shut-in. Occasional interference tests to
evaluate communication across faults.

• Produced/injected water chemistry to determine the effectiveness of the waterflood
and the optimum timing of scale treatments.

• Tracer tests to show communication between injector/producer pairs and for
history matching.

• Cased-hole logs (Carbon/Oxygen and Gamma Spectroscopy; the Thermal Decay Tool is
inappropriate because of the relatively fresh formation water and injected seawater)
have had limited success in monitoring water saturations behind casing.

• Zonal flow tests over dedicated intervals can be used to monitor fluids and pressures.

• Repeat Formation Testers are run routinely in infill wells.

• Production logs are routinely run to assess zonal contributions and production.

The challenge in Ninian is to improve the sweep efficiency (c.f. Brent; Bryant and
Livera, 1991). Ninian is a typical tilted Brent fault block with various heterogeneities
and a relatively low (38-49%) recovery. Vertical sweep efficiency is being addressed by
improved zonal selectivity in wells, remedial well work (perforation of new intervals,
squeeze cementing of intervals, reconfiguration of completions by wireline), varying
the injection allocation where appropriate, perforating thin bypassed intervals (oil
scavenging), intermittent production of high-watercut wells, chemical shut-off of
watered-out zones, and horizontal wells in the Rannoch. Areal sweep efficiency is being
addressed by artificial lift, infill drilling, flood realignment, development of new areas
(East Flank), and chasing Ness channels.

All this reservoir management is supported by various well management practices (use
of chrome completions, scale inhibitors, slimhole completions) in a very proactive
management strategy. Clearly the comprehensive monitoring programme has
provided invaluable data for managing the field, however, with 51-62% of the original
2,900MMSTB in place remaining, Ninian still called for a reservoir management
plan for improved recovery! Note that the Ninian Field was later sold by Chevron
to Oryx - who felt that they could optimise the recovery of remaining oil to their
economic advantage.

In summary there are a range of reservoir management options.


8.1. Water Shut-off

Water shut-off is the shutting off of perforations through which water is entering the
well by various methods - cementing (if the lower perfs water out), chemical (polymer
gel) or mechanical (squeeze perforations). Production logs (PLT) are often used to
show zones that will preferentially produce water. The Modified Lorenz Plot can also
be used to identify potential thief zones where water can be expected to breakthrough.

8.2. Improved Oil Recovery (IOR) and Enhanced Oil Recovery (EOR)

IOR refers to any improved oil recovery mechanism (including EOR), infill drilling,
multilaterals and smart wells. EOR tends to refer to increased recovery by the use
of chemicals, polymers or steam. MEOR (Microbial Enhanced Oil Recovery) is a type
of EOR that is used to improve sweep.

8.3. Infill Drilling

Drilling for attic or bypassed oil or stranded gas by sidetracks or high-angle,
multilateral wells. Infill drilling attempts to alter sweep patterns or address
structural compartmentalisation. The Yibal Field in Oman, a Cretaceous carbonate
reservoir (Shuaiba Formation), presents an example of an exhaustive infill drilling
programme (Figure 10), resulting in a significant increase in recovery factor but
also significantly increased water cut (Figure 11), caused by thief zones associated
with the faults and high-permeability matrix (Figure 12; Mijnssen et al., 2003).

(Figure annotations: 1979, depletion and "phase" injection; 1985, aquifer injection;
1994, onset of horizontal drilling; 2002, high-density horizontal infill.)

Figure 10 Increase of well density in Yibal Field (Mijnssen et al., 2003) from vertical
producers in 1979 to horizontal producers and injectors in 2002


(Figure axes: water-oil ratio, WOR (fraction), versus recovery factor (fraction),
0-0.5; time markers 01/81, 01/88, 01/94, 09/98; phases: phase injection, aquifer
injection, horizontals.)

Figure 11 Rise in field water-oil ratio (WOR) due to horizontal well activity,
Yibal field (Mijnssen et al., 2003)

(Figure annotations:
Lower thief layer: dual pore system; uncertain continuity; uncertain keff.
Upper Shuaiba matrix: single pore system; uncertain Kv/Kh ratio; uncertain Sor.
Upper thief zone: dual pore system; uncertain continuity; uncertain keff.
Tight streaks: baffles to flow; uncertain and varying conductivity; uncertain
density; uncertain continuity; uncertain keff.)

Figure 12 Schematic geology of the Yibal field, showing many thief zones (both
stratigraphic and structural) (Mijnssen et al., 2003)

The Heather Field is a Brent Group reservoir which is very compartmentalised, as
the pressure profiles in infill wells show (Figure 13). The recovery factors in the
stratigraphic units are very variable around the field (Figure 14). Infill wells are
drilled to target isolated, undrained fault blocks (Figure 15) (Gilbert Scott, pers. comm.).


Figure 13 Varying pressure profiles in the Brent reservoir in Heather field, showing
stratigraphic compartmentalisation

Figure 14 Varying recovery factors for Heather field reservoir layers on the crest
and flank of the field, showing variations in vertical and areal sweep efficiency


(Figure annotations: depth contours 10400-10740; Fault X; Fault Y; H11 2/5 block;
H-44 injector target TD; H23Z intersection with Fault X; Brent entry; H-62 well
track; 1 km scale.)

Figure 15 Infill drilling in Heather field targeting three fault blocks (1-3)

8.4. Fraccing
Pressuring up the formation until it fails, and injecting proppant into the artificial
fractures, is a method to access both by-passed and residual hydrocarbons.
Fraccing is usually used to improve productivity, but can also be used to access
additional reserves.


New proactive reservoir management strategies were discussed in a special issue
on field revitalisation (JPT, February 1998). One project, Brent Depressurisation, was
presented in detail (Christensen and Wilson, 1998) with sections on:

Reservoir (Re-) Description Issues

Locating remaining oil in the full field simulation model
Mapping the complex slumped fault blocks on the west flank (Figure 16)
Developing an Oil-rim Management plan
Critical-gas Saturation Uncertainty - laboratory experiments confirm rate
Aquifer Influx - water injectors back-produced to contain the aquifer
Compaction - not thought to exceed 2.5%
Hydrogen Sulphide - production separators modified
Well Engineering - Electric Submersible Pumps and coiled tubing gas lift
Management tools
Full Field Simulation Model
Scenario Analysis

New Technology

The project is expected to take Brent recovery factors to 59% oil recovery on the
west flank (57% total) and 80% of the original gas in place.



(Figure annotations: Brent west flank; Brent crestal slumps; Statfjord crestal
slumps; depths 7000-10000 ft; COC 8560'; X-Uncon; GOC 9100'; OWC 9040'; OWC 9690';
DUNL; ORAN; 0-5000 ft scale. OIIP 3800 mmbbls; GIIP 7.5 TCF; reserves (1999)
200 mmbbls and 2.6 TCF - the biggest UK field in 1999.)

Figure 17 Brent Field (from James et al., 1999). In 1999 this field was the largest in the
North Sea despite being in production for 20 years



In summary, reservoir management can be captured by the DIME approach (Satter and
Thakur, 1994):

Develop plans in teams for the maximum oil production that resources allow. This may
be modified in the light of the company's or country's overall production strategy.

Implement with state-of-the-art production facilities and wells.

Monitor with the appropriate combination of geoscience, petrophysical, chemical,
static and dynamic tools.

Evaluate using state-of-the-art geoscience and reservoir models. (This phase could
also be a technical audit and could actually start the process: EDIM.)

Reservoir management is a dynamic process, and the development, implementation,
monitoring and evaluation of plans need to be continuously updated in as proactive
and synergistic a fashion as resources allow. The objective of reservoir management is
to extend the plateau or reduce the decline to extend the life of a field (Figure 18). This
has been extremely effective in a North Sea field (Figure 19), where the operator has
managed to significantly improve the reserves through proactive reservoir management
(Figure 20). A focussed geoscience and engineering approach has the potential to speed
up, and make more efficient, the reservoir management decision-making process.

[Figure 18 schematic: production rate vs time from the start of production, showing the field development plan, reservoir engineering and engineering studies, overall field development, operations and geology, production optimisation and production profile protection (after Campbell Airlie, EPS).]

Figure 18 Production capacity increase in mature fields



Given the following porosity and permeability data from three wells:

Well A Fluvial
Porosity(%) Permeability(mD)
0.08 0.05
0.20 30
0.20 1000
0.18 5000
0.10 0.1
0.08 0.05
0.20 100
0.25 1250
0.30 800
0.13 3

Well B Turbidite
Porosity(%) Permeability(mD)
0.20 100
0.25 1000
0.22 500
0.18 10
0.22 500
0.21 80
0.18 20

Well C Shallow Marine

Porosity(%) Permeability(mD)
0.180 500
0.150 2000
0.180 200
0.175 100
0.170 50
0.150 20
0.140 8
0.140 4
0.130 2
0.120 0.5

1. Plot a Lorenz and a Modified Lorenz Plot for each of the three reservoirs.

2. Use these two plots to discuss the production characteristics and expected sweep
efficiency of the three reservoirs.
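For Question 1, the plot coordinates can be generated as in the following sketch (Python; this is illustrative, not part of the original exercise, and assumes equal layer thicknesses so that kh reduces to k and φh to φ; the helper name lorenz_coords is ours). Note the exact Cv obtained depends slightly on the convention used (sample vs population standard deviation).

```python
# A minimal sketch of Lorenz and Modified Lorenz coordinates for Well A,
# assuming equal layer thicknesses so kh ~ k and phi*h ~ phi.
import numpy as np

poro = np.array([0.08, 0.20, 0.20, 0.18, 0.10, 0.08, 0.20, 0.25, 0.30, 0.13])
perm = np.array([0.05, 30, 1000, 5000, 0.1, 0.05, 100, 1250, 800, 3])  # mD

def lorenz_coords(phi, k, ordered=True):
    """Cumulative flow capacity vs storativity; ordered=True sorts layers by
    k/phi (classic Lorenz Plot), ordered=False keeps stratigraphic order
    (Modified Lorenz Plot)."""
    idx = np.argsort(k / phi)[::-1] if ordered else np.arange(len(k))
    flow = np.concatenate([[0.0], np.cumsum(k[idx])]) / k.sum()
    stor = np.concatenate([[0.0], np.cumsum(phi[idx])]) / phi.sum()
    return stor, flow  # storativity (x axis), flow capacity (y axis)

storativity, flow = lorenz_coords(poro, perm)
cv = perm.std(ddof=1) / perm.mean()  # coefficient of variation
print(f"mean k = {perm.mean():.0f} mD, Cv = {cv:.2f}")  # mean ~818 mD
```

The same function with `ordered=False` gives the Modified Lorenz coordinates for Question 2, and the other two wells drop straight in.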



WELL A: Fluvial

[Lorenz Plot and Modified Lorenz Plot for Well A: cumulative flow capacity vs storativity, both axes 0 to 1.]

The ordered Lorenz Plot shows significant heterogeneity. Average permeability is
818 mD and Cv is 1.89. The unordered (modified) LP shows speed zones in a few locations.
This is a composite sand body, perhaps of braided fluvial origin. If there is cross flow this
profile will be sustained. The sweep may not be good as there is a tortuous pathway
through the reservoir.

Note that this reservoir contains 6 GHE.

[Sketch: porosity (decimal, 0-0.5) vs permeability (mD) cross-plot and φh/kh profiles for Well A. Short vertical and long horizontal correlation lengths give very reduced sweep, with water shut-off required after breakthrough; short vertical and horizontal correlation lengths give improved sweep.]


WELL B: Turbidite

[Lorenz Plot and Modified Lorenz Plot for Well B: cumulative flow capacity vs storativity, both axes 0 to 1.]

The ordered Lorenz Plot shows significant heterogeneity. Average permeability is
316 mD and Cv is 1.17. The unordered (modified) LP shows speed zones. These may be
commingled layers separated laterally by shales. The higher permeability zone will water
out first as there is unlikely to be cross flow in the reservoir (assuming a layered
architecture of a distal layered turbidite system). This will lead to poor sweep efficiency.

There are 3 GHE in this reservoir.

[Sketch: porosity (decimal, 0-0.5) vs permeability (mD) cross-plot for Well B.]


[Lorenz Plot and Modified Lorenz Plot for Well C: cumulative flow capacity vs storativity, both axes 0 to 1.]

WELL C: Shallow Marine

The ordered Lorenz Plot shows significant heterogeneity. Average permeability is
288 mD and Cv is 2.15. The unordered (modified) LP shows a similar profile. Water is
likely to enter the high-permeability upper layer and fall under gravity, so one should
expect good sweep – providing the vertical permeability is high enough.

There are 6 GHE in this reservoir.

[Sketch: porosity (decimal, 0-0.5) vs permeability (mD) cross-plot and φh/kh profiles for Well C (shallow marine). A fining-up profile with long vertical and horizontal correlation gives reduced sweep, with water shut-off or plug-back required; a coarsening-up profile with long vertical and horizontal correlation gives improved sweep.]



Aadland, A., Dyrnes, O., Olsen, S.R., and Dranen, O.M., 1994, Statfjord Field: Field
and reservoir management perspectives, SPEFE, August, 157-161.

Arnold, R., Burnett, D.B., Elphick, J., Feeley, T.J. III.,Galbrun, M., Hightower, M,
Jiang, Z., Khan, M., Lavery, M., Luffey, F., and Verbeek, P., 2004, Managing Water
- from waste to resource, Oilfield Review, Schlumberger, 16(2) p26-41.

Brand, P.J., Clyne, P.A., Kirkwood, F.G., and Williams, P.W., 1996, The Forties Field:
20 years young, Journal of Petroleum Technology, April, 280-291.

Bryant, I.D., and Livera, S.E., 1991, Identification of unswept oil volumes in a
mature field by using integrated data analysis: Ness Formation, Brent Field, UK
North Sea, in Generation, accumulation and production of Europe's hydrocarbons
(ed. A.M. Spencer), EAPG Spec. Publ. 1, 75-88.

Christiansen, S.H., and Wilson, P.M., 1998, Challenges in the Brent Field: Implementing
Depressurisation, synopsis of SPE paper 38469, JPT, Feb, 1998, 75-77.

Gawith, D.E., and Gutteridge, P.A., 1996, Seismic validation of reservoir simulation
using a shared earth model, Petroleum Geoscience, 2, 97-103.

Heward, A.P., and Gluyas, J.G., 2002, How can we help ensure the success of oil and
gas field rehabilitation projects?, Petroleum Geoscience, 8, 299-306.

James, S.J., 1999, Brent Field Reservoir Modelling: the Foundations of a brown field
redevelopment. SPE Reservoir Evaluation and Engineering 2(1):104-111.

Mijnssen, F.C.J., and Maskall, R.C., 1994, The Leman Field: Hunting for the remaining
gas, SPE 28880, presented at Europec, 25-27th October.

Mijnssen, F.C.J., Rayes, D.G., Ferguson, I., Al Abri, S.M., Mueller, G.H., Razali,
P.H.M.A., Nieuwenhuijs, R., and Henderson, G.H., 2003, Maximising Yibal’s
remaining value, SPE Reservoir Evaluation and Engineering, August, 255-263.

Owens, J., 1998, The role of the DTI in the UK Oil and Gas Industry, presentation
at Heriot-Watt, 1998.

Pressney, R., 1993, Reservoir Management in the Ninian Field - a case study, paper
presented at Heriot-Watt SPE Lecture, December.

Satter, A., and Thakur, A., 1994, Integrated Reservoir Management, Pennwell, 335p.

Satter, A., Varnon, J.E., and Hoang, M.T., 1994, Integrated Reservoir Management,
JPT, December, 1057-1064

Sneider, R, 1999, Multidisciplinary teams in exploration and production: Their

value and future, Part 1, Concepts, definitions and team success, The Leading Edge,
December, 1366-1370.


Sneider, R, 2000a, Multidisciplinary teams in exploration and production: Their

value and future, Part 2, Financially successful and unsuccessful teams, The Leading
Edge, January, 28-31.

Sneider, R, 2000b, Multidisciplinary teams in exploration and production: Their

value and future, Part 3, Building successful teams, team recognition and rewards,
and MDT’s in the future, The Leading Edge, February, 136-137.

Spaak, P., Almond, J., Salahudin, S., Mohd Salleh, Z., and Tosun, O., 1999, Fulmar:
a mature field revisited, in Fleet and Boldy (Eds), Petroleum Geology of North West
Europe, The Geological Society, 1089-1100.

Handling Uncertainty F I V E






4.1 Derivation of Parametric Method for
Sum of Two Distributions
4.2 Parametric Method Examples








Having worked through this chapter the student will be able to:

• Understand why uncertainty quantification is important

• Appreciate and avoid common pitfalls in uncertainty quantification

• Understand the appropriate distributions for estimating oil in place uncertainty

• Apply the parametric method to compute OIP and reserves uncertainty

• Understand the use of Bayes' Theorem for updating uncertainty estimates



Uncertainty in prediction of oil recovery from reservoirs arises primarily from our
lack of knowledge of the subsurface. The uncertainty is there whether we choose to
acknowledge it or not, and the primary reason for quantifying uncertainty is to improve
the decisions taken. The important aspect of uncertainty quantification is what we do
with our estimates of uncertainty – acquire more data, create intervention plans, etc.

The ability to estimate uncertainty accurately – and we’ll define what we mean by
accurate estimation of uncertainty later – has a direct impact on a company’s financial
performance. This can be through reserves bookings, which affect a company’s share
price, or through an effective reservoir management plan, which can reduce OPEX
and increase income by increasing oil production.

Uncertainty quantification is the process of taking what we know about a reservoir

– measured data – and what we believe about the reservoir, for example inferences
from depositional environment, and calculating how the uncertainty in the input
translates to uncertainty in the quantity of interest.

To carry out this process effectively, we need to understand how the uncertainties in
both elements of this process arise. That is, how do the uncertainties in the measured
data arise, and how do the uncertainties in our beliefs or inferences compare with
the truth.


It is important to think about what we mean by the word uncertainty: all too often,
the temptation is to rush in and “calculate the uncertainty” without being clear what
we mean by this statement.

Questions we might like to think about are:

What do we mean by uncertainty?
What do other people mean by uncertainty?
Is my definition of uncertainty the same as my boss is using?
What will happen if I’m wrong?
What does being wrong mean when we are talking about uncertainty?
Is there a single correct answer to the question “what is the uncertainty”?

Here are a number of definitions of uncertainty that can be found by searching online:
“Being unsettled or in doubt or dependent on chance”

“A measure of how poorly we understand or can predict something”
“The degree of variability in the observations”
“A condition where the outcome can only be estimated”
“Lack of sure knowledge or predictability because of randomness”
“The likely difference between a reported value and a real value”


Although these definitions provide partial answers to the first 3 questions, they do not
address the issue of a single correct answer to the question “what is the uncertainty”
nor do they address the question of what being right or wrong means.

Epistemic vs Aleatory Uncertainty

Before we address the issues of right or wrong in relation to answers about uncertainty,
it is useful to examine different categories of uncertainty.

Uncertainty has been studied in many branches of science, and it is becoming
increasingly important with concerns about high-impact events such as climate
change. The oil industry has always been a risk-based business and has developed
many techniques to handle risk and uncertainty, but there is always scope to learn
from other fields.

Uncertainty has been characterised as Fuzziness, Incompleteness, Randomness
(FIR). Fuzziness is imprecision of definition, Incompleteness is information we
do not have, and Randomness is a lack of a specific pattern. The JUNIPER project
spent some time implementing an uncertainty approach based on these ideas and
Dempster-Shafer Theory.

More common in the natural sciences is the categorisation into Epistemic and
Aleatory uncertainty. Aleatory uncertainty is characterised by inherent randomness and
is due to the intrinsic variability of nature; over time, all values will eventually be
sampled. Epistemic uncertainty, on the other hand, is due to our lack of knowledge
and our inadequate understanding, and can be reduced by additional data.

The categorisation into epistemic and aleatory uncertainty ties in with two distinct
statistical approaches – Bayesian and frequentist.

The Frequentist approach holds that probability only exists in reference to well-defined
random experiments. For a frequentist, probability is defined as the relative
frequency of a particular outcome in the limit of infinitely many repeated experiments.

A Bayesian, on the other hand, believes that probability theory can be applied to the
degree of belief in a proposition. For a Bayesian, the probability of an event represents
the degree of your belief in the likelihood of that event.

Both approaches to probability follow the same rules, and will give the same answers
given large amounts of data. The frequentist approach is limited to aleatory uncertainty
by definition, whereas the Bayesian approach can handle both epistemic and aleatory
uncertainty. But, Bayesian methods have not received wholesale acceptance largely
because of the subjective element introduced by the concept of “degree of belief”.

Why is this relevant to uncertainty quantification in the oil industry? Primarily

because the uncertainty we have to deal with is largely epistemic uncertainty. We
do not have multiple random reservoirs leaping into existence in the subsurface; we
have a single reservoir whose properties are known to limited accuracy at a limited
number of points.


Data Uncertainty
Where do the uncertainties in the data come from?

First of all, reservoir properties are variable, and so there is an issue of sampling
density. Have enough samples been taken to capture variability and trends? What
has happened to the samples since they were taken? Did the collection method
change their properties?

Second, how does the measurement process work? What is actually measured, and how
is the quantity of interest calculated? How are the uncertainties accounted for?

Not all of these uncertainties will be of equal magnitude, but you will have to consider
the effects of both variance and bias.

Assessment of uncertainties in data can be complex. It is not clear that even apparently
well carried out analyses always capture the full range of uncertainty. As an example,
consider measurements of the speed of light as shown in Figures 1 and 2.

These figures show that estimating accurate uncertainty bounds can be very hard,
even for a quantity as well-defined as the speed of light.

[Figure 1 plot: measured speed of light (km/sec) vs year of experiment, in two panels (roughly 1870-1900 and 1900-1960), with error bars on each measurement.]

Figure 1 Measurements of the velocity of light; 1875-1958. Results are as first reported,
with correction from air to vacuum where needed. The uncertainties are also as originally
reported, where available, or as estimated by the earliest reviewers. Error bars show
standard error (s.e. = 1.48 x probable error)


[Figure 2 plot: estimated speed of light (km/sec) vs year of estimate, 1920-1980, with the 1984 value marked.]

Figure 2 Recommended values for the velocity of light, 1929-1973


Oil in Place and Reserves estimation is an area where uncertainty quantification is

very important. There are strict rules about reporting reserves as they have a direct
impact on a company’s perceived value. Reserves are classified as proven, probable
and possible.

The concept of “proven” reserves is hard to square with the concept of uncertainty in
reserves estimates. What exactly do we mean by proven when we can’t be certain?
Reserves definitions have changed over recent years to account for the increasing
importance of uncertainty.

Let’s look at how the reserves definitions have evolved over time:

Journal of Petroleum Technology, May 1987:

“Reserves are estimated volumes of crude oil, condensate, natural gas liquids, and
associated substances anticipated to be commercially recoverable from known
accumulations from a given date forward, under existing economic conditions, by
established operating practices, and under current government regulations.”

Journal of Petroleum Technology, August 1996:

“Proved reserves are those quantities of petroleum which, by analysis of geological
and engineering data, can be estimated with reasonable certainty to be commercially
recoverable, from a given date forward, from known reservoirs, and under current
economic conditions, operating methods, and government regulations.”


What does “estimated with reasonable certainty” in this definition mean? There
are two definitions given in the August 1996 article. If using deterministic methods,
it means “with a high degree of confidence”; if using stochastic methods, it means
“at least 80%”.

Notice that this definition still does not remove ambiguity. One person’s “high
degree of confidence” may well be significantly different from someone else’s. If
using stochastic methods, are you going to choose a p80, a p90, p95, or even p99 to
satisfy “at least 80%”?

The current SPE definitions assign the following values to proven, probable, and
possible reserves:

• Proved reserves – 90% probability that the quantity will be produced or exceeded

• Probable reserves – 50% probability that proved plus probable will be produced or exceeded

• Possible reserves – 10% probability that proved plus probable plus possible will be
produced or exceeded

Note that SPE regulations for proved oil require financial and regulatory conditions
to be met as well.
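These exceedance conventions can be illustrated by extracting P90/P50/P10 values from a Monte Carlo reserves sample. The sketch below is purely illustrative: the log-normal parameters, units and sample size are hypothetical, not field data. Note that the P90 (proved) exceedance value corresponds to the 10th percentile of the distribution, and vice versa.

```python
# Sketch: proved/probable/possible cut-offs as exceedance percentiles of a
# synthetic Monte Carlo reserves sample (distribution parameters invented).
import numpy as np

rng = np.random.default_rng(0)
reserves = rng.lognormal(mean=np.log(200), sigma=0.4, size=100_000)  # mmbbl

# The P90 exceedance value (proved) is the 10th percentile of the CDF, etc.
p90 = np.percentile(reserves, 10)   # 90% chance of producing at least this
p50 = np.percentile(reserves, 50)
p10 = np.percentile(reserves, 90)
print(f"P90 = {p90:.0f}, P50 = {p50:.0f}, P10 = {p10:.0f} mmbbl")
```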

How do we compute OIP and reserves uncertainties? The first thing to do is to look
at the definition of oil in place, which is given by:

OIP = GRV × N/G × φ × So / Bo    (1)

In this expression GRV is Gross Rock Volume, N/G is net to gross, φ is porosity, So
is oil saturation and Bo is the oil formation volume factor.

So, if we can assess uncertainties in each of the terms in this equation, we can combine
them to compute the overall uncertainty in oil in place.

How do we decide what distributions we should use for each of the individual terms?
For example, if we have a porosity log showing porosity varies in one well from 3%
to 28%, with a mean value of 22%, should we use those three numbers to define a
triangular distribution for porosity? The answer is no, for reasons explained below.

The correct distributions to use in equation (1) are distributions of the average values.
For example, the porosity in equation (2) is the average porosity in the reservoir, and
it is the uncertainty in that average value that goes into the calculation of OIP.

We can see why this should be the case by looking in detail at how we calculate Oil in
Place. First, let us define a characteristic function, χ(x,y,z), which is 1 where we have
pay and zero otherwise. Then our Oil in Place is given by (in reservoir units):

OIP = ∫ φ So χ dτ    (2)


Basically, this equation sums up the porosity times saturation everywhere we have pay.

How would we compute an average saturation? It would be given by the following:

S̄o = ∫ φ So χ dτ / ∫ φ χ dτ = OIP / ∫ φ χ dτ    (3)

Similarly, the average porosity would be given by

φ̄ = ∫ φ χ dτ / ∫ χ dτ    (4)

So OIP is given by

OIP = S̄o φ̄ ∫ χ dτ    (5)

The last integral is summing up all the pay throughout the reservoir, and so is the net
rock volume, or the net to gross times the gross rock volume. So, the oil in place is
given by

OIP = GRV × N/G × φ̄ × S̄o / Bo    (6)

Notice that we have switched back to surface quantities through the introduction of
Bo, and that all quantities are averages given by the above equations.

How would we use these ideas in computing uncertainty in OIP? Suppose you have 4
wells, with saturation and porosity data. For each well, you can compute the average
saturation and average porosity. Look at the uncertainty in those quantities and use
that as a guide to estimate the uncertainties in average porosities and saturations.
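The averaging identities above can be checked numerically on a toy discretised reservoir. The sketch below uses arbitrary synthetic cell properties (not data from this chapter) and verifies that the pore-volume-weighted averages reproduce the OIP integral exactly.

```python
# Sketch of the averaging identities on a toy discretised reservoir:
# phi-and-pay-weighted averages reproduce OIP = Sbar * phibar * NRV.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
vol = np.full(n, 1.0)                 # equal cell volumes d_tau
phi = rng.uniform(0.05, 0.30, n)      # cell porosity
so = rng.uniform(0.5, 0.9, n)         # cell oil saturation
pay = (phi > 0.10).astype(float)      # characteristic function chi

oip = np.sum(phi * so * pay * vol)                           # OIP = int(phi*So*chi)
so_avg = np.sum(phi * so * pay * vol) / np.sum(phi * pay * vol)  # equation-(3) style
phi_avg = np.sum(phi * pay * vol) / np.sum(pay * vol)            # equation-(4) style
nrv = np.sum(pay * vol)                                      # net rock volume
print(oip, so_avg * phi_avg * nrv)   # the two values agree by construction
```

A simple arithmetic average of cell porosities and saturations would not satisfy this identity; only the weighted averages do.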


Our expectation on uncertainty quantification is that as we acquire more information,

and study a reservoir in greater detail, our estimate of the oil in place and reserves
improves and our uncertainty decreases. This is shown schematically in Figure 3,
redrawn from Dromgoole & Speers.


Figure 3 Schematic of the reserves estimate and its uncertainty range vs time through the exploration, appraisal and development phases, with the development decision marked (redrawn from Dromgoole and Speers)

Dromgoole and Speers carried out a study at BP on the effectiveness of uncertainty
estimation for OIP and reserves using data from a wide range of fields, both BP data
and data from other companies. They found a systematic underestimation of true
levels of uncertainty, and published a summary of their findings in the Schlumberger
Middle East Well Evaluation Review. Figure 4 shows a schematic plot for a North
Sea reservoir of estimated OIP as a function of time. The blue line represents the
p90 OIP estimates as a function of time, with p50 shown in red, and p10 shown in
green. This plot is typical of many that were produced by Dromgoole and Speers as
part of the original study.

Figure 4 Estimated OIP (p10, p50 and p90) as a function of time for a North Sea reservoir, over 12 years, with the development decision marked

What is happening here? Why are our expectations of decreasing uncertainty not
being met? This pattern is repeated across different companies, so it is unlikely that
there is a failing of one company’s method that is responsible for this underestimation
of uncertainty.


Figure 5 Change in p50 reserves (%, from underestimate to overestimate) from discovery to four years later vs degree of reservoir complexity, for submarine fan reservoirs, compartmentalized deltaic reservoirs and other reservoirs (from Dromgoole and Speers)

Figure 5 shows a summary from Dromgoole and Speers of the change in p50 reserves
from discovery to 4 years later categorised by reservoir type. The message here is
that reserves tend to be underestimated for simple reservoirs and overestimated for
complex reservoirs.

Psychological Research
There have been many studies reported in the psychology literature looking at how
effective people are at knowing what they do and don’t know. See for example
Kahneman and Tversky’s book.





Figure 6 Calibration curves: proportion of correct answers vs subjects' stated confidence (0.5 to 1.0), from Hazard and Peterson (1973, probabilities), Hazard and Peterson (1973, odds), Philips and Wright (1977) and Lichtenstein (unpublished)

Figure 6 shows an example outcome from one of the studies. In this case, a collection
of people were asked questions with a choice of two answers, such as "Which city is
closer to London – (a) New York, or (b) Moscow?". They were asked to select one of
the answers and then assess their confidence in the correctness of their answer, from
50% (pure guesswork) to 100% (absolute certainty). Each study asked a number
of people between 50 and 100 questions and then looked at the frequency of correct
answers in each probability decile. If people are well calibrated, we would expect
on average 50% of the guesses to be correct, 70% of the answers assessed at 70%
confidence to be correct, and so on.

Figure 6 shows results from 4 independent studies. We can see that there are remarkable
similarities between the different studies (which all used questions considered
“difficult”). In all cases, there is no real improvement in accuracy of answers until
confidence rises above 80%, and accuracy at 100% confidence seems to be around
75 – 85%.
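A calibration curve of this kind is straightforward to compute from (stated confidence, was-correct) pairs. The sketch below uses synthetic data constructed to mimic the overconfidence pattern in Figure 6, so the accuracy model is an assumption for illustration, not study data.

```python
# Sketch: a calibration curve from (stated confidence, was-correct) pairs.
# The data are synthetic, mimicking the overconfidence seen in Figure 6.
import numpy as np

rng = np.random.default_rng(2)
stated = rng.choice([0.5, 0.6, 0.7, 0.8, 0.9, 1.0], size=5000)
# Simulated overconfident subjects: true accuracy lags stated confidence,
# reaching only ~80% at 100% stated confidence.
true_acc = 0.5 + 0.6 * (stated - 0.5)
correct = rng.random(5000) < true_acc

for c in np.unique(stated):
    sel = stated == c
    print(f"stated {c:.1f}: observed {correct[sel].mean():.2f} (n = {sel.sum()})")
```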


The graph shown in Figure 6 is called a calibration curve, and shows how well
calibrated our estimates of uncertainty are. Clearly, the Dromgoole and Speers study
shows that OIP estimates in the oil industry are not well calibrated. Other industries
also produce calibration curves, and an example from weather forecasting is shown in
Figure 7. Here, data from a set of weather forecasts is plotted showing the frequency
of observed precipitation as a function of forecast probability of precipitation. The
numbers next to each point are the total number of forecasts at that probability.
Clearly, despite what we may think about the quality of weather forecasts, they do a
much better job than the oil industry!

Figure 7 Calibration curve for precipitation forecasts: observed relative frequency (%) vs forecast probability (%), with the number of forecasts at each probability annotated


The Parametric Method

The parametric method is an uncertainty estimation method that relies on a simple
set of rules for combining distributions. Its use in the oil industry has been described
by a number of authors [1,2,3].

Rules of the Parametric Method

1 The sum of independent distributions, of whatever form, tends to a normal
distribution
2 The mean of the sum of distributions is the sum of the means
3 The variance of a sum of independent distributions is the sum of the variances
4 The product of independent distributions tends towards a log-normal distribution
5 The mean of the product of independent distributions is the product of the means
6 The value of 1+σ²/µ² of the product of independent distributions is the product
of the individual 1+σ²/µ² values


4.1 Derivation of Parametric Method for Sum of Two Distributions

We can show where the formulae for the parametric method come from by considering
the sum of two distributions.

We'll let w = x + y. Then, from the definition of the mean,

w̄ = (1/n) Σ (xi + yi) = (1/n) Σ xi + (1/n) Σ yi = x̄ + ȳ

so the mean of the sum is the sum of the means.

Now consider the variance. Writing ⟨·⟩ for the mean of a quantity:

σw² = ⟨(w − w̄)²⟩ = ⟨w²⟩ − w̄²

From the definition w = x + y,

σw² = ⟨(x + y)²⟩ − (x̄ + ȳ)² = ⟨(x + y)²⟩ − x̄² − ȳ² − 2x̄ȳ

Expanding (x + y)², we get

σw² = ⟨x²⟩ + ⟨y²⟩ + 2⟨xy⟩ − x̄² − ȳ² − 2x̄ȳ

Using the definitions of the variances of x and y, we can rewrite this as

σw² = σx² + σy² + 2(⟨xy⟩ − x̄ȳ)

which, using the definition of the correlation coefficient, becomes (see appendix)

σw² = σx² + σy² + 2rσxσy

For uncorrelated distributions, the value of r is zero, and so the variance of the sum
of distributions is equal to the sum of the individual variances.

4.2 Parametric Method Examples

Sum of Several Distributions

To illustrate the parametric method, let’s first look at the sum of 5 triangular distributions
with parameters given in Table 1. At this stage, we’re not assigning any physical
meaning to these numbers – we’re just going to see how well the parametric method
performs compared with Crystal Ball.


min    mode   max    type         mean   variance
80     100    120    triangular   100    67
100    300    500    triangular   300    6667
200    500    1000   triangular   567    27222
400    700    1400   triangular   833    43889
500    900    1800   triangular   1067   73889
                     sum          2867   151733

Table 1 Parameters for the 5 distributions to sum

Figure 8 Sum of the 5 distributions computed using Monte Carlo (Crystal Ball, 10,000 trials): frequency chart of the total, spanning roughly 1,900 to 3,900

The mean of the distribution computed by the parametric method is 2867, and by
Crystal Ball is 2865 (shown in Figure 8). The standard deviation (square root of the
variance) is 389 for the parametric method, and 390 for Crystal Ball.
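Rules 2 and 3 can be checked against Table 1 with a few lines of code. In this sketch an ordinary Monte Carlo sum stands in for Crystal Ball, and the random seed is arbitrary.

```python
# Sketch: parametric-method sum of the five triangular distributions in
# Table 1, checked against a simple Monte Carlo (in place of Crystal Ball).
import numpy as np

params = [(80, 100, 120), (100, 300, 500), (200, 500, 1000),
          (400, 700, 1400), (500, 900, 1800)]

def tri_mean(a, b, c): return (a + b + c) / 3.0
def tri_var(a, b, c): return (a*a + b*b + c*c - a*b - a*c - b*c) / 18.0

mean_sum = sum(tri_mean(*p) for p in params)   # rule 2: sum of means
var_sum = sum(tri_var(*p) for p in params)     # rule 3: sum of variances

rng = np.random.default_rng(3)
mc = sum(rng.triangular(a, b, c, 100_000) for a, b, c in params)
print(mean_sum, var_sum ** 0.5)   # parametric: ~2867 and ~390
print(mc.mean(), mc.std())        # Monte Carlo agrees closely
```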

Oil in Place Calculations

Oil in place is estimated using the equation (expressed in a consistent set of units)

OIP = GRV × N/G × φ × So / Bo

To quantify the uncertainty in Oil in Place, we need to multiply the distributions for
each of the input parameters. Table 2 shows a set of parameters for this example system.

The first step is to identify suitable distributions for each of the input values.
Remember, these are average values, so we are looking for distributions that express
the uncertainty in average porosity, average So, average net-to-gross, as well as the
distribution of uncertainty in gross rock volume. The largest uncertainty here is
almost certainly gross rock volume.


            min            most likely    max
GRV (ft³)   350,000,000    600,000,000    1,150,000,000   triangular
N/G         0.6            0.8            0.9             triangular
por         0.19           0.23           0.29            triangular
So          0.75           0.8            0.9             triangular
Bo          1.25           1.3            1.35            triangular

Table 2 Parameters for the example OIP calculation

Since we are multiplying these parameters, we expect a log-normal distribution.

The mean of the distribution is the product of the individual means. We calculate
the mean of each triangular distribution using µ = (a+b+c)/3; this calculation is
shown in the "mean" column in Table 3. Remember, we are multiplying by 1/Bo,
so we need to compute the mean and variance of 1/Bo rather than Bo. The variance
is calculated using σ² = (a²+b²+c²−ab−ac−bc)/18, and is shown in the "variance"
column in Table 3. Once we have the mean and variance we can compute 1+σ²/µ²
for each distribution. We then multiply the means and the values of 1+σ²/µ² to
compute the mean and the value of 1+σ²/µ² for the combined distribution. From the
mean and 1+σ²/µ² we can compute the variance of the new distribution.

min          most likely  max            type        mean          variance      1+σ²/µ²
350,000,000  600,000,000  1,150,000,000  triangular  700,000,000   2.79167E+16   1.056973
0.6          0.8          0.9            triangular  0.766667      0.003889      1.006616
0.19         0.23         0.29           triangular  0.236667      0.000422      1.007538
0.75         0.8          0.9            triangular  0.816667      0.000972      1.001458
1.25         1.3          1.35           triangular  0.769991*     0.000146      1.000247

product (ft³)                                        79,867,835    4.70851E+14   1.073814
product (bbl)                                        14,224,013    (std dev 3,864,487)

* mean and variance computed for 1/Bo, as described above

Table 3 Mean and variance calculation for the log-normal distribution

Figure 9 Monte Carlo results using Crystal Ball, overlaid with a log-normal distribution (mean = 14,187,863, std dev = 3,951,269); the OIP axis runs from 5,000,000 to 25,000,000 bbl


The mean from the parametric method is 14,224,013, compared with a value calculated
from Crystal Ball of 14,187,863 (Figure 9). The standard deviations were 3,864,486
(parametric) and 3,951,269 (Crystal Ball). Figure 9 also shows a log-normal curve with
that mean and standard deviation, showing how close the distribution is to log-normal.
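The Table 3 calculation can be reproduced in a few lines. This is a sketch: 1/Bo is treated as triangular between 1/1.35 and 1/1.25 (as in the table), and the usual 5.615 ft³/bbl conversion is assumed.

```python
# Sketch reproducing Table 3: parametric product rules (5 and 6) applied
# to the OIP inputs, with 1/Bo treated as a triangular distribution.
def tri_mean(a, b, c): return (a + b + c) / 3.0
def tri_var(a, b, c): return (a*a + b*b + c*c - a*b - a*c - b*c) / 18.0

inputs = [(350e6, 600e6, 1150e6),       # GRV, ft3
          (0.6, 0.8, 0.9),              # N/G
          (0.19, 0.23, 0.29),           # porosity
          (0.75, 0.8, 0.9),             # So
          (1/1.35, 1/1.3, 1/1.25)]      # 1/Bo

mean_oip, one_plus = 1.0, 1.0
for a, b, c in inputs:
    m, v = tri_mean(a, b, c), tri_var(a, b, c)
    mean_oip *= m                       # rule 5: product of the means
    one_plus *= 1 + v / m**2            # rule 6: product of 1 + var/mean^2

var_oip = mean_oip**2 * (one_plus - 1)  # recover the variance
FT3_PER_BBL = 5.615                     # assumed unit conversion
print(mean_oip / FT3_PER_BBL, var_oip**0.5 / FT3_PER_BBL)  # mean, std in bbl
```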


Bayes Theorem provides a way to update our estimates of probability given new
data. For example, we may have generated estimates of uncertainty in Oil in Place,
and wish to update them given some production data – for example rate or pressure
data. Bayes Theorem relates posterior probabilities to prior probabilities through a
likelihood term. The prior probabilities are our belief before we acquire the data. The
posterior probabilities are the probabilities updated to reflect the new information.

An example of where we might wish to use Bayes Theorem is where we have estimated
uncertainty in OIP and have then acquired some production data. Bayes Theorem
provides a consistent way of updating our beliefs with new information.

The statement of Bayes Theorem is:

p(mi|O) = p(O|mi) p(mi) / Σj p(O|mj) p(mj)

Here mi is one of a set of possible models, p(mi) is the prior probability of the model,
p(O|mi) is the likelihood of the observations given the model, and p(mi|O) is the
posterior probability of the model given the observations.

What Bayes Theorem says is that to update the probability of a model given some
observation, we assume that the model is true and compute the likelihood of seeing
the observation under that assumption. Multiplying that likelihood by the original
(prior) probability and then normalising gives the updated (or posterior) probability
of the model.


Figure 10 Water-rate history match: water rate (stb/d) versus time (days), with experimental uncertainty shown on the observed points

Handling Uncertainty F I V E

As an illustration, consider the water rate history match graph shown in Figure 10.
If we assume that the simulated model is correct, how do we calculate the probability
of the observation? Assume that each point has some experimental uncertainty about
it as shown in Figure 10 and that the experimental uncertainty is given in the form
of a Gaussian:

p(q) = (1 / (σ√(2π))) exp( −(q − q_obs)² / (2σ²) )    (7)

The probability of the true measured value being equal to our simulated rate is
given by substituting the simulated rate for q in Equation 7. Thus the likelihood of
observing a single point is

p_i = (1 / (σ√(2π))) exp( −(q_sim − q_obs)² / (2σ²) )    (8)


The probability of all the observations being consistent with the model is the product
of the individual probabilities if all the observations are independent.

P = Π_i p_i = Π_i exp( −(q_sim(i) − q_obs(i))² / (2σ_i²) )    (9)

This can be rewritten as

p(O | m) = e^(−M²)    (10)

where the misfit is defined by the usual least squares expression.

M² = Σ_i (q_sim(i) − q_obs(i))² / (2σ_i²)    (11)
It is important to stress that we have assumed all the observational errors are
independent. If the errors are correlated, or if there is bias in the system, we can end
up with misleading probability distributions. For example, if the errors are correlated
in time, we may end up with an overconfident prediction.
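The misfit and likelihood calculation of Equations 10 and 11 can be sketched numerically. In the sketch below, the observed rates, simulated rates and measurement error are invented purely for illustration; they are not data from the text.

```python
import numpy as np

# Sketch of the Gaussian misfit/likelihood calculation (Equations 10 and 11).
# The observed and simulated rates and the measurement error sigma are
# made-up illustrative numbers, not data from the text.
q_obs = np.array([100.0, 180.0, 260.0, 300.0])  # observed water rates (stb/d)
q_sim = np.array([110.0, 175.0, 255.0, 310.0])  # simulated rates (stb/d)
sigma = 20.0                                    # assumed measurement std dev

# M^2 = sum_i (q_sim(i) - q_obs(i))^2 / (2 sigma^2)   (Equation 11)
misfit = np.sum((q_sim - q_obs) ** 2) / (2.0 * sigma ** 2)

# p(O|m) = exp(-M^2), assuming independent errors    (Equation 10)
likelihood = np.exp(-misfit)
print(f"M^2 = {misfit:.4f}, p(O|m) = {likelihood:.4f}")
```

A larger misfit gives an exponentially smaller likelihood, which is why models that track the data poorly receive very little posterior weight.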


Figure 11 Oil production rate (STB/Day) versus time (Years) for six top-surface models of the Wytch Farm reservoir: models that match the observed data (left) and models that fail to match (right)

As an example of how to use these ideas, Figure 11 shows oil rates from a simulation
of the Wytch Farm Reservoir carried out as part of an uncertainty study at BP. The
reservoir model was constrained to produce at the observed rate, and was run with
6 possible top surfaces.

Initially, all 6 top surfaces were judged as equally likely, so the prior probabilities
were set to 1/6. On running the reservoir model, 3 of the top surfaces showed good
agreement with the observed data (left hand picture), and 3 failed to match (right
hand picture).

A first update with Bayes theorem could then be to set 3 likelihoods equal to 1, and
3 equal to zero. The normalising constant in the denominator of Bayes Theorem is
then 3*1/6 + 3 * 0 = 1/2. For the successful models, the prior probability is 1/6, and
after the Bayes update is (1/6)/(1/2) or 1/3.
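The arithmetic of this first update can be written out directly. The sketch below reproduces the 1/6 priors and the 0/1 likelihoods described above.

```python
# Discrete Bayes update for the six top surfaces: equal priors of 1/6,
# likelihood 1 for the three matching models and 0 for the three that fail.
priors = [1.0 / 6.0] * 6
likelihoods = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]

# Denominator of Bayes Theorem: 3*(1 * 1/6) + 3*(0 * 1/6) = 1/2
norm = sum(l * p for l, p in zip(likelihoods, priors))

# Posterior = likelihood * prior / normalising constant
posteriors = [l * p / norm for l, p in zip(likelihoods, priors)]
print(posteriors)  # matching models: (1/6)/(1/2) = 1/3; failed models: 0
```

The three successful models each end up with posterior probability 1/3, and the posteriors sum to one, as they must.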

A more sophisticated approach uses the Gaussian errors discussed above, and is shown
in Figure 12. We computed the least squares error and then used

p(O | m) = e^(−M²)    (12)

to produce the “improved” plot shown in Figure 12.
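The same update with Gaussian likelihoods simply replaces the 0/1 weights by exp(−M²). The misfit values below are invented purely to show the mechanics; they are not the Wytch Farm results.

```python
import numpy as np

# Bayes update with Gaussian likelihoods (Equation 12). The M^2 misfit
# values here are invented for illustration; they are not the Wytch Farm
# results plotted in Figure 12.
misfits = np.array([0.5, 1.0, 2.0, 8.0, 12.0, 20.0])  # assumed M^2 per model
priors = np.full(6, 1.0 / 6.0)                         # equal priors

weights = np.exp(-misfits) * priors   # likelihood * prior
posteriors = weights / weights.sum()  # normalise by the Bayes denominator
print(posteriors.round(4))
```

Models with small misfits keep most of the probability, while badly mismatching models are driven smoothly towards zero rather than being cut off sharply.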









Figure 12 Updated probabilities for Model 1 to Model 6 using the Gaussian likelihood of Equation 12



1. “Application of Forecasting and Uncertainty Methods to Production”, J A Spencer, D T K Morgan, SPE 49092.

2. “The Quantification and Management of Uncertainty in Reserves”, P J Smith, D J Hendry, A R Crowther, SPE 26056.

3. “A Pragmatic Approach to Risked Production Forecasting”, I Singleton, SPE.

4. “Managing Uncertainty in Oilfield Reserves”, R Speers, P Dromgoole, Schlumberger Middle East Well Evaluation Review, 12, 1992. Available online at http://www.slb.

5. “Reengineering Simulation: A Bottom Line Approach to Managing Complexity and Complexification”, N G Saleri, SPE 36696.

6. “Improving Investment Decisions Using A Stochastic Integrated Asset Model”, S H Begg, R B Bratvold, SPE 71414.

7. “Incorporating Historical Data in Monte Carlo Simulation”, J A Murtha, SPE.

8. “Risk Assessment and Decision Making in Business and Industry – a practical guide”, G Koller, CRC Press, 1999.

9. “Judgement Under Uncertainty: Heuristics and Biases”, eds D Kahneman, P Slovic, A Tversky, Cambridge University Press, 1982.



APPENDIX A: Statistical Concepts

This appendix reviews the statistical terms and concepts we need in uncertainty quantification.

Mean, Median, Mode

There are three common measures of the “average” of a distribution: the mean, the
median and the mode.

The mean value is the most frequently used and most commonly known average. It
is given by


µ_X = (1/N) Σ_i X_i

It is also often represented as <X>, with the angled brackets implying a sum (or
integral) over all the X values divided by the number of points N, or as X̄ (X with
an overbar).

The median value of a probability distribution p(x) is the value x_med at the midpoint
of the cumulative probability distribution:

∫_{−∞}^{x_med} p(x) dx = 1/2 = ∫_{x_med}^{∞} p(x) dx

It is often referred to as the p50 value.

The mode is the most likely value of x; that is, the value with the highest probability of
occurrence. For a Gaussian distribution, the mean, median, and mode are all identical.
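As a quick numerical check of that last statement, the sketch below draws a large Gaussian sample and compares its mean and median; the location and scale parameters are arbitrary.

```python
import numpy as np

# For a symmetric (Gaussian) distribution the mean and median coincide;
# check this on a large sample. loc and scale are arbitrary choices.
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=100_000)

mean = x.mean()
median = np.median(x)
print(f"mean = {mean:.3f}, median = {median:.3f}")
```

For a skewed distribution, such as the log-normal of Figure 9, the sample mean and median would separate, with the mean pulled towards the long tail.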





Figure A1 Mean, Median, Mode for a Normal Distribution (from Spencer et al)




Figure A2 Mean, Med