
4.5 Static reservoir study

4.5.1 Introduction
The most important phase of a reservoir study is
probably the definition of a static model of the
reservoir rock, given both the large number of
activities involved, and its impact on the end results.
As we know, the production capacity of a reservoir
depends on its geometrical/structural and
petrophysical characteristics. The availability of a
representative static model is therefore an essential
condition for the subsequent dynamic modelling
phase.
A static reservoir study typically involves four
main stages, carried out by experts in the various
disciplines (Cosentino, 2001).
Structural modelling. Reconstructing the
geometrical and structural properties of the reservoir,
by defining a map of its structural top and the set of
faults running through it. This stage of the work is
carried out by integrating interpretations of
geophysical surveys with available well data.
Stratigraphic modelling. Defining a stratigraphic
scheme using well data, which form the basis for well-to-well correlations. The data used in this case
typically consist of electrical, acoustic and radioactive
logs recorded in the wells, and available cores,
integrated where possible with information from
specialist studies and production data.
Lithological modelling. Definition of a certain
number of lithological types (basic facies) for the
reservoir in question, which are characterized on the
basis of lithology proper, sedimentology and
petrophysics. This classification into facies is a
convenient way of representing the geological
characteristics of a reservoir, especially for the
purposes of subsequent three-dimensional
modelling.

Petrophysical modelling. Quantitative
interpretation of well logs to determine some of the
main petrophysical characteristics of the reservoir
rock, such as porosity, water saturation, and
permeability. Core data represent the essential basis
for the calibration of interpretative processes.
The results of these different stages are integrated
in a two (2D) or three-dimensional (3D) context, to
build what we might call an integrated geological
model of the reservoir. On the one hand, this
represents the reference frame for calculating the
quantity of hydrocarbons in place, and on the other,
forms the basis for the initialization of the dynamic
model. In the following paragraphs we will describe
these stages in greater detail.

4.5.2 Structural model


The construction of a structural reservoir model
basically involves defining the map of the structural
top, and interpreting the set of faults running through
the reservoir. Traditionally, this phase of the study falls
under the heading of geophysics, since seismic surveys
are without doubt the best means of visualizing
subsurface structures, and thus of deducing a
geometric model of the reservoir. Further
contributions may be made by specialist research such
as regional tectonic studies and, for fault distribution,
by available dynamic data (pressures, tests and
production data).
The definition of the reservoir's structural top
involves identifying the basic geometrical structure of
the hydrocarbon trap. In this case we are dealing with
the external boundaries of the reservoir, since its
internal structure is considered in relation to the
stratigraphic reservoir model (see below).

In most cases, the map of the reservoir's
structural top is defined on the basis of a
geophysical interpretation of 2D or 3D data. In this
case, the most frequent, the geophysicist interprets
significant horizons in a seismic block as a
function of time. This generates a data set (x, y, t),
forming the basis for the subsequent gridding
phase; in other words, the generation of a surface
representing the time map of the horizon under
consideration. This time map is then converted into
a depth map, using the relevant laws governing the
velocities of seismic waves, which are calculated
according to the characteristics of the formations
overlying the reservoir. There are various
techniques for performing this conversion, some of
which are highly sophisticated. The choice of the
most suitable depends on geological complexity,
and on the human, technological and financial
resources available. In any case, the resulting map
is calibrated against well data.
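As a minimal illustration of this conversion step, the sketch below turns a gridded time horizon into a depth map using a single average velocity, then shifts the result to honour well markers. The velocity, grid and marker values are purely hypothetical; real studies use layered or laterally varying velocity models.

```python
# Minimal time-to-depth conversion sketch (hypothetical values throughout).
import numpy as np

def time_to_depth(twt_s, v_avg_ms):
    """Convert two-way time (s) to depth (m) using an average velocity (m/s)."""
    return v_avg_ms * twt_s / 2.0

# Gridded time map of the interpreted horizon (two-way time, seconds)
twt_grid = np.array([[1.80, 1.82],
                     [1.85, 1.88]])
depth_grid = time_to_depth(twt_grid, v_avg_ms=2500.0)

# Calibration against well data: shift the map by the mean residual between
# marker depths observed in wells and the converted depths at those wells.
well_markers = np.array([2255.0, 2362.0])        # depths from two wells (m)
converted_at_wells = np.array([2250.0, 2355.0])  # map values at the well locations
depth_grid += np.mean(well_markers - converted_at_wells)
```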
In some cases, the map of the structural top
may be generated solely on the basis of available
well data, with the help of data from the geological
surface survey, if the reservoir lies in an area with
outcrops of geological formations. This may
happen where no seismic survey has been carried
out, or when there are enough wells available to
provide adequate coverage of the structure. In these
instances, the improvement in quality of the
structural top map resulting from a seismic
interpretation is not sufficient to justify the extra
work involved, which is due above all to the
problem of calibrating a large number of wells.
The interpretation of the set of faults running
through a reservoir has considerable impact on its
production characteristics, and in particular on the
most appropriate plan for its development. Given
an equal volume of hydrocarbons in place, the
number of wells required is higher for reservoirs
characterized by faults which isolate independent
or partially independent blocks from the point of
view of fluid content. In the case of deep sea
reservoirs (for example in the Gulf of Mexico,
West Africa, etc.), the number of wells is often
crucial in the evaluation of development plans.
Consequently, an accurate assessment of faults and
their characteristics may be a decisive factor.
The interpretation of the set of faults within a
reservoir is generally based on four types of data that
are subsequently integrated.
Inconsistencies in correlation. The presence of
faults may sometimes be apparent from well data,
indicated by inconsistencies in the correlation
scheme. Typically, for example, the depth of a
horizon in a well may turn out to be greater or
smaller than expected, indicating the possible
presence of a fault. In the past, when 3D seismic
surveys were much less readily available than today,
this technique allowed only the largest faults to be
identified and located with a good degree of
accuracy.
Well data. The presence of faults in a well can
generally be ascertained through the analysis of the
stratigraphic sequence. Missing geological sequences
indicate the presence of normal faults, whereas
repeated sequences indicate the presence of reverse
faults.
Geophysical tests. Geophysical data represents
the main source of information on the presence of
faults since, unlike the two previous techniques, it
also investigates parts of the reservoir which are
distant from the wells. The presence of faults may
be indicated by discontinuities in the seismic
signal. This applies to both data from surface
seismic surveys and data recorded in well seismics
(VSP, crosswell seismics). Furthermore, this data
can be interpreted both traditionally, by mapping a
reflecting geological horizon, and by using seismic
attributes (dip, azimuth, amplitude, etc.).
Dynamic well test data. The interpretation of
dynamic well tests (see Chapter 4.4) may show the
presence of faults in cases where the faults have an
impact on fluid flow, and thus on pressure patterns
over time.
An adequate integration of these types of
information generally allows the set of faults running
through the reservoir to be reconstructed with sufficient
accuracy. However, in carrying out the integration, we
should take into account a series of factors which may
be crucial for the quality of the end result.
The first factor concerns the degree of detail
aimed for in the interpretation. In most cases this
depends more on the tools available than on the
actual aims of the study. Geophysicists tend to
include in their interpretation all those
discontinuities which can be identified from the
seismic survey, regardless of whether these have an
impact on fluid flow. As a result, the reservoir
engineer often has to simplify the map during the
dynamic simulation phase, keeping only those
faults which turn out to have a significant impact
on the results of the simulation model. For
example, faults shorter than the average size of the
model cells can obviously be disregarded. For this
reason, the degree of detail in a geophysical
interpretation should match the overall
requirements of the study, and be discussed and
agreed with the other members of the study team.
Another factor is linked to the hydraulic
transmissibility of the faults. In a reservoir study, we
are interested only in those faults which act as
sealing faults, or which are more transmissive than
the reservoir rock. Faults which have no impact on
fluid flow, on the other hand, may be disregarded.
From this point of view it is important to stress that
geophysical tests provide a means of locating the
faults in space relatively accurately, but give no
information on their sealing effect. By contrast,
dynamic well tests allow us to quantify the impact of
the faults on fluid flow, but not to locate them
precisely in space. It is therefore obvious that these
two techniques are complementary, and that adequate
integration improves the end result.
Reconstructing the network of faults running
through a reservoir is a complex activity, requiring a
combination of data of differing type, quality and
reference scale. The quality of the final reconstruction
is generally tested during the validation phase of the
simulation model (see Chapter 4.6), in which we
attempt to reconstruct the reservoir's past production
history (history match). It may be found necessary to
adjust the preliminary interpretation during this phase
of the work, which consequently takes on an iterative
nature, aimed at progressively refining the initial
assumptions. Obviously, all modifications must be
made after consultation with the
geologist/geophysicist who carried out the work, so as
to maintain the model's necessary geological and
structural coherence.
The structural model of a reservoir is based on
the combination of results obtained during the stages
of defining the structural top, and interpreting the
fault network. In a 2D context, the result is simply a
depth map calibrated against the wells, with the
superimposition of fault lines where these intercept
the structural top. The model, understood as the
external framework of the reservoir, is completed
with a map of the bottom which is derived using the
same method.
Recent years, however, have seen the increased use
of software allowing us to model subsurface structures
in 3D, and this now represents the most widely
adopted approach in the field. The main advantages of
3D techniques are their speed and ease of use, as well
as the ability to model complex structures (e.g. reverse
faults), which are impossible to represent with
traditional 2D techniques, based on the mapping of
surfaces representing geometrical and petrophysical
parameters.
The procedures for constructing a three-dimensional structural reservoir model vary according
to the applications used, but generally include the
following steps.
Modelling of major faults. These are faults which
bound the main blocks forming the reservoir. The fault
planes are in this case explicitly modelled as surfaces,
which in turn define the boundaries of the main blocks
of the three-dimensional model.
Construction of geological surfaces. Within each
main block, parametric surfaces are generated, which
represent the main geological horizons, typically the
top and bottom of the main sequences. These surfaces
must be consistent with the depths measured in all
available wells.
Modelling of minor faults. Whilst affecting fluid
dynamics, these faults have only a slight impact on the
overall geometry of the reservoir, locally displacing
the geological surfaces.
Fig. 1 shows an example of a three-dimensional
structural reservoir model: the major faults, surfaces
and minor faults are clearly visible. It is obvious that
structures of this complexity cannot be modelled using
traditional two-dimensional mapping methods.

Fig. 1. Example of a 3D structural reservoir model (courtesy of L. Cosentino).

4.5.3 Stratigraphic model


The development of the stratigraphic model is without
doubt one of the most traditional tasks of the reservoir
geologist, who must perform a well-to-well correlation
with the aim of defining the stratigraphic horizons
bounding the main geological sequences within the
hydrocarbon formation. This task is of vital
importance for the study's overall accuracy, since fluid
flow is heavily influenced by the reservoir's internal
geometry. It is therefore essential to devote the
necessary time and resources, both human and
technological, to this stage of the project, in order to
build an accurate model.

The difficulties encountered at this stage of the
reservoir study are mainly linked to the definition of
the depositional environment of the reservoir. In some
cases, when the sedimentary sequences present a
significant lateral extension, the correlations between
wells may be relatively simple. This is true, for
example, for shelf areas, with both terrigenous and
carbonate sedimentation, dominated by tidal
phenomena. An extreme example of correlativity is
represented by the distal facies of some deep sea
turbidite sediments, as in various Adriatic fields,
where we can correlate with certainty individual
events just a few centimetres thick, even between wells
several kilometres apart. However, such examples are
exceptions to the rule. In most cases, the lateral
extension of the sedimentary bodies is much lower,
and in many cases, unfortunately, is less than the
average distance between wells. This is true of most
continental and transitional geological formations,
such as alluvial, fluvial and deltaic sediments, where
reconstructing the internal geometry of the reservoir
may turn out to be extremely complicated,
representing an often insurmountable challenge for the
reservoir geologist. In these cases, as we will see
below, integration of the various disciplines
participating in the reservoir study may be crucial for
improving the accuracy of the end result.
Correlation techniques

The basic data used for well-to-well correlations are
the logs recorded in open hole or cased hole, and cores.
These data are used to create stratigraphic sections and
correlations, in terms of real depth or with respect to a
reference level, through which we can generally
identify the lines corresponding to significant
geological variations. Fig. 2 depicts a classic example of
a geological cross-section between two wells, showing
the logs used for the correlation itself.
As already mentioned, there is often a high risk of
generating spurious correlations, and the reservoir
geologist must carefully choose the suitable
methodologies to minimize possible errors. To this
end, one of the best techniques is sequence
stratigraphy. Sequence stratigraphy is a relatively new
approach, whose official appearance can be dated to
1977 (Vail et al., 1977). This is a chronostratigraphic
system, based on the hypothesis that the deposition of
sedimentary bodies is governed by the combined
effects of changes in sea-level (eustatic phenomena),
sedimentation, subsidence and tectonics.
On this basis, we can identify sequences of different
hierarchical order within a geological unit, separated by
sequence boundaries which represent unconformities or
maximum flooding surfaces. These surfaces are the
most important reference levels (or markers) that a
reservoir geologist may find in well profiles.
The correct identification of these units allows us
to generate an extremely detailed chronostratigraphic
framework. This is especially well-suited to reservoir
studies, since chronostratigraphic units and fluid flow
are usually closely linked. This link does not
necessarily exist if we consider traditional
lithostratigraphic units (for example by correlating the
tops of arenaceous units).

Fig. 2. Example of a correlation between wells (courtesy of L. Cosentino).

Where it is not possible to apply sequence
stratigraphy, or where this does not provide the desired
results, we may resort to correlations based on the
hydraulic properties of the sedimentary bodies. This
approach aims to define flow units (or hydraulic
units), which do not necessarily coincide with the
geological units, but which can be considered
homogeneous from a dynamic point of view. One of
the classic methodologies for the definition of flow
units is described in Amaefule et al. (1993).
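As a concrete sketch of the method of Amaefule et al. (1993), the code below computes the standard flow-unit indicators (reservoir quality index, normalized porosity index and flow zone indicator) from core porosity-permeability pairs; the sample values and the FZI class boundaries are illustrative only.

```python
# Flow-unit indicators after Amaefule et al. (1993); data values are synthetic.
import numpy as np

k = np.array([120.0, 85.0, 3.0, 0.5])     # core permeability, mD
phi = np.array([0.24, 0.21, 0.12, 0.08])  # core porosity, fraction

rqi = 0.0314 * np.sqrt(k / phi)           # reservoir quality index, micrometres
phi_z = phi / (1.0 - phi)                 # normalized porosity index
fzi = rqi / phi_z                         # flow zone indicator, micrometres

# Samples with similar FZI are grouped into hydraulic (flow) units;
# the bin boundaries below are arbitrary and chosen per reservoir.
flow_unit = np.digitize(fzi, bins=[1.0, 3.0])
print(fzi.round(2), flow_unit)
```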
Validation of the stratigraphic scheme

Once we have defined the reference correlation
scheme, it is good practice to check its accuracy using
other types of techniques and data which may provide
useful information for this purpose.
Biostratigraphy and palynology. Available rock
samples (cores or cuttings) are frequently analysed
with the aim of studying micropalaeontological and/or
palynological associations (spores and pollens). This
data may in some cases help to confirm the
stratigraphic scheme. However, it is important to
check that chronostratigraphy and biostratigraphy are
consistent, and, in the case of drilling debris (cuttings),
to take into account the limited vertical resolution of
the data.
Pressure data. Available static pressure data, and
particularly those collected in wells with WFT
(Wireline Formation Tester) instruments, provide
extremely significant information on the continuity
and connectivity of the various sedimentary bodies. In
the absence of structural discontinuities (e.g. faults),
the pressures measured in different wells in identical
geological sequences should be similar. If this is not
the case, there may be correlation problems.
Production data. Within a geological unit we
should be able to observe a thermodynamic
equilibrium, corresponding to specific
characteristics of the fluids produced at the surface
(gas-oil ratio and oil density). The presence of
anomalies in these characteristics may be due to
correlation problems. Obviously, in these cases we
should first rule out problems with the well (e.g.
defective cementing).
Drilling data. The Rate Of Penetration (ROP) may
provide useful information on the stratigraphic
sequence crossed. Different geological units often
present varying resistance to the advancement of the
bit. In these cases, data supplied by the drilling activity
may be used to check the consistency of available
correlations.
It is obvious that this list of techniques is not, and
cannot be, exhaustive; every reservoir study possesses
distinctive data and information which can be used
during the various stages of the study. It is therefore

the responsibility of the reservoir geologist to examine
all the existing opportunities, and exploit these to the
utmost.
Construction of a stratigraphic model

The stratigraphic horizons defined at the wells
during the correlation phase are subsequently linked to
one another by constructing surfaces which together
form what we might call the stratigraphic model of the
reservoir. This model consists of a series of thickness
maps of the individual geological horizons located
between the upper and lower boundary surfaces of the
reservoir. These maps are usually created using
appropriate computer mapping programmes. For
stratigraphic modelling, too, the three-dimensional
approach is the most commonly adopted by reservoir
geologists today. In this case, after constructing the
external framework of the reservoir according to the
procedure described in the previous paragraph, we
proceed to define the internal geometry; in other
words, to create that set of surfaces between the top
and the bottom of the reservoir which represent the
boundaries between the geological sequences selected
for correlation. Generally, as already stressed, these
surfaces form boundaries between flow units which
are independent of one another.
The specific procedure allowing us to construct
this stratigraphic scheme obviously depends on the
applications used. Generally speaking, it is possible to
model all existing sedimentary geometries (conformity
and erosion surfaces, pinch-out, onlap, toplap,
downlap, etc.) and to achieve an accurate reproduction
of the depositional scheme under consideration.
Fig. 3 shows an example of a three-dimensional
stratigraphic model, where we can see the different
depositional geometries of the various sedimentary
units. Note especially the onlap type geometry of the
lower unit in the area of structural high.

Fig. 3. Example of a 3D stratigraphic model (courtesy of L. Cosentino).

4.5.4 Lithological model


The structural and stratigraphic models described in
the previous paragraphs together form the reservoir's
reference framework. The next stage in a reservoir
study involves defining the spatial distribution of the
reservoir rock's petrophysical characteristics. In
three-dimensional geological modelling jargon, this
operation is often described as the populating of the
reservoir model. As a rule, it can be performed using
appropriate deterministic or stochastic functions
which allow us to generate two or three-dimensional
spatial distributions of significant characteristics,
such as porosity and permeability, directly from well
data. However, this operation is often difficult to
perform, since the lateral and vertical continuity of
these reservoir parameters is often uncertain, and the
modelling process is based on the a priori
assumption of continuity and spatial regularity which
does not necessarily reflect the real situation. This is
especially true of parameters such as permeability,
whose spatial continuity is usually considerably
lower than the average distance between control
points (wells). For this reason, when working in three
dimensions, it is often preferable to construct a
preliminary lithological model of the reservoir; that
is, a model based on the identification and
characterization of a certain number of basic facies
which are typical of the reservoir under examination.
These facies are identified on the basis of data
gathered in the wells using specific classification
criteria, and subsequently distributed within the
three-dimensional structural-stratigraphic model
using special algorithms. The main advantage of this
approach is that it is usually much easier to carry out
the spatial distribution of basic facies than that of
reservoir-rock parameters. This is because the
distribution of facies is based on precise geological
criteria which depend on the sedimentary
environment under consideration. The distribution of
petrophysical parameters is therefore implemented
subsequently, and is based on the previously created
lithological model. The idea behind this procedure is
that the petrophysical characteristics of the reservoir
can be considered intimately linked to the lithological
facies.
The concept of facies is particularly well-suited to
reservoir studies. Once the facies have been
determined and characterized by integrating log and
core data, and, where possible, seismic data, this
classification system can be used in various stages of
the study, including the following.
Three-dimensional modelling. The facies can be
employed as building blocks for the development of
three-dimensional geological models, generally
through the application of stochastic algorithms. As
we have said, this is the most typical use of facies.
Quantitative interpretation of logs. We can
associate each facies, or group of facies, with a typical
interpretative model, for example in terms of
mineralogy (matrix density), saturation exponent or
cementation factor.
Definition of rock types. Although it is not possible
to perform a direct upscaling on the facies for the
simulation stage (since this is a discrete parameter),
their distribution may be used as a qualitative
reference in the dynamic model for the assignment of
saturation functions (capillary pressure and relative
permeability). This stage is usually known as the
definition of rock types.
It is evident that the small-scale, three-dimensional
geological model which describes and characterizes
the facies can be used in different stages of the study,
and in various contexts. The facies may thus be
considered the most suitable tool for conveying wide-ranging geological information through the various
stages of the study up to the simulation model,
guaranteeing the consistency of the work flow. It is
worth noting that the concept of facies also represents
a convenient common language for all the experts
involved in the study.
In practical terms, the lithological model of a
reservoir is constructed by integrating an ideal
representation of the reservoir (sedimentological
model), a classification stage (definition of facies)
and a spatial distribution stage (three-dimensional
model).
Sedimentological model

The sedimentological/depositional model of the
reservoir forms the basis for the lithological model,
and is defined in two main stages: the description and
classification of the individual lithological units
(lithotypes) making up the reservoir rock, carried out
on available cores; and the definition of a depositional
model describing the sedimentary environment
(fluvial, deltaic, marine, etc.). This model also allows
us to formulate hypotheses regarding the geometries
and dimensions of the geological bodies; this
information is used in the three-dimensional
modelling stage.
Classification of facies

The facies can be considered the building
blocks of the lithological reservoir model. They
can be defined in various ways, the simplest of
which involves the application of cut-off values to
log curves recorded in the wells. For example, a
simple sands-clays classification may be realized
by identifying a cut-off value in the gamma ray
log curve (log of the gamma rays emitted by the
rock as a function of depth).
Usually, the classification into facies is obtained
through a more complex process. This involves
selecting the most suitable log curves, identifying a
number of reference wells (i.e. those wells which have
been cored, and have high-quality logs), and applying
statistical algorithms such as cluster analysis, or more
complex processes based on neural networks. In this
way, a lithological column is generated for each
reference well, where each depth interval is associated
with a specific facies (log facies). This process is
iterative, and aims to identify the optimal number of
facies needed to describe the reservoir rock in the right
degree of detail.
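A minimal sketch of such a workflow is given below, assuming three log curves in a cored reference well and an illustrative choice of three clusters; the log values are synthetic, and in practice the cluster count is refined iteratively against core descriptions.

```python
# Log-facies classification by cluster analysis (synthetic data sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: gamma ray (API), bulk density (g/cm3), neutron porosity (v/v)
logs = rng.normal([60.0, 2.45, 0.20], [25.0, 0.10, 0.06], size=(500, 3))

X = StandardScaler().fit_transform(logs)   # put the curves on a common scale
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
log_facies = km.labels_                    # one facies index per depth sample

# Each cluster is then compared with core data for lithological and
# petrophysical characterization, as described in the text.
```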
Next, these log facies are compared with
available core data and characterized from
a lithological and petrophysical point of view.
Basically, each log facies is associated with typical
lithological descriptions and values (mean and/or
statistical distributions) for petrophysical
parameters. The degree of detail and the accuracy of
this characterization stage obviously depend on the
number and quality of logs used. In the case of old
wells, with a limited availability of logs (e.g.
electrical logs of spontaneous potential and/or
resistivity), the classification process is perfunctory
and the characterization stage is limited to a basic
lithological description, for example sands/silt/clays,
with limited vertical resolution. By contrast, where
logs of more recent generations are available (e.g.
density/neutron, PEF, sonic and NMR), the facies
emerging from the classification process can be
characterized more completely. For example, each
facies can be linked not only to the most obvious
lithological characteristics, but also to precise
petrophysical values such as porosity, permeability,
capillary behaviour, compressibility, cementation
factor, saturation exponent, etc.
During the final stage, the defined classification
on reference wells is extended to all the other wells in
the reservoir through a process of statistical
aggregation. This stage allows us to obtain
lithostratigraphic columns in terms of facies for all the
wells in the reservoir.
Three-dimensional distribution of facies

The three-dimensional distribution of facies is
usually obtained by applying stochastic algorithms,
using the three-dimensional stratigraphic model as a
base (see above).
These algorithms, which will be discussed in
greater detail below, allow us to generate extremely
realistic geological models, related to all available
data; geophysics, log and core data, and sometimes
even dynamic data. Fig. 4 shows an example of this
type of model, demonstrating the degree of detail
which can be obtained in what are now routine
geological studies.
These models use a vast number of basic cells,
often in the order of tens of millions, thus allowing an
extremely detailed representation of the real geological
structure of the reservoir. In a later stage, after a
process of simplification and reduction of the number
of cells (upscaling), these geological models (in terms
of the petrophysical characteristics of the reservoir
rock) are input into the dynamic model to simulate the
production behaviour of the reservoir.

Fig. 4. Example of a stochastic model of facies (courtesy of L. Cosentino).

4.5.5 Petrophysical model


Fluid flow in a reservoir takes place in an
interconnected grid of porous spaces within the
reservoir rock. The characteristics of this grid
determine the quantity of fluids present, their relative
distribution, and the ease with which they can flow
towards production wells.
The properties of this porous system are linked to
the characteristics (mineralogical, granulometric, and
textural) of the solid particles which bound it. These in
turn are a function of the original depositional
environment and the post-depositional processes
(diagenesis, cementation, dissolution, fracturing)
which may have affected the rock after its formation.
The quantitative study of the porous space in
reservoir rock forms a part of petrophysics, a
discipline which plays a fundamental role in reservoir
studies. This represents the basis for the dynamic
description of fluid flow, and thus of the behaviour
(observed or predicted) of production wells. For this
reason it is essential to devote sufficient time and
resources to this stage, both in terms of data collection
and analysis (including laboratory tests on cores), and
in terms of interpretation, in order to generate a
representative petrophysical model of the reservoir.
This section is divided into two parts: the first is
devoted to the petrophysical interpretation in the strict
sense of the word: the quantitative evaluation of
reservoir properties in the wells. Special emphasis will
be given to the most important parameters (porosity,
water saturation and permeability) which make up a
typical petrophysical well interpretation. This will be
followed by a discussion of the problem of
determining the cut-off value to be applied to
petrophysical parameters to obtain the net pay of the
reservoir in question; in other words, the portion of
rock which actually contributes to production. The
second part is devoted to the distribution within the
reservoir of the petrophysical parameters calculated at
the wells, dealing separately with 2D and 3D
representations, and a description of the main
deterministic and stochastic techniques adopted for
this purpose.
Petrophysical well interpretation

In a classic petrophysical interpretation we
generate, for each well in the reservoir, a series of
vertical profiles describing the main properties of the
reservoir rock's porous system, such as porosity, water
saturation and permeability. This analysis also
provides a more or less sophisticated mineralogical
interpretation of the solid part of the system, in other
words the reservoir rock itself. Fig. 5 shows a typical
example of a petrophysical interpretation, including
the results in terms of petrophysical and mineralogical
parameters.

Fig. 5. Example of a petrophysical well interpretation (courtesy of L. Cosentino).
Both the properties of the porous system and the
composition of the solid part can be analysed and
measured directly on cores. In this case, the results can
generally be considered fairly accurate, at least where
the cored portions are actually representative of the
reservoir rock. However, cores often cover only a
limited portion of the total number of intervals crossed
by the wells. Consequently, the petrophysical
interpretation is generally carried out using available
logs, whereas cores are used to calibrate the
interpretation algorithms and to check the results.
Below is a brief description of the main petrophysical
parameters, and the techniques used to determine
them, as already described in Chapter 4.1.
Porosity

The determination of porosity (see Chapters 1.3
and 4.1) can generally be considered the least complex
stage in a petrophysical interpretation. However, this
stage is extremely important, since it defines the
quantity of hydrocarbons present in the reservoir. In
the laboratory, porosity is measured on rock samples
whose linear dimensions are generally limited to
1-1.5 inches, using techniques which involve the
extraction of fluids, or, vice versa, the introduction of
fluids into the sample's porous system. These
techniques, which have been in use for over 40 years,
generally provide fairly accurate values, and can also
be applied under conditions of temperature and
pressure corresponding to original reservoir
conditions.
The problems associated with this type of
measurement, where they exist, are linked to the
representativeness of the rock sample. A typical
example is the measurement of secondary porosity,
which, being linked to genetic factors whose spatial
intensity is extremely irregular, may not be at all
representative of average reservoir conditions. As
such, it may be difficult to determine the porosity of
fractured rocks, or rocks affected by intense
dissolution and/or cementation phenomena. Another
example of poor representativeness is provided by
rocks of conglomerate type, in which the distribution
of the porous system is highly irregular, at least at the
core scale.
The methods most frequently used to determine
porosity are those based on the interpretation of well
logs. The quantitative interpretation of porosity is of
particular significance in reservoir studies when the
determination of the porous volume of the reservoir
turns out to be highly complex. This is true, for
example, of old fields, with little data of low quality
and resolution; of carbonate reservoirs, characterized
by prevalently secondary porosity; and of fractured
reservoirs, where well instruments may at times turn
out to be completely inadequate for a quantitative
calculation of porosity.
In all of these cases it is essential to integrate the
normal petrophysical interpretation, based on log and
core data, with all those techniques, static and
dynamic, which may provide information, even of an
indirect nature, on the porous volume of the reservoir.
This integration process may make a fundamental
contribution to the evaluation of the porous volume of
the reservoir, and the understanding of its spatial
distribution.
Water saturation

The porous system of a reservoir rock is filled
with fluids, typically water and hydrocarbons. The
relative distribution of these fluid phases within
the porous space depends on a series of factors
linked to the chemical and physical properties of
the rock and the fluids themselves, as well as the
interaction between rock and fluid (the rock
wettability). Determining the saturation conditions
of the reservoir rock represents one of the most
important stages in a reservoir study, since it
influences not only the calculation of the amount
of hydrocarbons in place, but also the
determination of fluid mechanics, and thus the
productivity of the wells. This is generally a
complex stage, which frequently involves a high
degree of uncertainty in the final construction of
the integrated reservoir model.
The water saturation of a rock, like its porosity,
may be measured on cores, or on the basis of logs. In
the laboratory, meaningful measurements of water
saturation may be obtained using Dean-Stark type
extraction data on preserved samples, at least in cases
where mud filtrate invasion is limited, and where the
expansion of the gaseous phase does not lead to a
significant change in the sample's initial saturation
conditions. We can often obtain data of considerable
accuracy by using suitable coring techniques and non-invasive oil-based drilling muds, at least in areas of the
reservoir which are distant from the transition zone,
also known as the capillary fringe. An example of a
systematic study of this type, carried out on the
Prudhoe Bay field in Alaska, is described in McCoy et
al. (1997).
The water saturation of a rock may also be
determined using capillary pressure measurements,
based on the fact that capillary forces are responsible
for the relative distribution of water and hydrocarbons
within the porous space.
For the purposes of reservoir studies, water
saturation is mainly measured on the basis of well
logs recorded in uncased boreholes, and particularly
electrical resistivity/induction logs, generally using
the famous Archie equation, first published back in
1942 (Archie, 1942). In cased boreholes, on the
other hand, water saturation can be measured using
data obtained with pulsed neutron-type instruments.
These also have the advantage of being recordable
through the production tubing, while the well is
producing. These instruments are often used in
systematic monitoring of the evolution of saturation
conditions in reservoirs, and therefore represent an
extremely interesting source of information for
reservoir studies. For example, the ability to
monitor the advancement of oil-water or gas-water
contacts in the various zones of the reservoir as a
function of time, not only allows us to optimize the
management of the field, but also provides
information which is essential in calibrating the
results of the reservoir model.
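For reference, the Archie equation cited above can be written in its general form as

$$S_w=\left(\frac{a\,R_w}{\phi^m\,R_t}\right)^{1/n}$$

where $R_w$ is the formation water resistivity, $R_t$ the true formation resistivity, $\phi$ the porosity, and $a$, $m$ and $n$ the tortuosity factor, cementation exponent and saturation exponent respectively.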
Permeability

Permeability (see Chapter 4.1) is without doubt the
most important petrophysical reservoir parameter.
Permeability determines both the productivity of the
wells and the reservoir's ability to feed drainage areas,
and thus the reservoir's capacity to sustain economic
rates in the long term. On the other hand, this is also
the most difficult parameter to determine.
Permeability is a property which can be measured
directly only on cores, logs generally allowing us to
obtain only rough estimates. Further, permeability is
usually characterized by extremely high spatial
variability, which makes it difficult to estimate even in
areas adjoining available measurement points. The
determination of permeability is thus an important and
complex stage in the reservoir study, which requires an
integration of all available data, and consequently a
high degree of cooperation between the engineers and
geologists in the work team.
Estimates of a reservoir's permeability are
generally carried out on the basis of available core
data, preferably calibrated on the results of
production test interpretations, when these exist.
Although this approach generally provides acceptable
results, reservoir engineers are frequently forced to
modify radically the distribution of permeability
values in the simulation model during the subsequent
dynamic modelling phase, so as to replicate the
production behaviour observed in the wells. This
clearly indicates an incorrect initial determination of
permeability.
The best method for satisfactorily defining the
initial distribution of permeability values in a reservoir
is doubtlessly the integration of the different sources
which provide direct or indirect information on this
property. These sources are more numerous than is
generally thought, and the integration process often
allows us to generate fairly accurate permeability
models, which turn out to be adequate during the
simulation phase.
Below is a brief list of some of the available
techniques providing information on the permeability
of a reservoir. Each of these techniques supplies data
referring to a given support volume (i.e. reference
scale), given saturation conditions (i.e. relative or
absolute permeability) and given measuring conditions
(in situ or laboratory-based). When integrating these
data, it is therefore necessary to carry out a
normalization process which takes these differences
into account.

Minipermeameter analysis

Permeability can be rapidly and accurately
measured in the laboratory using an instrument known
as a minipermeameter. Comparison with the normal
measurements obtained from cores under ambient
conditions often shows a good level of coherence
between these two types of data. The significance of
this type of measurement lies in the possibility of
identifying small-scale heterogeneities. In this case,
too, the critical aspect is represented by the sample
volume, even smaller than normal core samples.
Furthermore, the measurements only refer to
laboratory conditions.
Interpretation of well tests

The permeability of a formation can be estimated
(see Chapter 4.4) through the interpretation of well
tests (flowing and pressure build-up, injectivity and
interference tests, etc.). These interpretations provide
values for effective permeability to hydrocarbons
under reservoir conditions, and refer to a much larger
support volume than any other technique. Where good
quality pressure data are available, well tests allow us
to estimate the average permeability of a reservoir
with considerable accuracy.
Production logs (PLT)

These tools are generally used to monitor wells
(see Chapter 6.1); however, where a production test is
available it is possible to use PLT (Production Logging
Tool) data to calculate a permeability profile at the
well (Mezghani et al., 2000). These data refer to the
effective permeability to hydrocarbons under reservoir
conditions, and generally represent an interesting link
between the dynamic estimates deriving from the
interpretation of well tests, and the static estimates
which can be obtained, for example, by logging.
However, it is necessary to note the possible damage
suffered by the geological formation (or skin) around
the well.
Core analysis

Absolute permeability can be measured in the
laboratory on core samples of various sizes. These
measurements, which represent the only source of
direct data, may refer both to laboratory and reservoir
conditions. The data measured are then adjusted to
take into account the so-called Klinkenberg effect (gas
slippage along the pore walls, which affects laboratory
measurements performed with gas), and the effects of
overburden pressure. The most critical
aspect of this type of data is the extremely small
support volume, which often renders the
measurements unrepresentative of the reservoir as a
whole.
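For reference, the Klinkenberg correction mentioned above is usually written as

$$k_g=k_\infty\left(1+\frac{b}{\bar p}\right)$$

where $k_g$ is the permeability measured with gas at mean pore pressure $\bar p$, $k_\infty$ the equivalent liquid (Klinkenberg-corrected) permeability, and $b$ the gas slippage factor.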

Wireline Formation Testing (WFT)

This is a test which measures formation pressures
at predetermined depth intervals, by carrying out
short flowing and pressure build-up phases. These are
interpreted in the same way as a flowing test to
obtain estimates of permeability. In this case the
values obtained can be considered to refer to the
permeability of the fluids present in the invaded
zone, under reservoir pressure and temperature
conditions.
Nuclear Magnetic Resonance (NMR) logs

Nuclear magnetic resonance tools represent the
only means of obtaining a continuous vertical profile
of permeability in the well. Permeability is calculated
using equations based on the proton relaxation time,
and the results obtained may be fairly accurate,
especially where some of the input parameters can be
calibrated on measurements carried out on core
samples in the laboratory.

Table 1. Characteristics of the various methods employed

Method                Scale   Pressure/temperature   Saturation   Measurement
Core analysis         Macro   Ambient/in situ        Absolute     Direct
Minipermeameter       Micro   Ambient                Absolute     Direct
Production tests      Mega    In situ                Relative     Indirect
PLT                   Mega    In situ                Relative     Indirect
WFT                   Macro   In situ                Relative     Indirect
NMR                   Macro   In situ                Absolute     Indirect
Regressions           Macro   In situ                Absolute     Indirect
Empirical equations   Macro   In situ                Absolute     Indirect
Neural networks       Macro   In situ                Absolute     Indirect
Petrophysical correlations

Permeability is often obtained using a correlation
with porosity by means of core measurements
(Nelson, 1994). However, this method tends to
generate permeability profiles which are unnaturally
regular; there are various types of statistical data
processing which allow us to preserve at least in part
the heterogeneity of the original permeability
distribution. These include, for example, regressions
for individual lithological facies, and multiple linear
regressions (Wendt et al., 1986).
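A minimal sketch of the facies-by-facies approach is shown below, fitting the classic semilogarithmic relation $\log_{10} k = a + b\,\phi$ separately for each facies; all values are synthetic.

```python
# Porosity-permeability regression per lithological facies (synthetic data).
import numpy as np

phi = np.array([0.10, 0.15, 0.12, 0.22, 0.18, 0.25])
k = np.array([0.8, 5.0, 2.0, 120.0, 30.0, 400.0])   # core permeability, mD
facies = np.array([0, 0, 0, 1, 1, 1])               # from the facies model

coeffs = {}
for f in np.unique(facies):
    sel = facies == f
    slope, intercept = np.polyfit(phi[sel], np.log10(k[sel]), deg=1)
    coeffs[f] = (intercept, slope)

def predict_k(phi_value, f):
    a, b = coeffs[f]
    return 10.0 ** (a + b * phi_value)   # permeability estimate, mD

print(round(predict_k(0.20, 1), 1))
```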
Empirical equations

Various empirical equations exist in the relevant
literature for the estimate of permeability on the basis
of known petrophysical parameters. In some specific
cases, these equations may provide fairly acceptable
results, but it is always important to check them using
available core data.
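One classic example of such an equation, given here purely for illustration since the text does not single out any particular formula, is Timur's relation between permeability, porosity and irreducible water saturation:

$$k = 0.136\,\frac{\phi^{4.4}}{S_{wi}^{2}}$$

with $k$ in mD and $\phi$ and $S_{wi}$ expressed in percent; as stressed above, any such estimate should be checked against available core data.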
Neural networks

This is a recent methodology, which allows us to
generate permeability profiles using logs or other
petrophysical profiles. The most interesting aspect of
this methodology (Mohaghegh and Ameri, 1996), is
that the obtained estimates correctly represent the
original degree of heterogeneity of the data
measured, and the results do not suffer, as statistical
methods do, from the smoothing effect. Particular
attention should be paid during the preliminary
training process of the neural networks; this
requires adequate calibration data, without which the
results obtained may be misleading. Table 1
illustrates the characteristics of these various
methods.
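Purely as a hedged sketch of the approach described above, the fragment below trains a generic feed-forward network on synthetic, core-calibrated data; the library, network size and input curves are illustrative choices, not those of the original studies.

```python
# Neural-network permeability predictor (synthetic data sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))   # e.g. GR, density, neutron, sonic samples
log10_k = 1.0 + X @ np.array([0.5, -0.8, 0.6, 0.3]) + rng.normal(0, 0.2, 300)

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X, log10_k)                  # trained on core-calibrated log10(k)
k_profile = 10.0 ** net.predict(X)   # predicted permeability profile, mD
```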

By integrating the data derived from these different
techniques we can often generate reliable permeability
models, which reflect both the static and dynamic
aspects of this property. This allows us to improve and
shorten the validation phase (history matching) of the
dynamic simulation model, thereby optimizing the
quality of the reservoir study and the time required to
perform it.
Determination of net pay

The net pay of a reservoir represents that portion of
rock which effectively contributes to production. This
value is calculated using appropriate cut-off values
applied to petrophysical parameters. Although the
simplicity of the term might lead one to think
otherwise, cut-off is one of the most controversial
concepts within the community of geologists and
reservoir engineers, since there is no clear shared
methodology for its definition. This is also evident
from the lack of literature on the subject, despite the
fact that the determination of net pay is practically
unavoidable in any reservoir study (Worthington and
Cosentino, 2003).
One of the key points in determining the cut-off to
be applied to petrophysical curves is an understanding
of its dynamic nature. This is because the cut-off is
linked to conditions that imply the productive capacity
of hydrocarbons under given pressures, and with a
given development plan. Typically, a porosity cut-off is
selected on the basis of permeability versus porosity
graphs drawn up using data obtained from core
analysis, thus fixing a limit value for permeability
often equivalent to a conventional value of 1 mD
(millidarcy).
In selecting the cut-off, we must consider at least
the following two factors. Firstly, the cut-off must be
chosen on the basis of fluid mobility rather than
permeability alone. Consequently, in the same
geological formation, the value of the cut-off varies as
a function of the fluid present. This is why many of the
world's gas fields produce from reservoirs with
extremely low permeability, just a few mD, whereas
the cut-offs normally applied to heavy oil reservoirs
are in the order of tens of mD. Typical cut-off values
for mobility lie in the range of 0.5-1 mD/cp.
Secondly, the choice of cut-off must be a function
of production mechanisms. In reservoirs which
produce by simple fluid expansion (depletion drive),
the value of the cut-off depends on the prevalent
pressure level. It is obvious that rocks with low
permeability, subjected to high pressure differential
(the difference between reservoir pressure and the
pressure imposed in the production tubing), can
contribute to production. As a result, in a reservoir of
this type, the real cut-off changes over time, as
pressure differences increase. The cut-off's
dependency on time emphasizes another aspect of this
complex problem. By contrast, in reservoirs
dominated by convective phenomena (e.g. reservoirs
subjected to secondary recovery processes using water
injection), where pressure does not change
significantly during production, the cut-off depends
more on the efficiency of the displacement process,
and is thus more generally linked to concepts of
Residual Oil Saturation (ROS).
It should be stressed, however, that even taking into
consideration the aspects described above, the
selection of an appropriate cut-off value is difficult,
and often elusive. This explains why such a choice is
highly subjective and difficult to justify. It is no
coincidence that one of the most controversial aspects
of reservoir unitization processes (the pooled
production of a reservoir which extends over two or
more production leases, by agreement or imposed by
law), is precisely the choice of cut-off and the
determination of net pay.
The main problem in determining the cut-off lies in
the choice of the reference value for permeability,
which represents the boundary between productive and
non-productive rocks. Various factors should
contribute to this choice: a good knowledge of the
reservoir's lithologies and fluids, the prevalent
production mechanism, the analysis of data which may
provide direct or indirect information for the purpose
(production tests and DST, data obtained with
measurements performed using WFT and NMR-type
well test tools, etc.). The integration of all these types
of information allows an appropriate choice of the
values to be adopted.
Once the permeability cut-off value for
commercial production has been defined, other
associated petrophysical cut-off values may be
obtained fairly simply on the basis of diagrams
(crossplots) of reservoir properties. This methodology
is illustrated schematically in Fig. 6.

Fig. 6. Procedure for defining a consistent set of petrophysical cut-offs. K, permeability; Φ, porosity; Sw, water saturation; Vsh, shale volume; subscript c, cut-off.
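As a minimal illustration of applying such a consistent set of cut-offs, the sketch below flags net pay along a well profile; all values, including the cut-offs themselves, are hypothetical.

```python
# Net-pay flagging with a consistent set of cut-offs (hypothetical values).
import numpy as np

phi = np.array([0.04, 0.12, 0.21, 0.18, 0.09])   # porosity profile
vsh = np.array([0.60, 0.25, 0.10, 0.15, 0.45])   # shale volume profile
sw = np.array([0.95, 0.55, 0.30, 0.40, 0.80])    # water saturation profile
dz = 0.5                                         # sample thickness, m

PHI_C, VSH_C, SW_C = 0.10, 0.35, 0.65   # cut-offs derived as in Fig. 6

pay = (phi >= PHI_C) & (vsh <= VSH_C) & (sw <= SW_C)
net_pay = pay.sum() * dz
print(net_pay, "m of net pay out of", len(phi) * dz, "m gross")
```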
Where a lithological classification is available (see
above), this procedure should be carried out
independently for each facies. This usually results in
greater accuracy, and consequently a more effective
distinction between producing and non-producing
rocks. In some cases, the lithological classification may
also lead to the definition of facies which are reservoir
and non-reservoir, thus making the determination of
net pay even easier, especially when working on
complex three-dimensional geological models.
Finally, it is always advisable to perform sensitivity
analyses on the method employed by using different
working hypotheses, and thus different cut-off values,
and noting the variations in the final values for the
volume of hydrocarbons in place. This phase often
allows us to refine our initial hypotheses, and to
optimize our final choice.
Distribution of petrophysical parameters

The petrophysical well interpretation forms the
basis for the subsequent stage of the study, consisting
in the lateral (2D) or spatial (3D) distribution of
reservoir properties. In both cases, the most complex
problem is the lack of information on those parts of
the reservoir between wells, especially when dealing
with highly heterogeneous geological formations, or
those characterized by poor lateral continuity.
Traditionally, the interpolation of known values
measured at the wells has represented the classic
methodology for the construction of reservoir maps,
with the geological/sedimentological model forming
the only reference point for this operation. In the
past, the reservoir geologist drew these maps
manually; only from the 1980s onwards did computer
mapping techniques begin to be used. Since the
1990s the situation has changed radically. On the one
hand, the availability of computers with increasingly
high processing and graphic capabilities has
definitively changed the way reservoir geologists
work. On the other, the development of new
methodologies such as geostatistics and the
extraordinary evolution of techniques for acquiring
and processing geophysical data have provided
geologists with new tools, allowing them to build
more accurate and less subjective models. In the
following sections we will describe separately the
two possible approaches: two-dimensional and three-dimensional modelling.
Two-dimensional modelling of reservoir parameters

Two-dimensional geological modelling consists in
the generation of a set of maps representing the
lateral distribution of reservoir parameters. We can
distinguish between two basic types of map: those
which describe the geometry of geological units (top,
bottom and thickness of the various layers: see
above), and those which describe their petrophysical
properties; porosity, water saturation, net/gross ratio,
and permeability. It should be stressed that the latter
type of map, whilst not strictly speaking necessary
for the static model, is essential for dynamic
simulations.
The procedures used to generate maps of
porosity and net/gross (the ratio of net pay to gross
thickness), are basically similar. Mean values are
calculated in the wells for each geological unit, and
these values are then adopted for the interpolation
process by using computer mapping techniques. In
the simplest cases, as we have said, this operation
is performed solely on the basis of the
sedimentological reservoir model, with fairly
reliable results, at least where there is a high
density of existing wells. However, considerable
attention must be paid to the peripheral areas of the
reservoir, where the mapping algorithm may
extrapolate meaningless values. In these cases, it is
common practice to use reference control points,
which prevent this type of uncontrolled
extrapolation.
This procedure may be improved by using
geostatistical techniques. In this case, the correlation
function adopted is not predefined, as in the case of
commercial algorithms. Instead, it is calculated
directly on the basis of available data, with obvious
benefits in terms of the accuracy of the end result.
These correlation functions (the variogram, or its
opposite, covariance) express the real lateral continuity
of the variable being modelled, and also allow us to
take into account possible directional anisotropies. The
geostatistical algorithm used in the next stage of the
evaluation process is known as kriging. This algorithm
allows us to represent accurately the lateral
distribution of the parameters, and has the additional
advantage of providing an evaluation of local
uncertainty (kriging variance).
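A minimal ordinary-kriging sketch for a single target location is given below, assuming an exponential covariance model; the coordinates, porosity values and variogram parameters are all illustrative.

```python
# Ordinary kriging of porosity at one target location (illustrative values).
import numpy as np

wells = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1500.0]])  # x, y in metres
phi = np.array([0.18, 0.22, 0.15])                            # porosity at wells
target = np.array([400.0, 500.0])

def cov(h, sill=1.0, c_range=800.0):
    """Exponential covariance model C(h) = sill * exp(-3h/range)."""
    return sill * np.exp(-3.0 * h / c_range)

d_ww = np.linalg.norm(wells[:, None, :] - wells[None, :, :], axis=2)
d_wt = np.linalg.norm(wells - target, axis=1)

n = len(wells)
A = np.ones((n + 1, n + 1))   # kriging system with unbiasedness constraint
A[:n, :n] = cov(d_ww)
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = cov(d_wt)

sol = np.linalg.solve(A, b)     # kriging weights and Lagrange multiplier
estimate = sol[:n] @ phi
variance = cov(0.0) - sol @ b   # kriging (local uncertainty) variance
print(round(estimate, 3), round(variance, 3))
```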
A further improvement of the expected results
may be obtained by using seismic data. Geophysics
is the only direct source of information on areas of
the reservoir which are distant from wells, and in
recent years the geophysical techniques available for
this purpose have improved considerably. This
approach is based on a possible correlation between
particular characteristics (or attributes) of the
seismic signal recorded, and the petrophysical
characteristics of the reservoir (typically porosity
and/or net pay). This correlation is defined in the
calibration phase, by comparing surface seismic data
with data measured at the wells (sonic and velocity
logs, VSP, etc.). Once the correlation has been
defined, we proceed to integrate the seismic data,
generally using the following methods (in order of
complexity):
The normal well data interpolation, improved by
using maps of seismic attributes; these are used to
calculate the large-scale trend of the parameter
under consideration.
The conversion of the map of the seismic attribute
(e.g. amplitude or acoustic impedance) into a
porosity map, using the correlation defined at the
wells. Later, the resulting map is modified to be consistent with available well values (a sketch of this step is given after this list).
The geostatistical approach, using spatial distribution functions calculated on the basis of the correlation between well data and seismic data. The use
of collocated cokriging techniques (Xu Wenlong et
al., 1992) has become widespread in recent years.
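As an illustration of the second method listed above, the following sketch calibrates a linear attribute-porosity relation at the wells, converts the attribute map, and then applies a crude inverse-distance correction of the residuals so that the final map honours the well values. The grid, attribute values and well picks are hypothetical, and a real study would typically krige the residuals rather than use inverse-distance weighting.

```python
# Sketch of attribute-to-porosity conversion with residual correction.
# Grid, attribute values and well picks are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
attr_map = rng.normal(0.0, 1.0, size=(50, 50))      # seismic attribute grid
well_ij = [(10, 12), (25, 30), (40, 8)]             # well cell indices
phi_well = np.array([0.19, 0.23, 0.16])             # mean porosity at wells

# 1. Linear calibration attribute -> porosity at the well locations
a_well = np.array([attr_map[i, j] for i, j in well_ij])
slope, intercept = np.polyfit(a_well, phi_well, 1)
phi_map = intercept + slope * attr_map

# 2. Correct the map so it reproduces the well values: here a crude
#    inverse-distance interpolation of the residuals (kriging is better)
res = phi_well - np.array([phi_map[i, j] for i, j in well_ij])
ii, jj = np.indices(phi_map.shape)
corr = np.zeros_like(phi_map)
wsum = np.zeros_like(phi_map)
for (i, j), r in zip(well_ij, res):
    d = np.hypot(ii - i, jj - j) + 1e-6
    corr += r / d
    wsum += 1.0 / d
phi_map += corr / wsum       # now matches phi_well at the well cells
```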
Fig. 7 shows an example of a porosity map
generated by integrating information obtained from
wells with geophysical data.
This type of approach to the construction of
reservoir maps is becoming increasingly common in
current practice, due mainly to the availability of highly sophisticated software applications. These allow us to visualize seismic and traditional
geological data simultaneously, with obvious benefits
for the modelling process. However, considerable
care is required in these operations, since seismic
signals are influenced by a broad range of factors
(lithology, petrophysical characteristics, fluid
content, overlying formations), and it is thus
important to check the correlation between seismic
data and well data carefully. Spurious correlations are
more common than one might think, especially where
only a few wells are available for control (Kalkomey,
1997).
There are also various methodologies for the
production of water saturation maps. As for porosity
and net/gross, the most traditional technique is based
on the direct mapping of values measured at the wells
for each geological layer. This procedure works fairly
well where a large number of wells are available, and
has the added advantage of reflecting the values
actually measured in the wells themselves.
However, this methodology fails to take into account
the correlation with other petrophysical parameters
(porosity and permeability), and does not allow an
accurate reproduction of the capillary fringe (see
Chapter 4.1). Moreover, it is prone to consistency
problems in the petrophysical interpretation of the
various wells.
Another technique frequently used to generate
saturation maps consists in the direct application of a
porosity-water saturation correlation. In cases where
pore geometry is relatively simple, we can frequently
observe a linear correlation between these parameters
on a semilogarithmic scale. The main advantage of this
technique lies in its speed of execution and the
consistency of results. However, it does not allow us to
model the capillary fringe; its principal application is
thus for gas fields, and in general for those reservoirs
where the height of the capillary fringe can be
disregarded.
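A minimal sketch of this transform, assuming hypothetical log-derived pairs: log10(Sw) is fitted as a linear function of porosity, and the resulting relation is then applied wherever a porosity value is available.

```python
# Sketch of a porosity-water saturation transform: with simple pore
# geometry, log10(Sw) is roughly linear in porosity. Data are hypothetical.
import numpy as np

phi = np.array([0.08, 0.12, 0.15, 0.19, 0.24])   # porosity (fraction)
sw  = np.array([0.80, 0.55, 0.42, 0.28, 0.18])   # water saturation

b, a = np.polyfit(phi, np.log10(sw), 1)          # log10(Sw) = a + b*phi
predict_sw = lambda p: 10.0 ** (a + b * p)
print(predict_sw(0.20))   # e.g. Sw expected at 20% porosity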
Other techniques for generating saturation maps
rely on the application of capillary pressure curves,
which reproduce the distribution of the fluid phases
relative to the height above the water-hydrocarbon
contact. These functions may be derived from
capillary pressure data measured in the laboratory
(see above), or they may be calculated on the basis of
multiple linear regressions. In the latter case, both
petrophysical (porosity) curves and height above the
contact are used, and this allows us to simultaneously
take into consideration the dependence on the porous
system, and on the distance from the interface
between the fluids. These methods, whilst more time-consuming, generally represent the most satisfactory
compromise for the generation of saturation maps, partly because the methodology employed is similar
to that used for dynamic simulation in the
initialization phase of the model. In this light, this
method promotes greater consistency between the
hydrocarbons in place values calculated during the
geological modelling phase, and those calculated
during the dynamic simulation phase.
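The following sketch illustrates one common functional form of such a saturation-height function, regressing log10(Sw) on porosity and on the logarithm of the height above the contact. The calibration points are hypothetical, and the functional form is only one of several used in practice.

```python
# Sketch of a saturation-height function: regress log10(Sw) on porosity
# and log10(height above the free-water contact). Data are hypothetical.
import numpy as np

phi = np.array([0.10, 0.15, 0.20, 0.12, 0.22, 0.18])   # porosity
h   = np.array([5.0, 20.0, 60.0, 40.0, 10.0, 80.0])     # height (m)
sw  = np.array([0.85, 0.50, 0.22, 0.48, 0.45, 0.18])    # water saturation

X = np.column_stack([np.ones_like(phi), phi, np.log10(h)])
coef, *_ = np.linalg.lstsq(X, np.log10(sw), rcond=None)

def sw_model(porosity, height):
    """Sw as a joint function of the porous system and of the distance
    from the fluid interface, as described in the text."""
    return 10.0 ** (coef[0] + coef[1] * porosity
                    + coef[2] * np.log10(height))

print(sw_model(0.18, 30.0))
```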
The construction of an accurate permeability map
is one of the most important aspects of a reservoir
study, since the results of the dynamic simulation
model are largely dependent on it. Various
methodologies are available, and the choice of which
to use depends on the characteristics of the reservoir
under examination, on the available data, and on
available human and technological resources. The
traditional method, as in the case of porosity and
net/gross, involves direct mapping of available well
values. However, as compared to other petrophysical
parameters, this methodology has greater limitations,
linked to the following aspects.
Availability of data. Generally, the availability of
data for the mapping process is more limited than for
other petrophysical parameters, given that, with the
partial exception of nuclear magnetic resonance logs,
permeability data are available only from cored wells.
Type of data. As already discussed (see above),
there are generally various possible sources of
permeability data, each of which provides
characteristic values relating to scale, saturation
conditions and type of information (direct/indirect).
The homogenization of these data required prior to a
mapping process often turns out to be an arduous task,
and subject to compromises.
Spatial variability. The spatial continuity (lateral
and vertical) of permeability is usually much lower
than that of other reservoir parameters. In the case of
highly heterogeneous formations, this continuity may be as little as one metre, or even entirely non-existent. It is worth remembering that most algorithms used in software packages assume a predefined, and implicitly very high, spatial continuity which generates fairly regular maps. In the case of permeability this is often unrealistic.

Fig. 7. Example of a porosity map generated by integration with seismic data (courtesy of L. Cosentino).
Despite this, the mapping of permeability using
production tests carried out in wells may generate
accurate maps, especially when a sufficiently large
number of tests are available. These permeability
values are often extremely representative, and allow
us to produce consistent maps which are particularly
well-suited to dynamic simulation. In the case of fractured reservoirs, where core data typically prove inadequate to represent the actual reservoir permeability, this type of approach is often the only viable choice. Finally, it should be stressed that, as
for other reservoir parameters, these interpolations
may be further improved by using geostatistical
techniques and kriging algorithms.
An alternative methodology which is frequently
employed is based on the generation of a permeability
map from a map of porosity, using a correlation
between the two parameters, generally calculated on
the basis of available core data. In this case, the
resulting permeability map will intrinsically resemble
that of porosity, the implicit assumption being that the
spatial correlation function for these two parameters is
of the same type. However, this is normally inaccurate,
and the resulting maps often appear unnaturally regular.
Furthermore, it should be stressed that the relationship
between porosity and permeability on which this
method rests is often far from clear, especially in the
case of carbonate sediments. As such, the results may
be improved through a careful analysis of the basic
correlation, and the identification of lower-order
correlations, preferably for each individual facies.
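A minimal sketch of such a facies-by-facies analysis, with hypothetical core data: a separate log-linear porosity-permeability transform is fitted for each facies, instead of a single global correlation.

```python
# Sketch of facies-specific porosity-permeability transforms:
# log10(k) = a + b*phi fitted per facies. Core data are hypothetical.
import numpy as np

core = {
    "channel_sand": (np.array([0.18, 0.22, 0.25, 0.20]),   # phi
                     np.array([120., 450., 900., 260.])),  # k (mD)
    "shaly_sand":   (np.array([0.10, 0.13, 0.16]),
                     np.array([2., 8., 25.])),
}

transforms = {}
for facies, (phi, k) in core.items():
    b, a = np.polyfit(phi, np.log10(k), 1)
    # default args freeze a, b for each facies-specific transform
    transforms[facies] = lambda p, a=a, b=b: 10.0 ** (a + b * p)

print(transforms["channel_sand"](0.21))   # predicted k in mD
```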

Three-dimensional modelling of reservoir parameters

The 2D methodology described in the previous paragraph is gradually being replaced by more complex techniques, based on a three-dimensional approach to geological modelling. It is now possible to generate and visualize rapidly three-dimensional models of any reservoir parameter, with a resolution that frequently exceeds tens of millions of cells. This means that the reservoir geologist can quickly check different working hypotheses and analyse results directly on the screen of his own computer, with obvious benefits in terms of time, and the accuracy of the end results. Three-dimensional modelling may be applied to all reservoir parameters, basically using the same procedures already described for two-dimensional models.

Generally speaking, two types of approach can be identified: in the first, the distribution of petrophysical parameters is carried out directly in the three-dimensional space of the reservoir, starting from well profiles (single-stage model). This method does not require a three-dimensional lithological model of the facies (see above). In the second, the distribution is implemented on the basis of the lithological model. In this case, the petrophysical parameters are distributed following the three-dimensional modelling of the facies, in accordance with statistical laws specific to each facies (two-stage model).

The second method has the advantage of being based on a geological reference model which forms the basis for lithological modelling. This generally allows a better assignment of petrophysical properties, especially in the presence of complex lithologies characterized by different porous systems.

A particularly interesting aspect of 3D modelling is the possibility of integrating seismic data, traditionally used in a two-dimensional context, directly in three dimensions. Thanks to the availability of sophisticated processing algorithms which allow us to improve the vertical resolution of seismic data, and to the use of new techniques to characterize the seismic signal, we can identify seismic facies within the set of seismic data. These in turn can be correlated with the more traditional facies deriving from the lithological characterization of the reservoir. Fig. 8 shows an example of profiles derived from seismic data characterized in terms of seismic facies. Examples of this type represent a notable point of convergence between lithological, petrophysical and seismic modelling, the integration of which may produce extremely accurate three-dimensional models.

Fig. 8. Example of seismic profiles characterized in terms of seismic facies (courtesy of L. Cosentino).

4.5.6 Integrated geological model


Until a few years ago, the geological model
referred to a workflow rather than an object.
During the past ten years the extraordinary
development of information technologies and the
spatial modelling of oil fields has occasioned such
radical changes in the way reservoir geologists
work and even think, that the meaning of the
geological model has changed considerably. On the
one hand, it has become clear that the integration
of different disciplines, static and above all
dynamic, is fundamental for the correct static
characterization of the reservoir. On the other, the
information platforms on which we work today
allow the gradual construction of a model (first
structural, then stratigraphic, lithological, and
finally petrophysical), which comprises and
summarizes the results of the interpretations
carried out by the various experts participating in
the interdisciplinary study.
The integrated geological model has thus taken
on a revolutionary meaning compared to the past,
becoming a virtual object which represents the
actual reservoir present underground in a discrete
(but extremely detailed) way. It is characterized
quantitatively by petrophysical parameters
distributed within the three-dimensional space of the
reservoir, and may be modified and updated rapidly
if new data become available, for example from new
wells.
The theoretical and practical basis for this new
approach to reservoir geology is represented by
stochastic modelling. The use of stochastic (or
geostatistical) models is relatively recent, but is
becoming the most common practice among
reservoir geologists. From the 1990s onwards
numerous algorithms have been developed, the most
versatile being available in commercial applications
which have made them fairly simple to use. In brief
(Haldorsen and Damsleth, 1990), stochastic
modelling refers to the generation of synthetic
geological models (in terms of facies and
petrophysical parameters), conditioned to all
available information, both qualitative (soft), and
quantitative (hard).
These models generate equiprobable realizations,
which share the same statistical properties, and which
represent possible images of the geological complexity
of the reservoir. There is no a priori method for
choosing which realization to use in a reservoir study,
and this hinders the full acceptance of these
methodologies by the geological community. On the
other hand, the availability of a theoretically unlimited
series of realizations allows us to explore thoroughly (for a given algorithm and its associated parameters) the uncertainties linked to the available data. The
stochastic approach therefore represents a considerable
improvement on traditional geological modelling
techniques.
Currently, the most frequently used algorithms
for stochastic modelling belong to either the pixel-based or the object-based category. In pixel-based
models, also known as continuous models,
the variable simulated is considered a continuous
random function, whose distribution (often of
Gaussian type) is characterized by cut-off values
which identify different facies or different intervals
for petrophysical values. The most commonly used
algorithms in this category are truncated Gaussian
random functions (Matheron et al., 1987), and
functions of indicator kriging type (Journel et al.,
1990).
These models are applied especially in the
presence of facies associations which vary
continuously within the reservoir, as is frequently
the case in geological formations of deltaic type or
shallow water marine reservoirs. No a priori
assumptions are made on the form and extension of
the sedimentary bodies, which are simulated solely
on the basis of the spatial distribution functions
used (variograms and proportionality curves). This
approach is often adopted in cases characterized by
relatively high net/gross ratios, in other words in
prevalently sandy geological formations with
intercalations of clay or other non-productive
layers.
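The following toy sketch conveys the principle of truncation: a spatially correlated Gaussian field (here obtained simply by smoothing white noise) is cut at thresholds derived from target facies proportions. A real truncated Gaussian simulation would condition the field to the facies observed at the wells and use a proper covariance model; the proportions below are hypothetical.

```python
# Toy sketch of a pixel-based (truncated Gaussian) facies simulation:
# correlated Gaussian field cut into facies by proportion thresholds.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
field = gaussian_filter(rng.normal(size=(100, 100)), sigma=6.0)
field /= field.std()                      # back to unit variance

# Cut-offs for 30% shale, 45% shaly sand, 25% clean sand (hypothetical)
t1, t2 = np.quantile(field, [0.30, 0.75])
facies = np.digitize(field, [t1, t2])     # 0=shale, 1=shaly sand, 2=sand
print(np.bincount(facies.ravel()) / facies.size)   # realized proportions
```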
By contrast, object-based models, also known
as Boolean models, generate three-dimensional
distributions of sedimentary bodies, obtained by
juxtaposing objects of simplified geometry, such as
disks or tabular bodies, within a clayey matrix. The
parameters of these bodies (orientation, sinuosity,
length, width, etc.) can be estimated on the basis of
the sedimentological model adopted, geophysical
data, outcrops of comparable rocks, or on the basis
of production test interpretations. This type of
model is used more frequently for fluvial-type
reservoirs, characterized by channels or meanders
located within prevalently clayey geological units,
where the overall net/gross ratio is relatively low.
In these contexts, we can obtain extremely
interesting results, with highly realistic images of
the geology simulated. By contrast, in cases where
the net/gross ratio is higher (typically above 40%), and
when the number of conditioning wells is high,
these algorithms may require an extremely long
time to process.
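The following toy sketch conveys the Boolean principle: simplified bodies with random position, orientation and size are placed in a clay matrix until a target net/gross ratio is reached. All dimensions are hypothetical, and real implementations use sinuous channel geometries and condition the objects to well observations.

```python
# Toy sketch of an object-based (Boolean) model: rectangular channel
# segments dropped into a clay matrix until a target net/gross is met.
import numpy as np

rng = np.random.default_rng(1)
grid = np.zeros((200, 200), dtype=int)          # 0 = clay matrix
ii, jj = np.indices(grid.shape)
target_ntg = 0.25                               # hypothetical net/gross

while grid.mean() < target_ntg:
    ci, cj = rng.uniform(0, 200, size=2)        # body centre
    theta = rng.uniform(0, np.pi)               # orientation
    length, width = rng.uniform(80, 160), rng.uniform(3, 8)
    # Coordinates rotated into the body's own frame
    u = (ii - ci) * np.cos(theta) + (jj - cj) * np.sin(theta)
    v = -(ii - ci) * np.sin(theta) + (jj - cj) * np.cos(theta)
    grid[(np.abs(u) < length / 2) & (np.abs(v) < width / 2)] = 1  # sand

print(f"net/gross reached: {grid.mean():.2f}")
```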
Fig. 9 shows an example of a geological model
generated with a pixel-based algorithm. Note the discontinuous nature of the facies generated, whose
extension and orientation depend on the parameters of
the variogram used. Fig. 10, on the other hand, shows
an image generated with an object-based algorithm, in
which the sedimentary bodies are more clearly defined
and separated from one another.
It is worth underlining that there is no a priori
criterion for choosing one of these two approaches,
nor any specific algorithm within these families. With
the exception of the general indications based on the
sedimentary environment, which are outlined above,
the algorithm is chosen by the reservoir geologist
carrying out the study on a largely subjective basis.
Since different algorithms generate geological images
which often vary considerably, especially when
conditioning data are limited, the final result should
clearly be understood in a statistical sense, even in
those cases (the most frequent) where it is used in a
deterministic way. The correct use of this type of
results should therefore be seen within the much more
complex context of the evaluation of uncertainties
linked to geological modelling, a topic which will be
discussed later.
The enormous potential of stochastic modelling lies basically in the possibility of quantitatively
integrating a whole range of different types of
information and data generated during the study by
the various specialists. This technique is
particularly useful for a general geological
understanding. General geological knowledge of
the reservoir, based for example on known
depositional models or on the existence of
comparable outcrops, may be input into the
stochastic model along with the often insufficient
information from wells, thus allowing us to
generate more realistic geological models. Recent
theoretical developments (e.g. multi-point geostatistics) allow us to use data derived from

general geological models in a quantitative way within the stochastic model.

Fig. 9. Example of a geological model created with a pixel-based algorithm (courtesy of L. Cosentino).

Fig. 10. Example of a geological model created with an object-based algorithm (courtesy of L. Cosentino).
Even the petrophysical interpretation of the reservoir rock, defined in the quantitative well-log interpretation phase, can be extended to the entire reservoir through stochastic modelling. As already mentioned, this can be done either directly, by simulating petrophysical properties, or indirectly, by
simulating facies, and then associating mean
petrophysical values or frequency distributions with
these.
The stochastic approach can also be used to
simulate structural characteristics on a small or
medium scale (faults and fractures), which cannot be
distinguished in a deterministic way on the basis of
available data. Later, these faults and fractures may
also be characterized with hydraulic parameters.
Finally, the integration of dynamic data
(production tests and production data) represents
one of the current frontiers of stochastic modelling.
It is significant because a geological model
conditioned to available dynamic data, once input
into the dynamic simulation model, should allow a much more rapid matching of the production history of the reservoir (history matching). Currently
there are no standard methodologies for performing
this integration, but in the relevant literature
various approaches of notable interest have already
been presented.
The integrated geological model generated using a
stochastic approach represents the end result of the
static modelling process. Once this is available, it is
used to estimate the amount of hydrocarbons in place,
and where required, it can be used to assess
uncertainty.


4.5.7 Calculation of hydrocarbons in place

The determination of Original Hydrocarbons In Place (OHIP, or OOIP for oil and GOIP for gas) is
generally considered the final stage of the static
reservoir study. It is during this stage that the
description of the reservoir, in terms of its external and internal geometry and of the properties of the reservoir rock, is quantified through a single number expressing the amount of hydrocarbons present in
the reservoir at the time of discovery.
In fact, the most important number for the
economic evaluation of a field is that relating to
the reserves; in other words that portion of
hydrocarbons which can actually be recovered with
a given development plan. The relation between
hydrocarbons in place and Recoverable Reserves
(RR) is expressed by the well-known equation:

[1]   RR = OHIP · Rf

where Rf is the recovery factor. The value of this


factor, and consequently of the reserves, depends
both on the geological characteristics of the
reservoir, and on a series of other elements such as
the type of hydrocarbon, the characteristics of drive
mechanisms, the development plan adopted, the
surface equipment, gas and oil prices, etc. (see
Chapter 4.6). The value of hydrocarbons in place,
on the other hand, is independent of these factors,
and therefore extremely important, especially
because it gives a clear and immediate picture of
the importance and potential of the existing
accumulation.
Basically, there are two techniques for
estimating hydrocarbons in place: the traditional
method, based on geological volumetric calculation
techniques; and methods based on material balance
(see Chapter 4.3). In this context, it is worth
remembering that dynamic simulation does not
provide an independent estimate of hydrocarbons
in place, since the values calculated by the
simulator simply derive from the geological model
used as input.
Below, only geological evaluation methods are
described in detail. It should be stressed, however,
that material balance techniques may often provide
extremely accurate estimates of hydrocarbons in
place, and that it is the reservoir geologist's task to
check the agreement between the various methods,
and to justify any disagreements.
Volumetric evaluations

These refer to estimates of the quantity of original hydrocarbons in place calculated using the results of the integrated geological model. These estimates are based on the following formula:

[2]   OHIP = GBV · (N/G) · φ · (1 − Sw)

where GBV is the Gross Bulk Volume of rock in the reservoir; N/G the net to gross ratio (net pay over gross thickness); φ the porosity (fraction); Sw the water saturation (fraction); and (1 − Sw), equal to Sh, the hydrocarbon saturation (fraction).
If we know the mean values of these parameters for
the reservoir in question, we can quickly calculate the
amount of hydrocarbons in place. In fact, in common
practice, this calculation is not performed using mean
values (except as a preliminary evaluation), but rather
using surfaces (in two dimensions) or volumes (in
three dimensions) representing the spatial distributions
of the parameters in the equation. All the computer
applications commonly used in two- or three-dimensional static modelling supply the relevant
calculation algorithms, allowing us to obtain the
volume of hydrocarbons in place simply and rapidly.
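As a sketch of this cell-by-cell calculation, the following applies Eq. [2] over a 3D grid and then the stock-tank conversion of Eq. [3] discussed below. Grid dimensions, property values and the volume factor are all hypothetical.

```python
# Sketch of Eq. [2] applied cell by cell on a hypothetical 3D grid,
# followed by the stock-tank conversion of Eq. [3].
import numpy as np

rng = np.random.default_rng(7)
shape = (20, 20, 10)                        # nx, ny, nz cells
cell_vol = 100.0 * 100.0 * 2.0              # m3 per cell (dx*dy*dz)
ntg = rng.uniform(0.4, 0.9, shape)          # net/gross per cell
phi = rng.uniform(0.10, 0.25, shape)        # porosity
sw  = rng.uniform(0.15, 0.60, shape)        # water saturation
bo  = 1.35                                  # oil FVF (reservoir/stock-tank)

ohip_res = np.sum(cell_vol * ntg * phi * (1.0 - sw))   # reservoir m3
ohip_st  = ohip_res / bo                               # stock-tank m3
print(f"OHIP: {ohip_res:.3e} m3 (reservoir), {ohip_st:.3e} m3 (ST)")
```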
In traditional two-dimensional modelling, based on
the combination of surfaces (grids), we obtain a map
known as the equivalent hydrocarbon column (Gross pay · N/G · φ · Sh), which provides a clear and immediate picture of the hydrocarbon distributions within the reservoir. The value of OHIP is then obtained simply by integrating this map. In the case of three-dimensional models, the OHIP value is calculated
directly on the basis of the integrated geological
model, using suitable calculation algorithms which
compute the sum of the volumes of hydrocarbons present
in each of the basic cells of the model. It is important
to note that Eq. [2] supplies a value for OHIP under
reservoir conditions. To convert this into surface
conditions, we need to take into consideration the
variation in volume that oil and/or gas undergo when
they reach the surface. This variation in volume, which
is mainly a function of pressure, is measured
experimentally in the laboratory, and is known as
Formation Volume Factor (FVF). In the case of oil, the
equation linking downhole volume and surface volume
is as follows:
[3]   OHIPST = OHIPR / Bo

where OHIPST is the volume under stock tank conditions, OHIPR is the volume under reservoir
conditions, and Bo is the FVF of the oil, expressed in
reservoir barrels over stock tank barrels. In the case of
gas, the FVF is indicated with an equivalent volume
factor Bg.
It should be emphasized that the application of this
formula often leads to misunderstandings, because reports on the PVT analysis of reservoir oils (see
Chapter 4.2) usually give different values for the
volume factor, according to the experiments carried
out in the laboratory. We can thus define a differential
Bo, a flash Bo, and other Bo types deriving from
separation tests at different pressures and
temperatures. These Bo values usually differ from one
another, especially in the case of volatile oils.
Furthermore, by combining differential Bo values with
those from separation tests we can calculate a
composite Bo, which takes into account both the
behaviour of the oil under reservoir conditions
(differential test), and the actual separation conditions
at the surface. This composite value generally
represents the best approximation of the fluid's
volumetric behaviour, and is that which should be used
in Eq. [3].
The direct use of the value under reservoir conditions given by Eq. [2] eliminates possible
ambiguities relating to the choice and use of the
volume factor, especially when data calculated
volumetrically must be compared with data calculated
using the simulation model, where the volume factors
are determined using more complex calculations.
Deterministic and probabilistic evaluations

Generally speaking, the volume of hydrocarbons in place may be calculated deterministically and/or probabilistically.
Deterministic values of OHIP are obtained simply
by combining the mean values (in one dimension),
surfaces (two dimensions) or grids (three dimensions)
of the reservoir parameters indicated in Eq. [2]. These
estimates are deterministic in that all the parameters
are calculated in a univocal way, without taking into
account the possible uncertainties associated with each
of them. In other words, the estimates calculated for
the representation of these parameters are implicitly
considered to be correct.
This is the type of estimate traditionally supplied
by the reservoir geologist, and most frequently
performed. However, the process of constructing a
geological model on the basis of insufficient, scattered
information (wells) involves uncertainties due to errors
of measurement, the lack of representative data,
interpretative problems, etc. As a result, the value for
OHIP obtained using this type of procedure is just one
of many possible values, and depends on the specific
interpretative process adopted. If we were to use, for
example, a different interpolation algorithm, we would
usually obtain a different value for OHIP which is, a
priori, equally valid.
In contrast to deterministic evaluations,
probabilistic evaluations generally provide a much
more realistic estimate of the amount of hydrocarbons in place, since they also evaluate the accuracy of the
estimate itself. The probabilistic approach involves
taking into account the probability distributions of
every single parameter involved in the calculation.
Each of these probability distributions quantitatively
reflects the degree of knowledge, and thus of
uncertainty, of the parameter in question. In the
simplest case (one dimension), these distributions are
sampled repeatedly and at random (Monte Carlo
method), ultimately generating a distribution of OHIP
values. This distribution is characterized by statistical
parameters (mean, median, standard deviation, etc.)
which give a concise representation of the results
obtained. In two or three dimensions, the Monte Carlo
method can nevertheless be applied, replacing simple
one-dimensional distributions with surface and grid
distributions. In any case, the final result is still
represented by a frequency distribution, and therefore
a probability distribution, for OHIP values.
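A minimal sketch of the one-dimensional case, with hypothetical input distributions: each parameter of Eq. [2] is sampled repeatedly and the resulting OHIP distribution is summarized by its percentiles (using the industry convention in which P90 is the value exceeded with 90% probability).

```python
# Sketch of a one-dimensional Monte Carlo estimate of OHIP via Eq. [2].
# All input distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(123)
n = 100_000
gbv = rng.triangular(4e8, 5e8, 7e8, n)     # gross bulk volume, m3
ntg = rng.triangular(0.5, 0.65, 0.8, n)    # net/gross
phi = rng.normal(0.18, 0.02, n)            # porosity
sw  = rng.normal(0.35, 0.05, n)            # water saturation

ohip = gbv * ntg * phi * (1.0 - sw)
# P90 = 10th percentile (exceeded with 90% probability), and so on
p90, p50, p10 = np.percentile(ohip, [10, 50, 90])
print(f"P90 {p90:.3e}  P50 {p50:.3e}  P10 {p10:.3e} m3")
```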
In general, however, when making a probabilistic
evaluation of hydrocarbons in place, the preferred
methodology is that of stochastic modelling.
Uncertainties relating to geological modelling

The reservoir geologist has the difficult task of reconstructing with maximum accuracy the geometry
and petrophysics of a reservoir about which he usually
has little, and mostly indirect, information. It is
therefore obvious that the final model will always
present some degree of uncertainty. The quantitative
evaluation of uncertainties relating to geological
modelling is one of the most complex and interesting
aspects of a reservoir study.
In a typical static reservoir modelling study, we can
identify four main sources of uncertainty.
Uncertainties linked to the quality of data and
their interpretation. All of the basic data in a study,
from geophysical data to logs and core data, are
associated with errors of measurement which
influence the accuracy of the final result. Even though
it is in theory possible to quantify these errors, this
task is rarely carried out, and the basic data are
generally assumed to be correct. This is even more
true of the interpretative stages.
Uncertainties linked to the structural and
stratigraphic models. The structural interpretation
carried out by the geophysicist is in most cases of a
deterministic nature, and does not include
quantifying associated uncertainties, although it is
clear that this phase of the work is to some degree
subjective. The same can be said of the correlation
phase (stratigraphic model), especially when dealing
with depositional environments characterized by
poor lateral continuity (e.g. continental type
deposits).


Uncertainties linked to the stochastic model and its parameters. Static modelling may be carried out using
different stochastic algorithms, each of which will
usually provide different results. Furthermore, as
already mentioned, there are no fixed rules for
preferring one algorithm a priori to another. A further
source of uncertainty is linked to the parameters
selected for the algorithm itself, for example the length
of correlation chosen for the variogram, or the
geometries of depositional units in a Boolean
algorithm. Uncertainties on these points are rarely
explored, although they have a significant impact on
the end results.
Uncertainties linked to the different realizations of
the stochastic algorithm. These uncertainties can be
quantified simply by comparing different realizations
of the stochastic model used; this is the most
frequently carried out evaluation, probably due to its
simplicity. However, uncertainties linked to this aspect of the study are negligible, or nearly so, compared to those mentioned above.
These brief considerations highlight the importance of the portion of the uncertainty space that is actually explored. The total space of uncertainties is
obviously unknown, but when we attempt a
quantitative evaluation of the uncertainties of a
geological model it is important to ensure that this
space is adequately sampled. If we consider only the aspects covered in the fourth point above, for example, we risk
quantifying in detail just a limited part of the
overall uncertainty, thus creating a false sense of
accuracy.
The problem becomes even more complex
when we go on to evaluate uncertainties linked to
the dynamic simulation phase. These are even more
significant as they have a direct impact on the
economic evaluation of a reservoir, and implicitly
include the uncertainties linked to the geological
model. When carrying out a complete analysis of
these uncertainties, for example using a massive approach (i.e. by running hundreds of dynamic simulations with different input parameters), we
may find that some aspects become so significant
as to almost cancel out the impact of factors which,
in the static model on its own, appeared important.
Fig. 11 shows an example of a risk analysis
carried out on a deep sea reservoir. In this case, the
analysis took into consideration a broad series of
parameters, both static and dynamic. The results
show that the greatest uncertainties in the final
results concern the dimensions of the aquifer (over
75%), followed by the position of the water-oil
contact. Uncertainties regarding static modelling,
on the other hand, are negligible, despite the use of
geological models which differ considerably from one another. In this case, a detailed study of the uncertainties linked to the static model alone is obviously pointless, at least with hindsight.

Fig. 11. Results of risk analysis on a deep sea reservoir.

References
Amaefule J.O. et al. (1993) Enhanced reservoir description:
using core and log data to identify hydraulic (flow) units
and predict permeability in uncored intervals/wells, in: Oil
and gas strategies in the 21st century. Proceedings of the
68th conference of the Society of Petroleum Engineers,
Houston (TX), 3-6 October, SPE 26436.
Archie G.E. (1942) The electrical resistivity log as an aid in
determining some reservoir characteristics, American
Institute of Mining, Metallurgical, and Petroleum Engineers.
Transactions, 146, 54-62.
Cosentino L. (2001) Integrated reservoir studies, Paris,
Technip.
Haldorsen H.H., Damsleth E. (1990) Stochastic modelling,
Journal of Petroleum Technology, April, 404-412.
Journel A.G. et al. (1990) New method for reservoir mapping,
Journal of Petroleum Technology, 42, 212-219.
Kalkomey C.T. (1997) Potential risks when using seismic
attributes as predictors of reservoir properties, The
Leading Edge, March, 247-251.
McCoy D.D. et al. (1997) Water salinity variations in the Ivishak
and Sag River reservoirs at Prudhoe Bay, Society of
Petroleum Engineers. Reservoir Engineering, 12, 37-44.


Matheron G. et al. (1987) Conditional simulation of the
geometry of fluvio-deltaic reservoirs, in: Proceedings of
the Society of Petroleum Engineers annual technical
conference and exhibition, Dallas (TX), 20-30 September,
SPE 16753.
Mezghani M. et al. (2000) Conditioning geostatistical models
to flowmeter logs, in: Proceedings of the Society of Petroleum
Engineers European petroleum conference, Paris, 24-25
October, SPE 65122.
Mohaghegh S., Ameri S. (1996) Virtual measurement of
heterogeneous formation permeability using geophysical
well log responses, The Log Analyst, 37, 32-39.
Nelson P.H. (1994) Permeability-porosity relationships in
sedimentary rocks, The Log Analyst, 35, 38-62.
Vail P.R. et al. (1997) Seismic stratigraphy and global changes
of sea level, in: Payton C.E. (edited by), Seismic stratigraphy.
Applications to hydrocarbon exploration, American
Association of Petroleum Geologists. Memoir, 26, 63-98.

Wendt W.A. et al. (1986) Permeability prediction from well
logs using multiple regression, in: Reservoir characterization.
Proceedings of the Reservoir characterization technical
conference, Dallas (TX), 29 April-1 May 1985, 181-221.
Worthington P., Cosentino L. (2003) The role of cut-off in
integrated reservoir studies, in: Proceedings of the Society
of Petroleum Engineers annual technical conference and
exhibition, Denver (CO), 5-8 October, SPE 84387.
Xu Wenlong et al. (1992) Integrating seismic data in reservoir
modeling. The collocated cokriging alternative, in: Proceedings
of the Society of Petroleum Engineers annual technical
conference and exhibition, Washington (D.C.), 4-7 October,
SPE 24742.

Luca Cosentino
Eni - Agip
San Donato Milanese, Milano, Italy
