
Reservoir Management 1

Introduction
Petroleum reservoir management is a dynamic process that recognizes the
uncertainties in reservoir performance resulting from our inability to fully
characterize reservoirs and flow processes. It seeks to mitigate the effects
of these uncertainties by optimizing reservoir performance through a
systematic application of integrated, multidisciplinary technologies. It
approaches reservoir operation and control as a system, rather than as a set
of disconnected functions. Reservoir management has advanced through
various stages in the past 30 years. The techniques are better, the
background knowledge of reservoir conditions has improved, and the
automation using mainframe computers and personal computers has helped
data processing and management.
The reservoir management process consists of three stages:
1. Reservoir description;
2. Reservoir model development;
3. Reservoir performance, well performance, and field development.
The primary objective in a reservoir management study is to determine the
optimum conditions needed to maximize the economic recovery of
hydrocarbons from a prudently operated field.

Fig. 1.1 Reservoir Management Team — a multidisciplinary team comprising the reservoir engineer, geologist, petrophysicist, geophysicist, drilling engineer, production engineer, economist, operations and facilities staff, and research and development.


Reservoir Modeling
Geoscientists frequently use 2D maps and cross sections to analyze
reservoir geology. These methods can work well for relatively
homogeneous reservoirs, but they cannot accurately represent
heterogeneities for reservoirs with high variabilities. Three-dimensional
(3D) reservoir modeling can better deal with heterogeneities of subsurface
formations. As more and more heterogenous reservoirs have been
developed in the last two to three decades, 3D modeling has become
increasingly important. Reservoir modeling is a process of constructing a
computer-based 3D geocellular representation of reservoir architecture and
its properties through integrating a variety of geoscience and engineering
data. Maximizing the economics of a field requires accurate reservoir
characterization. A 3D reservoir model can describe reservoir properties in
more detail through integration of descriptive and quantitative analyses.
Reservoir modeling was the missing link between geosciences and
reservoir engineering for field development planning before the mid-
1980s, and since then its use has increased significantly. A model should represent reality well enough to be relevant to the problems being solved. In modeling, reservoir architectures and compartmentalization are
defined using structural and stratigraphic analyses; reservoir properties,
including lithofacies, net-to-gross ratio, porosity, permeability, fluid
saturations, and fracture properties, are analyzed and modeled through
integration of geological, petrophysical, seismic, and engineering data.

Fig. 1.2 Example of an integrated reservoir modeling of a slope channel system. (a) An integrated
display of the reservoir model, along with the 3D seismic data and geological and reservoir
properties. (b) Three cross sections of sand probability. (c) One layer of the 3D lithofacies model.
Using Geostatistics in Modeling
Just decades ago, statistics was a narrow discipline, used mostly by
specialists. The explosion of data and advances in computation in the last
several decades have significantly increased the utilization of statistics in
science and engineering. Despite this progress in the quantitative analysis of the geosciences, statistics is still underused in quantitative multidisciplinary integration. Many geoscience problems are, at their core, problems of statistical inference. Traditional probability and statistics are known for
their frequency interpretation and analysis. Most statistical parameters and
tools, such as mean, variance, histogram, and correlation, are defined using
frequentist probability theory. Geostatistics, a branch of spatial statistics,
concerns the characterization and modeling of spatial phenomena based on
the descriptions of spatial properties. The combination of using both
traditional statistics and geostatistics is critical for analyzing and modeling
reservoir properties. In big data, we are often overwhelmed by information.
In fact, what is important is not information itself, but the knowledge of
how to deal with information. Data analytics and integration of the
information are the keys. Data cannot speak for itself unless data analytics
is employed. In big data, everything tells us something, but nothing tells us
everything. We should not completely focus on computing capacities;
instead, we should pay more attention to data analytics, including queries
of data quality, correlation analysis of various data, causal inference, and
understanding the linkage between data and physical laws.

Fig. 1.3 Relationships among input data, analytics and model

Data Analytics 2

Basic Statistical Parameters


One of the most important tools in statistics is the histogram, which can be
used to describe the frequency distribution of data. A histogram is
constructed from data as compared to a probability distribution that is a
theoretical model. One of the main advantages of the histogram is its graphic interpretation, as it reveals the frequency properties of the data. The basic statistical parameters (Table 2.1), such as the mean, variance, and skewness, are conveyed in the histogram. The mean of the data describes a central tendency, even though it may or may not be among the data values. The median and mode, which also describe the “central” tendency of the data, are likewise conveyed in the histogram. The variance describes the overall variation in
the data around the mean. The skewness describes the asymmetry of the
data distribution. All these parameters are useful for exploratory data
analysis. Figure 2.1 shows a porosity histogram, with the mean porosity
equal to 0.117, the median equal to 0.112, the mode equal to 0.132, and the
variance equal to 0.00034. These common statistical parameters, along
with other useful parameters, are described in Table 2.1.
Table 2.1 Commonly used statistical parameters

Fig. 2.1 Histogram of porosity from well logs in a fluvial sandstone reservoir
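As a minimal sketch (in Python, assuming the log-derived porosity values are available as a NumPy array; the data generated here is synthetic and purely illustrative), the parameters quoted above can be computed and the histogram drawn as follows:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the well-log porosity samples (illustrative only)
porosity = np.random.default_rng(0).normal(0.117, 0.018, 500).clip(0.0, 0.30)

mean = porosity.mean()                    # central tendency
median = np.median(porosity)              # 50th percentile
variance = porosity.var(ddof=1)           # sample variance around the mean
skewness = np.mean((porosity - mean) ** 3) / porosity.std() ** 3  # asymmetry

# Mode of continuous data: midpoint of the most populated histogram bin
counts, edges = np.histogram(porosity, bins=30)
mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])

print(f"mean={mean:.3f} median={median:.3f} mode={mode:.3f} "
      f"variance={variance:.5f} skewness={skewness:.2f}")

plt.hist(porosity, bins=30, edgecolor="black")
plt.xlabel("Porosity (fraction)")
plt.ylabel("Frequency")
plt.show()
```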

Mean
The mean has several variants, including arithmetic mean (the basic,
default connotation), geometric mean, and harmonic mean.
The arithmetic mean is defined as
$$m_a = \frac{1}{n}\sum_{i=1}^{n} x_i$$
The geometric mean is defined as
$$m_g = \left(\prod_{i=1}^{n} x_i\right)^{1/n} = \sqrt[n]{x_1 x_2 \cdots x_n}$$
The harmonic mean is defined as
$$m_h = \frac{n}{\dfrac{1}{x_1} + \dfrac{1}{x_2} + \cdots + \dfrac{1}{x_n}}$$

The arithmetic mean is calculated as the sum of the sample values divided by the number of samples and is often used to describe the central tendency of the data. It is an unbiased statistical parameter for characterizing the average of the data. The geometric mean is more appropriate for describing proportional growth, such as exponential growth and varying growth rates. For example, the geometric mean can be used for a compounding growth rate. More generally, the geometric mean is useful for sets of positive numbers that are interpreted according to their product. The harmonic mean provides an average for cases where a rate or ratio is concerned, and it is thus more useful for sets of numbers defined in relation to some unit.
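A minimal Python sketch of the three means (the sample values are illustrative; for positive data the three averages always satisfy harmonic ≤ geometric ≤ arithmetic):

```python
import numpy as np

def arithmetic_mean(x):
    x = np.asarray(x, dtype=float)
    return x.sum() / len(x)

def geometric_mean(x):
    # computed via logarithms to avoid overflow of the product for long samples
    x = np.asarray(x, dtype=float)
    return np.exp(np.mean(np.log(x)))

def harmonic_mean(x):
    x = np.asarray(x, dtype=float)
    return len(x) / np.sum(1.0 / x)

values = [120.0, 213.0, 79.0, 180.0]      # illustrative positive values
print(arithmetic_mean(values), geometric_mean(values), harmonic_mean(values))
```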

Weighted Mean/Average
The weighted mean is commonly used in geoscience and reservoir data analyses. In weighted averaging, the mean is expressed as a linear combination of the data with weighting coefficients. Only the relative weights are relevant in such a linear combination; when the weights are normalized so that they sum to one, the result is termed a convex combination. The weighted mean of a dataset of n values, x_i, is

$$m_x = \frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w_1 + w_2 + \cdots + w_n}$$

*An element with a higher weighting coefficient contributes more to the weighted mean than an element with a lower weight. The weights cannot be negative; some weights can be zero, but not all of them, because division by zero is not allowed.
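A minimal sketch of the weighted mean, checked against NumPy's built-in np.average (the porosity and length values are those used in the Fig. 2.2a example of the next subsection):

```python
import numpy as np

def weighted_mean(x, w):
    """m = sum(w_i * x_i) / sum(w_i); weights must be non-negative
    and must not all be zero."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    if np.any(w < 0) or w.sum() == 0:
        raise ValueError("weights must be non-negative and not all zero")
    return np.sum(w * x) / w.sum()

# Length-weighted porosity (values from the Fig. 2.2a example)
print(weighted_mean([0.20, 0.22, 0.03], [0.5, 0.2, 1.5]))        # ~0.086
print(np.average([0.20, 0.22, 0.03], weights=[0.5, 0.2, 1.5]))   # same result
```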

Mean, Change of Scale and Sample’s Geometries


One of the most common uses of the mean in geosciences and reservoir
analysis is to average data from a small scale to a larger scale. Core plugs are a few centimeters long, well logs are often sampled or averaged at 15 cm (half-foot) intervals, and 3D reservoir properties are generally modeled with 20–100 m lateral grid cells that are 0.3–10 m thick. As a result,
averaging data from a small scale to a larger scale is very common. The
question is what averaging method to use. Change of spatial scale of data
can be treacherous in reservoir characterization and modeling. The
weighted mean has many uses, including the change of scales from well
logs to geocellular grids and from geocellular grids to dynamic simulation
grids, estimating population statistics from samples, and mapping
applications.
Figure 2.2a shows a scheme in which the unweighted mean of three porosities is 15%, while the length-weighted mean is 8.6%, calculated as
$$m = \frac{0.5}{0.5+0.2+1.5}\times 0.2 + \frac{0.2}{0.5+0.2+1.5}\times 0.22 + \frac{1.5}{0.5+0.2+1.5}\times 0.03 \approx 0.086$$

Figure 2.2b shows a configuration for upscaling. Assuming the lateral size (in both the X and Y directions) of the grid cells is constant, the volume of Cell 1 is 3/4 of the volume of Cell 3, Cell 2's volume is 1/4 of the volume of Cell 3, and Cell 4 has the same volume as Cell 3. Then Cell 1 has a weight of 0.25 [i.e., $0.75/(0.75+0.25+1+1) = 0.25$], Cell 2 has a weight of 0.083 [i.e., $0.25/(0.75+0.25+1+1) \approx 0.083$], and Cells 3 and 4 each have a weight of 1/3, as sketched in the short example below.
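A minimal sketch of the upscaling weights for Fig. 2.2b (the relative cell volumes come from the text above; the cell porosities are hypothetical and only illustrate the volume-weighted average):

```python
import numpy as np

# Relative cell volumes, in units of Cell 3's volume (from the configuration above)
volumes = np.array([0.75, 0.25, 1.0, 1.0])
weights = volumes / volumes.sum()
print(weights)                         # [0.25  0.0833  0.3333  0.3333]

# Volume-weighted upscaled porosity for hypothetical cell porosities
cell_porosity = np.array([0.12, 0.08, 0.15, 0.10])
print(np.average(cell_porosity, weights=volumes))
```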

Fig. 2.2 (a) Uneven sampling scheme with 3 samples (can be a horizontal or vertical well, or any
1D sampling scheme). (b) Illustration of upscaling 4 grid cells into 1 large cell

Example:

In mapping the area shown below, the fractional volumes of dolomite (Vdolomite) at the two locations are given. The location with Vdolomite = 0.1 is exactly at the middle; the location with Vdolomite = 0.9 is at 1/6 of the length from the east side. Assuming no geological interpretation was done, estimate the target Vdolomite for the map.

Solution:

[Sketch: a map of total length L, with the Vdolomite = 0.1 sample at L/2 from the west edge and the Vdolomite = 0.9 sample at L/6 from the east edge]
The sample with Vdolomite = 0.1 should have twice the weight of the sample with Vdolomite = 0.9 in estimating the target fraction for the map, because it must represent not only the central area but also the western area, where no sample is available. Therefore, the following weighted average should be used to estimate the target Vdolomite:

Let L = 1:
$$0.1 \times \left(\tfrac{1}{2} + \tfrac{1}{6}\right) + 0.9 \times \left(\tfrac{1}{6} + \tfrac{1}{6}\right) = 0.1 \times \tfrac{2}{3} + 0.9 \times \tfrac{1}{3} = 0.367$$

In contrast, the estimate by a simple average is 0.5.

Averaging Permeability Data
Consider the permeability distributions shown in Figure 2.3. Each
geometry requires a different expression to determine the average
permeability, as outlined below. For horizontal flow through layered
permeability, use the arithmetic average (also called the mean):
$$k_{avg} = k_{arith} = \frac{\sum k_i h_i}{\sum h_i}$$
The unweighted arithmetic average permeability is determined from:
$$\bar{k}_A = \frac{\sum k_i}{n}$$
For vertical flow through layers, use the harmonic average:
$$k_{avg} = k_{harmonic} = \frac{\sum h_i}{\sum h_i / k_i}$$
The unweighted harmonic average permeability is determined from:
$$\bar{k}_H = \frac{n}{\sum_{i=1}^{n}\left(\dfrac{1}{k_i}\right)}$$
For flow in any direction through a disorganized permeability distribution, use the geometric average:
$$k_{avg} = k_{geom} = \bar{k}_G = \sqrt[n]{k_1 k_2 \cdots k_n}$$
The above formula for $k_{geom}$ is a simplified version of the full form, which includes the volume $V_i$ of region $i$ and the total system volume $V_t$:
$$k_{avg} = k_{geom} = k_1^{V_1/V_t}\, k_2^{V_2/V_t} \cdots k_n^{V_n/V_t}$$
For the same set of data, $k_{harm} \le k_{geom} \le k_{arith}$, and they are equal only in a homogeneous reservoir. These formulas apply at any scale of heterogeneity; that is, the averages apply equally whether the heterogeneities are in the form of cm-thick laminations, meter-scale beds, or km-size regions.
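The three averaging rules can be sketched as follows (the layer permeabilities and thicknesses are illustrative; the optional volume fractions in the geometric average reproduce the full form given above):

```python
import numpy as np

def k_arithmetic(k, h):
    """Thickness-weighted arithmetic average: layered beds, horizontal flow."""
    k, h = np.asarray(k, float), np.asarray(h, float)
    return np.sum(k * h) / np.sum(h)

def k_harmonic(k, h):
    """Thickness-weighted harmonic average: layered beds, vertical (series) flow."""
    k, h = np.asarray(k, float), np.asarray(h, float)
    return np.sum(h) / np.sum(h / k)

def k_geometric(k, v=None):
    """Geometric average for a disorganized permeability arrangement;
    v gives optional region volumes V_i (defaults to equal volumes)."""
    k = np.asarray(k, float)
    if v is None:
        return np.exp(np.mean(np.log(k)))
    v = np.asarray(v, float)
    return np.exp(np.sum((v / v.sum()) * np.log(k)))

k = [120.0, 213.0, 79.0]   # mD, illustrative
h = [2.0, 1.0, 3.0]        # m, illustrative
print(k_arithmetic(k, h), k_harmonic(k, h), k_geometric(k))
```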

Fig. 2.3 Averaging permeability for different permeability distributions: (a) layered permeability and horizontal flow; (b) layered permeability and vertical flow; (c) random permeability. Case (c) assumes the regions are roughly equal in size.

Effective Permeability from Core Data


The effective permeability, obtained from core data, may be estimated from:
$$k_e = \left(1 + \frac{\sigma_k^2}{6}\right) e^{\bar{k}_G}$$
where $\bar{k}_G$ is the geometric mean of the natural log of permeability, i.e.:
$$\bar{k}_G = \sqrt[n]{\ln k_1 \, \ln k_2 \, \ln k_3 \cdots \ln k_n}$$
and $\sigma_k^2$ is the variance of the natural log of the permeability estimates:
$$\sigma^2 = \frac{\sum_{i=1}^{n}\left(\ln k_i - \overline{\ln k}\right)^2}{n-1}$$
where
$$\overline{\ln k} = \frac{\sum \ln k_i}{n}$$
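A minimal sketch of the effective-permeability estimate defined above (note that the geometric mean of ln k requires all permeabilities to exceed 1 mD so that ln k is positive; the sample values are illustrative):

```python
import numpy as np

def effective_permeability(k):
    """k_e = (1 + sigma_k^2 / 6) * exp(k_G), where k_G is the geometric mean
    of ln(k) and sigma_k^2 is the sample variance of ln(k).
    Assumes all k > 1 mD so that ln(k) > 0."""
    lnk = np.log(np.asarray(k, dtype=float))
    k_G = np.exp(np.mean(np.log(lnk)))   # geometric mean of ln(k)
    sigma2 = np.var(lnk, ddof=1)         # variance of ln(k), (n - 1) divisor
    return (1.0 + sigma2 / 6.0) * np.exp(k_G)

print(effective_permeability([120.0, 213.0, 180.0, 79.0]))   # illustrative, mD
```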

Example:

Given the permeability data in the table below, calculate:
1. The arithmetic, geometric, and harmonic averages of the core-derived permeability values.
2. The effective permeability.

Solution:

1)
$$\bar{k}_A = \frac{\sum k_i}{n} = \frac{120 + 213 + \cdots + 117}{14} = 165 \ \text{mD}$$
$$\bar{k}_G = \sqrt[14]{k_1 k_2 \cdots k_n} = \sqrt[14]{120 \times 213 \times \cdots \times 117} = 158.7 \ \text{mD}$$
$$\bar{k}_H = \frac{n}{\sum_{i=1}^{n}\left(\dfrac{1}{k_i}\right)} = \frac{14}{\dfrac{1}{120} + \dfrac{1}{213} + \cdots + \dfrac{1}{117}} = 151.4 \ \text{mD}$$

The harmonic averaging technique yields, as expected, the lowest value of average permeability. But the difference between the three averages is not significant, implying that the formation is essentially homogeneous.
2) Using the natural log of the core-derived permeability values:
$$\bar{k}_G = \sqrt[n]{\ln k_1 \, \ln k_2 \, \ln k_3 \cdots \ln k_n} = \left(7.173 \times 10^{9}\right)^{1/14} = 5.058$$
$$\overline{\ln k} = \frac{\sum \ln k_i}{n} = \frac{70.938}{14} = 5.067$$
$$\sigma^2 = \frac{\sum_{i=1}^{n}\left(\ln k_i - \overline{\ln k}\right)^2}{n-1} = \frac{1.2171}{14-1} = 0.093623$$

The arithmetic average of the natural log of the 14 permeability values is practically equal to the geometric mean of the natural log of the same permeability values. This further indicates that this particular formation is practically homogeneous. Using the geometric mean of the natural log of the k values, the effective permeability is:
$$k_e = \left(1 + \frac{\sigma_k^2}{6}\right) e^{\bar{k}_G} = \left(1 + \frac{0.093623}{6}\right) e^{5.058} = 159.729 \ \text{mD}$$
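The numbers in this example can be reproduced with the short sketch below. The permeability table itself is not shown above, so the 14 values used here are assumed to be those tabulated in the Dykstra-Parsons example of Chapter 4; they match the quoted sums (Σk_i/14 = 165 mD, Σln k_i = 70.938).

```python
import numpy as np

# Assumed data: the 14 core permeabilities (mD) tabulated in the
# Dykstra-Parsons example of Chapter 4 (they reproduce the sums quoted above)
k = np.array([120, 213, 180, 200, 212, 165, 145, 198, 210, 143,
              79, 118, 212, 117], dtype=float)
lnk = np.log(k)

k_A = k.mean()                               # arithmetic average   -> ~165 mD
k_G = np.exp(lnk.mean())                     # geometric average    -> ~158.7 mD
k_H = len(k) / np.sum(1.0 / k)               # harmonic average     -> ~151.4 mD

kG_lnk = np.exp(np.mean(np.log(lnk)))        # geometric mean of ln(k) -> ~5.058
sigma2 = np.var(lnk, ddof=1)                 # variance of ln(k)       -> ~0.0936
k_e = (1.0 + sigma2 / 6.0) * np.exp(kG_lnk)  # effective permeability  -> ~159.7 mD

print(k_A, k_G, k_H, kG_lnk, sigma2, k_e)
```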

Data Analytics 3

Regression
Regression is a method for prediction of a response variable from one
explanatory variable or by combining multiple explanatory variables.
While regression methods appear simple, they have many pitfalls for the unwary. In particular, regression should be used to predict an undersampled variable from one or more variables that have more sample data.
The main types of regression are bivariate linear regression and its variations, nonlinear regression as commonly used in geoscience data analysis, and multivariate linear regression (MLR). MLR is useful in big data analytics because of its capability to combine many input variables to predict the output or calibrate them to the target variable.
In a bivariate linear regression (or simple linear regression), one
explanatory variable is utilized in the linear function to estimate the
response variable, such as
$$Y^* = a + bX$$
where 𝑌 ∗ is the estimator of the unknown truth Y, X is the predictor
variable, a is a constant (intercept), and b is the regression coefficient
(slope of the linear equation).
Multivariate linear regression uses many predictor variables for estimating
the response variable. It uses the following linear equation:
$$Y^* = b_0 + b_1 X_1 + b_2 X_2 + \cdots + b_n X_n$$
where 𝑌 ∗ is the estimator of the response variable Y, 𝑋1 through 𝑋𝑛 are
predictor variables, 𝑏0 is a constant, and 𝑏1 through 𝑏𝑛 are unstandardized
regression coefficients for the explanatory variables.
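A minimal sketch of multivariate linear regression via ordinary least squares (the predictors and response here are synthetic and purely illustrative; in practice X would hold, e.g., several log responses and y the target reservoir property):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                      # predictors X1, X2, X3
y = (0.10 + 0.02 * X[:, 0] - 0.01 * X[:, 1] + 0.005 * X[:, 2]
     + rng.normal(scale=0.005, size=200))          # synthetic response

# Ordinary least squares for Y* = b0 + b1*X1 + ... + bn*Xn
A = np.column_stack([np.ones(len(X)), X])          # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b = coef[0], coef[1:]

y_hat = A @ coef                                   # estimator Y*
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(b0, b, r2)
```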

Fig. 3.1 Observed (y) and estimated (y*) data

Net Pay Cutoffs
When examining net pay cutoffs, it is important to differentiate among
gross rock volume, net sand volume, net reservoir volume, and net pay. For
example, the concept of net pay is used in reserves determination, but the
concept of net reservoir should be used in flow simulation analysis.
Reservoir simulators usually require gross pay with appropriate gross-to-net pay multipliers so that fluid contacts and vertical gradients are modeled
correctly. Hence, the appropriate cutoffs depend on the application. In
practice, cutoffs are applied to well logs or cores, and we therefore examine
thickness (pay) rather than volume. Figure 3.2 shows four categories of
rock: gross pay, net sand, net reservoir, and net pay. Gross pay comprises
all rocks within the evaluation interval, that is, the thickness between the
top and bottom of the zone(s) of interest. In sandstone reservoirs, the inflection points of the SP or GR curves are used to identify the boundaries. In carbonates, porosity cutoffs are used to define
the lithological tops and bottoms. Depending on the application, gross pay
may be set as the thickness of the reservoir or the hydrocarbon interval.
Gross pay can include low permeability intervals, shaly intervals, and
water-saturated intervals through which hydrocarbon fluid will not flow.
Net sand is those rocks that might have useful reservoir properties. Net
reservoir is those sand intervals that do have useful reservoir properties.
Net pay comprises those intervals that do contain movable hydrocarbons.
Gaynor and Sneider (1993) define net pay as “the hydrocarbon bearing
volume of the reservoir that would produce at economic rates using a given
production method”.
There are two general considerations controlling net pay:
• Does the rock have sufficient permeability to produce oil or gas at
economic rates?
• Does the rock contain movable oil volumes (fluid saturation)?

Fig. 3.2 Sequential application of cutoffs to define different classes of reservoir rock.
There are typically three factors that differentiate gross pay, net sand, net reservoir, and net pay: shale volume (Vshale), porosity, and saturation. The first two criteria address whether or not the rock has sufficient permeability. The last criterion addresses whether or not the rock has sufficient hydrocarbons.

Factors Controlling Net Pay Criteria


A common definition of net pay is the thickness of that part of the
hydrocarbon-bearing zone of the reservoir that is capable of producing at
economic rates using a given production method. However, economic rate
is an imprecise concept. Recall Darcy’s law for pseudo steady-state radial
inflow:
$$q \propto \frac{k_{eff}}{\mu} \cdot \frac{P_r - P_{wf}}{\ln\!\left(0.472\,\dfrac{r_e}{r_w}\right) + S}$$
Darcy’s law illustrates that the rate at which a well flows depends on the
effective permeability (and therefore on the oil, gas, and water saturation
and the overburden pressure), the reservoir fluid viscosity, the pay
thickness, the drainage radius, type of completion, drive mechanism, and
the pressure drawdown. In some reservoirs, tight rock can flow into more
permeable rock, and thus reservoir heterogeneity and flow path tortuosity
become factors. The economics of that rate depend on operating costs and
the price of oil and gas. Hence, there are many factors involved in
determining net pay including:
• Oil or gas prices;
• Completion or well design (horizontal versus vertical well,
hydraulically fractured versus nonstimulated well);
• Reservoir heterogeneity and its distribution;
• Total pay thickness;
• Effective vertical thickness and flow path tortuosity;
• Drive mechanism;
• Mobility ratio;
• Oil viscosity;
• Lateral continuity of interval for injection processes;
• Amount and quality of data.
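To make the proportionality above concrete, here is a minimal sketch of the pseudo steady-state radial inflow rate in conventional field units. The 141.2 constant and the formation volume factor B are standard field-unit additions that do not appear in the proportionality itself, and all input values are illustrative:

```python
import numpy as np

def pss_oil_rate(k_eff, h, p_r, p_wf, mu, B, r_e, r_w, skin):
    """Pseudo steady-state radial oil inflow (STB/d), conventional field units:
    k_eff in mD, h in ft, pressures in psi, mu in cp, B in rb/STB, radii in ft."""
    return (k_eff * h * (p_r - p_wf)) / (
        141.2 * mu * B * (np.log(0.472 * r_e / r_w) + skin))

# Illustrative values only
print(pss_oil_rate(k_eff=50.0, h=30.0, p_r=3000.0, p_wf=2000.0,
                   mu=1.2, B=1.3, r_e=1500.0, r_w=0.35, skin=2.0))
```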

Rules of Thumb for Net Pay


Most rules of thumb for pay cutoffs are built on three considerations. First,
does the rock have enough permeability? Second, does the rock have
sufficient movable hydrocarbon pore volume to yield oil or gas at
economic rates? Third, does that volume of rock contain sufficient
permeability to flow substantial volumes of hydrocarbon relative to other
rocks in the reservoir (relative processing or drainage)?
Probably the most common cutoffs are permeability cutoffs, with typical
values of:

k_eff > 0.5 to 1.0 mD for oil reservoirs
k_eff > 0.1 to 0.5 mD for conventional gas reservoirs
k_eff > 0.001 to 0.000 001 mD for tight gas with horizontal multifrac wells

in which pay with a permeability exceeding the cutoff is considered to be net pay. It is less common, but likely more accurate, to include the viscosity as follows:
k_eff / μ = m > 1.0 mD/cp for oil or gas reservoirs
in which m is the mobility and μ is the in situ fluid viscosity.

Usually, cutoffs are applied using openhole logs. Cutoff values for
porosity, water saturation, and shale content are required. The porosity
cutoff can be determined from a given permeability cutoff using a porosity-
permeability cross plot. If a cross plot is not available, the following rules
of thumb can be used:

φ > 1–3% for gas-bearing carbonates;
φ > 2–4% for oil-bearing carbonates;
φ > 5–8% for gas-bearing sandstones;
φ > 7–10% for oil-bearing sandstones;
φ > 26–28% for heavy-oil-bearing sandstones.

A high connate-water saturation correlates with a lower effective permeability. Therefore, another rule of thumb is:

𝑆𝑤 < 50%

In this case the interval must have a water saturation below the cutoff to
count as net pay.

A part of the pay zone that has the same properties as the overlying or
underlying sealing rock cannot be considered as net pay. In practice,
sandstones with high shale contents also have low effective permeability.
Hence, a shale cutoff is often applied as well:

𝑉𝑠ℎ < 50%

in which 𝑉𝑠ℎ is the volume of shale in the gross pay interval.
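A minimal sketch of applying the cutoffs sequentially to log samples, in the spirit of Fig. 3.2 (the cutoff defaults and the five log samples are illustrative; real cutoffs should come from the rock-specific rules of thumb above or from a porosity-permeability cross plot):

```python
import numpy as np

def net_pay_flags(phi, sw, vsh, phi_cut=0.08, sw_cut=0.50, vsh_cut=0.50):
    """Sequential cutoffs per log sample:
    net sand      -> Vsh below cutoff
    net reservoir -> net sand with porosity above cutoff
    net pay       -> net reservoir with water saturation below cutoff"""
    phi, sw, vsh = (np.asarray(a, dtype=float) for a in (phi, sw, vsh))
    net_sand = vsh < vsh_cut
    net_reservoir = net_sand & (phi > phi_cut)
    net_pay = net_reservoir & (sw < sw_cut)
    return net_sand, net_reservoir, net_pay

# Illustrative log samples at 0.5 ft spacing
phi = [0.12, 0.05, 0.15, 0.10, 0.09]
sw  = [0.35, 0.80, 0.40, 0.60, 0.45]
vsh = [0.20, 0.60, 0.10, 0.30, 0.45]
sand, reservoir, pay = net_pay_flags(phi, sw, vsh)
print("net pay thickness (ft):", 0.5 * pay.sum())
```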

Reservoir Heterogeneity 4

Introduction
Heterogeneity is a very important factor in determining the recovery from petroleum reservoirs. Heterogeneity calculations can be classified into static and dynamic techniques; the Dykstra-Parsons and Lorenz coefficient methods are among the best means of quantifying it. Heterogeneity is the quality or state of being heterogeneous. It was first defined in 1898 as difference or diversity in kind from other kinds. Another definition describes it as consisting of parts or things that are very different from each other. In petroleum studies it is often discussed in terms of isotropy and anisotropy. Heterogeneity can also be described as complexity, deviation from the norm, difference, discontinuity, randomness, or variability. Many researchers have noted that the distinction between homogeneous and heterogeneous is relative and is based mainly on economic considerations.

The term reservoir heterogeneity is used to describe the geological complexity of a reservoir and the relationship of that complexity to the flow of fluids through it. Heterogeneity may be defined as the complexity or variability of a specific system property in a particular volume of space and/or time. Reservoir heterogeneity is a function of the porosity and permeability distribution created by lithologic variation during sedimentary deposition, further complicated by mechanical processes related to deformation and chemical processes associated with diagenesis. Identifying reservoir heterogeneity is necessary to design the most efficient injection-production system for economy of energy and maximization of hydrocarbon recovery. In addition, a quantitative measure of reservoir heterogeneity serves as a guide to whether homogeneous conditions can be assumed in petroleum reservoir studies.

Methods of Calculating Reservoir Heterogeneity:


1- Lorenz Coefficient (𝐿𝐾 )
The original Lorenz technique was developed as a measure of the degree of inequality in the distribution of wealth across a population. In 1950 it was adapted to petroleum engineering (the Lorenz curve) by plotting cumulative flow capacity against cumulative storage capacity, computed from core-measured porosity and permeability. The Lorenz heterogeneity coefficient is a static measure of heterogeneity that takes into account the statistical nature of the porosity and permeability of a stratified reservoir. The value of $L_K$ ranges from 0 to 1.

The reservoir is considered to have a uniform permeability distribution if
𝐿𝐾 ≈ 0. The reservoir is considered to be completely heterogeneous if
𝐿𝐾 ≈ 1.

Calculation Steps:
(1) Tabulate thickness h, permeability k, and porosity φ
(2) Arrange the permeability data in descending order
(3) Calculate the cumulative flow capacity Σ(kh)_i and the cumulative storage capacity Σ(φh)_i
(4) Calculate the normalized cumulative capacities
$$C_k = \frac{\sum (kh)_i}{\sum (kh)_t} \quad\text{and}\quad C_\phi = \frac{\sum (\phi h)_i}{\sum (\phi h)_t}$$
(5) Plot C_k versus C_φ on a Cartesian graph and draw a straight diagonal from the beginning of the curve to its end, as shown in Figure 4.1
(6) Use the equation below to calculate the Lorenz coefficient
$$L_K = \frac{\text{Area enclosed between the plotted curve and the diagonal}}{\text{Area enclosed between the diagonal and the bottom-right corner of the plot}} = \frac{\text{Area}_{\text{under curve}} - \text{Area}_{\text{under diagonal}}}{\text{Area}_{\text{under diagonal}}}$$

Fig. 4.1 Flow capacity distribution

From Figure 4.1:
$$L_K = \frac{\text{Area}_{\text{under curve}} - \text{Area}_{\text{under diagonal}}}{\text{Area}_{\text{under diagonal}}} = \frac{\text{Area}_{ABDA} - \text{Area}_{ACDA}}{\text{Area}_{ACDA}} = \frac{\text{Area}_{ABCA}}{\text{Area}_{ACDA}}$$
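A minimal numerical sketch of the procedure above, which avoids the graphical area construction by integrating the Lorenz curve with the trapezoidal rule (the five-layer data set is illustrative):

```python
import numpy as np

def lorenz_coefficient(k, h, phi):
    """Lorenz coefficient from layer permeability, thickness and porosity:
    sort by k in descending order, build normalized cumulative kh and phi*h,
    and integrate the curve numerically (area under the diagonal = 0.5)."""
    k, h, phi = (np.asarray(a, dtype=float) for a in (k, h, phi))
    order = np.argsort(k)[::-1]                  # descending permeability
    Ck = np.cumsum(k[order] * h[order])
    Cphi = np.cumsum(phi[order] * h[order])
    Ck = np.insert(Ck / Ck[-1], 0, 0.0)          # start the curve at (0, 0)
    Cphi = np.insert(Cphi / Cphi[-1], 0, 0.0)
    area_under_curve = np.trapz(Ck, Cphi)        # trapezoidal integration
    return (area_under_curve - 0.5) / 0.5

# Illustrative 5-layer system (k in mD, h in m, phi as fraction)
k = [500.0, 200.0, 80.0, 30.0, 5.0]
h = [1.0, 2.0, 1.5, 2.0, 3.0]
phi = [0.25, 0.22, 0.18, 0.15, 0.10]
print(lorenz_coefficient(k, h, phi))             # ~0.49 for this data
```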

Example
Consider a reservoir of known thickness and the well with the clearest core data. Table 4.1 illustrates the method of calculation for the case where the reservoir is divided into 10 layers with a porosity of 0.33.

Table 4.1 Lorenz Coefficient calculations

[Plot: Lorenz curve — fraction of total kh versus fraction of total φh]

The area under the curve can be calculated graphically by dividing it into intervals of regular shapes (rectangles and triangles), calculating their individual areas, and summing them.

*This graphical method of calculating the area involves some error; it is better to calculate the area numerically.

[Plot: Lorenz curve subdivided into rectangles and triangles for graphical calculation of the area under the curve]

$$
\begin{aligned}
\text{Area}_{\text{under curve}} ={}& \tfrac{1}{2}(0.1-0)(0.4-0) + (0.2-0.1)(0.4-0) + \tfrac{1}{2}(0.2-0.1)(0.6-0.4) + (0.3-0.2)(0.6-0) \\
&+ \tfrac{1}{2}(0.3-0.2)(0.78-0.6) + (0.4-0.3)(0.78-0) + \tfrac{1}{2}(0.4-0.3)(0.84-0.78) \\
&+ (0.5-0.4)(0.84-0) + \tfrac{1}{2}(0.5-0.4)(0.9-0.84) + (0.6-0.5)(0.9-0) \\
&+ \tfrac{1}{2}(0.6-0.5)(0.94-0.9) + (0.7-0.6)(0.94-0) + \tfrac{1}{2}(0.7-0.6)(0.96-0.94) \\
&+ (0.8-0.7)(0.98-0) + (0.9-0.8)(0.9-0) + (1-0.9)(1-0)
\end{aligned}
$$
$$\text{Area}_{\text{under curve}} \approx 0.782$$

[Plot: diagonal of the Lorenz plot subdivided into rectangles and triangles for graphical calculation of the area under the diagonal]

$$
\begin{aligned}
\text{Area}_{\text{under diagonal}} ={}& \tfrac{1}{2}(0.1-0)(0.1-0) + (0.2-0.1)(0.1-0) + \tfrac{1}{2}(0.2-0.1)(0.2-0.1) + (0.3-0.2)(0.2-0) \\
&+ \tfrac{1}{2}(0.3-0.2)(0.3-0.2) + (0.4-0.3)(0.3-0) + \tfrac{1}{2}(0.4-0.3)(0.4-0.3) + (0.5-0.4)(0.4-0) \\
&+ \tfrac{1}{2}(0.5-0.4)(0.5-0.4) + (0.6-0.5)(0.5-0) + \tfrac{1}{2}(0.6-0.5)(0.6-0.5) + (0.7-0.6)(0.6-0) \\
&+ \tfrac{1}{2}(0.7-0.6)(0.7-0.6) + (0.8-0.7)(0.7-0) + \tfrac{1}{2}(0.8-0.7)(0.8-0.7) + (0.9-0.8)(0.8-0) \\
&+ \tfrac{1}{2}(0.9-0.8)(0.9-0.8) + (1-0.9)(0.9-0) + \tfrac{1}{2}(1-0.9)(1-0.9)
\end{aligned}
$$
$$\text{Area}_{\text{under diagonal}} = 0.5$$


Or, more simply, the area under the diagonal is the area of the entire triangle:
$$\text{Area}_{\text{under diagonal}} = \tfrac{1}{2}(1-0)(1-0) = 0.5$$
$$L_K = \frac{0.782 - 0.5}{0.5} = 0.564 \quad \text{(heterogeneous reservoir)}$$

2- Dykstra-Parsons coefficient ($V_K$)

Dykstra and Parsons used the log-normal distribution of permeability to define the coefficient of permeability variation, $V_K$:
$$V_K = \frac{S}{\bar{k}}$$
where $S$ and $\bar{k}$ are the standard deviation and the mean value of $k$, respectively. The standard deviation of a group of n data points is:
$$S = \sqrt{\frac{\sum (k_i - \bar{k})^2}{n}}$$
where $\bar{k}$ is the arithmetic average of permeability, n the total number of data points, and $k_i$ the permeability of individual core samples.

The Dykstra-Parsons coefficient of permeability variation, $V_K$, can also be obtained graphically by plotting permeability values on log-probability paper and then using the following equation:
$$V_K = \frac{k_{50} - k_{84.1}}{k_{50}}$$
where
k50 = permeability value at 50% cumulative probability
k84.1 = permeability value at 84.1% of the cumulative sample.

The Dykstra-Parsons coefficient is an excellent tool for characterizing the degree of reservoir heterogeneity. The term $V_K$ is also called the Reservoir Heterogeneity Index (RHI). The range of this index is $0 < V_K < 1$, as shown in Table 4.2.
Table 4.2 Dykstra-Parsons coefficient

The procedure for graphically determining the Dykstra-Parsons coefficient is as follows (a short numerical sketch is given after this list):
(1) Arrange the permeability data in descending order
(2) Determine the frequency of each permeability value
(3) For each value, find the number of samples with larger permeability
(4) Calculate the cumulative probability distribution by dividing the values obtained in step 3 by the total number of permeability points, n
(5) Plot the permeability data versus the cumulative frequency data (step 4) on a log-normal probability graph
(6) Draw the best straight line through the data, with more weight placed on points in the central portion where the cumulative frequency is close to 50%. This straight line provides a quantitative, as well as qualitative, measure of the heterogeneity of the reservoir rock.
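A minimal numerical sketch of the Dykstra-Parsons calculation, using the 14 core permeabilities from the example that follows. Instead of reading k50 and k84.1 off a hand-drawn log-probability plot, it fits a log-normal distribution (mean and standard deviation of ln k), which plays the role of the best-fit straight line; the result therefore differs slightly from the graphical estimate.

```python
import numpy as np

def dykstra_parsons(k):
    """V_K = (k50 - k84.1) / k50, with k50 and k84.1 taken from a log-normal
    fit to the data (equivalent to the best-fit line on log-probability paper)."""
    lnk = np.log(np.asarray(k, dtype=float))
    k50 = np.exp(lnk.mean())                        # median of the fitted log-normal
    k841 = np.exp(lnk.mean() - lnk.std(ddof=1))     # one standard deviation below
    return (k50 - k841) / k50

# The 14 core permeabilities (mD) from the example below
k = [120, 213, 180, 200, 212, 165, 145, 198, 210, 143, 79, 118, 212, 117]
print(dykstra_parsons(k))   # ~0.26, close to the graphical estimate of 0.29
```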

Example:
For the following permeability data, calculate RHI and estimate the
reservoir status.
Interval K, mD
1 120
2 213
3 180
4 200
5 212
6 165
7 145
8 198
9 210
10 143
11 79
12 118
13 212
14 117

Solution
Construct a table as described in the procedure above. Plot column 2 against column 5 on a log-normal probability sheet, and draw the best straight line through the points.

[Plot: permeability (mD, log scale) versus percent of samples with larger permeability, with the best-fit straight line]

$$V_K = \frac{k_{50} - k_{84.1}}{k_{50}} = \frac{158 - 112}{158} = 0.29 \quad \text{(heterogeneous reservoir)}$$

*There are also other methods for quantifying heterogeneity, such as the coefficient of variation ($C_V$) and the ordinary kriging technique.
