
Assisted History Matching

User Guide

Rock Flow Dynamics

March 2019

Version 19.1

Copyright Notice
Rock Flow Dynamics® (RFD), 2004–2019. All rights reserved. This document is the intellectual
property of RFD. No part of this document may be copied, stored in an information
retrieval system, distributed, translated or retransmitted in any form or by any means, electronic
or mechanical, in whole or in part, without the prior written consent of RFD.

Trade Mark
RFD, the RFD logotype, the tNavigator® product, and other words or symbols used to identify
the products and services described herein are trademarks, trade names or service marks of
RFD. Trademarks may not be imitated, used or copied, in whole or in part, without the prior
written consent of RFD. Graphical designs, icons and other design elements may be
trademarks and/or trade dress of RFD and may not be used, copied or imitated, in whole
or in part, without the prior written consent of RFD. Other company, product, and service
names are the properties of their respective owners.

Security Notice
The software specifications suggested by RFD are recommendations and do not limit the
configurations that may be used to operate the software. It is recommended to operate the
software in a secure environment, whether the software is operated on a single system or
across a network. The software user is responsible for configuring and maintaining networks
and/or system(s) in a secure manner. If you have any questions about security requirements
for the software, please contact your local RFD representative.

Disclaimer
The information contained in this document is subject to change without notice and should
not be construed as a commitment by RFD. RFD assumes no responsibility for any errors that
may appear in this manual. Some states or jurisdictions do not allow disclaimers of expressed
or implied warranties in certain transactions; therefore, this statement may not apply to you.
Since the software described in this document is constantly being improved, some descriptions
may be based on previous versions of the software.


Contents

1. Introduction

2. Defining Variables
2.1. Standard scenarios of variables definition in GUI
2.1.1. Equilibrium
2.1.2. Relative Permeability (RP)
2.1.3. Multiply Permeability by Regions
2.1.4. Multiply Permeability by Layers
2.1.5. Adjust KV/KH
2.1.6. Multiply Transmissibility by Regions
2.1.7. Multiply Pore Volume by Regions
2.1.8. Modify Scale Arrays
2.1.9. Multiply Faults
2.1.10. Other
2.2. File structure of a history matching project
2.2.1. File structure of an experiment
2.2.2. Saving project's modifications
2.2.3. Deleting results of an experiment
2.3. Defining Variables for models with the Reservoir Coupling option

3. Experimental Design
3.1. Sensitivity Analysis
3.2. Custom
3.3. Plackett-Burman design
3.3.1. General Plackett-Burman
3.3.2. Include line with minimal values
3.3.3. Folded Plackett-Burman
3.4. Grid search
3.5. Latin hypercube
3.6. Monte Carlo
3.7. Tornado
3.8. Implementation of Variable Filter

4. Objective Function
4.1. Specifying the Objective Function
4.2. History matching objective function
4.2.1. Objective function for different objects
4.2.2. Objective function formula
4.2.3. Automatic calculation of weights
4.2.4. Selecting historical points for history matching
4.2.5. Loading a pressure history into a base model
4.3. Forecast optimization objective function
4.3.1. Objective function for different objects
4.3.2. Objective function formula
4.4. Objective function normalization
4.5. Objective function based on user graphs for a field
4.6. Using UDQ as an objective function
4.7. Examples of objective functions

5. Optimization Algorithms
5.1. Creating New Experiment From Selected Variants
5.2. Termination criteria of algorithms
5.3. Response Surface (Proxy models)
5.4. Differential Evolution
5.4.1. Brief description of the algorithm
5.4.2. More about parameters
5.4.3. Algorithm versions
5.5. Simplex method
5.5.1. Definitions and brief algorithm description
5.5.2. Algorithm
5.5.3. Termination tests
5.6. Particle Swarm Optimization algorithm
5.6.1. Brief algorithm description
5.6.2. Particle Swarm Optimization algorithm in general
5.6.3. Velocity update formula
5.6.4. Influence of parameters on algorithm behavior
5.7. Multi-objective Particle Swarm Optimization algorithm
5.7.1. Brief description of the algorithm
5.7.2. Multi-objective Particle Swarm Optimization algorithm implementation
5.7.3. MOPSO algorithm parameters

6. Analysis of results
6.1. Project Info
6.2. Calculations
6.3. Results
6.3.1. Top panel buttons
6.3.2. Left panel buttons
6.3.3. Right panel buttons
6.4. Results Table
6.4.1. Mismatch calculation
6.5. Graphs
6.6. Graph calculator
6.6.1. Data structures and functions
6.6.2. Importing libraries
6.6.3. Usage examples
6.7. Crossplot
6.7.1. Pareto front visualization
6.8. Histogram
6.9. Stacked Plot
6.10. Analysis
6.10.1. Pareto chart
6.10.2. Tornado Diagram
6.10.3. Quantiles
6.10.4. Creating a Filter for Variables
6.11. MDS
6.11.1. Weighted MDS
6.12. CDF
6.13. Proxy model
6.13.1. Constructing Proxy model formula
6.13.2. Implementation of artificial neural network
6.13.3. Analysis using the Monte Carlo method
6.14. Creating a group of variants
6.15. Table of coefficients R2
6.16. Clusterization

7. From AHM to Forecast
7.1. Creating Forecast in GUI
7.1.1. NFA (No Further Action) scenario
7.1.2. Load User Forecast schedule file
7.1.3. Load User Forecast schedule file with variables

8. Workflows
8.1. Editing workflow
8.2. Creating variables
8.3. Running workflow

9. Run AHM from Model and Geology Designers
9.1. Use of Discrete Fourier transform algorithm for history matching
9.1.1. Discrete cosine transform (DCT) algorithm
9.1.2. How to use DCT via GUI

10. References

1. Introduction
tNavigator is a software package, offered as a single executable, which allows users to build
static and dynamic reservoir models, run dynamic simulations, perform extended uncertainty
analysis and build surface networks as part of one integrated workflow. All parts of the
workflow share a common proprietary internal data storage system, a super-scalable parallel
numerical engine, and common data input/output mechanisms and graphical user interface.
tNavigator supports the METRIC, LAB and FIELD unit systems.
tNavigator is a multi-platform software application written in C++. It can be installed on
64-bit Linux and Windows and runs on systems with shared and distributed memory layouts
as a console or GUI (local or remote) application. tNavigator runs on workstations and
clusters. A cloud-based solution with full GUI capabilities via remote desktop is also available.
tNavigator contains the following 8 functional modules, licensed separately:

• Geology Designer (includes PVT Designer and VFP Designer);

• Model Designer (includes PVT Designer and VFP Designer);

• Network Designer (includes PVT Designer and VFP Designer);

• Black Oil simulator;

• Compositional simulator;

• Thermal simulator;

• Assisted History Matching (AHM, optimization and uncertainty analysis);

• Graphical User Interface.

The list of tNavigator documentation is available in tNavigator Library.

This document describes the Assisted History Matching module, which is fully integrated
with the simulation modules (the Black Oil, Compositional and Thermal simulators).

The Assisted History Matching module can be used for:


• Uncertainty analysis;

• Sensitivity tests;

• History matching of a model;

• Multivariant matching of a model;

• Probabilistic forecast;

• Production optimization;

• Risk analysis;

• Research validation.

The tNavigator User Manual contains the description of the physical model, the mathematical
model and the keywords that can be used in a dynamic model.


2. Defining Variables
To run any algorithm it is required to define variables in advance. Different parameters can
be used as variables for Assisted History Matching (AHM) and uncertainty analysis. For
example:

• different geological realizations;

• permeability;

• RP data;

• aquifer parameters;

• well data;

• fault transmissibility.

For forecast optimization the following parameters can be set as variables:

• wells’ trajectories;

• wells’ parameters.

The number of variables defined by the user in a project is not limited. However, increasing
the number of variables makes the AHM, uncertainty analysis and optimization problems
more challenging.
Variables can be set in two ways:

• via User Graphical Interface (GUI);

• using the keyword DEFINES (see 12.1.24) (for models in tN, E1, E3, IM, ST, GE
formats) and the keyword VDEF (see 12.1.25) (for models in MO format).

The set of variables available in the GUI is limited to the standard scenarios. Using the keyword
DEFINES (see 12.1.24) it is possible to define any parameter as a variable; a conceptual sketch
of the value substitution is shown below.
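Conceptually, during an experiment each @variable_name@ token in the data files is replaced by a sampled value from the range declared in DEFINES. The following minimal Python sketch illustrates this substitution mechanism; it is illustrative only and is not tNavigator's actual implementation:

import re

def substitute_variables(deck_text, values):
    """Replace each @...@ token with the variant's value; expressions
    such as @2.4 + N_W@ are evaluated with the variable names bound to
    their sampled values."""
    def repl(match):
        # eval is used only to illustrate @a + b@ / @a * b@ arithmetic
        return str(eval(match.group(1), {}, values))
    return re.sub(r"@([^@]+)@", repl, deck_text)

deck = "EQUIL\n1816 180 @WOC_EQLREG_1@ 0 1816 0 0 /\n/"
print(substitute_variables(deck, {"WOC_EQLREG_1": 1876.5}))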


Examples of how to define variables in the AHM module are shown in the training tutorials:

• AHM1.1. AHM Theoretical Course;

• AHM1.2. How To Use Assisted History Matching;

• AHM1.3. How To Use RFT in History Matching;

• AHM1.4. How to find the best Well trajectory;

• AHM1.5. Corey correlation for RP in Assisted History Matching;

• AHM1.6. How To Use AHM for Hydraulic Fracture;

• AHM1.7. How to run different geological realizations;

• AHM1.8. How to go from AHM to Forecast;

• AHM1.9. How To Analyze Uncertainty.


2.1. Standard scenarios of variables definition in GUI


To select and define variables in the GUI use the button History Matching Variables Manager,
as shown in figure 1. If the history matching project is started from tNavigator's
main window (see figure 2), the program automatically suggests defining the project's
variables. No additional preparation is needed.
Notice that some scenarios appear in the History Matching Variables Manager window
only if specific keywords are present in the model's data file. Having selected a scenario, check
the variables that will be used for the model's history matching and define the ranges of the
selected variables, i.e. their minimum and maximum values.

Figure 1. History Matching Variables Manager.

The following scenarios of variables definition are available:

• Equilibrium;

• Relative Permeability (RP);

• Multiply Permeability by Regions;

• Multiply Permeability by Layers;

• Adjust KV/KH;

• Multiply Transmissibility by Regions;

• Multiply Pore Volume by Regions;


• Modify Scale Arrays;

• Multiply Faults;

• Other.

Figure 2. Start of AHM project from the main window.


2.1.1. Equilibrium
Variables of scenario
In this scenario of history matching, the depth of the water-oil contact (WOC) and the depth
of the gas-oil contact (GOC) are used as variables.
Availability of scenario in GUI
This scenario is available in the History Matching Variables Manager if the keyword
EQUIL (see 12.16.2) is defined in the model's data file. In figure 3 the WOC depth in the first
equilibrium region is defined as a variable for history matching.

Figure 3. Defining a depth of WOC as a variable via GUI.

Scenario’s file automatically saved in the USER folder


After running a history matching project, the file <project_name>_equil.inc is
automatically saved in the USER folder.
This file contains the keyword DEFINES (see 12.1.24) with the names, ranges and types
of the variables. Variable names are enclosed between @ symbols in the keyword
EQUIL (see 12.16.2). During the assisted history matching process each variable is replaced
by a value from the range defined in the keyword DEFINES (see 12.1.24).


Example
DEFINES
'WOC_EQLREG_1' 1877 1876 1878 REAL /
/
...
EQUIL
-- depth pres depth-wo pres-wo depth-go pres-go rsvd rvvd accuracy
1816 180 @WOC_EQLREG_1@ 0 1816 0 0 /
/

In this example the WOC depth is defined as a variable. Its initial value is 1877, its minimum
value is 1876 and its maximum value is 1878. Using the keyword EQUIL (see 12.16.2) the WOC
depth DWOC is set equal to the value of the variable WOC_EQLREG_1 in all blocks of the model.


2.1.2. Relative Permeability (RP)


Variables of scenario
In this scenario of history matching, relative permeability end points are used as variables.
Availability of scenario in GUI
This scenario is available in the History Matching Variables Manager if the relative
permeabilities are defined using the Corey correlation (COREYWO (see 12.6.3), COREYGO
(see 12.6.4), COREYWG (see 12.6.5)) or the LET correlation (LETWO (see 12.6.8), LETGO
(see 12.6.9), LETWG (see 12.6.10)). RP curves are created automatically using the Corey or
LET correlations based on the end points defined by the user.

An advantage of defining RP via correlations is that there is no need to rebuild RP tables
manually when moving an end point during the history matching process.

If RP are defined by tables (e.g., SWOF (see 12.6.1), SGOF (see 12.6.2)), it is necessary
to convert the tables into the Corey (LET) correlation and then run this scenario. To convert
RP tables into the Corey (LET) correlation, go to the menu Documents and select
Approximate RP and Convert to Corey (or LET) correlations in the pop-up menu.
It is possible to define variables for regions as follows (see figure 4):

• Set by reg. The variable's value is set in the selected regions or in all regions and is
used as the target parameter's value;

• Mult by reg. The target parameter's value is multiplied by the variable's value in the
region (regions);

• Plus by reg. The variable's value is added to the target parameter's value in the
region (regions).

Scenario’s file automatically saved in the USER folder


After running a history matching project, the file <project_name>_rp.inc is
automatically saved in the USER folder.
This file contains the keyword DEFINES (see 12.1.24) with the names, ranges and types
of the variables. Variable names are enclosed between @ symbols in the corresponding
keywords, e.g. COREYWO (see 12.6.3), COREYGO (see 12.6.4) or the LET keywords
(LETWO (see 12.6.8), LETGO (see 12.6.9)). During the assisted history matching process
each variable is replaced by a value from the range defined in the keyword DEFINES
(see 12.1.24).


Figure 4. Defining end–points as variables via GUI.

Example
DEFINES
'K_RORW_M_2_4' 1 0.5 2 REAL /
'N_W_P_2_4' 0 -0.1 0.1 REAL /
'S_WCR_S_2_4' 0.39 0.29 0.49 REAL /
/
...
COREYWO
-- SWL SWU SWCR SOWCR KROLW KRORW KRWR KRWU PCOW NOW NW NP SPC0
0.238 1 0.296 0.254 0.8 0.52 0.28 1 0 3.3 2.4 0 -1 /
0.238 1 @S_WCR_S_2_4@ 0.23 0.8 @0.11 * K_RORW_M_2_4@ 0.22 1 0 4 @2.4 + N_W_P_2_4@ 0 -1 /
0.238 1 0.34 0.265 0.8 0.435 0.398 1 0 3.5 2.4 0 -1 /
0.238 1 @S_WCR_S_2_4@ 0.27 0.8 @0.217 * K_RORW_M_2_4@ 0.302 1 0 3.3 @1.8 + N_W_P_2_4@ 0 -1 /
0.238 1 0.3 0.266 0.8 0.58 0.344 1 0 2.8 2 0 -1 /
/

In Example 1, RP end points are defined as variables for the 2nd and 4th saturation regions.
These variables are then used as parameters in the keyword COREYWO (see 12.6.3).
In particular:

• the value of krORW, equal to 0.217, is multiplied by the value of the variable K_RORW_M_2_4
(multiplication is denoted by the letter M in the variable's name), which varies from 0.5 to 2;

• the value of the variable N_W_P_2_4, which varies from -0.1 to 0.1, is added to the value of
nW, equal to 1.8 (summation is denoted by the letter P in the variable's name);

• the value of SWCR is set to the value of S_WCR_S_2_4 (setting is denoted by the letter S in
the variable's name), which varies from 0.29 to 0.49.

By default the value of the variable K_RORW_M_2_4 equals 1, the value of N_W_P_2_4
equals 0 and the value of S_WCR_S_2_4 equals 0.39. If all regions are selected, the
range of regions is written in the variable's name, e.g. K_RORW_M_1TO5 means that
regions from 1 to 5 are selected.

Example
COREYWO
-- SWL SWU SWCR SOWCR KROLW KRORW KRWR KRWU PCOW NOW NW NP SPC0
0.24 1 0.29 0.25 0.8 0.5 @0.28+K_RWR_P_1TO5@ 1 0 3 2 0 -1 /
0.24 1 0.39 0.23 0.8 0.11 @0.22+K_RWR_P_1TO5@ 1 0 4 3 0 -1 /
0.24 1 0.34 0.27 0.8 0.43 @0.4+K_RWR_P_1TO5@ 1 0 3 2 0 -1 /
0.24 1 0.35 0.28 0.8 0.2 @0.3+K_RWR_P_1TO5@ 1 0 3.3 2 0 -1 /
0.24 1 0.31 0.26 0.8 0.58 @0.34+K_RWR_P_1TO5@ 1 0 3 2 0 -1 /
/

Notice that for the multiplication and summation operations the program automatically
controls the values of the variables defined by the user and does not allow defining values
that lead to nonphysical results. If a variable's value is out of the correct range, its color
changes from black to red.
In the next example (Example 2) the relative permeability of water krWR equals 0.22 in
the 2nd region, so krWR will be equal to 0 if the value of the variable K_RWR_P_1TO5 is
-0.22.


2.1.3. Multiply Permeability by Regions


Variables of scenario
In this scenario of history matching, permeability multipliers in the selected regions are used
as variables. All three permeabilities PERMX (see 12.2.13) (in the X direction), PERMY
(see 12.2.13) (in the Y direction) and PERMZ (see 12.2.13) (in the Z direction) are multiplied
by the multipliers.
You can calculate the permeability in the Z direction PERMZ (see 12.2.13) from the permeability
in the X direction PERMX (see 12.2.13), using the formula PERMZ = PERMX * @KV_KH@,
in the scenario Adjust KV/KH.
Availability of scenario in GUI
Defining permeability multipliers as variables in the FIP regions is shown in figure 5.

Figure 5. Defining multipliers of permeability as variables via GUI.


Scenario’s file automatically saved in the USER folder


After running a history matching project, the file
<project_name>_hm_mult_perm_by_regs.inc is automatically saved in the USER folder.
This file contains the keyword DEFINES (see 12.1.24) with the names, ranges and types
of the variables. Variable names are enclosed between @ symbols in the keyword
ARITHMETIC (see 12.3.2). During the assisted history matching process each variable is
replaced by a value from the range defined in the keyword DEFINES (see 12.1.24).
Example
DEFINES
'M_PERM_FIPNUM_1' 1.000000 0.100000 10.000000 REAL /
'M_PERM_FIPNUM_2' 1.000000 0.100000 10.000000 REAL /
'M_PERM_FIPNUM_3' 1.000000 0.100000 10.000000 REAL /
/
...
ARITHMETIC
PERMX = IF (IWORKFIPNUM == 1, PERMX * @M_PERM_FIPNUM_1@, PERMX)
PERMY = IF (IWORKFIPNUM == 1, PERMY * @M_PERM_FIPNUM_1@, PERMY)
PERMZ = IF (IWORKFIPNUM == 1, PERMZ * @M_PERM_FIPNUM_1@, PERMZ)
PERMX = IF (IWORKFIPNUM == 2, PERMX * @M_PERM_FIPNUM_2@, PERMX)
PERMY = IF (IWORKFIPNUM == 2, PERMY * @M_PERM_FIPNUM_2@, PERMY)
PERMZ = IF (IWORKFIPNUM == 2, PERMZ * @M_PERM_FIPNUM_2@, PERMZ)
PERMX = IF (IWORKFIPNUM == 3, PERMX * @M_PERM_FIPNUM_3@, PERMX)
PERMY = IF (IWORKFIPNUM == 3, PERMY * @M_PERM_FIPNUM_3@, PERMY)
PERMZ = IF (IWORKFIPNUM == 3, PERMZ * @M_PERM_FIPNUM_3@, PERMZ)
/

In this example a permeability multiplier is defined as a variable for each region:
M_PERM_FIPNUM_1 etc. For all multipliers the initial value is 1, the minimum value is 0.1
and the maximum value is 10. The type of the variables is REAL.
During the assisted history matching process (see Example 1) PERMX, PERMY and
PERMZ are multiplied by the selected variables in different FIPNUM regions. PERMX,
PERMY and PERMZ are defined in the grid.inc file. In the model's data file the REGIONS
section, in which FIPNUM regions are defined, follows the GRID section. Therefore, the
FIPNUM property is included as a user-defined property (i.e. as an array) with the name
IWORKFIPNUM (see the keyword IWORK, see 12.3.6). This FIPNUM array can then be
used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the permeabilities
PERMX (see 12.2.13), PERMY (see 12.2.13) and PERMZ (see 12.2.13) are multiplied by
these multipliers in each FIPNUM region.
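The IF construction in ARITHMETIC behaves as an element-wise conditional over the grid. The following NumPy analogue of the statements above is a sketch for illustration only; the array names and values are hypothetical:

import numpy as np

# hypothetical flattened grid arrays read from the model
permx = np.array([100.0, 250.0, 40.0, 310.0])
fipnum = np.array([1, 2, 2, 3])

# sampled values of M_PERM_FIPNUM_1..3 for one variant
multipliers = {1: 1.7, 2: 0.6, 3: 2.0}

# PERMX = IF (IWORKFIPNUM == r, PERMX * @M_PERM_FIPNUM_r@, PERMX)
for region, mult in multipliers.items():
    permx = np.where(fipnum == region, permx * mult, permx)

print(permx)  # [170. 150.  24. 620.]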


2.1.4. Multiply Permeability by Layers


Variables of scenario
In this scenario of history matching, permeability multipliers in the selected layers (groups of
layers) are used as variables. For each selected layer (group of layers) one of the following
options is possible:

• all three properties PERMX (see 12.2.13), PERMY (see 12.2.13) and PERMZ (see 12.2.13)
are multiplied by the multiplier;
• PERMX (see 12.2.13) and PERMY (see 12.2.13) are multiplied by the multiplier;
• only PERMZ (see 12.2.13) is multiplied by the multiplier.

Figure 6. Defining multipliers of permeability as variables in the selected layers via GUI.

Availability of scenario in GUI


Defining permeability multipliers as variables in layers is shown in figure 6. To
subdivide the model into groups of layers (in the Z direction), specify the required number of
groups in the field Number of Parts to Divide Into and press the button Divide. Then a
general multiplier can be set for each group.


You can calculate the permeability in the Z direction PERMZ (see 12.2.13) from the permeability
in the X direction PERMX (see 12.2.13), using the formula PERMZ = PERMX * @KV_KH@,
in the scenario Adjust KV/KH.
Scenario's file automatically saved in the USER folder
After running a history matching project, the file
<project_name>_hm_mult_by_layers.inc is automatically saved in the USER folder.
This file contains the keyword DEFINES (see 12.1.24) with the names, ranges and types of
the variables. Variable names are enclosed between @ symbols in the keyword ARITHMETIC
(see 12.3.2). During the assisted history matching process each variable is replaced by a
value from the range defined in the keyword DEFINES (see 12.1.24).
Example
DEFINES
'MULT_PERMXYZ_1_12' 1 0.1 15 REAL /
'MULT_PERMXYZ_13_25' 1 1 10 REAL /
'MULT_PERMXYZ_26_38' 1 1 10 REAL /
'MULT_PERMXYZ_39_51' 1 0.5 5 REAL /
'MULT_PERMXYZ_52_64' 1 0.1 10 REAL /
'MULT_PERMXYZ_65_76' 1 0.1 10 REAL /
'MULT_PERMXYZ_77_89' 1 0.2 3 REAL /
'MULT_PERMXYZ_90_102' 1 0.1 10 REAL /
'MULT_PERMXYZ_103_115' 1 0.1 10 REAL /
'MULT_PERMXYZ_116_128' 1 0.1 10 REAL /
/
...
ARITHMETIC
PERMX (,,1:12) = PERMX(,,1:12)*@MULT_PERMXYZ_1_12@
PERMY (,,1:12) = PERMY(,,1:12)*@MULT_PERMXYZ_1_12@
PERMZ (,,1:12) = PERMZ(,,1:12)*@MULT_PERMXYZ_1_12@
PERMX (,,13:25) = PERMX(,,13:25)*@MULT_PERMXYZ_13_25@
PERMY (,,13:25) = PERMY(,,13:25)*@MULT_PERMXYZ_13_25@
PERMZ (,,13:25) = PERMZ(,,13:25)*@MULT_PERMXYZ_13_25@
PERMX (,,26:38) = PERMX(,,26:38)*@MULT_PERMXYZ_26_38@
PERMY (,,26:38) = PERMY(,,26:38)*@MULT_PERMXYZ_26_38@
PERMZ (,,26:38) = PERMZ(,,26:38)*@MULT_PERMXYZ_26_38@
...
/

In this example permeability multipliers for groups of layers in the Z direction are defined:
MULT_PERMXYZ_1_12 is set for the group of layers from 1 to 12, etc. All variables have the
REAL type and initial value 1; the range is defined individually for each variable.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the permeabilities
PERMX (see 12.2.13), PERMY (see 12.2.13) and PERMZ (see 12.2.13) are multiplied by these
multipliers for each group of layers (e.g., for the group of layers (,,1:12)).


Example
DEFINES
'MULT_PERMZ_1_12' 1 0.1 15 REAL /
'MULT_PERMZ_13_25' 1 1 10 REAL /
'MULT_PERMZ_26_38' 1 1 10 REAL /
'MULT_PERMZ_39_51' 1 0.5 5 REAL /
'MULT_PERMZ_52_64' 1 0.1 10 REAL /
'MULT_PERMZ_65_76' 1 0.1 10 REAL /
'MULT_PERMZ_77_89' 1 0.2 3 REAL /
'MULT_PERMZ_90_102' 1 0.1 10 REAL /
'MULT_PERMZ_103_115' 1 0.1 10 REAL /
'MULT_PERMZ_116_128' 1 0.1 10 REAL /
/
...
ARITHMETIC
PERMZ (,,1:12) = PERMZ(,,1:12)*@MULT_PERMZ_1_12@
PERMZ (,,13:25) = PERMZ(,,13:25)*@MULT_PERMZ_13_25@
PERMZ (,,26:38) = PERMZ(,,26:38)*@MULT_PERMZ_26_38@
PERMZ (,,39:51) = PERMZ(,,39:51)*@MULT_PERMZ_39_51@
PERMZ (,,52:64) = PERMZ(,,52:64)*@MULT_PERMZ_52_64@
PERMZ (,,65:76) = PERMZ(,,65:76)*@MULT_PERMZ_65_76@
PERMZ (,,77:89) = PERMZ(,,77:89)*@MULT_PERMZ_77_89@
PERMZ (,,90:102) = PERMZ(,,90:102)*@MULT_PERMZ_90_102@
PERMZ (,,103:115) = PERMZ(,,103:115)*@MULT_PERMZ_103_115@
PERMZ (,,116:128) = PERMZ(,,116:128)*@MULT_PERMZ_116_128@
/

In Example 2 a permeability multiplier for each group of layers in the Z direction is
defined: MULT_PERMZ_1_12 is set for the group of layers from 1 to 12, etc. However, in
contrast to the previous example, only the permeability in the Z direction PERMZ (see 12.2.13)
is multiplied by the defined multipliers in the corresponding layers.


2.1.5. Adjust KV/KH


Variables of scenario
In this scenario of history matching the multiplier KV_KH is used as a variable. The
permeability in the Z direction PERMZ (see 12.2.13) is computed from the permeability in the
X direction PERMX (see 12.2.13) as PERMZ = PERMX * @KV_KH@.
Availability of scenario in GUI
Figure 7 shows the selection of the KV_KH multiplier as a variable via the GUI.

Figure 7. Defining KV_KH multiplier as a variable via GUI.

Scenario’s file automatically saved in the USER folder


After running a history matching project, the file <project_name>_hm_adjust_kv_kh.inc
is automatically saved in the USER folder. This file contains the keyword DEFINES
(see 12.1.24) with the variable's name, range and type. The variable's name is enclosed
between @ symbols in the keyword ARITHMETIC (see 12.3.2). During the assisted history
matching process each variable is replaced by a value from the range defined in the keyword
DEFINES (see 12.1.24).
Example
DEFINES
'KV_KH' 0.1 0.1 1 REAL /
/
...
ARITHMETIC
PERMY = PERMX
PERMZ = PERMX * @KV_KH@
/


In this example the variable KV_KH is defined. Its initial value is 0.1, its minimum value
is 0.1 and its maximum value is 1. Its type is REAL. In the EDIT section, using the keyword
ARITHMETIC (see 12.3.2), the permeability PERMZ (see 12.2.13) is computed as PERMZ =
PERMX * @KV_KH@.


2.1.6. Multiply Transmissibility by Regions


Variables of scenario
In this scenario of history matching, transmissibility multipliers defined in the selected
regions are used as variables. MULTX (see 12.2.15), MULTY (see 12.2.17) and MULTZ
(see 12.2.19) are multiplied by these multipliers in the corresponding regions.
Availability of scenario in GUI
Defining transmissibility multipliers as variables in regions is shown in figure 8.

Figure 8. Defining multipliers of transmissibility as variables in regions via GUI.

Scenario’s file automatically saved in the USER folder


After running a history matching project, the file
<project_name>_hm_mult_trans_by_regs.inc is automatically saved in the USER folder.
This file contains the keyword DEFINES (see 12.1.24) with the names, ranges and types
of the variables. Variable names are enclosed between @ symbols in the keyword
ARITHMETIC (see 12.3.2). During the assisted history matching process each variable is
replaced by a value from the range defined in the keyword DEFINES (see 12.1.24).


Example
DEFINES
'M_TRANSMISSIBILITY_FIPNUM_1' 1 0.1 15 REAL /
'M_TRANSMISSIBILITY_FIPNUM_2' 1 1 10 REAL /
'M_TRANSMISSIBILITY_FIPNUM_3' 1 0.1 10 REAL /
'M_TRANSMISSIBILITY_FIPNUM_4' 1 0.1 10 REAL /
/
...
ARITHMETIC
MULTX = IF (IWORKFIPNUM == 1, MULTX * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTX)
MULTXM = IF (IWORKFIPNUM == 1, MULTXM * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTXM)
MULTY = IF (IWORKFIPNUM == 1, MULTY * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTY)
MULTYM = IF (IWORKFIPNUM == 1, MULTYM * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTYM)
MULTZ = IF (IWORKFIPNUM == 1, MULTZ * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTZ)
MULTZM = IF (IWORKFIPNUM == 1, MULTZM * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTZM)
MULTX = IF (IWORKFIPNUM == 2, MULTX * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTX)
MULTXM = IF (IWORKFIPNUM == 2, MULTXM * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTXM)
MULTY = IF (IWORKFIPNUM == 2, MULTY * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTY)
MULTYM = IF (IWORKFIPNUM == 2, MULTYM * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTYM)
MULTZ = IF (IWORKFIPNUM == 2, MULTZ * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTZ)
MULTZM = IF (IWORKFIPNUM == 2, MULTZM * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTZM)
...
/

In this example a transmissibility multiplier is defined as a variable in each FIPNUM region:
in the first region M_TRANSMISSIBILITY_FIPNUM_1 is defined, etc. All variables have
initial value 1 and type REAL; the range of each variable is defined individually (e.g., from
0.1 to 15 in the first region).
MULTX (see 12.2.15), MULTY (see 12.2.17) and MULTZ (see 12.2.19) are defined in the
grid.inc file. However, in the model's data file the REGIONS section, in which FIPNUM
regions are defined, follows the GRID section. Therefore, the FIPNUM property is included
as a user-defined property (i.e. as an array) with the name IWORKFIPNUM (see the keyword
IWORK, see 12.3.6). This FIPNUM array can then be used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the transmissibility
multipliers MULTX (see 12.2.15), MULTY (see 12.2.17) and MULTZ (see 12.2.19) are
multiplied by these variables in each FIPNUM region.


2.1.7. Multiply Pore Volume by Regions


Variables of scenario
In this scenario of history matching, pore volume multipliers defined in the selected
regions are used as variables.
Availability of scenario in GUI
This scenario is available in the History Matching Variables Manager if the EDIT
section is present in the model's data file. In figure 9 pore volume multipliers in several
regions are defined as variables for history matching.

Figure 9. Defining multipliers of pore volume as variables in the selected regions via GUI.

Scenario’s file automatically saved in the USER folder


After running a history matching project, the file
<project_name>_hm_mult_porv_by_regs.inc is automatically saved in the USER folder.
This file contains the keyword DEFINES (see 12.1.24) with the names, ranges and types
of the variables. Variable names are enclosed between @ symbols in the keyword
ARITHMETIC (see 12.3.2). During the assisted history matching process each variable is
replaced by a value from the range defined in the keyword DEFINES (see 12.1.24).


Example
DEFINES
'M_PORV_FIPNUM_1' 1.000000 0.100000 10.000000 REAL /
'M_PORV_FIPNUM_2' 1.000000 0.100000 10.000000 REAL /
'M_PORV_FIPNUM_3' 1.000000 0.100000 10.000000 REAL /
/
...
ARITHMETIC
PORV = IF (IWORKFIPNUM == 1, PORV * @M_PORV_FIPNUM_1@, PORV)
PORV = IF (IWORKFIPNUM == 2, PORV * @M_PORV_FIPNUM_2@, PORV)
PORV = IF (IWORKFIPNUM == 3, PORV * @M_PORV_FIPNUM_3@, PORV)
/

In this example the pore volume multiplier is defined as a variable for each region:
M_PORV_FIPNUM_1 etc. For all multipliers the initial value is 1, the minimum value is 0.1
and the maximum value is 10. The type of the variables is REAL.
During the assisted history matching process (see Example 1) PORV is multiplied by
the defined variables (M_PORV_FIPNUM_1 etc.) in different FIPNUM regions. The effective
pore volume of blocks PORV (see 12.2.27) is modified in the EDIT section. However, in the
model's data file the REGIONS section, in which FIPNUM regions are defined, follows the
EDIT section. Therefore, the FIPNUM property is included as a user-defined property (i.e. as
an array) with the name IWORKFIPNUM (see the keyword IWORK, see 12.3.6). This
FIPNUM array can then be used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the pore volume
PORV (see 12.2.27) is multiplied by the corresponding multiplier in each FIPNUM region.


2.1.8. Modify Scale Arrays


Variables of scenario
In this scenario of history matching arrays of end-points are used as variables:
SWCR (see 12.6.29), SOWCR (see 12.6.31), SWU (see 12.6.33), SWL (see 12.6.26),
KRW (see 12.6.59), KRWR (see 12.6.59), KRORW (see 12.6.58), KRO (see 12.6.58),
PCW (see 12.6.62), SWLPC (see 12.6.27), SGCR (see 12.6.30), SOGCR (see 12.6.32),
SGU (see 12.6.34), SGL (see 12.6.28), KRG (see 12.6.60), KRGR (see 12.6.60), KRORG
(see 12.6.58), PCG (see 12.6.63).

Figure 10. Defining arrays of end-points as variables via GUI.

Availability of scenario in GUI


The possibilities for defining variables are similar to those described in the Relative
Permeability (RP) scenario. Figure 10 shows how to define these variables in the History
Matching Variables Manager.
It is possible to define variables for regions as follows (see figure 10):

• Set by reg. The variable's value is set in the selected regions or in all regions and is
used as the target parameter's value;

• Mult by reg. The target parameter's value is multiplied by the variable's value in the
region (regions);

• Plus by reg. The variable's value is added to the target parameter's value in the
region (regions).

Scenario’s file automatically saved in the USER folder


If the keyword ENDSCALE (see 12.1.136) is not defined in the model, then after running
a history matching project the file <project_name>_runspec.inc with the keyword
ENDSCALE (see 12.1.136) is automatically saved in the USER folder.
Example
ENDSCALE
/

Moreover, the file <project_name>_hm_edit_endscale.inc is automatically
saved in the USER folder. This file contains the keyword DEFINES (see 12.1.24) with the
names, ranges and types of the variables. Variable names are enclosed between @ symbols in
the keyword ARITHMETIC (see 12.3.2). During the assisted history matching process each
variable is replaced by a value from the range defined in the keyword DEFINES (see 12.1.24).


Example
IWORKSATNUM
14*0 4*2 14*0
6*2 10*0 8*2
10*0 9*2 9*0
...
/
DEFINES
'SATNUM_SWCR_S_1TO5' 0.296 0.196 0.396 REAL /
'SATNUM_SWU_P_2_4' 0 -0.1 0.1 REAL /
'SATNUM_KRW_M_1_3_5' 1 0.5 2 REAL /
/
...
ARITHMETIC
SWCR = IF(IWORKSATNUM == 1, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 2, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 3, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 4, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 5, @SATNUM_SWCR_S_1TO5@, SWCR)
SWU = IF(IWORKSATNUM == 2, SWU+@SATNUM_SWU_P_2_4@, SWU)
SWU = IF(IWORKSATNUM == 4, SWU+@SATNUM_SWU_P_2_4@, SWU)
KRW = IF(IWORKSATNUM == 1, KRW*@SATNUM_KRW_M_1_3_5@, KRW)
KRW = IF(IWORKSATNUM == 3, KRW*@SATNUM_KRW_M_1_3_5@, KRW)
KRW = IF(IWORKSATNUM == 5, KRW*@SATNUM_KRW_M_1_3_5@, KRW)
/

In the second example (see Example 2) the variable SATNUM_SWCR_S_1TO5 with initial
value 0.296, varying from 0.196 to 0.396, is defined for all saturation regions. For the 1st, 3rd
and 5th regions the variable SATNUM_KRW_M_1_3_5 with initial value 1, varying from
0.5 to 2, is defined. For the 2nd and 4th regions the variable SATNUM_SWU_P_2_4 with
initial value 0, varying from -0.1 to 0.1, is defined. All variables have the type REAL.
During the assisted history matching process (see Example 2) SWCR, SWU and KRW are
modified according to the defined operation type in different SATNUM regions. SWCR, SWU
and KRW are defined in the file props.inc. However, in the model's data file the REGIONS
section, in which SATNUM regions are defined, follows the PROPS section. Therefore, the
SATNUM property is included as a user-defined property (i.e. as an array) with the name
IWORKSATNUM (see the keyword IWORK, see 12.3.6). This SATNUM array can then be
used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2):

• for all SATNUM regions the SWCR value is set equal to SATNUM_SWCR_S_1TO5;

• for the 1st, 3rd and 5th regions the KRW value is multiplied by the variable
SATNUM_KRW_M_1_3_5;

• for the 2nd and 4th regions the variable SATNUM_SWU_P_2_4 is added to the SWU value.


2.1.9. Multiply Faults


Variables of scenario
In this scenario of history matching, fault transmissibility multipliers are used as variables.
Availability of scenario in GUI
This scenario is available in the History Matching Variables Manager if the keyword
MULTFLT (see 12.2.39) is defined in the model's data file. In figure 11 fault transmissibility
multipliers are defined as variables for history matching.

Figure 11. Defining multipliers of faults transmissibility via GUI.

Scenario’s file automatically saved in the USER folder


After running a history matching project, the file <project_name>_hm_mult_faults.inc
is automatically saved in the USER folder. This file contains the keyword DEFINES
(see 12.1.24) with the names, ranges and types of the variables. Variable names are used
between @ symbols in the keyword MULTFLT (see 12.2.39). During the assisted history
matching process each variable is replaced by a value from the range defined in the keyword
DEFINES (see 12.1.24).


Example
DEFINES
'M_FAULT22' 1 0 1 REAL /
'M_FAULT23' 1 0 1 REAL /
'M_FAULT24' 1 0 1 REAL /
'M_FAULT21' 1 0 1 REAL /
/
MULTFLT
'FAULT22' @M_FAULT22@ /
'FAULT23' @M_FAULT23@ /
'FAULT24' @M_FAULT24@ /
'FAULT21' @M_FAULT21@ /
/

In this example fault transmissibility multipliers are defined as variables: M_FAULT22
etc. All variables have the REAL type, initial value 1 and vary from 0 to 1. In the EDIT
section, using the keyword MULTFLT (see 12.2.39), the transmissibility multiplier of the
fault FAULT22 is set to the value of the variable M_FAULT22, etc.


2.1.10. Other
Variables of scenario
Variables defined in the model's data file using the keyword DEFINES (see 12.1.24) (see
Example 1) are included in the Other tab of the History Matching Variables Manager.
Example
DEFINES
'L1' 50 50 50 REAL/
'L2' 50 50 50 REAL/
'DAYS' 100 100 100 INTEGER/
'AZIMUTH' 60 60 60 REAL/
PORO_FILENAME 'PORO_1' 1* 1* STRING/
/
...

INCLUDE
@PORO_FILENAME + ".inc"@ /

The following types are available:

• INTEGER – integer number (the minimum and maximum must also be integer numbers);

• REAL – floating-point number;

• STRING – string.

Figure 12. Other variables for a history matching.


Availability of scenario in GUI


After running a history matching project it is possible to change the varying range of the
variables defined in the model's data file via the GUI using the History Matching Variables
Manager. Moreover, in the History Matching Variables Manager you can add variables
from the standard scenarios to the project. Figure 12 shows a history matching project in
which the permeability multipliers defined via the Multiply Permeability by Regions scenario
are used as variables along with the variables defined in the Other tab.

Figure 13. Configure values for a variable of type STRING.


In this example four variables L1, L2, DAYS and AZIMUTH are defined. Three variables
are of REAL type, while DAYS is of INTEGER type. For all variables the initial value (first
number) and the varying range (last two numbers) are defined.
The variable PORO_FILENAME is of STRING type and its initial value is set to PORO_1.
Using the keyword INCLUDE (see 12.1.80) files with the extension *.inc can be included in
the model in place of the STRING variable. In this example the file named PORO_1.inc,
containing the porosity property, is included into the project in place of the variable
PORO_FILENAME.
Values of a STRING variable can be controlled by an algorithm or set by external grid
search (see figure 13). When external grid search is used, a series of experiments is created, i.e.
for each value of the STRING variable a separate experiment is created over the other model
variables. The external grid search can be used with all experiments (see section Experimental
Design) and algorithms (see section Optimization Algorithms).
A variable of STRING type can be controlled by an algorithm for the Custom, Grid search,
Latin hypercube and Monte Carlo experiments and for the Differential Evolution and Particle
Swarm Optimization optimization algorithms. In this case the algorithm searches for an
optimal combination of variables including the STRING variable.

Figure 14. Creating an experiment with STRING variable.

In the window Create New Experiment (see figure 14) double-click on the Values
field for the variable PORO_FILENAME. In the pop-up dialog Configure Values for
"PORO_FILENAME" specify the names of the loaded files without the *.inc extension. As
can be seen in figure 14, the files PORO_1.inc, PORO_2.inc, PORO_3.inc and PORO_4.inc
will be successively loaded into the model. For each value of the STRING variable (PORO_1,
PORO_2 etc.) a separate experiment will be created over the other model variables L1, L2
and DAYS.


2.2. File structure of a history matching project


For the correct work of the AHM module it is not recommended to modify the file structure of
a history matching project. Deleting experiment results is allowed only via the user graphical
interface (GUI).
A history matching project is saved with a special extension, .hmp. Together with it a folder
named <project_name>.hmf is created, and a MODELS folder is created inside it. A copy
of the base model, i.e. the data file together with the USER and RESULTS folders, is saved in
the CURRENT folder of the MODELS folder. Files with the extension .inc are not copied into
the CURRENT folder; therefore, when defining variables manually it is necessary to define
them in the data file or in user files located in the USER folder.
A copy of the model's data file is saved in the CURRENT folder so as not to corrupt the
model's initial data file. Any change made via the GUI modifies only the model's copy saved
in the CURRENT folder. It is also possible to make changes directly in the model's data file.
After running any experiment, a folder named <experiment_name>, e.g. A001,
containing the data files of the experiment and its variants, is created in the project folder
<project_name>.hmf. At the same time the current version of the project, contained in the
CURRENT folder, is saved in the MODELS/SAVE_1 folder.

2.2.1. File structure of an experiment


The data file of an experiment (see Example 1) contains information about the variables
defined in the project and a reference to the data file of the copy of the base model saved in
the MODELS/SAVE_1 folder. For each variant of the experiment, the data file contains the
values of the variables to substitute into the base model and the name of the experiment's
data file (see Example 2).
Example
-- Experiment A001 [#1] (Latin Hypercube)
-- experiment base parameter values
PREDEFINES
M_PERM_FIPNUM_1 1 0.1 10 REAL /
M_PERM_FIPNUM_2 1 0.1 10 REAL /
M_PERM_FIPNUM_3 1 0.1 10 REAL /
M_PERM_FIPNUM_4 1 0.1 10 REAL /
/
-- project base data file
OPEN_BASE_MODEL
'../MODELS/SAVE_1/PUNQ_S3N.DATA' /

In Example 1, in the experiment data file e1.data, the keyword PREDEFINES
(see 12.1.26) defines the permeability multipliers (M_PERM_FIPNUM_1 etc.) as variables
in each region, together with their minimum and maximum values and their type. The
keyword OPEN_BASE_MODEL (see 12.1.27) substitutes the values of the variables for
@variable_name@ in the base model. This keyword is generated by tNavigator automatically
and should not be used in models.
Example
PREDEFINES
M_PERM_FIPNUM_1 3.1 0.1 10 REAL /
M_PERM_FIPNUM_2 7.127 0.1 10 REAL /
M_PERM_FIPNUM_3 8.455 0.1 10 REAL /
M_PERM_FIPNUM_4 5.507 0.1 10 REAL /
/
OPEN_BASE_MODEL
'e1.DATA' /

Example 2 shows a file of an experiment's variant. In this file the values of the permeability
multipliers M_PERM_FIPNUM_1 etc. are written in the keyword PREDEFINES
(see 12.1.26). These values are substituted for @variable_name@ in the base model. The
minimum and maximum values and the type of each variable are also defined. The
substitution is carried out by the keyword OPEN_BASE_MODEL (see 12.1.27).
Results of calculations of the experiment's variants (with different values of variables) are
saved in the RESULTS folder created in the <experiment_name> folder.
Right-clicking on a model variant in the list of variants and selecting Save as... allows
saving the variant as a standard data file, in which the corresponding values of the variables
are substituted.

2.2.2. Saving project’s modifications


While working, the user can modify the current model, i.e. the copy of the initial model saved
in the CURRENT folder. You can work with this model as with a normal model. To start
working with the model press the button Open base model. Then you can modify the model
via the GUI, or make changes directly in the data file. For example, you can change relative
permeabilities, add wells, etc. All user modifications are saved in the corresponding files (e.g.,
in the case of modification of relative permeabilities, in the file <project_name>_rp.inc)
located in the CURRENT/USER folder. When a new experiment is created, the modified
model is saved in the folder MODELS/SAVE_N, where N is a number which increases each
time the model is modified. It is recommended to reload the history matching project if
the model was modified and then run a new experiment using the button Create new
experiment for project base model. If the current model was not modified, a new
experiment uses the same copy of the base model as the previous one (i.e., the one located in
the folder MODELS/SAVE_N).


2.2.3. Deleting results of an experiment


It is strongly recommended to delete results only via the GUI, using the buttons located on
the Project Info tab near the name of the experiment:

• delete an experiment from the project without the possibility of restoring its data;

• delete the experiment's files while keeping the possibility to restore the experiment. All
the experiment's files are deleted, but the experiment's entry in the project remains, and
the experiment configuration and list of variants stay available.

2.3. Defining Variables for models with the Reservoir Coupling option


tNavigator supports the Reservoir Coupling option to combine calculations of different
models (see section 5.13 of the tNavigator User Manual). Several subordinate (SLAVE) models
are coupled by a main (MASTER) model via group controls and a surface network.
When a coupled model is run, the results of the subordinate models are recorded into the
corresponding folders located in the RESULTS folder of the MASTER model. This way of
recording the results of a coupled model allows to:

• open and run a MASTER model and a SLAVE model coupled with it simultaneously as
independent models;

• open and run several variants of the MASTER model sharing the SLAVE models;

• launch a history matching project for the coupled model.

Variables can be specified in both the MASTER model and the SLAVE models via the GUI
using the button History Matching Variables Manager or using the keyword DEFINES
(see 12.1.24). Names of variables specified in the MASTER and SLAVE models should be
different.

! If names of variables in the MASTER and SLAVE models coincide, the values of the
variables in the SLAVE models will be equal to the values specified in the MASTER model.
If different SLAVE models contain variables with the same name, the variable value in the
MASTER model will be taken from the first SLAVE model read by the MASTER model,
and this value will be used in all SLAVE models in each model variant. Having read the
information from the SLAVE models, the values of the variables are updated only in the
MASTER model.

! If a variable is specified in the GUI, it can be renamed in the corresponding file located
in the USER folder.

A history matching project is launched from the MASTER model. The variables specified
in both the SLAVE and MASTER models are shown in the Experimental Design window.


3. Experimental Design
Before launching any optimization algorithm it is required to carry out a sensitivity test of the
history matching project to select variables. For this purpose it is recommended to run experiment(s).

In this section the following experimental designs are described:


• Custom;
• Plackett-Burman design;
• Grid search;
• Latin hypercube;
• Monte Carlo;
• Tornado.

Figure 15. Create an experiment.

Figure 15 shows the dialog that prompts the user to create an experiment. For convenience,
the dialog contains the following buttons:


• Modify Ranges for Selected Variables. Allows modifying the variation range of a
selected variable;

• Unselect all variables;

• Select variable by filter. Allows including in the experiment only the variables selected
by the filter (see Implementation of Variable Filter);

• Hide unused variables. Unticked variables will be hidden in the dialog Create New
Experiment;

• Unhide unused variables.

Examples of how to work with experiments in the AHM module are described in the
training tutorials:

• AHM1.1. AHM Theoretical Course;

• AHM1.2. How To Use Assisted History Matching;

• AHM1.3. How To Use RFT in History Matching;

• AHM1.4. How to find the best Well trajectory;

• AHM1.5. Corey correlation for RP in Assisted History Matching;

• AHM1.6. How To Use AHM for Hydraulic Fracture;

• AHM1.7. How to run different geological realizations;

• AHM1.8. How to go from AHM to Forecast;

• AHM1.9. How To Analyze Uncertainty.


3.1. Sensitivity Analysis


Using experiments it is possible to evaluate the selected variables and their varying ranges.
If a good history matching variant can be found using them, the variables can be considered
good ones (see the example in figure 16). In the figures we can see that the historical values
lie between the calculated variants, or the calculated variants are "approaching" the historical
data. So if we continue to search for a solution with the selected variables and ranges, we can
expect to find a good history matching case.

Figure 16. Sensitivity Analysis: "good" variants (panels (a) and (b)).

If the sensitivity analysis shows that the variables and/or their varying ranges are not
satisfactory (see the example in figure 17), i.e. the calculated data is far from the historical
data, then continuing to search for a solution with these variables and ranges is likely to spend
simulation time without finding a good history matching case. In this case you can try other
variables and/or change their varying ranges and run an experiment once again to evaluate the
sensitivity of the variables before launching optimization algorithms.

Figure 17. Sensitivity Analysis: ”bad” variants.

Additional examples of possible model variants are shown in figure 18.


Figure 18. Sensitivity Analysis: examples of possible variants.

3.2. Custom
The user can manually define the values of the variables for each variant of the experiment in
the GUI. Figure 19 shows that for each variant (Variant #0, Variant #1, Variant #2 etc.)
the values of the variables are defined by the user.


Figure 19. Custom experimental design.

3.3. Plackett-Burman design


Plackett-Burman design is used to design an experiment. The following methods are available:
• General Plackett–Burman;
• Include line with minimal values;
• Folded Plackett–Burman.

3.3.1. General Plackett-Burman

In this method a matrix is constructed whose rows are combinations of maximum and minimum
variable values. In the general method each possible combination of minimal and maximal
variable values (−−, −+, +−, ++) appears the same number of times.
An example is shown in figure 20. The maximal value of a variable is denoted as "+", the minimal
one as "-". Columns are variables (there are 3 of them), rows are model variants (there
are 4 of them). Each combination of levels for any pair of variables appears exactly once.
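As an illustration, the following minimal Python sketch builds the 4-run design of figure 20 and maps its "+"/"-" levels to actual variable ranges (variable bounds here are hypothetical):

import numpy as np

# A 4-run Plackett-Burman design for 3 variables: +1 denotes the maximal
# value of a variable, -1 the minimal one. Each combination of levels
# (--, -+, +-, ++) for any pair of columns appears the same number of times.
design = np.array([
    [+1, +1, -1],
    [+1, -1, +1],
    [-1, +1, +1],
    [-1, -1, -1],
])

# Map the +/-1 levels to hypothetical variable bounds.
v_min = np.array([0.1, 0.5, 1.0])
v_max = np.array([10.0, 2.0, 5.0])
variants = v_min + (design + 1) / 2 * (v_max - v_min)
print(variants)  # one row per model variant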

3.3.2. Include line with minimal values

The row containing only the minimal values of the variables is appended to the standard Plackett–
Burman matrix.


Figure 20. Plackett–Burman design.

3.3.3. Folded Plackett-Burman

The complementary matrix, i.e. the one where minimal values are replaced by maximal ones
and vice versa, is appended to the standard Plackett–Burman matrix. If the number of rows of the
standard matrix is n, then the number of rows of the obtained matrix is 2n.


3.4. Grid search

This algorithm is an exhaustive search through a subset of the parameter hyperspace. For
M variables it generates (s1 + 1) · (s2 + 1) · ... · (sM + 1) variants, where si is the number of
intervals for the ith variable. A minimal sketch of the variant generation is given after figure 21.

Figure 21. Grid search scheme. Here M = 2, s1 = 3, s2 = 3.
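The following minimal Python sketch enumerates the grid nodes (variable names and bounds are hypothetical; two variables with 3 intervals each, as in figure 21):

import itertools
import numpy as np

bounds = {"PERM_MULT": (0.1, 10.0), "PORO_MULT": (0.8, 1.2)}
intervals = {"PERM_MULT": 3, "PORO_MULT": 3}

# For each variable, s_i intervals give s_i + 1 grid nodes.
axes = [np.linspace(lo, hi, intervals[name] + 1)
        for name, (lo, hi) in bounds.items()]

# Exhaustive search: every combination of grid nodes is one model variant.
variants = list(itertools.product(*axes))
print(len(variants))  # (3 + 1) * (3 + 1) = 16 variants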


3.5. Latin hypercube

This algorithm generates variants in the following way: for N model variants and M variables,
the range of each variable is divided into N slices, and then N points are chosen so that each
slice of each variable contains exactly one point. An advantage of this algorithm is the
abovementioned way of selecting points, which allows covering the search space even with a small
number of variants. Moreover, the number of variants, i.e. the number of simulations to run,
is defined by the user, not by the algorithm, and can be decreased if necessary. A sampling sketch
is given after figure 22.

Figure 22. Latin hypercube scheme. Here N = 5, M = 2.
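A minimal Python sketch of this sampling scheme (variable bounds are hypothetical; only the uniform case is shown):

import numpy as np

def latin_hypercube(n_variants, bounds, seed=1):
    """Sample one point per slice of each variable's range."""
    rng = np.random.default_rng(seed)  # non-zero seed reproduces results
    samples = np.empty((n_variants, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        # Divide [lo, hi] into n_variants equal slices, draw one point
        # per slice, then shuffle so slices are paired randomly.
        edges = np.linspace(lo, hi, n_variants + 1)
        points = rng.uniform(edges[:-1], edges[1:])
        samples[:, j] = rng.permutation(points)
    return samples

# Hypothetical ranges for two variables, N = 5 variants (cf. figure 22).
print(latin_hypercube(5, [(0.1, 10.0), (0.8, 1.2)]))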

A variable's value in the defined region can be distributed as:

• Uniform (see figure 23(a));

• Log-normal (see figure 23(b)). The peak of the log-normal distribution of a variable is
located at the base value of the variable;

• Normal (see figure 23(c));

• Triangular (see figure 23(d)). The triangle peak is located at the base value of the variable;

• Discrete (see figure 24). It is required to specify a Value of variable and the Probability that
the variable has the specified value (button Add new value and probability). To normalize
the range of probabilities press the button Normalize.

Reproduction of experiment's results. By default, for each launch of the "Latin hypercube"
experiment different model variants will be generated (due to the use of a random seed in this
algorithm). Therefore, when running this experiment on different computers, or rerunning it
on the same computer, different variants will be obtained. In order to reproduce the results of a
previous experiment, set Random seed equal to the value used in the previous experiment (see
figure 25). The value of the random seed should not be equal to zero.
For model variants calculated using the "Latin hypercube" experimental design, quantiles can
be obtained. Quantiles are available on the tab Results. Analysis.


Figure 23. Distributions of variable's value for the "Latin hypercube" experiment: (a) uniform, (b) log-normal, (c) normal, (d) triangular.


Figure 24. Specifying a discrete distribution of variable values.

Figure 25. Experimental design ”Latin hypercube”.


3.6. Monte Carlo

In the Monte Carlo algorithm, for each model variant the variables are independently generated
according to the specified distribution. The method is used to generate an arbitrary number of
model variants using the created Proxy model (see 6.13).
For the algorithm scheme shown in figure 26, N = 5 model variants with M = 2 variables
are independently generated. A sampling sketch is given after figure 26.

Figure 26. Monte Carlo algorithm scheme. Here N = 5, M = 2.
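A minimal Python sketch of such independent sampling (the variable names, distributions and their parameters are hypothetical):

import numpy as np

# Each variable is drawn independently for every model variant.
rng = np.random.default_rng(42)  # non-zero seed => reproducible variants
N = 5  # number of model variants

variants = {
    "PERM_MULT": rng.lognormal(mean=0.0, sigma=0.5, size=N),
    "PORO_MULT": rng.triangular(left=0.8, mode=1.0, right=1.2, size=N),
    "KVKH":      rng.uniform(low=0.01, high=1.0, size=N),
}
for i in range(N):
    print({name: round(vals[i], 4) for name, vals in variants.items()})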

A variable's value in the defined region can be distributed as:

• Uniform (see figure 23(a));

• Log-normal (see figure 23(b)). The peak of the log-normal distribution of a variable is
located at the base value of the variable;

• Normal (see figure 23(c));

• Triangular (see figure 23(d)). The triangle peak is located at the base value of the variable;

• Discrete (see figure 24). It is required to specify a Value of variable and the Probability that
the variable has the specified value (button Add new value and probability). To normalize
the range of probabilities press the button Normalize.

Reproduction of experiment's results. Similarly to the Latin hypercube, this algorithm
uses a random seed for the generation of the distribution of variables. Therefore, when running this
experiment on different computers, or rerunning it on the same computer, different model
variants will be obtained. In order to reproduce the results of a previous experiment, set Random
seed equal to the value (different from zero) used in the previous experiment.
For model variants calculated using the Monte Carlo method, quantiles can be obtained.
Quantiles are available on the tab Results. Analysis.


3.7. Tornado
The Tornado experiment is used to build a Tornado diagram. In this experiment each variable is set
to its min and max values while the other ones keep their default values. If M parameters are varied,
then 2M + 1 variants are generated (including the base model).

Figure 27. Tornado scheme. Here M = 2.

For a Tornado experiment a Tornado Diagram can be calculated and seen on the tab
Results. Analysis.
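A minimal Python sketch of the variant generation (variable names, bounds and base values are hypothetical):

# The base variant plus, for each variable, one variant at its min and one
# at its max while all other variables keep their base values: 2*M + 1 variants.
base = {"PERM_MULT": 1.0, "PORO_MULT": 1.0}
bounds = {"PERM_MULT": (0.1, 10.0), "PORO_MULT": (0.8, 1.2)}

variants = [dict(base)]  # variant #0 is the base model
for name, (lo, hi) in bounds.items():
    for value in (lo, hi):
        variant = dict(base)
        variant[name] = value
        variants.append(variant)

print(len(variants))  # 2 * 2 + 1 = 5 variants for M = 2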

3.8. Implementation of Variable Filter

When creating an experiment, a variable filter allows using only variables that strongly affect
the model and excluding variables that weakly affect it.
The variable filter is created in the tab Analysis (see Creating a Filter for Variables).
Press the button to create a new experiment. In order to implement the created variable
filter, press the button and select the required filter as shown in figure 28.


Figure 28. Implementation of the variable filter.


4. Objective Function
To run any optimization algorithm, an objective function (a criterion of evaluation of the model's
quality) should be configured. The main task of the objective function (hereinafter OF) is
to help choose the best model variant for the given parameters. In tNavigator two objective
functions are available:

• Custom OF;

• OF of experiment.

An objective function is used for the following optimization algorithms:

• Differential Evolution;

• Simplex Method;

• Particle Swarm Optimization;

• Response Surface (Proxy models).

The following types of OF are supported in tNavigator:

• History matching, Quadratic;

• History matching, Linear;

• Forecast optimization.


4.1. Specifying the Objective Function

The custom objective function and the objective function of experiment are configured via GUI on the
tab Objective Functions (see figure 30). Furthermore, a list of predefined objective functions
is available:

• Oil Rate Mismatch;

• Water Rate Mismatch;

• Liquid Rate Mismatch;

• Gas Rate Mismatch;

• Oil Total Difference;

• Water Total Difference;

• Liquid Total Difference;

• Gas Total Difference.

Formulas for the calculation of these objective functions are presented in section 6.4.1. These
objective functions are not editable.
To create a customizable objective function, select Custom Objective Function from the drop
down menu or press the button and specify a name for the objective function. The OF
can be deleted using the button , renamed using , duplicated using and loaded from a
project using the button .
When loading an objective function from another project, tick the required functions in the list.
Loaded objective functions will appear in the list of objective functions in the tab Objective
Functions (see figure 29). If the objective function was created in another project for another
model, then the objective function transfer results in the loss of some settings due to inconsistency
of objects and time steps in the models. For example, settings for historical points at nonexistent
time steps will be skipped.
Further, the type of objective function and its terms are specified. In order to add (delete)
a term of the objective function, press the button Add term / Delete term. Several terms can be
added simultaneously using the button Add several terms. For each term of the objective function,
Objects and corresponding Parameters should be selected. The available objects are:

• Wells;

• Groups;

• Field;

• RFT/PLT;

• User graph field;


Figure 29. Movement of objective functions between projects.

• User graph well;

• User graph group.

To check only injectors, press the button Check all injectors; to check only producers – Check all
producers.
Oil total, water total, liquid total, etc. can be selected as parameters. For each parameter
a Deviation (acceptable mismatch) and a Deviation Type (relative or absolute) are specified
(see section 4.2.2).
For each object it is possible to specify or calculate its weight in an objective function.
To specify a Weight, double click on the weight value corresponding to an object. Weights of
objects can be calculated based on the historical data of the selected parameter (see section
4.2.3). Select Weight Parameter from the drop down menu and press the button Calculate.
Historical values, as well as absolute and relative deviations of the obtained results from the
historical ones, are shown in the table on the right by clicking the button on the right panel. In order
to visualize the difference between historical and calculated results, press the button .
If an objective function is properly specified, the inscription "Ok" will be displayed at the
bottom of the dialog. Values of the created objective function for different model variants can be
seen in the tab Results Table.
The aim of Optimization algorithms is to find a minimum of the objective function of
experiment. After running an optimization algorithm you cannot change the configuration of the
objective function of experiment.
A custom objective function is generally used for the analysis of results. During the analysis of
results you can change the parameters of the custom objective function and compare the custom OF


Figure 30. Configuring the custom objective function.

with the experimental one. Moreover, a custom objective function can be used to exclude historical
points from consideration.
Notice that it is possible to use a configured objective function as the objective function of
experiment for optimization algorithms. In order to do this, press the button Select objective
function as shown in figure 31.


Figure 31. A selection of objective function.

4.2. History matching objective function

The history matching objective function is a measure of the mismatch between historical and
calculated values of the parameters selected by the user. The objective function provides the
possibility to choose the best model variant for a set of parameters, for example, the
minimum sum of mismatches of oil rate, watercut and pressure. If the best variant is chosen
using only one parameter (oil rate mismatch), it may not be good enough for other parameters
(pressure or watercut mismatches). The objective function allows choosing the optimal variant
for the whole set of parameters.
The smaller the value of the objective function, the better the quality of history matching.
The value of the objective function is computed for each model variant, which helps to
choose the best variant among them.

4.2.1. Objective function for different objects

Different objects for which assisted history matching (or optimization) is performed can be
grouped in one objective function. In this case the objective function f for n objects is the
weighted sum of n terms f1, f2, ..., fn with the specified weights wi:
f = w1 · f1 + w2 · f2 + ... + wn · fn.


Thus, the objective function is one function combining at the same time different terms:
rates, pressure, watercut for wells, parameters for groups and field, RFT-pressure, etc.

4.2.2. Objective function formula

In the general case, the OF of this type is the following:

∑_obj w_obj · ( ∑_p w_p · ( ∑_{n=k}^{N} l_n · S ) ),

where:

• the summing parameter obj runs over the set of wells or well groups;

• w_obj is the weight of the well or group. It can be calculated automatically based on historical
data of the selected parameter (see section 4.2.3);

• the summing parameter p runs over the set of all selected parameters (Water, Oil, watercut and so
on);

• w_p is the parameter weight;

• n is the time step number;

• l_n is the length of time step n (n runs from the selected k to the last N);

• S is the deviation (absolute or relative).

If the Function Type is History Matching, Quadratic, then the deviation S is calculated
as:

• S = ( (value(H) − value(C)) / g )², if Deviation Type is absolute;

• S = ( (value(H) − value(C)) / (g · value(H)) )², if Deviation Type is relative,

where

• value(C) is the calculated value;

• value(H) is the historical value;

• g is the deviation value specified by the user. For example, if Deviation is set equal to 0.05
and Deviation Type is relative, then S lower than 1 means that the deviation between
historical and calculated values does not exceed 5%.

If the Function Type is History Matching, Linear, then the deviation S is calculated as:


• S = |value(H) − value(C)| / g, if Deviation Type is absolute;

• S = |value(H) − value(C)| / (g · value(H)), if Deviation Type is relative.

! If at some time step a historical rate (for oil, water or gas) is zero, then this
step is not taken into account when calculating the objective function.

! If the historical value value(H) is 0, then the denominator g · value(H) would
be 0, which is unacceptable. In this case the value of g · value(H) is
assumed to be equal to:

• for RFT – 0.001 atm (METRIC);

• for WCT and GOR – 0.0001;

• for the other parameters – 0.1 (METRIC).
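As an illustration, here is a minimal Python sketch of the per-point deviation S described above; the single zero_floor default standing in for the parameter-dependent substitutes of g · value(H) is an assumption:

def deviation(hist, calc, g, deviation_type="relative",
              function_type="quadratic", zero_floor=0.1):
    # Denominator per the formulas above; falls back to a small constant
    # when it would be zero (e.g. 0.001 for RFT, 0.0001 for WCT/GOR,
    # 0.1 for other parameters).
    denom = g if deviation_type == "absolute" else g * hist
    if denom == 0:
        denom = zero_floor
    d = (hist - calc) / denom
    return d * d if function_type == "quadratic" else abs(d)

print(deviation(hist=100.0, calc=103.0, g=0.05))  # < 1: within 5% deviation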

4.2.3. Weights automatic calculation

When configuring an objective function, object weights w_obj can be calculated automatically
based on historical data of the selected parameter (see figure 32). Historical data for the
selected parameter are taken at the last time step.
The following parameters can be selected:
• Oil Total;
• Gas Total;
• Water Total;
• Liquid Total;
• Gas Injection Total;
• Water Injection Total;
• Watercut;
• Gas-Oil Ratio.
Weights of N objects are distributed as follows:

w_obj_i = ( p_i / ∑_{i=1}^{N} p_i ) × 100%,

where

• p_i is the selected parameter value of the ith object.
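For example, with Oil Total selected as the Weight Parameter, the calculation reduces to the following minimal Python sketch (well names and totals are made up):

# Hypothetical historical oil totals p_i per well at the last time step.
oil_totals = {"W1": 120000.0, "W2": 60000.0, "W3": 20000.0}
total = sum(oil_totals.values())
weights = {well: p / total * 100.0 for well, p in oil_totals.items()}
print(weights)  # {'W1': 60.0, 'W2': 30.0, 'W3': 10.0}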


Figure 32. Well weights automatic calculation.

4.2.4. Selecting historical points for a history matching

Unreliable historical data can be excluded from history matching. Unreliable historical points
can be directly excluded from a configured custom objective function. To exclude points, follow the
steps:

• Press the button Show control points. The difference between historical and cal-
culated values in absolute or relative terms is shown as "segments" on the historical graph.
The size of a "segment" can be defined by the user when configuring the objective function. For
example, in figure 33 the Deviation Type is defined as relative and the Deviation
is equal to 0.05. Then the historical data error varies from -0.05 to 0.05.
Press the button to see historical values and absolute/relative deviation values as a table
on the right. By dragging the beginning/end of a "segment" you can modify the absolute
and relative deviations from historical values in the table. Conversely, the values of absolute
or relative deviations can be edited in the table; then the lengths of the corresponding
"segments" in the graph will be changed.


If the button is pressed then buttons and Uncheck / are available.

• Press the button Edit control points;

• Right-click on the selected point in order to exclude/include points one by one. Press and
hold Shift to select points inside a rectangle;

• Create a corresponding objective function;

• Run a new experiment (e.g., using the button ) and select the created OF as the objective
function of the experiment using the button Select objective function (see figure 31).

! It is required to exclude historical points for those parameters and objects
that will be used to calculate the objective function.

Figure 33. Edit of historical points.


4.2.5. Loading a pressure's history into a base model

Historical BHP and THP values can be set by the keyword WCONHIST (see 12.19.43) and
then loaded into the history matching project automatically. However, historical WBP and average
pressure values are loaded separately, since these data cannot be set via keywords. This can be
done via the dialog of objective function settings (use the button Load Pressure (H)).

The well data are loaded using the format "Well" "Date" "Pressure", where:

1. Well is the well name;

2. Date is the date of measurement in the format DD.MM.YYYY;

3. Pressure is the pressure value.
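A minimal hypothetical example of such a file (well names and values are made up; the columns follow the "Well" "Date" "Pressure" format above):

W1  01.02.2019  245.3
W1  01.03.2019  243.8
W2  01.02.2019  251.0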

For wells, a history can be loaded for 3 pressure types:

• BHP. This parameter is compared with the calculated BHP in the calculation of the objective
function;

• THP. This parameter is compared with the calculated THP in the calculation of the objective
function;

• WBP. This parameter is compared with the calculated WBP, WBP4, WBP5, WBP9 in the
calculation of the objective function.

Field data are loaded using the format "FIELD" "Date" "Pressure", where:

1. FIELD is the field name;

2. Date is the date of measurement in the format DD.MM.YYYY;

3. Pressure is the value of the average reservoir pressure.

The loaded average pressure is compared with the calculated average pressure in the calculation
of the objective function.


4.3. Forecast optimization objective function

The forecast optimization objective function is a unified measure of two types of calculated
values: those which need to be maximized (oil rate, for example) and those which
need to be minimized (water rate, for example). Thus, the bigger the value of the OF,
the greater the rates of the "needed" parameters and the smaller the rates of the "unwanted" ones.
OF values are calculated for each model variant, which helps to choose the best among them.
An example of an objective function is shown in figure 34.

4.3.1. Objective function for different objects

Different objects for which assisted history matching (or optimization) is performed can be
grouped in one OF. In this case the OF f for n objects is the sum of n terms f1, f2, ..., fn
with the specified weights wi.
Since the forecast optimization OF is one function, it can combine at the same time different
terms: rates maximization for wells, groups and field, NPV, etc.

4.3.2. Objective function formula

The expression for the OF of this type is given by:

∑_obj w_obj · ( ∑_p w_p · X ),

where:

• the summing parameter obj runs over the set of wells or well groups;

• w_obj is the weight of the well or group;

• the summing parameter p runs over the set of all selected parameters (Water, Oil, watercut and so on);

• w_p is the parameter weight;

• X has a different value depending on the OF parameter type:

– Maximize accumulated. X is the difference between the accumulated values
at moments t2 and t1: X = T(t2) − T(t1), t2 > t1, where t1 is the time step from
which the OF is calculated and t2 is the time step till which the OF is calculated;

– Minimize accumulated. This case is opposite to the first one: X is the difference
between the accumulated values at moments t1 and t2: X = T(t1) − T(t2), t2 > t1,
where t1 is the time step from which the OF is calculated and t2 is the time step till
which the OF is calculated;


Figure 34. Forecast optimization objective function for well.

– Constant rate duration. X is the number of days within which the specified well
(or group) rate stays constant (i.e., does not deviate from the target rate value). X is
the difference between two time moments: X = t2 − t1, t2 > t1. The well (or group)
control rate (target value) is specified automatically and is equal to the rate at the zero
time step of the forecast model. Calculated and target rates are compared with the
accuracy (Rate Accuracy) specified in the Objective Function Configuration dialog
(1% by default). If the calculated rate deviates from the target rate by less than 1%,
the well (or group) rate is considered to be constant.
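As an illustration, a minimal Python sketch of this OF for a single well with two terms, Maximize accumulated oil and Minimize accumulated water, might look as follows (all names and numbers are hypothetical):

def forecast_of(w_obj, w_oil, w_wat, oil_total, wat_total, t1, t2):
    x_oil = oil_total[t2] - oil_total[t1]   # Maximize accumulated: T(t2) - T(t1)
    x_wat = wat_total[t1] - wat_total[t2]   # Minimize accumulated: T(t1) - T(t2)
    return w_obj * (w_oil * x_oil + w_wat * x_wat)

oil = {0: 0.0, 10: 5000.0}   # accumulated totals per time step (made up)
wat = {0: 0.0, 10: 1200.0}
print(forecast_of(1.0, 1.0, 1.0, oil, wat, t1=0, t2=10))  # 3800.0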


4.4. Objective function normalization

An objective function (OF) is automatically normalized by time, by objects and by parameters.
The normalization by time/objects/parameters makes the order of the OF independent of the number
of time steps/objects/parameters in the model.
Time normalization is carried out independently for each object over all time steps where
its values are available.
Normalization by objects and by parameters is performed by dividing the OF by the sum of the
objects' and parameters' weights:

( ∑_obj w_obj ∑_p w_p S ) / ( ∑_obj w_obj ∑_p w_p ).

Due to the normalization of the objective function by the sum of the objects' and parameters'
weights, a normalized objective function value equal to one means that for all terms of the
objective function the mismatch between historical and calculated values is of the same order
as the measurement error.

4.5. Objective function based on user graphs for a field

An objective function can be based on user graphs for a field (see 6.6. Graph calculator). To
create a user graph, press the button Graph Calculator located on the top panel.
When history matching is executed (Function Type – History Matching, Quadratic (or
Linear)), historical values are, if possible, automatically calculated for user graphs. These
values are used for the calculation of the objective function.
In the drop down menu Object select, e.g., User graph well and any well (see figure 35).
In the window Parameters select the required user graph.
Furthermore, user graphs can also be used to specify a forecast objective function (Function
Type – Forecast Optimization).


Figure 35. Objective function based on the user graph ”User_graph”.

4.6. Using UDQ as an objective function

You can specify an arbitrary objective function in the *.data file of the model using the keyword UDQ
(see 12.19.165). The arbitrary objective function can, for example, include parameters not
listed in the Objective Functions tab. In order to use an arbitrary objective function for
history matching or creating a forecast, tick the box Parameter. FUOBJF (see figure 36).


Figure 36. Using a user objective function FUOBJF.

4.7. Examples of objective functions

Example 1 (figure 37). Object – Group, FIELD is selected; parameters – Oil Total, Water
Total; weight (W) – 1; deviation (g) – 0.05; deviation type – relative.
Objective function:

∑_{p=oil,water} ∑_{n=0}^{N} l_n · S = ∑_{n=0}^{N} l_n · ( ( (FOPT − FOPTH) / (0.05 · FOPTH) )² + ( (FWPT − FWPTH) / (0.05 · FWPTH) )² ),

where:
• FOPT – field oil production total;

• FOPTH – historical field oil production total;

• FWPT – field water production total;

• FWPTH – historical field water production total.


This OF can be used at the beginning of the model history matching process, when the goal is
to match total parameter values for the field. Then another OF can be used that provides tighter
history matching criteria.
Another example of tighter history matching criteria is given below in Example 2 for rates.


Figure 37. Objective function. Example 1.

Example 2 (figure 38). Object – Wells; 7 producers in the model; parameters – Oil Rate,
Water Rate; weight (W) – 1; deviation (g) – 1; deviation type – relative; K = 0 – summation over
time steps starting from the zero time step.
Objective function:

∑_{j=1}^{7} 1 · ( ∑_{p=oil,water} ∑_{n=0}^{N} l_n · S ) = ∑_{j=1}^{7} ∑_{n=0}^{N} ( ( (WOPR − WOPRH) / (1 · WOPRH) )² + ( (WWPR − WWPRH) / (1 · WWPRH) )² ) · l_n,

where:

• WOPR – well oil production rate;

• WWPR – well water production rate;

• WOPRH – historical well oil production rate;

• WWPRH – historical well water production rate.

The summation is over all 7 wells and all N time steps.


Figure 38. Objective function. Example 2.


5. Optimization Algorithms
This section contains a description of the algorithms for Assisted History Matching and Uncertainty
Analysis. The following optimization algorithms are available:

• Response Surface (Proxy models);

• Differential Evolution;

• Simplex method;

• Particle Swarm Optimization algorithm.

To run any algorithm, it is required to define variables and an objective function in advance.
Moreover, before running an optimization algorithm it is recommended to carry out a sensitivity
test of the variables using the abovementioned experiments.
Termination criteria of the algorithms are described in section 5.2.

Figure 15 shows the dialog that prompts the user to create an experiment. To make the creation
more convenient, the following buttons are available in the dialog:

• Modify Ranges for Selected Variables. Allows modifying the range of variation of the
selected variable;

• Unselect all variables;

• Select variable by filter. Allows including into the experiment only variables selected
by the filter (see Implementation of Variable Filter);

• Hide unused variables. Unticked variables will be hidden in the dialog Create New
Experiment;

• Unhide unused variables.


The detailed description of AHM features is given in the training tutorials:

• AHM1.1. AHM Theoretical Course;

• AHM1.2. How To Use Assisted History Matching;

• AHM1.3. How To Use RFT in History Matching;

• AHM1.4. How to find the best Well trajectory;

• AHM1.5. Corey correlation for RP in Assisted History Match-


ing;

• AHM1.6. How To Use AHM for Hydraulic Fracture;

• AHM1.7. How to run different geological realizations;

• AHM1.8. How to go from AHM to Forecast;

• AHM1.9. How To Analyze Uncertainty.


5.1. Creating New Experiment From Selected Variants

There is a possibility to start an algorithm with a population selected by the user. You need to
calculate several model variants, then go to the Results tab, mark some variants and select
Create New Experiment From Selected Variants in the context menu. When creating a new
experiment from selected variants, only optimization algorithms (Response Surface (Proxy
models), Differential Evolution, etc.) are available.
Initially selected variants won't be added to the list of variants of the new experiment, i.e.
in the new experiment the variant with number 0 is the first variant belonging to the array
of variants created by the algorithm. In particular, initially selected variants are not counted towards
the Maximal number of simulator launches. However, initially selected variants are used in the
new experiment as base variants, i.e. they are included into the initial population of the used
algorithm.
Initially selected variants are not recalculated. In the new experiment the already calculated results
of the initially selected variants are used. Therefore, any modifications of values of variables do
not affect the initially selected variants and the results of their calculations. All modifications will be
taken into account when creating new variants.
When creating a new experiment from selected variants, the variable settings allow you to:

• include/exclude variables;

• vary base values;

• vary the search area (in order to cover all initial values).

If the values of a variable differ for at least two selected variants, the variable is considered
"important" and marked with orange color (see figure 39). "Important" variables are used
in the experiment by default (i.e. they are ticked). Variables having the same value for all
selected variants are considered "unimportant" and are not ticked by default. When
configuring a new experiment you can include/exclude variables. However, even when unticked,
initially "important" variables stay marked with orange color until the end of the configuring process.
The variable value taken from the first initially selected variant is indicated as the variable's
base value (Base). If required, the base value can be changed.
The maximum and minimum of a variable used in the initial experiment are set as Min. and
Max. by default. If the selected variants are taken from different experiments, then the minimal
Min. value and the maximal Max. value over all experiments are set as Min. and Max. for the
variable. Min. and Max. values can be changed. At the same time, the new Min. and Max.
values should be specified such that all variable values from the selected variants
are included in the new range. In order to see the variable variation range, you can hover
over the line containing the variable (see figure 39). A tip shows a variable name and its value
(in case the value is the same for all variants) or its variation range (from its minimum
to its maximum). Notice that all abovementioned modifications do not affect variables or
calculations of initially selected variants; they are used to create variants of the new experiment.
For example, the variation range of the variable M_PERM_FIPNUM_4 shown in figure 39
is from 0.25215 to 2.17595 over all selected variants. By default Min. and Max. of the variable


are 0.1 and 10, respectively, and are taken from the initial experiment. If it is required to decrease
the variable range, the possible Min. and Max. values are 0.25 and 2.176 (see figure 40).
Notice that the Min. value cannot be higher than 0.25215 (e.g., 0.26), since then the variable value
0.25215 would be outside the new range. Similarly, Max. cannot be lower than 2.17595 (e.g.,
2.17); otherwise the variable value 2.17595 would be outside the new range.

Figure 39. Setting variables when creating a new experiment from selected variants.


Figure 40. Variations of variable Min. and Max. values when creating a new experiment from
selected variants.

5.2. Termination criteria of algorithms

Four termination criteria of algorithms are supported in tNavigator. One is absolute, the others are
relative:

• Objective Value to Reach;

• Objective Function Variation;

• Variables Variation;

• Stop on slow improvement.

An optimization algorithm will be stopped if at least one of the conditions described in
this section is satisfied. The first criterion is absolute (a target OF value to reach). The algorithm
will be stopped if there is a model variant for which the OF value is less than the target value.
The other criteria are relative.
General idea of the relative stopping criteria. During the history matching process, a
situation is possible where the generated model variants become more and more similar to
each other. Thus, the OF values for them become very close (the OF variation becomes small).
This behavior can mean that a model variant that is good enough (for this set of pa-
rameters) has already been found, and the algorithm should be stopped. In this case the relative
stopping criteria can be used. A verification is performed to check whether the range of OF values
from minimum to maximum is wide (Objective Function Variation), and whether the range of
variable values from minimum to maximum is wide (Variables Variation).


If the range is sufficiently wide, then the algorithm continues its work; otherwise it will be stopped.

The optimization algorithm will be stopped if at least one of the four conditions described
below is satisfied.
Objective Value to Reach
Define the target value of the OF. The algorithm will be terminated if there is a model variant
for which the OF value is less than the target value. The default value is zero.
Objective Function Variation
Define the value of variation of the objective function (in percent). The algorithm will be
terminated if the deviation of the OF from the average value becomes less than the defined value.
Variables Variation
Define the variables' variation value (in percent). The algorithm will be terminated if the deviation
of each variable from its average characteristic becomes smaller than the defined value.
Stop on slow improvement
Define the number of iterations (Iteration Count) and the value of OF improvement
(Improvement Value) in percent. The algorithm will be terminated if the objective function value
is not improved by the specified percentage after the selected number of iterations.

Clarification.
The range of OF values in a population is the difference between the maximal and minimal values.
The average characteristic is the average of the maximal and minimal values in the population.
The notion of "population" (set of model variants) for the stopping criterion check differs depend-
ing on the type of optimization algorithm.

• Differential Evolution – the size of the population is predefined by the algorithm; it can be
changed manually using Advanced settings (default value 6).

• Simplex Method – the population is the set of N + 1 model variants (the current vertices of
the simplex).

• Particle Swarm Optimization (classic) – the size of the population is predefined by the al-
gorithm; it can be changed manually using Advanced settings (parameter swarm size,
default – 14).

• Particle Swarm Optimization (flexi) – the size of the population is equal to the product of
the swarm size and the difference between one and the proportion of explorers; it can be changed
manually using Advanced settings (default: population size is 10 · (1 − 0.5) = 5).

• Response Surface – the two last calculated variants are taken to check the algorithm stop-
ping criterion.


5.3. Response Surface (Proxy models)

The Response Surface method is an approximation-based optimization algorithm for minimiz-
ing an objective function. A maximal number of iterations is defined for this algorithm. One
iteration requires one simulator launch.
An objective function can be set in combination with this algorithm.
On each iteration the algorithm builds a quadratic polynomial approximation of the objec-
tive function. First, the Pearson correlation for each monomial is calculated. Monomials with the
least correlation values remain unused. Then the coefficients of the polynomial are chosen by the
method of least squares. Finally, the minimum of the current polynomial is calculated, and this
point is set as the next point of the algorithm.
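The following minimal Python sketch illustrates one such iteration for two variables (all points are made up; the Pearson-correlation screening of monomials is omitted, and the polynomial minimum is found on a grid rather than analytically):

import numpy as np

# Already evaluated points and their objective function values (made up).
X = np.array([[0.2, 0.9], [1.0, 1.0], [5.0, 1.1], [8.0, 0.8], [3.0, 1.2]])
y = np.array([4.1, 2.0, 1.3, 3.5, 1.1])

def features(x1, x2):
    # Monomials of a full quadratic polynomial in two variables.
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

# Least-squares fit of the polynomial coefficients.
A = features(X[:, 0], X[:, 1])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the fitted surface and take its minimizer as the next point.
g1, g2 = np.meshgrid(np.linspace(0.1, 10, 100), np.linspace(0.8, 1.2, 100))
surface = features(g1, g2) @ coef
i, j = np.unravel_index(np.argmin(surface), surface.shape)
print("next point:", g1[i, j], g2[i, j])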

The detailed theoretical description of Proxy models construction is available in the section
Proxy model.


5.4. Differential Evolution

5.4.1. Brief description of the algorithm

Differential Evolution (DE) is a stochastic optimization algorithm aimed at minimizing an ob-
jective function in a given search space. A maximal number of iterations is defined for the
algorithm. One simulator launch is performed during one iteration. An objective function can be
set in combination with this algorithm.
The algorithm operates with a set of vectors from the search space. This set is called the
population. The larger the population, the better the algorithm will "feel" the objective function.
During the first iterations the algorithm fills the initial population with random vectors from the
search space. The vector corresponding to the base model is always included in the population.
Once the initial population is filled, DE generates a sample vector at every iteration and calculates
its objective function by launching the simulator. The sample vector is generated by random mixing
of the target vector and the mutant vector. Every component of the sample vector is either a
component of the target vector or of the mutant vector. The Cr parameter represents the probability
of every component of the target vector to be replaced by the component of the mutant vector.
Selection of the target vector from the population can be customized.

Vsample = Mixing_Cr(Vtarget, Vmutant)

The mutant vector is composed as the sum of the base vector and a few differences of random
vectors from the population, multiplied by the F parameter.
Selection of the base vector from the population and the number of differences can also be cus-
tomized.
If the number of differences is 2, the formula for calculating the mutant vector is:

Vmutant = Vbase + F · (Vrandom1 − Vrandom2) + F · (Vrandom3 − Vrandom4)

Once the objective function of the sample vector is calculated, it is compared with the objec-
tive function of the target vector:

fobjective(Vsample) ? fobjective(Vtarget)

If the sample vector provides a better objective function value, it replaces the target vector in the
population. DE then switches to the next iteration, until the number of iterations exceeds the
maximum defined by the user.
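The following minimal Python sketch illustrates one DE iteration of this kind (population size, coefficients and the objective function are made up; in the actual algorithm the objective evaluation is a simulator launch):

import numpy as np

rng = np.random.default_rng(0)
F, Cr = 0.7, 0.9                      # weight of differences, crossover
pop = rng.uniform(0.0, 1.0, (6, 3))   # population: 6 vectors, 3 variables

def objective(v):                      # hypothetical objective function
    return np.sum((v - 0.3) ** 2)

target_idx = rng.integers(len(pop))
base, r1, r2, r3, r4 = pop[rng.choice(len(pop), 5, replace=False)]
mutant = base + F * (r1 - r2) + F * (r3 - r4)

# Mixing: each component comes from the mutant with probability Cr;
# one random component is always taken from the mutant.
mask = rng.random(3) < Cr
mask[rng.integers(3)] = True
sample = np.where(mask, mutant, pop[target_idx])

if objective(sample) < objective(pop[target_idx]):
    pop[target_idx] = sample          # sample replaces the target vector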

5.4.2. More about parameters

This section includes a detailed description of the algorithm parameters and their influence on its
work. For further explanation the following notation will be used.


Notation.
Niter — maximal number of simulator launches (number of iterations)
Vsample — sample vector (calculated on an iteration)
Vtarget — target vector (used in the creation of the sample vector; will be replaced by it if the
sample vector is better)
Vbase — base vector (used in the creation of the mutant vector)
Vmutant — mutant vector (used in the creation of the sample vector)
Vrandom1, Vrandom2, . . . — random vectors (different from the target and base vectors, used
in the creation of the mutant vector)
Np — population size (number of vectors from which the target, base and random vectors
are selected)
F — weight of differences (used in the creation of the mutant vector)
Cr — crossover (component replacement probability, used in the creation of the sample vector)
N_diff — number of differences (number of random vector pairs used in the creation of the
mutant vector)
Random_seed — random number (determines the initial state of the pseudorandom number
generator)
N_sim_calc — number of simultaneously calculated models (for parallel run)

Parameters connection.
The algorithm parameters are connected by two main formulae used in the creation of the sample
vector at each iteration. They have been mentioned in section 5.4.1; here they are rewritten using
the notation:

Vmutant = Vbase + F · (Vrandom1 − Vrandom2) + F · (Vrandom3 − Vrandom4) + . . .

Vsample = Mixing_Cr(Vtarget, Vmutant)

Parameters domain.
Niter — not less than Np + 1
Np — not less than 2 + 2 · N_diff
Vtarget — Worst (the point with the worst objective function value is selected from the
population) or Random (a random point is selected from the population)
Vbase — Best (the point with the best objective function value is selected from the
population) or Random (a random point is selected from the population)
F — on the interval (0, 1)
Cr — on the interval (0, 1)

The N_diff, Random_seed and N_sim_calc parameters are set as positive integers. If
Random_seed = 0 is used, this parameter will be generated automatically.
Parameters influence on the algorithm's work.
• Niter
The iteration limit. The parameter determines the maximal number of variants, after
which the algorithm will stop. Note that this parameter means the same (the maximal num-
ber of variants) when using parallel run (see below); the meaning does not change to the
number of variants multiplied by the parallel run parameter.

• Np
The number of points which are used in the creation of a new point at each iteration. At the
beginning of the algorithm's work the initial population is scattered through the search
space.
Increasing the Np parameter gives the algorithm a better "sense" of the objective function but
also increases the inertness of the population. Thus the probability of finding the global minimum
will increase, but the rate of convergence to a local minimum will slow down. The connection
between the rate of convergence and the population size is ambiguous.

• F
Weight of the differences vector. The parameter determines the deviation of the mutant vector
from the base vector. With a small value of F a premature convergence may appear.
Small values of F lead to search space localization near the current population points,
which suits the local minimization problem. Large values of F make it possible
to examine the search space far beyond the current population bounds but decrease the rate of
convergence; such values correspond to the global minimization problem. However, too
small or too large values of F do not provide good results in either case.
Note also that the F parameter changes while the algorithm is working.

• Cr
Crossover. The parameter determines the probability used in the creation of the sample
vector, when components of the target vector are replaced by components of the mutant vector
or stay the same. For each sample vector, one randomly chosen component is always taken
from the mutant vector. The other components are taken from the mutant vector with probability Cr.
Thus, the larger the Cr value, the more components of the target vector will be replaced. Small
values of Cr suit separable problems, large values suit nonseparable problems.
History matching problems are generally nonseparable. Note, however, that overly large
values of Cr do not provide good results.


• N_sim_calc
Parameter for parallel run. The parameter determines the number of variants which will be
calculated simultaneously.
The parallel version of DE is asynchronous, and parallel run enables the algorithm to examine
the properties of the current population more deeply; the scatter band of generated sample points
becomes wider. With an increasing number of iterations, this turns the local search version, to an
extent, towards global search.
It may be good to choose random selection of the target and base vectors when using the ad-
vanced version with parallel run.
It is recommended to set N_sim_calc = Np if the corresponding computational power is
available. A speedup and/or improvement of the quality of convergence may be obtained by
using N_sim_calc ≤ 2 · Np.
There are two ways of handling Niter when using parallel run. The first way is to in-
crease the number of iterations (this is essential if the initial Niter is small) to obtain a quality
improvement within the same calculation time as in the sequential version (increasing
Niter within the bound of Niter · N_sim_calc). The second way is to leave the number of
iterations unaltered (e.g. if the initial Niter is large enough) to decrease the run time of the
algorithm just by parallelism.
On the whole, when using parallel run, increasing the number of iterations is desirable (possibly
by less than a factor of N_sim_calc).

5.4.3. Algorithm versions

There are two predefined algorithm versions: for local and global search.

• Local search version.

The version is aimed at fast convergence to a local minimum. It does not provide the
possibility to analyse the sensitivity of the objective function in the search space or to search
for the global minimum.
Recommended values of Niter (for sequential run): 30–60.
Recommended values of N_sim_calc: 6 (acceptable values ≤ 12). A corresponding in-
crease of Niter is desirable.

• Global search version.

The version is for global minimum search. It requires a large number of iterations but
makes it possible to find the most qualitative points in the search space.
Recommended values of Niter: more than 200.
Recommended values of N_sim_calc: 6, 12 (acceptable values ≤ 24). A corresponding
increase of Niter is desirable.

• Advanced version.


The version provides the user with the ability to change the algorithm parameters.


5.5. Simplex method

5.5.1. Definitions and brief algorithm description
The Nelder-Mead algorithm [1] (or simplex method) is designed to solve the classical uncon-
strained optimization problem of minimizing a given nonlinear function of n variables. The
method

• uses only function values at some points;

• does not try to form an approximate gradient at any of these points.

Hence it belongs to the general class of direct search methods. An objective function can be
set in combination with this algorithm.
The Nelder-Mead method is simplex-based. A simplex S ⊂ R^n is the convex hull of n + 1
vertices x0, x1, ..., xn ∈ R^n. For example, a simplex in R^2 is a triangle, and a simplex in R^3 is
a tetrahedron.
A simplex-based direct search method begins with a set of points x0, ..., xn ∈ R^n that are con-
sidered as the vertices of a working simplex S, and the corresponding set of values of the function
f at the vertices, fi = f(xi), i = 0, ..., n. The initial working simplex S has to be nondegenerate,
i.e., the simplex points must not lie in the same hyperplane.
The method then performs a sequence of transformations of the working simplex S, aimed
at decreasing the function values at its vertices. At each step, the transformation is determined
by computing one or more "test" points, together with their function values, and by comparison
of these function values with those at the vertices.
This process is terminated when the working simplex S becomes sufficiently small in
some sense, or when the function values fi are close enough in some sense (provided f is
continuous).
The Nelder-Mead algorithm typically requires only one or two function evaluations at each
step, while many other direct search methods use n or even more function evaluations.

5.5.2. Algorithm
Initial simplex.
The initial simplex S is usually constructed by generating n + 1 vertices around a given
input point xin ∈ R^n. In practice, the most frequent choice is x0 = xin, to allow proper restarts
of the algorithm. The remaining n vertices are then generated to obtain one of two standard
shapes of S:

• S is right-angled at x0, based on the coordinate axes, i.e.

xi = x0 + hi · ei, i = 1, ..., n,

where hi is a stepsize in the direction of the unit vector ei ∈ R^n.

• S is a regular simplex, where all edges have the same specified length.

Simplex transformation algorithm.

One iteration of the Nelder-Mead method consists of the following three steps:

1. Ordering. Determine the worst vertex xh, the second worst vertex xs and the best vertex xl
in the current working simplex S:

fh = max_i fi,   fs = max_{i≠h} fi,   fl = min_i fi.

In some implementations, the vertices of S are ordered with respect to the function
values, to satisfy f0 ≤ ... ≤ fn. Then l = 0, s = n − 1, h = n.

2. Centroid. Calculate the centroid c of the best side – the one opposite the worst
vertex:

c = (1/n) · ∑_{i≠h} xi.

3. Transformation. Compute the new working simplex from the current one. First, try to
replace only the worst vertex xh with a better point by using reflection, expansion or
contraction with respect to the best side. All test points lie on the line defined by xh and
c, and at most two of them are computed in one iteration. If this succeeds, the accepted
point becomes the new vertex of the working simplex. If this fails, shrink the simplex
towards the best vertex xl. In this case n new vertices are computed.
Simplex transformations in the Nelder-Mead method are controlled by four parameters:
α for reflection, β for contraction, γ for expansion and δ for shrinkage. They should
satisfy the following constraints:

α > 0,
0 < β < 1,
γ > 1, γ > α,
0 < δ < 1.

The standard values, used in most implementations, are

α = 1,   β = 1/2,   γ = 2,   δ = 1/2.

The effects of various transformations are shown in the corresponding figures. The new
working simplex is shown in green.


Figure 41. Reflection.

• Reflect. Compute the reflection point xr:

xr = c + α · (c − xh),

and fr = f(xr). If fl ≤ fr < fs, accept xr and terminate the iteration.

• Expand. If fr < fl, compute the expansion point xe = c + γ · (xr − c) and fe = f(xe).
If fe < fr, accept xe; otherwise, accept xr. The iteration is terminated in
both cases.

Figure 42. Expansion.

• Contract. If fr ≥ fs, compute the contraction point xc by using the better of the
two points xh and xr.
– Outside. If fs ≤ fr < fh, then xc = c + β · (xr − c); fc is the value of f at the point xc.
If fc ≤ fr, accept xc and terminate the iteration. Otherwise, perform a shrink
transformation.
– Inside. If fr ≥ fh, then xc = c + β · (xh − c); fc is the value of f at the point xc. If
fc < fh, accept xc and terminate the iteration. Otherwise, perform a shrink
transformation.


Figure 43. Contraction ”Outside”.

Figure 44. Contraction ”Inside”.

• Shrink. Compute n new vertices xi = xl + δ · (xi − xl), fi = f(xi), for i = 0, ..., n, i ≠ l.

Figure 45. Shrinkage.

The shrink transformation was introduced to prevent the algorithm from failing.
A failed contraction can occur when a valley is curved and one point of the simplex
is much farther from the valley bottom than the others; contraction may then cause
the reflected point to move away from the valley bottom instead of towards it.
Further contractions are then useless. The proposed action contracts the simplex
towards the lowest point and will eventually bring all points into the valley.
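The following minimal Python sketch illustrates one Nelder-Mead iteration with the standard parameter values (the objective function and the initial simplex are made up, and the contraction acceptance test is condensed into one condition):

import numpy as np

alpha, beta, gamma, delta = 1.0, 0.5, 2.0, 0.5

def f(x):
    return np.sum((x - 0.3) ** 2)  # hypothetical objective

simplex = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # n + 1 vertices
fvals = np.array([f(x) for x in simplex])

order = np.argsort(fvals)                    # ordering: best ... worst
l, s, h = order[0], order[-2], order[-1]
c = simplex[np.arange(3) != h].mean(axis=0)  # centroid of the best side

xr = c + alpha * (c - simplex[h])            # reflection
fr = f(xr)
if fvals[l] <= fr < fvals[s]:
    simplex[h], fvals[h] = xr, fr
elif fr < fvals[l]:                          # expansion
    xe = c + gamma * (xr - c)
    simplex[h], fvals[h] = (xe, f(xe)) if f(xe) < fr else (xr, fr)
else:                                        # contraction (outside/inside)
    xc = c + beta * ((xr if fr < fvals[h] else simplex[h]) - c)
    if f(xc) < min(fr, fvals[h]):
        simplex[h], fvals[h] = xc, f(xc)
    else:                                    # shrink towards the best vertex
        simplex = simplex[l] + delta * (simplex - simplex[l])
        fvals = np.array([f(x) for x in simplex])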

5.5.3. Termination tests

A practical implementation of the Nelder-Mead method must include a test that ensures ter-
mination in a finite amount of time. The termination test is often composed of three different
parts: term_x, term_f, and fail.

• term_x is the domain convergence or termination test. It becomes true when the working
simplex S is sufficiently small in some sense (some or all vertices xi are close enough).

• term_f is the function-value convergence test. It becomes true when (some or all) func-
tion values fi are close enough in some sense.

• fail is the no-convergence test. It becomes true if the number of iterations or function
evaluations exceeds some prescribed maximum allowed value.

The algorithm terminates as soon as at least one of these tests becomes true.
If the algorithm is expected to work for discontinuous functions f, then it must have
some form of a term_x test. This test is also useful for continuous functions, when a reasonably
accurate minimizing point is required in addition to the minimal function value. In such cases,
a term_f test is only a safeguard for "flat" functions.


5.6. Particle Swarm Optimization algorithm

5.6.1. Brief algorithm description
Particle Swarm Optimization (PSO) is a stochastic optimization algorithm purposed to
minimize an objective function in a search space. The algorithm was developed to model social
behavior; later it was noted that the algorithm is useful for optimization tasks.
A maximal number of iterations is set for this algorithm. The simulator runs once per
iteration. An objective function can be set in combination with this algorithm.
The algorithm works with a set of particles, which is called a swarm. Each particle is described
by its position in space and a velocity vector. Moreover, each particle remembers its best local
position. The swarm remembers the best global position. The larger the swarm size, the better the
search space will be explored.
The algorithm fills the swarm with particles which have random positions and velocity vectors.
The position of the base model is always in the swarm. After the calculation of the objective
function values at the space points (i.e. particle positions), the global and local best positions are
updated. Each particle's velocity vector is updated with the new data, and the particle moves along
the calculated vector.
That is, the particle swarm explores the search space, optimizing the objective function.
The main part of the algorithm is the formula of velocity vector modification. Its form
depends on the algorithm version. Two versions of the algorithm are supported: the classical method
of Particle Swarm Optimization and FlexiPSO (based on Muhammad Kathrada's paper [2]).

5.6.2. Particle Swarm Optimization algorithm in general

Take a look at the PSO algorithm in general:

1. Initialization of particle swarm components: positions in the search space, velocity vec-
tors and best local positions. Initialization of the best global position of the swarm.
A certain number of particles are generated; one of them corresponds to the base model.

2. Objective function calculation.
Calculation of the objective function values at the space points (particle positions). The
calculations can be parallelized.

3. Refreshing of the global and local best positions and other parameters of the algorithm.

4. Velocity vector modification.
The main part of the algorithm. The way the velocity vectors are modified depends on the
algorithm version. The formulae of velocity modification are given below.

5. Particle shifting along their velocity vectors.
Refreshing of the particle positions. Particles move along their new velocity vectors.


6. Boundary conditions processing.
A particle can leave the search space after moving along its velocity vector. To prevent this
situation, a method of particle reflection from the boundary of the search space is implemented.
The reflection type is set by the corresponding parameter.

7. Checking the algorithm termination conditions.
There are several conditions of algorithm termination:

• the objective function value reaches a certain value;

• the maximal number of iterations is achieved;

• the variation of particle positions falls below some small value.

If any of these conditions is true, then the algorithm is terminated. Otherwise the process is
continued from step 2.

5.6.3. Velocity update formula

In this section several formulae of particle velocity update are viewed in detail.
Formula of velocity update in the classical version of PSO.
In this version of the algorithm the formula has the following form:

V̂ = w · V + r1 · nostalgia · (PBest − X) + r2 · sociality · (GBest − X),


where

V̂ — updated velocity vector of the particle;

V — old velocity vector of the particle;

X — vector of search space which describes particle’s position;

PBest — vector of search space which describes best local particle’s position;

GBest — vector of search space which describes best global particle’s position;

w — inertia coefficient;

r1, r2 — random numbers from the interval [0, 1];

nostalgia — nostalgia coefficient (external parameter);

sociality — sociality coefficient (external parameter).
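A minimal Python sketch of this update for a single particle (all coefficients and vectors are made up) might look as follows:

import numpy as np

rng = np.random.default_rng(7)
w, nostalgia, sociality = 0.7, 1.5, 1.5   # hypothetical coefficients

X = np.array([0.4, 0.9])        # current position
V = np.array([0.05, -0.02])     # current velocity
p_best = np.array([0.35, 0.95]) # best local position of this particle
g_best = np.array([0.30, 1.00]) # best global position of the swarm

r1, r2 = rng.random(), rng.random()
V = w * V + r1 * nostalgia * (p_best - X) + r2 * sociality * (g_best - X)
X = X + V                       # the particle moves along the updated vector
print(X, V)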


Velocity update formula of the FlexiPSO version.

In this version of the algorithm, which was developed by Muhammad Kathrada [2], the formula is
more complex. In comparison with the canonical version, the number of parameters is increased.
Moreover, three types of particle behavior are introduced: standard, egoistic and highly social.
For each particle its behavior type is determined at each step independently of the other
particles. The frequencies with which the last two types are assigned are determined by the
parameters egoism_rate and comm_rate. The formula has the following expression:


V̂ = w · V + r1 · nostalgia · (PBest − X) + r2 · sociality · (GBest − X)
     + r3 · neighborliness · (LBest − X),   in case of standard behavior;

V̂ = w · V + r1 · nostalgia · (PBest − X),   in case of egoistic behavior;

V̂ = w · V + r2 · sociality · (GBest − X),   in case of highly social behavior,

where

V̂ — updated velocity vector of the particle;

V — old velocity vector of the particle;

X — vector of search space which describes particle’s position;

PBest — vector of search space which describes best local particle’s position;

GBest — vector of search space which describes best global particle’s position;

LBest — vector of search space which describes best position among Nneighbor neighbor
particles;

w — inertia coefficient;

r1, r2, r3 — random numbers from the interval [0, 1];

nostalgia — nostalgia coefficient (external parameter);

sociality — sociality coefficient (external parameter);

neighborliness — neighborliness coefficient (external parameter).

Moreover, a special set of swarm particles is separated; they are called "researchers". Their
velocity vector is updated upon approaching the best global position.


5.6.4. Parameters influence on algorithm working


• Niter
Limit on the number of iterations. This parameter determines the maximal number of
variants after which the algorithm stops. Note that this parameter keeps the same meaning
(the maximal number of variants) when a parallel run is used (see below); it does not
change to the number of variants multiplied by the parallel run parameter.

• Ns
The number of particles in the swarm. At the start of the algorithm, the swarm particles
are randomly scattered through the search space.
Increasing Ns increases the probability of finding the global minimum, but slows down
the rate of convergence to a local minimum. The connection between the rate of
convergence and the population size is ambiguous.

• N_sim_calc
Parameter for parallel runs. This parameter determines the number of variants which are
calculated simultaneously.
It does not affect the algorithm's “sensitivity”, but allows getting results faster and
calculating more variants in the same time.

• wstart, wfinish
Initial and final values of the inertia coefficient. It is recommended to maintain the
condition 0 ≤ wfinish ≤ wstart ≤ 1.
This allows particles to explore the search space carefully at initial iterations and to
converge faster at final ones.

• nostalgia
Nostalgia of the swarm particles.
This parameter controls a particle's attraction to its best local position.
Increasing it leads to a more careful exploration of the search space, but slows down the
rate of convergence to a local minimum.

• sociality
Sociality of the swarm particles.
This parameter controls a particle's attraction to the best global position of the swarm.
Increasing it increases the convergence rate, but leads to a less careful exploration of the
search space and may cause the algorithm to stop at a local minimum.

• damping_factor
Elasticity factor of collisions with boundaries. This parameter characterizes the particles'
behavior near the search space boundary.
The algorithm implements a method of particle reflection from the boundary of the
search space. It is applied when a particle tries to leave the search space. In this case the
particle's elastic

bump against the boundary and its reflection are emulated. After the bump the particle's
velocity decreases; the elasticity factor characterizes this decrease.
In other words, if the elasticity factor is 1, the bump is perfectly elastic, and the
particles bounce off the wall with the same velocity as before. If the elasticity factor is 0,
the particles set their velocities to 0 and stick to the boundary.
This parameter is important for exploring the boundary region. It is not recommended to
set it to the minimal or maximal value (0 or 1), because then particles either stick to the
boundary or are unable to settle near it.
This parameter should be from the interval [0, 1].
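A minimal sketch of such a damped reflection, assuming an axis-aligned box [lower, upper] and per-coordinate reflection (the function name and the box representation are ours):

def reflect(x, v, lower, upper, damping_factor):
    # Mirror the particle back inside the box and damp its velocity on each
    # coordinate where it crossed a boundary.
    for i in range(len(x)):
        if x[i] < lower[i]:
            x[i] = lower[i] + (lower[i] - x[i])
            v[i] = -damping_factor * v[i]
        elif x[i] > upper[i]:
            x[i] = upper[i] - (x[i] - upper[i])
            v[i] = -damping_factor * v[i]
    return x, v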
• Nneighbor
The number of a particle's neighbors. This parameter is used only in the FlexiPSO version.
It allows using not only the best local and global positions, but also the best neighbor
positions, which allows exploring the search space more carefully. The recommended value is
25% of the swarm size. In any case, the condition Nneighbor ≤ Ns must hold.
• neighborliness
Neighborliness factor of the swarm particles. This parameter is used only in the FlexiPSO version.
It characterizes the attraction of particles to the best position among their neighbors;
in a sense, this parameter is intermediate between nostalgia and sociality.
• explorer_rate
Fraction of particles with the special behavior type “explorer”. This parameter is used only
in the FlexiPSO version.
It sets the fraction of particles which make “wider steps” in the search space.
Increasing this parameter makes the exploration broader, but less detailed.
The parameter should belong to the interval [0, 1]. The recommended value is 0.5.
• egoism_rate
Fraction of egoism in the particles' behavior. This parameter is used only in the FlexiPSO
version.
It sets the frequency of cases in which the special behavior type “egoistic” is turned on.
The parameter should belong to the interval [0, 1]. The recommended value is 0.1.
• comm_rate
Fraction of collectivity in the particles' behavior. This parameter is used only in the FlexiPSO
version.
It sets the frequency of cases in which the special behavior type “highly social” is turned on.
The parameter should belong to the interval [0, 1]. The recommended value is 0.6.
Moreover, there is a connection between the parameters egoism_rate and comm_rate.
Egoistic behavior and highly social behavior exclude each other, i.e. egoism_rate +
comm_rate ≤ 1.


• rel_crit_dist
Relative critical distance to which particles-“explorers” may approach the best global
swarm position. This parameter is used only in the FlexiPSO version. The parameter varies
in the interval [0, 1].
Particles-“explorers” search for new best global swarm positions, but they should
not approach too close to the current best global position GBest. For each particle-
“explorer” position X and each variable i, the condition |X(i) − GBest(i)| > (imax −
imin) · rel_crit_dist should hold, where imax is the maximum of variable i and imin
is its minimum. If a particle comes too close to the best global position,
it bounces off. The parameter rel_crit_dist formalizes the concept of “too close”.
Thus a particle-“explorer” should stay at a distance of at least (imax − imin) · rel_crit_dist
from the current best global position GBest in each coordinate.
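
This condition can be sketched as a simple check (a sketch; the function name and the bounds representation are ours):

def far_enough(x, g_best, bounds, rel_crit_dist):
    # bounds[i] = (i_min, i_max) for variable i; returns True if the explorer
    # keeps the required distance from GBest in every coordinate.
    return all(abs(x[i] - g_best[i]) > (hi - lo) * rel_crit_dist
               for i, (lo, hi) in enumerate(bounds))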


5.7. Multi-objective Particle Swarm Optimization algorithm


This approach is an extension of the standard Particle Swarm Optimization algorithm. In the
single-objective algorithm, the general objective function, which is a sum of separate terms based
on different parameters (rate mismatches, etc.) and different objects (wells, groups, etc.), is
minimized (see section 4.2.2).
In the multi-objective approach there are multiple objectives (objective functions), each of
which can be a separate term or a sum of terms. It is supposed that there are multiple solutions
providing an optimum balance between different objectives while preserving the diversity of
these solutions.
The multi-objective Particle Swarm Optimization method uses the crowding distance approach
and the mutation operator to provide the diversity of solutions [7]. Advantages of this method
include providing a large variety of solutions and finding all possible variants of well-fitting
model solutions that have similar match quality. All of this decreases the probability of
the algorithm sliding towards a local minimum and provides a more reliable forecast.

5.7.1. Brief description of algorithm


In contrast to single-objective optimization, which searches with respect to one objective, the main
problem of multi-objective optimization is the presence of conflicting objectives (objective
functions). In such a case, improvements in one objective may cause deterioration in another.
However, there are trade-offs between such conflicting objectives, and the task is to find
solutions balancing these trade-offs. A balance is achieved when a solution cannot improve any
objective without degrading one or more of the other objectives. Such solutions exist and are
called non-dominated. A set of non-dominated solutions forms the Pareto front (see figure 46).
The multi-objective optimization algorithm is based on the construction of the Pareto
front and the search for the leader solution in the front using the crowding distance approach,
which maintains the diversity of solutions in the Pareto front.
The crowding distance is calculated as follows [7]. For each objective function, the set
of particles (model variants) is sorted in descending order of objective function values. The
crowding distance of a particle with respect to one objective is the average of the distances
between this particle and its two neighboring particles in the objective function space. The
total crowding distance of a particle is the sum of these distances over all objective functions.
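
A minimal sketch of this calculation (our own illustration; treating the boundary particles as maximally spread, via infinity, is a common convention and an assumption here):

import numpy as np

def crowding_distance(objectives):
    # objectives: (n_particles, n_objectives) array of objective values.
    n, m = objectives.shape
    total = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])   # sort particles by this objective
        f = objectives[order, j]
        d = np.zeros(n)
        d[order[0]] = d[order[-1]] = np.inf    # boundary particles (assumption)
        d[order[1:-1]] = (f[2:] - f[:-2]) / 2  # average distance to the two neighbors
        total += d                             # sum over all objectives
    return total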
Similar to the Particle Swarm Optimization method, the algorithm works with a set of particles
called a swarm. The Pareto front (an archive of non-dominated solutions) is created for the swarm.
The best global swarm position is included in the Pareto front. The crowding distance approach
is used to specify the best global position and to delete particles from the limited archive of
non-dominated solutions.

5.7.2. Multi-objective Particle Swarm Optimisation algorithm implementation


The main MOPSO algorithm steps are the following:

1. Initialization of the swarm of particles (model variants): their positions in the search space,


their velocity vectors and their best local positions. The number of generated particles


is specified by Swarm size (14 by default). It can be changed using Advanced Parameters.

Figure 46. Pareto front.

2. Construction of the Pareto front. Initialization of the best global swarm position in the
Pareto front (external archive). The Pareto front can be visualized in the crossplot (see
section 6.7.1).

3. Update of the velocity vectors of particles. The formulas for calculating the new velocities
for different PSO algorithm versions are presented in section 5.6.

4. Update of the particle positions along the velocity vectors. Particles move in the directions
of their new velocity vectors.

5. Mutation. This operation is applied if FlexiPSO is used (see section 5.6.3).

6. Calculation of the objective functions.

7. Update of the best local positions of particles.

8. Update of the Pareto front. Particles are replaced arbitrarily when the archive is full.

9. Evaluation of the new best global position.

10. Control of the stopping criteria. The stopping criteria are the following:

• the maximal number of simulator launches is reached;


• the deviation of the objective function from its average value becomes less than the defined
value (Obj. Function Variation, see section 5.2);
• the deviation of each variable from its average value becomes smaller than
the defined value (Variables Variation, see section 5.2).

If any of these criteria is met, the algorithm is terminated. Otherwise the process
continues from step 3.

5.7.3. MOPSO algorithm parameters


Narch is the size of the archive of non-dominated solutions. This archive is used for the search
for the best global swarm position.


6. Analysis of results
This section describes the features of the assisted history matching (AHM) module available via the
graphical user interface (GUI). The window of an AHM project is shown in figure 47. The window
contains a horizontal menu panel: File, Queue, Results, Settings. The menus Queue and Results are
activated when you switch to the tabs Calculations and Results below, respectively.
The top panel buttons are located below it.
There are three main tabs allowing to switch between Project Info, Calculations and Re-
sults.
A horizontal panel consists of:

1. menu File:

• Create new experiments;


• Load models for comparison. Allows loading model variants for comparison
with the available variants. This makes it possible to merge history matching projects created
for the same model using the same set of variables, but calculated by different
experiments or algorithms (or, e.g., on different computers);
• Load all models from folder for comparison. Define a path to a folder whose
subfolders contain saved model variants. The model variants will be
sequentially loaded from all subfolders;
• Open base model. The model located in the CURRENT folder will be
opened. You can then work with this model as a normal model, i.e. you can
modify it (e.g., change relative permeability, add wells etc.);
• Close.

2. menu Queue

3. menu Results:

• Check/uncheck all;
• Check new models. If this option is selected, the results of each newly
calculated variant are automatically added to the Results tab (Graphs, Results
Table etc.) during calculation;
• Hide error models;
• Keep Sorted. If this option is selected, when a new calculated
variant is added, all variants are automatically re-sorted in accordance with the earlier defined sorting
(e.g., in descending order of oil rate mismatch);
• Group Checked. If this option is selected, the checked models are grouped
at the top of the list of model variants;
• Export. Export table data to a file;


• Load pressure. Opens the Load history into base model dialog;
• Settings.

4. menu Settings:

• General;
• New Job Options;
• New Job Options (Advanced);
• Common Email Settings. Allows configuring Email settings to get notifications
about the status of calculations;
• Current Window Email Settings. Allows configuring Email settings to get noti-
fications about the completion and/or status of calculations;
• Power Savings;
• Appearance;
• Info Columns. Check columns to see on the tab Calculations.


6.1. Project Info


The Project Info tab shows information about the history matching project (see figure 47). The tab
contains the project name, the name of the base model (from which the project has been created),
the name of the experiment used in the project, and the experiment's parameters. After the project
is created, the list of experiment variants is created and run automatically. Near the experiment
name there are several buttons for working with the experiment:

• stop experiment. Stops all running variants;

• run experiment. Runs all experiment variants in accordance with the settings;

• delete the experiment from the project without the possibility of restoring its data;

• delete the experiment's files while keeping the possibility to restore it. All the experi-
ment's files will be deleted, but the experiment's entry in the project will remain; the
experiment configuration and the list of variants will be available;

• restore the experiment's files and add the restored variants to the queue;

• add a comment to the experiment.

The window containing the list of experiment variants is located on the left. It shows the
variant's id, its status (calculated, calculating or queued), etc. By right-clicking on a variant
you can get information about it, create a new experiment starting from it, or create a user
variant from it (see figure 47).

Figure 47. History Matching. Project Info.


6.2. Calculations
The Calculations tab contains full information about the status of calculations (see figure 48). It
shows the full Path to the experiment variant, the Model Type, the Cores Count used for calculations,
and the calculation Status. If a calculation is done, its runtime is shown. If a calculation is still
running, the time remaining until its completion is shown. On the right, the commands for working
with calculations are located:

• Run Jobs. Run the calculation of the selected experiment variant;

• Pause Jobs. Pause the calculation of the current variant;

• Stop Jobs. Stop the selected calculation;

• Rerun Jobs. Recalculate the selected experiment variant;

• Kill Jobs.

• View Results. View the results of the selected experiment variant;

• View Graphs.

• View Log.

• Move Up. Move variant up in the queue for calculation;

• Move Down. Move variant down in the queue;

• (Un)Select All.

• Show Finished.

• Log.

• Options. See Settings.


Figure 48. History Matching. Calculations.


6.3. Results
The Results tab allows visualizing the obtained results and analyzing them in order to evaluate the
quality of history matching. The main tabs of the Results tab are:

• Results Table;

• Mismatch calculation;

• Graphs;

• Crossplot;

• Histogram;

• Stacked Plot;

• Analysis;

• Mds.

The common interface elements of the Results tab are described below.

6.3.1. Top panel buttons


The buttons located on the top panel are used to perform general actions such as clusterization,
configuration of a custom objective function, etc.

• create a new experiment from the project's base model;

• load a model for comparison;

• open the base model. Opens the base model of the history matching project for modification;

• Groupset Manager. Allows colorizing model variants, experiments, created


clusters and groups with user-defined or default colors. Moreover, it allows deleting
created groups and clusters;

• Create clusterization. Creates a clusterization of model variants;

• R2 table. Calculates a table of R2 coefficients to analyze the quality of the model's


history matching for the model's objects and parameters;

• Load history for BHP, THP or WBP from a file into the base model;

• Graph Calculator. The graph calculator allows performing various operations on graphs etc.
by means of the Python programming language.


6.3.2. Left panel buttons


The buttons located on the left panel are used to perform actions on the list of models located to the
right.

• Check/uncheck all;

• Hide error models;

• Variant Filter. Allows to select required variants to visualize;

• Check new models. If this option is selected, the results of each newly calculated
variant are automatically added to the Results tab (Graphs, Results Table etc.) during calculation;

• Group Checked. If this option is selected, the checked models are grouped at the
top of the list of model variants;

• Keep Sorted. If this option is selected, when a new calculated variant is added,
all variants are automatically re-sorted in accordance with the earlier defined sorting (e.g., in
descending order of oil rate mismatch);

• Show/hide log.

6.3.3. Right panel buttons


The buttons located on the right panel perform the actions available for the current tab.

• Export. Export table data to a file;

• Settings.


6.4. Results Table


The table of results contains the obtained calculation results (oil rate, water rate, average
pressure etc.), mismatches, and the variables used in the calculations for all experiment variants (see
figure 49). The results are shown at the time step at which the time slider is set. It is
possible to visualize the data by objects.
The results visualized in the table can be configured using the button Settings. In the dia-
log Configure List of Parameters (see figure 50), select an object (groups, wells, mismatches,
variables etc.) at the top left of the window and the corresponding parameters at the bottom left of
the window. To add the selected parameter, press the button Add Parameter(s). The selected
parameters are shown on the right. To delete a parameter from the list, select the parameter and
press the button Delete Selected Params.

Figure 49. History Matching. Results Table.

6.4.1. Mismatch calculation


In the Results Table, mismatches are shown for the whole time period: from the initial time
step up to the final step.
A mismatch of a rate (oil, water, liquid etc.) is calculated as:

Mismatch = ∑_{n=0}^{N} l_n · |Rate(H) − Rate(C)|,

where n is the number of the calculated step (from the initial time step to the last step N), l_n
is the length of the nth time step, and Rate(H) and Rate(C) are the historical (H) and calculated (C)
rate values, respectively, at the nth time step. The difference between historical and calculated totals
(oil, water, liquid etc.) is calculated as:

Diff = ∑_{n=0}^{N} l_n · |Total(H) − Total(C)|,

where Total(H) and Total(C) are the historical (H) and calculated (C) total values at the nth time
step.
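
For clarity, a minimal sketch of the rate mismatch above (the function name is ours):

def rate_mismatch(step_lengths, rate_hist, rate_calc):
    # Sum over time steps of the step length times the absolute deviation
    # between the historical and calculated rate.
    return sum(l * abs(h - c)
               for l, h, c in zip(step_lengths, rate_hist, rate_calc))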


Figure 50. Configure List of Parameters.

!
To calculate mismatches or the differences between total values for a spe-
cific time period (starting from an intermediate time step k), you can use an
objective function.

! For each model variant, the variables are written to the corresponding variant
file in the keyword PREDEFINES (see 12.1.26).


6.5. Graphs
The Graphs tab visualizes the obtained results.

Figure 51. History Matching. Graphs.

An example of the Graphs tab is shown in figure 51. In the top window on the right, an
object (wells, fields etc.) is selected, for which the graphs will be created. In the bottom window,
a parameter (oil rate, water rate etc.) to be visualized is selected. The Add new
graph button below the list of parameters allows the creation of custom graphs (see 6.6. Graph
calculator).
On the right panel the following buttons are located:

• Show graph points as a table;

• Show difference between historical and calculated values.


6.6. Graph calculator


The graph calculator allows the arbitrary combination of existing parameters and data series of the
model using mathematical functions, numerical differentiation and/or integration, conditional
operators, loops, and other means of the Python programming language.

For usage examples see the training tutorial COMMON1.4. How to use
Graph Calculator Python.

Graph calculator is also available in the simulator and Model Designer GUI, where it
allows working with individual connections and other objects.

Figure 52. Graph calculator

The text editor of the graph calculator window allows entering arbitrary Python code. The
code is executed upon pressing Calculate. Importing standard libraries using import <name>
is possible (see also Importing libraries). The Python console output is directed to the window
below and can be used for debugging purposes.
An arbitrary number of user scripts can be created and managed using the buttons Add /
Delete. They are saved as separate *.py files at hmf/GraphCalculator/ when the project
is saved.
For the resulting graphs to appear in the user interface, they have to be passed through
the export() function (see below). A script may contain arbitrarily many export statements.
Once a script with proper export statements has been executed, the resulting graph appears in


the list of available graphs (see figure 53) and can be selected for display individually or in
combination with other graphs. Its name and dimension are specified in the export statement.
Whether it appears for a Field, Group, Well, FIP, or Connection object is determined by its
type, which in turn is determined by its declaration (see the graph function under Global
functions below) or by the type of the graph(s) it was derived from. Inconsistency in these types may
lead to an error in the script.
If a script does not export any graphs, its execution triggers a warning that suggests
using the Auto Export Graph button. Upon pressing this button, an export statement is
automatically added after the last line of code. The variable used in the last assignment
operator is passed to export() as an argument. Sometimes the calculator may be used just for
cursory calculations, displaying the result via the console output window without
exporting any graphs. In this case the warning may be ignored.

Figure 53. User graphs

!
Note that user graphs from other scripts, including those defined in the same
template, are not accessible from the code by their names. You may, however,
produce multiple user graphs from a single script.
A custom graph may be used in the objective function (see Objective function based on user
graphs for a field).

6.6.1. Data structures and functions


Programmatically, a graph object is a complex data structure containing entries for all time
steps, for all objects of corresponding type (wells, groups, connections, FIP regions), and
for all loaded models. Graphs of the same type may be transformed and combined with the
arithmetical operations and mathematical functions, which apply to them elementwise. Graphs
may also be combined with scalar values or with graphs of lower dimension. Besides, there


are special functions for numerical differentiation, integration, averaging over sets of objects,
etc.
Lower right section contains the list of mnemonics (same as in the keyword SUMMARY,
see 12.18.1). Their meaning is explained in the pop-up messages. Mnemonics are grouped by
type (field, group, well, etc.); types are selected in the lower left field. Mnemonics can be
used directly in the code and are interpreted as graph objects containing values for all time
steps and for all objects of corresponding type (wells, groups, etc).
! Note that the mnemonics only work on the time steps for which the graphs
have been recorded. The graphs which were not recorded on a particular step
are interpolated with the last available value. Result recording options are
described in section 9.1 of the tNavigator User Manual.
If the model contains any variables created by the keyword UDQ (see 12.19.165), those
can be used by putting their names in the code. They are also interpreted as graph objects.
For the purpose of retrieving the subsets or individual values of data, a graph object works
as a multidimensional array indexed by the objects of the following types (depending on its
own type):

Graph type Index objects


Well Model, timestep, well
Group Model, timestep, group
Connection Model, timestep, connection
FIP Model, timestep, FIP region
Field Model, timestep
For example, wopr[m1,w1,t1] returns a single value of oil rate for the well w1 in the
model m1 at timestep t1. The indexing elements may be entered in arbitrary order (so that
wopr[t1,w1,m1] is equivalent to the example above). An expression where only a part of the
indexes is specified returns the corresponding subset of the graph. For example, wopr[m1,
w1] returns a graph containing oil rates for the well w1 in the model m1 at all timesteps.
The code may include predefined objects (field, wells, groups, time steps, in simulator GUI
version also connections and FIP regions). For treating these objects, the following properties
and functions are defined and accessible on the right panel:
• Add well function
Well object has the following accessible properties and functions:
◦ .name is a property containing the name of the well.
Usage example: s1 = w1.name

!
Code fragments presented here and below are merely illustrations
of syntax. They are not self-sufficient and not intended to work if
copied-and-pasted to the calculator "as is". For the ready-to-use
examples see Usage examples.


◦ .is_producer() (no arguments) returns a time-dependent graph that casts to boolean
True when the well is a producer, and to False otherwise.
Usage example: if w1.is_producer(): <do something>
◦ .is_opened() (no arguments) returns a time-dependent graph that casts to boolean
True when the well is open, and to False otherwise.
Usage example: if w1.is_opened(): <do something>
◦ .is_stopped() (no arguments) returns a time-dependent graph that casts to boolean
True when the well is stopped, and to False otherwise.
Usage example: if w1.is_stopped(): <do something>
◦ .is_shut() (no arguments) returns a time-dependent graph that casts to boolean True
when the well is shut, and to False otherwise.
Usage example: if w1.is_shut(): <do something>

• Add group function


Group object represents a group of wells and has the following accessible properties:

◦ .name is a property containing the name of this group.


Usage example: s1 = g1.name
◦ .wells is a property which is an iterator object containing the wells of this group.
Usage example: for w in g1.wells: <do something>

An iterator is a structure that provides an interface for traversing a
collection of elements one by one (for ... in ...). It can be transformed
to an array which allows direct access to any element by number:
wells = [*g1.wells()]
w = wells[5]

◦ .parent_group is a property containing the parent group of this group.
Usage example: g2 = g1.parent_group
◦ .child_groups is a property which is an iterator object containing the child groups
of this group.
Usage example: for g in g1.child_groups: <do something>

• Add model function


Model object has the following accessible property:

◦ .name is a property containing the model name (relevant when the results of multiple
model calculations are loaded).
Usage example: s1 = m1.name

• Add timestep function


Timestep object represents an individual step in the time line of the model, and has the
following accessible properties and functions:


◦ .name is a property containing the calendar representation of this time step
according to the template (selected from the dropdown list in the Date format
field below).
Usage example: s1 = t1.name
◦ .to_datetime() (no arguments) returns the Python datetime object corresponding to
this time step. The object has the standard Python properties and methods. Usage
example:
dt1 = t1.to_datetime()
if dt1.year > 2014: <do something>
• Add graph function
Graph object represents a graph which may be either one of the standard graphs or
derived via calculations. The ultimate result of script execution is also an object of this
type. A graph has the following accessible functions:
◦ .fix(model=<model>,object=<object>,date=<timestep>) returns the value of the
specified graph for the given model, object, and timestep, which all must be spec-
ified as Python objects of the corresponding type, and not by name. Type of the
object (well, group, in simulator GUI version also connection or a FIP region)
must correspond to the type of the graph. All arguments are optional. If some of
them are missing, the function returns a data structure containing the values of the
graph for all possible values of the missing argument(s).
Usage example:
graph2 = graph1.fix(object=get_well_by_name('PROD1'))
takes a graph for all wells and returns a graph object for only one well, namely
PROD1.
◦ max,min,avg,sum(models=<models>,objects=<objects>,dates=<timesteps>) re-
trieve a subset of values for the given models, objects, and timesteps (all arguments
may include either arrays or single values), and then return the minimum, max-
imum, average, or sum of the resulting array. Arguments must be specified as
Python objects of the corresponding type, and not by name. Type of the objects
must correspond to the type of the graph. All arguments are optional. If some of
them are missing, the functions return an object containing the values of minimum,
maximum, average, or sum over all specified argument(s) for all possible values
of the missing argument(s).
Usage examples:
graph2 = graph1.max(objects=get_wells_by_mask('WELL3*'))
returns a graph object containing the maximum among the values of the original
graph for the wells with names WELL3*, i.e. WELL31, WELL32, WELL33, etc.;
graph2 = graph1.avg(dates=get_all_timesteps()[15:25])
returns a graph object containing the average among the values of the original
graph from the 15th to the 24th time step.
◦ .aggregate_by_time_interval(interval='<interval>',type='<type>') takes the array
of values of the original graph over the specified interval and derives a new graph
where all steps within the interval have the same value calculated according to the
specified type:
– avg: average value;
– min: minimum value;
– max: maximum value;
– last: last value;
– sum: sum of values;
– total: difference between the last and first values.
The possible values of interval are:
– month
– quarter
– year
Usage example:
w1 = wopr.aggregate_by_time_interval(interval = 'year', type = 'avg')
returns a graph which is piecewise constant over one-year intervals, where the value
on each interval is the average of the original graph (wopr, that is, oil rate) over
that interval.
◦ .to_list() (no arguments) returns an array of values of the graph. This function only
works for one-dimensional graphs, otherwise it throws an error. To make a graph
one-dimensional, that is, dependent on time only, you have to exclude the depen-
dence on the model and the well, either specifying these explicitly via .fix(), or by
finding the value of .min(), .max(), etc. over them all. Usage example:
x=fopr.fix(model='BRUGGE_VAR_1').to_list()
returns an array of the field oil rate values for all time steps.

• Add global function


General purpose functions, including:

◦ exp(<number>), ln(<number>), sqrt(<number>) are mathematical functions: ex-


ponent, logarithm, and square root, respectively. When a graph is passed as an
argument, they apply to it elementwise.
Usage examples:
t = ln(y)
x = exp(r)
◦ diff(<series>) performs numeric differentiation of the time series, that is, returns the
series of differences of successive values.
Usage example: graph2 = diff(graph1)
In this example we are calculating oil totals per time step from oil totals:

465, 1165, 2188, 3418, 4968 . . . → 465, 700, 1023, 1230, 1550 . . .


◦ diff_t(<series>) is the same as diff, only the results are divided by the time step
length in days. Usage example: graph2 = diff_t(graph1)
In this example we are calculating oil rates from oil totals. Let the time steps
represent months and have the duration of 31, 28, 31, 30, 31... days. Then:

465, 1165, 2188, 3418, 4968 . . . → 15, 25, 33, 41, 50 . . .

◦ cum_sum(<series>) performs numeric integration of the time series, that is, returns
the series of sums.
Usage example: graph3 = cum_sum(graph1)
In this example we are calculating oil totals from oil totals per time step:

465, 700, 1023, 1230, 1550 . . . → 465, 1165, 2188, 3418, 4968 . . .

◦ cum_sum_t(<series>) is the same as cum_sum, only the increments are multiplied


by the time step length in days.
Usage example: graph3 = cum_sum_t(graph1)
In this example we are calculating oil totals from oil rates. Let the time steps
represent months and have the duration of 31, 28, 31, 30, 31... days. Then:

15, 25, 33, 41, 50 . . . → 465, 1165, 2188, 3418, 4968 . . .

◦ if_then_else(<condition>,<option if true>,<option if false>) is the conditional op-


erator that works on array variables elementwise.
Usage example: graph1 = if_then_else(wopr > 10, 1, 0)
◦ get_well_by_name(<name>) returns a well by its name.
Usage example: w1 = get_well_by_name('prod122')
◦ get_group_by_name(<name>) returns a group by its name.
Usage example: g1 = get_group_by_name('group21')
◦ get_all_wells() (no arguments) returns an iterator object containing all wells.
Usage example: for w in get_all_wells(): <do something>
◦ get_all_groups() (no arguments) returns an iterator object containing all groups.
Usage example: for g in get_all_groups(): <do something>
◦ get_all_models() (no arguments) returns an iterator object containing all models
(relevant when the results of multiple model calculations are loaded).
Usage example: for m in get_all_models(): <do something>
◦ get_all_timesteps() (no arguments) returns an iterator object containing all time
steps.
Usage example: for t in get_all_timesteps(): <do something>
◦ get_timestep_from_datetime(<date>,mode = '<mode>') returns the time step by
the specified date, which must be a Python object of the type date or datetime.
According to the mode parameter, the time step is searched for in the following
manner:


– exact_match: searches for the exact match;


– nearest: searches for the nearest time step to the specified date;
– nearest_before: searches for the nearest time step before the specified date;
– nearest_after: searches for the nearest time step after the specified date;
Default: exact_match.
If the step cannot be found within the limitations of the mode, or if the specified
date falls outside the time range of the model, an error is returned.
Usage example:
t1 = get_timestep_from_datetime(date(2012,7,1), mode='nearest_after')

!
Most manipulations with the Python datetime object
require importing the corresponding module before-
hand (see Importing libraries). This is done as follows:
from datetime import datetime
◦ create_table_vs_time(<array>) returns a graph containing a piecewise linear ap-
proximation of the given time series. The series must be represented by an array
of two-element tuples (date,value). Here the date must be a Python object of the
type date or datetime.
Usage example:
oil_price_list = []
oil_price_list.append((date(2011,1,1),107.5))
oil_price_list.append((date(2012,1,1),109.5))
oil_price_list.append((date(2013,1,1),105.9))
oil_price_list.append((date(2014,1,1), 96.3))
oil_price_list.append((date(2015,1,1), 49.5))
oil_price_list.append((date(2016,1,1), 40.7))
oil_price = create_table_vs_time(oil_price_list)
Here we build a graph of oil prices. For maximum clarity, the array is prepared by
adding elements one by one.
◦ get_wells_by_mask(<mask>) returns an iterator object containing the wells that match
the given name mask. The mask may contain wildcards: ? means any character, *
means any number of characters (including zero).
Usage example: for w in get_wells_by_mask('prod1*'): <do something>
◦ get_wells_from_filter(<filter name>) returns an iterator object containing the wells that
are included in the given well filter. The filter must be created beforehand using the
Well Filter (see the tNavigator User Guide).
Usage example: for w in get_wells_from_filter('first'): <do something>
◦ shift_t(<original series>,<shift>,<default value>) returns the original graph
shifted by the specified number of time steps. The empty positions are padded
with the specified default value.
Usage example: graph2 = shift_t(graph1,3,10)


In this example we shift the historic records of oil rate which were mistakenly as-
signed to the wrong time. The series is shifted 3 steps to the right, and the starting
positions are filled with the first known value of oil rate (10).

graph1: 10, 12, 19, 24, 30, 33, 31, 27, 25 . . . −→ shift_t(graph1,3,10): 10, 10, 10, 10, 12, 19, 24, 30, 33 . . .

◦ get_project_folder() (no arguments) returns the full path to the folder containing
the current model, which you might need in order to write something to a file.
Usage example: path = get_project_folder()
◦ get_project_name() (no arguments) returns the file name of the current model with-
out an extension.
Usage example: fn = get_project_name()
◦ export(<expression>,name='<name>',units='<units>') exports the given expres-
sion to the user graph, while specifying its name and (optionally) units of mea-
surement.
The expression should evaluate to a graph object, otherwise an error will occur.
Units should be specified by the mnemonic name which can be selected from a
dropdown list to the right.
Usage example: export(w1, name='graph1')
◦ graph(type='<type>',default_value=<value>) initializes a graph of the given type
(field, well, group, in simulator GUI version also conn for connections, or fip for
FIP regions) and fills it with the given default values.
Usage example: tmp = graph(type='field', default_value=1)

6.6.2. Importing libraries


Python has a considerable body of libraries for data processing, including sophisticated math-
ematical methods, export to Excel and other common formats, etc. All this can be accessed
from the graph calculator.
Standard Python libraries can be imported as is:
import sys
To import custom or third-party libraries, do the following:
1. Install Python 3.6.4 or later for all users.

2. If you intend to use Win32 API:

2.1. Install pywin32 package.


2.2. Run the following command:
<Python installation folder>\Scripts\pywin32_postinstall.py -install

3. In that instance of Python, install the libraries you intend to use.

4. In tNavigator main window, go to Settings → Options → Paths.


5. Change the following parameters:

5.1. Check Use External Python Library and enter the path to Python36.dll (or sim-
ilar) from the new instance of Python.
5.2. If needed, check Select path to python modules and enter the path to imported
Python modules. Multiple semicolon-separated locations can be specified.

To obtain the path to the modules used by the already installed Python
instance, open the interactive Python interpreter and run the following
commands:
import sys
';'.join(sys.path)

If the external Python installation is removed, tNavigator automatically falls back to using
the internal Python.

6.6.3. Usage examples


The following code examples are fully functional only when run on a model which contains
all objects and structures that the code relies upon. For instance, if the code refers to a well
named P1 which is not there in the model, an error will be returned.
Example 1
Suppose we want to find the amount of oil accumulated by each well during certain time
interval, or (depending on time) during the portion of that interval that has already passed.
The script proceeds as follows:

1. Create a graph (x) that equals oil rate (wopr) within the time range of interest, and
0 otherwise. To do so, we compare time (measured in days since the start) with the
borders obtained elsewhere, and then have the resulting boolean values implicitly cast
to integers: True to 1 and False to 0.

2. Calculate the accumulated sum of x.

3. Export the resulting graph.

Example
x = wopr * (time >= 215) * (time <= 550)
w1 = cum_sum_t(x)
export (w1, name = 'PeriodProd', units = "liquid_surface_volume")

The created graph may be used on bubble maps.


Figure 54. Selecting user graph for bubble maps

Example 2
Suppose we want to see what portion of the well’s oil rate comes from the layers with
70 ≤ k < 100.

!
This is possible in the simulator or Model Designer GUI, where the graph
calculator has access to the data on individual connections, but not in the
AHM GUI.
The script proceeds as follows:

1. Initialize a temporary data structure (tmp) of the appropriate type (graph in the Well
context) and fill it with 0;

2. Iterate over all connections:

• If the connection is located in the desired area,


– add its oil rate value to that of the corresponding well in the temporary struc-
ture;

3. Export the temporary array divided by the array of total oil rate values for the wells (the
division of graphs is applied elementwise, that is, a sum over connections of any well
is divided by the rate of the same well).


Example
tmp = graph(type='well', default_value=0)
for c in get_all_connections():
    if c.k in range(70,100):
        tmp[c.well] += copr[c]
export(tmp/wopr, name='wopr_layer2')

! Pay attention to the spaces at the beginning of the lines. They are essential
to Python syntax, and are easily lost during copying-and-pasting.

Example 3
Suppose we want to calculate the average oil rate over a certain subset of wells (those
with names starting with 'WELL3') and compare it with the historic data, which are stored in
a file elsewhere. The deviation will then be used as an objective function for matching. The
script proceeds as follows:

1. Import the standard datetime library which allows handling dates with more agility.

2. Call the avg function and feed to it the iterator over the required subset of wells, so as
to obtain the desired average (obs).

3. Locate the file input.txt in the model folder and open it for reading.

4. Transform the array of file lines into the array of tuples (string,value).

5. Parse the date, thus turning it into an array of tuples (date,value).

6. Build the interpolation graph from the obtained array in the file (hist).

7. Build and export the graph of squared deviation.

Example
from datetime import datetime
obs = wopr.avg (objects = get_wells_by_mask ('WELL3*'))
inpf = open(get_project_folder()+'/input.txt', 'r')
raw = [(line.split()[0],float(line.split()[1])) for line in inpf]
arr = [(datetime.strptime(x[0], '%d.%m.%Y'),x[1]) for x in raw]
hist = create_table_vs_time(arr)
export((obs - hist)**2, name='fuobj')

Example 4
Suppose we have the graphs of historic bottom hole pressure measured only at some
points; the rest is filled with 0. We want to interpolate those for the entire time range. The
script proceeds as follows:


1. Initialize a temporary data structure (tmp) of the appropriate type (graph in the Well
context) and fill it with 0;

2. Iterate over all models and all wells:

• Retrieve the BHP data for the given well;
• Create an empty array to store the actual BHP measurements (observed);
• Iterate over all time steps: if the BHP at a time step is greater than 0, append it to the array;
• If the array contains at least 2 elements, create an interpolated graph from it and put it
in the temporary structure;

3. Export the temporary structure.

Example
tmp = graph (type = 'well', default_value = 0)
for m in get_all_models():
    for w in get_all_wells():
        current = wbhph[m,w]
        observed = []
        for t in get_all_timesteps():
            if current[t] > 0:
                observed.append ((t.to_datetime(), current[t]))
        if len (observed) >= 2:
            tmp[m,w] = create_table_vs_time(observed)
export(tmp, name='interpolated_wbhph')


6.7. Crossplot
A crossplot visualizes the dependence between two selected parameters (see figure 55). In
the top window, an object (e.g., Group, Well, Mismatches etc.) can be selected for the Y axis;
in the bottom window, a parameter corresponding to the selected object can be defined. A
similar menu is available for selecting the parameter along the X axis.
Figure 55 shows a crossplot between a custom objective function and the variant number
of the optimization algorithm (here, the Differential Evolution algorithm). Each variant of the
optimization algorithm corresponds to its own value of the objective function. It can be seen that
an increase in the number of variants leads to a decrease in the value of the objective function
(i.e., the objective function tends to its minimum). When you bring the cursor to a crossplot
point, the following information appears in the status bar (at the bottom of the window): the
experiment number, the experiment variant, and the value of the objective function.

Figure 55. History Matching. Crossplot.

6.7.1. Pareto front visualization


The Pareto front (see section 5.7.1), being a group of model variants, can be visualized in a
crossplot. The Pareto front can be constructed for arbitrary objective functions (objectives).
To visualize the Pareto front, follow these steps:

• Create a crossplot for the selected objective functions. In the example shown in fig-
ure 57, the crossplot is constructed for the earlier configured objective functions oil_rate_of
and water_rate_of (see section 4.1). The objective function oil_rate_of is selected along
the Y axis and water_rate_of along the X axis, where oil_rate_of is an objective function of history
matching quadratic type based on the oil rate (parameter) and the group ”FIELD” (ob-
ject), and water_rate_of is an objective function of history matching quadratic type based
on the water rate (parameter) and the group ”FIELD” (object);

• Select model variants from the variant tree;


• Right-click on the selected model variants and choose Create Pareto Front From
Selected Variants from the drop-down menu;

• In the Create Pareto front dialog, select the objective functions (at least 2).
The list of available objective functions is shown on the left. To add a new function,
press the button Add entry (see figure 56);

• The selected objective functions will appear on the right.

Figure 56. Creation of Pareto front.

Generally speaking, the Pareto front is a group of model variants; therefore, all features
available for groups can be applied to Pareto fronts (see 6.14). It is allowed to create
several Pareto fronts. To switch between them, press the button Groupsets manager and
select the required front.


Figure 57. Pareto front visualization.


6.8. Histogram
A histogram allows evaluating how many experiment variants have the selected parameter's
value within a specific range. The parameter along the X axis is selected using the menu in the
bottom part of the tab, which defines the histogram settings. The interval between the maximum
value X_max and the minimum value X_min of the parameter is subdivided into a defined
number of sub-intervals. The number of sub-intervals can be adjusted in the field Bins. For
each sub-interval [X_i, X_{i+1}], the number of variants having the value of parameter X in this
sub-interval is shown. Move the time slider to see the histogram at the required time moment.
The histogram is shown for the time period marked by a red line on the time line.
You can change the histogram's orientation from horizontal to vertical or vice versa. The
parameter's value can be visualized in percentages. Bring the cursor to a histogram bin to
see the corresponding range of the parameter and the number of variants in the status bar.
For example, in figure 58, 5 variants have a total oil in the range [328675, 331700] sm3.
The variants corresponding to this range are highlighted in blue in the list of variants located on
the left.
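
The binning itself is the standard histogram count; a minimal sketch (numpy's histogram is our choice of illustration, not the tool's internal implementation; param_values and n_bins are placeholders):

import numpy as np

# counts[j] = number of variants whose parameter value falls into sub-interval j;
# param_values is the list of parameter values over variants, n_bins is Bins.
counts, edges = np.histogram(param_values, bins=n_bins)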

Figure 58. History Matching. Histogram.


6.9. Stacked Plot


A stacked plot shows the contribution of each parameter to an objective function using
different colors. Configure an objective function (define the objects, parameters and a time period
From Step – Till Step). The objective function is calculated for the period of time marked by
the red line on the time scale.
A stacked plot can be resolved into:
• Objects;

• Components;

• Terms.

Figure 59. History Matching. Stacked Plot – Objects. Mismatch mode.

It is possible to select one of two modes of a stacked plot:


• Mismatch mode.
This mode shows the contributions of well mismatches to an objective function. Mismatches
are calculated using the formula:

∑_{obj} ∑_{p} ((value(H) − value(C)) / g)²

where

– obj is the set of wells or well groups;


– p is the set of selected parameters (e.g., oil rate, water rate etc.);
– value(C) is the calculated value;
– value(H) is the historical value;
– g is the deviation value specified by the user.


• Absolute mode.
This mode allows identifying high-rate wells in order to choose correct weights for the
objective function. The following formula is used:

∑_{obj} ∑_{p} (value(C) / g)²
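
A minimal sketch of a single (object, parameter) term in the two modes (the function name is ours):

def stacked_term(value_c, value_h, g, absolute=False):
    # Absolute mode uses only the calculated value; Mismatch mode uses the
    # deviation between the historical and calculated values.
    if absolute:
        return (value_c / g) ** 2
    return ((value_h - value_c) / g) ** 2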

Figure 60. History Matching. Stacked Plot – Objects. Absolute mode.

Figure 59 shows a stacked plot resolved into objects. A custom objective function is defined
as the Objective Function. The objective function is based on wells (objects) and oil and
water rates (parameters). The objective function is calculated for the time period marked by
the red color. By right-clicking on a histogram bar, the value of the objective function and the
variant's number are shown in the status bar.
Using a stacked plot you can, for example, detect wells with history matching problems
and concentrate on them further. In figure 59 it can be seen that the wells ”PRO-20” and
”PRO-4” make the largest contribution to the objective function, i.e. both wells have history
matching problems. Probably, the selected variables or their ranges are not suitable for
history matching. In this case, you can try to use other variables and/or ranges.
The stacked plot in Absolute mode shown in figure 60 allows identifying the high-rate wells:
”PRO-1”, ”PRO-4”, ”PRO-5” and ”PRO-11”. For the calculation of the objective function
these wells should have larger weights than low-rate wells.
Figure 61 shows an example of a stacked plot resolved into components, Oil and Water
rates. The plot shows the contribution of water and oil rate mismatches to the objective
function. Right-click on a bar to see the value of the selected component and the variant's number
in the status bar.


Figure 61. History Matching. Stacked Plot – Components.


6.10. Analysis
To analyze the obtained results the following tools can be used:

• Pareto chart
• Tornado Diagram
• Quantiles
• Creating a Filter for Variables

6.10.1. Pareto chart


A Pareto chart is calculated based on different correlations. The following correlations are
available:

• Pearson correlation;
• Spearman correlation.

Pearson correlation
The Pearson correlation establishes associations between model variables and model param-
eters (oil rate, water rate, gas rate, mismatches etc.) and is computed using the following
formula:

r_XY = ∑(X − X̄)(Y − Ȳ) / √(∑(X − X̄)² · ∑(Y − Ȳ)²)
The correlation allows evaluating which variable more strongly affects the model's parameters and
the objective function. Set the time slider at the required time step to see the correlation at this
time step. Generally speaking, the longer a bar is, the closer the correlation between the
parameters is to 1 (in absolute value), and the closer the relation between these parameters is to a
linear dependence. A bar can be:

• Green – positive values of correlation. An increase in the variable's value results in


an increase of the model parameter;
• Blue – negative values of correlation. An increase in the variable's value results in
a decrease of the model parameter.

The effective variables detected in this way can be used further in other experiments; ineffec-
tive variables can be excluded from consideration.
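
A minimal sketch of the Pearson coefficient above (numpy-based; the function name is ours):

import numpy as np

def pearson_r(x, y):
    # Sample Pearson correlation between two equally long value arrays.
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())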
Figure 62 shows the correlation between model variables (M_PERM_FIPNUM_1 etc.)
and model parameters. To sort a column, press the parameter's name at the top of the column.
The variables that strongly affect the objective function will be located at the top of the column.
It can be seen that variations of the variables M_PERM_FIPNUM_2 and M_PERM_FIPNUM_3
result in a significant change of the oil total, while a variation of the variable M_PERM_FIPNUM_1
only weakly affects the parameter Oil Total.


Figure 62. History Matching. Pareto chart based on the Pearson correlation

Spearman correlation
The Spearman correlation specifies the degree of dependency between two arbitrary variables X
and Y based on the analysis of the data (X1,Y1), . . . , (Xn,Yn). A rank is assigned to each value
of X and Y. The ranks of X are arranged sequentially: i = 1, 2, . . . , n. The rank of Y, Y_i, is the rank of
the pair (X,Y) for which the rank of X is i. Then the Spearman correlation coefficient is
calculated as:

ρ = 1 − (6 ∑ d_i²) / (n(n² − 1)),

where d_i is the difference between the ranks X_i and Y_i, i = 1, 2, . . . , n. The correlation coefficient
varies from −1 (corresponding to a decreasing monotonic trend between X and Y) to +1 (cor-
responding to an increasing monotonic trend between X and Y). A coefficient equal to zero
means that the variables X and Y are independent.
The Pearson correlation shows the degree of linearity of the dependency between variables:
if the correlation coefficient is equal to 1 (in absolute value), then one variable depends linearly
on the other. The Spearman correlation, on the other hand, shows the degree of
monotonicity of the dependency: if the correlation coefficient is 1 (in absolute value), then the
dependence is monotonic, but not necessarily linear.
Set the time slider at the required time step to see the correlation at this time step. The longer
the bar is, the closer the correlation coefficient between the variables is to 1 (in absolute value), and
the closer the dependence between the variables is to a monotonic one.
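
A minimal sketch of the Spearman coefficient above, assuming there are no tied values (the function name is ours):

import numpy as np

def spearman_rho(x, y):
    # Rank both samples (1..n, no ties assumed) and apply the formula above.
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    rank_x = np.argsort(np.argsort(x)) + 1
    rank_y = np.argsort(np.argsort(y)) + 1
    d = rank_x - rank_y
    return 1 - 6 * (d ** 2).sum() / (n * (n ** 2 - 1))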
A bar can have one of the following colors:

• Green — positive values of the correlation coefficient. The dependence between the vari-
able and the model parameter is monotonically increasing (the coefficient is close to +1);

• Blue — negative values of the correlation coefficient. The dependence between the vari-
able and the model parameter is monotonically decreasing (the coefficient is close to −1).

Figure 63 shows the correlation between model variables (M_PERM_FIPNUM_1 etc.)
and model parameters. To sort a column, press the parameter's name at the top of the column.
It can be seen that Avg. Pressure does not depend on the variable
M_PERM_FIPNUM_4 (the correlation coefficient is around 0), while Watercut depends mono-
tonically on M_PERM_FIPNUM_4 (the correlation coefficient is around 1).

Figure 63. History Matching. Pareto chart based on the Spearman correlation

6.10.2. Tornado Diagram


A Tornado diagram can be calculated only for a Tornado experiment. This diagram provides the
possibility to analyze the correlation between model variables (decrease and increase of
the variable value) and model functions (rate mismatches, differences of total parameters,
the objective function). An example of a Tornado diagram is shown in figure 64.

The longer the bar, the stronger the correlation between the variations of the variables and the
variation of the objective function.
The color of the bar can be:

• Blue – the variable value decreases;

• Green – the variable value increases.

The bar has a direction:

• Left – the value of the function decreases;

• Right – the value of the function increases.

As an example, a Tornado diagram calculated for the Oil Total Difference is described below
(see also the sketch after this list).

1. The difference is calculated via the formula:

(|TotalValue(H) − TotalValue(calc)|)_L − (|TotalValue(H) − TotalValue(calc)|)_C

where:

Figure 64. Tornado Diagram.

• H – the historical value of the Oil Total;


• calc – the calculated value of the Oil Total;
• L – the difference at the last time step;
• C – the difference at the current time step (where the time slider is).

2. To analyze data for the whole simulation period, the Tornado diagram should be visualized at the zero time step (move the time slider to the leftmost position).

3. Then two experiments are taken, those for which the variable value is maximum and minimum.

4. For these experiments it is calculated whether the Oil Total Difference increases or decreases. The percentage is calculated relative to experiment 0000.

5. If the variable decreases, the bar is blue; if the variable increases, the bar is green. If the Total Difference decreases, the bar points left; if the Total Difference increases, it points right.

6. If the bar has the same direction for both the increased and the decreased variable value, the tendency is the same in both cases: for example, both increasing and decreasing the variable moves the model away from the historical data. In this case it may be necessary to change the variable's range or choose a new variable for the AHM process.

6.10.3. Quantiles
Quantiles can be calculated for model variants generated using the Latin Hypercube algorithm and for forecast models. They are available in the tab Results. Analysis.
The range of uncertainty of the obtained parameters (e.g., oil total, water total etc.) can be represented by a probability distribution. In this case low, best and high estimates are provided such that:

• P90 is the low estimate, i.e. the values of the selected parameter will exceed the low estimate with 90% probability;


Figure 65. Quantiles.

• P50 is the best estimate, i.e. the values of the selected parameter will exceed the best estimate with 50% probability;

• P10 is the high estimate, i.e. the values of the selected parameter will exceed the high estimate with 10% probability.

Thus, for any parameter all quantiles obey the relation:

P10 ≥ P50 ≥ P90

Quantiles are calculated for each parameter. For the set of parameter values calculated from the variants of an experiment, the parameter values corresponding to the low P90, best P50 and high P10 estimates are determined. Quantile values are calculated at each time step: in order to see them, move the time slider to another time step.
The same quantiles calculated for different parameters may correspond to different model variants. For example, the quantile P10 for Oil Total may correspond to the third variant of the model, while the quantile P10 for Water Total corresponds to the first variant. In order to go to the model variant corresponding to the selected quantile, right-click on the quantile value and select Go to the model of this quantile. The corresponding model will be highlighted in the variants tree.

Quantiles P90, P50 and P10 corresponding to model variants are visualized on the tab Cdf
as solid diamonds.


Quantile calculation
Quantiles are calculated over the successfully calculated model variants of one experiment. It is assumed that these variants are equally probable. Quantiles are calculated as follows.
In a set of $N$ parameter values $V_i$ ($i = 1, \ldots, N$) sorted in ascending order, the number $i$ of the value of quantile $\alpha$ (i.e., $P_\alpha = V_i$) is equal to $\lfloor (1-\alpha)(N+1) \rfloor$ (the brackets $\lfloor \cdot \rfloor$ denote rounding down to an integer value) for $\alpha \in \left(0, \frac{N}{N+1}\right]$. For $\alpha = 0$ the number $i = N$; for $\alpha > \frac{N}{N+1}$ the number $i = 1$.
As an example, suppose that we have $N$ model variants obtained via a Latin hypercube. We want to calculate the $\alpha$-quantile of the oil rate, where $\alpha$ varies from 0 to 1. First, the variants are sorted by the rate value, then the number of the specific variant is calculated using the formula $\lfloor (1-\alpha)(N+1) \rfloor$. Thus, we obtain the number of the variant $i$ (from 1 to $N$) whose value corresponds to the $\alpha$-quantile for this parameter (i.e., $P_\alpha = V_i$) for this set of variants. In the GUI, $\alpha$-quantiles are shown as percentages, i.e. $\alpha \times 100\%$.
Add user Quantile.
There are three default quantiles: P10, P50 and P90. Press Add Quantile and enter a value to calculate any user quantile. Enter the value in percent (from 0 to 100). For example, to calculate the quantile P75, enter the value 75.
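For illustration, the rule above can be coded directly. A minimal Python sketch using hypothetical oil total values:

```python
import math

def quantile_value(values, alpha):
    """P(alpha) over equally probable variant values, per the rule above.
    alpha is a fraction (e.g. 0.9 for P90); returns (1-based index, value)."""
    n = len(values)
    v = sorted(values)                    # ascending order: V_1 <= ... <= V_N
    if alpha == 0:
        i = n
    elif alpha > n / (n + 1):
        i = 1
    else:
        i = math.floor((1 - alpha) * (n + 1))
    return i, v[i - 1]

# hypothetical oil totals of 9 variants (th.sm3)
oil_totals = [301.7, 294.86, 312.5, 295.59, 305.9, 298.4, 310.0, 303.2, 291.2]
print(quantile_value(oil_totals, 0.90))  # P90 (low estimate)
print(quantile_value(oil_totals, 0.50))  # P50 (best estimate)
print(quantile_value(oil_totals, 0.10))  # P10 (high estimate)
```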

6.10.4. Creating a Filter for Variables

In the tab Analysis a filter for model variables can be created. This filter is further used when creating a new experiment (see Implementation of Variable Filter). It allows you to exclude variables that weakly affect the model.
Right-click on the selected variables and in the pop-up menu select Create variables filter (see figure 66). Specify a filter name. The filter is created.

Figure 66. Creating a variables filter.


6.11. Mds
Mds (multi-dimensional scaling) is a method of visualizing multivariate data in a space of lower dimension than the original data space [3, 4]. The points are distributed in such a way that the distances between every pair of N items in the final low-dimensional space are close to the distances in the original space.
Mds is used to visualize the results of history matching. The basic result of history matching is a set of vectors of variables $\{\mathbf{m}_1, \ldots, \mathbf{m}_K\}$ (where $K$ is the number of model variants). For each variant a vector of variables $\mathbf{m}_i$ is created using the selected algorithm. The vector's length is equal to the number of variables $N$ defined in the history matching project. An advantage of the Mds method over other approaches (e.g., crossplots) is that it allows one to evaluate the closeness of points in the $N$-dimensional space.
Let us suppose that as a result of history matching we have a set of vectors of variables $M = \{\mathbf{m}_1, \ldots, \mathbf{m}_K\}$ for $K$ variants of the model, i.e. the $i$th variant of the model has a vector of variables $\mathbf{m}_i$. The length of the vector equals the number of variables $N$ defined in the history matching project. We consider the $N$-dimensional space in which each object is a model variant with coordinates defined by the variant's vector of variables. The distance between a pair of points $\mathbf{m}_i$ and $\mathbf{m}_j$ in the $N$-dimensional space is computed as:

$d_{ij} = d(\mathbf{m}_i, \mathbf{m}_j) = \sqrt{\sum_{s=1}^{N} (m_i^s - m_j^s)^2}$

Let $P = \{\mathbf{p}_1, \ldots, \mathbf{p}_K\}$ be the set of vectors of coordinates of the projections of $M = \{\mathbf{m}_1, \ldots, \mathbf{m}_K\}$ onto a plane. Then the distance between a pair of projections $\mathbf{p}_i$ and $\mathbf{p}_j$ is computed as:

$\widehat{d}_{ij} = \sqrt{\sum_{s=1}^{2} (p_i^s - p_j^s)^2}$

Thus, the primary aim of the Mds method is to find the set of coordinates of projections $P$ for which the function $F$ is minimized:

$F = \left( \dfrac{\sum_{i<j} (d_{ij} - \widehat{d}_{ij})^2}{\sum_{i<j} d_{ij}^2} \right)^{1/2}$

In other words, the Mds method projects each object of the $N$-dimensional space onto a plane such that the distances between pairs of points in the $N$-dimensional space are as close as possible to the distances between their projections on the plane.
An example of the projection of a set of points (a set of vectors of variables) onto a plane using the Mds method is shown in figure 67.
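The stress function F can be written out directly. Below is a minimal Python sketch (not the tNavigator implementation) that evaluates F for a given set of plane projections:

```python
import numpy as np

def mds_stress(M, P):
    """Stress F between pairwise distances in the original N-dimensional
    space (M, shape K x N) and in the plane of projections (P, shape K x 2)."""
    K = M.shape[0]
    num = den = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            d = np.linalg.norm(M[i] - M[j])    # d_ij in variable space
            dh = np.linalg.norm(P[i] - P[j])   # distance between projections
            num += (d - dh) ** 2
            den += d ** 2
    return (num / den) ** 0.5

# 10 random variants with 5 variables, (naively) projected on two coordinates
M = np.random.default_rng(1).random((10, 5))
print(mds_stress(M, M[:, :2]))
```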


Figure 67. Mds–multi-dimensional scaling.


6.11.1. Weighted Mds

Weighted multi-dimensional scaling allows you to specify a weight for each variable; the weights are used in the calculation of the distance between pairs of points. By default Mds treats all variables equally when computing the plane projection. Weighted Mds assigns different weights (priorities) to different variables so that changes in low-priority variables have a smaller effect on the distance between points (e.g., assigning a high priority to porosity and a low priority to permeability means that changes in porosity will have a bigger impact on the distance between points in the Mds plot).
By default all variables have the same weight. To change the weight of a variable, press the button and in the appearing dialog specify the weight of the selected variable by double-clicking on the weight value (see figure 68).

Figure 68. Specifying weight values for Mds.

! Different variable weights can be specified as well when clusterizing a set of model variants (see 6.16).
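The document does not spell out the weighted distance formula; a natural reading (an assumption here, not the documented internal form) is a per-variable factor inside the Euclidean distance:

```python
import numpy as np

def weighted_distance(mi, mj, w):
    """Weighted Euclidean distance between two variant vectors.
    w holds per-variable weights; a low weight damps that variable's
    contribution to the distance (assumed form of the weighting)."""
    mi, mj, w = map(np.asarray, (mi, mj, w))
    return np.sqrt(np.sum(w * (mi - mj) ** 2))

# porosity weighted high, permeability low: the porosity difference dominates
print(weighted_distance([0.20, 1.5], [0.25, 3.0], w=[10.0, 0.1]))
```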


6.12. Cdf
The cumulative distribution function (cdf) for the selected parameter (total oil, total water etc.) and the model variants are visualized on the Cdf tab. The horizontal axis of the graph shows the values of the selected parameter, and the value of the cdf varies from 0 to 1 along the vertical axis. Each model variant corresponds to a point (X,Y) of the cdf graph, indicating that the probability that the value of the selected parameter is greater than or equal to X is Y. Figure 69 shows that the probability that the oil total is greater than X = 294.86 th.sm3 is equal to Y = 0.906.
The cdf is calculated under the assumption that the model variants are equally probable, i.e. the points of the cdf graph are located uniformly along the vertical axis.
In order to visualize the quantiles P10, P50 and P90 on the cdf graph, tick the option Show Quantiles.

Figure 69. Cdf graph.

! Quantiles P10, P50 and P90, shown in the cdf graph as empty diamonds (see figure 69), may not coincide with the points of the cdf graph corresponding to model variants. In such a case the model variant corresponding to the quantile is located to the left (solid diamond), while the value of the parameter for the quantile matches the value calculated in the quantiles table on the Analysis tab.
It can be seen in figure 69 that the quantiles P10 and P90 do not coincide with model variants. For P90 the oil total is 295.59 th.sm3, but for the model variant (to the left) corresponding to P90 the oil total is 294.86 th.sm3 (this value matches the one calculated in the quantiles table on the Analysis tab).


6.13. Proxy model

When the full calculation of a model variant consumes considerable time and computational resources, a Proxy model allows you to rapidly generate an arbitrary number of model variants using the Monte Carlo method. In the GUI it is possible to construct and export the formula of the quadratic Proxy model. This expression can be further used to generate model variants with the Monte Carlo sampling method (see 6.13.3). The Proxy model provides a quadratic approximation of the parameter over the selected variants of the model.
Let us assume a function $F$ is specified at each point of an ensemble $D \subset \mathbb{R}^n$. The initial variables are denoted $x_1, \ldots, x_n$. Let us introduce new variables $z$ following the rule:

$z_1 = x_1, \quad z_2 = x_2, \quad \ldots \quad z_n = x_n,$
$z_{n+1} = x_1 \cdot x_1, \quad z_{n+2} = x_1 \cdot x_2, \quad \ldots \quad z_m = x_n \cdot x_n, \qquad m = (n^2 + n)/2 \qquad (6.1)$

The quadratic polynomial $P$ is constructed with the variables $x_i$ and contains $n+1$ monomials. Let us consider the polynomial $P$ as a linear polynomial of the variables $z_j$. For each variable $z_j$ the Pearson correlation coefficient between $z_j$ and the function $F$ is computed using all data. The $n$ variables with the largest correlation coefficients are selected and denoted $z_1, \ldots, z_n$. The first monomial is always the constant term of the polynomial. Then the approximating polynomial can be written as:

$P(z) = \sum_{i=0}^{n} p_i \cdot z_i, \qquad z_0 = 1 \qquad (6.2)$

where $p_i$, $i = 0, \ldots, n$, is the vector of polynomial coefficients.
The polynomial coefficients $p_i$ are calculated using the least squares method:

$(F - P(z_1, \ldots, z_n))^2 \to \min \qquad (6.3)$
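The construction (6.1)–(6.3) can be reproduced with a few lines of linear algebra. The following Python sketch is a simplified stand-in for the GUI procedure: it builds the z-variables, ranks them by Pearson correlation with F, and fits the coefficients by least squares (the data are synthetic):

```python
import numpy as np

def fit_quadratic_proxy(X, F, n_keep):
    """X: K x n matrix of variable values per variant; F: K target values.
    Returns polynomial coefficients p and the names of the kept monomials."""
    K, n = X.shape
    cols = [X[:, i] for i in range(n)]
    names = [f"x{i+1}" for i in range(n)]
    for i in range(n):                       # quadratic monomials x_i * x_j
        for j in range(i, n):
            cols.append(X[:, i] * X[:, j])
            names.append(f"x{i+1}*x{j+1}")
    Z = np.column_stack(cols)
    r = np.array([np.corrcoef(z, F)[0, 1] for z in Z.T])  # Pearson with F
    keep = np.argsort(-np.abs(r))[:n_keep]   # monomials with largest |r|
    A = np.column_stack([np.ones(K), Z[:, keep]])         # constant term z_0
    p, *_ = np.linalg.lstsq(A, F, rcond=None)             # least squares (6.3)
    return p, [names[k] for k in keep]

rng = np.random.default_rng(0)
X = rng.random((30, 3))
F = 2 + 3 * X[:, 0] - X[:, 1] * X[:, 2] + 0.01 * rng.standard_normal(30)
print(fit_quadratic_proxy(X, F, n_keep=3))
```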

6.13.1. Constructing Proxy model formula

To obtain the Proxy model formula, follow these steps (see figure 70):

• Select variants of the model from the model tree;

• Right-click on the selected variants and select from the appearing menu Create Proxy Model From Selected Variants;

• In the dialog Create Proxy model select an object (at the top left of the window): Groups, Wells, Field FIP etc. and the corresponding parameter at the bottom left of the window;

• Specify the time step for which the proxy model will be created;

• Specify the Significance Threshold. All Pearson correlation coefficients below this value will be ignored when constructing the polynomial;


• Press the button Add entry;

• The parameters of the Proxy model will appear on the right.

Figure 70. Create proxy model dialog.

The Proxy model created based on all variants is shown in figure 71. At the top left of the window the name of the Proxy model is shown, together with the object, the parameter, and the time step for which this model is created. The formula of the quadratic polynomial approximating the given function (in this example, the oil rate) is shown as well.
Real (calculated) values of the oil rate are plotted along the abscissa, and approximated values of the oil rate along the ordinate, i.e. oil rate values calculated using the Proxy model formula with the variables corresponding to the variants of the model. The gray line shows the graph y = x. Variants grouped along the gray line (e.g., variant 5) are well approximated by the created polynomial. Variants located far from this line (e.g., variants 1 and 8) are poorly approximated by the polynomial.
The quality of the match between the values provided by the Proxy model and the calculated results is evaluated by the R2 coefficient (see section Table of coefficients R2). The closer the R2 coefficient is to 1, the better the Proxy model approximates the calculated results. In this example the obtained R2 is equal to 0.808.


Figure 71. Proxy model.

6.13.2. Implementation of artificial neural network


An artificial neural network (ANN) is a mathematical model constructed similarly to the biological neural networks that constitute animal brains. A neural network is not a single algorithm but rather a framework for many different machine learning algorithms working together and processing complex input data [8, 9].
Similar to the system of neurons in a brain, the artificial neural network model is a system of connected, interacting simple processors – artificial neurons (see figure 72). Each connection, like a synapse in a biological brain, can transmit information (a signal) from one artificial neuron to another. After receiving a signal, the artificial neuron processes it and then sends the signal to the neurons connected to it. In a typical network, a signal at a connection between neurons is a real number. Each neuron calculates the weighted sum of all elements of its input vector, and an activation function (semi-linear or sigmoid) is applied to the obtained result. Artificial neurons may have a threshold: the signal is sent only if the aggregate signal exceeds the threshold. Connections between artificial neurons are called edges. Generally, artificial neurons are combined into layers. Different layers can perform different transformations on their input data. Signals travel from the first (input) layer to the last (output) layer, passing through the inner layers.
A neural network is not programmed in the usual sense; it is trained by performing tasks on training sets. Artificial neurons and edges have weights that vary during the training process. A weight can decrease or increase the signal strength at a connection between neurons. Deep learning uses many hidden layers of the artificial neural network. Such an approach models the functionality of a human brain transforming light and sound into vision and hearing.
During training the neural network can find complex dependences between input and output data and generalize them. This means that in case of successful training the network can provide correct results based on data that are absent from the training set and/or incomplete and/or partially degraded.

Figure 72. Artificial neural network scheme.

To create a Proxy model using the neural network, tick Neural Network Proxy. The scheme of the artificial neural network is shown in figure 72. The input data for the neural network are the model variables $x_1, x_2, \ldots, x_n$ ($n$ is the number of variables) for each model variant. Variables can be selected in the dialog Create proxy model (see figure 70). The number of neurons in the hidden layers can be specified by the option Number of Neurons in Hidden Layer. For each model variant the neural network provides the parameter value at the specified time step.
The neural network is trained by correcting the weights at the connections between neurons: $w_{11}^{(1)}, \ldots, w_{nn}^{(1)}, \ldots, w_{11}^{(m)}, \ldots, w_{nn}^{(m)}$ ($m$ is the number of hidden layers of neurons). The network weights vary until the required deviation of the output parameter value ($y$ in figure 72) from its calculated value is obtained or the specified Number of Training Epochs is reached.

! The number of neurons in the hidden layer and the number of training epochs are selected empirically. The number of neurons should not be too large, otherwise the network works well only for the training set, nor too small, otherwise the network cannot be properly trained.
In figure 73 the Proxy model obtained using the artificial neural network is shown. The number of neurons in the hidden layers is set to 20 and the number of training epochs is 100. It can be seen that the quality of the Proxy model is high: R2 equals 0.999.
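tNavigator's network is built in; purely as an outside illustration of the same setup (one hidden layer of 20 neurons, 100 training epochs, sigmoid activation), a comparable regressor could be trained with scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 4))                       # model variables per variant
y = X @ [3.0, -1.0, 2.0, 0.5] + 0.3 * np.sin(5 * X[:, 0])  # target parameter

net = MLPRegressor(hidden_layer_sizes=(20,),   # Number of Neurons in Hidden Layer
                   max_iter=100,               # Number of Training Epochs
                   activation="logistic",      # sigmoid activation function
                   random_state=0)
net.fit(X[:150], y[:150])
print("R2 =", net.score(X[150:], y[150:]))     # quality on unseen variants
```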

6.13.3. Analysis using the Monte Carlo method

The obtained quadratic Proxy model can be used for analysis with the Monte Carlo method. A large number of model variants for the selected parameter are calculated using the obtained proxy formula by substituting the model variables into it. The distribution of variables is specified for the Monte Carlo method and can be uniform, normal, log-normal, triangular or discrete (see section 3.6). The maximum number of variants, the distribution type, etc. can be specified by the user (see figure 74) when starting the Monte Carlo experiment.


Figure 73. Proxy model obtained using the artificial neural network.

Figure 74. Settings of Monte Carlo experiment for Proxy model.

To run a calculation press the button Start Monte Carlo and specify the settings of the Monte Carlo experiment. The tab Monte Carlo Results will be created automatically. On this tab the following instruments are available for further analysis of the obtained model variants: Results Table, Crossplot, Histogram, Analysis and Cdf.
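Outside the GUI, an exported proxy formula can be sampled in the same spirit. A minimal Python sketch with hypothetical variables, distributions and coefficients:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                   # number of Monte Carlo variants
perm_mult = rng.uniform(0.5, 2.0, n)          # uniform distribution
poro_mult = rng.triangular(0.8, 1.0, 1.2, n)  # triangular distribution

# hypothetical exported quadratic proxy formula for oil total (th.sm3)
oil_total = (280.0 + 12.3 * perm_mult + 4.1 * poro_mult
             + 2.7 * perm_mult * poro_mult)

# P90 is the value exceeded with 90% probability, i.e. the 10th percentile
p90, p50, p10 = np.percentile(oil_total, [10, 50, 90])
print(f"P90={p90:.2f}  P50={p50:.2f}  P10={p10:.2f}")
```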

6.14. Creating a group of variants

To work only with a specific set of variants of the model, a group of variants can be set up. Two groupsets – Experiments and Variants – are created automatically. The groupset Experiments contains variants grouped according to the experiments carried out (see section Experimental Design). The groupset Variants includes all model variants.
To create a user-defined group of variants, select the required variants of the model in the tree of variants and right-click on them. In the pop-up menu select Add Variants to Group.


Figure 75. Crossplot: Oil rate (for different model variants) along the Y axis and the variable M_PERM_FIPNUM_2 along the X axis.

Create new Groupset. Specify a groupset name in the pop-up window. To edit a group press the button Groupsets Manager, or in the menu Add Variants to Group select Call Groupsets Manager.

Figure 76. Colorization of variants according to the oil rate gradient

Variants from one group have the same color. Press the button Colorize to colorize all variants of the model according to the available groups in the tree of variants and in the tabs Mds, Graphs and Crossplot. Variants not included in any group are colored grey. For the groupset Variants there is a possibility to colorize variants according to the gradient of a selected parameter (e.g., oil rate, water rate etc.), as shown in figure 76. In the dialog Groupsets Manager select the groupset Variants, press the button Add gradient and select the parameter used to create the gradient (see figure 77).

Figure 77. Groupset Manager. Add gradient

In the dialog Groupset Manager it is possible to:

• Select a color for the variants included in a group by pressing the color rectangle corresponding to the group. To reset the default colors press the button Reset colors to defaults;

• Add a new group by pressing the button Add new group;

• Add variants to the selected group. Select variants in the tree of variants, open the dialog Groupset Manager, tick the group to which the selected variants should be added, and then press the button Add variants to this group. In addition, variants selected in the tree can be added to a group as follows: right-click on the variants and in the pop-up menu select Add variants to group and the group to add the variants to.
Moreover, model variants can be moved between groups belonging to one groupset. Select variants in a group (or in Other) and right-click on them. In the pop-up menu select the group to which to move these variants. To delete the selected variants from a group, select Remove variants from group in the pop-up menu (see figure 78).
Model variants included in a group are visualized with a color (specified in the dialog Groupset Manager) in the tabs Mds, Graphs and Crossplot. To set the color of a group or edit a group in these tabs, press the button Settings (see figure 79). In the pop-up dialog press the button Select coloring mode and then in the pop-up dialog Groupset Manager select a group for setting a color or editing. Two groups of variants (red dots – GroupSet[1], green dots – GroupSet[2]) are shown in figure 79. The other variants are gray.

Figure 78. Groupset Manager. User defined groupset
When switching to the Graphs tab you can tune the colors of the graphs using steps similar to those described above. Press the button Settings and select a color for the created group. As can be seen in figure 80, graphs corresponding to the group GroupSet[1] are colored red, graphs from GroupSet[2] are colored green, while the other ones are gray.


Figure 79. Mds. User defined group of variants

Figure 80. Graphs. User defined group of variants


6.15. Table of coefficients R2

To analyze the quality of the model's history matching for model objects (e.g., wells, fields etc.) and parameters (e.g., oil rate, water rate etc.), a table of coefficients R2 can be calculated. Parameters correspond to the table's columns and objects to the table's rows. For each pair, object (O) – parameter (P), a coefficient R2 is calculated. For the selected object and parameter a set of points $(P_k(H), P_k(C))$, $k = 1, \ldots, N$, is distributed on the plane, where $P_k(H)$ is the historical value and $P_k(C)$ is the calculated value of this parameter at the $k$-th time step, and $N$ is the number of time steps. For the created set of points a trend $T_k$ is calculated using the least squares method. The coefficients R2 are calculated as follows:

$R^2 = 1 - \dfrac{\sum_k (P_k(C) - T_k)^2}{\sum_k \bigl(P_k(C) - \overline{P(C)}\bigr)^2}$

where $\overline{P(C)}$ is the mean of the calculated values. Each coefficient shows the closeness of the calculated data $P_k(C)$ to the trend. If a coefficient R2 equals 1, then the calculated data coincide with the historical ones. If a coefficient R2 is close to zero, then the historical and calculated data differ significantly.
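A minimal sketch of this computation in Python, assuming the trend is linear in the historical values (hypothetical rate data):

```python
import numpy as np

def r2_match(hist, calc):
    """R2 of calculated data against the least-squares trend through
    the (P_k(H), P_k(C)) points, as in the formula above."""
    hist, calc = np.asarray(hist, float), np.asarray(calc, float)
    a, b = np.polyfit(hist, calc, 1)        # trend T_k = a * P_k(H) + b
    trend = a * hist + b
    ss_res = np.sum((calc - trend) ** 2)
    ss_tot = np.sum((calc - calc.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

hist = [100, 110, 120, 130, 140]            # hypothetical historical oil rates
calc = [ 98, 112, 119, 133, 138]            # hypothetical calculated oil rates
print(r2_match(hist, calc))                 # close to 1: a good match
```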
The following objects can be set in the table:
• Objects;

• Fields;

• Groups.

Figure 81. Table R2.

Objects and parameters shown in the R2 table can be added to or removed from the table using the buttons Objects and Parameters, respectively (see figure 81). You can set bound values (Bad match value and Good match value) corresponding to bad and good quality of history matching. If a coefficient R2 is higher than the Good match value, then the history matching quality is good and the table cell is highlighted in green. If a coefficient R2 is lower than the Bad match value, then the history matching quality is bad and the table cell is highlighted in red. Intermediate values of the coefficients R2 are highlighted in yellow. Moreover, this table can be calculated for a selected model variant or group of variants. To do this, select a variant or a group of variants in the list of variants and press the button R2 table.
For convenience, it is possible to add (delete) templates using the buttons Add New Template (Delete Template). For each template you can set different objects and parameters.
Figure 81 shows an example of the R2 table calculated for the 8th model variant. It can be seen that for the well 'PRO-15' the calculated oil rate coincides with the historical rate (the coefficient R2 equals 0.999); however, the water rates are quite different (the coefficient R2 equals 0.072).


6.16. Clusterization
Clusterization is the grouping of items in such a way that items in one group (called a cluster) are more similar (in some sense) to each other than to items in other groups (clusters). One representative can then be taken from each cluster and used for forecast calculations.
Let us suppose that as a result of history matching we have a set of vectors of variables $M = \{\mathbf{m}_1, \ldots, \mathbf{m}_K\}$ for $K$ variants of the model, i.e. the $i$th variant of the model has a vector of variables $\mathbf{m}_i$. The vector's length is equal to the number of variables $N$ defined in the history matching project. Let us consider the $N$-dimensional space $\mathbb{R}^N$, in which each object is a model variant and the object's coordinates are the vector of variables of this variant.
To cluster the set of vectors $M = \{\mathbf{m}_1, \ldots, \mathbf{m}_K\}$ in the space $\mathbb{R}^N$ into $L$ clusters, the K-means algorithm [5, 6] is used. The algorithm tends to minimize the total square deviation of the points of the clusters from their centers:

$V = \sum_{i=1}^{L} \sum_{\mathbf{m}_j \in K_i} (\mathbf{m}_j - \boldsymbol{\mu}_i)^2$

where $K_i$ is the resulting cluster and $\boldsymbol{\mu}_i$ is the centroid of the group of items (vectors) $\mathbf{m}_j \in K_i$, $i = 1, \ldots, L$. At each iteration the algorithm calculates a new centroid $\boldsymbol{\mu}_i$ for each cluster $K_i$ obtained at the previous step using the formula:

$\boldsymbol{\mu}_i = \dfrac{1}{|K_i|} \sum_{\mathbf{m}_j \in K_i} \mathbf{m}_j, \qquad \boldsymbol{\mu}_i = \{\mu_i^1, \ldots, \mu_i^N\}$

where $|K_i|$ is the number of points included in the cluster $K_i$. Having calculated the centroids, the available set of items $M$ is subdivided into clusters again in such a way that the distance between an item and the new cluster center is minimal. This means that for each vector $\mathbf{m}_j$ from the set $M$ the distances $d_i$ between the point $\mathbf{m}_j$ and all cluster centroids are calculated:

$d_i = d(\boldsymbol{\mu}_i, \mathbf{m}_j) = \sqrt{\sum_{s=1}^{N} (\mu_i^s - m_j^s)^2}, \qquad i = 1, \ldots, L$

An item (vector) is included in the cluster $K_p$ if the distance between it and that cluster's centroid is minimal: when the algorithm finds the minimum distance $d_p$ for a vector, i.e. $d_{min} = d_p$, the vector is included in the cluster $K_p$.
The algorithm terminates when the centroids of the clusters stop changing. This happens after a finite number of iterations, since the number of possible subdivisions of a finite set of items is limited and at each iteration the total quadratic deviation $V$ does not increase; therefore, the algorithm cannot loop. The initial centroids of the clusters are selected in such a way that the distances between the centroids are maximal.
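A compact Python sketch of this loop (not the tNavigator implementation), with a greedy farthest-point seeding standing in for the "maximum distance" initialization:

```python
import numpy as np

def kmeans(M, L, max_iter=100):
    """Cluster variant vectors M (K x N) into L clusters; returns labels, mu.
    Assumes no cluster ever becomes empty during the iterations."""
    centroids = [M[0]]
    for _ in range(L - 1):                   # greedy farthest-point seeding
        d = np.min([np.linalg.norm(M - c, axis=1) for c in centroids], axis=0)
        centroids.append(M[np.argmax(d)])
    mu = np.array(centroids)
    for _ in range(max_iter):
        # assign every point to the cluster with the nearest centroid
        labels = np.argmin([np.linalg.norm(M - c, axis=1) for c in mu], axis=0)
        new_mu = np.array([M[labels == i].mean(axis=0) for i in range(L)])
        if np.allclose(new_mu, mu):          # centroids stopped changing
            break
        mu = new_mu
    return labels, mu

M = np.random.default_rng(3).random((40, 6))   # 40 variants, 6 variables
labels, mu = kmeans(M, L=3)
print(np.bincount(labels))                     # cluster sizes
```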


7. From AHM to Forecast

Different variants of the model's history matching can result in different values of forecast total parameters; therefore, several model variants with high-quality history matching are required for the calculation of a forecast.
To go from history matching to forecast, select a set of best variants (e.g., the first 20 variants) from the obtained history matching variants. Using the Mds method, project these points onto a plane.

An example of how to create a forecast is described in the training tutorial AHM1.8. How to go from AHM to Forecast.

If clusters appear, as shown in figure 82, this means that the values of the variables for the variants included in a cluster are close to each other. Thus, in order to create a model forecast it is enough to take only one variant (representative) from each cluster. In contrast to history matching, in this case we maximize an objective function.

Figure 82. Clustering a set of model variants to create a forecast.

An advantage of such an approach is that, first, it decreases the number of forecasts from the number of all variants to the number of clusters. Second, among variants with similar values of the objective function, genuinely different sets of parameters are selected.


7.1. Creating Forecast in GUI

Select a set of best variants (e.g., the first 20 variants) from the obtained history matching variants. Right-click on the selected variants and choose Create Forecast User Variants From Selected Variants in the appearing menu. In the base model project window go to Documents. Configure Forecast From AHM project... In the Configure Forecast window the following parameters are available (see figure 83):
• Main Options:
– Forecast from Step;
– Step Length (one year, six months, three months, month, week, day);
– Forecast Length (one year, 5 years, 15 years, Custom).
• Well.
• Load User Forecast Schedule File.

Figure 83. Configure forecast window.

There are three ways to create a forecast:


• NFA (No Further Action) scenario;

• Load User Forecast schedule file;

• Load User Forecast schedule file with variables.

7.1.1. NFA (No Further Action) scenario

In this scenario all wells are on BHP control taken from the last time step (see figure 83). In the dialog Configure Forecast the well option Bottom Hole Pressure is ticked by default. Step Length and Forecast Length can be selected in the dialog. The schedule file is created automatically.
Then the selected variants must be recalculated with different output settings: by default only well data are written to disk, but grid properties (pressure, saturations etc.) are needed to run a forecast calculation. On the tab Calculations the forecast variants will appear in a queue, and the calculations of these variants will start as soon as the corresponding base cases (the selected variants) have been recalculated and all required information has been recorded.
In figure 84 the future production range for the selected variants is shown.

Figure 84. Oil total. 15 years forecast.


7.1.2. Load User Forecast schedule file

Alternatively, a forecast can be loaded as a user schedule file. In the dialog Configure Forecast press the button Load User Forecast schedule file and select a schedule file. An example of the schedule file is shown in figure 85. In this file a new well "PRO-N" is specified and all wells are on BHP control (keyword WCONPROD, see 12.19.42). It is required to specify dates (keyword DATES, see 12.19.124) which will be used instead of the dates specified automatically.

Figure 85. An example of user forecast schedule file.

Then the selected variants must be recalculated with different output settings: by default only well data are written to disk, but grid properties (pressure, saturations etc.) are needed to run a forecast calculation. On the tab Calculations the forecast variants will appear in a queue, and the calculations of these variants will start as soon as the corresponding base cases (the selected variants) have been recalculated and all required information has been recorded.
In figure 86 the future production range provided by different HM variants for the well "PRO-N" is shown. Different HM variants provide different estimates of future production for this well.


Figure 86. Production range provided by different HM variants for the well ”PRO-N”.


7.1.3. Load User Forecast schedule file with variables

The forecast can also be loaded as a user schedule file with variables. The schedule file specifies what is going to happen during the forecast period, for example, changing a well location or trajectory, well controls etc. In the schedule file, variables and their variation ranges are specified using the keyword DEFINES (see 12.1.24). Other required keywords, e.g., WELSPECS (see 12.19.3), COMPDAT (see 12.19.6) etc., are specified in the schedule file as well. It is required to specify dates for events (keyword DATES, see 12.19.124). These dates will be used instead of the dates specified by default.

Figure 87. An example of user forecast schedule file with variables.

In the example of the schedule file (see figure 87) a new well "PRO-N" is specified and all wells are on BHP control (keyword WCONPROD, see 12.19.42). In the keyword DEFINES (see 12.1.24) two variables, BHP1 and BHPN, are defined; they are the bottom hole pressures for the well "PRO-1" and the new well "PRO-N".
If Create Forecast Experiment Group From Selected Variants is chosen, then for each selected HM variant a group of forecasts will be calculated. The number of forecasts is defined by the selected algorithm (see figure 88).

Figure 88. Creating a group of forecasts from selected variants.

Then the selected variants must be recalculated with different output settings: by default only well data are written to disk, but grid properties (pressure, saturations etc.) are needed to run a forecast calculation. On the tab Calculations the forecast variants will appear in a queue, and the calculations of these variants will start as soon as the corresponding base cases (the selected variants) have been recalculated and all required information has been recorded.
For the example of creating a forecast shown in figure 88 the Tornado experiment is selected. As can be seen in figure 89, for each history matching variant five forecast variants have been calculated.


Figure 89. Group of forecasts from selected variants.


8. Workflows
All of the Designer modules in tNavigator support Python-based workflows. This feature enables users to record and replay sequences of functional steps for input data interpretation, building static models, dynamic simulations, postprocessing of results, uncertainty analysis and history matching. Workflows can also be used for connecting various modules of tNavigator and calling external user scripts and third-party software like Excel™.
For example, one could set up an arbitrary user-defined workflow which would include step-by-step building of a structural model in Geology Designer, followed by snapping seismic surfaces to match markers, grid generation, upscaling, SGS property interpolation and dynamic model initialization with static and dynamic uncertainty variables. This static-to-simulation workflow can be run from the Assisted History Matching module and provide a comprehensive sensitivity analysis of simulation results with respect to variations of static and dynamic parameters.

Examples of workflow usage are shown in the training tutorials:

• GDAHM1.1. How To Use Integrated Workflow AHM.

• GDAHM1.2. How To Use Integrated Uncertainty.

8.1. Editing workflow

The project may contain multiple workflows. They are stored as *.py scripts under Workflows in the project directory and can be loaded into another project (see Import workflow below). It is not recommended to edit them manually. If you still intend to do so at your own risk, respect the #region...#endregion comment lines. Though ignored by Python, they are important for the preprocessing routine.
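For orientation only, a workflow script has roughly the following shape; the calculation bodies are generated by tNavigator, and everything between the markers below is a hypothetical placeholder rather than actual generated code:

```python
#region Print Log
# ...code generated for the "Print Log" calculation (placeholder)...
#endregion

#region Add custom code
# user code added via "Add custom code" lives between its own markers
print("hello from a workflow step")
#endregion
```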
To edit a workflow, open the Calculations and Workflows window, which may be ac-
cessed either from the top menu (File → Workflows) or via the keyboard shortcut Alt+W.
The top panel of the window contains the interface for handling workflows, including the following elements:
• Dropdown list lets you select a workflow among those present in the project.

• Add creates a new empty workflow.

• Delete removes a workflow (disabled if there is currently only one workflow).

• Duplicate creates a copy of the current workflow.

• Rename renames the current workflow.

• Import workflow (on the right) reads a workflow from a file.


The left column of the window contains all available calculations.


The list of available actions is split into the following groups:

• Utilities are the common utilities, including:

◦ Print Log sends a message to the log file.


◦ Add custom code lets you run arbitrary code, including another workflow (select
Workflows on the lower right panel).

• Objective Function creates an objective function from standard terms (discrepancies in oil, gas, water, and BHP).

• Variables Filter creates a variable filter.

• Launching Experiment allows launching certain types of experiments, namely Latin hypercube and various kinds of Optimization.

The middle column of the window contains the calculations already added to the current
workflow. They can be executed all together or in a selective manner, see Running workflow.
Besides those, it contains the list of model variables, see Creating variables.
Between these columns is the interface for handling individual calculations, including the
following elements:

• Insert adds the selected available calculation to the current workflow.

• Delete removes the selected calculation from workflow.

• Up, Down move the selected calculation up and down the sequence within the
current workflow.

• Duplicate creates a copy of the selected calculation.

• Show code displays the read-only Python source code of the selected calculation.

The right column of the window contains the parameters of the currently selected calcula-
tion.

8.2. Creating variables


Variables serve for producing multiple different variants of the model in the course of assisted history matching (AHM). See the AHM User Guide for more details. Nearly any numeric parameter in the workflows can be turned into a variable.
In the window Calculations and Workflows press the button Add Variable (see figure 90) or right-click on the free space below the list of variables. The following parameters should be specified:

• Name. Variable name;


Figure 90. Creating a variable

• Value. Initial variable value;

• Min. Minimal variable value;

• Max. Maximal variable value;

• Type. Variable type. The following variable types are available:

– INTEGER – integer number (the variable's minimum and maximum should be integers as well);
– REAL – real number;
– STRING – string. In this case, instead of a minimum and maximum, you have to enter the list of possible values.

To use a variable in some calculation, type its name instead of any parameter value.

8.3. Running workflow


To reproduce a workflow, open it in the Calculations and Workflows window. The middle
column will display the list of calculations contained in the workflow. Each of them can be
toggled on/off separately.
To run all checked calculations at once, press the Run Workflow button below. The
calculations will start executing in the same order as they follow in the list. Once a line is


successfully executed, it is highlighted green. The visualization area displays the ongoing
changes immediately as they occur.
If a line can’t be executed because of an error (say, if it relies on an object which does not
exist in the current model), it is highlighted red and the execution stops. In this case you may
want to uncheck this and all previous lines, run the faulty action manually via the interface (if
you still need it), and resume running the workflow.
A workflow can be tested without actually running it. To do so, click the Test button in the
lower left part of the window. All checked calculations will be tested for consistency. Those
which lack the necessary data to run will be highlighted red.


Figure 91. Using a variable SEISER in the Calculator.

9. Run AHM from Model and Geology Designers

The module of assisted history matching (AHM) and uncertainty analysis can be run via a workflow directly from Model and Geology Designer. The integrated history matching allows you to vary structural and dynamic parameters in order to find the best model realization reproducing the historical data.
To use history matching via a workflow, go to the top menu Document and select the option Workflows from the drop-down list.
Create a variable as described in 8.2. Creating variables. The created variable can be used in:

• the Calculator. A variable is specified between @...@ symbols (see figure 91);

• a Calculation. Type the variable name instead of any parameter value. For example, in the calculation Adjust Variogram the variogram's Azimuth and Ranges are replaced by variables (see figure 92). The initial variable value is shown to the right. In the calculation Adjust Analytical RP Table all parameters specified via variables are shown in green.


Figure 92. Using variables in a calculation.

Examples of how to use workflows are shown in the training tutorials:

• GDAHM1.1. How To Use Integrated Workflow AHM;

• GDAHM1.2. How To Use Integrated Uncertainty.

From Geology and Model Designer the AHM module can be launched in two ways:

• Using the current hydrodynamic model (integrated modeling).
Integration of the modules Geology Designer, Model Designer, Simulator, AHM;

• Using a table.
This way can be used for uncertainty analysis of the model's properties and sensitivity analysis of variables with respect to volumes of fluid in place. In this case it is not necessary to build a full hydrodynamic model.
In the workflow it is required to specify variables and use them in the calculations of volumes in place, and to create a statistics table. The creation of the statistics table should be the last action in the workflow list. When launching the AHM module, select Use table and an appropriate table. Use Experimental Design for uncertainty analysis or Optimization Algorithms for optimization.


Figure 93. The launch of uncertainties analysis using a table.

An example of how to use uncertainty analysis via a table is presented in the training tutorial:

• GD3.4. How To Do Volume Estimation (How to estimate the volumes in place).


9.1. Use of Discrete Fourier transform algorithm for history matching

In practice, measured values of the permeability of the reservoir rock are available only in blocks where rock samples were taken. Therefore, to create a geological model, an interpolation of the available permeability data is carried out. As a consequence, the calculation results of a model created in this way can differ from the historical data.
Thus, it is required to solve an inverse problem, i.e. to find the parameters of the model which provide results close to the historical data. In general we have to solve a minimization problem whose dimension is equal to the number of blocks, i.e. of the order of $10^5$–$10^7$. Most standard optimization algorithms (e.g., Differential Evolution, the Particle Swarm Optimization algorithm) are obviously inefficient for such a huge number of variables.
A standard approach to this problem is to subdivide the model grid into regions, then specify a multiplier for each region and use the multipliers as variables in history matching. As a result, the number of variables is significantly reduced. However, this approach has several disadvantages:

• there is no universal way to subdivide the model grid into regions, therefore it is not clear how to choose these regions;

• the geological structure of the model can be violated, since the property values in different regions are multiplied by different independent multipliers.

tNavigator implements another approach, based on the discrete cosine transform (DCT) algorithm. This approach overcomes the above-mentioned drawbacks while using a smaller number of variables.

An example of how to use the discrete cosine transform algorithm for history matching is demonstrated in the training tutorial:

• MDAHM1.2. How To Use 3D Cos Transf Workflow AHM (Accelerated Model Designer to AHM workflow based on Discrete Cosine Transform of permeability).

9.1.1. Discrete cosine transform (DCT) algorithm

If the values of parameters in grid blocks are not independent and there is a spatial correlation, then the number of parameters required to describe a grid property can be reduced. The stronger the correlation, the fewer parameters are required.
Let us denote the vector of grid property values as $\mathbf{m} = (m_1, \ldots, m_N)^T$, where $N = N_x \times N_y \times N_z$ is the number of grid blocks. The vector $\mathbf{m}$ is decomposed in an orthogonal basis $\mathbf{e}_i = (e_1^i, \ldots, e_N^i)^T$:

$\mathbf{m} = \sum_{i=1}^{N} c_i \mathbf{e}_i = \sum_{i=1}^{N} \mathbf{l}_i, \qquad (9.1)$


where $\mathbf{l}_i = c_i \mathbf{e}_i$ and $\mathbf{c} = (c_1, \ldots, c_N)^T$ is the vector of coefficients (the spectrum) of the decomposition of the property values $\mathbf{m}$.
To decompose a vector of grid property values, the 3D discrete cosine transform (DCT) algorithm of type II is used. The decomposition coefficients $\mathbf{c}$ are calculated as:

$c_{pqs} = \sqrt{\dfrac{8}{N_x N_y N_z}} \sum_{i=0}^{N_x-1} \sum_{j=0}^{N_y-1} \sum_{k=0}^{N_z-1} m_{ijk} \cos\left(\dfrac{\pi(i+1/2)p}{N_x}\right) \cos\left(\dfrac{\pi(j+1/2)q}{N_y}\right) \cos\left(\dfrac{\pi(k+1/2)s}{N_z}\right)$

where $p = 1, \ldots, N_x$, $q = 1, \ldots, N_y$ and $s = 1, \ldots, N_z$.
Obviously, not all vectors $\mathbf{l}_i$ contribute equally to the decomposition (9.1). In order to estimate the contribution of a vector $\mathbf{l}_i$, let us introduce a relative information weight calculated as follows:

$E(\mathbf{l}_i) = \dfrac{|c_i|^2}{\sum_{j=1}^{N} |c_j|^2} \times 100\%.$

This value shows the portion of data (relative to all data stored in the property) contained in the vector $\mathbf{l}_i$.
Let us sort all terms of the right-hand side of (9.1) in descending order and separate the vectors with a small contribution to the data. The portion of rejected vectors (in percent) is denoted $P_{comp}$ (and specified by the option Compression level). $\{\mathbf{l}_{i_1}, \mathbf{l}_{i_2}, \ldots, \mathbf{l}_{i_N}\}$ is the sorted set of vectors. After rejecting the $N_{comp} = N \times P_{comp}/100$ vectors having minimal relative information weight, the set of remaining vectors is $\Omega = \{\mathbf{l}_{i_1}, \mathbf{l}_{i_2}, \ldots, \mathbf{l}_{i_{N'}}\}$, where $N' = N - N_{comp}$. Thus the decomposition (9.1) can be rewritten as:

$\mathbf{m} = \sum_{j=1}^{N'} \mathbf{l}_{i_j} + \mathbf{l}_{comp},$

where

$\mathbf{l}_{comp} = \sum_{j=N'+1}^{N} \mathbf{l}_{i_j}$

and the relative information weight is recalculated as:

$E(\mathbf{l}_i) = \dfrac{|c_i|^2}{\sum_{\mathbf{l}_j \in \Omega} |c_j|^2} \times 100\%.$

Then the relative information weight for the set of vectors $\Omega = \{\mathbf{l}_{i_1}, \mathbf{l}_{i_2}, \ldots, \mathbf{l}_{i_{N'}}\}$ is calculated as:

$E(N') = \sum_{j=1}^{N'} E(\mathbf{l}_{i_j})$

and shows the portion of data (in comparison with all property data) contained in the set $\Omega$. The relative information weight is a function of the number $N'$. A steep increase of the function $E(N')$ with increasing $N'$ means that the property data are contained in a smaller number of coefficients (the data are well correlated) and fewer variables need to be defined. A flat $E(N')$ function indicates that many coefficients are required to reproduce the main features of the property, i.e. many variables should be defined in the model.
The new model variables are the multipliers $W_1, W_2, \ldots$. The decomposition terms with the largest relative information weight, which are multiplied by $W_1, W_2, \ldots$, are denoted $\{\tilde{\mathbf{l}}_1, \tilde{\mathbf{l}}_2, \ldots, \tilde{\mathbf{l}}_k\}$. The portion of the relative information weight (i.e. the portion of data) contained in this set of vectors is equal to $E_{variation}$ (specified by the option Variation).
The set of vectors $\{\tilde{\mathbf{l}}_1, \tilde{\mathbf{l}}_2, \ldots, \tilde{\mathbf{l}}_k\}$ is the first $k$ vectors from the set $\Omega = \{\mathbf{l}_{i_1}, \mathbf{l}_{i_2}, \ldots, \mathbf{l}_{i_{N'}}\}$ such that the sum of the relative weights of the $k$ vectors is higher than $E_{variation}$. If the value of $E_{variation}$ (specified by the option Variation) is equal to 100%, then all vectors from the set $\Omega$ are selected as the set of $k$ vectors.
The required number of model variables is denoted $N_{var}$ (and specified by the option Number of output variables). If the number $k$ of vectors with large relative weights is larger than the number of variables $N_{var}$, then the set of vectors $\{\tilde{\mathbf{l}}_1, \tilde{\mathbf{l}}_2, \ldots, \tilde{\mathbf{l}}_k\}$ is subdivided into $N_{var}$ groups $G_i$ of consecutive vectors in such a way that the relative information weight of each group is of the same order. The multiplier $W_i$ is assigned to each group, and all vectors from a group are multiplied by this multiplier.
Thus the decomposition (9.1) can be rewritten as:

$\mathbf{m} = \tilde{\mathbf{l}}_1 + \sum_{i=1}^{N_{var}} W_i \sum_{\tilde{\mathbf{l}}_j \in G_i} \tilde{\mathbf{l}}_j + \sum_{j=k+1}^{N'} \mathbf{l}_{i_j} + \mathbf{l}_{comp} = \mathbf{m}_{Mean} + \sum_{i=1}^{N_{var}} W_i \mathbf{m}_i + \mathbf{m}_{rest} + \mathbf{l}_{comp}$

where

• $\mathbf{m}_{Mean} = \tilde{\mathbf{l}}_1$ is the mean data value;

• $\mathbf{m}_i = \sum_{\tilde{\mathbf{l}}_j \in G_i} \tilde{\mathbf{l}}_j$;

• $\mathbf{m}_{rest} = \sum_{j=k+1}^{N'} \mathbf{l}_{i_j}$ contains the rest of the data;

• $W_1, \ldots, W_{N_{var}}$ are the multipliers used as variables for history matching.
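Outside tNavigator, the same compression idea can be tried with SciPy's DCT-II. A sketch on synthetic data (the normalization conventions of scipy.fft may differ from the formulas above):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
perm = rng.lognormal(size=(20, 20, 5))        # synthetic 3D permeability array

c = dctn(perm, type=2, norm="ortho")          # decomposition coefficients c_pqs
E = np.abs(c) ** 2 / np.sum(np.abs(c) ** 2)   # relative information weights

# "Compression level": reject the coefficients with the smallest weights
order = np.argsort(-E.ravel())                # sort by descending weight
keep = np.zeros(c.size, dtype=bool)
keep[order[: c.size // 20]] = True            # keep e.g. the top 5% of terms
c_kept = np.where(keep.reshape(c.shape), c, 0.0)

approx = idctn(c_kept, type=2, norm="ortho")  # property rebuilt from kept terms
print("kept data portion:", E.ravel()[order[: c.size // 20]].sum())
```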

9.1.2. How to use DCT via GUI

Discrete cosine decomposition of a grid property can be carried out in Model and Geology Designers (Calculations → Auxiliary Calculations → Expand Grid Property in Cosines). However, to apply the DCT algorithm to a grid property of an existing model for history matching, it is recommended to use Model Designer and follow these steps:

1. Import the model to Model Designer: go to the top menu Document and select Import Data from Existing Model;

2. Create a hydrodynamic model by pressing the button Open Dynamic Model on the top panel;

3. Go to the top menu Document and select Workflows;


Figure 94. Specifying the DCT algorithm parameters.

4. Press the button and select Expand Grid Property in Cosines (see figure 94). The following parameters should be specified:

• Grid. Select a grid;
• Property. Specify the property to which the DCT algorithm will be applied;
• Number of output variables. Specify the number of model variables $N_{var}$;
• Compression level. $P_{comp}$ is the portion of rejected vectors (in percent) from the set of vectors $\{\mathbf{l}_{i_1}, \mathbf{l}_{i_2}, \ldots, \mathbf{l}_{i_N}\}$;
• Variation level. $E_{variation}$ is the portion of the property data contained in the set of vectors with the largest contribution to the decomposition;
• Output grid properties prefix. A prefix for the names of the output vectors $\{\mathbf{m}_1, \ldots, \mathbf{m}_{N_{var}}\}$ and $\mathbf{m}_{rest}$;
• Output Variable prefix. A prefix for the names of the output variables $W_1, \ldots, W_{N_{var}}$.

5. After completion of the discrete cosine transform, several properties containing the mean value ($\mathbf{m}_{Mean}$), the terms of the decomposition ($\mathbf{m}_i$) and the rest of the data ($\mathbf{m}_{rest}$) will be generated on the tab Geometry Objects. Property (see figure 95). The created variables and the arithmetic expression used for history matching are shown on the tabs Input variables and Calculator, respectively;

6. Select Input variables (see figure 96). The created decomposition variables will be shown to the right. Their base, Min. and Max values can be changed by double-clicking on the selected value;

7. Select Calculator (see figure 97); the decomposition formula is shown to the right;


Figure 95. Obtained DCT properties: permX_Mean contains the mean value, the properties permX_1, permX_2 and permX_3 are terms of the decomposition, and permX_rest is the rest of the data.

8. To run history matching from the Model Designer window press the button .

! The number of output variables can be less than $N_{var}$. This may happen when the relative weights of the vectors $\{\tilde{\mathbf{l}}_1, \tilde{\mathbf{l}}_2, \ldots, \tilde{\mathbf{l}}_k\}$ are very high and the number of vectors $k$ is less than $N_{var}$.
This feature is also accessible as a procedure in a workflow, see 8.1.


Figure 96. Variables of decomposition for history matching.

Figure 97. The decomposition formula for history matching.


10. References
[1] Nelder, J.A. and Mead, R., A simplex method for function minimization, Comput. J., 7, pp. 308–313, 1965.

[2] Kathrada, Muhammad, Uncertainty evaluation of reservoir simulation models using particle swarms and hierarchical clustering, Doctoral dissertation, Heriot-Watt University, 2009.

[3] Kruskal, J.B., Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis, Psychometrika, pp. 1–27, 1964.

[4] Johnson, Richard A. and Wichern, Dean W., Applied Multivariate Statistical Analysis, 6th ed., Pearson, 2007.

[5] Steinhaus, H., Sur la division des corps matériels en parties, Bulletin Polish Acad. Sci. Math., 1956.

[6] Lloyd, S.P., Least squares quantization in PCM, IEEE Transactions on Information Theory, 1982.

[7] Mohamed, L., Christie, M., Demyanov, V., History Matching and Uncertainty Quantification: Multiobjective Particle Swarm Optimisation Approach, SPE 143067, Vienna, Austria, 23–26 May 2011.

[8] Hertz, J., Palmer, R.G., Krogh, A.S., Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.

[9] Aggarwal, C.C., Neural Networks and Deep Learning, Springer, 2018.

[10] Bahvalov, N.S., Zhidkov, N.P., Kobelkov, G.M., Numerical Methods, M. «Nauka», 1987 [in Russian].

[11] Deutsch, Clayton V., Geostatistical Reservoir Modeling, Oxford University Press, 2002.

[12] Bardossy, A., Introduction to Geostatistics, University of Stuttgart.

[13] Conte, S.D. and de Boor, Carl, Elementary Numerical Analysis, McGraw-Hill Book Company, 1980.

[14] Chiles, J-P. and Delfiner, P., Geostatistics: Modeling Spatial Uncertainty, Wiley & Sons, Canada, 1999.

[15] Demianov, V.V. and Savelieva, E.A., Geostatistics: Theory and Practice, M. «Nauka», 2010 [in Russian].


Rock Flow Dynamics

Phone: +1 713-337-4450
Fax: +1 713-337-4454
Address: 2200 Post Oak Boulevard, STE 1260, Houston, TX 77056
E-mail: tnavigator@rfdyn.com
Web: http://rfdyn.com

To locate the office nearest to you, please visit https://rfdyn.com/contact/

© Rock Flow Dynamics. All rights reserved. 15.03.2019
