J191378 DOI: 10.2118/191378-PA Date: 5-December-18 Stage: Page: 2409 Total Pages: 19

Robust Life-Cycle Production Optimization With a Support-Vector-Regression Proxy

Zhenyu Guo and Albert C. Reynolds, University of Tulsa

Summary
We design a new and general work flow for efficient estimation of the optimal well controls for the robust production-optimization problem using support-vector regression (SVR), where the cost function is the net present value (NPV). Given a set of simulation results, an
SVR model is built as a proxy to approximate a reservoir-simulation model, and then the estimated optimal controls are found by maximizing NPV using the SVR proxy as the forward model. The gradient of the SVR model can be computed analytically, so a steepest-ascent algorithm can be applied easily and efficiently to maximize NPV. To the best of our knowledge, this is the first SVR application to the

optimal well-control problem. We provide insight and information on proper training of the SVR proxy for life-cycle production optimization. In particular, we develop and implement a new iterative-sampling-refinement algorithm that is designed specifically to promote
the accuracy of the SVR model for robust production optimization. One key observation that is important for reservoir optimization is
that SVR produces a high-fidelity model near an optimal point, but at points far away, we only need SVR to produce reasonable approximations of the predicted output from the reservoir-simulation model. Because running an SVR model is computationally more efficient
than running a full-scale reservoir-simulation model, the large computational cost spent on multiple forward-reservoir-simulation runs
for robust optimization is significantly reduced by applying the proposed method. We compare the performance of the proposed method using SVR runs with that of the popular stochastic simplex approximate gradient (StoSAG) using reservoir-simulation runs for three synthetic examples, including one field-scale example. We also compare the optimization performance of our proposed method with that obtained from a linear-response-surface model and from multiple SVR proxies, one built for each geological model.

Introduction
Life-cycle production optimization is a subsequent step of assisted history matching in reservoir management (Brouwer and Jansen
2004; Jansen et al. 2005, 2009; Peters et al. 2010; Chen and Reynolds 2016; Chen et al. 2017a). By optimizing the well operating conditions at each control step during a reservoir’s lifetime, life-cycle production optimization seeks the maximum economic benefit, usually measured as NPV. Robust life-cycle production optimization pertains to the case where the geological uncertainty
of the reservoir is represented by a set (ensemble) of reservoir realizations (reservoir models). Compared with production optimization
using a single-reservoir model, robust production optimization with uncertain reservoir models will need many more forward-reservoir-
simulation runs (usually proportional to the number of uncertain reservoir models) to obtain the optimal solution. In practice, considering
the difficulties and cost of operations, the length of each control step used in life-cycle production optimization typically is from 1 to
6 months. Most of the literature on the optimal well-control problem (Sarma et al. 2005; van Essen et al. 2006, 2011; Kraaijevanger et al.
2007; Chen et al. 2009, 2012; Jansen et al. 2009; van Essen et al. 2009; Chen 2011; Fonseca et al. 2013; Isebor and Durlofsky 2014a, b;
Oliveira and Reynolds 2014; Chen and Reynolds 2016) uses a full-scale finite-volume or a finite-difference reservoir-simulation model
as the forward model to predict the NPV of each uncertain reservoir model for the set of controls generated at each iteration during the
optimization procedure. Among the numerous methods for production optimization, the gradient-based method with adjoint gradient has
shown better computational efficiency than methods that are derivative free or use only a stochastic approximation of the gradient. A
multiscale method (Moraes et al. 2017) has been used to improve the computational efficiency of obtaining the adjoint gradient for history matching. If the adjoint gradient is not available, StoSAG (Fonseca et al. 2015, 2016; Chen et al. 2017a; Lu et al. 2017; Chen and Reynolds 2018) provides a sound alternative for robust optimization. StoSAG has a sound theoretical basis, which indicates it should perform better than ensemble-based optimization (EnOpt) when the geological uncertainty is large.
Depending on the scale of the problem, a single forward-reservoir-simulation run might take from 1 hour to more than 1 day. Therefore, robust optimization performed by running reservoir-simulation models is quite computationally expensive. To reduce the
computational cost, much work has been performed on reducing the scale of reservoir models, such as reduced-order modeling using
proper orthogonal decomposition (van Doren et al. 2006; Cardoso and Durlofsky 2010; Gildin et al. 2013; He and Durlofsky 2014;
Chen et al. 2015; Jansen and Durlofsky 2017). With a reduced-order model, robust production optimization can be performed more efficiently. However, the development of these reduced-order models requires access to the source code of the simulator, which is not available for a commercial reservoir simulator.
On the other hand, a data-driven model (Yousef et al. 2005; Sayarpour 2008; Weber 2009; Nguyen 2012; Cao et al. 2015; Zhao et al.
2016; Guo et al. 2018a, b) does not require a priori knowledge of a detailed geological model or any access to a reservoir simulator. Nevertheless, a data-driven model can serve as a proxy simulation model for robust life-cycle production optimization. Specifically, a data-driven model is obtained by determining its parameters through history matching of production data. The development of a data-driven model requires a relatively small investment of time compared with the time required to history match a full-scale simulation model. In addition, production optimization dependent on running data-driven models (Lake et al.
2007; Lerlertpakdee et al. 2014; Guo et al. 2018b) is far more efficient than running full-scale reservoir-simulation models. One issue
associated with data-driven models is the requirement that a large number of production data are available. Thus, for greenfields, it might
not be feasible to use the data-driven model for robust production optimization. Moreover, data-driven models such as the capacitance-
resistance model (CRM) (Yousef et al. 2006), interwell numerical simulation model (INSIM) (Zhao et al. 2016), and INSIM with front
tracking (INSIM-FT) (Guo et al. 2018a, b) currently only apply for 2D two-phase flow (water/oil). Recently, INSIM-FT has been
extended to 3D flow with gravitational effects included, but still only applies for water/oil systems (Guo and Reynolds in press). On the

Copyright © 2018 Society of Petroleum Engineers

Original SPE manuscript received for review 25 September 2017. Revised manuscript received for review 3 April 2018. Paper (SPE 191378) peer approved 10 May 2018.

December 2018 SPE Journal 2409

other hand, a machine-learning model can be used to emulate any input/output relations for three-phase reservoir-simulation problems
and compositional reservoir-simulation problems, as long as we have enough training data. The flexibility of machine-learning methods
for reservoir optimization is another important motivation for doing this research. Proxies derived by means of machine learning provide
an alternative to a data-driven model, but machine-learning methods require a geological model to train the proxy.
Although machine learning comprises many different branches, nonlinear regression is a particularly useful tool for robust life-cycle production optimization. Nonlinear regression enables us to build a proxy model as a computationally efficient forward model to replace a full-
scale reservoir-simulation model, such that the heavy computational cost of robust production optimization can be significantly reduced.
Proxies dependent on response surfaces are commonly used in the petroleum industry (Eide et al. 1994; Yeten et al. 2005; Slotte and
Smorgrav 2008; He et al. 2015, 2016; Chen et al. 2017b). A response-surface proxy is a parameterized mathematical formulation that
approximates the input/output relation of one target function (He et al. 2015). For instance, the reservoir simulator that maps a series of
model parameters to a series of reservoir-flow responses can be approximated by a set of response-surface proxies. Naively, polynomial
regression is a candidate for building such a response-surface proxy of a reservoir-simulation model, although it is not suitable for highly
nonlinear input/output relations (Morris et al. 2011). Other methods to build response-surface-proxy models have been investigated,
which include Kriging (Landa and Güyagüler 2003) and spline interpolation (Castellini et al. 2010). These methods share an issue common to all interpolation-based methods: overfitting the data when the training outputs are corrupted with noise.
For reservoir-simulation problems, training data come from simulated-flow responses, which are mingled with numerical noise from
inexact solutions of both the linear and nonlinear solvers in reservoir simulation (Guo et al. 2017b). In the machine-learning area, many regression methods alleviate data overfitting while retaining reasonably small predictive bias. One such machine-learning

method that has been widely used is SVR (Drucker et al. 1997; Saunders et al. 1998; Suykens et al. 2002; Guo et al. 2017a, 2018c).
The basic idea of SVR is to transform the input data from the original space into a higher-dimensional feature space, where the out-
put data have a linear relationship with the variables in the feature space. The parameters defining the linear relationship between the
output and the feature variables are obtained by solving an optimization problem that minimizes the complexity of the SVR model and
the deviation of the predicted response from the “true” response. However, the transformation into the feature space is usually not performed explicitly because it is computationally infeasible (Suykens et al. 2002). Instead, a kernel function that satisfies the Mercer (1909) condition is introduced to convert the optimization problem from the primal space into a dual space, which makes the
problem solvable. One popular version of SVR, referred to as ε-SVR (Drucker et al. 1997), includes only part of the training data in the final model to reduce the training cost. Typically, one solves a convex quadratic-programming problem in the dual space to determine an ε-SVR model. Later, a least-squares (LS) version of SVR (LS-SVR) was proposed for nonlinear-regression problems (Saunders et al. 1998). Compared with ε-SVR, an LS-SVR solution is easier to obtain because the associated optimization problem involves only equality constraints, whereas the ε-SVR problem involves inequality constraints. As shown in Appendix A, the main computational task in obtaining an LS-SVR model is solving the linear system of Eq. A-10, whereas obtaining an ε-SVR model requires an iterative method with a higher computational cost (Platt 1998). Because of its simplicity and computational efficiency, LS-SVR is applied in this research.
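Appendix A is not reproduced in this excerpt, but the structure of the LS-SVR training step it describes, solving a single linear (KKT) system in the dual variables, can be sketched generically. This follows the standard LS-SVR formulation of Suykens et al. (2002); the regularization constant `gamma`, the function names, and the data are illustrative assumptions, not the paper's Eq. A-10 notation.

```python
import numpy as np

def rbf_kernel_matrix(X, sigma):
    # Pairwise RBF kernel: K[i, j] = exp(-||x_i - x_j||^2 / sigma^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2)

def train_ls_svr(X, y, sigma, gamma=100.0):
    """Solve the standard LS-SVR dual KKT system (Suykens et al. 2002):
        [ 0   1^T         ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma ] [ alpha ] = [ y ]
    Only equality constraints are involved, so one linear solve suffices,
    in contrast to the iterative solution required for epsilon-SVR.
    """
    n = X.shape[0]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel_matrix(X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def predict(Xtrain, alpha, b, sigma, x):
    # Evaluate the trained model at a new point x
    k = np.exp(-((Xtrain - x) ** 2).sum(axis=1) / sigma ** 2)
    return float(alpha @ k + b)
```

With a large `gamma`, the model nearly interpolates the training data; smaller values trade fit for smoothness, which is how LS-SVR avoids overfitting noisy training outputs.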
The objective of this research is to reduce the computational cost associated with robust production optimization using a new work
flow integrated with LS-SVR, which from this point will be referred to as SVR. To the best of our knowledge, it is the first time that
SVR has been used for robust optimization. The cost function is the NPV of life-cycle production, and the optimization variables
(design variables) are the well controls (well operating conditions) at control steps of a specified length. To improve the accuracy of the
SVR model as a proxy for the reservoir simulator, we develop a new iterative-sampling-refinement technique. This iterative-sampling
technique is important because it is developed to improve the accuracy of the SVR proxy as a predictor of the NPV generated from the
reservoir simulator in the neighborhood of maxima. Unlike most proxy models, our aim is not to produce a proxy model that predicts
with high accuracy the value of NPV output from the reservoir simulator for any input vector of well controls. Instead, our aim is to produce a proxy that agrees reasonably well with the simulator for all input control vectors, but has much higher fidelity when input vectors
are in the neighborhood of a point that maximizes the NPV. SVR with iterative sampling accomplishes this accuracy goal with only a
relatively small increase in computational cost. By design, iterative sampling should allow one to construct an SVR proxy with a
smaller training set than would be required if we demand that the SVR be capable of accurately matching the NPV generated by the reservoir simulator for any possible input control vectors. The gradient of the SVR proxy can be calculated analytically, which enables the
use of gradient-based optimization techniques, promotes computational efficiency, and provides far-more-accurate derivatives than can
be obtained with finite-difference approximations. We show that for the general robust optimization problem considered, the SVR proxy performs far better than a proxy dependent on linear-response-surface methodology. We also show that the estimated optimal NPV obtained with the SVR work flow agrees well with that obtained with StoSAG (Fonseca et al. 2015, 2016). To compare the performance of
the proposed method using SVR runs with StoSAG using reservoir-simulation runs, we test three synthetic examples. The first one is a
2D reservoir model involving a complex channelized system; the second one pertains to a three-layered reservoir with petrophysical
properties given by Gaussian random fields; and the third one is a field-scale example, the Brugge Reservoir (Peters et al. 2010). All the
examples consider production under waterflooding.
This paper is organized as follows. First we introduce the robust optimization problem for waterflooding reservoirs and its associated
objective function, and we review the StoSAG algorithm. We then introduce the implementation of the SVR model in this work and the
method to estimate the optimal well controls that maximize the expected NPV [E(NPV)] with the SVR model; present and discuss
computational results of three examples, where in the third example, the algorithm of the new iterative-sampling refinement is provided
and applied; and provide conclusions and discussions.

Methodology
Robust Production Optimization. In this work, the NPV of production is used as the cost function measuring the economic benefit of life-cycle production optimization. For a reservoir under waterflood with geological uncertainty, the NPV defined for one realization of the reservoir model is given by

$$J(u, m) = \sum_{n=1}^{N_t} \left\{ \frac{\Delta t^n}{(1+b)^{t_n/365}} \left[ \sum_{j=1}^{P} \left( r_o \, q_{o,j}^n - c_w \, q_{w,j}^n \right) - \sum_{j=1}^{I} c_{wi} \, q_{wi,j}^n \right] \right\}, \qquad (1)$$

where $m$ is an $N_m$-dimensional vector of reservoir-model parameters that represents one realization of the uncertain reservoir models; $u$ is an $N_u$-dimensional vector of well controls over the production lifetime; $r_o$ is the oil revenue (in USD/STB); $c_w$ is the cost of disposing

of produced water (in USD/STB); $c_{wi}$ is the water-injection cost (in USD/STB); and $b$ is the annual discount rate. $N_t$ is the total number of simulation timesteps; $t_n$ represents the $n$th time level (in days); and $\Delta t^n$ is the length of the $n$th timestep. $P$ and $I$ denote the number of producers and injectors, respectively; $q_{o,j}^n$ and $q_{w,j}^n$ denote the average oil- and water-production rates, respectively, at the $j$th producer at the $n$th timestep (in STB/D); and $q_{wi,j}^n$ denotes the average water-injection rate at the $j$th injector at the $n$th timestep (in STB/D).
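Eq. 1 is a direct double sum over timesteps and wells, so it can be evaluated from simulator rate output with a few array operations. The sketch below assumes the rates are already available as arrays; the default price values are placeholders, not the economic parameters of the paper's examples.

```python
import numpy as np

def npv(t, dt, qo, qw, qwi, ro=45.0, cw=5.0, cwi=5.0, b=0.1):
    """Evaluate Eq. 1. t, dt: (Nt,) time levels and step lengths in days;
    qo, qw: (Nt, P) average producer oil/water rates (STB/D);
    qwi: (Nt, I) average injection rates (STB/D);
    ro, cw, cwi: oil revenue, water-disposal cost, injection cost (USD/STB);
    b: annual discount rate."""
    disc = (1.0 + b) ** (t / 365.0)                 # per-step discount factor
    cash = (ro * qo - cw * qw).sum(axis=1) - (cwi * qwi).sum(axis=1)  # USD/D
    return float(np.sum(dt * cash / disc))
```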
The robust optimization maximizes the expectation of the NPV defined in Eq. 1 over $m$. Throughout this text, we consider that a set of $N_e$ reservoir models (denoted by $m_i$, $i = 1, 2, \ldots, N_e$) represents the uncertainty in the reservoir models. The expectation of NPV over $m$ is approximated by the mean value of $\{J(u, m_i)\}_{i=1}^{N_e}$,

$$J_E(u) = \frac{1}{N_e} \sum_{i=1}^{N_e} J(u, m_i), \qquad (2)$$

where $J_E(u)$ denotes the approximate expectation of NPV. Assuming the well controls are constrained by simple bounds, the robust optimization problem is given by

$$\underset{u \in \mathbb{R}^{N_u}}{\text{maximize}} \; J_E(u), \qquad (3)$$

subject to

$$u_l^{\text{low}} \le u_l \le u_l^{\text{up}}, \qquad (4)$$

where $u_l^{\text{low}}$ and $u_l^{\text{up}}$ represent the lower and upper bounds, respectively, of the $l$th control variable of $u$.
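In code, Eqs. 2 through 4 reduce to an ensemble average and a bound projection; a minimal sketch, with the forward model `J` left abstract (it may be the simulator or, later, the SVR proxy):

```python
import numpy as np

def expected_npv(u, models, J):
    """Eq. 2: approximate E[NPV] by the mean of J(u, m_i) over the ensemble.
    J(u, m) is any forward model returning a scalar NPV."""
    return sum(J(u, m) for m in models) / len(models)

def project_to_bounds(u, u_low, u_up):
    """Enforce the simple bound constraints of Eq. 4 by truncation."""
    return np.clip(u, u_low, u_up)
```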

Review of StoSAG. StoSAG (Fonseca et al. 2016) is a recently proposed method that estimates a stochastic approximate gradient for production optimization. The theoretical results of Fonseca et al. (2016) indicate that StoSAG outperforms the well-known EnOpt method (Chen and Oliver 2009) for robust production optimization when the uncertainty in the geological model is large. To calculate the StoSAG gradient at iteration $k$, $k = 1, 2, \ldots$ until convergence, a set of well controls ($\hat{u}_i$, $i = 1, 2, \ldots, N_e$) is sampled around the estimate of the optimal well controls at the current iteration, $u^k$. The stochastic simplex gradient for $J(u^k, m_i)$, $i = 1, 2, \ldots, N_e$, is then given by the right-hand side of Eq. 5,

$$\nabla_u J(u^k, m_i) \approx d^{k,i} = \left[ (\hat{u}_i - u^k)^+ \right]^T \left[ J(\hat{u}_i, m_i) - J(u^k, m_i) \right], \qquad (5)$$

where the plus sign represents the Moore-Penrose pseudoinverse (Broyden 1975). The search direction for maximizing the expected NPV in Eq. 2 is given by

$$d^{k,\text{sto}} = C_u \, \frac{1}{N_e} \sum_{i=1}^{N_e} d^{k,i} \approx C_u \, \nabla J_E(u^k), \qquad (6)$$

where $C_u$ is a covariance matrix used to enforce the temporal smoothness of the well controls.
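With one perturbation per realization, Eqs. 5 and 6 can be sketched as follows. The Gaussian perturbation scheme and its scale are illustrative choices, not the exact sampling of Fonseca et al. (2016).

```python
import numpy as np

def stosag_direction(u, models, J, C_u, rng, scale=0.05):
    """Stochastic simplex search direction (Eqs. 5 and 6), one perturbed
    control vector u_hat_i per realization m_i. J(u, m) is the forward model;
    C_u enforces temporal smoothness of the controls."""
    d = np.zeros(u.size)
    for m in models:
        u_hat = u + scale * rng.standard_normal(u.size)  # perturbed controls
        du = u_hat - u
        # Eq. 5: Moore-Penrose pseudoinverse of the 1 x Nu simplex system
        d += du * (J(u_hat, m) - J(u, m)) / (du @ du)
    return C_u @ (d / len(models))                       # Eq. 6
```

Each iteration requires forward-model runs at the perturbed and unperturbed controls for every realization, which is the cost a cheap proxy for `J` avoids.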

SVR Model. SVR is a widely used machine-learning method for solving nonlinear-regression problems. Given a set of training input data and training output data, where the output data are generated from the “true” model, SVR “learns” how to map the input data to the output response and generates a function that predicts the output response of the true model for any given input. We let $N_s$ be the number of training data, let $x_k \in \mathbb{R}^{1 \times N_{in}}$, $k = 1, 2, \ldots, N_s$, be the $k$th training input vector, where $N_{in}$ is the dimension of $x_k$, and let $y_k$ be the scalar “true” response corresponding to $x_k$. The pair $(x_k, y_k)$ is referred to as a training sample, and the set $S$ containing all the training samples, $\{(x_k, y_k), k = 1, 2, \ldots, N_s\}$, is called the training set. For applications with SVR, normalizing the training set is a common preprocessing step before training (Crone et al. 2006). To treat all the input variables equally when training an SVR proxy, we normalize all the input variables to the same scale of [0,1] by applying the linear transformation

$$\bar{x}_k = \frac{x_k - x^{\text{low}}}{x^{\text{up}} - x^{\text{low}}}, \quad k = 1, 2, \ldots, N_s, \qquad (7)$$

where $x^{\text{up}}$ and $x^{\text{low}}$ are two vectors whose entries are, respectively, the maximum and minimum values of the corresponding entry over the training input set. Also, the training output variable is normalized by

$$\bar{y}_k = \frac{y_k - y^{\text{low}}}{y^{\text{up}} - y^{\text{low}}}, \quad k = 1, 2, \ldots, N_s, \qquad (8)$$

to generalize our problems. In Eq. 8, $y^{\text{up}}$ and $y^{\text{low}}$ represent the maximum and minimum values, respectively, over the training output set. The normalized training set $\bar{S}$ is now given by $\{(\bar{x}_k, \bar{y}_k), k = 1, 2, \ldots, N_s\}$. By the normalization of Eqs. 7 and 8, each element of $\bar{S}$ has a value between zero and unity.
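The min-max scaling of Eqs. 7 and 8 can be sketched as:

```python
import numpy as np

def normalize_training_set(X, y):
    """Scale inputs and outputs to [0, 1] (Eqs. 7 and 8). X is (Ns, Nin),
    y is (Ns,). The returned bounds are needed later to map proxy
    predictions back to physical units."""
    x_low, x_up = X.min(axis=0), X.max(axis=0)
    y_low, y_up = y.min(), y.max()
    Xn = (X - x_low) / (x_up - x_low)   # Eq. 7, element-wise
    yn = (y - y_low) / (y_up - y_low)   # Eq. 8
    return Xn, yn, (x_low, x_up, y_low, y_up)
```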
The training procedure of an SVR model can be expressed as follows: given a training set $\bar{S}$, find a function $\hat{y}$ so that $\hat{y}(x)$ is a “good predictor” of the corresponding true value $y$. In machine learning, the function $\hat{y}(x)$ is called a hypothesis. The hypothesis, or the predictive SVR proxy model, is given by

$$\hat{y}(x) = \sum_{k=1}^{N_s} \alpha_k K(x_k, x) + b, \qquad (9)$$

where $b$ and $\alpha_k$, $k = 1, 2, \ldots, N_s$, are scalar parameters determined by the training procedure described in Appendix A, and $K(x_k, x)$ is a kernel function. For our applications, a radial-basis-function (RBF) kernel is used. Drucker et al. (1997) suggest that if there is little knowledge about the training data, the RBF kernel is generally a good choice for regression with SVR to ensure the smoothness of the

response surface. The production-optimization problem fits this standard quite well because we cannot clearly know the shape of the
response surface of the NPV with respect to different control variables. In addition, we need the response surface to be smooth to avoid
being trapped in small local maxima and to avoid overfitting if the training data are noisy. Suykens et al. (2002) also recommend the RBF kernel for regression with SVR, observing that the predictor generated with the RBF kernel generally outperforms those generated with the other kernels they tested.
The RBF kernel is given by

$$K(x_k, x) = \exp\left( -\frac{\|x_k - x\|_2^2}{\sigma^2} \right), \qquad (10)$$

where $\sigma$ is the specified kernel bandwidth. Using the rule-of-thumb method, $\sigma$ is calculated by

$$\sigma = 0.5 \, \|\bar{x}^{\text{up}} - \bar{x}^{\text{low}}\|_2, \qquad (11)$$

where $\bar{x}^{\text{up}}$ and $\bar{x}^{\text{low}}$ are the upper and lower bounds, respectively, of the normalized training input set. From Eqs. 7 and 8,

$$\bar{x}^{\text{up}} = [1, 1, \ldots, 1]^T, \qquad (12)$$

and

$$\bar{x}^{\text{low}} = [0, 0, \ldots, 0]^T. \qquad (13)$$

Substituting Eqs. 12 and 13 into Eq. 11 yields

$$\sigma = 0.5 \sqrt{N_{in}}. \qquad (14)$$

The gradient of $\hat{y}(x)$ in Eq. 9 is given by

$$\nabla \hat{y}(x) = \sum_{k=1}^{N_s} \alpha_k \nabla K(x_k, x), \qquad (15)$$

and the gradient of $K(x_k, x)$, computed using the chain rule, is given by

$$\nabla K(x_k, x) = \frac{2}{\sigma^2} K(x_k, x) \, (x_k - x). \qquad (16)$$

Substituting Eq. 16 into Eq. 15 yields

$$\nabla \hat{y}(x) = \sum_{k=1}^{N_s} \left[ \frac{2 \alpha_k}{\sigma^2} K(x_k, x) \, (x_k - x) \right]. \qquad (17)$$
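Eqs. 9, 10, and 17 translate directly into a few lines of array code. The sketch below uses hypothetical training coefficients; in the work flow, `alpha` and `b` come from the Appendix A training procedure.

```python
import numpy as np

def svr_predict(Xtrain, alpha, b, x, sigma):
    """SVR hypothesis of Eq. 9 with the RBF kernel of Eq. 10."""
    K = np.exp(-((Xtrain - x) ** 2).sum(axis=1) / sigma ** 2)
    return float(alpha @ K + b)

def svr_gradient(Xtrain, alpha, x, sigma):
    """Analytic gradient of the proxy, Eq. 17: a weighted sum of the
    kernel-scaled displacements (x_k - x); no finite differencing needed."""
    K = np.exp(-((Xtrain - x) ** 2).sum(axis=1) / sigma ** 2)
    return (2.0 / sigma ** 2) * ((alpha * K) @ (Xtrain - x))
```

Because the gradient is exact and costs about the same as one proxy evaluation, steepest ascent on the proxy avoids both the noise and the expense of finite-difference or stochastic gradients.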

Estimation of Optimal Well Controls. To obtain a more computationally efficient algorithm for production optimization than is currently available, we wish to build a machine-learning model that can predict the NPV given a reservoir model ($m$) and an associated set of well controls ($u$). Here, we focus on using an SVR model as the machine-learning proxy of a full-scale reservoir-simulation model. Robust optimization is then performed with the SVR proxy run as the forward model. Compared with a full-scale reservoir-simulation model, an SVR forward model is much more efficient to run; therefore, optimizing with an SVR model significantly reduces the computational cost of robust optimization. To build such an SVR model, a training set $S$ is required, with one training sample in the set given by

$$\left\{ [u^T, m^T]^T, \; J(u, m) \right\}, \qquad (18)$$

where $[u^T, m^T]^T$ is an input column vector that contains in its entries both a specific set of well controls ($u$) and model parameters ($m$), and $J(u, m)$ is the corresponding life-cycle NPV predicted by running the reservoir simulator. To generate a robust SVR model, the training set should include a number of training examples using different sets of well controls and reservoir models. Because the geological uncertainty of the model space is represented by $N_e$ reservoir models, $m$ for each training example in Eq. 18 is selected from one of the $m_i$, $i = 1, 2, \ldots, N_e$. The selection of $u$ for each training example is from Latin-hypercube sampling (LHS) (McKay et al. 1979), which is a common design-of-experiment method. For our problem, LHS can generate a number of well-control vectors that are evenly distributed in the $N_u$-dimensional well-control space. Letting the number of well-control vectors sampled by LHS be $N_s$, the $N_s$ sets of controls are expected to cover the range of well controls encountered during the robust production optimization. For the examples presented later, $N_s = 200$ sets of well controls are generated for $N_e = 20$ reservoir models (i.e., ten sets of well controls are paired with each reservoir model). The training input set consists of the 200 suites of paired reservoir models and well controls, whereas the training output set is composed of the NPVs predicted by running the reservoir simulator with the 200 suites of paired reservoir models and well controls. Our 200 training samples were chosen using our knowledge that this would yield five- to ten-fold computational savings compared with ensemble-based methods (Chen et al. 2010; Fonseca et al. 2016). However, we note that, for the first two examples, increasing the number of training samples did not significantly improve the proxy quality or yield a nonnegligibly higher E(NPV) value. Moreover, in the third example, where we apply the iterative-sampling technique to iteratively improve the SVR proxy, we also build the initial SVR model with 140 training samples. The results show that when the controls obtained with the iterative-sampling SVR implementation are entered into Eclipse (Schlumberger 2013), the NPV predicted with Eclipse almost exactly matches the NPV predicted by optimization with the SVR proxy as the forward model. Thus, a training set with even fewer than 200 samples is feasible in our work flow when the new iterative-sampling technique is integrated. Note that the training set is normalized by the procedure of Eqs. 7 and 8 as a preprocessing step before training. Then, each example of the normalized training set is given by
$$\left\{ [\bar{u}^T, \bar{m}^T]^T, \; \bar{J}(\bar{u}, \bar{m}) \right\}, \qquad (19)$$


where $\bar{u}$ and $\bar{m}$ represent the normalized $u$ and the normalized $m$, respectively, with the definitions

$$\bar{u} = \frac{u - u^{\text{low}}}{u^{\text{up}} - u^{\text{low}}}, \qquad (20)$$

and

$$\bar{m} = \frac{m - m^{\text{low}}}{m^{\text{up}} - m^{\text{low}}}, \qquad (21)$$

where $u^{\text{low}}$ and $u^{\text{up}}$ represent the lower and upper bounds of $u$, respectively, and each element of $m^{\text{low}}$ and $m^{\text{up}}$ denotes, respectively, the minimum and maximum value of that element over the $m_i$, $i = 1, 2, \ldots, N_e$. Using Eqs. 20 and 21, each element of $\bar{u}$ and $\bar{m}$ is bounded in the interval [0,1]. $\bar{J}(\bar{u}, \bar{m})$ in Eq. 19 is the normalized NPV, defined by

$$\bar{J}(\bar{u}, \bar{m}) = \frac{J(u, m) - J^{\text{low}}}{J^{\text{up}} - J^{\text{low}}}, \qquad (22)$$

where $J^{\text{up}}$ and $J^{\text{low}}$ represent the maximum and minimum values, respectively, of the simulation-generated NPVs in the training set. The SVR proxy model (hypothesis) is generated by the training procedure in Appendix A. Let $\hat{\bar{J}}(\bar{x})$ represent the prediction function of the normalized NPV given the input vector $\bar{x}$, where $\bar{x}$ contains a vector of normalized $u$ and a vector of normalized $m$,

$$\bar{x} = [\bar{u}^T, \bar{m}^T]^T. \qquad (23)$$

For a specific normalized well-control vector $\bar{u}$, the expectation of the normalized NPV from the SVR model is given by

$$\hat{\bar{J}}_E(\bar{u}) = \frac{1}{N_e} \sum_{i=1}^{N_e} \hat{\bar{J}}\left( [\bar{u}^T, \bar{m}_i^T]^T \right), \qquad (24)$$

and the unnormalized estimate of the expectation of NPV from the SVR proxy is calculated as

$$\hat{J}_E(\bar{u}) = \hat{\bar{J}}_E(\bar{u}) \, (J^{\text{up}} - J^{\text{low}}) + J^{\text{low}}. \qquad (25)$$
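Eqs. 24 and 25 amount to averaging the proxy over the normalized ensemble and undoing the output scaling; a minimal sketch, with `J_hat` standing for any trained proxy:

```python
import numpy as np

def proxy_expected_npv(u_bar, models_bar, J_hat, J_low, J_up):
    """Eqs. 24 and 25: ensemble-average the normalized proxy prediction
    and map it back to monetary units."""
    Jn = sum(J_hat(np.concatenate([u_bar, m_bar])) for m_bar in models_bar)
    return (Jn / len(models_bar)) * (J_up - J_low) + J_low
```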

The robust production optimization is performed using the SVR model as the forward model with a steepest-ascent algorithm given by

$$\bar{u}^{k+1} = \bar{u}^k + \beta_k \frac{d_k}{\|d_k\|_\infty}, \qquad (26)$$

where the superscript $k$ denotes the iteration number, $k = 0, 1, 2, \ldots$ until convergence; $\bar{u}^0$ is the initial guess of the normalized well-control vector; $\beta_k$ is the step size; and $d_k$ is the search direction, usually estimated by a stochastic algorithm such as StoSAG (Eq. 6). Here, because the SVR model has an analytical form, we can directly compute the exact gradient of $\hat{\bar{J}}_E(\bar{u})$ as the search direction. Differentiating Eq. 24 yields

$$d_k = \nabla \hat{\bar{J}}_E(\bar{u}) = \frac{1}{N_e} \sum_{i=1}^{N_e} \nabla_{\bar{u}} \hat{\bar{J}}\left( [\bar{u}^T, \bar{m}_i^T]^T \right), \qquad (27)$$

where $\nabla_{\bar{u}} \hat{\bar{J}}\left( [\bar{u}^T, \bar{m}_i^T]^T \right)$ can be obtained in a manner similar to Eq. 17. To avoid abrupt changes of the well controls over time, the search direction $d_k$ in Eq. 27 is premultiplied by the same covariance matrix $C_u$ as in Eq. 6. The obtained optimal well controls are entered into the reservoir simulator to calculate the simulation-approximated expectation of NPV. The complete algorithm is listed below as Algorithm 1.
Algorithm 1: Robust Optimization With an SVR Proxy
1. Specify the maximum number of iterations, Nmaxiter, allowed in the steepest ascent and the maximum number of simulation runs, Nmaxsim, allowed for optimization.
2. Specify the upper bound (u_up) and lower bound (u_low) for the well-control vector, and the number of well-control vectors (Ns) sampled for training an SVR model, where Ns should be divisible by Ne.
3. Sample Ns sets of controls using LHS within the well-control bounds. Letting Nse = Ns/Ne, pair each reservoir model (mi, i = 1, 2, …, Ne) with Nse sets of well controls.
4. Predict the NPVs of the Ns suites of reservoir models and well controls using Ns reservoir-simulation runs.
5. Generate the training set with all the reservoir models, well controls, and the corresponding NPVs; normalize the training set (see the procedure in Eqs. 20 through 22) so that all the training inputs and outputs are between zero and unity.
6. Obtain an SVR model according to the training procedure described in Appendix A using the normalized training set.
7. Start production optimization using the SVR model as the forward model and set k = 0, where k is the iteration index for the steepest ascent (Eq. 26). In the well-control production-optimization examples presented later, the well controls during the optimization period are bottomhole-pressure (BHP) controls for producers and rate controls for injectors. Specify the normalized initial control vector (Nu-dimensional) as

ū⁰ = [0.5, 0.5, …, 0.5]^T.
8. FOR k = 0, 1, …, Nmaxiter:
• Predict the normalized expectation of NPV at the kth iteration, Ĵ_E(ū^k), using Eq. 24 with the SVR proxy.
• Compute d_k = ∇_ū Ĵ_E(ū^k) using Eq. 27.
• Perform a line search with backtracking to obtain a value of β_k such that Ĵ_E(ū^{k+1}) > Ĵ_E(ū^k), with ū^{k+1} = ū^k + β_k d_k/‖d_k‖∞. Note that during the line search, if any component of the proposed control vector is outside the interval [0,1], apply truncation to ensure that all components of the proposed control vector satisfy the bound constraints. The initial guess of β_k is set to 0.1, and the step size is cut by one-half during each line-search iteration. If the line search cannot find a control vector that increases the value of Ĵ_E within the maximum allowable number of step-size cuts, Ncuts, we set ū^{k+1} equal to the control vector obtained during the line search that provides the highest Ĵ_E. We use Ncuts = 5 throughout.
• If Nmaxsim or Nmaxiter is reached, or both of the following conditions are satisfied,

$$\frac{|\hat{\bar{J}}_E(\bar{\mathbf{u}}^{k+1}) - \hat{\bar{J}}_E(\bar{\mathbf{u}}^k)|}{\max[\hat{\bar{J}}_E(\bar{\mathbf{u}}^k), 1.0]} \leq \epsilon_{\hat{J}}, \qquad (28)$$

$$\frac{\|\bar{\mathbf{u}}^{k+1} - \bar{\mathbf{u}}^k\|_2}{\max[\|\bar{\mathbf{u}}^k\|_2, 1.0]} \leq \epsilon_u, \qquad (29)$$

then terminate the iteration. In our applications, ε_Ĵ = 10⁻⁴ and ε_u = 10⁻³.
END FOR
9. Denote the optimal normalized well-control vector by ū_opt. Predict the expectation of NPV by running reservoir simulations with the optimal well control generated from the SVR proxy, u = ū_opt (u_up − u_low) + u_low.
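The loop in step 8 can be sketched as follows, here using scikit-learn's RBF-kernel SVR, whose prediction is an explicit kernel expansion so its input gradient (the Eq. 27 analogue) is available in closed form. The toy training data, SVR hyperparameters, and problem dimensions are illustrative assumptions, and the line search is slightly simplified (it terminates when no ascent step is found, rather than accepting the best non-improving trial):

```python
import numpy as np
from sklearn.svm import SVR

def svr_value_and_grad(model, x, gamma):
    """Value and input-gradient of an RBF-kernel SVR:
    f(x) = sum_i a_i exp(-gamma ||x - sv_i||^2) + b,
    df/dx = -2*gamma * sum_i a_i (x - sv_i) exp(-gamma ||x - sv_i||^2)."""
    sv = model.support_vectors_                # (n_sv, n_feat)
    a = model.dual_coef_.ravel()               # (n_sv,)
    diff = x[None, :] - sv
    k = np.exp(-gamma * np.sum(diff ** 2, axis=1))
    return a @ k + model.intercept_[0], -2.0 * gamma * ((a * k) @ diff)

def expected_npv_and_grad(model, u, models_norm, gamma):
    """Ensemble-average proxy NPV (Eq. 24) and its gradient wrt the controls (Eq. 27)."""
    n_u, f_sum, g_sum = u.size, 0.0, np.zeros(u.size)
    for m in models_norm:
        f, g = svr_value_and_grad(model, np.concatenate([u, m]), gamma)
        f_sum += f
        g_sum += g[:n_u]                       # differentiate wrt controls only
    return f_sum / len(models_norm), g_sum / len(models_norm)

def steepest_ascent(model, u0, models_norm, gamma, beta0=0.1,
                    n_cuts=5, max_iter=200, eps_j=1e-4, eps_u=1e-3):
    """Algorithm 1, step 8: backtracking steepest ascent on the proxy."""
    u = u0.copy()
    f, g = expected_npv_and_grad(model, u, models_norm, gamma)
    for _ in range(max_iter):
        d = g / (np.max(np.abs(g)) + 1e-12)    # infinity-norm scaling (Eq. 26)
        beta, improved = beta0, False
        for _ in range(n_cuts):                # backtracking line search
            u_new = np.clip(u + beta * d, 0.0, 1.0)   # truncate to bounds
            f_new, g_new = expected_npv_and_grad(model, u_new, models_norm, gamma)
            if f_new > f:
                improved = True
                break
            beta *= 0.5
        if not improved:
            break
        converged = (abs(f_new - f) / max(f, 1.0) <= eps_j and
                     np.linalg.norm(u_new - u) / max(np.linalg.norm(u), 1.0) <= eps_u)
        u, f, g = u_new, f_new, g_new
        if converged:                          # Eqs. 28 and 29
            break
    return u, f

# Toy problem: 4 controls, 3 model parameters, 5 realizations (all hypothetical).
rng = np.random.default_rng(0)
models_norm = rng.random((5, 3))
X = rng.random((200, 7))
y = X[:, :4].mean(axis=1)                      # stand-in for normalized NPV
gamma = 1.0
proxy = SVR(kernel="rbf", C=100.0, gamma=gamma, epsilon=1e-3).fit(X, y)
u_opt, f_opt = steepest_ascent(proxy, np.full(4, 0.5), models_norm, gamma)
```

Because the proxy, not the simulator, is evaluated inside the loop, each iteration costs milliseconds; only the final u_opt needs to be verified with full reservoir simulations (step 9).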

Examples



Three examples are considered in this paper. The first example pertains to a 2D reservoir with a two-facies channel system. The second
example is a three-layered reservoir with the absolute-permeability field given by a Gaussian distribution. The third example is a field-
scale synthetic case, the Brugge reservoir. The well controls are rate controls for water-injection wells and BHP controls for production
wells. The reservoir simulator used throughout is Eclipse (Schlumberger 2013).

Example 1. The Eclipse reservoir-simulation model has a 50×50×1 grid. Each gridblock is 50×50×50 ft. For this example, 20 realizations of reservoir models are used to represent the uncertainty of the reservoir for NPV prediction. Fig. 1 shows the log-permeability fields of four realizations for the channelized reservoir. There are four injectors and nine producers placed in this reservoir in a five-spot pattern. The production lifetime is 3,000 days. For robust optimization, the production life is equally divided into 30 control steps of 100 days each; therefore, the control vector u has 13×30 = 390 elements. The control variables of the injectors are injection rates with an upper bound of 1,000 STB/D and a lower bound of 200 STB/D; the controls of the producers are bottomhole pressures with an upper bound of 5,000 psi (the initial reservoir pressure) and a lower bound of 2,500 psi.

Fig. 1—Four realizations of logarithmic permeability fields for the 2D channelized reservoir. The dark-blue zones represent the
low-permeability shales and dark-red zones represent the high-permeability channels.

An optimal well-control strategy is obtained by maximizing the expectation of NPV, as defined in Eq. 2. The economic parameters are ro = 50 USD/STB, cw = 5 USD/STB, and cwi = 2 USD/STB. The discount rate is b = 0.1.
To train an SVR proxy for optimization, 200 well-control vectors within the given upper bounds and lower bounds of controls are
generated using LHS. Then, 10 control vectors are paired with each reservoir model, which results in 200 sets of NPV values predicted
by running the Eclipse simulator (Schlumberger 2013). The entire training set includes 200 suites of well controls, reservoir models,
and the corresponding NPVs. Next, robust production optimization is performed using the SVR proxy model generated with the training
set as the forward model, with Nmaxsim = 2,000 and Nmaxiter = 200.
The expectation of NPV obtained with SVR vs. the SVR runs is shown in Fig. 2. In 2,000 SVR runs, the expectation of NPV is
increased from USD 184 million to 218 million. To calculate the “real” expectation of NPV by applying the optimal well controls gen-
erated with SVR, we entered the optimal well controls into Eclipse and obtained an expectation of NPV of USD 208.8 million, 5% less
than the optimal NPV estimated by optimization using the SVR model. We also perform robust production optimization with StoSAG
using the Eclipse reservoir-simulation model as the forward model to compare the SVR results. To rule out the random effects, the


optimization with StoSAG is initialized with five different random seeds, and the resulting values of the expectation of NPV vs. the number of simulation runs are shown in Fig. 3. In 1,500 reservoir-simulation runs, robust production optimization with StoSAG increases the expectation of NPV from USD 185 million to 209 million on average, which is quite close to the NPV obtained by entering the SVR-generated optimal well controls into Eclipse.

Fig. 2—SVR-generated expectation of NPV vs. SVR runs.

Fig. 3—Expectation of NPV vs. simulation runs using StoSAG initialized with five different random seeds.

The comparisons of optimal well controls for injectors and producers obtained using SVR and StoSAG are shown in Figs. 4 and 5,
respectively. Even though the optimal well controls generated with StoSAG and SVR are quite different, the resulting values of the
E(NPV) are very similar. The oil-saturation fields for the 20th reservoir realization at the end of the production life obtained by applying
the optimal well controls of SVR and StoSAG are compared in Fig. 6. The two oil-saturation fields are quite similar.

(a) StoSAG (b) SVR
Fig. 4—Comparison of optimal injection rates obtained using an SVR proxy and StoSAG with Eclipse, Example 1.

In terms of computational efficiency, running the SVR proxy model requires less than 1 second, whereas running the Eclipse simulation model requires more than 10 seconds. The training procedure to build an SVR proxy for this example finishes in 1 second using a Fortran program, which is negligible compared with the computational cost of the forward simulation runs. Including the cost of generating the training data, the total computational cost of robust production optimization using the SVR model is the cost of running 200 reservoir-simulation models plus the small cost of optimization with the SVR model, whereas the computational cost with StoSAG running Eclipse is approximately 1,500 reservoir-simulation runs. As a result, in this example, performing robust production optimization with an SVR model is roughly seven times more computationally efficient than optimization with StoSAG and running reservoir simulations, yet both methods yield a comparable value of E(NPV).


(a) StoSAG (b) SVR
Fig. 5—Comparison of optimal production BHPs obtained using an SVR proxy and StoSAG with Eclipse, Example 1.



(a) StoSAG (b) SVR

Fig. 6—Comparison of oil-saturation fields for the 20th reservoir model using SVR and StoSAG.

Example 2. The synthetic reservoir has three layers with grid dimensions of 27×27×3. Two water injectors are placed in the third layer, and six producers are placed in the first layer. The petrophysical properties of the reservoir, including the horizontal and vertical absolute permeabilities and porosities on a gridblock-by-gridblock basis, are generated using multivariate Gaussian random sampling. In total, 20 reservoir models are generated to represent the uncertainty of this reservoir. The number of model parameters, nm, for one reservoir realization is then equal to the number of gridblocks multiplied by three (i.e., 6,561). One realization of the horizontal-log-permeability field and the locations of the wells are shown in Fig. 7. The initial reservoir pressure is 3,400 psi. The total production life is 1,800 days, which is divided into Nc = 30 control steps of 60 days each. The total number of control variables is 8×30 = 240. The bound constraints are between 0 and 15,000 STB/D for the water injectors and between 1,000 and 3,400 psi for the production wells.


Fig. 7—One realization of the horizontal-log-permeability fields, Example 2.

There are 200 well-control training samples generated by LHS, with 10 samples of well-control vectors for each realization (reservoir model). The training outputs (NPVs) are generated by entering the 200 sets of paired reservoir models and well-control vectors into Eclipse. We pick the first 150 training samples to generate an SVR proxy and reserve the remaining 50 samples for a blind test. As shown in Fig. 8, the blind test has an R² (coefficient of determination) value of 0.87, which indicates that the SVR proxy is reasonably accurate. We then retrain an SVR proxy using all 200 training samples, and with this new SVR proxy model, robust production optimization is performed to maximize the expectation of NPV with the economic parameters ro = 50 USD/STB, cw = 5 USD/STB, cwi = 2 USD/STB, and b = 0.1. For comparison, robust production optimization is performed using both the SVR model and the reservoir-simulation model as the forward model. The maximum numbers of forward runs and iterations are Nmaxsim = 2,000 and Nmaxiter = 200.
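The train/blind-test split described above can be sketched as follows; the synthetic data, the stand-in NPV function, and the SVR hyperparameters are hypothetical (the paper's own proxy is built with a Fortran implementation):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.random((200, 10))            # 200 normalized (controls, parameters) samples
y = X.mean(axis=1)                   # stand-in for normalized NPV

# First 150 samples train the proxy; the remaining 50 form the blind test.
proxy = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:150], y[:150])
r2 = r2_score(y[150:], proxy.predict(X[150:]))
print(f"blind-test R^2 = {r2:.3f}")

# As in the paper, retrain on all 200 samples before optimization.
proxy_full = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
```

The blind test is cheap because it reuses simulation runs already performed for training; only the split changes.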
The expectation of NPV calculated from the SVR proxy model vs. SVR runs is shown in Fig. 9, with an optimal E(NPV) value of
approximately USD 1.12 billion, whereas the StoSAG results obtained by optimization initialized with five different random seeds are


shown in Fig. 10. Note that the Eclipse-generated optimal NPVs obtained with five different seeds are close to each other. The quantita-
tive comparison for the expectation of NPV obtained from different scenarios is shown in Table 1. When we entered the controls
obtained by optimizing with the NPV proxy into Eclipse 100, we obtained an NPV of USD 1.103 billion, which is approximately 3%
less than the optimal NPV value estimated using StoSAG, but the computational cost of StoSAG is approximately 10 times the compu-
tational cost of optimization with SVR.

[Crossplot of predicted NPV vs. simulated NPV (USD million); R² = 0.8724]
Fig. 8—Blind-test results for SVR, Example 2.

Fig. 9—SVR-generated E(NPV) vs. SVR runs, Example 2.

Fig. 10—Expectation of NPV vs. simulation runs using StoSAG initialized with five different random seeds, Example 2.

One referee suggested that we compare the SVR results with another proxy model; thus, in this work, we compare the SVR results with those generated with a linear proxy obtained by linear regression. Note that a second-order-polynomial response-surface proxy and other higher-order proxies are not computationally feasible in our robust-optimization work flow. This is because the reservoir parameters include gridblock values of petrophysical properties; thus, the required number of coefficients in a quadratic response-surface proxy would be prohibitively large (e.g., more than 10^10 for the third example considered in this work). Compared with the results of SVR in Fig. 8, the linear proxy provides a much lower R² value of 0.53, as shown in Fig. 11. Fig. 12 shows that when using


optimization with the linear-response surface, the E(NPV) is only increased from USD 980 million to 994 million, whereas optimization with SVR increases the E(NPV) to USD 1.12 billion from the initial value of USD 902 million, as shown in Fig. 9.

Scenario         Initial (million USD)   Optimal (million USD)   Computational Cost
SVR              902                     1,120                   200 simulation runs + 2,000 SVR runs
SVR/Eclipse      900                     1,103                   –
StoSAG Opt       900                     1,135 (average)         2,000 simulation runs
Table 1—Comparison of expectation of NPV obtained from different scenarios. SVR represents the
results generated from the SVR runs; SVR/Eclipse represents the results calculated by entering the
SVR optimal well controls into Eclipse; and StoSAG Opt represents the results obtained with StoSAG
using the Eclipse reservoir-simulation model as the forward model.

[Crossplot of predicted NPV vs. simulated NPV (USD million); R² = 0.5339]
Fig. 11—Blind-test results for linear proxy, Example 2.

Fig. 12—E(NPV) vs. forward runs using the linear proxy, Example 2.

The comparisons of optimal well controls obtained with SVR and StoSAG (results for the first random seed) for injectors and producers are shown in Figs. 13 and 14, respectively. In general, both methods suggest that the injectors should be operated at the maximum injection rate over the production life. Even though the optimal BHPs for producers differ for the two methods, the expectations of NPV predicted by Eclipse using the optimal controls generated with the two procedures are quite close to each other.
The oil-saturation fields at the end of the production life obtained by applying the optimal well controls generated with SVR and StoSAG are shown in Fig. 15. The oil-saturation distributions are quite similar for the two methods.

Example 3: Brugge Reservoir. Brugge Field was developed by TNO as a benchmark case for closed-loop reservoir management. The top structure of the reservoir is shown in Fig. 16 (Chen 2017), where 20 vertical production wells and 10 vertical injection wells are drilled. Twenty realizations of Eclipse reservoir-simulation models are used to represent the geological uncertainty, with each model consisting of four geological zones and nine reservoir-simulation layers, where each simulation layer has a 139×48 grid. The 20 reservoir-simulation models all have the same properties except for different horizontal-permeability fields. The log horizontal permeabilities are considered the uncertain parameters, and therefore the number of uncertain model parameters for each realization,


nm, is equal to the number of active gridblocks (i.e., 44,550). Here, we consider that the Brugge Reservoir has been producing for 10 years with the well-control strategy initially designed by TNO. Robust well-control optimization is performed for the period between the 10th year and the 20th year, which is divided into 30 control steps of 120 days each. The control variables are BHPs for the producers and water-injection rates for the injectors. The upper and lower bounds for all BHP controls are 2,465 and 14.7 psi, respectively, where 14.7 psi is the lowest BHP used in the producers during the history-matching period and 2,465 psi is the initial reservoir pressure. The injection rates are constrained to lie between 0 and 8,000 STB/D. Because a constant operating condition is specified for each well at each control step, the total number of control variables is Nu = 30×30 = 900.

(a) StoSAG (b) SVR

Fig. 13—Comparison of optimal injection rates obtained using an SVR proxy and StoSAG with Eclipse, Example 2.

(a) StoSAG (b) SVR

Fig. 14—Comparison of optimal production BHPs obtained using an SVR proxy and StoSAG with Eclipse, Example 2.


Fig. 15—Oil-saturation fields for the 20th realization at the end of production life by applying the optimal well controls from SVR
(a, b, c) and StoSAG (d, e, f).

Fig. 16—Top structure of Brugge Field.

Similar to the previously discussed examples, 200 sets of well-control training samples are generated by LHS, with 10 sets for each
realization. Because one referee requested that we investigate the performance of production optimization by using multiple SVR prox-
ies, one for each realization, we generate the SVR proxies in two ways. The first one is our proposed method, which generates the train-
ing outputs (NPVs) by entering the 200 suites of paired reservoir models and well-control vectors into Eclipse. Then, a single SVR
proxy is trained as a forward model to perform robust NPV optimization. The second method is to obtain one SVR proxy for each reser-
voir realization. The E(NPV) is calculated by averaging the NPV values predicted by all the SVR proxies. Unlike the first method, the
second method does not use the information collected from reservoir models because it does not add gridblock-based reservoir para-
meters into training, which we believe is undesirable. In addition to optimization with the two methods that run SVR proxies, robust
optimization is also performed by running Eclipse as the forward model using StoSAG. The economic parameters for optimization are
the same ones as used in the first and second examples. The initial guess of optimal well controls for all the producers is 1,240 psi and
for all the injectors is 4,000 STB/D.
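The two proxy constructions can be contrasted in a few lines. The sketch below (with made-up dimensions and a synthetic NPV stand-in, not the Brugge data) builds one joint SVR over (controls, model parameters) versus one SVR per realization whose predictions are averaged:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
ne, n_u, n_m, n_per = 20, 6, 8, 10            # realizations, controls, params, samples each
models_norm = rng.random((ne, n_m))
U = rng.random((ne * n_per, n_u))             # controls paired per realization (LHS stand-in)
M = np.repeat(models_norm, n_per, axis=0)
y = 0.5 * (U.mean(axis=1) + M.mean(axis=1))   # stand-in for normalized NPV

# Method 1 (proposed): a single SVR over the joint (controls, parameters) input.
single = SVR(kernel="rbf").fit(np.hstack([U, M]), y)

def e_npv_single(u):
    """Eq. 24: average the joint proxy over the Ne realizations."""
    X = np.hstack([np.tile(u, (ne, 1)), models_norm])
    return float(single.predict(X).mean())

# Method 2: one SVR per realization, trained on controls only; average the proxies.
per_real = [SVR(kernel="rbf").fit(U[i * n_per:(i + 1) * n_per],
                                  y[i * n_per:(i + 1) * n_per])
            for i in range(ne)]

def e_npv_multiple(u):
    return float(np.mean([p.predict(u[None, :])[0] for p in per_real]))

u = np.full(n_u, 0.5)
print(e_npv_single(u), e_npv_multiple(u))
```

Note the structural difference: Method 1 fits one model to all Ns samples and can share information across realizations through the model-parameter inputs, whereas each Method 2 proxy sees only Nse samples and no reservoir parameters.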
Table 2 quantitatively compares the E(NPV) obtained by different scenarios. First, when we entered the optimal well controls gener-
ated with a single SVR proxy in Eclipse, we obtain an expected NPV value that is 2.3% higher than is obtained by entering the optimal
controls generated from multiple SVR proxies in Eclipse. Second, the single SVR proxy generates an optimal E(NPV) that differs only
by 4.2% from the NPV value obtained with Eclipse using the optimal controls generated from the single SVR proxy, whereas the corre-
sponding difference for the optimal E(NPV) generated with multiple SVR proxies is 8.7%. From the observations for this particular
example, we can see that our proposed method (i.e., to generate a single SVR proxy) outperforms the method that uses multiple SVR
proxies in both the proxy accuracy and the final NPV gain. Also, we note that by entering the optimal well controls generated with the
single SVR proxy into Eclipse, we obtain an expected NPV value that is quite close to the NPV value generated by optimization directly
with Eclipse. In terms of computational efficiency, the SVR proxy only requires 200 forward-reservoir-simulation runs for the entire
optimization procedure, whereas optimization with Eclipse requires 1,200 forward-reservoir-simulation runs.

Scenario               Initial (million USD)   Optimal (million USD)   Computational Cost
SVR-Single             3,206                   3,453                   200 simulation runs + 1,200 SVR runs
SVR-Single/Eclipse     3,027                   3,605                   –
SVR-Multiple           3,204                   3,213                   200 simulation runs + 1,200 SVR runs
SVR-Multiple/Eclipse   3,027                   3,524                   –
StoSAG Opt             3,027                   3,624                   1,200 simulation runs
Table 2—Comparison of expectation of NPV obtained from different scenarios, Example 3. SVR-Single
represents the results generated with the single SVR proxy; SVR-Single/Eclipse represents the results
calculated by entering the SVR-Single optimal well controls into Eclipse; SVR-Multiple represents the
results generated with the multiple SVR proxies; SVR-Multiple/Eclipse represents the results calculated
by entering the SVR-Multiple optimal well controls into Eclipse; and StoSAG Opt represents the results
obtained with StoSAG using the Eclipse reservoir-simulation model as the forward model.

Fig. 17 compares the optimal production BHPs obtained with SVR proxies and Eclipse. Compared with the well controls generated
with the multiple SVR proxies, optimal well controls generated with a single SVR proxy are closer to what is obtained with Eclipse.
Iterative-Sampling Refinement. In Table 2, note that when we enter the optimal controls of SVR into Eclipse, the optimal expected NPV values computed with Eclipse differ by 4.2% from those generated with SVR. Because we believe this error can be reduced, to further improve the accuracy of the SVR proxy, we develop an iterative-sampling-refinement algorithm, which repeatedly adds training samples for proxy modeling beyond the initial samples (Forrester and Keane 2009). For production optimization, however, it is important to note that we wish the SVR proxy model to be highly accurate mainly in the region near the predicted optimum so that we obtain an accurate optimal E(NPV) value. At points away from the optimum, less accuracy is needed. One possibility is to perform a sequence of optimizations where each optimization is performed with an SVR proxy, proxy k, which is built with


training set S^k, where S^k ⊂ S^{k+1}, k = 1, 2, …, and S^{k+1} is obtained by adding dS samples to S^k. The optimization procedure with the new iterative-sampling-refinement algorithm is shown in Algorithm 2.

(a) StoSAG (b) SVR-Method 1 (c) SVR-Method 2

Fig. 17—Comparison of optimal production BHPs obtained using SVR proxies and StoSAG with Eclipse, Example 3. StoSAG
represents the Eclipse results obtained using StoSAG; SVR-Method 1 represents the results of the SVR proxy generated using the
first method; and SVR-Method 2 represents the results of the SVR proxy generated by the second method.

Algorithm 2: Procedure for Robust Optimization With Iterative-Sampling Refinement
1. Perform the standard SVR training and optimization procedure described in Algorithm 1; the training set is denoted by S^1 and the optimal well control obtained is denoted by u_opt^1.
FOR k = 1, 2, …
2. Predict the NPV for each realization given u_opt^k, J(u_opt^k, m_i) for i = 1, 2, …, Ne, by running reservoir simulations.
3. Compute J_E(u_opt^k) = (1/Ne) Σ_{i=1}^{Ne} J(u_opt^k, m_i).
4. IF the SVR-predicted expected NPV, Ĵ_E(u_opt^k), satisfies |Ĵ_E(u_opt^k) − J_E(u_opt^k)| / J_E(u_opt^k) < 1×10⁻³, THEN EXIT FOR.
5. Obtain the training set S^{k+1} = S^k ∪ dS, where dS represents the normalized set of {[(u_opt^k)^T, m_i^T]^T, J(u_opt^k, m_i)}, i = 1, 2, …, Ne.
6. Build a new SVR proxy using S^{k+1} and optimize the well controls with the new SVR proxy; the obtained optimal well control is denoted by u_opt^{k+1}.
END FOR
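The refinement loop can be sketched as follows; a toy simulator and a crude random-search optimizer stand in for Eclipse and the steepest-ascent step, and all names and dimensions are hypothetical:

```python
import numpy as np
from sklearn.svm import SVR

def refine(train_X, train_y, models, optimize, simulate, tol=1e-3, max_outer=5):
    """Algorithm 2 sketch: alternate proxy-based optimization with infill sampling
    until the proxy's E[NPV] at the optimum matches the simulator's to within tol."""
    for _ in range(max_outer):
        proxy = SVR(kernel="rbf").fit(train_X, train_y)        # steps 1/6
        u_opt = optimize(proxy, models)
        sims = np.array([simulate(u_opt, m) for m in models])  # step 2
        je_sim = sims.mean()                                   # step 3
        X_opt = np.array([np.concatenate([u_opt, m]) for m in models])
        je_proxy = proxy.predict(X_opt).mean()
        if abs(je_proxy - je_sim) / je_sim < tol:              # step 4: exit test
            break
        train_X = np.vstack([train_X, X_opt])                  # step 5: S ∪ dS
        train_y = np.concatenate([train_y, sims])
    return u_opt, proxy

# Toy stand-ins: 3 controls, 3 parameters, 4 realizations.
rng = np.random.default_rng(7)
n_u, n_m, ne = 3, 3, 4
models = rng.random((ne, n_m))

def simulate(u, m):                    # smooth surrogate "simulator" NPV
    return 1.0 - np.sum((u - 0.7) ** 2) / n_u + 0.05 * m.mean()

def optimize(proxy, models):           # crude random search over the proxy
    cands = rng.random((500, n_u))
    scores = [proxy.predict(np.hstack([np.tile(c, (ne, 1)), models])).mean()
              for c in cands]
    return cands[int(np.argmax(scores))]

X0 = rng.random((40, n_u + n_m))
y0 = np.array([simulate(x[:n_u], x[n_u:]) for x in X0])
u_best, proxy = refine(X0, y0, models, optimize, simulate)
```

Each outer pass costs only Ne extra simulation runs, because the infill samples are exactly the (current optimum, realization) pairs whose NPVs must be simulated anyway for the exit test in step 4.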
We apply Algorithm 2 to the Brugge example to investigate whether the optimization performance and the SVR-proxy quality can be improved. Fig. 18 shows the SVR-predicted expected NPV vs. SVR proxy runs. Iterative-sampling refinement is performed after the 1,200th SVR run, which corresponds to the end of the first-stage optimization with the original SVR proxy. When the new samples are added to the training set, the E(NPV) value initially exhibits a large jump because the prediction error of the first SVR model is corrected. After that, two more proxy refinements are performed before convergence, but it is difficult to see from Fig. 18 when these two refinements occur because, owing to the improved quality of the SVR proxy, the E(NPV) value increases smoothly after the 1,200th SVR run.
Table 3 compares the optimization performance obtained with the original method and the method using proxy refinement. Note that when the optimal controls obtained with the iterative-refinement SVR implementation are entered into Eclipse, the E(NPV) predicted by Eclipse is identical to the NPV predicted by optimization using the SVR proxy (i.e., USD 3.611 billion). By using the proposed proxy-refinement algorithm, the SVR-generated optimal E(NPV) differs from that obtained with Eclipse using StoSAG by 0.5%, as opposed to the 5% difference obtained with the original work flow. The additional computational cost for the refinement comprises the 600 additional SVR runs and the 60 additional reservoir-simulation runs used to generate the infill samples for the three refinements. Overall, the additional cost is minor compared with the total computational cost required. With iterative refinement, the computational cost of SVR is roughly one-fifth that of optimization with StoSAG.


Fig. 18—E(NPV) vs. SVR runs, Example 3.



Scenario             Initial (million USD)   Optimal (million USD)   Computational Cost
SVR-Single           3,206                   3,453                   200 simulation runs + 1,200 SVR runs
SVR-Single/Eclipse   3,027                   3,605                   –
SVR-Infill           3,204                   3,611                   260 simulation runs + 1,800 SVR runs
SVR-Infill/Eclipse   3,027                   3,611                   –
StoSAG Opt           3,027                   3,624                   1,200 simulation runs
Table 3—Comparison of expectation of NPV obtained from different scenarios, Example 3. SVR-Single
represents the results generated with the single SVR proxy with 200 samples; SVR-Single/Eclipse
represents the results calculated by entering the SVR-Single optimal well controls into Eclipse; SVR-
Infill represents the results generated with the SVR proxies with refinement; SVR-Infill/Eclipse
represents the results calculated by entering the SVR-Infill optimal well controls into Eclipse; and
StoSAG Opt represents the results obtained with StoSAG using the Eclipse reservoir-simulation model
as the forward model.

We also test the example with Algorithm 2 to determine whether a training set with fewer than 200 samples can provide good
results. Thus, we build the initial SVR model with 140 training samples, with seven well controls paired with each reservoir realiza-
tion. After adding 20 samples three times, which requires 60 additional reservoir-simulation runs, optimization using Algorithm 2
with the SVR proxy converges, and the SVR proxy yields an E(NPV) of USD 3.602 billion. When the optimal controls generated with
the SVR proxy are entered into Eclipse, we obtain the same E(NPV) value of USD 3.602 billion. Compared with the E(NPV) value
obtained by optimization directly with Eclipse, the SVR-generated value is only 0.6% less, which indicates that, with the iterative-
sampling algorithm, an initial training set of fewer than 200 samples is also feasible for this particular large-scale robust-optimization
problem, which has 900 well-control variables. Finally, we note that these iterative-sampling results also provide a way to check
whether enough training samples have been used in building the SVR proxy to produce an estimate of E(NPV) consistent with the
reservoir simulator. First, if the SVR provides an accurate approximation of the reservoir simulator at points in a neighborhood of the
optimum, then the SVR-generated expected NPV should be essentially the same as the NPV generated by entering the SVR-estimated
controls into the reservoir simulator. This necessary condition is met with iterative sampling regardless of whether we start the
algorithm with 140 or 200 samples. (Starting from 140 samples, the final training set contains 200 samples: the initial 140 plus the
60 infill samples.) Second, if 140 samples are sufficient, the NPV results of SVR optimization should be approximately the same for
initialization with 200 samples as for initialization with 140 samples, and they are.
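The work flow just tested can be summarized in a short sketch. This is not a reproduction of the paper's Algorithm 2, only an illustration of the iterative-sampling idea: train the proxy, maximize on it, check the proxy optimum against the simulator, and add infill samples near the estimated optimum until the two agree. All callables (`simulate_npv`, `train_svr`, `maximize_enpv`, and the initial control sets) are hypothetical stand-ins to be supplied by the user; as in the paper, one training sample pairs one set of well controls with one realization, so each sample costs a single reservoir-simulation run.

```python
import numpy as np

def robust_opt_with_svr(realizations, initial_controls, simulate_npv,
                        train_svr, maximize_enpv, n_refine=3, tol=5e-3):
    """Sketch of iterative-sampling refinement for robust production optimization.

    One training sample pairs one control vector u with one realization m, so
    each sample costs a single reservoir-simulation run. All callables are
    hypothetical stand-ins supplied by the caller.
    """
    X, y = [], []
    for u in initial_controls:              # e.g., control sets from Latin-hypercube sampling
        for m in realizations:              # one simulation run per (u, m) pair
            X.append(np.concatenate([u, m]))
            y.append(simulate_npv(u, m))
    u_opt = enpv_sim = None
    for _ in range(n_refine + 1):
        proxy = train_svr(np.array(X), np.array(y))   # LS-SVR proxy (Appendix A)
        u_opt, enpv_proxy = maximize_enpv(proxy)      # maximize E(NPV) on the proxy
        npv_sim = [simulate_npv(u_opt, m) for m in realizations]
        enpv_sim = np.mean(npv_sim)                   # simulator E(NPV) at the proxy optimum
        if abs(enpv_sim - enpv_proxy) <= tol * abs(enpv_sim):
            break                                     # proxy is accurate near the optimum
        for m, npv in zip(realizations, npv_sim):     # infill: add the new samples at the
            X.append(np.concatenate([u_opt, m]))      # estimated optimum, then retrain
            y.append(npv)
    return u_opt, enpv_sim
```

The convergence test implements the necessary condition discussed above: the proxy-predicted E(NPV) at the estimated optimum must agree with the value obtained by entering those controls into the simulator.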

Conclusions
We developed a new framework to perform robust well-control optimization using an SVR proxy to replace a reservoir simulator as the
forward model. By training the SVR proxy to accurately predict the E(NPV) of life-cycle production that would be computed by a set
of reservoir-simulation runs (one run per reservoir model) for a given set of well controls, the total number of reservoir-simulation runs
required to perform robust optimization is limited to those required to train the SVR model to provide a good proxy for the reservoir
simulator. In our examples, 20 plausible reservoir-simulation models represent the geological uncertainty.
We also provide suggestions on how to test whether a sufficient number of training samples has been used so that the SVR proxy
provides a sufficiently accurate prediction of the NPV that would be calculated by the simulator. In the initial computations, the standard
SVR implementation found in the computer-science literature is adapted for application to the optimal well-control problem; we
subsequently introduce an iterative-sampling procedure that, during the optimization process, updates the SVR proxy to conform to the
E(NPV) output by the reservoir simulator in a region near the optimum.
On the basis of conceptual considerations and the results of the three examples, the following conclusions are warranted.
1. One can produce a reasonable approximation of the optimal E(NPV) far more computationally efficiently using the SVR proxy than
by using an ensemble-based stochastic gradient with a reservoir simulator as the forward model.
2. The blind test indicates that one cannot expect to obtain an accurate proxy from a linear response surface. However, unless the
number of parameters is reduced to the order of a few dozen, it is not computationally feasible to use a higher-order-polynomial
response surface.
3. Generally, one cannot obtain an acceptable approximation of the optimal E(NPV) using a linear response surface.


4. Optimization with SVR requires one-fifth (or less) of the computation time required by StoSAG.
5. With iterative sampling, it is possible to build an SVR proxy whose predicted E(NPV) agrees closely with the value computed by
the reservoir simulator for controls in a region near the optimum.
6. With iterative sampling, the SVR proxy can produce an estimated optimal value of E(NPV) that differs by less than 1% from the
E(NPV) generated with StoSAG.
7. For robust optimization, it appears that training a single SVR proxy across the realizations of the reservoir model used to represent
uncertainty performs better than training an SVR proxy for each realization.

Nomenclature
 b = discount factor or bias term
 c = cost of disposing or injecting water, USD/STB
 Cu = covariance matrix to enforce temporal smoothness of well controls
 d = search direction
 e = error term
 I = number of injectors
 J = net present value
 K = kernel function
 L = Lagrangian
 m = vector of reservoir-model parameters
 Ne = number of realizations that represent the uncertainty of the reservoir
 Ns = number of training samples for the SVR model
 Nu = number of control variables
 P = number of producers
 q = rate, STB/D
 ro = oil revenue, USD/STB
 S = training set
 u = vector of well-control variables
 w = coefficient vector
 x = input vector of the SVR model
 y = output variable of the SVR model
 α = parameter of the SVR model
 β = search step in the line-search algorithm
 γ = regularization factor for SVR
 εĴ = convergence threshold for the relative change of the objective function
 εu = convergence threshold for the relative change of the control variables
 σ = bandwidth of the Gaussian kernel
 φ = mapping function

Superscripts
 k = iteration step
 low = lower bound
 T = transpose
 up = upper bound
 + = Moore-Penrose pseudoinverse

Subscripts
 E = expectation of NPV
 i = index of realization
 l = index of element in a vector
 k = index of element in a vector or iteration step

References
Brouwer, D. R. and Jansen, J. D. 2004. Dynamic Optimization of Waterflooding With Smart Wells Using Optimal Control Theory. SPE J. 9 (4):
391–402. SPE-78278-PA. https://doi.org/10.2118/78278-PA.
Broyden, C. 1975. Basic Matrices: An Introduction to Matrix Theory and Practice. Basingstoke, UK: Macmillan.
Cao, F., Luo, H., and Lake, L. W. 2015. Oil-Rate Forecast by Inferring Fractional-Flow Models From Field Data With Koval Method Combined With
the Capacitance/Resistance Model. SPE Res Eval & Eng 18 (4): 534–553. SPE-173315-PA. https://doi.org/10.2118/173315-PA.
Cardoso, M. A. and Durlofsky, L. J. 2010. Use of Reduced-Order Modeling Procedures for Production Optimization. SPE J. 15 (2): 426–435. SPE-
119057-PA. https://doi.org/10.2118/119057-PA.
Castellini, A., Gross, H., Zhou, Y. et al. 2010. An Iterative Scheme to Construct Robust Proxy Models. Oral presentation given at ECMOR XII–12th
European Conference on the Mathematics of Oil Recovery, Oxford, UK, 6–9 September.
Chen, B. 2017. A Stochastic Simplex Approximate Gradient for Production Optimization of WAG and Continuous Water Flooding. PhD dissertation,
University of Tulsa, Tulsa.
Chen, B. and Reynolds, A. C. 2016. Ensemble-Based Optimization of the Water-Alternating-Gas-Injection Process. SPE J. 21 (3): 786–798. SPE-
173217-PA. https://doi.org/10.2118/173217-PA.
Chen, B. and Reynolds, A. C. 2018. CO2 Water-Alternating-Gas Injection for Enhanced Oil Recovery: Optimal Well Controls and Half-Cycle Lengths.
Comput. Chem. Eng. 113 (8 May): 44–56. https://doi.org/10.1016/j.compchemeng.2018.03.006.
Chen, B., Fonseca, R.-M., Leeuwenburgh, O. et al. 2017a. Minimizing the Risk in the Robust Life-Cycle Production Optimization Using Stochastic Sim-
plex Approximate Gradient. J. Pet. Sci. Eng. 153 (May): 331–344. https://doi.org/10.1016/j.petrol.2017.04.001.


Chen, B., He, J., Wen, X.-H. et al. 2017b. Uncertainty Quantification and Value of Information Assessment Using Proxies and Markov Chain Monte
Carlo Method for a Pilot Project. J. Pet. Sci. Eng. 157 (May): 328–339. https://doi.org/10.1016/j.petrol.2017.07.039.
Chen, C. 2011. Adjoint-Gradient-Based Production Optimization With the Augmented Lagrangian Method. PhD dissertation, University of Tulsa, Tulsa.
Chen, C., Gao, G., Ramirez, B. A. et al. 2015. Assisted History Matching of Channelized Models by Use of Pluri-Principal-Component Analysis. Pre-
sented at the SPE Reservoir Simulation Symposium, Houston, 25–28 February. SPE-173192-MS. https://doi.org/10.2118/173192-MS.
Chen, C., Li, G., and Reynolds, A. 2010. Closed-Loop Reservoir Management on the Brugge Test Case. Computat. Geosci. 14 (4): 691–703. https://
doi.org/10.1007/s10596-010-9181-7.
Chen, C., Li, G., and Reynolds, A. C. 2012. Robust Constrained Optimization of Short- and Long-Term Net Present Value for Closed-Loop Reservoir
Management. SPE J. 17 (3): 849–864. SPE-141314-PA. https://doi.org/10.2118/141314-PA.
Chen, Y. and Oliver, D. S. 2009. Ensemble-Based Closed-Loop Optimization Applied to Brugge Field. Presented at the SPE Reservoir Simulation Sym-
posium, The Woodlands, Texas, 2–4 February. SPE-118926-MS. https://doi.org/10.2118/118926-MS.
Chen, Y., Oliver, D. S., and Zhang, D. 2009. Efficient Ensemble-Based Closed-Loop Production Optimization. SPE J. 14 (4): 634–645. SPE-112873-
PA. https://doi.org/10.2118/112873-PA.
Crone, S. F., Guajardo, J., and Weber, R. 2006. The Impact of Preprocessing on Support Vector Regression and Neural Networks in Time Series Predic-
tion. Proc., International Conference on Data Mining DMIN ’06, Las Vegas, Nevada, 26–29 June, 37–44.
Drucker, H., Burges, C. J. C., Kaufman, L. et al. 1997. Support Vector Regression Machines. In Neural Information Processing Systems, Vol. 9, ed.
M. C. Mozer, J. I. Jordan, and T. Petsche, 155–161. Cambridge, Massachusetts: MIT Press.
Eide, A. L., Holden, L., Reiso, E. et al. 1994. Automatic History Matching by Use of Response Surfaces and Experimental Design. Proc., ECMOR
IV–4th European Conference on the Mathematics of Oil Recovery, Røros, Norway, 7–10 June.
Fonseca, R., Kahrobaei, S. S., van Gastel, L. J. T. et al. 2015. Quantification of the Impact of Ensemble Size on the Quality of an Ensemble Gradient
Using Principles of Hypothesis Testing. Presented at the SPE Reservoir Simulation Symposium, Houston, 23–25 February. SPE-173236-MS. https://
doi.org/10.2118/173236-MS.
Fonseca, R. M., Chen, B., Jansen, J. D. et al. 2016. A Stochastic Simplex Approximate Gradient (StoSAG) for Optimization Under Uncertainty. Int. J.
Numer. Meth. Eng. 109 (13): 1756–1776. https://doi.org/10.1002/nme.5342.
Fonseca, R. M., Leeuwenburgh, O., Van den Hof, P. M. J. et al. 2013. Improving the Ensemble Optimization Method Through Covariance Matrix Adap-
tation (CMA-EnOpt). Presented at the SPE Reservoir Simulation Symposium, The Woodlands, 18–20 February. SPE-163657-MS. https://doi.org/
10.2118/163657-MS.
Forrester, A. I. and Keane, A. J. 2009. Recent Advances in Surrogate-Based Optimization. Prog. Aerosp. Sci. 45 (1–3): 50–79. https://doi.org/10.1016/
j.paerosci.2008.11.001.
Gildin, E., Ghasemi, M., Romanovskay, A. et al. 2013. Nonlinear Complexity Reduction for Fast Simulation of Flow in Heterogeneous Porous Media. Presented
at the SPE Reservoir Simulation Symposium, The Woodlands, Texas, 18–20 February. SPE-163618-MS. https://doi.org/10.2118/163618-MS.
Guo, Z. and Reynolds, A. C. In press. INSIM-FT in Three Dimensions With Gravity. J. Comput. Phys. (submitted for review 29 December 2017).
Guo, Z., Chen, C., Gao, G. et al. 2017a. EUR Assessment of Unconventional Assets Using Machine Learning and Distributed Computing Techniques.
Presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, Austin, Texas, 24–26 July. URTEC-2659996-MS. https://
doi.org/10.15530/URTEC-2017-2659996.
Guo, Z., Chen, C., Gao, G. et al. 2017b. Applying Support Vector Regression to Reduce the Effect of Numerical Noise and Enhance the Performance of
History Matching. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 9–11 October. SPE-187430-MS. https://
doi.org/10.2118/187430-MS.
Guo, Z., Reynolds, A. C., and Zhao, H. 2018a. A Physics-Based Data-Driven Model for History Matching, Prediction, and Characterization of Water-
flooding Performance. SPE J. 23 (2): 367–395. SPE-182660-PA. https://doi.org/10.2118/182660-PA.
Guo, Z., Reynolds, A. C., and Zhao, H. 2018b. Waterflooding Optimization With the INSIM-FT Data-Driven Model. Computat. Geosci. 22 (3):
745–761. https://doi.org/10.1007/s10596-018-9723-y.
Guo, Z., Chen, C., Gao, G. et al. 2018c. Integration of Support Vector Regression With Distributed Gauss-Newton Optimization Method and its Applica-
tions to Uncertainty Assessment of Unconventional Assets. SPE Res Eval & Eng 21 (4): 1007–1026. SPE-191373-PA. https://doi.org/10.2118/
191373-PA.
He, J. and Durlofsky, L. J. 2014. Reduced-Order Modeling for Compositional Simulation by Use of Trajectory Piecewise Linearization. SPE J. 19 (5):
858–872. SPE-163634-PA. https://doi.org/10.2118/163634-PA.
He, J., Xie, J., Sarma, P. et al. 2016. Proxy-Based Work Flow for a Priori Evaluation of Data-Acquisition Programs. SPE J. 21 (4): 1400–1412. SPE-
173229-PA. https://doi.org/10.2118/173229-PA.
He, J., Xie, J., Wen, X.-H. et al. 2015. Improved Proxy for History Matching Using Proxy-for-Data Approach and Reduced Order Modeling.
Presented at the SPE Western Regional Meeting, Garden Grove, California, 27–30 April. SPE-174055-MS. https://doi.org/
10.2118/174055-MS.
Isebor, O. J. and Durlofsky, L. J. 2014a. Biobjective Optimization for General Oil Field Development. J. Pet. Sci. Eng. 119 (July): 123–138. https://
doi.org/10.1016/j.petrol.2014.04.021.
Isebor, O. J. and Durlofsky, L. J. 2014b. A Derivative-Free Methodology With Local and Global Search for the Constrained Joint Optimization of Well
Locations and Controls. Computat. Geosci. 18 (3–4): 463–482. https://doi.org/10.1007/s10596-013-9383-x.
Jansen, J.-D., Brouwer, D. R., Naevdal, G. et al. 2005. Closed-Loop Reservoir Management. First Break 23 (1): 43–48. https://doi.org/10.3997/1365-
2397.2005002.
Jansen, J.-D., Brouwer, R., and Douma, S. G. 2009. Closed Loop Reservoir Management. Presented at the SPE Reservoir Simulation Symposium, The
Woodlands, Texas, 2–4 February. SPE-119098-MS. https://doi.org/10.2118/119098-MS.
Jansen, J. D. and Durlofsky, L. J. 2017. Use of Reduced-Order Models in Well Control Optimization. Optimiz. Eng. 18 (1): 105–132. https://doi.org/
10.1007/s11081-016-9313-6.
Kraaijevanger, J. F. B. M., Egberts, P. J. P., Valstar, J. R. et al. 2007. Optimal Waterflood Design Using the Adjoint Method. Presented at the SPE Reser-
voir Simulation Symposium, Houston, 26–28 February. SPE-105764-MS. https://doi.org/10.2118/105764-MS.
Lake, L. W., Liang, X., Edgar, T. F. et al. 2007. Optimization of Oil Production Based on a Capacitance Model of Production and Injection Rates. Pre-
sented at the Hydrocarbon Economics and Evaluation Symposium, Dallas, 1–3 April. SPE-107713-MS. https://doi.org/10.2118/107713-MS.
Landa, J. L. and Güyagüler, B. 2003. A Methodology for History Matching and the Assessment of Uncertainties Associated With Flow Prediction. Pre-
sented at the SPE Annual Technical Conference and Exhibition, Denver, 5–8 October. SPE-84465-MS. https://doi.org/10.2118/84465-MS.
Lerlertpakdee, P., Jafarpour, B., and Gildin, E. 2014. Efficient Production Optimization With Flow-Network Models. SPE J. 19 (6): 1083–1095. SPE-
170241-PA. https://doi.org/10.2118/170241-PA.


Lu, R., Forouzanfar, F., and Reynolds, A. C. 2017. Bi-Objective Optimization of Well Placement and Controls Using StoSAG. Presented at the SPE Res-
ervoir Simulation Conference, Montgomery, Texas, 20–22 February. SPE-182705-MS. https://doi.org/10.2118/182705-MS.
McKay, M. D., Beckman, R. J., and Conover, W. J. 1979. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of
Output From a Computer Code. Technometrics 21 (2): 239–245. https://doi.org/10.2307/1268522.
Mercer, J. 1909. XVI. Functions of Positive and Negative Type, and Their Connection With the Theory of Integral Equations. Philos. Trans. Royal Soc.
A. 209 (1 January): 415–446. https://doi.org/10.1098/rsta.1909.0016.
Moraes, R., Rodrigues, J. R. P., Hajibeygi, H. et al. 2017. Multiscale Gradient Computation for Multiphase Flow in Porous Media. Presented at the SPE
Reservoir Simulation Conference, Montgomery, Texas, 20–22 February. SPE-182625-MS. https://doi.org/10.2118/182625-MS.
Morris, A. E., Fine, H. A., and Geiger, G. 2011. Handbook on Material and Energy Balance Calculations in Material Processing. Hoboken, New Jersey:
John Wiley & Sons.
Nguyen, A. P. 2012. Capacitance Resistance Modeling for Primary Recovery, Waterflood and Water-CO2 Flood. PhD dissertation, University of Texas
at Austin, Austin, Texas.
Oliveira, D. F. and Reynolds, A. C. 2014. An Adaptive Hierarchical Multiscale Algorithm for Estimation of Optimal Well Controls. SPE J. 19 (5):
909–930. SPE-163645-PA. https://doi.org/10.2118/163645-PA.
Peters, L., Arts, R., Brouwer, G. et al. 2010. Results of the Brugge Benchmark Study for Flooding Optimization and History Matching. SPE Res Eval &
Eng 13 (3): 391–405. SPE-119094-PA. https://doi.org/10.2118/119094-PA.
Platt, J. 1998. Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines. Technical report, No. MSR-TR-98-
14, Microsoft.

Sarma, P., Aziz, K., and Durlofsky, L. J. 2005. Implementation of Adjoint Solution for Optimal Control of Smart Wells. Presented at the SPE Reservoir
Simulation Symposium, The Woodlands, Texas, 31 January–2 February. SPE-92864-MS. https://doi.org/10.2118/92864-MS.
Saunders, C., Gammerman, A., and Vovk, V. 1998. Ridge Regression Learning Algorithm in Dual Variables. Proc., ICML-1998, 15th International Con-
ference on Machine Learning, 515–521.
Sayarpour, M. 2008. Development and Application of Capacitance-Resistive Models to Water/CO2 Flood. PhD dissertation, University of Texas at
Austin, Austin, Texas.
Schlumberger. 2013. Eclipse Reference Manual, Version 2013.1. Houston: Schlumberger.
Slotte, P. A. and Smorgrav, E. 2008. Response Surface Methodology Approach for History Matching and Uncertainty Assessment of Reservoir Simula-
tion Models. Presented at the Europec/EAGE Conference and Exhibition, Rome, 9–12 June. SPE-113390-MS. https://doi.org/10.2118/113390-MS.
Suykens, J. A. K., De Brabanter, J., Lukas, L. et al. 2002. Weighted Least Squares Support Vector Machines: Robustness and Sparse Approximation.
Neurocomput. 48 (1): 85–105. https://doi.org/10.1016/S0925-2312(01)00644-0.
van Doren, J. F. M., Markovinović, R., and Jansen, J.-D. 2006. Reduced-Order Optimal Control of Waterflooding Using Proper Orthogonal Decomposi-
tion. Computat. Geosci. 10 (1): 137–158. https://doi.org/10.1007/s10596-005-9014-2.
van Essen, G., Van den Hof, P. M. J., and Jansen, J. D. 2009. Hierarchical Economic Optimization of Oil Production From Petroleum Reservoirs. Oral
presentation given at the Workshop on Data Assimilation and Reservoir Optimization, Technical University of Delft, Delft, The Netherlands,
20 January.
van Essen, G., Van den Hof, P., and Jansen, J.-D. 2011. Hierarchical Long-Term and Short-Term Production Optimization. SPE J. 16 (1): 191–199.
SPE-124332-PA. https://doi.org/10.2118/124332-PA.
van Essen, G., Zandvliet, M., Van den Hof, P. et al. 2006. Robust Waterflooding Optimization of Multiple Geological Scenarios. Presented at the SPE
Annual Technical Conference and Exhibition, San Antonio, Texas, 24–27 September. SPE-102913-MS. https://doi.org/10.2118/102913-MS.
Weber, D. B. 2009. The Use of Capacitance-Resistance Models to Optimize Injection Allocation and Well Location in Water Floods. PhD dissertation,
University of Texas at Austin, Austin, Texas.
Yeten, B., Castellini, A., Guyaguler, B. et al. 2005. A Comparison Study on Experimental Design and Response Surface Methodologies. Presented at the
SPE Reservoir Simulation Symposium, The Woodlands, Texas, 31 January–2 February. SPE-93347-MS. https://doi.org/10.2118/93347-MS.
Yousef, A. A., Gentil, P. H., Jensen, J. L. et al. 2005. A Capacitance Model To Infer Interwell Connectivity From Production and Injection Rate Fluctua-
tions. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, 9–12 October. SPE-95322-MS. https://doi.org/10.2118/95322-MS.
Yousef, A. A., Gentil, P. H., Jensen, J. L. et al. 2006. A Capacitance Model To Infer Interwell Connectivity From Production and Injection Rate Fluctua-
tions. SPE J. 9 (6): 630–646. SPE-95322-PA. https://doi.org/10.2118/95322-PA.
Zhao, H., Kang, Z., Zhang, X. et al. 2016. A Physics-Based Data-Driven Numerical Model for Reservoir History Matching and Prediction With a
Field Application (associated discussion available as supporting information). SPE J. 21 (6): 2175–2194. SPE-173213-PA. https://doi.org/10.2118/
173213-PA.

Appendix A—Training Procedure of SVR


SVR is a machine-learning method for solving nonlinear-regression problems. Given a training set $S = \{(\mathbf{x}_k, y_k),\ k = 1, 2, \ldots, N_s\}$,
SVR seeks a function $\hat{y}(\mathbf{x})$ that is a good predictor of $y$ given the input $\mathbf{x}$. The basic idea is to transform the variable $\mathbf{x}$ from the input
space into a higher-dimensional feature space using a mapping function $\varphi(\mathbf{x})$, chosen so that the output $y$ corresponding to $\mathbf{x}$ has an approxi-
mately linear relationship with $\varphi(\mathbf{x})$. Letting $\mathbf{w}$ be the coefficient vector of $\varphi(\mathbf{x})$ and with $b$ a constant scalar, the linear function
between $y$ and $\varphi(\mathbf{x})$ is given by

$$\hat{y}(\mathbf{x}) = b + \mathbf{w}^T \varphi(\mathbf{x}), \qquad \text{(A-1)}$$

where $b$ and $\mathbf{w}$ are parameters that are obtained by solving the optimization problem

$$\min_{\mathbf{w},\, b}\; J(\mathbf{w}, b) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \frac{\gamma}{2}\sum_{k=1}^{N_s}\left[y_k - \mathbf{w}^T\varphi(\mathbf{x}_k) - b\right]^2, \qquad \text{(A-2)}$$

where $y_k$ is the true response and $\gamma$ is the regularization factor. From numerical tests, we find that the training results are insensitive
to the selection of $\gamma$ for $\gamma > 100$. In the examples considered in this paper, we set $\gamma = 200$. Because the training-error term in Eq. A-2 is
the least-squares error between the prediction and the true output, the SVR model generated by solving Eq. A-2 is also
called the LS-SVR model (Suykens et al. 2002). The problem of Eq. A-2 is equivalent to (Guo et al. 2017a)

$$\min_{\mathbf{w},\, b,\, \mathbf{e}}\; J(\mathbf{w}, \mathbf{e}) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \frac{\gamma}{2}\sum_{k=1}^{N_s} e_k^2, \qquad \text{(A-3)}$$


subject to

$$e_k = y_k - \mathbf{w}^T\varphi(\mathbf{x}_k) - b. \qquad \text{(A-4)}$$

We define the Lagrangian as

$$L(\mathbf{w}, b, \mathbf{e}, \boldsymbol{\alpha}) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \frac{\gamma}{2}\sum_{k=1}^{N_s} e_k^2 - \sum_{k=1}^{N_s} \alpha_k\left[\mathbf{w}^T\varphi(\mathbf{x}_k) + b + e_k - y_k\right]. \qquad \text{(A-5)}$$

The optimal solution of Eqs. A-3 and A-4 is obtained by solving $\nabla L = 0$, which yields the linear system

$$\frac{\partial L}{\partial \mathbf{w}} = 0 \;\rightarrow\; \mathbf{w} = \sum_{k=1}^{N_s} \alpha_k \varphi(\mathbf{x}_k), \qquad \text{(A-6a)}$$

$$\frac{\partial L}{\partial b} = 0 \;\rightarrow\; \sum_{k=1}^{N_s} \alpha_k = 0, \qquad \text{(A-6b)}$$

$$\frac{\partial L}{\partial e_k} = 0 \;\rightarrow\; \alpha_k = \gamma e_k, \quad k = 1, 2, \ldots, N_s, \qquad \text{(A-6c)}$$

$$\frac{\partial L}{\partial \alpha_k} = 0 \;\rightarrow\; \mathbf{w}^T\varphi(\mathbf{x}_k) + b + e_k - y_k = 0, \quad k = 1, 2, \ldots, N_s. \qquad \text{(A-6d)}$$

Eliminating $\mathbf{w}$ by substituting Eq. A-6a into Eq. A-6d yields

$$y_k = \sum_{l=1}^{N_s} \alpha_l \varphi(\mathbf{x}_l)^T\varphi(\mathbf{x}_k) + b + e_k. \qquad \text{(A-7)}$$

Substituting Eq. A-6c into Eq. A-7 yields

$$y_k = \sum_{l=1}^{N_s} \alpha_l \varphi(\mathbf{x}_l)^T\varphi(\mathbf{x}_k) + b + \frac{\alpha_k}{\gamma}. \qquad \text{(A-8)}$$

We denote $\mathbf{1}_{N_s} = [1, 1, \ldots, 1]^T$, $\mathbf{Y} = [y_1, y_2, \ldots, y_{N_s}]^T$, $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_{N_s}]^T$, and $\Omega_{k,l} = \varphi(\mathbf{x}_k)^T\varphi(\mathbf{x}_l)$. From Eq. A-8,

$$\mathbf{1}_{N_s}\, b + \left(\boldsymbol{\Omega} + \frac{1}{\gamma}\mathbf{I}\right)\boldsymbol{\alpha} = \mathbf{Y}. \qquad \text{(A-9)}$$
Combining Eq. A-9 with Eq. A-6b, we obtain

$$\begin{bmatrix} 0 & \mathbf{1}_{N_s}^T \\[4pt] \mathbf{1}_{N_s} & \boldsymbol{\Omega} + \dfrac{1}{\gamma}\mathbf{I} \end{bmatrix}\begin{bmatrix} b \\[4pt] \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\[4pt] \mathbf{Y} \end{bmatrix}, \qquad \text{(A-10)}$$

where

$$\Omega_{k,l} = \varphi(\mathbf{x}_k)^T\varphi(\mathbf{x}_l), \qquad \text{(A-11)}$$

and the inner product of $\varphi$ can be replaced by a kernel function,

$$\varphi(\mathbf{x}_k)^T\varphi(\mathbf{x}_l) = K(\mathbf{x}_k, \mathbf{x}_l). \qquad \text{(A-12)}$$

In our examples, an RBF kernel is used and is given by

$$K(\mathbf{x}_k, \mathbf{x}_l) = \exp\left(-\|\mathbf{x}_k - \mathbf{x}_l\|_2^2 / \sigma^2\right). \qquad \text{(A-13)}$$

Premultiplying Eq. A-9 by $\mathbf{1}_{N_s}^T\left(\boldsymbol{\Omega} + \dfrac{1}{\gamma}\mathbf{I}\right)^{-1}$ yields

$$\mathbf{1}_{N_s}^T\left(\boldsymbol{\Omega} + \frac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{1}_{N_s}\, b + \mathbf{1}_{N_s}^T\boldsymbol{\alpha} = \mathbf{1}_{N_s}^T\left(\boldsymbol{\Omega} + \frac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{Y}. \qquad \text{(A-14)}$$

Invoking $\mathbf{1}_{N_s}^T\boldsymbol{\alpha} = 0$, $b$ is solved from Eq. A-14 as given by

$$b = \frac{\mathbf{1}_{N_s}^T\left(\boldsymbol{\Omega} + \dfrac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{Y}}{\mathbf{1}_{N_s}^T\left(\boldsymbol{\Omega} + \dfrac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{1}_{N_s}}. \qquad \text{(A-15)}$$


From Eq. A-9, $\boldsymbol{\alpha}$ is given by

$$\boldsymbol{\alpha} = \left(\boldsymbol{\Omega} + \frac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{Y} - b\left(\boldsymbol{\Omega} + \frac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{1}_{N_s}. \qquad \text{(A-16)}$$

Substituting Eq. A-6a into Eq. A-1 and applying $K(\mathbf{x}_k, \mathbf{x}) = \varphi(\mathbf{x}_k)^T\varphi(\mathbf{x})$, the prediction function for $y(\mathbf{x})$ is given by

$$\hat{y}(\mathbf{x}) = \sum_{k=1}^{N_s} \alpha_k K(\mathbf{x}_k, \mathbf{x}) + b. \qquad \text{(A-17)}$$
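The training procedure above reduces to two linear solves with the matrix Ω + (1/γ)I, followed by the prediction formula of Eq. A-17. A minimal NumPy sketch is given below; γ = 200 follows the text, while σ = 0.5 is purely an illustrative choice. The `gradient` function is our rendering of the analytic proxy gradient, obtained by differentiating Eq. A-17 with the RBF kernel of Eq. A-13; this closed-form gradient is what makes steepest ascent on the proxy inexpensive.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian kernel matrix, Eq. A-13: K[i, j] = exp(-||a_i - b_j||^2 / sigma^2)."""
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-np.maximum(d2, 0.0) / sigma**2)

def train_ls_svr(X, y, gamma=200.0, sigma=0.5):
    """Closed-form LS-SVR training: b from Eq. A-15, then alpha from Eq. A-16."""
    Ns = X.shape[0]
    A = rbf_kernel(X, X, sigma) + np.eye(Ns) / gamma   # Omega + (1/gamma) I
    Ainv_y = np.linalg.solve(A, y)
    Ainv_1 = np.linalg.solve(A, np.ones(Ns))
    b = Ainv_y.sum() / Ainv_1.sum()                    # Eq. A-15
    alpha = Ainv_y - b * Ainv_1                        # Eq. A-16
    return alpha, b

def predict(Xtrain, alpha, b, sigma, Xnew):
    """Eq. A-17: y_hat(x) = sum_k alpha_k K(x_k, x) + b."""
    return rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b

def gradient(Xtrain, alpha, sigma, x):
    """Analytic gradient of Eq. A-17 with the RBF kernel:
    grad y_hat(x) = sum_k alpha_k K(x_k, x) * (-2 / sigma^2) * (x - x_k)."""
    k = rbf_kernel(x[None, :], Xtrain, sigma).ravel()  # K(x_k, x) for all k
    return (-2.0 / sigma**2) * ((x[None, :] - Xtrain) * (alpha * k)[:, None]).sum(axis=0)
```

Solving the bordered system of Eq. A-10 directly is mathematically equivalent; the two-solve route above simply avoids forming the augmented matrix.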

Zhenyu Guo is an analytics engineer at the Occidental Petroleum Corporation. His research interests include data-driven
models, machine learning, history matching, uncertainty quantification, and production optimization. Guo holds bachelor’s
and master’s degrees from China University of Geosciences, Beijing, and a PhD degree from the Department of Petroleum
Engineering at the University of Tulsa, all in petroleum engineering. He is a member of SPE.
Albert C. Reynolds is a research professor of petroleum engineering and mathematics and holder of the McMan Chair in Petro-
leum Engineering at the University of Tulsa, where he has been a faculty member since 1970. He is also the director of the Univer-
sity of Tulsa Petroleum Reservoir Exploitation Projects. Reynolds's research interests include optimization, scientific computation,
assessment of uncertainty, reservoir simulation, history matching, and well testing, and he has coauthored more than 100 refer-
eed papers in these areas. He holds a bachelor’s degree from the University of New Hampshire, a master’s degree from Case
Institute of Technology, and a PhD degree from Case Western Reserve University, all in mathematics. Reynolds has received the
SPE Distinguished Achievement for Petroleum Engineering Faculty Award, the SPE Reservoir Description and Dynamics Award,
the SPE Formation Evaluation Award, and the SPE John Franklin Carll Award. He is an SPE Distinguished Member and an SPE
Honorary Member. Reynolds is an associate editor for SPE Journal and for Computational Geosciences.
