article info

Article history:
Received 18 February 2015
Received in revised form 31 May 2015
Accepted 14 July 2015
Available online 15 July 2015

Keywords:
Reservoir simulation
Surrogate modeling
Production optimization
Artificial Neural Network
Adaptive sampling

abstract

Recently, production optimization has gained increasing interest in the petroleum industry. The most computationally expensive part of the production optimization process is the evaluation of the objective function performed by a numerical reservoir simulator. Employing surrogate models (a.k.a. proxy models) as a substitute for the reservoir simulator is proposed for alleviating this high computational cost.
In this study, a novel approach for constructing adaptive surrogate models with application to the production optimization problem is proposed. A dynamic Artificial Neural Network (ANN) is employed as the approximation function while the training is performed using an adaptive sampling algorithm. Multi-ANNs are initially trained using a small data set generated by a space-filling sequential design. Then, the state-of-the-art adaptive sampling algorithm recursively adds training points to enhance the prediction accuracy of the surrogate model using a minimum number of expensive objective function evaluations. Jackknifing and Cross-Validation (CV) methods are used during the recursive training and network assessment stages. The developed methodology is employed to optimize production on the benchmark PUNQ-S3 reservoir model. The Genetic Algorithm (GA) is used as the optimization algorithm in this study. Computational results confirm that the developed adaptive surrogate model outperforms the conventional one-shot approach, achieving greater prediction accuracy while substantially reducing the computational cost. The performance of the production optimization process is investigated when the objective function evaluations are performed using the actual reservoir model and/or the surrogate model. The results show that the proposed surrogate modeling approach, by providing a fast and accurate approximation of the actual reservoir simulation model, enhances the whole optimization process.

© 2015 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.petrol.2015.07.012
678 A. Golzari et al. / Journal of Petroleum Science and Engineering 133 (2015) 677–688
corresponding rate in a water flooding project in the Gulf of Mexico Pompano field. Queipo et al. (2002) constructed a surrogate model employing an ANN as the approximation function while using the DACE (Design and Analysis of Computer Experiments) methodology for the experimental design. The developed surrogate model was used to optimize the operational parameters of a Steam Assisted Gravity Drainage (SAGD) process. Zerpa et al. (2005) employed multiple surrogate models coupled with a global optimization algorithm to estimate optimal design variables of an Alkaline-Surfactant-Polymer (ASP) flooding process. Castellini et al. (2010) used thin-plate spline regression techniques to construct the surrogate model. They employed an iterative sampling strategy to capture the nonlinear behavior of the underlying system and efficiently refine the surrogate model.

Most of these studies have used a one-shot approach to develop the surrogate model. In the one-shot approach, the surrogate model is constructed in a single stage and is then used for all subsequent optimization without further updates. The main problem of the one-shot approach is that the number of training points needed to achieve an acceptable accuracy is generally not known in advance, while we are interested in training the surrogate model with as few points as possible. The adaptive sampling approach addresses this problem by selecting the training points sequentially.

In this study, an adaptive surrogate model is developed for application to the production optimization problem. The large number of control variables and response parameters is addressed by

1. Using a dynamic ANN as the approximation function.
2. A modified problem definition in which the network receives consecutive well control parameters and sequentially predicts the points of the cumulative production curves of interest.
3. Developing an individual surrogate model for predicting each output parameter (e.g. one surrogate model for predicting oil and another one for predicting water).

The outline of this manuscript is as follows. In Section 2, the different stages of the surrogate modeling process are explained. Section 3 presents details of the developed framework. Section 4 shows the numerical results on the PUNQ-S3 case study. Optimization is performed using GA while the developed surrogate model and/or the actual reservoir model evaluates the objective function. Section 5 presents the general conclusions.

2. Constructing the surrogate model

Surrogate models, according to their approximation strategy, can be divided into two main categories: (1) the model-driven or physics-based approach (Cardoso and Durlofsky, 2010; He et al., 2011; Rousset et al., 2014; Wilson and Durlofsky, 2013) and (2) the data-driven or black-box approach (Jones et al., 1998; Keane and Nair, 2005; Kleijnen, 2007). Model-driven approaches, known as Reduced Order Models (ROM), approximate the original equations with lower-order equations and thereby reduce the computational cost. Applying these approaches requires access to the reservoir simulator source code, which is generally impossible when using a commercial reservoir simulator. In contrast, data-driven approaches, by considering the reservoir simulator as a black box, generate the surrogate model using only input data and output responses. The data-driven approach is the focus of this study.
A surrogate model replaces the true functional relationship f by a mathematical expression f̂ that is much cheaper to evaluate. A schematic diagram comparing the surrogate model with the actual reservoir model is shown in Fig. 1.

Fig. 1. Schematic diagram of the data driven surrogate modeling approach.

The four major steps of surrogate model construction are as follows:

1. Statement of the problem,
2. Selecting the approximation function,
3. Design of experiments,
4. Surrogate quality assessment.

These steps are explained in the following subsections.

2.1. Statement of the problem

At this step, the inputs and outputs of the surrogate model and the corresponding variation limits are defined. The production optimization problem can be formulated as

\max J(u), \quad u \in \mathbb{R}^k, \quad \text{subject to: } c(u) \le 0 \qquad (1)

where J(u) is the objective function (e.g. Net Present Value (NPV) or cumulative oil produced), u is the vector of control variables (e.g. well Bottom Hole Pressures (BHPs) and/or well rates) and c represents the nonlinear constraints. In this study a simplified formulation of the NPV is used as the objective function, defined as (Asadollahi et al., 2014)

J(u) = \sum_{t=1}^{n_t} \Delta t \, \frac{Q_o^t r_o - Q_w^t r_w - Q_i^t r_i}{(1 + b/100)^{p_t}}, \qquad (2)

where n_t is the total number of control steps, \Delta t is the difference between two control steps in days, Q_o^t, Q_w^t and Q_i^t are the total field oil production rate, water production rate and water injection rate, all in STB/day over the t-th control step, r_o, r_w and r_i are the oil price, cost of water removal and cost of water injection, respectively, all in USD/STB, b is the discount rate in percent per year, and p_t is the elapsed time in years.

The aim is to evaluate the objective function using the surrogate model instead of the reservoir simulator. As a result, the surrogate model's inputs are the control variables and its outputs are the different components of the objective function. Moreover, instead of predicting a single point, our developed surrogate model predicts the cumulative oil and water production curves versus time. These values are later used to calculate the objective function (i.e. the NPV).

2.2. Selecting the approximation function

Different approximation functions have been employed for constructing the surrogate model (Forrester et al., 2008; Jurecka, 2007; Keane and Nair, 2005). Among them, the most popular methods are Kriging (Giunta et al., 1998; Jones, 2001; Sacks et al., 1989), Radial Basis Functions (RBF) (Gutmann, 2001; Regis and Shoemaker, 2005), Polynomial Regression (Myers et al., 2009) and the Artificial Neural Network (ANN) (Foroud et al., 2014; Samarasinghe, 2006). Efficient handling of high-dimensional and highly nonlinear problems and the capability to predict time series make the ANN a suitable approximation function for our application. The development of the ANN is inspired by the function of the human brain (Samarasinghe, 2006). An ANN is composed of one input layer, one output layer, and one or more hidden layers. Each of these layers contains several nodes which represent the neurons in the human brain. The neurons of each layer are connected to the neurons of other layers by connections with defined weights. Mathematically, the data flow within an ANN with one hidden layer for an input matrix Z can be expressed as follows (Samarasinghe, 2006):

\hat{y}_i(Z; \theta) = \sum_{j=1}^{H} \omega_{ij} \, s\!\left( \sum_{k=1}^{L} \mu_{jk} z_k + \zeta_j \right) + \eta_i, \quad (1 \le i \le M), \qquad (3)

where \hat{y}_i is the network output, \theta = (\omega_{11}, \ldots, \omega_{MH}, \eta_1, \ldots, \eta_M, \mu_{11}, \ldots, \mu_{HL}, \zeta_1, \ldots, \zeta_H) represents the weights, M is the number of outputs, H is the number of hidden neurons, L is the number of input variables and s(t) = (1 - e^{-t})/(1 + e^{-t}) is the transfer function. The weights are the adjustable parameters of the network, tuned through a process called training. During the training process, the weights of the network are iteratively adjusted to minimize the network prediction error on the training data set. In this study the Levenberg–Marquardt back-propagation method, which appears to be the fastest method for training moderate-sized feed-forward neural networks, is employed to train the ANN (Hagan et al., 1996). In order to avoid over-fitting and improve the network generalization, Bayesian regularization is used (for more details about the training process see Foresee and Hagan, 1997). By the universal approximation theorem, a feed-forward neural network trained using the back-propagation method with only one hidden layer and a nonlinear transfer function can accurately approximate any function with a finite number of discontinuities (Krose and Smagt, 1996). Therefore, a one-hidden-layer ANN with a tangent sigmoid transfer function is employed in this study. A large number of hidden-layer neurons (three times the number of input-layer neurons) is considered in order to ensure that good network generalization is obtained when using Bayesian regularization.

The total number of control variables in a production optimization problem is equal to n_p n_t, where n_p is the total number of wells to be controlled and n_t is the total number of control steps. A modified formulation is proposed in order to reduce the large number of input variables in a surrogate model applied to production optimization problems. It is assumed that the cumulative production at control step t, y(t), depends on the well control parameters at that control step, U(t) = [u_1(t), \ldots, u_{n_p}(t)], the difference in well control parameters with respect to the previous control step, \Delta U(t) = U(t) - U(t-1) = [u_1(t) - u_1(t-1), \ldots, u_{n_p}(t) - u_{n_p}(t-1)], and the cumulative productions at the two preceding control steps, y(t-1) and y(t-2). Therefore, the inputs of the ANN are U(t), \Delta U(t), y(t-1) and y(t-2), and its output is y(t) at control step t, for t = 1, \ldots, n_t. The structure of the resulting ANN with a feedback loop is shown in Fig. 2.
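The forward pass of Eq. (3) maps directly onto a few array operations. Below is a minimal numpy sketch (not the authors' implementation; the array shapes and the random weights are illustrative assumptions) of a one-hidden-layer network with the tangent sigmoid transfer function:

```python
import numpy as np

def tansig(t):
    # Tangent sigmoid transfer function s(t) = (1 - e^{-t}) / (1 + e^{-t})
    return (1.0 - np.exp(-t)) / (1.0 + np.exp(-t))

def ann_forward(z, mu, zeta, omega, eta):
    """One-hidden-layer ANN forward pass, Eq. (3).

    z     : input vector, shape (L,)
    mu    : input-to-hidden weights mu_jk, shape (H, L)
    zeta  : hidden biases zeta_j, shape (H,)
    omega : hidden-to-output weights omega_ij, shape (M, H)
    eta   : output biases eta_i, shape (M,)
    """
    hidden = tansig(mu @ z + zeta)   # inner sum over k = 1..L
    return omega @ hidden + eta      # outer sum over j = 1..H

# Tiny usage example with random weights (L=4 inputs, H=12 hidden, M=2 outputs)
rng = np.random.default_rng(0)
L, H, M = 4, 12, 2
y_hat = ann_forward(rng.normal(size=L), rng.normal(size=(H, L)),
                    rng.normal(size=H), rng.normal(size=(M, H)),
                    rng.normal(size=M))
print(y_hat.shape)  # (2,)
```

In the paper's dynamic configuration, z would be the concatenation of U(t), ΔU(t), y(t−1) and y(t−2), with y(t) fed back as input for the next control step.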
2.3. Design of experiments

The quality of the training data significantly impacts the accuracy of the resulting surrogate model. Hence, a suitable Design of Experiments (DOE) approach that selects the optimum number and location of the training data points can significantly enhance the surrogate modeling process (Alam et al., 2004). A space-filling design is suggested for deterministic numerical experiments (a.k.a. computer experiments), in which the same inputs always yield the same responses (Booker, 1998; Sacks et al., 1989). In a space-filling design, sample points are distributed as evenly as possible over the entire design space to ensure that a wide range of the design space is covered.

Moreover, in this study the training data points are generated using a sequential approach. In the sequential approach, the surrogate model is first constructed using a number of initial training points. Then, the surrogate model accuracy is improved by adding more data points to the initial training set in a stage-wise manner. Since the number of training points required to achieve an acceptable level of accuracy is not known in advance, it is desirable to train the surrogate model with the minimum number of points. The sequential approach stops the sampling process as soon as sufficient information is obtained.

The main goal during the sequential adaptation of the surrogate model is to select new points in areas where the response of the surrogate model is not accurate. Among the introduced approximation functions, only Kriging can provide an estimate of the prediction error in not previously observed areas. Jin et al. (2002) proposed using Cross-Validation (CV) to provide an estimate of the prediction error for RBFs. In this study the CV and jackknifing approach proposed by Kleijnen and Van Beers (2004) is employed to perform the sequential adaptation of the surrogate model. Initially, C candidate points are preselected by a space-filling DOE method (in this study the mc-intersite method proposed by Crombecq et al. (2011) is used). Then, the variance of the surrogate model prediction at each candidate point is estimated using CV and jackknifing, as explained in the following.

2.3.1. Cross-Validation

Cross-Validation (CV) (Meckesheimer et al., 2002; Refaeilzadeh et al., 2009) is a statistical method for evaluating and comparing machine-learning algorithms. The basic form of CV is N-fold CV. In N-fold CV the data set S\{X, Y\}, consisting of n pairs of input-output data (X, Y), is divided into N equal and independent subsets or folds,

S\{X, Y\} = \left\{ S_1\{X^1, Y^1\}, S_2\{X^2, Y^2\}, \ldots, S_N\{X^N, Y^N\} \right\}. \qquad (4)

The surrogate model is constructed N times, each time leaving out one of the subsets from the training data set. The prediction of each of these N surrogate models is calculated for each of the C candidate points.

2.3.2. Jackknifing

The jackknife estimate for each candidate point j is calculated as

\tilde{y}_{j;i} = N \hat{y}_j^{(-0)} - (N - 1)\, \hat{y}_j^{(-i)}, \quad j = 1, 2, \ldots, C, \ i = 1, 2, \ldots, N, \qquad (5)

where \hat{y}_j^{(-0)} is the prediction of the surrogate model trained with all data points and \hat{y}_j^{(-i)} is the prediction of the surrogate model trained with all data points except those in the i-th fold, which is held out when using N-fold cross-validation. The prediction variance is calculated as

\tilde{s}_j^2 = \frac{1}{N(N-1)} \sum_{i=1}^{N} \left( \tilde{y}_{j;i} - \bar{\tilde{y}}_j \right)^2, \qquad (6)

where

\bar{\tilde{y}}_j = \frac{1}{N} \sum_{i=1}^{N} \tilde{y}_{j;i}. \qquad (7)

Here, a larger variance indicates a higher prediction error; hence, the point among the C candidates with the maximum variance is selected as the new training point. The reservoir simulation is performed for the selected point, which is then added to the existing training points.
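The jackknife computation of Eqs. (5)-(7) reduces to a few array operations per candidate point. A minimal numpy sketch (the fold-wise predictions below are made-up numbers, not reservoir simulator output):

```python
import numpy as np

def jackknife_variance(y_all, y_folds):
    """Jackknife prediction variance for one candidate point, Eqs. (5)-(7).

    y_all   : prediction of the model trained on all points, y_hat^(-0)
    y_folds : the N predictions y_hat^(-i), each from the model trained
              with the i-th fold held out
    """
    N = len(y_folds)
    # Eq. (5): jackknife pseudo-values for i = 1..N
    y_tilde = N * y_all - (N - 1) * np.asarray(y_folds)
    y_bar = y_tilde.mean()  # Eq. (7)
    # Eq. (6): variance of the pseudo-values
    return ((y_tilde - y_bar) ** 2).sum() / (N * (N - 1))

# A candidate whose fold-wise predictions disagree gets a larger variance,
# signalling that the surrogate is uncertain there
v_stable = jackknife_variance(1.00, [0.99, 1.01, 1.00, 1.00, 1.00])
v_spread = jackknife_variance(1.00, [0.80, 1.25, 0.95, 1.10, 0.90])
assert v_spread > v_stable
```

Selecting the next training point is then simply an argmax of this variance over the C candidates.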
Fig. 3. Illustration of adaptive sampling; (a) initial points, (b) initial points and candidate points, (c) selected point and (d) final points.
∧
2. Candidate points are created (squares in Fig. 3-b). It must be where, yi is the surrogate model prediction and yi is the simulation
noted that the values of the candidate points are calculated by model output for data point i, respectively. The maximum error is
the surrogate model constructed using the available training compared with a predetermined value. Data points are added se-
points. quentially to the existing training data set to enhance the pre-
3. The point with the largest prediction variance is selected (the diction accuracy of the surrogate model if the maximum error is
triangle in Fig. 3-c). higher than the predetermined value.
4. The actual reservoir simulation is performed for the new points
and added to the training data set.
5. The process is repeated until the desired prediction accuracy is
3. Developed algorithm for adaptive surrogate modeling
achieved.
The developed algorithm for surrogate model construction is
Fig. 3-d shows final configuration of data points after the se-
summarized in Fig. 4.
quential training.
First, a set of initial training points are generated using the mc-
intersite method and the corresponding outputs are calculated
2.4. Surrogate model quality assessment
using the reservoir simulator. This data set is employed for initial
training of the ANN which is constructing the surrogate model.
Quality of the constructed surrogate model must be assessed
The surrogate model quality is then assessed using CV method. It
using a set of test points other than those used for the training
must be noted that large number of folds in CV slows down the
stage. Our aim is to use a method which performs the quality as-
validation process while small number of folds reduces the vali-
sessment stage using available training data sets rather than re-
dation accuracy. The number of folds in CV, N , is considered to be
quiring computationally demanding new data sets. The N-fold
5 for this study which is observed to provide a good assessment in
cross-validation (Meckesheimer et al., 2002) is used as the as-
a reasonable computational time. The sequential training process
sessment method in this study. The Relative Error (RE) of each data
starts when the surrogate model prediction quality is not accep-
point in omitted subsets is used as the quality measure calculated
table. During the sequential training first, C candidate points are
as follows:
generated using mc-intersite method. Then, Jackknife variance is
∧
yi − yi evaluated for each of these candidate points. The point with the
REi = , maximum jackknife variance is selected as a new training point.
yi (8) The reservoir simulator calculates the corresponding outputs for
the new point, and the data set is added to the existing training sets, which are then used for retraining the ANN. The employed mc-intersite method is associated with a relatively slow optimization process whose cost is proportional to the value of C. It was observed for this study that C = 5 provides a balance between accuracy and computational cost. Moreover, the 4 points which are not selected at each recursive training iteration are saved in a databank and added to the candidate points in the next iteration. The recursive training continues until the stopping criteria are satisfied. In this study the stopping criteria are a maximum CV relative error smaller than 0.1 or a maximum number of simulation runs equal to 250. More complex correlations between inputs and outputs are expected when the number of control variables increases. The developed adaptive surrogate modeling approach captures this complexity with a larger number of training data points. It is worth noting that the developed surrogate modeling approach can be modified to select more than one data point at each recursive training stage (maximum jackknife variance is still the selection criterion). This is particularly important in order to take advantage of an available parallel computing environment, where we expect to speed up the whole training process.

Fig. 5. Permeability field of the top layer and wells location.
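The recursive training procedure described above can be sketched in Python. This is schematic, not the authors' implementation: `simulate`, `train_ann`, `cv_max_relative_error`, `propose_candidates` and `jackknife_variances` are hypothetical stand-ins for the reservoir simulator, the Bayesian-regularized ANN training, the N-fold CV check of Eq. (8), the mc-intersite candidate design, and Eqs. (5)-(7), respectively; the defaults mirror the values reported in the text (tolerance 0.1, 250 runs, C = 5).

```python
def adaptive_training(simulate, train_ann, cv_max_relative_error,
                      propose_candidates, jackknife_variances,
                      initial_points, max_runs=250, tol=0.1, n_candidates=5):
    """Adaptive surrogate construction loop (schematic, after Fig. 4)."""
    X = list(initial_points)
    Y = [simulate(x) for x in X]        # expensive simulator calls
    databank = []                       # unselected candidates, reused next iteration
    while len(Y) < max_runs:
        model = train_ann(X, Y)
        if cv_max_relative_error(model, X, Y) <= tol:
            break                       # surrogate is accurate enough
        # New candidates from the space-filling design, plus carried-over ones
        candidates = propose_candidates(X, n_candidates) + databank
        variances = jackknife_variances(model, X, Y, candidates)
        best = max(range(len(candidates)), key=lambda i: variances[i])
        chosen = candidates[best]
        databank = [c for i, c in enumerate(candidates) if i != best]
        X.append(chosen)
        Y.append(simulate(chosen))      # one more expensive evaluation
    return train_ann(X, Y)
```

Selecting several points per iteration (for parallel simulation runs) would amount to taking the top-k variances instead of the single argmax.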
Fig. 7. Comparison of SM prediction and actual RM; (a) cumulative oil production – test data #1, (b) cumulative water production – test data #1, (c) cumulative oil
production – test data #2, (d) cumulative water production – test data # 2.
Fig. 8. BHP versus production time for two test cases with abrupt change in the control; (a) well p1, (b) well p2, (c) well p3, (d) well p4. (For interpretation of the references
to color in this figure, the reader is referred to the web version of this article.)
Table 2
Minimum, maximum, mean and standard deviation of the relative prediction error of the developed SM for two cases with abrupt changes in the control variables.

              Cumulative oil                      Cumulative water
              Min     Max     Mean    SD          Min     Max     Mean    SD
Test case-1   0.0004  0.0094  0.0038  0.0025      0.0008  0.0133  0.0076  0.0046
Test case-2   0.0001  0.0081  0.0041  0.0029      0.0015  0.0517  0.0220  0.0175
Fig. 9. SM and RM prediction for two cases with abrupt changes in control variable; (a) cumulative oil production for test case-1, (b) cumulative water production for test
case-1, (c) cumulative oil production for test case-2, (d) cumulative water production for test case-2.
Fig. 10. NPV versus generation number for RM optimization; case-1.
Fig. 13. Comparison of NPV obtained from three cases.

Table 5
Optimization results.

Fig. 14. The optimum control scenarios for production wells obtained using case-1 and case-3; (a) well p1, (b) well p2, (c) well p3, (d) well p4.

5. Conclusions
A dynamic ANN equipped with a feedback loop is shown to perform efficiently as the approximation function for the new configuration. A space-filling initial design followed by jackknifing and Cross-Validation is employed to perform the adaptive training of the surrogate model. The developed surrogate model was successfully applied to optimize production on the PUNQ-S3 reservoir model. The following conclusions were warranted:

• The developed adaptive surrogate modeling approach outperformed the conventional one-shot approach. This is due to the fact that in the developed approach an adequate number of training data points is iteratively selected from the undiscovered areas, which enhances the accuracy of the surrogate model.
• The developed surrogate model was capable of mimicking the reservoir simulator responses with an acceptable accuracy. It provides a good prediction of the oil and water production under abrupt changes of the control variables, which is particularly important for production optimization applications.
• The best performance is achieved when the optimization is performed using a combination of the actual reservoir model and the developed surrogate model (Case-3). The developed surrogate model provides a global representation of the search space which is refined during the optimization process in the promising area (where the optimum solution is located).
• The developed adaptive surrogate modeling assisted optimization approach not only provides an accurate and significantly faster substitute for the reservoir simulator-based optimization but also enhances the optimization performance by smoothing the search space.

References

Alam, F.M., McNaught, K.R., Ringrose, T.J., 2004. A comparison of experimental designs in the development of a neural network simulation metamodel. Simul. Modell. Pract. Theory 12 (7–8), 559–578.
Asadollahi, M., Nævdal, G., Dadashpour, M., Kleppe, J., 2014. Production optimization using derivative free methods applied to Brugge field case. J. Pet. Sci. Eng. 114, 22–37.
Badru, O., Kabir, C.S., 2003. Well placement optimization in field development. In: Paper SPE 84191 presented at the SPE Annual Technical Conference and Exhibition, Denver, Colorado, October 5–8, 2003.
Booker, A., 1998. Design and analysis of computer experiments. In: Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization. American Institute of Aeronautics and Astronautics.
Cardoso, M.A., Durlofsky, L.J., 2010. Use of reduced-order modeling procedures for production optimization. SPE J. 15 (2), 426–435.
Castellini, A., Gross, H., Zhou, Y., He, J., Chen, W., 2010. An iterative scheme to construct robust proxy models. In: Proceedings of the 12th European Conference on the Mathematics of Oil Recovery.
Centilmen, B., Ertekin, T., Grader, A., 1999. Applications of neural networks in multiwell field development. In: Proceedings of SPE Annual Technical Conference and Exhibition. SPE Paper 56433. Houston, Texas.
Crombecq, K., Laermans, E., Dhaene, T., 2011. Efficient space-filling and non-collapsing sequential design strategies for simulation-based modeling. Eur. J. Oper. Res. 214 (3), 683–696.
Do, S.T., Reynolds, A.C., 2013. Theoretical connections between optimization algorithms based on an approximate gradient. Comput. Geosci. 17 (6), 959–973.
Floris, F., Bush, M., Cuypers, M., Roggero, F., Syversveen, A.R., 2001. Methods for quantifying the uncertainty of production forecasts: a comparative study. Pet. Geosci. 7 (S), S87–S96.
Foresee, F.D., Hagan, M.T., 1997. Gauss–Newton approximation to Bayesian regularization. In: Proceedings of the International Joint Conference on Neural Networks. pp. 1930–1935.
Foroud, T., Seifi, A., AminShahidi, B., 2014. Assisted history matching using artificial neural network based global optimization method – applications to Brugge field and a fractured Iranian reservoir. J. Pet. Sci. Eng. 123, 46–61.
Forrester, A., Sobester, A., Keane, A., 2008. Engineering Design via Surrogate Modelling: A Practical Guide. John Wiley and Sons, Ltd., United Kingdom.
Giunta, A.A., Watson, L.T., Koehler, J., 1998. A comparison of approximation modeling techniques: polynomial versus interpolating models. AIAA Paper 98-4758.
Gutmann, H.-M., 2001. A radial basis function method for global optimization. J. Glob. Optim. 19 (3), 201–227.
Guyaguler, B., Horne, R.N., Rogers, L., Rosenzweig, J.J., 2000. Optimization of well placement in a Gulf of Mexico waterflooding project. In: Proceedings of SPE Annual Technical Conference and Exhibition. SPE Paper 63221. Dallas, Texas.
Hagan, M.T., Demuth, H.B., Beale, M., 1996. Neural Network Design. PWS Publishing Co., Boston, USA.
Haghighat Sefat, M., Salahshoor, K., Jamialahmadi, M., Moradi, B., 2012. A new approach for the development of fast-analysis proxies for petroleum reservoir simulation. Pet. Sci. Technol. 30 (18), 1920–1930.
Haupt, R.L., Haupt, S.E., 2004. Practical Genetic Algorithms. John Wiley & Sons, New Jersey.
He, J., Sætrom, J., Durlofsky, L.J., 2011. Enhanced linearized reduced-order models for subsurface flow simulation. J. Comput. Phys. 230 (23), 8313–8341.
Jin, R., Chen, W., Sudjianto, A., 2002. On sequential sampling for global metamodeling in engineering design. In: Proceedings of ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers. pp. 539–548.
Jones, D.R., 2001. A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 21 (4), 345–383.
Jones, D.R., Schonlau, M., Welch, W.J., 1998. Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13 (4), 455–492.
Jurecka, F., 2007. Robust Design Optimization Based on Metamodeling Techniques. Shaker Verlag GmbH, Germany.
Keane, A., Nair, P., 2005. Computational Approaches for Aerospace Design: The Pursuit of Excellence. John Wiley & Sons, England.
Kleijnen, J.P., 2007. Design and Analysis of Simulation Experiments, 111. Springer, USA.
Kleijnen, J.P., Van Beers, W.C., 2004. Application-driven sequential designs for simulation experiments: kriging metamodelling. J. Oper. Res. Soc. 55 (8), 876–883.
Krose, B., Smagt, P.v.d., 1996. An Introduction to Neural Networks. The University of Amsterdam, Amsterdam.
McKay, M.D., Beckman, R.J., Conover, W.J., 1979. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21 (2), 239–245.
Meckesheimer, M., Booker, A.J., Barton, R.R., Simpson, T.W., 2002. Computationally inexpensive metamodel assessment strategies. AIAA J. 40 (10), 2053–2060.
Mohaghegh, S.D., 2011. Reservoir simulation and modeling based on artificial intelligence and data mining (AI&DM). J. Nat. Gas Sci. Eng. 3 (6), 697–705.
Myers, R.H., Montgomery, D.C., Anderson-Cook, C.M., 2009. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. John Wiley & Sons, New Jersey.
Ozdogan, U., Sahni, A., Yeten, B., Guyaguler, B., Chen, W.H., 2005. Efficient assessment and optimization of a deepwater asset using fixed pattern approach. In: Proceedings of SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers.
Queipo, N.V., Goicochea, J.V., Pintos, S., 2002. Surrogate modeling-based optimization of SAGD processes. J. Pet. Sci. Eng. 35 (1–2), 83–93.
Refaeilzadeh, P., Tang, L., Liu, H., 2009. Cross-validation. In: Liu, L., Özsu, M.T. (Eds.), Encyclopedia of Database Systems. Springer, US, pp. 532–538.
Regis, R.G., Shoemaker, C.A., 2005. Constrained global optimization of expensive black box functions using radial basis functions. J. Glob. Optim. 31 (1), 153–171.
Rousset, M.H., Huang, C., Klie, H., Durlofsky, L., 2014. Reduced-order modeling for thermal recovery processes. Comput. Geosci. 18 (3–4), 401–415.
Sacks, J., Schiller, S.B., Welch, W.J., 1989. Designs for computer experiments. Technometrics 31 (1), 41–47.
Samarasinghe, S., 2006. Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition. CRC Press, Florida, US.
Wilson, K.C., Durlofsky, L.J., 2013. Optimization of shale gas field development using direct search techniques and reduced-physics models. J. Pet. Sci. Eng. 108, 304–315.
Zerpa, L.E., Queipo, N.V., Pintos, S., Salager, J.-L., 2005. An optimization methodology of alkaline–surfactant–polymer flooding processes using field scale numerical simulation and multiple surrogates. J. Pet. Sci. Eng. 47 (3–4), 197–208.