A Novel Automated and Probabilistic EOR Screening Method to Integrate

Theoretical Screening Criteria and Real Field EOR Practices Using Machine
Learning Algorithms
Mohammadali Tarrahi and Sardar Afra, Texas A&M University; Irina Surovets, SPD

Copyright 2015, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Russian Petroleum Technology Conference held in Moscow, Russia, 26-28 October 2015.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents
of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect
any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written
consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may
not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

To make sound operational decisions for reservoir exploitation, the oil and gas industry relies heavily on
predicting the performance of various enhanced recovery processes. Decisions on recovery strategies
should be taken at an early stage of field development planning. To select an appropriate recovery
technique based on reservoir and fluid characteristics, EOR screening criteria are used as a reliable
decision-making approach. A dependable first-order screening evaluation algorithm enables critical
decision making on potential enhanced oil recovery strategies with limited reservoir information.
In this study, we propose to solve the EOR screening problem with machine learning and pattern
recognition methods, which are well established in the computer science literature. We perform a
comprehensive study on the application of various machine learning methods such as the Bayesian
classifier, k-nearest neighbor classifier, minimum mean distance classifier, and artificial neural networks.
The proposed data-driven screening algorithm is a high-performance tool to select an appropriate EOR
method, such as steam injection, combustion, or miscible injection of CO2 and N2, based on different
reservoir and fluid properties such as permeability, depth, API, and viscosity. In this innovative approach,
we integrate both theoretical screening principles, such as the Taber criteria, and successful field EOR
practices worldwide. Not only does this algorithm propose an appropriate EOR method for a specific
reservoir condition, but it also gives the probability of success, or success rate, corresponding to each EOR
method. In addition, the proposed algorithm is able to address environmental, economic, geographical,
and technological limitations.
The proposed algorithm permits integration of different types of data, eliminates arbitrary approaches in
decision making, and provides accuracy and fast computation. The suitability of the proposed method is
demonstrated by different synthetic and real field EOR cases. This novel EOR screening method is
capable of evaluating the effectiveness of different EOR scenarios given a specific reservoir condition. We
showed that the proposed EOR screening algorithm is able to predict the appropriate EOR method
correctly in more than 90% of cases. We also ranked the proposed screening algorithms based on their
screening performance.
SPE-176725-MS

Introduction
Selecting a suitable enhanced oil recovery (EOR) method based on a given set of reservoir and fluid
properties and any additional data is referred to as EOR screening. EOR projects have been undertaken
more than ever as a result of the continuous decline in conventional oil production (Taber et al. 1997).
Thus, selecting the best recovery technique has become a key concern for the oil and gas industry and
for reservoir engineers (Taber et al. 1997). Furthermore, expertise, along with many assumptions about
the structures and relations among reservoir and fluid characteristics, is needed to obtain an efficient EOR
technique in each EOR screening scenario.
EOR screening was first introduced by Taber et al. (Taber and Martin 1997), in which all EOR
screening criteria were briefly summarized in a table and the spectrum of each EOR method was
graphically determined, in order to obtain and execute an efficient EOR technique for all types of reservoirs
with various fluid and rock properties. The performance of computer-assisted programs for EOR
screening is highly tied to the accuracy of the input data. Therefore, Taber revisited the EOR
criteria in his later works and proposed EOR screening criteria based on field results and oil recovery
mechanisms, which offer more realistic parameters for automated EOR tools in reservoir management
(Taber et al. 1997). Although computer-assisted algorithms have attracted researchers from various
engineering and science fields such as chemical engineering (Emrani et al. 2011 and 2012), little work has
been done on developing automated EOR screening. While introducing new EOR techniques is critical
to improving existing EOR schemes, the need for more reliable and structured screening
techniques becomes inevitable (Emrani et al. 2015; Afra and Nasr El-Din 2015).
In recent years, reservoir management has evolved to a more structured paradigm, in which reservoir
production strategies can be optimized simultaneously with assimilation of new production information in
a real-time fashion (Jafarpour et al. 2011; Tarrahi et al. 2013, and 2015). Employing machine learning and
big data analytics helped petroleum engineers to obtain more reliable, accurate and cost efficient methods
in reservoir management including EOR screening. Many works have been done in literature to apply such
artificial intelligence techniques in many aspects of petroleum engineering like reservoir simulation,
production optimization, field development, and history matching. For instance, a comprehensive set
of studies has been conducted by the authors in the field of automated history matching and reservoir
parameterization, introducing a novel parameterization method through tensor algebra (Afra et al. 2013
and 2014; Gildin et al. 2014). Another work by the authors (Afra et al. 2011) also showed the suitability of
supervised classification algorithms for high-dimensional, small-sample data sets.
Automated EOR screening through artificial intelligence has recently been addressed in the literature.
Various machine learning techniques have been proposed to facilitate the automated EOR screening
procedures including artificial neural networks (ANN), genetic algorithm (GA) and Bayesian networks
(BN). Alvarado et al. (Alvarado et al. 2002) employed machine learning algorithms, e.g., clustering, and
dimensionality reduction methods to extract information from a worldwide EOR/IOR data set to acquire
desired screening. Parada and Ertekin (Parada and Ertekin 2012) developed a neural-network-based
simulation tool to deliver general EOR screening criteria for a variety of recovery schemes, which provided
a new design for steam injection recovery projects. Zerafat et al. (Zerafat et al. 2011) used Bayesian
network analysis to build a unified expert system that assists further economic and environmental
assessments by classifying suitable EOR methods. Of special interest in this paper is to study the
performance of different pattern recognition algorithms in solving the EOR screening problem. A
comprehensive comparison is also provided on the performance of the classifiers with respect to the
corresponding average correct classification rate (ACCR).
The paper is organized as follows. In the next section, the classification framework and the design of
proper classifiers are briefly introduced. The Data Collection section describes how the EOR screening
data were collected and integrated into the proposed automated EOR screening schemes. Also, the probabilistic

process of synthetic data generation is explained in detail. In the Results and Discussion section,
experimental results are presented and explained in detail for 9 different EOR methods in a 9-dimensional
feature space of reservoir properties, and all results are compared with respect to the ACCR. Finally, the
last section concludes with several remarks, proposes the best classification method, and gives guidelines
for future work by the authors regarding the effect of feature selection and feature conditioning on the
performance of automated EOR screening.


In order to demonstrate the main concepts developed in the present work, we briefly describe the
classification problem. Eager readers may refer to the machine learning and pattern recognition literature
for a more thorough understanding of the classification paradigm and its application in the energy sector.
We start by describing the classification rule and then continue by introducing the classifiers most
applicable to EOR screening.
In machine learning and statistics, the problem of recognizing the label or class of a new, unknown
input, given a training set of observations with known labels, is referred to as classification. In this case,
the classification problem is based on supervised learning, in which a set of properly labeled training data
is available. In the absence of such labeled observations, the unknown data set is instead categorized
based on a specific measure, such as distance or data similarity, which is known as clustering (Alpaydin
2010; Devroye et al. 1996; Webb et al. 2011).
In two-group statistical pattern recognition, there is a feature vector X ∈ R^p and a label Y ∈ {-1, 1}.
The pair (X, Y) has a joint probability distribution F, which is unknown in practice. Hence, one has to
resort to designing classifiers from training data, which consist of a set of n independent points S_n = {(X_1,
Y_1), . . ., (X_n, Y_n)} drawn from F. A classification rule is a mapping g: {R^p x {0, 1}}^n x R^p -> {0, 1},
which maps the training data S_n into the designed classifier g(S_n, .): R^p -> {0, 1}. A linear
discriminant classifier is given by Eq. 1:

g(x) = 1 if a^T x + a_0 > 0, and g(x) = 0 otherwise.   (1)

Therefore, all training points are correctly classified if Y_i (a^T X_i + a_0) > 0, i = 1, . . ., n, where
Y_i ∈ {-1, 1} (Afra et al. 2011).
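To make Eq. 1 concrete, here is a minimal Python sketch of such a linear discriminant; the weight vector a and offset a_0 below are arbitrary toy values of our choosing, not parameters from the paper:

```python
import numpy as np

def linear_discriminant(a, a0):
    """Return the classifier g(x) = 1 if a^T x + a0 > 0, else 0 (Eq. 1)."""
    def g(x):
        return 1 if np.dot(a, x) + a0 > 0 else 0
    return g

# Toy 2D example: split the plane along the line x1 + x2 = 1
g = linear_discriminant(a=np.array([1.0, 1.0]), a0=-1.0)
print(g(np.array([2.0, 2.0])))  # -> 1 (above the line)
print(g(np.array([0.0, 0.0])))  # -> 0 (below the line)
```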
Of special interest in the present work is to employ pattern recognition tools, specifically Bayesian
classifiers, to rank the probability of success of all possible EOR schemes for a given reservoir with
specified fluid and rock properties. One may choose the best EOR method for a given reservoir based on
the outcome of this automated screening, known as an expert system, while also considering non-learning
parameters outside the training set (e.g., reservoir heterogeneity and production history, to name a few)
which either are not quantities or are not included in the Taber table. In this paper, in order to perform
computer-assisted EOR screening, we utilized 9 different classification rules, including Bayes optimal
classification, neural networks, support vector machines, k-nearest neighbors, Gaussian mixture, Gaussian,
Naïve Bayes, decision tree, and RBF. For the sake of time and space, here we describe only support vector
machines and Bayesian classification. Eager readers may refer to (Devroye et al. 1996; Webb et al. 2011)
for more details on the different classification methods as well as error estimation analysis.
Linear Support Vector Machines
The main idea in support vector machines is to adjust linear discrimination with margins such that the
margin is maximal; this is called the Maximal Margin Hyperplane (MMH) algorithm. Those points

closest to the hyperplane are called the support vectors and determine most of the properties of the solution.
Linear Discrimination with Margin
If we want to have a margin b > 0, the constraint becomes Y_i (a^T X_i + a_0) >= b for i = 1, . . ., n. The
optimal classification solution to this problem puts all points at a distance of at least b/||a|| from the
hyperplane. Since a, a_0, and b can be freely scaled, without loss of generality we can set b = 1; therefore
we have Y_i (a^T X_i + a_0) >= 1 for i = 1, . . ., n. The margin is 1/||a||, and the points that are at this exact
distance from the hyperplane are called support vectors. The idea here is to maximize the margin 1/||a||.
For this, it suffices to minimize (1/2)||a||^2 subject to the constraints Y_i (a^T X_i + a_0) >= 1, for i = 1, . . ., n.
The solution vector a* determines the MMH. The corresponding optimal value of a_0 is determined later
from a* and the constraints.
Non-Separable Data
If the data is not linearly separable, it is still possible to formulate the problem and find a solution by
introducing slack variables ξ_i >= 0, i = 1, . . ., n, for each of the constraints, resulting in a new set of 2n
constraints in Eq. 2:

Y_i (a^T X_i + a_0) >= 1 - ξ_i,  ξ_i >= 0,  i = 1, . . ., n.   (2)

Therefore, if ξ_i > 0, the corresponding training point is an outlier, i.e., it can lie closer to the
hyperplane than the margin, or even be misclassified. We introduce a penalty term in the functional,
which then becomes Eq. 3:

minimize (1/2)||a||^2 + C Σ_{i=1}^{n} ξ_i.   (3)
The constant C modulates how large the penalty for the presence of outliers is. If C is small, the penalty
is small and a solution is more likely to incorporate outliers. If C is large, the penalty is large and therefore
a solution is unlikely to incorporate many outliers. The method of Lagrange multipliers allows one to find
the solution to this problem and the corresponding MMH. Eager readers may see (Devroye et al. 1996;
Webb et al. 2011) for more details.
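As an illustration only (the paper does not specify its SVM solver), the penalized functional of Eq. 3 can be minimized by plain subgradient descent; the toy data, learning rate, and epoch count below are our assumptions:

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """Minimize (1/2)||a||^2 + C * sum of hinge losses (Eq. 3) by subgradient descent."""
    n, p = X.shape
    a, a0 = np.zeros(p), 0.0
    for _ in range(epochs):
        margins = y * (X @ a + a0)
        viol = margins < 1                      # points inside the margin (slack > 0)
        grad_a = a - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_a0 = -C * y[viol].sum()
        a, a0 = a - lr * grad_a, a0 - lr * grad_a0
    return a, a0

# Linearly separable toy data with labels in {-1, +1}
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
a, a0 = train_linear_svm(X, y)
print(np.sign(X @ a + a0))  # should recover the training labels
```

As discussed above, a larger C makes margin violations more expensive, so fewer outliers are tolerated.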
Bayesian Classification
As mentioned previously, in many classification approaches the goal is to construct a map from the
feature space to the space of class labels; in such cases a classification rule explicitly maps training data
into a designed classifier. Decision trees and neural networks are good examples of such classifiers.
The Bayesian classification approach, however, differs slightly from the others: in the Bayesian approach,
the joint probability distribution of the classes and features has to be estimated.
Given an n-dimensional random variable describing the feature space, X, and a random variable, C,
representing the class space, the problem is to estimate the following conditional probability function
utilizing Bayes rule, Eq. 4:

P(C | X, D) = P(X | C, D) P(C | D) / P(X | D),   (4)

wherein D is the training set. Therefore, the learning process in Bayesian classification turns into the
approximation of a joint probability distribution function. A new incoming data point can then be
classified simply by calculating the conditional probability of the class variable given the corresponding
feature vector (reservoir and fluid properties), which yields the most probable class (proper EOR method)
through the estimate constructed in the learning process.
For the sake of simplicity, and without loss of generality, assume a two-class classification problem. In
this case we are interested in classifying any incoming data point by identifying its label, and consequently
the correct class of the data, based on the joint probability distribution and the feature-space probability. The

Bayes rule suggests calculating all a posteriori probabilities, P(c_i | X), and then assigning the data point to the
class with the highest probability, Eq. 5:

P(c_i | X) = p(X | c_i) P(c_i) / p(X),   (5)

wherein P(c_i) is the prior probability of the class labels, which can easily be estimated from the training
data. The Bayes classification rule can now be stated as follows: an incoming instance is assigned to class
c_i if the posterior probability of that class given the features is greater than that of the other class. Thus,
considering Bayes rule, the problem of classification shrinks to estimating the class-conditional probability
density function, P(X | c_i). The most important point to note here is the minimization of the misclassification
probability, or classification error probability. In the two-class case, the misclassification probability consists
of two terms: the probability of classifying an unknown data point into c_1 although it belongs to c_2, and
vice versa. Thus, with R_1 and R_2 denoting the decision regions of c_1 and c_2, Eq. 6:

P_e = P(X ∈ R_1, c_2) + P(X ∈ R_2, c_1).   (6)

Now, employing the chain rule, one can write Eq. 7:

P_e = P(X ∈ R_1 | c_2) P(c_2) + P(X ∈ R_2 | c_1) P(c_1),   (7)

and utilizing Bayes rule results in Eq. 8:

P_e = ∫_{R_1} p(X | c_2) P(c_2) dX + ∫_{R_2} p(X | c_1) P(c_1) dX.   (8)

Considering Eq. 8, one may select the decision regions in feature space as follows, Eq. 9:

R_1: p(X | c_1) P(c_1) > p(X | c_2) P(c_2);  R_2: otherwise.   (9)

Hence, the Bayesian classifier is optimal in minimizing the classification error probability. For more
details on Bayesian classification and probability density function estimation, see (Alpaydin 2010;
Devroye et al. 1996; Webb et al. 2011).
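A minimal sketch of such a Bayes classifier, assuming Gaussian class-conditional densities p(X | c_i) with diagonal covariance (one simple choice of PDF estimator; the one-feature toy data are ours, not the EOR data set):

```python
import numpy as np

def fit_gaussian_bayes(X, y):
    """Estimate P(c_i) and a per-class Gaussian p(X|c_i) with diagonal covariance."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def predict_bayes(params, x):
    """Assign x to the class maximizing p(x|c_i) P(c_i) (Eqs. 5 and 9), in log space."""
    def log_score(prior, mu, var):
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(params, key=lambda c: log_score(*params[c]))

# Two well-separated one-feature classes
X = np.array([[0.0], [0.2], [0.1], [5.0], [5.2], [4.9]])
y = np.array([0, 0, 0, 1, 1, 1])
params = fit_gaussian_bayes(X, y)
print(predict_bayes(params, np.array([0.15])))  # -> 0
print(predict_bayes(params, np.array([5.1])))   # -> 1
```

Non-uniform priors P(c_i), e.g., favoring widely applied methods, enter only through the `prior` term of the score.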
Confusion Matrix
In the field of machine learning, the performance of a supervised classifier is monitored by means of a
confusion matrix. A confusion matrix is a counter array of size C x C, where C is the size of the class
space, in which every classification step is recorded: each row represents an actual class, while each
column represents a predicted class. Once a classification assigns an unknown input to a class, the
corresponding entry in the confusion matrix is increased by one. The ideal classifier is one whose
confusion matrix is diagonal, with all entries outside the main diagonal equal to zero. From the confusion
matrix M, the correct classification rate can be determined as follows, Eq. 10:

ACCR = (Σ_i M_ii) / (Σ_i Σ_j M_ij).   (10)
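The confusion-matrix bookkeeping and the ACCR of Eq. 10 can be sketched as follows (the toy label sequences are our own illustration):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """M[i, j] counts samples of actual class i predicted as class j."""
    M = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        M[t, p] += 1
    return M

def accr(M):
    """Average correct classification rate: trace(M) / total count (Eq. 10)."""
    return np.trace(M) / M.sum()

M = confusion_matrix([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], n_classes=3)
print(accr(M))  # 4 of 5 correct -> 0.8
```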
One has to note that a legitimate error estimator should always be calculated based on the test data sets,
not the training data. The next section describes the data collection process and explains how synthetic
data are generated based on real reservoir data sets and the Taber criteria table.
Data Collection
This section explains how the required training and test data sets are generated utilizing real EOR data
sets. Here, in automated EOR screening, a combination of the Taber table, successful EOR projects, and prior
knowledge about the reservoir is employed to generate training data in a 9-dimensional feature space. We
choose oil gravity (API), oil viscosity (cp), oil composition, oil saturation, formation type, net thickness
(ft), permeability (md), reservoir depth (ft), and temperature (°F) as the selection properties, or the members
of the feature space. Also, the classes of the classification problem are chosen to be the following EOR

techniques: nitrogen and flue gas, hydrocarbon, CO2, immiscible gases, micellar/polymer, ASP
and alkaline flooding, polymer flooding, combustion, steam, and surface mining. A multivariate joint
probability density function (PDF) is employed in order to generate the synthetic yet realistic training data set
based on a real data set of 7 continuous features and 8 classes. Table 1 summarizes the
class-sample information of the 744 real EOR projects. Using such a joint PDF helps us avoid
generating biased data points in the feature space; it also ensures that the most representative
samples are picked and that the feature space (all possible reservoir and fluid properties as the input) is covered thoroughly.

Table 1. Class-sample information for all EOR projects before and after the outlier removal process (number of actual samples in each class).

EOR Method (Class)                              Before Outlier Removal    After Outlier Removal
Nitrogen and flue gas                                     20                        19
Hydrocarbon                                              103                       100
CO2                                                      164                       151
Micellar/polymer, ASP and Alkaline Flooding                3                         3
Polymer Flooding                                          59                        54
Combustion                                                24                        24
Steam                                                    371                       327

This is of critical importance, since a biased data set can result in a completely wrong classification
and consequently a wrong EOR screening decision. Furthermore, the designed classifier will have
promising generalization power and can be applied to a wide range of new, unseen cases. Another
significant note in using real EOR data sets is that the correlation between some of the features impedes the
independence assumption. In fact, since features are correlated, one may not simply generate data points
for each feature using a 1D distribution and then combine the corresponding results to form a 9-dimensional
data point based on an independence assumption. Thus, we assumed correlation between temperature,
viscosity, depth, and oil gravity, so there are 6 different pairwise combinations of correlation coefficients
between these features. Fig. 1 shows the corresponding scatter plots of the 4 aforementioned correlated
features required to determine the correlation coefficients.

Figure 1. Scatter plots corresponding to paired correlation coefficients for temperature, depth, viscosity, and oil gravity.

In the present work, 9 data sets of 1000 samples each are generated for all 9 classes, i.e., the EOR methods
represent the classes here. In order to create training and test data sets, one third of the samples, 300

samples, were used as test samples and the rest were utilized to train the classifiers. We utilized uniform
and triangular distributions to generate the data sets through a Monte Carlo simulation. Latin hypercube
sampling (LHS) is employed to perform the sampling step in an efficient Monte Carlo simulation and to
achieve a reasonably accurate random distribution. After generating the data points, the calculated
correlation coefficients have to be applied so that the final samples are correlated random variables.
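A minimal sketch of the LHS stratification step (our illustration; it covers only the space-filling sampling in the unit hypercube and omits the mapping to the triangular/uniform marginals and the correlation-induction step described above):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n stratified samples in [0, 1]^d: exactly one sample per equal-probability bin per dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n   # jitter within each of the n strata
    for j in range(d):
        rng.shuffle(u[:, j])                               # decouple the dimensions
    return u

rng = np.random.default_rng(0)
u = latin_hypercube(100, 2, rng)
# Each dimension has exactly one sample per 1/100-wide stratum
print(np.all(np.sort(np.floor(u[:, 0] * 100).astype(int)) == np.arange(100)))  # -> True
```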
Data Preprocessing
In the field of statistical pattern recognition, in order to guard against high variability due to measurement
or experimental error, an observation type referred to as an outlier is defined. An outlier is a point that
lies very far from the mean of the corresponding random variable, where the distance is measured with
respect to a given threshold, usually a number of times the standard deviation. For a normally distributed
random variable, a distance of two times the standard deviation covers 95% of the points, and a distance
of three times the standard deviation covers 99% of the points (Grubbs 1969). Here, we removed outliers
to cover 99% of the data points, retaining 678 out of the 744 real EOR projects. Table 1 also summarizes
the results of the outlier removal process.
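The three-standard-deviation screen can be sketched as follows; the one-feature toy data below are ours, not drawn from the EOR data set:

```python
import numpy as np

def remove_outliers(X, k=3.0):
    """Keep rows lying within k standard deviations of the per-feature mean."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    keep = np.all(np.abs(X - mu) <= k * sigma, axis=1)
    return X[keep]

# 20 plausible readings near 1.0 plus one gross recording error
X = np.concatenate([np.linspace(0.9, 1.1, 20), [100.0]]).reshape(-1, 1)
print(remove_outliers(X).shape[0])  # -> 20 (the 100.0 point is dropped)
```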
Results and Discussion
In the following section, the EOR screening results of all classifiers are presented, and the performances of
these classifiers are compared with respect to the average correct classification rate (ACCR) defined in
Eq. 10. Table 2 summarizes all results for the classifiers regarding the ACCR. Based on the
results, the Bayes optimal classifier with a Gaussian parametric PDF estimator tends to have the best
performance in classifying unknown samples from the test data set. The Bayes optimal classifier aims at
minimizing the probability of classification error. The presented Bayesian classifier is designed with the
assumption of a uniform a priori distribution of classes (EOR methods), since a priori there was no
information on the accessibility and strategic preference of the different methods. However, if one can
designate a prior class distribution, for instance based on economic analysis, EOR method technological
availability, and previously established success rates, classifier performance can be improved significantly,
for instance by assigning a more realistic a priori class distribution (e.g., CO2 and steam injection are two
of the most applicable EOR methods, so they are assigned a higher a priori probability).

Table 2. Classifier performance in accordance with the average correct classification rate criterion.

Classifier                                                              ACCR (training)   ACCR (test)
Bayes optimal classifier with Gaussian parametric PDF estimator              N/A            0.9119
Bayes optimal classifier with Parzen non-parametric PDF estimator            N/A            0.7226
Bayes optimal classifier with k-nearest neighbor PDF estimator (k = 2)       N/A            0.795
k-nearest neighbor classifier with k = 1                                     N/A            0.711
k-nearest neighbor classifier with k = 11                                    N/A            0.822
Minimum mean distance with Euclidean distance                                N/A            0.7722
Minimum mean distance with Manhattan distance                                N/A            0.6463
Minimum mean distance with Mahalanobis distance                              N/A            0.5441
Discriminant analysis with MATLAB function classify                          N/A            0.8652
Neural networks: multi-layer perceptron, 10 neurons                          81.81%         78.85%
Neural networks: multi-layer perceptron, 20 neurons                          89.93%         87.04%
Neural networks: multi-layer perceptron, 30 neurons                          90.64%         85.60%
NN, conjugate gradient learning, 10-neuron hidden layer                      84.08%         81.51%

Based on our analysis, as shown in Table 2, the artificial neural network (ANN) with 20 neurons in the
hidden layer (multi-layer perceptron) has the second-best classification performance. One of the pitfalls

of using ANNs is the overfitting phenomenon, which is clearly visible in Table 2. By increasing the number
of hidden-layer neurons (making the network more complicated and more flexible to fit the data), the
training-data ACCR increases while the test-data ACCR does not show monotonic behavior. Because of
overfitting, the test-data ACCR of the ANN with 30 neurons is smaller than that of the one with 20 neurons.
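The same caution applies to training-set figures in general. For the k-nearest neighbor classifier of Table 2 with k = 1, for example, every training point is its own nearest neighbor, so the training ACCR is trivially 1.0 regardless of how well the classifier generalizes. A toy sketch (our own synthetic data, not the EOR set):

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k):
    """Classify each query point by majority vote among its k nearest training points."""
    preds = []
    for x in Xte:
        idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
        preds.append(np.bincount(ytr[idx]).argmax())
    return np.array(preds)

rng = np.random.default_rng(1)
Xtr = rng.normal(size=(40, 2)) + np.repeat([[0, 0], [2, 2]], 20, axis=0)
ytr = np.repeat([0, 1], 20)
# With k = 1 each training point matches itself at distance zero:
print((knn_predict(Xtr, ytr, Xtr, k=1) == ytr).mean())  # -> 1.0, yet says nothing about test ACCR
```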
Visualization of the training data in the feature space gives great insight into the separation and
mixing of the classes, as well as the correlation between different features. It also shows how complex and
challenging the classification problem can be. To illustrate the challenge of the EOR screening problem, we
show the scatter plot of the training data (in the feature space) labeled by the corresponding
classes (or EOR methods). For illustration purposes we can only plot 2D or 3D scatter diagrams. We
choose to visualize the training samples in 3D, Fig. 2 (in the space of depth, API, and viscosity), and in
2D, Fig. 3 (in the space of depth and API), with the sample labels specified by colored markers.

Figure 2. 3D visualization of the training data in the space of depth, API, and viscosity.

Figure 3. 2D visualization of the training data in the space of depth and API.

Conclusions
We performed a comprehensive study on the application of 9 different classifiers, including Bayesian
classifiers and artificial neural networks, to deliver a better understanding of automated EOR screening
based on real EOR data sets. We also presented a complete performance comparison between the classifiers
with respect to the average correct classification rate criterion. The analysis showed that the Bayesian
optimal classifier is the most reliable among all: it not only proposes an appropriate EOR method for a
specific reservoir condition but also gives the probability of success, or success rate, corresponding to each
EOR method. The proposed algorithm is able to address environmental, economic, geographical, and
technological limitations. Furthermore, the proposed technique permits integration of different types of
data, eliminates arbitrary approaches in decision making, and provides accuracy and fast computation.
Ongoing work by the authors is dedicated to feature selection and feature conditioning in order to improve
the classification performance.

References
Afra, S., Gildin, E., and Tarrahi, M. 2014. Heterogeneous Reservoir Characterization using Efficient
Parameterization through Higher Order SVD (HOSVD). Presented at the American Control
Conference (ACC), Portland, 4-6 June. doi: 10.1109/ACC.2014.6859246.
Afra, S., and Gildin, E. 2013. Permeability Parametrization using Higher Order Singular Value
Decomposition (HOSVD). Presented at the 12th International Conference on Machine Learning
and Applications (ICMLA), Miami, December 4-7. doi: 10.1109/ICMLA.2013.121.
Afra, S., and Braga-Neto, U. 2011. Studying the Possibility of Peaking Phenomenon in Linear Support
Vector Machines with non-Separable Data. Presented at IEEE International Workshop on
Genomic Signal Processing and Statistics (GENSIPS), San Antonio, December 4-6. doi: 10.1109/

Afra, S., Nasr El-Din, H. A., Socci, D., et al. 2015. A Novel Viscosity Reduction Plant-Based
Diluent for Heavy and Extra Heavy Oil. Presented at the 65th Canadian Chemical Engineering
Conference (CSChE2015), Calgary, October 4-7.
Alpaydin, E. 2010. Introduction to Machine Learning. MIT Press, p. 9. ISBN 978-0-262-01243-0.
Alvarado, V., Ranson, A., Hernandez, K., Manrique, E., Matheus, J., Liscano, T., & Prosperi, N.
(2002, January 1). Selection of EOR/IOR Opportunities Based on Machine Learning. Society of
Petroleum Engineers. doi: 10.2118/78332-MS
Emrani, A. S., Saber, M., and Farhadi, F. 2012. Modeling and Optimization of Fixed-Bed Fischer-
Tropsch Synthesis Using Genetic Algorithm. Journal of Chemical and Petroleum Engineering
46(1): 111.
Emrani, A. S., Saber, M., and Farhadi, F. 2011. A Decision Tree for Technology Selection of Nitrogen
Production Plants. Journal of Chemical and Petroleum Engineering 45(1): 111.
Emrani, A. S., and Nasr-El-Din, H. A. 2015. Stabilizing CO-Foam using Nanoparticles. Presented at
the SPE European Formation Damage Conference and Exhibition, Budapest, Hungary, 3-5 June.
SPE-174254-MS. doi: 10.2118/174254-MS.
Gildin, E., and Afra, S. 2014. Efficient inference of reservoir parameter distribution utilizing higher
order SVD Reparameterization. Presented at the 14th European conference on the mathematics of
oil recovery (ECMOR XIV), Sicily, Italy, August 7-9. doi: 10.3997/2214-4609.20141826.
Devroye, L., Gyorfi, L., and Lugosi, G. 1996. A probabilistic theory of pattern recognition. Appli-
cations of mathematics. Springer.
Grubbs, F. E. 1969. Procedures for Detecting Outlying Observations in Samples. Technometrics 11
(1): 121. doi: 10.1080/00401706.1969.10490657.
Jafarpour, B., and Tarrahi, M. 2011. Assessing the performance of the ensemble Kalman filter for
subsurface flow data integration under variogram uncertainty. Water Resources Research, 47(5).
Parada, C. H., & Ertekin, T. (2012, January 1). A New Screening Tool for Improved Oil Recovery
Methods Using Artificial Neural Networks. Society of Petroleum Engineers. doi: 10.2118/
Taber, J. J., Martin, F. D., & Seright, R. S. (1997, August 1). EOR Screening Criteria Revisited - Part
1: Introduction to Screening Criteria and Enhanced Recovery Field Projects. Society of Petroleum
Engineers. doi: 10.2118/35385-PA
Tarrahi, M., B. Jafarpour, and A. Ghassemi (2015), Integration of microseismic monitoring data into
coupled flow and geomechanical models with ensemble Kalman filter, Water Resour. Res., 51,
doi: 10.1002/2014WR016264.
Tarrahi, M., Jafarpour, B., & Ghassemi, A. (2013, September 30). Assimilation of Microseismic Data
into Coupled Flow and Geomechanical Reservoir Models with Ensemble Kalman Filter. Society
of Petroleum Engineers. doi: 10.2118/166510-MS
Webb, A. R., Copsey, K. D. 2011. Statistical Pattern Recognition. John Wiley & Sons.
Zerafat, M. M., Ayatollahi, S., Mehranbod, N., & Barzegari, D. (2011, January 1). Bayesian Network
Analysis as a Tool for Efficient EOR Screening. Society of Petroleum Engineers. doi: 10.2118/