Article history:
Received 5 March 2013
Received in revised form 28 May 2013
Accepted 12 June 2013
Available online 12 July 2013

Keywords:
Compressibility factor
Natural gas
Particle swarm optimization
Genetic algorithm
Artificial neural network

Abstract

The measurement of PVT properties of natural gas in gas pipelines, gas storage systems, and gas reservoirs requires accurate values of the compressibility factor. Although equations of state and empirical correlations have been used to estimate the compressibility factor, the demand for novel, more reliable, and easy-to-use models has encouraged researchers to introduce modern tools such as artificial intelligence systems. This paper introduces particle swarm optimization (PSO) and the genetic algorithm (GA) as population-based stochastic search algorithms that optimize the weights and biases of networks and prevent trapping in local minima. Hence, GA and PSO were used to minimize the neural network error function. A database containing 6378 data points was employed to develop the models. The proposed models were compared with conventional correlations, and the model predictions showed good accuracy in both the training and testing stages. The results showed that artificial neural networks (ANNs) remarkably overcame the inadequacies of the empirical models, with PSO–ANN improving the performance significantly. Additionally, regression analysis yielded an efficiency coefficient (R²) of 0.999, which can be considered very promising.

Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.
1. Introduction

Every chemical or petroleum engineer sometimes needs to predict the PVT properties of fluids. This analysis is carried out in laboratories, but at the early stages of field development it poses challenges because the information is not readily available. This inspired engineers to develop empirical or semi-empirical correlations that require only the primary characteristics. One of the most widely used parameters is the compressibility factor, whose value plays a direct role in chemical and reservoir calculations. The need arises when no experimental data are available for the required composition, pressure, and temperature conditions (Kumar, 2004).

* Corresponding author. Department of Petroleum Engineering, Petroleum University of Technology, Ahwaz, Iran. Tel.: +98 9375634541. E-mail address: ali.chamkalani@gmail.com (A. Chamkalani).
A. Chamkalani et al. / Journal of Natural Gas Science and Engineering 14 (2013) 132–143. http://dx.doi.org/10.1016/j.jngse.2013.06.002

2. Natural gas compressibility factor

The gas compressibility factor (Z-factor), a parameter that measures the deviation of a real gas from ideal-gas behavior, is a crucial parameter in almost all calculations for gases (Kamyab et al., 2010). Different approaches have been adopted for predicting the compressibility factor, including equations of state (EoS), empirical correlations, and artificial intelligence (AI).

A general principle, the corresponding states principle (CSP), threw light on the estimation of the Z-factor by asserting that suitably dimensionless properties of all substances follow universal variations of suitably dimensionless variables of state and other dimensionless quantities (Poling et al., 2001). The number of parameters characteristic of the substance determines the level of CSP. Two-parameter CSP uses just two characteristic properties, such as Tc and Pc, to make the state conditions dimensionless, and the dimensionless function may be the Z-factor. Although these corresponding-states correlations are very nearly exact for simple fluids, systematic deviations are observed for more complex fluids. Improvements to CSP led to the introduction of a third parameter characteristic of molecular structure. Some researchers introduced Zc, the compressibility factor at the critical condition (Lydersen et al., 1955), and the most popular parameter, the acentric factor ω (Pitzer et al., 1955, 1957a,b). It should be noted, however, that most of these functions, whether correlations or analytical expressions, have been derived with only Tpr and Ppr, and both types of functions are approximate. Therefore, despite the elimination of the acentric factor, we develop our model on the basis of reduced pressure and temperature. Besides, when dealing with natural gas, no analytical relation exists for the acentric factor, so it must be calculated from correlations generated from Tc and Pc. Furthermore, as the correlations we judged have all been obtained from these two parameters, we agreed to set them as our model's inputs.

3. Pseudo critical properties models

For the sake of the pseudo critical properties of natural gas, several correlations were presented to calculate the pseudo critical temperature and pressure via mixing rules, including Kay (1936).

4.2. Empirical correlations

Numerous computations and the need for predetermined parameters resulted in the utilization of empirical correlations, which facilitated the computations and seemed to be more user-friendly models. In the following, we try to chronicle these developments.

4.2.1. Papay (1968)

Papay (1968) proposed a simple relationship for calculating the compressibility factor:

Z = 1 - 0.3648758 (Ppr/Tpr) + 0.04188423 (Ppr/Tpr)^2    (1)

4.2.2. Hall and Yarborough (1973)

Hall and Yarborough (1973) presented an equation of state that reproduces the Standing and Katz Z-factor chart. They based their model on the Starling–Carnahan EoS (Carnahan and Starling, 1969) and fitted their expressions using data taken from the Standing and Katz Z-factor chart:

Z = (1 + y + y^2 - y^3)/(1 - y)^3 - (14.54/Tpr - 8.23/Tpr^2 + 3.39/Tpr^3.5) y
    + (90.7/Tpr - 242.2/Tpr^2 + 42.4/Tpr^3) y^(1.18 + 2.82/Tpr)    (2)

where

y = 0.06125 Ppr exp[-1.2 (1 - 1/Tpr)^2] / (Tpr Z)    (3)

4.2.3. Beggs and Brill (1973)

Beggs and Brill (1973) introduced an equation generated from the Standing and Katz Z-factor chart:

Z = A + (1 - A)/exp(B) + C Ppr^D    (4)

where

A = 1.39 (Tpr - 0.92)^0.5 - 0.36 Tpr - 0.101
B = (0.62 - 0.23 Tpr) Ppr + [0.066/(Tpr - 0.86) - 0.037] Ppr^2 + 0.32 Ppr^6 / 10^(9 (Tpr - 1))
C = 0.132 - 0.32 log Tpr
D = 10^(0.3106 - 0.49 Tpr + 0.1824 Tpr^2)

4.2.4. Dranchuk et al. (1974)

where

ρpr = 0.27 Ppr / (Z Tpr)    (7)

The corresponding coefficients are listed in Table 1.

Table 1
The corresponding coefficients of Dranchuk et al. (1974).

Coefficient  Tuned coefficient
A1    0.31506237
A2   -1.0467099
A3   -0.57832729
A4    0.53530771
A5   -0.61232032
A6   -0.10488813
A7    0.68157001
A8    0.68446549

4.2.5. Dranchuk and Abou-Kassem (1975)

Dranchuk and Abou-Kassem (1975) proposed an eleven-constant equation of state for calculating gas compressibility factors:

Z = 1 + (A1 + A2/Tpr + A3/Tpr^3 + A4/Tpr^4 + A5/Tpr^5) ρpr
      + (A6 + A7/Tpr + A8/Tpr^2) ρpr^2
      - A9 (A7/Tpr + A8/Tpr^2) ρpr^5
      + A10 (1 + A11 ρpr^2) (ρpr^2/Tpr^3) exp(-A11 ρpr^2)    (8)
where

ρpr = 0.27 Ppr / (Z Tpr)    (9)

Table 2 records the tuned coefficients of the Dranchuk and Abou-Kassem (1975) correlation.

4.2.6. Shell Oil Company

Kumar (2004) referenced the Shell company model for estimation of the Z-factor:

Z = A + B Ppr + (1 - A) exp(-C) - D (Ppr/10)^4    (10)

where

A = -0.101 - 0.36 Tpr + 1.3868 (Tpr - 0.919)^0.5
B = 0.021 + 0.04275/(Tpr - 0.65)
C = Ppr (E + F Ppr + G Ppr^4)
D = 0.122 exp[-11.3 (Tpr - 1)]    (11)
E = 0.6222 - 0.224 Tpr
F = 0.0657/(Tpr - 0.85) - 0.037
G = 0.32 exp[-19.53 (Tpr - 1)]

4.2.7. Hall and Iglesias-Silva (2007)

Hall and Yarborough (1973) used an augmented hard-sphere equation to calculate the Z-factor of natural gas mixtures. They combined the Z-factor of a hard sphere, represented by the Carnahan–Starling equation, with a correction term that accounted for the deficiencies of the hard-sphere model in order to represent the real fluid and the attractive part of the Z-factor. Hall and Yarborough (1973) pointed out that the method was not recommended for application if the pseudo-reduced temperature was less than one. Therefore, Hall and Iglesias-Silva (2007) decided to add a correction term to the Hall and Yarborough (1973) equation which applies at low reduced temperature while it does not influence the original expression:

Z = (1 + y + y^2 - y^3)/(1 - y)^3 - (14.54/Tpr - 8.23/Tpr^2 + 3.39/Tpr^3.5) y
    + (90.7/Tpr - 242.2/Tpr^2 + 42.4/Tpr^3) y^(1.18 + 2.82/Tpr)
    + k1 y exp[-k2 (y - 0.421)^2] + k3 y^10 exp[-69279 (y - 0.374)^4]    (12)

where

y = 0.06125 Ppr exp[-1.2 (1 - 1/Tpr)^2] / (Tpr Z),
k1 = -1.87 + 0.001 Tpr^6.3, k2 = 171.8 Tpr^13.1,    (13)
k3 = -3525 [1 - exp(-219/Tpr^6.3)]

4.2.8. Bahadori et al. (2007)

Bahadori et al. (2007) introduced a correlation which relates the compressibility factor to reduced temperature and pressure. They tuned the correlation coefficients using data in the ranges 0.2 < Ppr < 16 and 1.05 < Tpr < 2.4:

Z = a + b Ppr + c Ppr^2 + d Ppr^3    (14)

where

a = Aa + Ba Tpr + Ca Tpr^2 + Da Tpr^3
b = Ab + Bb Tpr + Cb Tpr^2 + Db Tpr^3
c = Ac + Bc Tpr + Cc Tpr^2 + Dc Tpr^3    (15)
d = Ad + Bd Tpr + Cd Tpr^2 + Dd Tpr^3

We gathered the coefficients in Table 3.

4.2.9. Heidaryan et al. (2010a)

Multiple regression analysis was carried out by Heidaryan et al. (2010a) to develop a correlation benefiting from 1220 data points in the ranges 0.2 ≤ Ppr ≤ 15 and 1.2 ≤ Tpr ≤ 3:

Z = [A1 + A2 ln Ppr + A3 (ln Ppr)^2 + A4 (ln Ppr)^3 + A5/Tpr + A6/Tpr^2]
    / [1 + A7 ln Ppr + A8 (ln Ppr)^2 + A9/Tpr + A10/Tpr^2]    (16)
Table 4
The coefficients of the Heidaryan et al. (2010a) method.

Coefficient  Tuned coefficient
A1    1.115323727
A2    0.07903952088760
A3    0.01588138
A4    0.008861345
A5    2.16190792611599
A6    1.157531187
A7    0.05367780720737
A8    0.0146557
A9    1.80997374923296
A10   0.954860388

Table 6
The tuned coefficients in the Azizi et al. (2010) equations.

Coefficient  Tuned coefficient        Coefficient  Tuned coefficient
a   0.0373142485385592                k   24,449,114,791.1531
b   0.0140807151485369                l   19,357,955,749.3274
c   0.0163263245387186                m   126,354,717,916.607
d   0.0307776478819813                n   623,705,678.385784
e   13,843,575,480.943800             o   17,997,651,104.3330
f   16,799,138,540.763700             p   151,211,393,445.064
g   1,624,178,942.6497600             q   139,474,437,997.172
h   13,702,270,281.086900             r   24,233,012,984.0950
i   41,645,509.896474600              s   18,938,047,327.5205
j   237,249,967,625.01300             t   141,401,620,722.689
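Because Z appears on both sides of the density-based forms above (Eqs. (8) and (9)), the Dranchuk and Abou-Kassem correlation must be solved iteratively. A minimal Python sketch (not the authors' code), using the standard published DAK constants in place of the Table 2 values, which are not reproduced in this copy:

```python
import math

# Standard published Dranchuk and Abou-Kassem (1975) constants.
A = [0.3265, -1.0700, -0.5339, 0.01569, -0.05165,
     0.5475, -0.7361, 0.1844, 0.1056, 0.6134, 0.7210]

def z_dak(ppr, tpr, tol=1e-10, max_iter=200):
    """Solve Eq. (8) together with rho_pr = 0.27*Ppr/(Z*Tpr) (Eq. (9))
    by damped successive substitution, starting from the ideal-gas Z = 1."""
    A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11 = A
    z = 1.0
    for _ in range(max_iter):
        r = 0.27 * ppr / (z * tpr)                       # Eq. (9)
        z_new = (1.0
                 + (A1 + A2/tpr + A3/tpr**3 + A4/tpr**4 + A5/tpr**5) * r
                 + (A6 + A7/tpr + A8/tpr**2) * r**2
                 - A9 * (A7/tpr + A8/tpr**2) * r**5
                 + A10 * (1.0 + A11*r**2) * (r**2 / tpr**3)
                   * math.exp(-A11 * r**2))              # Eq. (8)
        if abs(z_new - z) < tol:
            return z_new
        z = 0.5 * (z + z_new)                            # damping for robustness
    return z
```

The same fixed-point scheme applies to the Hall–Yarborough family (Eqs. (2)–(3) and (12)–(13)), with the corresponding expression substituted for Eq. (8).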
They obtained their model in the ranges 0.2 < Ppr < 30 and 1.0 < Tpr < 3.0.

4.2.11. Azizi et al. (2010)

Azizi et al. (2010) utilized curve-fitting software to produce a correlation with 20 coefficients. This correlation was derived based on 3038 points from the Standing and Katz Z-factor chart. They presented ABI to estimate the sweet gas compressibility factor over the ranges 0.2 < Ppr < 11 and 1.1 < Tpr < 2:

Z = A + (B + C)/(D + E)    (18)

where

A = a Tpr^2.16 + b Ppr^1.028 + c Ppr^1.58 Tpr^(-2.1) + d ln(Tpr^(-0.5))
B = e + f Tpr^2.4 + g Ppr^1.56 + h Ppr^0.124 Tpr^3.033

4.2.12. Sanjari and Nemati Lay (2012)

Z = 1 + A1 Ppr + A2 Ppr^2 + A3 Ppr^A4 / Tpr^A5
      + A6 Ppr^(A4+1) / Tpr^A7 + A8 Ppr^(A4+2) / Tpr^(A7+1)    (20)

The tuned coefficients for this equation are shown in Table 7.

4.2.13. Shokir et al. (2012)

Shokir et al. (2012) presented a model for estimating the Z-factors of sweet gases, sour gases, and gas condensates using genetic programming (GP). In addition, they built models for the pseudo-critical pressure and temperature as well, which may assist researchers in applying them instead of traditional methods:

Z = A + B + C + D + E    (21)

where

A = 2.679562 (2 Tpr - Ppr - 1) / [(Ppr^2 + Tpr^3)/Ppr]
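As a concrete example of the explicit correlations in Section 4.2, a sketch of the Beggs and Brill form (Eq. (4)); the constant 0.3106 in the exponent of D follows the commonly published version of the correlation:

```python
import math

def z_beggs_brill(ppr, tpr):
    """Explicit Beggs and Brill (1973) Z-factor, Eq. (4):
    Z = A + (1 - A)/exp(B) + C*Ppr^D."""
    a = 1.39 * (tpr - 0.92) ** 0.5 - 0.36 * tpr - 0.101
    b = ((0.62 - 0.23 * tpr) * ppr
         + (0.066 / (tpr - 0.86) - 0.037) * ppr ** 2
         + 0.32 * ppr ** 6 / 10 ** (9.0 * (tpr - 1.0)))
    c = 0.132 - 0.32 * math.log10(tpr)
    d = 10 ** (0.3106 - 0.49 * tpr + 0.1824 * tpr ** 2)
    return a + (1.0 - a) / math.exp(b) + c * ppr ** d
```

Unlike the density-based forms, no iteration is required; note the correlation is undefined for Tpr ≤ 0.92 because of the square root in A.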
4.3. Artificial intelligence

As time goes by, intelligent systems have been used more and more, and have become accepted as powerful tools in petroleum and chemical modeling (Chamkalani et al., 2013; Zendehboudi et al., 2012; Roosta et al., 2012; Ahmadi et al., 2013; Vasanth Kumar, 2009). The application of neural networks goes back to Normandin et al. (1993), who calculated the compressibility factor of pure gases. Later, Kamyab et al. (2010) addressed the deficiencies of Normandin et al. (1993) and suggested a more accurate ANN model, as before but with two inputs.

4.3.1. Artificial neural network

Artificial neural networks are functional abstractions of the biological neural structures of the central nervous system (Chamkalani et al., 2013; Topçu and Sarıdemir, 2008). For modeling purposes, the commonly used feed-forward ANN architecture, namely the MLP, may be employed. The MLP network approximates the nonlinear input–output relationships defined by yk = fk(x, wk), k = 1, 2, …, K, where wk is the vector defining the network weights. The MLP network usually consists of three layers, described as the input, hidden, and output layers, comprising N, L, and K processing nodes, respectively. Each node in the input (hidden) layer is linked to all the nodes in the hidden (output) layer using weighted connections. The MLP architecture also houses a bias node (with a fixed output of +1) in its input and hidden layers; the bias nodes are also connected to all the nodes in the subsequent layer. The use of bias nodes helps the MLP-approximated function to be positioned anywhere in the N-dimensional input space; in their absence, the function is forced to pass through the origin of the N-dimensional space. The number of input nodes (N) is equal to the number of operating variables, whereas the number of output nodes (K) equals the number of outputs. However, the number of hidden nodes (L) is an adjustable parameter whose magnitude is determined by issues such as the desired approximation and generalization performance of the network model. In order for the MLP network to accurately approximate the nonlinear relationship existing between the process inputs and outputs, it needs to be trained in a manner such that a pre-specified error function is minimized. In essence, the MLP training procedure aims at obtaining an optimal weight set {wk} that minimizes that error function. The mean square error (MSE) is a commonly employed error function for MLPs.

Since the parameter settings for an MLP are often designed quite differently due to the unique characteristics of the data, trial and error seems to be the most common way to identify the optimal values of the learning rate, momentum, hidden neurons, and learning cycles, but it does not guarantee optimal performance.

4.3.2. Particle swarm optimization

Particle swarm optimization (Kennedy and Eberhart, 1995) is an emerging population-based meta-heuristic that simulates social behavior, such as birds flocking to a promising position, to achieve precise objectives in a multidimensional space. It has been applied successfully to a wide variety of highly complicated optimization problems (Lin et al., 2008) as well as various real-world problems (He and Wang, 2007; Kwok et al., 2006; Liu et al., 2008; Peng et al., 2008; Qiao et al., 2008). Like evolutionary algorithms, PSO performs searches using a population (called a swarm) of individuals (called particles) that are updated from iteration to iteration.

The size of the population is denoted psize. To discover the optimal solution, each particle changes its search direction according to two factors: its own best previous experience (pbest) and the best experience of all other members (gbest). Shi and Eberhart (1998a) termed pbest the cognitive part and gbest the social part.

Each particle represents a candidate position (i.e., solution) as a point in a D-dimensional space, and its status is characterized by its position and velocity. The D-dimensional position of particle i at iteration t can be represented as x_i^t = {x_i1^t, x_i2^t, …, x_iD^t}. Likewise, the velocity (i.e., distance change) of particle i at iteration t, which is also a D-dimensional vector, can be described as v_i^t = {v_i1^t, v_i2^t, …, v_iD^t}. In the simple version of PSO, there was no actual control over the previous velocity of the particles. In later versions of PSO, this shortcoming was addressed by incorporating a new parameter, called the inertia weight, introduced by Shi and Eberhart (1998b). Let p_i^t = {p_i1^t, p_i2^t, …, p_iD^t} represent the best solution that particle i has obtained until iteration t, and p_g^t = {p_g1^t, p_g2^t, …, p_gD^t} denote the best solution obtained from the p_i^t in the population at iteration t. In order to search for the optimal solution, each particle changes its velocity based on the cognitive and social parts using Eq. (23):

V_id^t = w V_id^(t-1) + c1 r1 (P_id^t - x_id^t) + c2 r2 (P_gd^t - x_id^t),  d = 1, 2, …, D    (23)

where c1 indicates the cognitive learning factor, c2 indicates the social learning factor, the inertia weight (w) is used to slowly reduce the velocity of the particles to keep the swarm under control, and r1 and r2 are random numbers uniformly distributed in U(0, 1).

The inertia weight is linearly decreasing (Shi and Eberhart, 1999) according to the following equation:

w_t = w_max - [(w_max - w_min)/t_max] t    (24)

where w_max and w_min are the initial and final values of the inertia weight, respectively, t is the current iteration number, and t_max is the maximum number of iterations.

It is possible to clamp the velocity vectors by specifying upper and lower bounds vmax to avoid too rapid movement of particles in the search space. Hence, the velocities of all the particles are limited to the range [-vmax, vmax] (Arumugam et al., 2008).

Each particle then moves to a new potential solution based on the following equation:

X_id^(t+1) = X_id^t + V_id^t,  d = 1, 2, …, D    (25)

The basic process of the PSO algorithm is given as follows:

Step 1: (Initialization) Randomly generate initial particles.
Step 2: (Fitness) Measure the fitness of each particle in the population.
Step 3: (Update) Compute the velocity of each particle with Eq. (23).
Step 4: (Construction) For each particle, move to the next position according to Eq. (25).
Step 5: (Termination) Stop the algorithm if the termination criterion is satisfied; return to Step 2 otherwise.

The process of PSO is finished when the termination condition is satisfied. Fig. 1 depicts the flowchart of PSO–ANN.

4.3.3. Genetic algorithm

Genetic algorithm (GA) is a global, heuristic, stochastic optimization technique based on evolution theory and genetic principles, developed by Holland (1975). Goldberg and Michalewicz discussed the mechanism and robustness of GA in solving nonlinear optimization problems (Goldberg, 1989; Michalewicz, 1992). The algorithm begins with a randomly generated population which consists of chromosomes, and applies three kinds of genetic operators:
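The update rules of Eqs. (23)–(25), the linearly decreasing inertia weight of Eq. (24), and the five steps above can be sketched as follows. This is a generic minimizer, not the paper's Matlab implementation; the sphere function in the usage note is only a stand-in for the network's MSE:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=42):
    """Minimize f over [lo, hi]^dim with the PSO updates of Eqs. (23)-(25)."""
    rng = random.Random(seed)
    lo, hi = bounds
    v_max = 0.2 * (hi - lo)                         # velocity clamp bound
    # Step 1: initialization
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]                 # Step 2: fitness
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):                          # Step 5: fixed-iteration stop
        w = w_max - (w_max - w_min) * t / iters     # Eq. (24)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))    # Eq. (23), Step 3
                v[i][d] = max(-v_max, min(v_max, v[i][d]))      # clamp to [-vmax, vmax]
                x[i][d] = x[i][d] + v[i][d]                     # Eq. (25), Step 4
            val = f(x[i])
            if val < pbest_val[i]:                  # update cognitive memory (pbest)
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:                 # update social memory (gbest)
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

For example, `pso_minimize(lambda p: sum(x * x for x in p), 5, (-5.0, 5.0))` drives the 5-dimensional sphere function toward its minimum at the origin.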
Fig. 1. Flowcharts of the PSO and GA training loops: initialization, fitness calculation, elitism (GA) or velocity and position updates (PSO), and the termination check.
Table 8
The selected parameters of PSO and GA.

PSO: population size; initial inertia weight (wmax); final inertia weight (wmin); learning factor (c1); learning factor (c2).
GA: population size; crossover probability (Cp); mutation probability (Pmut).
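A real-coded GA loop over candidate weight vectors, exposing the Table 8 knobs (population size, Cp, Pmut), might look as follows. The tournament selection, arithmetic crossover, and Gaussian mutation operators here are illustrative assumptions, since this copy does not spell out the paper's operator choices:

```python
import random

def ga_minimize(f, dim, bounds, pop_size=40, gens=200,
                cp=0.8, pmut=0.1, seed=7):
    """Real-coded GA sketch: binary tournament selection, arithmetic
    crossover with probability cp, per-gene Gaussian mutation with
    probability pmut, and one-elite preservation ('Elitism' in Fig. 1)."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda g: max(lo, min(hi, g))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        fit = [f(ind) for ind in pop]
        elite = pop[min(range(pop_size), key=lambda i: fit[i])][:]

        def pick():                                  # binary tournament selection
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if fit[i] < fit[j] else pop[j]

        children = [elite]                           # elitism: keep the best as-is
        while len(children) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < cp:                    # arithmetic (blend) crossover
                a = rng.random()
                p1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            child = [clip(g + rng.gauss(0.0, 0.5)) if rng.random() < pmut else g
                     for g in p1]                    # Gaussian mutation
            children.append(child)
        pop = children
    best = min(pop, key=f)
    return best, f(best)
```

Elitism guarantees the best fitness never worsens between generations, which matches the monotone MSE curves discussed later for Fig. 8.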
generated sets. This methodology is very robust in finding the optimum set of weights and biases for the ANN models. The presented models overcame the shortcomings of the standard neural network by increasing the probability of locating the global optimum (minimum) of the error function.

A three-layer FFBP (feed-forward back-propagation) network was used in the PSO–ANN, GA–ANN, and unaccompanied ANN models. The input layer has two nodes corresponding to the pseudo-reduced pressure and temperature, and the output layer has one node, which is the compressibility factor. The sigmoid activation function was used as the transfer function from the input layer to the hidden layer, and the linear function was taken as the activation function in the last layer. The program was developed in Matlab, in which the Marquardt–Levenberg algorithm was used for training the unaccompanied ANN, and the mean square error (MSE) function was selected as the evaluation index of network performance. All the input and output data were normalized within a uniform range (-1, +1) to ensure that they receive equal attention during the training process.

Our various runs suggested that ten neurons in the hidden layer yielded an effective model which, consequently, gave a better compressibility factor. Based on this, we propose the topology 2-10-1, in which the numbers of hidden neurons and hidden layers are 10 and 1, respectively. In order to create an equal competition for our compared models, we settled on running the models up to 100 iterations, so that the stopping criterion was the end of the iterations.

Regarding the methodological difference between GA and PSO: unlike GA, PSO has no complicated evolutionary operators such as crossover, selection, and mutation, and it is highly dependent on stochastic processes. In addition, the evolution process of GA suffers from the permutation problem and noisy fitness evaluation (Yao and Liu, 1997; Belew et al., 1991), indicating that two identical ANNs may have different representations. This makes the evolution process quite inefficient in producing fit offspring. Most GAs use binary string representations of connection weights and architectures. This creates many problems, one of which is the representation precision of quantized weights. If weights are coarsely quantized, training might be infeasible since the required accuracy for a proper weight representation cannot be obtained. On the other hand, if too many bits are used (fine quantization), binary strings may be unfeasibly long, especially for large ANNs, and this makes the evolution process too slow or impractical. Another problem is the wide separation of network components from the same or neighboring (hidden) nodes in the binary string representation. Due to the crossover operation, the interactions among them might be lost, and hence the evolution speed is drastically reduced.
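The 2-10-1 topology and (-1, +1) normalization described above can be sketched as follows; the weight values and the Ppr/Tpr scaling ranges are placeholders, not the paper's trained parameters:

```python
import math
import random

def normalize(x, x_min, x_max):
    """Min-max scale a value into the (-1, +1) range used for training."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def mlp_2_10_1(ppr_n, tpr_n, weights):
    """Forward pass of the 2-10-1 FFBP topology: sigmoid hidden layer,
    linear output layer. weights = (W1 [10x2], b1 [10], w2 [10], b2)."""
    W1, b1, w2, b2 = weights
    hidden = [1.0 / (1.0 + math.exp(-(W1[j][0] * ppr_n
                                      + W1[j][1] * tpr_n + b1[j])))
              for j in range(10)]
    return sum(w2[j] * hidden[j] for j in range(10)) + b2

# Placeholder weights drawn from [-1, 1] (the paper's search range for
# weights and biases); in the paper these are tuned by PSO/GA instead.
rng = random.Random(0)
W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(10)]
b1 = [rng.uniform(-1, 1) for _ in range(10)]
w2 = [rng.uniform(-1, 1) for _ in range(10)]
b2 = rng.uniform(-1, 1)

# Illustrative Ppr/Tpr scaling ranges (assumed, not taken from the paper).
z_scaled = mlp_2_10_1(normalize(2.0, 0.2, 15.0),
                      normalize(1.5, 1.05, 3.0), (W1, b1, w2, b2))
```

Training then amounts to asking PSO or GA for the weight tuple that minimizes the MSE between `mlp_2_10_1` outputs and the normalized Z-factor data.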
Fig. 3. The predicted compressibility factor versus observed values for the directly obtained models over the training data.
Fig. 4. The estimated compressibility factor versus observed data for the iterative models over the training data.
In this study, the parameters of the GA optimization, such as population size, crossover probability (Cp), and mutation probability (Pmut), were set up. Likewise, parameters such as population size, initial inertia weight (wmax), final inertia weight (wmin), and the two learning factors (c1 and c2) were specified for PSO. For PSO–ANN, GA–ANN, and ANN, the network training does not terminate unless the MSE (mean square error) value stops changing over 100 epochs. Furthermore, the maximum searching range for both weights and biases was agreed to be [-1, 1]. The corresponding parameters for PSO and GA are represented in Table 8.

Fig. 3 displays the predicted compressibility factors versus observed values for the directly obtained models over the training data. According to this plot, we can see that Heidaryan et al. (2010b) outperforms the others, whereas Shokir et al. (2012) fails to function properly. Consequently, we made a comparison between the iterative models over the aforementioned training data in Fig. 4. For Hall and Iglesias-Silva (2007), we rescaled the plot relative to its initial plot to see its behavior in detail. Because Hall and Iglesias-Silva (2007) is a modified form of Hall and Yarborough (1973), they exhibit similar behaviors over the data. Fig. 5 illustrates the comparison between
Fig. 5. The predicted Z-factor plotted versus the observed Z-factor for the three intelligent systems.
Fig. 6. MSE and R² of the models over the training data. The abbreviations are: Papay (P), Beggs and Brill (BB), Shell Oil Company (S), Bahadori–Mokhatab–Towler (BMT), Heidaryan–Salarabadi–Moghadasi (HSM), Heidaryan–Moghadasi–Rahimi (HMR), Azizi–Behbahani–Isazadeh (ABI), Sanjari and Nemati Lay (SN), El-M. Shokir et al. (El), Hall and Yarborough (HY), Dranchuk–Purvis–Robinson (DPR), Dranchuk and Abou-Kassem (DAK), Hall and Iglesias-Silva (HI).
Fig. 7. The partial and total MSE and R² of all models for the testing data (partial values reported at Tr = 1.15, 1.35, 1.6, 2, and 2.2). The abbreviations are: Papay (P), Beggs and Brill (BB), Shell Oil Company (S), Bahadori–Mokhatab–Towler (BMT), Heidaryan–Salarabadi–Moghadasi (HSM), Heidaryan–Moghadasi–Rahimi (HMR), Azizi–Behbahani–Isazadeh (ABI), Sanjari and Nemati Lay (SN), El-M. Shokir et al. (El), Hall and Yarborough (HY), Dranchuk–Purvis–Robinson (DPR), Dranchuk and Abou-Kassem (DAK), Hall and Iglesias-Silva (HI).
and the output parameter indicates greater significance of the input variable on the magnitude of the dependent parameter:

RI_v = Σ_{j=1..nH} [ (|i_vj| / Σ_{k=1..nv} |i_kj|) O_j ]
       / Σ_{v=1..nv} Σ_{j=1..nH} [ (|i_vj| / Σ_{k=1..nv} |i_kj|) O_j ]    (26)

in which nH is the number of hidden neurons, nv is the number of input neurons, |i_vj| is the absolute value of the input connection weights, and O_j is the absolute value of the connection weights between the hidden and output layers. The relative importance of each independent factor in contributing to the prediction of the compressibility factor is shown in Table 11.

The computational times for the three ANN models were measured: the elapsed times for PSO–ANN, GA–ANN, and ANN were 205.1, 229.8, and 21.7 s, respectively. The lower computational time together with the smaller MSE reveals the superiority of PSO–ANN over GA–ANN. In addition, the reasons for the application of both

Table 10
The accuracy indicators and some statistic values for all challenged models over the testing data.

Fig. 8. MSE versus generation/epoch for the particle swarm, genetic algorithm, and neural network searching algorithms.
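Garson's weight-partitioning measure of Eq. (26) can be sketched as:

```python
def garson_importance(W1, w2):
    """Garson (1991) relative importance, Eq. (26).
    W1[j][i] are the input-to-hidden weights (nH x nv);
    w2[j] are the hidden-to-output weights (nH)."""
    nH, nv = len(W1), len(W1[0])
    # Share of each input in each hidden node, weighted by the
    # absolute hidden-to-output weight of that node.
    contrib = [[abs(W1[j][i]) / sum(abs(W1[j][k]) for k in range(nv)) * abs(w2[j])
                for i in range(nv)] for j in range(nH)]
    # Sum over hidden nodes, then normalize so the importances sum to 1.
    totals = [sum(contrib[j][i] for j in range(nH)) for i in range(nv)]
    s = sum(totals)
    return [t / s for t in totals]
```

With two inputs (Ppr, Tpr), the two returned fractions would correspond to the Table 11 entries; by construction they always sum to one.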
References

Garson, G.D., 1991. Interpreting neural-network connection weights. AI Expert 6, 47–51.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA.
Hall, K.R., Iglesias-Silva, G.A., 2007. Improved equations for the Standing–Katz tables. Hydrocarb. Process. 86 (4), 107–110.
Hall, K.R., Yarborough, L., June 18, 1973. A new EoS for Z-factor calculations. Oil Gas J., 82–90.
He, Q., Wang, L., 2007. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 20, 89–99.
Heidaryan, E., Moghadasi, J., Rahimi, M., 2010b. New correlations to predict natural gas viscosity and compressibility factor. J. Petrol. Sci. Eng. 73, 67–72.
Heidaryan, E., Salarabadi, A., Moghadasi, J., 2010a. A novel correlation approach for prediction of natural gas compressibility factor. J. Nat. Gas Chem. 19, 189–192.
Holland, J., 1975. Adaptation in Natural and Artificial Systems. Springer, Berlin.
Kamyab, M., Sampaio Jr., J.H.B., Qanbari, F., Eustes III, A.W., 2010. Using artificial neural networks to estimate the Z-factor for natural hydrocarbon gases. J. Petrol. Sci. Eng. 73, 248–257.
Kay, W.B., 1936. Density of hydrocarbon gases and vapor at high temperature and pressure. Ind. Eng. Chem. Res., 1014–1019.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. Proc. IEEE Conf. Neural Netw. 4, 1942–1948.
Kumar, N., 2004. Compressibility factor for natural and sour reservoir gases by correlations and cubic equations of state. MS thesis, Texas Tech University, Lubbock, Tex., USA, pp. 14–15, 23.
Kwok, N.M., Liu, D.K., Dissanayake, G., 2006. Evolutionary computing based mobile robot localization. Eng. Appl. Artif. Intell. 19, 857–868.
Lin, S.W., Lee, Z.J., Chen, S.C., 2008. Parameter determination of support vector machines and feature selection using simulated annealing approach. Appl. Soft Comput. 8, 1505–1512.
Liu, L., Liu, W., Cartes, D.A., 2008. Particle swarm optimization-based parameter identification applied to permanent magnet synchronous motors. Eng. Appl. Artif. Intell. 21, 1092–1100.
Londono Galindo, F.E., Archer, R.A., Blasingame, T.A., 2005. Correlations for hydrocarbon-gas viscosity and gas density — validation and correlation of behavior using a large-scale database. SPE Reserv. Eval. Eng. 8 (6), 561–572.
Lydersen, A.L., Greenkorn, R.A., Hougen, O.A., 1955. Generalized Thermodynamic Properties of Pure Fluids. Eng. Exp. Stn. Rep. 4. Univ. Wisconsin, Coll. Eng., Madison, Wis.
Michalewicz, Z., 1992. Genetic Algorithms + Data Structures = Evolution Programs, third ed. Springer-Verlag.
Normandin, A., Grandjean, P.A., Thibauld, J., 1993. PVT data analysis using neural network models. Ind. Eng. Chem. Res. 32, 970–975.
Paliwal, M.A., Kumar, U.A., 2011. Assessing the contribution of variables in feed forward neural network. Appl. Soft Comput. 11 (4), 3690–3696.
Papay, J., 1968. A Termelestechnologiai Parameterek Valtozasa a gazlelepk muvelese Soran. OGIL MUSZ, Tud, Kuzl, Budapest, pp. 267–273.
Peng, D., Robinson, D.B., 1976. A new two-constant equation of state. Ind. Eng. Chem. Fundam. 15, 59–64.
Peng, T., Zuo, W., He, F., 2008. SVM based adaptive learning method for text classification from positive and unlabeled documents. Knowl. Inform. Syst. 16, 281–301.
Piper, L.D., McCain Jr., W.D., Corredor, J.H., 1993. Compressibility Factors for Naturally Occurring Petroleum Gases. SPE 26668, Houston, TX, Oct. 3–6.
Pitzer, K.S., Curl, R.F., 1957a. The volumetric and thermodynamic properties of fluids. 3. Empirical equation for the 2nd virial coefficient. J. Am. Chem. Soc. 79, 2369.
Pitzer, K.S., Curl, R.F., 1957b. The Thermodynamic Properties of Fluids. Inst. Mech. Eng., London.
Pitzer, K.S., Lippmann, D.Z., Curl, R.F., Huggins Jr., C.M., Petersen, D.E., 1955. The volumetric and thermodynamic properties of fluids. 2. Compressibility factor, vapor pressure and entropy of vaporization. J. Am. Chem. Soc. 77, 3433–3440.
Poling, B.P., Prausnitz, J.M., O'Connell, J.P., 2001. Properties of Gases and Liquids, fifth ed. McGraw-Hill Companies, Inc., New York.
Qiao, W., Gao, Z., Harley, R.G., 2008. Robust neuro-identification of nonlinear plants in electric power systems with missing sensor measurements. Eng. Appl. Artif. Intell. 21, 604–618.
Roosta, A.K., Setoodeh, P., Jahanmiri, A.H., 2012. Artificial neural network modeling of surface tension for pure organic compounds. Ind. Eng. Chem. Res. 51 (1), 561–566.
Sanjari, E., Nemati Lay, E., 2012. An accurate empirical correlation for predicting natural gas compressibility factors. J. Nat. Gas Chem. 21, 184–188.
Shi, Y.H., Eberhart, R.C., 1999. Empirical study of particle swarm optimization. In: Proceedings of the Congress on Evolutionary Computation, pp. 1945–1950.
Shi, Y., Eberhart, R., 1998a. A modified particle swarm optimizer. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 69–73.
Shi, Y., Eberhart, R., 1998b. Parameter selection in particle swarm optimization. Lecture Notes in Computer Science, vol. 1447, pp. 591–600.
Shokir, Eissa M. El-M., El-Awad, Musaed N., Al-Quraishi, Adulhrahman A., Al-Mahdy, Osama A., 2012. Compressibility factor model of sweet, sour, and condensate gases using genetic programming. Chem. Eng. Res. Des. 90, 785–792.
Soave, G., 1972. Equilibrium constants from a modified Redlich–Kwong equation of state. Chem. Eng. Sci. 27, 1197–1203.
Standing, M.B., 1981. Volumetric and Phase Behavior of Oil Field Hydrocarbon Systems, ninth ed. Society of Petroleum Engineers of AIME, Dallas, TX.
Standing, M.B., Katz, D.L., 1942. Density of natural gases. Trans. AIME 146, 140–149.
Stewart, W.F., Burkhard, S.F., Voo, D., 1959. Prediction of pseudo critical parameters for mixtures. Paper Presented at the AIChE Meeting, Kansas City, MO.
Sutton, R.P., 1985. Compressibility Factors for High Molecular Weight Reservoir Gases. Paper SPE 14265 Presented at the SPE Annual Technical Meeting and Exhibition, Las Vegas, Sept. 22–25.
Sutton, R.P., 2007. Fundamental PVT calculations for associated and gas/condensate natural-gas systems. SPE Reserv. Eval. Eng. 10 (3), 270–284.
Topçu, I.B., Sarıdemir, M., 2008. Prediction of rubberized mortar properties using artificial neural network and fuzzy logic. J. Mater. Process. Technol. 199, 108–118.
Vasanth Kumar, K., 2009. Neural network prediction of interfacial tension at crystal/solution interface. Ind. Eng. Chem. Res. 48 (8), 4160–4164.
Yao, X., Liu, Y., 1997. A new evolutionary system for evolving artificial neural networks. IEEE Trans. Neural Netw. 8 (3), 694–713.
Zendehboudi, S., Ahmadi, M.A., James, L., Chatzis, I., 2012. Prediction of condensate-to-gas ratio for retrograde gas condensate reservoirs using artificial neural network with particle swarm optimization. Energy Fuel 26 (6), 3432–3447.