
Journal of Natural Gas Science and Engineering 14 (2013) 132–143


An intelligent approach for optimal prediction of gas deviation factor using particle swarm optimization and genetic algorithm

Ali Chamkalani a,b,*, Ali Mae'soumi c, Abdolhamid Sameni d

a Department of Petroleum Engineering, Petroleum University of Technology, Ahwaz, Iran
b Oil and Gas Engineering Department, Pars Oil and Gas Company (POGC), Asalouyeh, Iran
c Department of Chemical and Petroleum Engineering, Sharif University of Technology, Tehran, Iran
d Institute of Petroleum Engineering, University of Tehran, Tehran, Iran

Article history:
Received 5 March 2013
Received in revised form 28 May 2013
Accepted 12 June 2013
Available online 12 July 2013

Keywords:
Compressibility factor
Natural gas
Particle swarm optimization
Genetic algorithm
Artificial neural network

Abstract

The measurement of PVT properties of natural gas in gas pipelines, gas storage systems, and gas reservoirs requires accurate values of the compressibility factor. Although equations of state and empirical correlations have been utilized to estimate the compressibility factor, the demand for novel, more reliable, and easy-to-use models has encouraged researchers to introduce modern tools such as artificial intelligence systems.

This paper introduces particle swarm optimization (PSO) and the genetic algorithm (GA) as population-based stochastic search algorithms used to optimize the weights and biases of networks and to prevent trapping in local minima. Hence, in this paper, GA and PSO were used to minimize the neural network error function.

A database containing 6378 data points was employed to develop the models. The proposed models were compared with conventional correlations, and the model predictions showed good accuracy in both the training and testing stages. The results showed that artificial neural networks (ANNs) remarkably overcame the inadequacies of the empirical models, with PSO–ANN improving the performance significantly. Additionally, the regression analysis yielded an efficiency coefficient (R²) of 0.999, which can be considered very promising.

Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.

1. Introduction

Every chemical or petroleum engineer sometimes needs to predict the PVT properties of fluids. This analysis is carried out in laboratories, but at the early stages of field development it brings about challenges because this information is not easily available. This inspired engineers to develop empirical or semi-empirical correlations that require only the primary characteristics. One of the most widely used parameters is the compressibility factor, whose value plays a direct role in chemical and reservoir calculations. The necessity arises when no experimental data are available for the required composition, pressure, and temperature conditions (Kumar, 2004).

2. Natural gas compressibility factor

The gas compressibility factor (Z-factor), a parameter that measures the deviation of a real gas from the ideal gas, is a crucial parameter in almost all calculations for gases (Kamyab et al., 2010). Different approaches have been adopted for the prediction of the compressibility factor, including equations of state (EoS), empirical correlations, and artificial intelligence (AI).

A general principle, the corresponding states principle (CSP), threw light on the estimation of the Z-factor by asserting that suitably dimensionless properties of all substances follow universal variations of suitably dimensionless variables of state and other dimensionless quantities (Poling et al., 2001). The number of parameters characteristic of the substance determines the level of CSP. Two-parameter CSP uses just two characteristic properties, such as Tc and Pc, to make the state conditions dimensionless, and the dimensionless function may be the Z-factor. Although these corresponding-states correlations are very nearly exact for simple fluids, systematic deviations are observed for more complex fluids. Improvements of the CSP led to the introduction of a third parameter characteristic of molecular structure. Some researchers introduced Zc, the compressibility factor at the critical condition (Lydersen et al., 1955), and the most popular parameter, the acentric factor ω (Pitzer et al., 1955, 1957a,b). It should be noted, however, that most of these functions, whether correlations or analytical expressions, have been derived with

* Corresponding author. Department of Petroleum Engineering, Petroleum University of Technology, Ahwaz, Iran. Tel.: +98 9375634541.
E-mail address: ali.chamkalani@gmail.com (A. Chamkalani).

1875-5100/$ – see front matter Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.jngse.2013.06.002
only Tpr and Ppr, and both types of functions are approximate. Therefore, despite the elimination of the acentric factor, we develop our model on the basis of reduced pressure and temperature. Besides, since we are dealing with natural gas, no analytical relation exists for the acentric factor, so it is calculated from correlations generated from Tc and Pc. Furthermore, as the correlations we judged have all been obtained from these two parameters, we agreed to set them as our model's inputs.

3. Pseudo critical properties models

For the pseudo-critical properties of natural gas, several correlations have been presented to calculate the pseudo-critical temperature and pressure via mixing rules, including Kay (1936), Stewart et al. (1959), Sutton (1985), Corredor et al. (1992), Piper et al. (1993) and Elsharkawy (2004). Along with these works, some correlations have been proposed to predict the pseudo-critical parameters from the gas specific gravity, such as Standing (1981), Elsharkawy et al. (2001), Elsharkawy and Elkamel (2001), Londono Galindo et al. (2005), and Sutton (2007).

4. Compressibility factor model

4.1. Equation of states (EoS)

Equations of state always play a great role in the prediction of the PVT properties of mixtures. The most used EoS are Peng and Robinson (1976) and Soave–Redlich–Kwong (Soave, 1972), while the application of modified and new EoSs is common nowadays.

4.2. Empirical correlations

Numerous computations and the need for predetermined parameters resulted in the utilization of empirical correlations, which facilitated the computations and seemed to be more user-friendly models. In the following, we have tried to chronicle these developments.

4.2.1. Papay (1968)

Papay (1968) proposed a simple relationship for the calculation of the compressibility factor:

Z = 1 − 0.3648758 (Ppr/Tpr) + 0.04188423 (Ppr/Tpr)^2    (1)

4.2.2. Hall and Yarborough (1973)

Hall and Yarborough (1973) presented an equation of state that reproduced the Standing and Katz Z-factor chart. They based their model on the Starling–Carnahan EoS (Carnahan and Starling, 1969) and fitted their expressions using data taken from Standing and Katz's Z-factor chart.

Z = (1 + y + y^2 − y^3)/(1 − y)^3 − (14.54/Tpr − 8.23/Tpr^2 + 3.39/Tpr^3.5) y + (90.7/Tpr − 242.2/Tpr^2 + 42.4/Tpr^3) y^(1.18 + 2.82/Tpr)    (2)

where

y = 0.06125 Ppr exp[−1.2 (1 − 1/Tpr)^2] / (Tpr Z)    (3)

4.2.3. Beggs and Brill (1973)

Beggs and Brill (1973) introduced an equation generated from the Standing and Katz Z-factor chart:

Z = A + (1 − A)/e^B + C Ppr^D    (4)

where

A = 1.39 (Tpr − 0.92)^0.5 − 0.36 Tpr − 0.101
B = (0.62 − 0.23 Tpr) Ppr + [0.066/(Tpr − 0.86) − 0.037] Ppr^2 + 0.32 Ppr^6 / 10^(9(Tpr − 1))
C = 0.132 − 0.32 log(Tpr)
D = 10^(0.3106 − 0.49 Tpr + 0.1824 Tpr^2)    (5)

This method is not suggested for reduced temperature (Tpr) values less than 0.92.

4.2.4. Dranchuk et al. (1974)

Dranchuk et al. (1974) benefitted from the Standing–Katz Z-factor chart and developed a correlation based on the Benedict–Webb–Rubin EoS:

Z = 1 + (A1 + A2/Tpr + A3/Tpr^3) ρpr + (A4 + A5/Tpr) ρpr^2 + (A5 A6/Tpr) ρpr^5 + (A7/Tpr^3) ρpr^2 (1 + A8 ρpr^2) exp(−A8 ρpr^2)    (6)

where

ρpr = 0.27 Ppr / (Z Tpr)    (7)

The corresponding coefficients are listed in Table 1.

Table 1
The corresponding coefficients of Dranchuk et al. (1974).

Coefficient    Tuned coefficient
A1             0.31506237
A2             −1.0467099
A3             −0.57832729
A4             0.53530771
A5             −0.61232032
A6             −0.10488813
A7             0.68157001
A8             0.68446549

4.2.5. Dranchuk and Abou-Kassem (1975)

Dranchuk and Abou-Kassem (1975) proposed an eleven-constant equation of state for calculating gas compressibility factors:

Z = 1 + (A1 + A2/Tpr + A3/Tpr^3 + A4/Tpr^4 + A5/Tpr^5) ρpr + (A6 + A7/Tpr + A8/Tpr^2) ρpr^2 − A9 (A7/Tpr + A8/Tpr^2) ρpr^5 + A10 (1 + A11 ρpr^2) (ρpr^2/Tpr^3) exp(−A11 ρpr^2)    (8)
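Papay's Eq. (1) is explicit in Z, while the Hall–Yarborough pair, Eqs. (2)–(3), is implicit: y depends on Z and Z on y, so the pair must be iterated. A minimal Python sketch of both (an illustration, not the authors' implementation; the constants are transcribed from the equations as printed above, so the published sources should be checked before quantitative use):

```python
import math

def z_papay(ppr, tpr):
    """Papay (1968), Eq. (1): explicit in Z."""
    x = ppr / tpr
    return 1.0 - 0.3648758 * x + 0.04188423 * x ** 2

def z_hall_yarborough(ppr, tpr, tol=1e-10, max_iter=200):
    """Hall and Yarborough (1973), Eqs. (2)-(3), by successive substitution."""
    z = 1.0  # ideal-gas starting guess
    for _ in range(max_iter):
        # Eq. (3): reduced-density parameter y from the current Z estimate
        y = 0.06125 * ppr * math.exp(-1.2 * (1.0 - 1.0 / tpr) ** 2) / (tpr * z)
        # Eq. (2): hard-sphere term plus the two temperature polynomials
        z_new = ((1.0 + y + y ** 2 - y ** 3) / (1.0 - y) ** 3
                 - (14.54 / tpr - 8.23 / tpr ** 2 + 3.39 / tpr ** 3.5) * y
                 + (90.7 / tpr - 242.2 / tpr ** 2 + 42.4 / tpr ** 3)
                 * y ** (1.18 + 2.82 / tpr))
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

For typical sweet-gas conditions the substitution loop settles within a few tens of iterations; a Newton step on y is a common alternative to plain substitution.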
where

ρpr = 0.27 Ppr / (Z Tpr)    (9)

Table 2 records the tuned coefficients of the Dranchuk and Abou-Kassem (1975) correlation.

Table 2
The tuned coefficients of Dranchuk and Abou-Kassem (1975).

Coefficient    Tuned coefficient
A1             0.3265
A2             −1.0700
A3             −0.5339
A4             0.01569
A5             −0.05165
A6             0.5475
A7             −0.7361
A8             0.1844
A9             0.1056
A10            0.6134
A11            0.7210

4.2.6. Shell Oil Company

Kumar (2004) referenced the Shell Oil Company model for the estimation of the Z-factor:

Z = A + B Ppr + (1 − A) exp(−C) − D (Ppr/10)^4    (10)

where

A = −0.101 − 0.36 Tpr + 1.3868 (Tpr − 0.919)^0.5
B = 0.021 + 0.04275/(Tpr − 0.65)
C = Ppr (E + F Ppr + G Ppr^4)
D = 0.122 exp[−11.3 (Tpr − 1)]
E = 0.6222 − 0.224 Tpr
F = 0.0657/(Tpr − 0.85) − 0.037
G = 0.32 exp[−19.53 (Tpr − 1)]    (11)

4.2.7. Hall and Iglesias-Silva (2007)

Hall and Yarborough (1973) used an augmented hard-sphere equation to calculate the Z-factor of natural gas mixtures. They combined the Z-factor of a hard sphere, represented by the Carnahan–Starling equation, with a correction term that accounted for the deficiencies of the hard-sphere model in order to represent the real fluid and the attractive part of the Z-factor. Hall and Yarborough (1973) pointed out that the method was not recommended for application if the pseudo-reduced temperature was less than one. Therefore, Hall and Iglesias-Silva (2007) decided to add a correction term to the Hall and Yarborough (1973) equation that applies at low reduced temperatures while not influencing the behavior of the original Hall–Yarborough equation at higher reduced temperatures. They used 890 Z-factor values from the Standing–Katz chart to find the temperature functions that showed the best fit to the low-temperature isotherms.

Z = (1 + y + y^2 − y^3)/(1 − y)^3 − (14.54/Tpr − 8.23/Tpr^2 + 3.39/Tpr^3.5) y + (90.7/Tpr − 242.2/Tpr^2 + 42.4/Tpr^3) y^(1.18 + 2.82/Tpr) + k1 y exp[−k2 (y − 0.421)^2] + k3 y^10 exp[−69279 (y − 0.374)^4]    (12)

where

y = 0.06125 Ppr exp[−1.2 (1 − 1/Tpr)^2] / (Tpr Z);
k1 = 1.87 + 0.001 Tpr^63; k2 = 171.8 Tpr^13;
k3 = 3525 [1 − exp(−219/Tpr^63)]    (13)

4.2.8. Bahadori et al. (2007)

Bahadori et al. (2007) introduced a correlation that relates the compressibility factor to the reduced temperature and pressure. They tuned the correlation coefficients using data in the ranges 0.2 < Ppr < 16 and 1.05 < Tpr < 2.4.

Z = a + b Ppr + c Ppr^2 + d Ppr^3    (14)

where

a = Aa + Ba Tpr + Ca Tpr^2 + Da Tpr^3
b = Ab + Bb Tpr + Cb Tpr^2 + Db Tpr^3
c = Ac + Bc Tpr + Cc Tpr^2 + Dc Tpr^3
d = Ad + Bd Tpr + Cd Tpr^2 + Dd Tpr^3    (15)

The coefficients are gathered in Table 3.

Table 3
The coefficients of Bahadori et al. (2007).

Coefficient    Tuned coefficient    Coefficient    Tuned coefficient
Aa             0.969469             Ac             0.0184810
Ba             1.349238             Bc             0.0523405
Ca             1.443959             Cc             0.050688
Da             0.36860              Dc             0.010870
Ab             0.107783             Ad             0.000584
Bb             0.127013             Bd             0.002146
Cb             0.100828             Cd             0.0020961
Db             0.012319             Dd             0.000459

4.2.9. Heidaryan et al. (2010a)

Multiple regression analysis was carried out by Heidaryan et al. (2010a) to develop a correlation benefiting from 1220 data points in the ranges 0.2 ≤ Ppr ≤ 15 and 1.2 ≤ Tpr ≤ 3.

Z = [A1 + A2 ln(Ppr) + A3 (ln Ppr)^2 + A4 (ln Ppr)^3 + A5/Tpr + A6/Tpr^2] / [1 + A7 ln(Ppr) + A8 (ln Ppr)^2 + A9/Tpr + A10/Tpr^2]    (16)

Table 4 represents the coefficients of the proposed method.

4.2.10. Heidaryan et al. (2010b)

Later attempts by Heidaryan et al. (2010b) led to an empirical correlation whose data bank was based on their previous work (Heidaryan et al., 2010a). The corresponding coefficients are given in Table 5.
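Because ρpr in Eq. (9) contains Z, Eq. (8) must also be solved iteratively. A sketch of that loop using the Table 2 coefficients (signs as in the published Dranchuk–Abou-Kassem correlation); the fixed-point update with simple averaging is an illustrative choice, not necessarily the authors' solver:

```python
import math

# Dranchuk-Abou-Kassem (1975) coefficients, Table 2 (published signs)
A = [0.3265, -1.0700, -0.5339, 0.01569, -0.05165, 0.5475,
     -0.7361, 0.1844, 0.1056, 0.6134, 0.7210]

def z_dak(ppr, tpr, tol=1e-10, max_iter=500):
    a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11 = A
    z = 1.0  # ideal-gas starting guess
    for _ in range(max_iter):
        r = 0.27 * ppr / (z * tpr)  # Eq. (9): reduced density from current Z
        # Eq. (8): eleven-constant equation of state
        z_new = (1.0
                 + (a1 + a2/tpr + a3/tpr**3 + a4/tpr**4 + a5/tpr**5) * r
                 + (a6 + a7/tpr + a8/tpr**2) * r**2
                 - a9 * (a7/tpr + a8/tpr**2) * r**5
                 + a10 * (1.0 + a11*r**2) * (r**2/tpr**3) * math.exp(-a11*r**2))
        if abs(z_new - z) < tol:
            return z_new
        z = 0.5 * (z + z_new)  # averaging damps oscillation of the iterates
    return z
```

A Newton–Raphson iteration on the equivalent residual is the more common production choice; the damped substitution above is simply easier to follow.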
Table 4
The coefficients of the Heidaryan et al. (2010a) method.

Coefficient    Tuned coefficient
A1             1.115323727
A2             0.07903952088760
A3             0.01588138
A4             0.008861345
A5             2.16190792611599
A6             1.157531187
A7             0.05367780720737
A8             0.0146557
A9             1.80997374923296
A10            0.954860388

Z = ln{[A1 + A3 ln(Ppr) + A5/Tpr + A7 (ln Ppr)^2 + A9/Tpr^2 + (A11/Tpr) ln(Ppr)] / [1 + A2 ln(Ppr) + A4/Tpr + A6 (ln Ppr)^2 + A8/Tpr^2 + (A10/Tpr) ln(Ppr)]}    (17)

They obtained the model in the ranges 0.2 < Ppr < 30 and 1.0 < Tpr < 3.0.

Table 5
The corresponding coefficients of Heidaryan et al. (2010b).

Coefficient    0.2 ≤ Ppr ≤ 3        3 < Ppr ≤ 15
A1             2.827793 × 10^+00    3.252838 × 10^+00
A2             4.688191 × 10^−01    1.306424 × 10^−01
A3             1.262288 × 10^+00    6.449194 × 10^−01
A4             1.536524 × 10^+00    1.518028 × 10^+00
A5             4.535045 × 10^+00    5.391019 × 10^+00
A6             6.895104 × 10^−02    1.379588 × 10^−02
A7             1.903869 × 10^−01    6.600633 × 10^−02
A8             6.200089 × 10^−01    6.120783 × 10^−01
A9             1.838479 × 10^+00    2.317431 × 10^+00
A10            4.052367 × 10^−01    1.632223 × 10^−01
A11            1.073574 × 10^+00    5.660595 × 10^−01

4.2.11. Azizi et al. (2010)

Azizi et al. (2010) utilized curve-fitting software to produce a correlation with 20 coefficients. This correlation was derived based on 3038 points from the Standing and Katz Z-factor chart. They presented ABI to estimate the sweet-gas compressibility factor over the ranges 0.2 < Ppr < 11 and 1.1 < Tpr < 2.

Z = A + (B + C)/(D + E)    (18)

where

A = a Tpr^2.16 + b Ppr^1.028 + c Ppr^1.58 Tpr^−2.1 + d (ln Tpr)^−0.5
B = e + f Tpr^2.4 + g Ppr^1.56 + h Ppr^0.124 Tpr^3.033
C = i (ln Tpr)^−1.28 + j (ln Tpr)^1.37 + k ln(Ppr) + l (ln Ppr)^2 + m ln(Ppr) ln(Tpr)
D = 1 + n Tpr^5.55 + o Ppr^0.68 Tpr^0.33
E = p (ln Tpr)^1.18 + q (ln Tpr)^2.1 + r ln(Ppr) + s (ln Ppr)^2 + t ln(Ppr) ln(Tpr)    (19)

The tuned coefficients in the above equations are given in Table 6.

Table 6
The tuned coefficients of the Azizi et al. (2010) equations.

Coefficient    Tuned coefficient         Coefficient    Tuned coefficient
a              0.0373142485385592        k              24,449,114,791.1531
b              0.0140807151485369        l              19,357,955,749.3274
c              0.0163263245387186        m              126,354,717,916.607
d              0.0307776478819813        n              623,705,678.385784
e              13,843,575,480.943800     o              17,997,651,104.3330
f              16,799,138,540.763700     p              151,211,393,445.064
g              1,624,178,942.6497600     q              139,474,437,997.172
h              13,702,270,281.086900     r              24,233,012,984.0950
i              41,645,509.896474600      s              18,938,047,327.5205
j              237,249,967,625.01300     t              141,401,620,722.689

4.2.12. Sanjari and Nemati Lay (2012)

By considering 5844 experimental data points for the Z-factors of natural gas mixtures, Sanjari and Nemati Lay (2012) proposed an empirical method within the ranges 1.01 < Tpr < 3.0 and 0.01 < Ppr < 15.0. It divides the pressure region into two sections, resulting in two sets of coefficients, one for 0.01 < Ppr < 3.0 and one for 3.0 < Ppr < 15.

Z = 1 + A1 Ppr + A2 Ppr^2 + A3 Ppr^A4 / Tpr^A5 + A6 Ppr^(A4+1) / Tpr^A7 + A8 Ppr^(A4+2) / Tpr^(A7+1)    (20)

The tuned coefficients for this equation are shown in Table 7.

Table 7
The tuned coefficients of the Sanjari and Nemati Lay (2012) equation.

Coefficient    0.01 < Ppr < 3.0    3.0 < Ppr < 15
A1             0.007698            0.015642
A2             0.003839            0.000701
A3             0.467212            2.341511
A4             1.018801            0.657903
A5             3.805723            8.902112
A6             0.087361            1.136000
A7             7.138305            3.543614
A8             0.083440            0.134041

4.2.13. Shokir et al. (2012)

Shokir et al. (2012) presented a model for estimating the Z-factors of sweet gases, sour gases, and gas condensates using genetic programming (GP). In addition, they built models for the pseudo-critical pressure and temperature as well, which may assist researchers in applying them instead of traditional methods.

Z = A + B + C + D + E    (21)

where

A = 2.679562 (2 Tpr − Ppr − 1) / [(Ppr^2 + Tpr^3)/Ppr]
B = 7.686825 (Tpr Ppr + Ppr^2) / (Tpr Ppr + 2 Tpr^2 + Tpr^3)
C = 0.000624 (Tpr^2 Ppr − Tpr Ppr^2 + Tpr Ppr^3 + 2 Tpr Ppr − 2 Ppr^2 + 2 Ppr^3)
D = 3.067747 (Tpr − Ppr) / (Ppr^2 + Tpr + Ppr)
E = 0.068059/(Tpr Ppr) + 0.139489 Tpr^2 + 0.081873 Ppr^2 − 0.041098 Tpr/Ppr + 8.152325 Ppr/Tpr − 1.63028 Ppr + 0.24287 Tpr − 2.64988    (22)
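Both Heidaryan et al. (2010b) (Table 5) and Sanjari and Nemati Lay (2012) (Table 7) carry two coefficient sets switched on the reduced-pressure region. A small sketch of that dispatch for Eq. (20), using the Table 7 magnitudes as transcribed; the published table should be consulted before any quantitative use:

```python
# Sanjari and Nemati Lay (2012) coefficients, Table 7, keyed by region
SANJARI = {
    "low":  [0.007698, 0.003839, 0.467212, 1.018801,
             3.805723, 0.087361, 7.138305, 0.083440],   # 0.01 < Ppr < 3.0
    "high": [0.015642, 0.000701, 2.341511, 0.657903,
             8.902112, 1.136000, 3.543614, 0.134041],   # 3.0 <= Ppr < 15
}

def sanjari_coefficients(ppr):
    """Select the Table 7 column that applies at this reduced pressure."""
    if not 0.01 < ppr < 15.0:
        raise ValueError("Ppr outside the correlation's stated range")
    return SANJARI["low"] if ppr < 3.0 else SANJARI["high"]

def z_sanjari(ppr, tpr):
    """Eq. (20) evaluated with the region-appropriate coefficient set."""
    a1, a2, a3, a4, a5, a6, a7, a8 = sanjari_coefficients(ppr)
    return (1.0 + a1 * ppr + a2 * ppr ** 2
            + a3 * ppr ** a4 / tpr ** a5
            + a6 * ppr ** (a4 + 1.0) / tpr ** a7
            + a8 * ppr ** (a4 + 2.0) / tpr ** (a7 + 1.0))
```

The explicit range check mirrors the papers' practice of stating validity ranges alongside the tuned constants.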
4.3. Artificial intelligences

As time goes by, intelligent systems have been used more and more, and have become accepted as powerful tools in petroleum and chemical modeling (Chamkalani et al., 2013; Zendehboudi et al., 2012; Roosta et al., 2012; Ahmadi et al., 2013; Vasanth Kumar, 2009). The application of neural networks goes back to Normandin et al. (1993), who calculated the compressibility factor of pure gases. Later, Kamyab et al. (2010) addressed the deficiencies of Normandin et al. (1993) and suggested a more accurate ANN model, similar to the previous one but with two inputs.

4.3.1. Artificial neural network

Artificial neural networks are functional abstractions of the biological neural structures of the central nervous system (Chamkalani et al., 2013; Topçu and Sarıdemir, 2008). For modeling purposes, the commonly used feed-forward ANN architecture, namely the MLP, may be employed. The MLP network approximates the nonlinear input–output relationships defined by yk = fk(x, wk), k = 1, 2, …, K, where wk is the vector defining the network weights. The MLP network usually consists of three layers, described as input, hidden, and output layers, comprising N, L, and K processing nodes, respectively. Each node in the input (hidden) layer is linked to all the nodes in the hidden (output) layer using weighted connections. The MLP architecture also houses a bias node (with a fixed output of +1) in its input and hidden layers; the bias nodes are also connected to all the nodes in the subsequent layer. The usage of bias nodes helps the MLP-approximated function to be positioned anywhere in the N-dimensional input space; in their absence, the function is forced to pass through the origin of the N-dimensional space. The number N of nodes in the input layer is equal to the number of operating variables, whereas the number of output nodes (K) equals the number of outputs. However, the number of hidden nodes (L) is an adjustable parameter whose magnitude is determined by issues such as the desired approximation and generalization performance of the network model. In order for the MLP network to accurately approximate the nonlinear relationship existing between the process inputs and the outputs, it needs to be trained in a manner such that a pre-specified error function is minimized. In essence, the MLP training procedure aims at obtaining an optimal weight set {wk} that minimizes a pre-specified error function; the MSE (mean square error) is a commonly employed error function in MLP training.

Since the parameter settings for an MLP are often designed quite differently due to the unique characteristics of the data, trial and error seems to be the most common way to identify the optimum values of the learning rate, momentum, hidden neurons, and learning cycles, but it does not guarantee optimal performance.

4.3.2. Particle swarm optimization

Particle swarm optimization (Kennedy and Eberhart, 1995) is an emerging population-based meta-heuristic that simulates social behavior, such as birds flocking to a promising position, to achieve precise objectives in a multidimensional space. It has been applied successfully to a wide variety of highly complicated optimization problems (Lin et al., 2008) as well as various real-world problems (He and Wang, 2007; Kwok et al., 2006; Liu et al., 2008; Peng et al., 2008; Qiao et al., 2008). Like evolutionary algorithms, PSO performs searches using a population (called a swarm) of individuals (called particles) that are updated from iteration to iteration.

The size of the population is denoted psize. To discover the optimal solution, each particle changes its search direction according to two factors: its own best previous experience (pbest) and the best experience of all other members (gbest). Shi and Eberhart (1998a) termed pbest the cognitive part and gbest the social part.

Each particle represents a candidate position (i.e., solution) as a point in a D-dimensional space, and its status is characterized by its position and velocity. The D-dimensional position of particle i at iteration t can be represented as x_i^t = {x_i1^t, x_i2^t, …, x_iD^t}. Likewise, the velocity (i.e., distance change) of particle i at iteration t, which is also a D-dimensional vector, can be described as v_i^t = {v_i1^t, v_i2^t, …, v_iD^t}. In the simple version of PSO, there was no actual control over the previous velocity of the particles. In later versions of PSO, this shortcoming was addressed by incorporating a new parameter, the inertia weight, introduced by Shi and Eberhart (1998b). Let p_i^t = {p_i1^t, p_i2^t, …, p_iD^t} represent the best solution that particle i has obtained until iteration t, and p_g^t = {p_g1^t, p_g2^t, …, p_gD^t} denote the best solution obtained among all p_i^t in the population at iteration t. In order to search for the optimal solution, each particle changes its velocity based on the cognitive and social parts using Eq. (23):

V_id^t = w·V_id^(t−1) + c1 r1 (P_id^t − x_id^t) + c2 r2 (P_gd^t − x_id^t), d = 1, 2, …, D    (23)

where c1 indicates the cognitive learning factor, c2 indicates the social learning factor, the inertia weight (w) is used to slowly reduce the velocity of the particles to keep the swarm under control, and r1 and r2 are random numbers uniformly distributed in U(0, 1).

The inertia weight is linearly decreased (Shi and Eberhart, 1999) according to the following equation:

w_t = w_max − [(w_max − w_min)/t_max]·t    (24)

where w_min and w_max are the lower and upper bounds of the inertia weight, respectively, t is the current iteration number, and t_max is the maximum number of iterations.

It is possible to clamp the velocity vectors by specifying upper and lower bounds on vmax to avoid too-rapid movement of particles in the search space. Hence, the velocities of all the particles are limited to the range [−vmax, vmax] (Arumugam et al., 2008). Each particle then moves to a new potential solution based on the following equation:

X_id^(t+1) = X_id^t + V_id^t, d = 1, 2, …, D    (25)

The basic process of the PSO algorithm is as follows:

Step 1: (Initialization) Randomly generate initial particles.
Step 2: (Fitness) Measure the fitness of each particle in the population.
Step 3: (Update) Compute the velocity of each particle with Eq. (23).
Step 4: (Construction) For each particle, move to the next position according to Eq. (25).
Step 5: (Termination) Stop the algorithm if the termination criterion is satisfied; return to Step 2 otherwise.

The process of PSO is finished when the termination condition is satisfied. Fig. 1 depicts the flowchart of PSO–ANN.
4.3.3. Genetic algorithm

Genetic algorithm (GA) is a global, heuristic, stochastic optimization technique based on evolution theory and genetic principles, developed by Holland (1975). Goldberg and Michalewicz discussed the mechanism and robustness of GA in solving nonlinear optimization problems (Goldberg, 1989; Michalewicz, 1992). The algorithm begins with a randomly generated population consisting of chromosomes and applies three kinds of genetic operators:
selection, crossover and mutation operators to find the optimal solutions. The selection operator chooses chromosomes from the current population based on the fitness values of the individuals. The crossover operator combines the features of two parent chromosomes to form two similar offspring by swapping corresponding segments of the parents (Goldberg, 1989). The mutation operator creates new chromosomes by randomly changing the genes of existing chromosomes. GA can explore the entire design space through these genetic manipulations; it does not easily fall into a local minimum or maximum. Therefore, GA is an aggressive search technique that quickly converges to the optimal solution in a large solution domain. Fig. 2 presents the flowchart of the genetic algorithm hybridized with the artificial neural network (GA–ANN).

Fig. 1. The flowchart of PSO–ANN.

Fig. 2. The flowchart of ANN optimized with genetic algorithm (GA–ANN).
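The selection–crossover–mutation loop just described can be sketched as a minimal real-coded GA, again minimizing a toy quadratic in place of the network error. Tournament selection, blend crossover, and Gaussian mutation are illustrative operator choices, not necessarily the authors':

```python
import random

def ga(f, dim, pop_size=40, generations=100, cx_prob=0.7, mut_prob=0.2):
    rng = random.Random(0)  # fixed seed for repeatability
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = [min(pop, key=f)[:]]            # elitism: keep the best
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=f)
            p2 = min(rng.sample(pop, 3), key=f)
            child = p1[:]
            if rng.random() < cx_prob:            # blend crossover
                a = rng.random()
                child = [a * u + (1.0 - a) * v for u, v in zip(p1, p2)]
            if rng.random() < mut_prob:           # gaussian mutation of one gene
                d = rng.randrange(dim)
                child[d] += rng.gauss(0.0, 0.1)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=f)

# Toy objective standing in for the ANN error function (minimum at 0.3)
best = ga(lambda p: sum((pi - 0.3) ** 2 for pi in p), dim=3)
```

A GA–ANN hybrid would encode the network's weights and biases as the chromosome and use the training MSE as the fitness function, as Fig. 2 outlines.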
5. Results and discussion

The study was implemented by applying 6378 data points digitized from the Standing and Katz chart (Standing and Katz, 1942) in the range 0 ≤ Ppr ≤ 30 and 1 ≤ Tpr ≤ 3. As described in the sections above, because the available functions are approximately derived and the judged correlations depend only on pseudo-reduced pressure and temperature, the inputs are Ppr and Tpr and the output is the compressibility factor, Z. Among the advantages of using these data are the globally accepted chart and its general applicability to all gases and liquids with regard to the CSP.

The data were divided into two parts: training, with 5009 data points, and testing, with 314, 314, 314, 314, and 113 data points relating to pseudo-reduced temperatures of 1.15, 1.35, 1.6, 2.2, and 2, respectively.

In previous studies, each author has taken only a few correlations into consideration and, often, many have not mathematically represented the models. In this study, we made an attempt to gather almost all previous works and to conduct a thorough and comprehensive study of them. With respect to this attribute, the study could in some respects be considered a review, because it collects old and recent works on compressibility factor prediction.

It is a fact that the weights and biases of a neural network are selected randomly; although a training algorithm helps model performance, there is no guarantee of performance stability, and several runs may yield completely different results. A powerful optimization is an alternative that significantly improves the performance and brings performance stability to the networks by avoiding trapping in local minima. Therefore, we employed particle swarm optimization (PSO) and the genetic algorithm (GA) as optimizers for the weights and biases of the networks.

The judged models were divided into three groups: direct correlations, iterative correlations, and intelligent systems. The direct correlations include Papay (1968), Beggs and Brill (1973), Shell Oil Company (Kumar, 2004), Bahadori et al. (2007), Heidaryan et al. (2010a,b), Azizi et al. (2010), Sanjari and Nemati Lay (2012), and Shokir et al. (2012), whereas the iterative correlations employ Hall and Yarborough (1973), Dranchuk et al. (1974), Dranchuk and Abou-Kassem (1975) and Hall and Iglesias-Silva (2007). In the last group, ANN, GA–ANN, and PSO–ANN constitute the intelligent systems.

In the intelligent session of our study, we investigated the application of ANNs optimized through PSO and GA. The problem was formulated as an optimization problem: to find a set of weights and biases for the ANN that minimized the difference between the ANN predictions and the target values in the training set of data. Different sets of weights and biases were generated randomly and evolved/updated using the GA/PSO. Then, the accuracy of the ANN model predictions was analyzed to evaluate the fitness of each set of weights and biases in competition with other
Table 8
The selected parameters of PSO and GA.

PSO: population size = 40; initial inertia weight wmin = 0.4; final inertia weight wmax = 0.9; learning factor c1 = 1.4; learning factor c2 = 2.
GA: population size = 40; crossover probability Cp = 0.7; mutation probability Pmut = 0.2.

generated sets. This methodology is very robust in finding an optimum set of weights and biases for the ANN models. The presented models overcame the shortcomings of the standard neural network by increasing the probability of locating the global optimum (minimum) of the error function.

A three-layer FFBP (feed-forward back-propagation) network was used in the PSO–ANN, GA–ANN, and unaccompanied ANN models. The input layer has two nodes corresponding to the pseudo-reduced pressure and temperature, and the output layer has one node, which is the compressibility factor. The sigmoid activation function was used as the transfer function from the input layer to the hidden layer, and the linear function was taken as the activation function in the last layer. The program was developed in Matlab, in which the Marquardt–Levenberg algorithm was used for training the unaccompanied ANN, and the mean square error (MSE) function was selected as the evaluation index of network performance. All the input and output data were normalized within a uniform range (−1, +1) to ensure that they receive equal attention during the training process.

Our various runs suggested that the presence of ten neurons in the hidden layer yielded an effective model which, consequently, gave a better compressibility factor. Based on this, we propose the topology 2–10–1, i.e., one hidden layer with ten hidden neurons. In order to create an equal competition for our compared models, we settled on running the models up to 100 iterations, so that the stopping criterion was the end of the iterations.

Regarding the methodological difference between GA and PSO: unlike GA, PSO has no complicated evolutionary operators such as crossover, selection, and mutation, and it is highly dependent on stochastic processes. In addition, the evolution process in GA suffers from the permutation problem and noisy fitness evaluation (Yao and Liu, 1997; Belew et al., 1991), meaning that two identical ANNs may have different representations. This makes the evolution process quite inefficient in producing fit offspring. Most GAs use binary string representations of connection weights and architectures. This creates many problems, one of which is the representation precision of quantized weights. If weights are coarsely quantized, training might be infeasible, since the required accuracy for a proper weight representation cannot be obtained. On the other hand, if too many bits are used (fine quantization), binary strings may be unfeasibly long, especially for large ANNs, and this makes the evolution process too slow or impractical. Another problem is the wide separation, in the binary string representation, of network components belonging to the same or neighboring (hidden) nodes. Due to the crossover operation, the interactions among them might be lost, and hence the evolution speed is drastically reduced.

Fig. 3. The predicted compressibility factor versus observed values for directly-obtained models over the training data.
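The 2–10–1 topology and (−1, +1) normalization described above can be sketched as a forward pass; the random weights here merely stand in for the PSO/GA-optimized values:

```python
import math
import random

def normalize(v, lo, hi):
    """Map a raw value onto the (-1, +1) range used during training."""
    return 2.0 * (v - lo) / (hi - lo) - 1.0

def mlp_2_10_1(inputs, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: 2 inputs -> 10 sigmoid hidden nodes -> 1 linear output."""
    hidden = [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs))
                                      + b)))
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

rng = random.Random(0)
w_h = [[rng.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(10)]
b_h = [rng.uniform(-1.0, 1.0) for _ in range(10)]
w_o = [rng.uniform(-1.0, 1.0) for _ in range(10)]
# Normalized (Ppr, Tpr) inputs over the study's data ranges
x = [normalize(2.0, 0.0, 30.0), normalize(1.5, 1.0, 3.0)]
z_pred = mlp_2_10_1(x, w_h, b_h, w_o, 0.0)
```

Training (whether by Marquardt–Levenberg, PSO, or GA) amounts to choosing w_h, b_h, w_o, and the output bias to minimize the MSE over the 5009 training points.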
A. Chamkalani et al. / Journal of Natural Gas Science and Engineering 14 (2013) 132e143 139

Fig. 4. This shows the estimated compressibility factor versus observed data for iterative models over training data.

In this study, the parameters of the GA optimization, such as population size, crossover probability (Cp), and mutation probability (Pmut), were set up. Likewise, the population size, initial inertia weight (wmin), final inertia weight (wmax), and the two learning factors (c1 and c2) were specified for PSO. For PSO–ANN, GA–ANN, and ANN, network training terminates only once the MSE (mean square error) value has not changed over 100 epochs. Furthermore, the search range for both weights and biases was agreed to be [−1, 1]. The corresponding parameters for PSO and GA are presented in Table 8.

Fig. 3 displays the predicted compressibility factors versus observed values for the directly-obtained models over the training data. According to this plot, Heidaryan et al. (2010b) outperforms the others, whereas Shokir et al. (2012) fails to function properly. Consequently, we compared the iterative models over the same training data in Fig. 4. For Hall and Iglesias-Silva (2007), we rescaled the plot to examine its behavior in detail. Because Hall and Iglesias-Silva (2007) is a modified form of Hall and Yarborough (1973), the two exhibit similar behaviors over the data. Fig. 5 illustrates the comparison between
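The PSO settings named above (inertia weight, learning factors c1 and c2, the [−1, 1] search range, and termination once the MSE stops changing for 100 epochs) can be sketched as follows. This is not the authors' implementation: the function name, swarm size, and parameter values are illustrative, and a toy quadratic stands in for the network's MSE.

```python
import random

def pso_minimize(loss, dim, n_particles=20, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-1.0, 1.0), patience=100, max_iter=2000, seed=0):
    """Minimal PSO; stops when the best loss has not improved for `patience` iterations."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    stall = 0
    for _ in range(max_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive (c1) + social (c2) terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep weights/biases inside the agreed search range
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        if pbest_val[g] < gbest_val - 1e-12:
            gbest, gbest_val, stall = pbest[g][:], pbest_val[g], 0
        else:
            stall += 1
            if stall >= patience:
                break
    return gbest, gbest_val

# Toy objective standing in for the network MSE; minimum at (0.3, -0.5)
best, best_val = pso_minimize(lambda v: (v[0] - 0.3) ** 2 + (v[1] + 0.5) ** 2, dim=2)
```

In the hybrid scheme, `loss` would evaluate the network's MSE for a candidate weight-and-bias vector, so the swarm searches weight space instead of backpropagation alone.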

Fig. 5. Predicted Z-factor versus observed Z-factor for the three intelligent systems.

intelligent systems, in which the ANN was optimized once by PSO and once again by GA, and both models were compared to the unaccompanied ANN. It is apparent from the figure that PSO exhibits better performance than the other configurations. Furthermore, the unassociated ANN did not function properly in comparison with the optimized ANNs. The simulation performance of all models was evaluated on the basis of the mean square error (MSE) and the efficiency coefficient R2. The corresponding values of these accuracy indicators are presented in Table 9. The histogram and error-statistics plots for all explored models are also visualized in Fig. 6. For lack of space, the models' names were abbreviated on the horizontal axes; for a better inspection of these accuracy indicators, the R2 and MSE plots were rescaled.

Table 9
The MSE and R2 of training data for all models.(a)

Model   MSE(b)      R2(b)
PSO     0.000334    0.996875
GA      0.000823    0.992192
ANN     0.001625    0.98456
El      3.134588    0.007471
SN      0.044261    0.792879
ABI     0.016315    0.739865
HMR     0.00059     0.994641
HSM     0.003808    0.962373
BMT     0.037503    0.698814
HI      0.2197      0.031211
S       0.019235    0.854685
DAK     0.020739    0.863411
DPR     0.058757    0.770552
HY      0.148796    0.079564
BB      0.796497    0.05064
P       0.341102    0.507898

(a) The abbreviations are: Papay (P), Beggs and Brill (BB), Shell Oil Company (S), Bahadori–Mokhatab–Towler (BMT), Heidaryan–Salarabadi–Moghadasi (HSM), Heidaryan–Moghadasi–Rahimi (HMR), Azizi–Behbahani–Isazadeh (ABI), Sanjari and Nemati Lay (SN), El-M. Shokir et al. (El), Hall and Yarborough (HY), Dranchuk–Purvis–Robinson (DPR), Dranchuk and Abou-Kassem (DAK), Hall and Iglesias-Silva (HI).
(b) MSE: mean square error; R2: correlation coefficient.

Accordingly, PSO–ANN, GA–ANN, ANN, and Heidaryan et al. (2010b) showed satisfactory performances. Remarkably, Shokir et al. (2012) behaved adversely and undesirably, mostly at pseudo-reduced pressures above 15. In further investigations, we analyzed the performance of the competing and proposed models by observing their behaviors; however, because presenting high-quality plots would require a large number of figures, we chose to place them in the supplementary file. As is evident from those figures, the trends of the optimized ANNs' predictions and the experimental data are very close, indicating the models' ability to predict the compressibility factor.

In order to survey the generalization ability and accuracy of all the considered models, the MSE and R2 were measured and their histograms are illustrated in Fig. 7. In the left-hand panels, the separate R2 and MSE for each Tpr were explored, whereas in the right-hand panels the overall R2 and MSE of the tested models were compared. Because some MSEs differ only slightly, both the partial and total MSE plots were rescaled and magnified. We also carried out a detailed statistical procedure to examine both the accuracy of the models and the statistics of their relative error percentages in Table 10. The superiority of the optimized neural networks' predictions can be ascertained by comparing these indicators.

Fig. 8 shows the search results of the particle swarm, the genetic algorithm, and the unassociated neural network used to develop the neural models' weights and biases. It is evident that the MSE for PSO–ANN is lower than that of the other two, and that it reaches stability with a lower MSE. It is worth noting that there is no guarantee that the unassociated ANN avoids trapping in local minima.

Attempts have been made in the past to interpret the contributions of explanatory variables in a prediction problem using the weights of the neural network (Paliwal and Kumar, 2011). In this research, the predictor variables were ranked by contribution using the methodology proposed by Garson (1991) for partitioning the neuronal connection weights (Zendehboudi et al., 2012). The relative importance (RI) of the input parameters is computed from the input and output connection weights according
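The two accuracy indicators used throughout can be computed as follows. Treating R2 as the squared Pearson correlation follows the table footnote ("correlation coefficient") and is an assumption of this sketch; the Z-factor values are made up for illustration.

```python
def mse(obs, pred):
    """Mean square error between observed and predicted values."""
    return sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    """Squared Pearson correlation between observed and predicted values."""
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

# Illustrative Z-factor values (not from the paper's database)
z_obs  = [0.85, 0.90, 0.95, 1.00, 1.05]
z_pred = [0.86, 0.89, 0.96, 0.99, 1.06]
```

A model such as PSO–ANN in Table 9 would show a small `mse` and an `r_squared` close to 1 on such data.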

Fig. 6. MSE and R2 of the models over the training data. The abbreviations are: Papay (P), Beggs and Brill (BB), Shell Oil Company (S), Bahadori–Mokhatab–Towler (BMT), Heidaryan–Salarabadi–Moghadasi (HSM), Heidaryan–Moghadasi–Rahimi (HMR), Azizi–Behbahani–Isazadeh (ABI), Sanjari and Nemati Lay (SN), El-M. Shokir et al. (El), Hall and Yarborough (HY), Dranchuk–Purvis–Robinson (DPR), Dranchuk and Abou-Kassem (DAK), Hall and Iglesias-Silva (HI).

[Figure: four panels showing the partial MSE, total MSE, partial R2, and total R2 for all models, with the partial curves grouped by Tr = 1.15, 1.35, 1.6, 2, and 2.2.]

Fig. 7. The partial and total MSE and R2 of all models for testing data. The abbreviations are: Papay (P), Beggs and Brill (BB), Shell Oil Company (S), Bahadori–Mokhatab–Towler (BMT), Heidaryan–Salarabadi–Moghadasi (HSM), Heidaryan–Moghadasi–Rahimi (HMR), Azizi–Behbahani–Isazadeh (ABI), Sanjari and Nemati Lay (SN), El-M. Shokir et al. (El), Hall and Yarborough (HY), Dranchuk–Purvis–Robinson (DPR), Dranchuk and Abou-Kassem (DAK), Hall and Iglesias-Silva (HI).

to Eq. (26). A stronger relationship between any input variable and the output parameter indicates greater significance of the input variable on the magnitude of the dependent parameter:

\[
\mathrm{RI}_v = \frac{\sum_{j=1}^{n_H}\left(\dfrac{i_{vj}}{\sum_{k=1}^{n_v} i_{kj}}\right) O_j}{\sum_{i=1}^{n_v}\left[\sum_{j=1}^{n_H}\left(\dfrac{i_{ij}}{\sum_{k=1}^{n_v} i_{kj}}\right) O_j\right]} \tag{26}
\]

in which n_H is the number of hidden neurons, n_v is the number of input neurons, i_vj is the absolute value of the input connection weights, and O_j is the absolute value of the connection weights between the hidden and output layers. The relative importance of each independent factor contributing to the prediction of the compressibility factor is shown in Table 11.

The computational times of the three ANN models were also measured: the elapsed times for PSO–ANN, GA–ANN, and ANN were 205.1, 229.8, and 21.7 s, respectively. The lower computational time together with the smaller MSE reveals the superiority of PSO–ANN over GA–ANN. In addition, the reasons for applying both
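Garson's weight-partitioning scheme of Eq. (26) can be sketched as follows. The layout of the weight arguments (one row of input–hidden weights per input variable, one hidden–output weight per hidden neuron) and the function name are assumptions of this illustration, not the authors' code.

```python
def garson_importance(w_ih, w_ho):
    """Garson's algorithm: relative importance of each input variable.

    w_ih[v][j] -- connection weight from input v to hidden neuron j
    w_ho[j]    -- connection weight from hidden neuron j to the output
    Absolute values are taken, as in Eq. (26).
    """
    n_v = len(w_ih)   # number of input neurons
    n_h = len(w_ho)   # number of hidden neurons
    # share of hidden neuron j attributable to input v, weighted by |O_j|
    contrib = [[abs(w_ih[v][j]) / sum(abs(w_ih[k][j]) for k in range(n_v)) * abs(w_ho[j])
                for j in range(n_h)] for v in range(n_v)]
    raw = [sum(row) for row in contrib]
    total = sum(raw)
    return [r / total for r in raw]

# Two inputs (e.g., Ppr and Tpr) and three hidden neurons; the weights are made up
ri = garson_importance(w_ih=[[0.8, -0.5, 0.9], [0.2, 0.5, -0.3]],
                       w_ho=[0.7, -0.4, 0.6])
```

The importances sum to one by construction, which is why Table 11 reports complementary contributions for Ppr and Tpr.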

Table 10
The accuracy indicators and some statistical values for all challenged models over testing data.(a)

Model   MSE         R2         Min(b)      Max(b)      Mean(b)    Std. dev.(b)   Variance(b)
PSO     0.0001128   0.999075   −6.07914    18.84993    0.126348   1.297164       1.682634
GA      0.0002398   0.997930   −4.5315     11.44639    0.0121     1.67133        2.793345
ANN     0.0010057   0.992014   −17.1567    10.92203    0.700087   2.885247       8.324653
El      5.5870853   0.228399   −1014.75    284.3998    28.9543    146.0139       21320.06
SN      0.0171401   0.901046   −2.34659    45.43087    5.814962   10.07757       101.5573
ABI     0.0147106   0.753521   −34.3377    241.2986    6.12618    19.0423        362.609
HMR     0.0003542   0.997258   −3.3503     7.963439    0.187522   1.473664       2.171685
HSM     0.0062668   0.945647   −151.348    20.793      1.18964    6.376508       40.65986
BMT     0.0053189   0.951486   −24.5238    30.23641    0.096237   6.707902       44.99594
HI      0.1303476   0.024161   −179.621    111.7207    54.8678    35.52681       1262.155
S       0.0006196   0.995044   −3.75753    6.470678    0.866853   2.271278       5.158704
DAK     0.0102819   0.932210   −151.984    20.13376    3.30283    21.3324        455.0712
DPR     0.057721    0.792733   −172.849    119.0102    14.9393    45.80468       2098.069
HY      0.1311211   0.051457   −119.727    110.0872    58.5513    31.62764       1000.308
BB      0.0012491   0.989862   −6.9772     6.514059    1.1641     2.521157       6.356233
P       0.2610544   0.628089   −82.0987    133.199     30.0126    43.77563       1916.305

(a) The abbreviations are: Papay (P), Beggs and Brill (BB), Shell Oil Company (S), Bahadori–Mokhatab–Towler (BMT), Heidaryan–Salarabadi–Moghadasi (HSM), Heidaryan–Moghadasi–Rahimi (HMR), Azizi–Behbahani–Isazadeh (ABI), Sanjari and Nemati Lay (SN), El-M. Shokir et al. (El), Hall and Yarborough (HY), Dranchuk–Purvis–Robinson (DPR), Dranchuk and Abou-Kassem (DAK), Hall and Iglesias-Silva (HI).
(b) Min, Max, Mean, standard deviation, and variance of the relative error percent = (Predicted − Observed)/Observed × 100.
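The relative-error statistics reported in Table 10 follow the footnote's definition. A small sketch with made-up Z-factor values follows; whether the paper uses the population or sample standard deviation is not stated, so the choice of `pstdev` here is an assumption.

```python
import statistics

def relative_error_percent(observed, predicted):
    """Relative error percent = (Predicted - Observed) / Observed * 100."""
    return [(p - o) / o * 100.0 for o, p in zip(observed, predicted)]

# Illustrative Z-factor values (not from the paper's database)
obs_z  = [0.80, 0.90, 1.00, 1.10]
pred_z = [0.84, 0.90, 0.95, 1.10]
errs = relative_error_percent(obs_z, pred_z)
summary = {
    "min": min(errs),
    "max": max(errs),
    "mean": statistics.mean(errs),
    "stdev": statistics.pstdev(errs),  # population std. dev.; an assumption
}
```

A mean near zero with a small standard deviation, as for PSO–ANN in Table 10, indicates predictions scattered tightly around the observed values with no systematic bias.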

Fig. 8. MSE versus generation/epoch for the search algorithms of the particle swarm, the genetic algorithm, and the neural network.

Table 11
The relative importance of input variables on compressibility factor for the optimized and unassociated ANNs.

Model     Ppr        Tpr
PSO–ANN   0.743742   0.256258
GA–ANN    0.714493   0.285507
ANN       0.697417   0.302583

optimizations are, first, to introduce either method for compressibility-factor prediction and, second, to compare them in order to determine which one is more effective and has the more reliable learning system.

In summary, our performance and accuracy analyses elucidated that the ANNs, whether optimized or not, have a reliable ability to predict the compressibility factor. We should also note that the proposed PSO–ANN technique achieved, in general, the minimum mean square error and the least computational time, and thus offers better robustness and stability than the other artificial neural networks.

6. Conclusion

In this study, we developed several intelligent models by adapting methodologies that use GA or PSO to search for optimal feed-forward network weights and biases. PSO–ANN, as well as GA–ANN and the unassociated ANN, were compared to conventional correlations. With respect to accuracy ranking, PSO–ANN outperformed the other models, whether intelligent or correlation-based, acquiring an R2 of 0.999075 and an MSE of 0.0001128. Furthermore, we carried out a relative-importance investigation for all three intelligent models, which unanimously predicted a contribution degree of around 0.7 for Ppr and, consequently, 0.3 for Tpr. A further survey was conducted on the computational time of the intelligent models, in which PSO–ANN took less time than GA–ANN. Overall, we suggest PSO–ANN as the more reliable tool for prediction of the compressibility factor.

Acknowledgment

The authors express their sincere thanks to Professor Thomas A. Blasingame and Professor K.R. Hall, Texas A&M University, for their guidance and help throughout this study. Special thanks are also extended to Dr. Zendehboudi, University of Waterloo, for his valuable help, comments, and guidance, without which we would not have been able to present this manuscript. Moreover, we would like to acknowledge Dr. Mohammad Reza Kamyab, Curtin University, for his support in this work.

Appendix A. Supplementary material

Supplementary material associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.jngse.2013.06.002.

References

Ahmadi, M.A., Zendehboudi, S., Lohi, A., Elkamel, A., Chatzis, I., 2013. Reservoir permeability prediction by neural networks combined with hybrid genetic algorithm and particle swarm optimization. Geophys. Prospect. 61, 582–598.
Arumugam, M.S., Rao, M.V.C., Chandramohan, A., 2008. A new and improved version of particle swarm optimization algorithm with global–local best parameters. Knowl. Inform. Syst. 16, 331–357.
Azizi, N., Behbahani, R., Isazadeh, M.A., 2010. An efficient correlation for calculating compressibility factor of natural gases. J. Nat. Gas Chem. 19, 642–645.
Bahadori, A., Mokhatab, S., Towler, B.F., 2007. Rapidly estimating natural gas compressibility factor. J. Nat. Gas Chem. 16 (4), 349–353.
Beggs, H.D., Brill, J.P., 1973. A study of two-phase flow in inclined pipes. Trans. AIME 255, 607.
Belew, R.K., McInerney, J., Schraudolph, N.N., 1991. Evolving Networks: Using Genetic Algorithm with Connectionist Learning. Technical report CS90-174 revised. Computer Sci. Eng. Dept., Univ. California, San Diego.
Carnahan, N.F., Starling, K.E., 1969. Equation of state for nonattracting rigid spheres. J. Chem. Phys. 51, 635–636.
Chamkalani, A., Arabloo, M., Chamkalani, R., Zargari, M.H., Dehestani-Ardakani, M.R., Farzam, M., 2013. Soft computing method for prediction of CO2 corrosion in flow lines based on neural network approach. Chem. Eng. Commun. 200, 731–747.
Corredor, J.H., Piper, L.D., McCain, W.D., Jr., 1992. Compressibility factors for naturally occurring petroleum gases. Paper SPE 24864 Presented at the SPE Annual Technical Meeting and Exhibition, Washington D.C., Oct. 4–7.
Dranchuk, P.M., Purvis, R.A., Robinson, D.B., 1974. Computer calculation of natural gas compressibility factors using the Standing and Katz correlations. Inst. of Petroleum Technical Institute Series, No. IP74-008, pp. 1–13.
Dranchuk, P.M., Abou-Kassem, J.H., 1975. Calculation of Z factors for natural gases using equations of state. J. Can. Petrol. Technol. 14 (3), 34–36.
Elsharkawy, A.M., Elkamel, A., 2001. The accuracy of predicting compressibility factor for sour natural gases. Petrol. Sci. Technol. 19 (5&6), 711–731.
Elsharkawy, A.M., Yousef, S.Kh., Hashem, S., Alikhan, A.A., 2001. Compressibility factor for gas condensates. Energy Fuels 15, 807–816.
Elsharkawy, A.M., 2004. Efficient methods for calculations of compressibility, density and viscosity of natural gases. Fluid Phase Equilib. 218, 1–13.
Garson, G.D., 1991. Interpreting neural-network connection weights. AI Expert 6, 47–51.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA.
Hall, K.R., Iglesias-Silva, G.A., 2007. Improved equations for the Standing–Katz tables. Hydrocarb. Process. 86 (4), 107–110.
Hall, K.R., Yarborough, L., June 18, 1973. A new EoS for Z-factor calculations. Oil Gas J., 82–90.
He, Q., Wang, L., 2007. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 20, 89–99.
Heidaryan, E., Moghadasi, J., Rahimi, M., 2010b. New correlations to predict natural gas viscosity and compressibility factor. J. Petrol. Sci. Eng. 73, 67–72.
Heidaryan, E., Salarabadi, A., Moghadasi, J., 2010a. A novel correlation approach for prediction of natural gas compressibility factor. J. Nat. Gas Chem. 19, 189–192.
Holland, J., 1975. Adaptation in Natural and Artificial Systems. Springer, Berlin.
Kamyab, M., Sampaio Jr., J.H.B., Qanbari, F., Eustes III, A.W., 2010. Using artificial neural networks to estimate the z-factor for natural hydrocarbon gases. J. Petrol. Sci. Eng. 73, 248–257.
Kay, W.B., 1936. Density of hydrocarbon gases and vapor at high temperature and pressure. Ind. Eng. Chem. Res., 1014–1019.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. Proc. IEEE Conf. Neural Netw. 4, 1942–1948.
Kumar, N., 2004. Compressibility factor for natural and sour reservoir gases by correlations and cubic equations of state. MS thesis, Texas Tech University, Lubbock, Tex, USA, pp. 14–15, 23.
Kwok, N.M., Liu, D.K., Dissanayake, G., 2006. Evolutionary computing based mobile robot localization. Eng. Appl. Artif. Intell. 19, 857–868.
Lin, S.W., Lee, Z.J., Chen, S.C., 2008. Parameter determination of support vector machines and feature selection using simulated annealing approach. Appl. Soft Comput. 8, 1505–1512.
Liu, L., Liu, W., Cartes, D.A., 2008. Particle swarm optimization-based parameter identification applied to permanent magnet synchronous motors. Eng. Appl. Artif. Intell. 21, 1092–1100.
Londono Galindo, F.E., Archer, R.A., Blasingame, T.A., 2005. Correlations for hydrocarbon-gas viscosity and gas density-validation and correlation of behavior using a large-scale database. SPE Reserv. Eval. Eng. 8 (6), 561–572.
Lydersen, A.L., Greenkorn, R.A., Hougen, O.A., 1955. Generalized Thermodynamic Properties of Pure Fluids. Eng. Exp. Stn. Rep. 4. Univ. Wisconsin, Coll. Eng., Madison, Wis.
Michalewicz, Z., 1992. Genetic Algorithms + Data Structures = Evolution Programs, third ed. Springer-Verlag.
Normandin, A., Grandjean, P.A., Thibauld, J., 1993. PVT data analysis using neural networks models. Ind. Eng. Chem. Res. 32, 970–975.
Paliwal, M.A., Kumar, U.A., 2011. Assessing the contribution of variables in feed forward neural network. Appl. Soft Comput. 11 (4), 3690–3696.
Papay, J., 1968. A Termelestechnologiai Parameterek Valtozasa a gazlelepk muvelese Soran. OGIL MUSZ, Tud, Kuzl, Budapest, pp. 267–273.
Peng, D., Robinson, D.B., 1976. New two constant equation of state. Ind. Eng. Chem. Fundam. 15, 59–64.
Peng, T., Zuo, W., He, F., 2008. SVM based adaptive learning method for text classification from positive and unlabeled documents. Knowl. Inform. Syst. 16, 281–301.
Piper, L.D., McCain, Jr., Corredor, J.H., 1993. Compressibility Factors for Naturally Occurring Petroleum Gases. SPE 26668, Houston, TX, Oct. 3–6.
Pitzer, K.S., Curl, R.F., 1957a. The volumetric and thermodynamic properties of fluids. 3. Empirical equation for the 2nd virial coefficient. J. Am. Chem. Soc. 79, 2369.
Pitzer, K.S., Curl, R.F., 1957b. The Thermodynamic Properties of Fluids. Inst. Mech. Eng., London.
Pitzer, K.S., Lippmann, D.Z., Curl, R.F., Huggins Jr., C.M., Petersen, D.E., 1955. The volumetric and thermodynamic properties of fluids. 2. Compressibility factor, vapor pressure and entropy of vaporization. J. Am. Chem. Soc. 77, 3433–3440.
Poling, B.P., Prausnitz, J.M., O'Connell, J.P., 2001. Properties of Gases and Liquids, fifth ed. McGraw-Hill Companies, Inc., New York.
Qiao, W., Gao, Z., Harley, R.G., 2008. Robust neuro-identification of nonlinear plants in electric power systems with missing sensor measurements. Eng. Appl. Artif. Intell. 21, 604–618.
Roosta, A.K., Setoodeh, P., Jahanmiri, A.H., 2012. Artificial neural network modeling of surface tension for pure organic compounds. Ind. Eng. Chem. Res. 51 (1), 561–566.
Sanjari, E., Nemati Lay, E., 2012. An accurate empirical correlation for predicting natural gas compressibility factors. J. Nat. Gas Chem. 21, 184–188.
Shi, Y.H., Eberhart, R.C., 1999. Empirical study of particle swarm optimization. In: Proceedings of the Congress on Evolutionary Computation, pp. 1945–1950.
Shi, Y., Eberhart, R., 1998a. A modified particle swarm optimizer. In: Proceeding of the IEEE Congress on Evolutionary Computation, pp. 69–73.
Shi, Y., Eberhart, R., 1998b. Parameter selection in particle swarm optimization. Lecture Notes in Computer Science, vol. 1447, pp. 591–600.
Shokir, Eissa M.El-M., El-Awad, Musaed N., Al-Quraishi, Abdulrahman A., Al-Mahdy, Osama A., 2012. Compressibility factor model of sweet, sour, and condensate gases using genetic programming. Chem. Eng. Res. Des. 90, 785–792.
Soave, G., 1972. Equilibrium constants from a modified Redlich–Kwong equation of state. Chem. Eng. Sci. 27, 1197–1203.
Standing, M.B., 1981. Volumetric and Phase Behavior of Oil Field Hydrocarbon Systems, ninth ed. Society of Petroleum Engineers of AIME, Dallas, TX.
Standing, M.B., Katz, D.L., 1942. Density of natural gases. Trans. AIME 146, 140–149.
Stewart, W.F., Burkhard, S.F., Voo, D., 1959. Prediction of pseudo critical parameters for mixtures. Paper Presented at the AIChE Meeting, Kansas City, MO.
Sutton, R.P., 1985. Compressibility Factors for High Molecular Weight Reservoir Gases. Paper SPE 14265 Presented at the SPE Annual Technical Meeting and Exhibition, Las Vegas, Sept. 22–25.
Sutton, R.P., 2007. Fundamental PVT calculations for associated and gas/condensate natural-gas systems. SPE Reserv. Eval. Eng. 10 (3), 270–284.
Topçu, I.B., Sarıdemir, M., 2008. Prediction of rubberized mortar properties using artificial neural network and fuzzy logic. J. Mater. Process. Technol. 199, 108–118.
Vasanth Kumar, K., 2009. Neural network prediction of interfacial tension at crystal/solution interface. Ind. Eng. Chem. Res. 48 (8), 4160–4164.
Yao, X., Liu, Y., 1997. A new evolutionary system for evolving artificial neural networks. IEEE Trans. Neural Netw. 8 (3), 694–713.
Zendehboudi, S., Ahmadi, M.A., James, L., Chatzis, I., 2012. Prediction of condensate-to-gas ratio for retrograde gas condensate reservoirs using artificial neural network with particle swarm optimization. Energy Fuels 26 (6), 3432–3447.
