Article history: Received 12 August 2012; Received in revised form 9 September 2012; Accepted 17 September 2012; Available online.

Keywords: Artificial neural network; Genetic algorithm; Fischer–Tropsch synthesis; Catalyst

Abstract

Fischer–Tropsch synthesis is a collection of chemical reactions that converts a mixture of carbon monoxide and hydrogen into hydrocarbons. In this study, the application of FTS is examined over a wide range of synthesis gas conversions. Artificial neural networks (ANNs) were used to predict the molar percentages of CH4, CO2 and CO in the Fischer–Tropsch processing of natural gas, and a genetic algorithm (GA) was applied to obtain the optimum values of the operational parameters. The input was a three-dimensional vector comprising reaction time, operating pressure and temperature; the outputs were the molar percentages of CH4, CO2 and CO. The network topology and decision parameters were determined by trial and error, and acceptable correlation coefficients were obtained (R2 = 0.94 for CH4, R2 = 0.93 for CO2 and R2 = 0.96 for CO). Sensitivity analysis showed that, relative to the other operational parameters, operation time has the most significant influence on the molar percentage of CH4, the desired product. The results confirm that GA-ANN can be used effectively as a powerful estimation technique for FTS.

© 2012 Elsevier B.V. All rights reserved.
1. Introduction

1.1. Fischer–Tropsch synthesis

The Fischer–Tropsch synthesis (FTS) is a reaction for converting a mixture of hydrogen and carbon monoxide, derived from coal, methane or biomass, into liquid fuels (Malek et al., 2011; Cano et al., 2011):

(2n + 1)H2 + nCO → CnH2n+2 + nH2O   (1)

Fischer–Tropsch technology is applied to convert synthesis gas into long-chain hydrocarbons such as C5+ (Borg et al., 2008). The process is generally operated in the temperature range 150–300 °C. The reaction is very sensitive to temperature: increasing the temperature leads to faster reactions and higher conversion rates, but also tends to favor methane production. For this reason, the temperature is usually maintained in the low to middle part of that range. The usual pressure range for the process is from one to several tens of atmospheres. Increasing the pressure leads to higher conversion rates and also favors the formation of long-chain alkanes. Even higher pressures would be favorable, but they can cause catalyst deactivation via coke formation, and the benefits may not justify the additional cost of high-pressure equipment. The process typically occurs in the presence of a cobalt catalyst supported on refractory oxides such as alumina, silica or titania (Karaca et al., 2011). Depending on the FTS feedstock and the type of desired product, other catalysts, for example cobalt or iron, are used in industrial processes (Smit et al., 2009). The activity and selectivity of the catalyst used in Fischer–Tropsch reactions depend on parameters such as the nature and structure of the catalyst support, the nature of the active metal, metal dispersion, metal loading and the catalyst preparation method (Malek et al., 2011).

Different metals are used as FTS catalysts, but nowadays, for commercial reasons, Co and Fe catalysts are the most common (Cano et al., 2011). For synthesis gases with low hydrogen content, such as gas derived from coal, iron is more useful than other catalysts because it promotes the water–gas shift reaction. Cobalt-based catalysts, in turn, have a large number of active sites, which strongly affects reaction rates.

* Corresponding author. Tel.: +98 711 2303071; fax: +98 711 6287294. E-mail address: rahimpor@shirazu.ac.ir (M.R. Rahimpour).
1875-5100/$ – see front matter © 2012 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.jngse.2012.09.001
H. Adib et al. / Journal of Natural Gas Science and Engineering 10 (2013) 14–24

One of the metals most widely used as a support in FTS is alumina, since it has favorable mechanical properties; however, an alumina-supported catalyst has limited reducibility due to the strong interaction between the support and the cobalt oxides (Iglesia et al., 1993; Schanke et al., 1995).

Because it produces different types of gaseous, liquid and solid-phase products, the FTS is very complicated to model phenomenologically (Sharma et al., 2011). In recent years, new modeling methods, such as the artificial neural network (ANN) based approach, have opened a wide view for developing empirical models. The relations between catalytic performance (such as catalyst selectivity and reactant conversion) and catalyst composition can be expressed effectively through the properties of an artificial neural network. An ANN can be used as a predictor for any process or multi-objective system; for non-linear systems in particular, ANNs are very effective compared with other typical modeling methods. In addition to the ANN, the genetic algorithm is another helpful optimization method for finding the best inputs and outputs of a system.

1.2. Artificial neural network approach

In recent years, there has been increasing interest in studying the mechanisms and structure of the brain. This interest led to the development of artificial neural network (ANN) computational models for solving complex problems (Jamialahmadi and Javadpour, 2000). Numerous advances have been made in developing artificial intelligent systems inspired by biological neural networks, fuzzy systems and combinations of them. ANNs are used to solve a variety of problems in the fields of optimization, perception, prediction and so on. Owing to their better performance, remarkable pattern tolerance, notable adaptive capability and independence of an explicit system model, artificial neural networks have been replacing traditional methods in many applications. To construct an applicable ANN model for a process, the user must specify the structure and the learning algorithm. As shown in Fig. 1, the ANN structure is divided into three segments: input layer, hidden layer, and output layer (Gyurova et al., 2010).

An ANN collects its knowledge by detecting patterns and relationships in data and learns through experience, not from programming (Myshkin et al., 1997; Schooling et al., 1999). The major characteristic of ANNs is the simulation of complex and non-linear problems by employing a number of non-linear processing elements, the nodes or neurons (Yuhong and Wenxin, 2009). The modeling process is similar to a "black-box" operation without any external information; it is therefore very complicated to recognize any logical relationships within the dataset used in the ANN (Gyurova and Friedrich, 2011; Koolivand et al., 2011). In the overall structure of an ANN (Fig. 1), there are several layers of units (neurons), namely the input layer, hidden layer(s), and output layer. The numbers of neurons in the input and output layers are equal to the numbers of input and output variables. The hidden section may contain one or more layers, depending on the type of problem (Myshkin et al., 1997; Haykin, 1999; Fausett, 1994; Tchaban et al., 1998). As shown in Fig. 2, an artificial neuron is basically a simple calculator that works as follows. First, the inputs of each neuron (a1, a2, a3) are multiplied by the corresponding weights assigned to them (wj1, wj2, wj3); for each model, the weights represent the model fitting parameters. The combined input cj is then formed as the summation of the products of inputs and weights:

cj = a1 wj1 + a2 wj2 + a3 wj3   (2)

There are lower and upper bounds for a neuron's input (aj). To ensure that a neuron's input does not exceed its maximum or minimum activation value, the combined input is passed through an activation function (commonly a sigmoid function) that "compresses" it into the required activation range (between 0 and 1). The sigmoid is an exponential function that is mostly used as the activation function for the hidden layer in neural network models. The outputs then transfer to the neurons in the subsequent layers to which that neuron is connected. In summary, an artificial neuron is a non-linear function of its inputs, and an ANN is a superposition of simple non-linear functions (Pai et al., 2008; Zeng, 1998; Genel, 2004).

In an ANN, training means continuously adjusting the connection weights until they reach values that allow the network to produce outputs sufficiently close to the actual desired outputs. The accuracy of a developed model depends on its weight values; when the optimum weights are achieved, the weights and bias values encode the network's state of knowledge (Genel et al., 2003).

According to their method of data processing, artificial neural networks are divided into two classes, feed-forward and recurrent. A feed-forward neural network is one in which the connections between units do not form a directed cycle, while in a recurrent neural network the connections do form a directed cycle. Based on the method of learning, some ANNs employ supervised training, while others use unsupervised or self-organizing methods. From a theoretical point of view, supervised and unsupervised learning differ only in the causal structure of the model. In supervised learning, the model defines the effect of one set of observations, called inputs, on another set of observations, called outputs. In unsupervised learning, all the observations are assumed to be caused by latent variables; that is, the observations are assumed to be at the end of the causal chain (Poggio and Girosi, 1990a, 1990b).

Fig. 1. Structure of artificial neural network.
Fig. 2. Basic structure of a network with neuron functions.
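The neuron computation described above can be sketched in a few lines of Python. This is only an illustration of Eq. (2) followed by a sigmoid squashing step; the function name and the slope coefficient a are assumptions for the sketch, not from the paper.

```python
import math

def neuron_output(inputs, weights, a=1.0):
    """Combined input c_j = a1*w_j1 + a2*w_j2 + ... (Eq. (2)),
    squashed by a sigmoid 1/(1 + exp(-a*c)) into the (0, 1) range."""
    c = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-a * c))

# A zero combined input sits exactly at the middle of the activation range.
print(neuron_output([0.0, 0.0, 0.0], [0.2, -0.5, 1.3]))  # 0.5
```

A full network is then just this computation repeated layer by layer, which is the "superposition of simple non-linear functions" mentioned above.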
2. Experiments

2.2. Reactor

The catalyst analysis was carried out in a 1 L stainless steel autoclave reactor. The autoclave reactor is convenient to use at medium to high pressures up to 150 bar and at temperatures up to 350 °C. The input and output gas lines were made of 316 stainless steel tubing. The reactor was equipped with an electrical heater, a magnetic stirring motor and a magnetic stirrer; the stirring motor was driven by air flow. The temperature of the reactor was controlled by an F2M thermocouple (Scientific 240 temperature programmer, Hewlett Packard).

3. Results and discussion

In the present study the entire dataset was obtained in the chemistry laboratory. Carbon monoxide and hydrogen were injected into the autoclave reactor at a certain ratio, with Co(III)/Al2O3 used as the catalyst. The effects of residence time, pressure and temperature on product formation were investigated. Six datasets, each containing 45 data points, were used as the input/output of the ANN.

For developing the ANN model, the model inputs were the residence time, pressure and temperature of the reactor, and the molar percentages of CH4, CO2 and CO were used as the network outputs. Table 1 gives the ranges of the input/output data used for developing the ANN model, and Figs. 6–11 present three-dimensional plots of some of the experimental data.

The architecture that gave the best model fit for predicting the molar percentage of CH4 in terms of MSE was 5-8-1, i.e. 5, 8 and 1 neurons in the input, hidden and output layers, respectively. Using trial and error in the evaluations, the least cross-validation error and the correlation coefficient between the predicted and actual %CH4 data points were 0.052 and 0.94, respectively. It must be mentioned that ANN predictions are at their optimum when the values of R2 and MSE are close to 1 and 0, respectively.

By the same trial-and-error method, changing the initial parameters, a 4-7-1 ANN architecture provided the best prediction of the molar percentage of CO2 in terms of MSE. Since its least cross-validation error and best correlation coefficient were 0.035 and 0.93, respectively, this model can be considered an appropriate network. For the prediction of the molar percentage of CO, the best ANN structure was a 4-9-1 architecture, which gave the least cross-validation error, 0.023; its correlation coefficient of 0.96 is an acceptable value for an ANN-based model. To build the ANN models, code was developed in MATLAB using the standard functions of the Neural Network Toolbox.

To optimize the neural network architecture, the computations started with one neuron in the hidden layer as the initial guess. The number of neurons was then increased, and the performance function MSE was calculated at each step, as shown in Figs. 12–14.
Table 1
Range of dataset used for input/output of ANN.
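The trial-and-error scan over hidden-layer sizes can be sketched as below. This is not the paper's MATLAB code: as a simplified stand-in for full toolbox training, it fits a one-hidden-layer network with random tanh hidden weights and least-squares output weights, and records the MSE for each candidate size. The synthetic data and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_and_score(X, y, n_hidden):
    """Fit a 1-hidden-layer net in a simplified way (random tanh hidden
    weights + least-squares output weights) and return the training MSE."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # input -> hidden weights
    H = np.tanh(X @ W)                            # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return float(np.mean((H @ beta - y) ** 2))

# Synthetic data standing in for the (time, pressure, temperature) inputs.
X = rng.uniform(-1, 1, size=(45, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

# Scan candidate hidden-layer sizes and keep the one with the lowest MSE.
scores = {n: fit_and_score(X, y, n) for n in range(1, 11)}
best = min(scores, key=scores.get)
```

The paper's actual scan differs in that each candidate network is fully trained by back-propagation before its MSE is recorded.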
Fig. 8. Output composition of CO2 versus input residence time and pressure.
Fig. 9. Output composition of CO2 versus input residence time and temperature.
Fig. 10. Output composition of CH4 versus input residence time and temperature.
Fig. 11. Output composition of CO versus residence time and pressure.
Fig. 12. Network performance (MSE) in prediction of molar percent of CH4 for different numbers of neurons in hidden layer.
Fig. 13. Network performance (MSE) in prediction of molar percent of CO2 for different numbers of neurons in hidden layer.
Fig. 15. Comparison of actual molar percentage of CH4 and ANN predictions.
Fig. 16. Comparison of actual molar percentage of CO2 and ANN predictions.
… updating the weights of the artificial neurons in a single-layer … experience. The back-propagation algorithm was used as the learning algorithm to adapt the weight functions (Callan, 1999). To achieve the best results, the input data were normalized to the range [−1, +1] by the 'minmax' MATLAB function before training. For the input-hidden layers the transfer function is 'tansig', a hyperbolic tangent sigmoid, and for the output layer the linear 'purelin' is applied as the transfer function. The weight and bias values of the input-hidden layers are updated according to the resilient back-propagation algorithm. Wang et al. (1996) showed that a feed-forward neural network with one hidden layer employing sigmoidal basis functions is capable of approximating many non-linear functions. The sigmoidal basis function and the input transformation are defined as equations (7) and (8), respectively:

φ(z) = 1 / (1 + exp(−a z))   (7)

Fig. 14. Network performance (MSE) in prediction of molar percent of CO for different numbers of neurons in hidden layer.
Fig. 17. Comparison of actual molar percentage of CO and ANN predictions.
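A minimal Python sketch of this preprocessing and of the two transfer functions follows; these are stand-ins for MATLAB's minmax-based normalization, 'tansig' and 'purelin', and the function names are illustrative.

```python
import math

def normalize(values):
    """Map raw inputs linearly onto [-1, +1] (assumes max > min),
    as the minmax-based preprocessing does before training."""
    lo, hi = min(values), max(values)
    return [2.0 * (v - lo) / (hi - lo) - 1.0 for v in values]

def tansig(z):
    """Hyperbolic tangent sigmoid transfer function ('tansig')."""
    return math.tanh(z)

def purelin(z):
    """Linear transfer function ('purelin') used in the output layer."""
    return z
```

For example, normalizing the temperatures [150, 225, 300] gives [-1.0, 0.0, 1.0], so all inputs enter the network on the same scale.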
Table 2
ANN model performance.

ANN performance   % molar of CH4   % molar of CO2   % molar of CO
MSE               0.052            0.035            0.023
MAE               0.199            0.215            0.311
R2                0.94             0.93             0.96

Table 3
The best and average fitness of GA.

Optimization summary   Best fitness   Average fitness
Generation #           21             17
Minimum MSE            0.002          0.003

Table 4
GA parameters for the GA-MLP model.

Parameter                                                            Value
Population size                                                      50
Maximum number of generations                                        100
Maximum number of consecutive generations for which                  10
no improvement is observed
Crossover probability                                                0.9
Mutation probability                                                 0.01
Cluster centers                                                      15
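The performance measures reported in Table 2 follow standard definitions, which can be sketched in plain Python (illustrative function names):

```python
def mse(actual, predicted):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r2(actual, predicted):
    """Coefficient of determination, R^2 = 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

A perfect model gives MSE = MAE = 0 and R^2 = 1, which is why the text judges networks by how close they come to those limits.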
z = Σj=1..P bj xj + b0   (8)

For the different experimental data, the initial weights were generated using a random function in the interval −0.1 to +0.1. Each network topology was trained an average of three times and gave similar results. In this study, the model that best simulated and generalized the experimental results was sought by starting from small networks (a small number of weights) and then enlarging the networks until the best model without over-fitting was achieved. A network was enlarged by adding more neurons to its existing two hidden layers.

The process of phenotypic fitness measurement, selection, crossover recombination and mutation was iterated through 100 generations, and the network with the lowest error was designated as the optimal evolved network. Figs. 18 and 19 illustrate the best fitness and the average fitness values for all the data. According to Table 3, the MSE for the best and average fitness is 0.002 and 0.003, respectively.

The entire dataset (6 datasets of 45 data points each) was divided into three sets of different lengths: the first set (60% of the data) was used for model building or "training", the second set (20%) for cross-validation and the third set (20%) for testing the model. The target was to build a model that can predict the result of this experiment for different conditions. GA-MLP, a combination of a genetic algorithm and a multi-layer perceptron, is used to select the optimal subset of variables. A multi-layer perceptron (MLP) is a feed-forward artificial neural network model that maps sets of input data onto a set of appropriate outputs. The parameter values employed are summarized in Table 4. The execution time was less than 15 min on a Pentium IV 1.86 GHz processor, which shows that the method is not time consuming; the optimum point is mostly achievable by running the algorithm just once.

The results of optimization are illustrated in Figs. 20–22. As shown in these figures, the GA-ANN results are in excellent agreement with the experimental data. Comparison of the predictions of the GA-ANN model with those of the trial-and-error model showed that the GA-ANN model significantly outperforms the trial-and-error ANN approach. In other words, GA was found to be a good alternative to the trial-and-error approach for determining the optimal ANN architecture and internal parameters quickly and efficiently. The performance of the GA-ANN model on the test sets, reported in Table 5, shows that the neural network model incorporating a GA is …

Fig. 18. Average fitness (MSE) versus generation.
Fig. 19. Best fitness (MSE) versus generation of GA-ANN.
Fig. 20. Comparison of GA-ANN prediction and actual molar percentage of CH4.
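The generational loop of fitness measurement, selection, crossover and mutation can be sketched as below. The fitness function here is a toy stand-in for "train the ANN encoded by the genome and return its cross-validation MSE", and the genome layout and truncation-selection scheme are assumptions; only the population size, generation count and crossover/mutation probabilities follow Table 4.

```python
import random

random.seed(0)

def fitness(genome):
    """Toy objective: pretend (hidden=8, code=3) is the ideal genome.
    In the real GA-ANN, this would be the trained network's MSE."""
    hidden, code = genome
    return (hidden - 8) ** 2 + (code - 3) ** 2

def evolve(pop_size=50, generations=100, p_cross=0.9, p_mut=0.01):
    pop = [(random.randint(1, 20), random.randint(0, 9)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Crossover: child takes one gene from each parent.
            child = (a[0], b[1]) if random.random() < p_cross else a
            if random.random() < p_mut:           # mutation: re-draw one gene
                child = (random.randint(1, 20), child[1])
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Because the best half of each generation is carried over unchanged, the best fitness can only improve over the generations, which is the monotone decrease seen in Fig. 19.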
Fig. 21. Comparison of GA-ANN prediction and actual molar percentage of CO2.
Fig. 22. Comparison of GA-ANN prediction and actual molar percentage of CO.

Table 5
GA-ANN model performance.

Fig. 23. Sensitivity analysis about the mean percent of CH4 output.
Fig. 24. (a) Effect of temperature on conversion of CH4, (b) effect of temperature on conversion of CO2, (c) effect of temperature on conversion of CO.
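The kind of one-at-a-time sensitivity screening behind Fig. 23 can be sketched as follows. The extracted text does not give the paper's exact procedure, so the scheme, function names and toy model here are assumptions; the toy coefficients merely mirror the reported finding that operation time dominates the CH4 response.

```python
def sensitivity(model, baseline, delta=0.05):
    """One-at-a-time sensitivity: perturb each normalized input by +/-delta
    and record the magnitude of the output's response per unit input."""
    effects = []
    for i in range(len(baseline)):
        up = list(baseline); up[i] += delta
        dn = list(baseline); dn[i] -= delta
        effects.append(abs(model(up) - model(dn)) / (2 * delta))
    return effects

# Toy model standing in for the trained ANN: input 0 ("time") dominates.
toy = lambda x: 3.0 * x[0] + 0.5 * x[1] + 0.2 * x[2]
effects = sensitivity(toy, [0.0, 0.0, 0.0])
```

Ranking the resulting effect sizes identifies which operational parameter most influences the predicted CH4 molar percentage.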
Fig. 26. (a) Effect of time on conversion of CH4, (b) effect of time on conversion of CO2, (c) effect of time on conversion of CO.
4. Conclusion

… output composition of Fischer–Tropsch synthesis as a reliable estimation method with high accuracy.

Nomenclature

Symbols
a	coefficient of sigmoidal activation function
m	number of neurons in the hidden layer
n	number of input training data
N	number of all data
o	experimental value
P	number of neurons in the input layer
t	predicted value
w	weight
x	input parameter
y	network response from output layer neuron(s)
z	input transformation of activation function

Greek letters
α	linear parameter
α*	best-fit of linear parameter
β	non-linear parameter
β*	best-fit of non-linear parameter
η	learning rate
θ	threshold (bias)
σ2	variance of experimental data
φ	activation function (basis function)

Abbreviations
AI	artificial intelligence
ANN	artificial neural networks
FTS	Fischer–Tropsch synthesis
GA	genetic algorithm
MAE	mean absolute error
Max_fail	maximum validation failures
mem_reduc	speed tradeoff factor
MLP	multilayer perceptron
MSE	mean square error
mu	momentum
mu_dec	mu decrease factor
mu_inc	mu increase factor
mu_max	maximum of mu
NMSE	normalized mean-squared error
RMSE	root mean square error
R	linear correlation coefficient
R2	coefficient of determination

References

Bera, A.K., Ghosh, A., Ghosh, A., 2005. Regression model for bearing capacity of a square footing on reinforced pond ash. Geotex. Geomemb. 23, 261.
Borg, O., Dietzel, P.D.C., Spjelkavik, A.I., Tveten, E.Z., Walmsley, J.C., Diplas, S., Eri, S., Holmen, A., Ryttera, E., 2008. Fischer–Tropsch synthesis: cobalt particle size and support effects on intrinsic activity and product distribution. J. Catal. 259, 25.
Callan, R., 1999. The Essence of Neural Networks. Prentice Hall, Hertfordshire.
Cano, L.A., Cagnoli, M.V., Bengoa, J.F., Alvarez, A.M., Marchetti, S.G., 2011. Effect of the activation atmosphere on the activity of Fe catalysts supported on SBA-15 in the Fischer–Tropsch synthesis. J. Catal. 278, 310–320.
Chen, G.L., Wang, X.F., Zhuang, Z.Q., Wang, D.S., 1996. Principles and Application of Genetic Algorithm. People's Post & Telecom Press, Beijing (in Chinese).
Fausett, L., 1994. Fundamentals of Neural Networks, Architectures, Algorithms and Applications. Prentice Hall, New Jersey.
Genel, K., Kurnaz, S.C., Durman, M., 2003. Modeling of tribological properties of alumina fiber reinforced zinc–aluminum composites using artificial neural network. Mater. Sci. Eng. A 363, 203–210.
Genel, K., 2004. Application of artificial neural network for predicting strain-life fatigue properties of steel on the basis of tensile tests. Int. J. Fatigue 26, 1027–1035.
Gyurova, L.A., Miniño-Justel, P., Schlarb, A.K., 2010. Modeling the sliding wear and friction properties of polyphenylene sulfide composites using artificial neural networks. Wear 268, 708–714.
Gyurova, L.A., Friedrich, K., 2011. Artificial neural networks for predicting sliding friction and wear properties of polyphenylene sulfide composites. Tribol. Int. 44, 603–609.
Haykin, S., 1999. Neural Networks: a Comprehensive Foundation, second ed. Prentice Hall, New Jersey.
Hegazy, T., Moselhi, O., Fazio, P., 1994. Developing practical neural network applications using back-propagation. Microcomput. Civ. Eng. 9, 145–459.
Holland, J.H., 1975. Adaptation in Natural and Artificial Systems. Springer, Berlin.
Huang, K., Zhan, X.L., Chen, F.Q., Lu, D.W., 2003. Catalyst design for methane oxidative coupling by using artificial neural network and hybrid genetic algorithm. Chem. Eng. Sci. 58, 81–87.
Iglesia, E., Soled, S.L., Fiato, R.A., Via, G.H., 1993. Bimetallic synergy in cobalt ruthenium Fischer–Tropsch synthesis catalysts. J. Catal. 143, 345.
Jamialahmadi, M., Javadpour, F.G., 2000. Relationship of permeability, porosity and depth using an artificial neural network. J. Pet. Sci. Eng. 26, 235–239.
Karaca, H., Safonova, O.V., Chambrey, S., Fongarland, P., Roussel, P., Griboval-Constant, A., Lacroix, M., Khodakov, A.Y., 2011. Structure and catalytic performance of Pt-promoted alumina-supported cobalt catalysts under realistic conditions of Fischer–Tropsch synthesis. J. Catal. 277, 14–26.
Karr, C.L., Weck, B., Massart, D.L., Vankeerberghen, P., 1995. Least median squares curve fitting using genetic algorithm. Eng. Appl. Artif. Intel. 8, 177–189.
Koolivand Salooki, M., Abedini, R., Adib, H., Koolivand, H., 2011. Design of neural network for manipulating gas refinery sweetening regenerator column outputs. Sep. Purif. Technol. 82, 1–9.
Malek Abbaslou, R.M., Soltan, J., Dalai, A.K., 2011. Iron catalyst supported on carbon nanotubes for Fischer–Tropsch synthesis: effects of Mo promotion. Fuel 90, 1139–1144.
Marshall, S.J., Harrison, R.F., 1991. Optimization and training of feed forward neural networks by GAs. In: Proceeding of IEEE Second International Conference on Artificial Neural Networks, pp. 39–43.
Miller, G., Todd, P., Hedge, S., 1989. Designing neural networks using genetic algorithms. In: Proceeding of the Third International Joint Conference on Genetic Algorithms, pp. 379–384.
Myshkin, N.K., Kwon, O.K., Grigoriev, A.Y., Ahn, H.S., Kong, H., 1997. Classification of wear debris using neural network. Wear 203–204, 658–662.
Pai, P.S., Mathew, M.T., Stack, M.M., Rocha, L.A., 2008. Some thoughts on neural network modeling of micro abrasion–corrosion processes. Tribol. Int. 41, 672–681.
Parlak, A., Islamoglu, Y., Yasar, H., Egrisogut, A., 2006. Application of artificial neural network to predict specific fuel consumption and exhaust temperature for a diesel engine. Appl. Therm. Eng. 26, 824–828.
Poggio, T., Girosi, F., 1990a. Regularization algorithms for learning that are equivalent to multilayer networks. Science 247, 978–982.
Poggio, T., Girosi, F., 1990b. Networks for approximation and learning. Proc. IEEE 78, 1481–1497.
Saemi, M., Ahmadi, M., Varjani, A.Y., 2007. Design of neural networks using genetic algorithm for the permeability estimation of the reservoir. J. Pet. Sci. Eng. 59, 97–105.
Sahoo, G.B., Ray, C., 2006. Flow forecasting for a Hawaiian stream using rating curves and neural networks. J. Hydrology 317, 63–80.
Salehi, H., Zeinali Heris, S., Koolivand Salooki, M., Noei, S.H., 2011. Designing a neural network for closed thermosyphon with nano fluid using a genetic algorithm. Braz. J. Chem. Eng. 28, 157–168.
Schanke, D., Vada, S., Blekkan, E.A., Hilmen, A.M., Hoff, A., Holmen, A., 1995. Study of Pt-promoted cobalt CO hydrogenation catalysts. J. Catal. 156, 85–95.
Schooling, J.M., Brown, M., Reed, P.A.S., 1999. An example of the use of neural computing techniques in materials science: the modeling of fatigue thresholds in Ni-base superalloys. Mater. Sci. Eng. A 260, 222–239.
Sharma, B.K., Sharma, M.P., Kumar, S., Roy, S.K., Roy, S.K., Badrinarayanan, S., Sainkar, S.R., Mandale, A.B., Date, S.K., 2011. Studies on cobalt-based Fischer–Tropsch catalyst and characterization using SEM and XPS techniques. Appl. Catal. A 211, 203–211.
Smit, E.D., Bealea, A.M., Nikitenko, S., Weckhuysen, B.M., 2009. Local and long range order in promoted iron-based Fischer–Tropsch catalysts: a combined in situ X-ray absorption spectroscopy/wide angle X-ray scattering study. J. Catal. 262, 244–256.
Tchaban, T., Griffin, J.P., Taylor, M.J., 1998. A comparison between single and combined back propagation neural networks in the prediction of turnover. Eng. Appl. Artif. Intel. 11, 41–47.
Van Rooij, A.J.F., Jain, L.C., Johnson, R.P., 1996. Neural Network Training Using Genetic Algorithms, first ed. World Scientific Pub Co. Inc., Singapore.
Vonk, E., Jain, L.C., Johnson, R.P., 1997. Automatic Generation of Neural Network Architecture Using Evolutionary Computation. World Scientific Pub Co. Inc., Singapore.
Wang, X., Luo, R., Shao, H., 1996. Designing a soft sensor for a distillation column with the fuzzy distributed radial basis function neural network. Proc. IEEE Conf. Decis. Control 2, 1714–1719.
Whitcombe, J.M., Cropp, R.A., Braddock, R.D., Agranovski, I.E., 2006. The use of sensitivity analysis and genetic algorithms for the management of catalyst emissions from oil refineries. Math. Comput. Model. 44, 430–438.
Yuhong, Z., Wenxin, H., 2009. Application of artificial neural network to predict the friction factor of open channel flow. Comm. Nonlinear Sci. Numer. Simulat. 14, 2373–2378.
Zeng, P., 1998. Neural computing in mechanics. Appl. Mech. Rev. 51, 173–197.