
Procedia Technology 10 (2013) 242 – 252

International Conference on Computational Intelligence: Modeling Techniques and Applications (CIMTA) 2013

Prediction of Flow Regime for Air-Water Flow in Circular Micro Channels using ANN
Nirjhar Bar a, Sudip Kumar Das a,*

a Department of Chemical Engineering, Calcutta University, 92, A. P. C. Road, Kolkata – 700009, West Bengal, India

Abstract

The use of Artificial Neural Networks for the classification of flow regimes in air-water flow through micro channels is presented. 561 data points for air-water two-phase flow in circular micro channels, drawn from experimental results in the published literature, have been used. Three well-known artificial neural network models have been used to predict the flow regime. The ANN model based on the Levenberg-Marquardt training algorithm gives slightly better predictability than the other training algorithms.
© 2013 The Authors. Published by Elsevier Ltd.
Selection and peer-review under responsibility of the University of Kalyani, Department of Computer Science & Engineering.
Keywords: Air-water flow; flow regime; backpropagation; Levenberg-Marquardt; support vector machine

1. Introduction

Fluid flow in microchannels is widely studied because of its importance in a large number of engineering applications such as micro-scale heat exchangers and reactors used in different fields of engineering like electronics, automotive vehicles, biochemistry, laser processing equipment and aerospace technology [1–2]. Microreactor technology has experienced vast development in a variety of applications used by chemists and chemical engineers because it offers enhanced heat and mass transfer, improved chemistry, easy scale-up and inherently safe processes [3–9]. Gas-liquid flow is involved in many unit operations; when these processes are operated in microchannels, high interfacial areas are expected, the transport rates are greatly increased and the equipment size is reduced to the micro scale. Such flows are applied in gas-liquid catalytic hydrogenation processes, direct fluorination and gas absorption [5, 10–15]. The gas-liquid

* Corresponding author. Tel.: +91-33-2350-1397 Ext. 247 (O); fax: +91-33-2351-9755.
E-mail address: drsudipkdas@vsnl.net

2212-0173 © 2013 The Authors. Published by Elsevier Ltd.
doi:10.1016/j.protcy.2013.12.358

two-phase flow through a microchannel differs from macro channel flow mainly because surface tension plays an important role, the flow is laminar in nature, and the effects of wall roughness and wettability are important [9, 16–19]. So the flow patterns in a microchannel should differ from those in large channels. Air-water two-phase flow through circular microchannels has been investigated by different researchers, who reported the existence of dispersed bubbly flow, gas slug flow, liquid-ring flow, liquid lump flow and liquid droplet flow [20–21]. Shao et al. [16] reviewed the flow patterns of gas-liquid flow through microchannels and concluded that a flow regime map based on the UGS–ULS co-ordinate system better predicts the transition from one flow regime to another. Flow patterns are important for understanding two-phase flow phenomena and are expected to influence the two-phase pressure drop, void fraction, system stability, and the transfer rates of heat, mass and momentum. Hence accurate prediction of the flow pattern in gas-liquid flow through micro channels is necessary, and Artificial Neural Network (ANN) modeling is a suitable choice for flow pattern prediction.
Recently, Bar et al. [22] reported experiments on air-water flow through 3.0 and 4.0 mm diameter thick-walled flexible transparent PVC pipes. They successfully used five different training algorithms to predict the flow pattern. The present study attempts to predict the flow regime for air-water flow in micro channels using three different ANN structures, i.e., two types of Multilayer Perceptron (MLP), trained with the Backpropagation (BP) and Levenberg-Marquardt (LM) algorithms respectively, and a Support Vector Machine (SVM), for horizontal circular micro channels of diameters ranging from 0.05–0.6 mm, using experimental data collected from the literature [17, 23, 24].

2. ANN Methodology

Figure 1 shows the schematic diagram of the ANN. The data were collected from the published work of the researchers listed in Table 1.

Fig. 1. Schematic diagram of the ANN.

Table 1. Description of data collected from literature for air-water flow.

Authors                      Diameters used (mm)        Number of data points   Flow regimes (as per authors)
Chung and Kawaji [17]        0.05, 0.1, 0.25 and 0.53   393                     A, B, C, M, RS, SA, S, SR
Venkatesan et al. [24]       0.6                        91                      B, DB, S, SA
Saisorn and Wongwises [23]   0.15                       77                      LUAAF, LAAF, A

2.1. Input and output data for ANN analysis

The input parameters are:

1) Liquid velocity – Ul
2) Gas velocity – Ug
3) Diameter of the micro channels – D

The density of air – ρg, the air viscosity – μg and the acceleration due to gravity g are constant, so they are ineffective as input parameters for ANN programming. Hence they did not take part in the analysis. The output parameters are the various flow regimes as expressed by the researchers. Table 2 presents the range of variables investigated in the experimental studies.

Table 2. Range of the data used for ANN analysis.

Measurement                          Range
Channel diameter D (mm)              0.05 ≤ D ≤ 0.6
Physical properties of air:
  Density (kg/m3)                    1.1611
  Viscosity (Ns/m2)                  1.86275 × 10-5
Physical properties of water:
  Density (kg/m3)                    995.646
  Viscosity (Ns/m2)                  7.98 × 10-4
Flow velocities:
  Liquid velocity Ul (m/s)           0.0105 to 5.84
  Air velocity Ug (m/s)              0.0223 to 68.9
Total number of data points          561

The output of the ANN consists of the 11 different flow regimes mentioned by the researchers, represented below:
 Annular → A
 Bubbly → B
 Churn → C
 Dispersed Bubbly → DB
 Liquid/unstable annular alternating flow → LUAAF
 Liquid/annular alternating flow → LAAF
 Multiple → M
 Ring-Slug → RS
 Slug → S
 Slug Annular → SA
 Slug-Ring → SR
In general, researchers use either normalized data or raw data for ANN analysis. Based on our previous experience [22, 25–29], only the raw data is used here for the ANN analysis. The total data was first randomized into three different samples to eliminate sampling error. Then each of these three samples was analyzed (training, cross-validation and final prediction) for all three algorithms, namely Backpropagation, Levenberg-Marquardt and Support Vector Machine, as discussed earlier. The output was represented by 11 columns corresponding to the 11 different flow patterns. The output was translated from symbolic data to numeric data, i.e. into 11 columns consisting of 0s and 1s, where each input row corresponding to a particular air velocity, water velocity and tube diameter was represented in the output by ten 0 values and a single 1. The column holding the 1 corresponds to that flow pattern.
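The symbolic-to-numeric translation described above amounts to one-hot encoding. A minimal Python sketch, using the 11 regime labels listed earlier (the function and variable names are our own, not from the original analysis):

```python
# The 11 flow-regime labels used in this study, in a fixed column order.
REGIMES = ["A", "B", "C", "DB", "LUAAF", "LAAF", "M", "RS", "S", "SA", "SR"]

def one_hot(regime):
    """Return the 11-column 0/1 output row for a given flow-regime label."""
    vec = [0] * len(REGIMES)
    vec[REGIMES.index(regime)] = 1
    return vec
```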
For the comparison of the performance of the ANN predictions, the following were considered:

1) Air-water flow only
2) Circular tubes only
3) Horizontal orientation of tubes only
4) No normalization was done on the overall data for any analysis
5) MLPs have only one hidden layer
6) The hyperbolic tangent function is used in all hidden and output layers of the MLPs
7) The number of processing elements varies from 1–25 only for all cases of MLPs
8) The amounts of data for training, cross-validation and prediction are kept the same for all training algorithms
9) Only one computer with the same architecture was used for all analyses

2.2. Optimization of the ANN

The optimization of the ANN networks was achieved by trial and error and depends on:
 The data used;
 Number of processing elements used in the hidden layer;
 The number of epochs for training;
 Stopping criterion of the training;
 The values of the constant parameters used for algorithm.
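The trial-and-error optimization over the factors above can be sketched as an exhaustive search over hidden-layer sizes (the 1–25 range used in this study). Here `train_and_eval` is a hypothetical callback that trains a network with n processing elements and returns its minimum cross-validation MSE:

```python
def best_hidden_size(train_and_eval, max_pe=25):
    """Trial-and-error search: evaluate every hidden-layer size from
    1 to max_pe and keep the one with minimum cross-validation MSE.
    `train_and_eval` is a hypothetical training callback."""
    results = {n: train_and_eval(n) for n in range(1, max_pe + 1)}
    return min(results, key=results.get)
```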

2.3. Multilayer Perceptron

It is a well-established fact that MLPs can be successfully used for function approximation and classification problems. MLPs are structured in three layers: an input layer, hidden layer(s) and an output layer. A Multilayer Perceptron (MLP) with one hidden layer has been used for the prediction of the unknown flow pattern. The Backpropagation (BP) and Levenberg-Marquardt (LM) algorithms, with the hyperbolic tangent transfer function presented in Eq. (1) in the hidden and output layers, were used for training.
e x  e  x (1)
tanh  x 
e x  e  x

2.3.1. Backpropagation

It is discussed in detail in our previous publications [22, 25–29]. The backpropagation process propagates the errors (during training and cross-validation) backward through the network and allows adaptation of the hidden layer processing elements; a closed-loop control system is thus established. The synapse that connects the input layer to the hidden layer and the synapse that connects the hidden layer to the output layer adjust their weights to reduce the error. This process continues until the threshold value for cross-validation is achieved. This threshold value is the minimum value desired for the cross-validation MSE. The number of processing elements in the hidden layer is varied according to the requirements of the training and cross-validation process.
The following formula is used for updating the weights in the hidden layer [25–29],

Δwij(t) = −η ∂E/∂wij + α Δwij(t − 1)    (2)

where Δwij(t) represents the change of the connection weight between the ith input xi and the jth processing element in the hidden layer during epoch number t, and E is represented as:

E = ½ (xin − yin)²    (3)

where i corresponds to the ith input and output, n corresponds to the nth epoch during training, yi is the ith output, η is the learning rate and α is the momentum coefficient.
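For a single weight, the momentum update of Eq. (2) can be sketched as follows (an illustrative sketch, not the Neurosolutions implementation; the default η and α are the hidden-layer values reported in Section 3.1):

```python
def bp_update(w, grad, prev_delta, eta=1.0, alpha=0.7):
    """One backpropagation step with momentum for a single weight, Eq. (2):
    delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1).
    Returns the updated weight and the applied change."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta
```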

2.3.2. Levenberg-Marquardt

It is a second-order learning algorithm. The following formula is used for updating the weights in the hidden layer [27–29],

Δw = (JᵀJ + μI)⁻¹ Jᵀ e    (4)

where J is the (N·m) × n Jacobian matrix, e is the (N·m) × 1 error matrix, I is the identity matrix, μ is the combination coefficient, N is the number of epochs, n is the number of weights and m is the number of outputs. When μ is large the algorithm becomes steepest descent, and when it is small the algorithm becomes Gauss-Newton. In this way the LM algorithm combines the best features of these two algorithms while avoiding most of their limitations.
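For a single weight, Eq. (4) collapses to a scalar, which makes the role of μ easy to see. A hypothetical sketch (not the solver actually used):

```python
def lm_step(jacobian, error, mu):
    """Scalar specialization of the Levenberg-Marquardt update, Eq. (4):
    delta_w = (J^T J + mu I)^-1 J^T e.
    Large mu -> step ~ J*e/mu (steepest descent);
    small mu -> step ~ e/J (Gauss-Newton)."""
    return (jacobian * error) / (jacobian * jacobian + mu)
```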

2.4. Support Vector Machine

The method was first proposed by Glucksman [30] and later made popular by others [31, 32]. In this approach two hyperplanes are considered between the two sets of data used for classification such that the separation between the two sets is maximized, i.e., the SVM orients a boundary so that the distance between the boundary and the nearest data point in each set is maximum (an optimal separating surface equidistant from both sets). Consider two datasets A and B. If we consider two hyperplanes describing the boundary of each data set, then the boundary of set A passes through some points of set A, and similarly for the boundary of set B. The points falling on the boundaries of A and B are known as support vectors. Once the support vectors are selected, the rest of the data can be discarded, thereby reducing the number of training data.
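The geometric idea above can be illustrated in one dimension, where each "hyperplane" is just a point. This toy sketch (our own illustration, not the SVM solver actually used) assumes the two sets are already linearly separable with all of set A below set B:

```python
def max_margin_1d(set_a, set_b):
    """Toy 1-D illustration of the maximum-margin idea: the optimal
    separating point lies midway between the closest points of the two
    sets; those two points are the support vectors, and all other
    points could be discarded without changing the boundary."""
    sv_a, sv_b = max(set_a), min(set_b)   # nearest points = support vectors
    boundary = (sv_a + sv_b) / 2.0        # equidistant from both sets
    margin = (sv_b - sv_a) / 2.0          # distance from boundary to each set
    return boundary, margin, (sv_a, sv_b)
```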
All the models of neural network used for this analysis have been discussed thoroughly in our previous publications [22, 25–29]. There is no scientific formula to predict the number of hidden layers or the number of processing elements in the hidden layer(s). These numbers depend on the number of training data, the available time and the computer architecture.

3. Results and performance

Neurosolutions 5.07 on a computer with an Intel Core i7 processor (2.8 GHz), Intel DP55KG motherboard (1333 MHz), 16 GB of DDR3 RAM (1333 MHz), an ATI RADEON HD graphics card with 1 GB DDR5 RAM and a 64-bit Windows 7 Ultimate operating system was used for this analysis.
A total of 561 data points have been used for the ANN analysis; of these, 365 data points were used for training, 140 data points for cross-validation and 56 data points for final prediction.
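The randomize-and-split step can be sketched as follows. The use of Python's `random` module and a fixed seed are assumptions for illustration; the paper does not specify the randomization method:

```python
import random

def split_data(rows, seed=0):
    """Shuffle the 561 data rows and split them into the training (365),
    cross-validation (140) and final-prediction (56) sets reported above."""
    rng = random.Random(seed)  # hypothetical fixed seed for reproducibility
    shuffled = rows[:]
    rng.shuffle(shuffled)
    return shuffled[:365], shuffled[365:505], shuffled[505:561]
```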
The MSE was calculated from the following formula,

MSE = (1/N) Σᵢ₌₁ᴺ (xᵢ − yᵢ)²    (5)
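Eq. (5) in code, applied over the desired outputs x and the network outputs y (an illustrative sketch):

```python
def mse(desired, predicted):
    """Mean squared error of Eq. (5) between desired outputs x_i
    and network outputs y_i over N data points."""
    n = len(desired)
    return sum((x - y) ** 2 for x, y in zip(desired, predicted)) / n
```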

The number of processing elements was varied from 1 to 25 in a single hidden layer.

Table 3. Minimum value of MSE for cross-validation reached during training for the 2 MLPs.

Sample number   Backpropagation   Levenberg-Marquardt
1               0.027763          0.031470
2               0.023941          0.023608
3               0.024206          0.032601

Table 4. Optimum number of processing elements in the hidden layer during training for 2 different MLPs.
Sample number Backpropagation Levenberg-Marquardt
1 21 20
2 25 14
3 24 23

Table 3 presents the minimum value of MSE for cross-validation for the different networks. From the values of MSE in Table 3 it is clear that the desired minimum cross-validation MSE, i.e., 0.01, which was also another criterion to stop training, was never reached during the training and cross-validation process by any network. The network with the particular number of processing elements in the single hidden layer that records the minimum value of cross-validation MSE is then used for the final prediction. This procedure is followed for all three samples.
Table 4 presents the optimum number of processing elements in the hidden layer for the Multilayer Perceptrons during training.

3.1. Backpropagation

For the training of the MLP network with the Backpropagation algorithm a maximum of 32000 epochs is used. For the hidden layer of the BP network the learning rate η was 1 and the momentum coefficient α was 0.7. For the output layer the learning rate η was 0.1 and the momentum coefficient α was 0.7. However, training was stopped when no improvement in cross-validation MSE was observed for 10000 consecutive epochs. The 1st columns of Tables 3 and 4 present the minimum value of cross-validation MSE reached and the optimum number of processing elements, respectively.
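The two stopping criteria described above (reaching the 0.01 cross-validation MSE target, or a fixed number of epochs without improvement) can be combined in a small helper. A hypothetical sketch, not the Neurosolutions implementation:

```python
def should_stop(cv_mse_history, patience=10000, target=0.01):
    """Early-stopping rule: stop when the latest cross-validation MSE
    reaches the target, or when the best MSE so far has not improved
    for `patience` consecutive epochs."""
    if cv_mse_history[-1] <= target:
        return True
    best_idx = min(range(len(cv_mse_history)), key=cv_mse_history.__getitem__)
    return len(cv_mse_history) - 1 - best_idx >= patience
```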

Fig. 2. Variation of the minimum value of cross-validation MSE with the number of processing elements in the hidden layer for the network trained with the backpropagation algorithm for all 3 samples.

Figure 2 compares the minimum value of MSE for cross-validation with the number of processing elements in the hidden layer for all three samples of the MLP network where the weights were updated using the Backpropagation algorithm. Figure 3 shows the cross-validation curve for all three samples of this network up to the point where the cross-validation MSE reaches its minimum value.
Fig. 3. Cross-validation curve for the network trained with the backpropagation algorithm for all 3 samples.

3.2. Levenberg-Marquardt

Fig. 4. Variation of the minimum value of cross-validation MSE with the number of processing elements in the hidden layer for the network trained with the Levenberg-Marquardt algorithm for all 3 samples.

For the training of the MLP network with the LM algorithm a maximum of 1000 epochs has been used. For this network the initial value of μ was 0.01. However, training was stopped when no improvement in cross-validation MSE was observed for 200 consecutive epochs. The number of processing elements was varied from 1 to 25 in a single hidden layer.

Fig. 5. Cross-validation curve for the network trained with the Levenberg-Marquardt algorithm for all 3 samples.

Figure 4 compares the minimum value of MSE for cross-validation with the number of processing elements in the hidden layer for all three samples of the MLP network where the weights were updated using the Levenberg-Marquardt algorithm. The 2nd columns of Tables 3 and 4 present the minimum value of cross-validation MSE reached and the optimum number of processing elements, respectively. Figure 5 shows the cross-validation curve for all three samples of this network up to the point where the cross-validation MSE reaches its minimum value.

3.3. Support vector machine

For the training of the SVM network 1000 epochs are used. However, training was stopped when no improvement in cross-validation MSE was observed for 100 consecutive epochs. Figure 6 presents the cross-validation curves for all three samples of the SVM network. The training curves for all three samples overlap each other, as evident from the figure. The gradual decrease shows that training was good. Near the end of training, the upward bend of the curves and their abrupt end show the termination of training, as the increase or lack of improvement in cross-validation MSE continued for 100 epochs. The gradual decrease in the average MSE demonstrates the effectiveness of the training.
Fig. 6. Cross-validation graph for the SVM network for all 3 samples.

The final predicted flow pattern was identified as the output closest to 1. The predicted flow patterns were compared with the respective experimental data. Table 5 shows the performance of the three different ANNs in predicting the flow patterns. The accuracy of the prediction is affected by the overlapping of data in the original graphs of the authors and by the presence of some data points on the boundary line separating the different flow regimes. From the means calculated in Table 5, it is clear that the network trained with Levenberg-Marquardt gives slightly better predictability than the other networks.
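The decoding rule described above (pick the flow regime whose output value is closest to 1) can be sketched as follows (function names are our own):

```python
def decode_prediction(outputs, regimes):
    """Map the 11 real-valued network outputs to a flow-regime label
    by selecting the column whose value is closest to 1."""
    best = min(range(len(outputs)), key=lambda i: abs(1.0 - outputs[i]))
    return regimes[best]
```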

Table 5. Comparison of ANN predictions with experimental flow regime data for the 3 different ANNs used (number of correct predictions out of the 56 prediction data points).
Sample number Backpropagation Levenberg-Marquardt Support vector machine
1 53 52 45
2 48 51 48
3 48 48 39
Mean 49.67 50.33 44

4. Conclusions

Experimental observations of air-water two-phase flow in microchannels identifying the different flow regimes depicted in the literature have been collected. Different neural network training algorithms, i.e., backpropagation, Levenberg-Marquardt and Support Vector Machine, have been used to predict the flow pattern. All three networks predict the flow regimes well. The network trained with Levenberg-Marquardt gives slightly better predictability than the other networks.

References

[1] Tuckerman DB, Pease RFW. High-performance heat sinking for VLSI. IEEE Electronic Device Lett 1981; ED-2:126-9.
[2] Liu JT, Peng XF, Wang BX. Variable-property effect on liquid flow and heat transfer in microchannels. Chem Engg J 2008; 141:346-53.
[3] Ehrfeld W, Hessel V, Löwe H. Microreactors: new technology for modern chemistry. Wiley-VCH, Weinheim, 2000.
[4] Jähnisch K, Baerns M, Hessel V, Ehrfeld W, Haverkamp V, Löwe H, Wille Ch, Guber A. Direct fluorination of toluene using elemental
fluorine in gas-liquid microreactors. J Fluorine Chem 2002; 105:117-28.
[5] Jähnisch K, Hessel V, Löwe H, and Baerns M. Chemistry in microstructured reactors. Angew Chem Int Ed Engl 2004; 43:406-46.
[6] Jensen KF, Microreaction engineering – is small better?. Chem Engg Sci 2001; 56:293-303.
[7] Kolb G, Hessel V. Micro-structured reactors for gas phase reactions. Chem Engg J 2004; 98:1-38.
[8] Watts P, Haswell SJ.The application of micro reactors for organic synthesis. Chem Soc Rev 2005; 34:235-46.
[9] Yue J, Chen G, Yuan Q, Luo L, Gonthier Y. Hydrodynamics and mass transfer characteristics in gas-liquid flow through a rectangular
microchannel. Chem Engg Sci 2007; 62:2096-108.
[10] de Mas N, Gunther A, Schmidt MA, Jensen KF. Microfabricated multiphase reactors for the selective direct fluorination of aromatics.Ind
Engg Chem Res 2003; 42:698-710.
[11] Hessel V, Angeli P, Gavriilidis A, Löwe H. Gas-liquid and gas-liquid-solid microstructured reactor: contacting principles and applications.
Ind Engg Chem Res 2005; 44:9750-69.
[12] Kobayashi J, Mori Y, Okamoto K, Akiyama R, Ueno M, Kitamori T, Kobayashi S. A microfluidic device for conducting gas-liquid
hydrogenation reactions. Science 2004; 304:1305-08.
[13] Löb P, Löwe H, Hessel V. Fluorinations, chlorinations and brominations of organic compounds in micro reactors. J Fluorine Chem 2004;
125:1677-94.
[14] Tegrotenhuis WE, Cameron RJ, Viswanathan VV, Wegeng RS. Solvent extraction and gas absorption using microchannel contactors. In
Ehrfeld, W. (Ed.), Microreaction Technology: Industrial Prospects: IMRET 3: Proc. 3rd Int. Conf. Microreaction Technology, Springer, Berlin,
(2000) 541.
[15] Yeong KK, Gavriilidis A, Zapf R, Hessel V. Experimental studies of nitrobenzene hydrogenation in a microstructured falling film reactor.
Chem Engg Sci 2004; 59:3491-94.
[16] Shao N, Gavriilidis A, Angeli P. Flow regimes for adiabatic gas-liquid flow in microchannels. Chem Engg Sci 2009; 64:2749-61.
[17] Chung PM-Y, Kawaji M, The effect of channel diameter on adiabatic two-phase flow characteristics in microchannels. Int J Multiphase
Flow 2004; 30:735-61.
[18] Kawahara A, Chung PM-Y, Kawaji M. Investigation of two-phase flow pattern, void fraction and pressure drop in a microchannel. Int J
Multiphase Flow 2002; 28:1411-35
[19] Serizawa A, Feng Z, Kawara Z. Two phase flow in microchannels. Exp Thermal Fluid Sci 2002; 26:703-14.
[20] Feng ZP, Serizawa A, Visualization of two-phase flow patterns in an ultra-small tube. In: Proc. 18th Multiphase Flow Sym., Japan, 15–16
July, Osaka, Japan, 1999.
[21] Yue J, Luo L, Gonthier Y, Chen G, Yuan Q. An experimental investigation of gas-liquid two-phase in single microchannel contactors. Chem
Engg Sci 2008; 63:4189-202.
[22] Bar N, Ghosh TK, Biswas MN, Das SK. Air-Water Flow through 3mm and 4mm Tubes – Experiment and ANN Prediction. Int J Artificial
Intelligent Systems and Machine Learning 2011; 3:531-37.
[23] Saisorn S, Wongwises S. An experimental investigation of two-phase air-water flow through a horizontal circular micro-channel. Expt
Therm Fluid Sci 2009; 33:306-15.
[24] Venkatesan M, Das SK, Balakrishnan AR. Effect of tube diameter on two-phase flow patterns in mini tubes. Can J Chem Eng 2010;
88:936-44.
[25] Bar N, Bandyopadhyay TK, Biswas MN, Das SK. Prediction of pressure drop using artificial neural network for non-Newtonian liquid flow
through piping components. J Pet Sc Eng 2010; 71:187-94.
[26] Bar N, Biswas MN, Das SK. Prediction of pressure drop using artificial neural network for gas-non-Newtonian liquid flow through piping
components. Ind Eng Chem Res 2010; 49:9423-29.
[27] Bar N, Das SK. Comparative study of friction factor by prediction of frictional pressure drop per unit length using empirical correlation and
ANN for gas-non-Newtonian liquid flow through 180o circular bend. Int Rev Chem Engg 2011; 3:628-43.
[28] Bar N, Das SK. Frictional Pressure Drop for Gas - Non-Newtonian Liquid Flow through 90o and 135o Circular Bend: Prediction Using
Empirical Correlation and ANN. Int J Fluid Mech Res 2013; 39:416-37.
[29] Das SK, Bar N. Hydrodynamics of Gas-non-Newtonian Liquid Flow and ANN Predictability. Saarbrücken, Germany: Lambert Academic Publishing, 2013.
[30] Glucksman H, On the improvement of a linear separation by extending the adaptive process with a stricter criterion. IEEE Trans. Electronic
Computer 1966; EC-15:941-44.
[31] Boser B, Guyon I, Vapnik V. A training algorithm for optimal margin classifiers, 5th Annual Workshop on Computational Learning Theory,
New York, ACM Press, 1992.
[32] Cortes C, Vapnik V. Support-vector networks. Mach. Learning 1995; 20:273-97.
