
International Conference on Science, Technology Engineering and Management

Intelligent flow meter using swarm intelligence


Kalaiselvan G.
Assistant Professor, Dept. of Electronics and Instrumentation Engineering, Jeppiaar Engineering College, Chennai, India

ABSTRACT: An adaptive flow meter calibration scheme is designed using a PSO-trained ANN. The objective of this work is to model an advanced flow measuring instrument using an intelligent algorithm. The paper examines the linearity range of the venturi and distinguishes the diverse flow meter parameters. It deals with training the weights of a feed-forward neural network using a hybrid algorithm that combines the particle swarm optimization (PSO) algorithm with the back-propagation (BP) algorithm, also known as the PSO–BP algorithm. In the preliminary stages of a global search, the particle swarm optimization algorithm converges quickly, but the search process slows down around the global optimum.
On the contrary, the gradient-descent method attains a faster convergence speed and higher accuracy around the global optimum. The hybrid algorithm can therefore exploit the strong global searching ability of PSO as well as the strong local searching ability of BP. The overall system can be linearised and made independent of the venturi-to-pipe diameter ratio, liquid density and temperature by adding the PSO-trained ANN block in cascade with the data conversion unit, thus avoiding repeated calibration when any of these liquid parameters, alone or in combination, is changed.

ICONSTEM- IREES’16

Keywords: Flow Meter, Linearization, Feed Forward Neural Network, Swarm Intelligence.

I. INTRODUCTION

Flow control plays a major role in controlling many parameters in the power plant, oil and natural gas, and food and pharmaceutical industries. Flow control is achieved using flow meters. Flow measuring instruments derive their principle from Bernoulli's theorem.
The high sensitivity and ruggedness of the venturi make it widely applicable, but the problems of offset, a highly non-linear response characteristic, and the dependence of the output on the venturi-to-pipe diameter ratio, liquid density and temperature have restricted its use and imposed further difficulties. Several techniques have been recommended to overcome the difficulties caused by the nonlinear response characteristic of the venturi, but they are tedious and time-consuming. Further, the calibration process must be repeated every time the diameter ratio or the liquid is changed. A change in liquid temperature worsens the linearity problem of a venturi, since the output depends on both the temperature and the flow rate of the liquid. To overcome the above difficulties, this paper suggests a smart flow measurement technique that uses an artificial neural network. The network is trained to widen the linearity range, making the result independent of the venturi-to-pipe diameter ratio, liquid density and temperature.

II. VENTURI FLOW METER

Volumetric flow rate can be measured using a venturi flow meter (shown in Fig. 1). It is based on Bernoulli's principle, which relates the pressure and the velocity of a fluid: an increase in velocity is accompanied by a decrease in pressure, and vice versa. The venturi is a tapered structure inserted in the pipe in which the fluid flows. On reaching the venturi nozzle, the fluid is forced to converge to pass through the narrow throat. The vena contracta is the point, occurring downstream of the venturi nozzle, where the maximum convergence happens and where the velocity and pressure of the liquid change. Once the liquid crosses the vena contracta, it expands, so the velocity and pressure change once again. The volumetric and mass flow rates can be obtained from Bernoulli's equation by measuring the difference between the fluid pressure in the normal pipe section and at the vena contracta.


Q = (Cd · Ab / √(1 − β⁴)) · √(2 (Pa − Pb) / ρ)

Where
Cd – discharge coefficient
Ab – area of the flow meter cross section
β – ratio of Db to Da
Pa – pressure in the steady flow section
Pb – pressure at the vena contracta
ρ – density of the liquid

Figure 1. Venturi flow meter
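The flow-rate computation from the quantities listed above can be sketched numerically. The sketch below uses the standard venturi form Q = Cd·Ab/√(1−β⁴)·√(2(Pa−Pb)/ρ) in SI units; the numeric values for Cd, Ab, β and the pressures are illustrative examples, not data from the paper.

```python
import math

def venturi_flow_rate(cd, ab, beta, pa, pb, rho):
    """Volumetric flow rate Q from a differential-pressure reading.

    cd   - discharge coefficient (dimensionless)
    ab   - flow meter cross-section area (m^2)
    beta - diameter ratio Db/Da (dimensionless)
    pa   - pressure in the steady flow section (Pa)
    pb   - pressure at the vena contracta (Pa)
    rho  - liquid density (kg/m^3)
    """
    return (cd * ab / math.sqrt(1.0 - beta ** 4)) * math.sqrt(2.0 * (pa - pb) / rho)

# Illustrative values (not from the paper):
q = venturi_flow_rate(cd=0.98, ab=7.85e-5, beta=0.5, pa=2.0e5, pb=1.8e5, rho=1000.0)
print(f"Q = {q:.6f} m^3/s")
```

Note how the output depends on β and ρ as well as the pressure difference, which is exactly the dependence the PSO-trained ANN is meant to compensate.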

The effect of temperature on the density [17-18] is given by

ρt = ρto (1 + (Pt − Pto)/k) / (1 + α (t − to))

Where
ρt – specific density of the liquid at temperature t °C
ρto – specific density of the liquid at temperature to °C
Pt – pressure at temperature t °C
Pto – pressure at temperature to °C
k – bulk modulus of the liquid
α – temperature coefficient of the liquid



III. METHODOLOGY

Fig. 2 shows the block diagram representation of the proposed instrument.

Figure 2. Block diagram


The PSO formulae define each particle as a potential solution to a problem in a D-dimensional space, with the ith particle represented as Xi = (xi1, xi2, ..., xiD). Each particle also remembers its previous best position, designated pbest, and its velocity Vi = (vi1, vi2, ..., viD) (Carlisle and Dozier 2000). In each generation, the velocity of each particle is updated, being pulled in the direction of its own previous best position (pi) and the best of all positions (pg) reached by all particles up to the preceding generation. The original PSO formulae developed by Kennedy and Eberhart were modified by Shi and Eberhart (1998) with the introduction of an inertia parameter, w, that was shown empirically to improve the overall performance of PSO:

vid(k+1) = w vid(k) + c1 rand1() (pid − xid(k)) + c2 rand2() (pgd − xid(k))    (1)

xid(k+1) = xid(k) + vid(k+1)    (2)

where rand1() and rand2() are samples from a uniform random number generator, k represents a relative time index, c1 is a weight determining the impact of the previous best solution and c2 is the weight of the global best solution's impact on particle velocity. For more details of the particle swarm optimization algorithm the reader is referred to (Veeramachaneni et al. 2003, 2007). Generally, in population-based search optimization methods, considerably high diversity is necessary during the early part of the search to cover the full range of the search space. On the other hand, during the latter part of the search, when the algorithm is converging to the optimal solution, fine-tuning of the solutions is important to find the global optima efficiently (Ratnaweera et al. 2004).
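The velocity and position updates of Eqs. (1) and (2) can be sketched for a single particle. This is a minimal illustration of the inertia-weighted step; the default w here is illustrative, while c1 = c2 = 2 matches the paper's Table 2.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.5, c1=2.0, c2=2.0):
    """One inertia-weighted PSO update, Eqs. (1) and (2).

    x, v, pbest, gbest - lists of equal length D (position, velocity,
    particle best and global best). Returns the new (x, v).
    """
    new_x, new_v = [], []
    for xd, vd, pd, gd in zip(x, v, pbest, gbest):
        vd_next = (w * vd
                   + c1 * random.random() * (pd - xd)    # pull toward own best
                   + c2 * random.random() * (gd - xd))   # pull toward global best
        new_v.append(vd_next)
        new_x.append(xd + vd_next)                       # Eq. (2)
    return new_x, new_v

# One update of a 2-D particle:
x, v = pso_step([0.0, 0.0], [0.1, -0.1], pbest=[1.0, 1.0], gbest=[2.0, 2.0])
```

When a particle already sits at both its own best and the global best with zero velocity, the update leaves it in place, which is why the search stagnates near the optimum and motivates the switch to BP described in the next section.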

IV. HYBRID PSO–BP ALGORITHM

The PSO–BP algorithm is obtained by combining the PSO and BP algorithms. The strong search ability of the PSO algorithm makes it possible to find the global optimum, as in the GA algorithm; on the other hand, it has the disadvantage of very slow search around the global optimum. The BP algorithm has a high ability to find a local optimum, but a weak ability to determine the global optimum. Hence this paper formulates a new hybrid algorithm, PSO–BP, by associating PSO with BP. The central concept of this hybrid algorithm is that, at the beginning of the optimum search, PSO is employed to accelerate the training. When the fitness function value has not changed for several generations, or has changed by less than a predefined amount, the search process is switched to gradient-descent search according to this heuristic knowledge.
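The switching criterion described above can be sketched as a small predicate. The window length and tolerance below are illustrative parameters, not values from the paper.

```python
def should_switch_to_bp(fitness_history, window=10, tol=1e-6):
    """Heuristic hand-over from PSO to gradient-descent (BP) search:
    switch once the best fitness has not changed, or has changed by
    less than `tol`, over the last `window` generations."""
    if len(fitness_history) < window:
        return False
    recent = fitness_history[-window:]
    return max(recent) - min(recent) < tol

# The PSO phase would append its best fitness each generation and
# switch to BP as soon as this predicate returns True.
```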

As in the APSO algorithm, the parameter w reduces steadily as the iterative generation increases. The selection strategy for the inertia weight w is to reduce w linearly in the initial stages and non-linearly in the later stages. Through repeated experiments, the parameter maxgen1 is adjusted to an appropriate value, and the search around the global optimum Pg is performed using an adaptive gradient-descent method. In this paper, we adopt the following approach for the learning rate:

where g is the learning rate, k and g0 are constants, and epoch is a variable representing the iteration count. The decay speed of the learning rate is controlled by tuning k and g0. The weights of a feed-forward neural network with a two-layered structure are evolved using three kinds of algorithms. If there are n nodes in the input layer, the hidden layer consists of H hidden nodes and the output layer consists of n output nodes. Fig. 3 shows the structure of the two-layered feed-forward neural network and the corresponding fitness function. The hidden transfer function is assumed to be the sigmoid function, and the output transfer function a linear activation function. From Fig. 3, it can be seen that the output of the jth hidden node is:


Fig. 3. Two-layered feed-forward neural network structure

yj = f(sj) = 1 / (1 + e^(−sj)),   sj = Σ(i=1..n) wij xi − hj

where n is the number of input nodes, wij is the connection weight from the ith node of the input layer to the jth node of the hidden layer, hj is the threshold of the jth hidden unit, xi is the ith input, and sj is the weighted input sum of the jth hidden node.

The output of the kth output node is:

Ck = Σ(j=1..H) wkj yj − hk

where wkj is the connection weight from the jth hidden node to the kth output node and hk is the threshold of the kth output unit. The learning error E can be computed by the following formulation:

E = Σ(k=1..q) Ek,   Ek = (1/2) Σi (yki − Cki)²

where q is the total number of training samples and yki − Cki is the error between the desired output and the actual output of the ith output unit when the kth training sample is used. The fitness function of the ith training sample is defined in terms of this learning error.
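The hidden-layer and output-layer computations, together with the learning error E, can be sketched as follows. The sigmoid hidden activation and linear output follow the description above; the weight and threshold values in the usage example are illustrative.

```python
import math

def forward(x, w_ih, h_hidden, w_ho, h_out):
    """Two-layer feed-forward pass as described in the text:
    sigmoid hidden units, linear output units. Thresholds are
    subtracted from the weighted input sums."""
    hidden = []
    for j in range(len(h_hidden)):
        s_j = sum(w_ih[i][j] * x[i] for i in range(len(x))) - h_hidden[j]
        hidden.append(1.0 / (1.0 + math.exp(-s_j)))   # sigmoid
    return [sum(w_ho[j][k] * hidden[j] for j in range(len(hidden))) - h_out[k]
            for k in range(len(h_out))]               # linear output

def learning_error(desired, actual):
    """Sum over training samples of half the squared output errors."""
    return sum(0.5 * sum((y - c) ** 2 for y, c in zip(ys, cs))
               for ys, cs in zip(desired, actual))

# Tiny illustrative network: 1 input, 1 hidden node, 1 output.
out = forward([0.0], w_ih=[[1.0]], h_hidden=[0.0], w_ho=[[2.0]], h_out=[0.0])
```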


When the PSO algorithm is used to evolve the weights of the feed-forward neural network, every particle denotes a set of weights, and there are three encoding strategies for each particle, explained in detail in the following. The shortcomings examined in the initial section are overcome by adding an artificial neural network (ANN) model in cascade with the data converter unit. This model is designed using the neural network toolbox of MATLAB. Once the weights are calculated, verification is done to check whether the mean squared error (MSE) is less than the error threshold (Th) for at least 10 repeated readings. If this requirement is fulfilled, the whole model is saved; otherwise the iterative update of the ANN parameters continues until the maximum number of iterations is reached, and the model is then saved with a caution that the desired performance has not been reached.
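The verification procedure just described, MSE below a threshold Th for at least 10 repeated readings, else iterate up to a maximum count, can be sketched as a training loop. `update_ann` and `mse` below are hypothetical stand-ins for the actual MATLAB toolbox routines; the threshold and limits are illustrative.

```python
def train_with_verification(update_ann, mse, th=1e-4, checks=10, max_iter=1000):
    """Iterate ANN parameter updates until the MSE stays below the
    threshold `th` for `checks` consecutive readings, or until
    `max_iter` updates have been performed.

    update_ann() - performs one parameter update (hypothetical stand-in)
    mse()        - returns the current mean squared error (hypothetical)
    Returns (converged, iterations_used).
    """
    below = 0
    for it in range(1, max_iter + 1):
        update_ann()
        below = below + 1 if mse() < th else 0
        if below >= checks:
            return True, it       # model saved as converged
    return False, max_iter        # model saved with a caution flag

# Toy stand-ins: the error halves on every update.
state = {"e": 1.0}
def update_ann(): state["e"] *= 0.5
def mse(): return state["e"]
ok, n = train_with_verification(update_ann, mse)
```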

V. RESULTS AND DISCUSSION


Initially, only one hidden layer is chosen and training, validation and testing are completed. Since the values of R and MSE were not near the expected values, the number of hidden layers is increased by one and training, validation and testing are done again. This continues until an appropriate number of hidden layers is found that yields acceptable values of R and MSE.

Table 1: Optimized parameters of the neural network model

  Database            Training base (60%)     90
                      Validation base (20%)   30
                      Test base (20%)         30
  No. of neurons      1st layer               8
                      2nd layer               7
                      3rd layer               7
                      4th layer               8
  Transfer function   1st layer               tansig
                      2nd layer               tansig
                      3rd layer               tansig
                      4th layer               tansig
                      Output layer            tansig
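Table 1's structure, four hidden layers of 8, 7, 7 and 8 neurons with `tansig` activations throughout, can be sketched as a forward pass. MATLAB's `tansig` is the hyperbolic tangent; the random weights and the input dimension of 3 here are placeholders, not the trained values.

```python
import math, random

def tansig(x):
    """MATLAB's tansig activation is the hyperbolic tangent."""
    return math.tanh(x)

def make_layer(n_in, n_out, rng):
    """Placeholder random weights standing in for trained values."""
    return [[rng.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_in)]

def forward(x, layers):
    """Apply tansig after every layer, including the output layer,
    matching Table 1."""
    for w in layers:
        x = [tansig(sum(xi * w[i][j] for i, xi in enumerate(x)))
             for j in range(len(w[0]))]
    return x

rng = random.Random(0)
sizes = [3, 8, 7, 7, 8, 1]  # illustrative input dim -> Table 1 hidden layers -> output
layers = [make_layer(sizes[i], sizes[i + 1], rng) for i in range(len(sizes) - 1)]
y = forward([0.2, 0.5, 0.1], layers)
```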

Table 2: PSO parameters

  Parameter        Value or Range
  Initial weight   0.5
  Final weight     -
  c1               2
  c2               2
  r1               [0, 1]
  r2               [0, 1]

With the details mentioned above, the network is trained, validated and tested. Tables 1 and 2 summarize the various parameters of the hybrid ANN-PSO model and show the structure of the neural network considered in the present case.

Table 3: Comparison of the performances of the PSO–BPA and BPA

  Hidden unit   BPA training   BPA        BP–PSOA training   BP–PSOA
  number        CPU time       error      CPU time           error
  3             29.8 s         8.4e-004   14.9 s             4.2e-004
  4             35.3 s         8.9e-004   16.7 s             6.5e-005
  5             39.5 s         8.3e-004   18.9 s             7.4e-005
  6             48.6 s         4.3e-004   19.4 s             6.9e-005
  7             49.9 s         7.8e-004   21.3 s             5.74e-005

VI. CONCLUSION

Smart sensors should be capable of providing accurate readout, calibration and auto-compensation for the nonlinear influence of environmental parameters on their characteristics. The proposed PSO-ANN-based model may be applied to flow sensors to incorporate intelligence in terms of auto-calibration and to mitigate the nonlinear dependence of their response characteristics on environmental parameters. The proposed PSO-ANN is trained, validated and tested with simulated data. Once training is over, the system with the venturi nozzle is subjected to various test inputs corresponding to different flow rates at particular values of diameter ratio, liquid density and temperature, all within the specified range. The outputs of the system with PSO-ANN are better than those of the plain ANN model for the various input flow rates at different values of diameter ratio, liquid density and temperature. The proposed method corrects the errors with a reduced number of iterations, preserves the quality of the flow meter, and yields better performance in finding the best or near-best solutions.

VII. REFERENCES
[1] A. van Ooyen, B. Nienhuis, Improving the convergence
of the back-propagation algorithm, Neural Network 5 (4)
(1992) 465–471.
[2] Carlisle, A., and Dozier, G., (2000).Adapting Particle
Swarm Optimization to Dynamic Environments.
Proceedings of International Conference on Artificial
Intelligence, Las Vegas, Nevada, USA, 429-434.
[3] Chunkai Zhang, Huihe Shao, Yu Li, Particle swarm
optimization for evolving artificial neural network, in:
Proc. of IEEE Int. Conf. on System, Man, and
Cybernetics, vol. 4 (2000) 2487–2490.
[4] Cui Zhihua, Zeng Jianchao, Nonlinear particle swarm
optimizer: Framework and the implementation of
optimization, control, in: Proc. of Automation, Robotics
and Vision Conference, 2004. ICARCV 2004 8th, vol. 1
(2004) 238–241.
[5] D.S. Chen, R.C. Jain, A robust Back-propagation
Algorithm for Function Approximation, IEEE Trans.
Neural Network 5 (1994) 467–479.
[6] D.S. Huang, Systematic Theory of Neural
Networks for Pattern Recognition, Publishing
House of Electronic Industry of China, Beijing, 1996.


[7] E.O. Doebelin, “Measurement Systems - Application and Design'', Tata McGraw Hill publishing company, 5th edition, 2003; Bela G. Liptak, “Instrument Engineers' Handbook''.
[8] R. Hecht-Nielsen, “Theory of the back propagation neural network'', International Joint Conference on Neural Networks, USA, 1989.
[9] J. E. Lawley, “Orifice meter calibration'', 57th
International School of hydrocarbon Measurement,
document ID: 16233, 1982.
[10] Jian Qiu Zhang, Yong Yan, “A self validating
differential pressure flow sensor'', IEEE conference
Proceedings on Instrumentation and Measurement
Technology, Budapest, May 2001.
[11] J.J.F. Cerqueira, A.G.B. Palhares, M.K. Madrid, A
simple adaptive back-propagation algorithm for
multilayered feedforward perceptrons, in: Proc. of IEEE
International Conference on Systems, Man and
Cybernetics, volume 3, 6–9 October (2002) 6.
[12] J. Salerno, Using the particle swarm optimization
technique to train a recurrent neural model, in: Proc. of
Ninth IEEE Int. Conf. on Tools with Artificial
Intelligence (1997) 45–49.
[13] Lei Shi , Li Cai, En Li, Zize Liang, Zengguang Hou,
“Nonlinear Calibration of pH Sensor Based on the
Back-propagation Neural Network'', International
Conference on Networking, Sensing and Control, China,
2008


[14] Lijun Xu, Hui Li, Shaliang Tang, Cheng Tan, Bo Hu,
“Wet Gas Metering Using a Venturi-meter and Neural
Networks'', IEEE conference Proceedings on
Instrumentation and Measurement Technology, Canada,
May 2008.
[15] Lijun Xu, Wanlu Zhou, Xiaomin Li, Shaliang Tang,
“Wet Gas Metering Using a Revised Venturi Meter and
Soft-Computing Approximation Techniques'', IEEE
transactions on Instrumentation and Measurement,
vol.60, No 3, pp 946 - 2087, March 2011.
[16] Marco Gori, Alberto Tesi, On the problem of local
minima in back-propagation, IEEE Trans. Pattern Anal.
Mach. Intell. 14 (1) (1992) 76–86.
[17] Man Gyun Na, Yoon Joon Lee, In Joon Hwang, “A
smart software sensor for feedwater flow measurement
monitoring'', IEEE transactions on Nuclear Science,vol
52, No 6, pp 3026 - 3034, December 2005.
[18] M.K. Weirs, A method for self-determination of
adaptive learning rates in back propagation, Neural
Networks 4 (1991) 371–379.
[19] Pereira J.M.D, Postolache O, Girao P.M.B.S, “PDF-Based Progressive Polynomial Calibration Method for Smart Sensors Linearization'', IEEE transactions on Instrumentation and Measurement, vol.58, No 9, pp 3245-3252, September 2009.
[20] P.J. Angeline, G.M. Sauders, J.B. Pollack, An
evolutionary algorithm that constructs recurrent neural
networks, IEEE Trans. Neural Networks 5 (1) (1994)
54–65.


[21] R.A. Jacobs, Increased rates of convergence through


learning rate adaptation, Neural Networks 1 (1988)
295–307.
[22] Ratnaweera, A., Halgamuge, K., Watson, C., (2004).
Self-Organizing Hierarchical Particle Swarm Optimizer
with Time-Varying Acceleration Coefficients. IEEE
Trans. On Evolutionary Computation, 8(3), 240-255.
[23] R.C. Eberhart, Y. Shi, Comparing Inertia Weights and
Constriction Factors in Particle swarm Optimization, in:
Proc. of 2000 congress on Evolutionary Computing, vol.
1 (2000) 84–88.
[24] Rodrigo J. Plaza, “Sink or Swim: The Effects of
Temperature on Liquid Density and Buoyancy'',
California state science fair, 2006.
[25] R.C. Eberhart, J. Kennedy, Particle swarm
optimization, in: Proc. of IEEE Int. Conf. on Neural
Network, Perth, Australia (1995) 1942–1948.
[26] Santhosh K V, B K Roy, “An Intelligent Pressure
MeasuringInstrument'', International conference on
modern trends in instrumentation and control (ICIC
2011), Coimbatore, India, September 2011.
[27] Shi, Y., Eberhart, R. C., (1998). A Modified Particle
Swarm Optimizer. IEEE International Conference on
Evolutionary Computation, Anchorage, Alaska, 69-73.
[28] Shi, Y., Eberhart, R. C., (1999). Empirical study of
particle swarm optimization. Proc. IEEE Int.
Congr.Evolutionary Computation, (3), 101–106.
[29] Shigeru Yanagihara, Osamu Mochizuki, Kyosuke Sato, Keizo Saito, “Variable area venturi type exhaust gas flow meter'', JSAE Review, ScienceDirect, vol 20, No 2, pp 265-267, April 1999.
[30] S. Haykin, “Neural Networks: a comprehensive
foundation'', 2nd Edition, Prentice Hall India Press, 2001.
[31] S. Shaw, W. Kinsner, Chaotic simulated annealing in
multilayer feedforward networks, in: Proc. of Canadian
Conf. on Electrical and Computer Engineering, vol. 1
(1996) 265–269.
[32] Veeramachaneni, K., Peram, T., Ann, L., Mohan, C.,
(2003). Optimization Using Particle Swarms with Near
Neighbor Interactions. Lecture Notes in Computer
Science, Springer Verlag, Vol. 2723, DOI:
10.1007/3-540-45105-6.
[33] Veeramachaneni, K., Yan, W., Goebel, K., Osadciw,
L., (2007). Improving Classifier Fusion Using Particle
Swarm Optimization. IEEE Symposium on
Computational Intelligence in Multicriteria Decision
Making, 128 -135.
[34] U. Bhattacharya, S.K. Parui, The Use of Self-adaptive
learning rates to improve the convergence of
backpropagation algorithm, Tech. Rep. GVPR-1/95,
CVPR Unit, Indian Statistical Institute, Calcutta, India,
1995.
[35] X. Yao, A review of evolutionary artificial neural
networks, Int. J. Intell. Syst., 8(4) (1993) 539–567.
[36] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proc. of IEEE World Conf. on Computational Intelligence (1998) 69–73.


[37] Y. Shi, R.C. Eberhart, Empirical study of Particle


Swarm Optimization, in: Proc. of IEEE World
Conference on Evolutionary Computation (1999).

