controller is synthesized. The effectiveness of the control strategy is demonstrated using simulation results.

Keywords: Distillation Column, Dynamic Neural Network, Nonlinear H∞ Control
1. Introduction
A distillation column is a strongly nonlinear process with multivariate interactions among its outputs, and some uncertainty often exists in the system, which renders the analysis and control of a distillation column very difficult [1,2]. In practice, single-point control is commonly used because it is simple and easy to tune. However, the energy consumption of single-point control is large, and the product quality at the other end is not guaranteed. For high-purity distillation columns, two-point control and other strategies have been investigated [1,2]. Considering the intrinsic nonlinearity of a distillation column, nonlinear controllers have been designed based on rigorous mathematical models [3,4]. Although a nonlinear controller may be effective in simulation, its implementation in practical plants is complicated by the lack of measurements of some key variables. Furthermore, a controller based on an accurate mathematical model may perform poorly under large perturbations. For this reason, most controllers for distillation columns are synthesized from input-output relations such as transfer functions or models obtained by system identification [1,5-8].
Because of its capability to approximate arbitrary nonlinear mappings, the neural network has been actively used in nonlinear system identification and control [9-12]. The multilayer feedforward neural network (MFNN) is one of the most widely used neural networks as a system model in the design of a model-based controller. A dynamic neural network (DNN) is more suitable for approximating a nonlinear dynamic process. Accordingly, a dynamic recurrent neural network has been used to learn the input-output relationship of a column, and a local optimal controller based on the neural network model was given in [8]. Nonlinear adaptive controllers based on the MFNN and the RBFNN have also been studied [13,14]. A recent review [15] covering the past 28 years showed that most implementations of advanced control, such as internal model control, were based on linear models. Many recently published papers [16,17] on neural control for distillation columns are extensions of previous research supported by simulations. Although a neural network can be trained in simulation to approximate any nonlinear process, the control law must be designed to be robust to the modeling error. In this paper, a nonlinear H∞ controller based on the trained network model is employed to enhance the robustness of the control system. Simulation results demonstrate the effectiveness of the on-line identifier and the control algorithm.
F. LI 384
2. DNN Based On-Line Identifier of a Binary Distillation Column

Assume the distillation column has an (L, V) control structure, where L and V are the reflux flow in the condenser and the boil-up flow in the reboiler (kmol/min), respectively. Let u1, u2 be L, V, respectively. The outputs are the light compositions of the top and bottom products of the distillation column.

Near a set point, assume the nonlinear input-output relationship of the distillation column can be expressed approximately as

    \dot{Y} = f(Y) + (g_1(Y), g_2(Y)) (u_1(t), u_2(t))^T        (1)

where Y = (Y_1, Y_2)^T with Y_i \in [0, 1], f(Y), g_i(Y) \in R^2, and g(Y) = (g_1(Y), g_2(Y)) = (g_{ij}). According to mechanistic knowledge, g_{11} > 0, g_{21} > 0 and g_{12} < 0, g_{22} < 0; that is, increasing the reflux raises both light compositions while increasing the boil-up lowers them.

Assuming the outputs Y_1, Y_2 are known, a dynamic neural network based identifier is designed to approximate (1) as below:

    \dot{\hat{Y}} = A\hat{Y} + \hat{f}(Y, W_f) - AY + (\hat{g}_1(Y, W_{g_1}), \hat{g}_2(Y, W_{g_2})) (u_1(t), u_2(t))^T        (2)
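As a concrete illustration, identifier (2) can be simulated directly. The sketch below is not from the paper: the network sizes, the (untrained) random weights, the step size, and the Euler discretization are all illustrative assumptions; only the structure of (2) and the sign conventions of the two gain networks follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 2.0  # tunable activation slope, k > 0

def mlp(W1, W2, y, out_act=lambda z: z):
    """One-hidden-layer feedforward network with a sigmoid hidden layer."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ y)))
    return out_act(W2 @ h)

sig = lambda z: 1.0 / (1.0 + np.exp(-k * z))    # output activation of g1_hat (positive)
neg = lambda z: -1.0 / (1.0 + np.exp(-k * z))   # output activation of g2_hat (negative)

# Illustrative (untrained) weights: 2 inputs -> 6 hidden -> 2 outputs.
Wf = (rng.normal(size=(6, 2)), rng.normal(size=(2, 6)))
Wg1 = (rng.normal(size=(6, 2)), rng.normal(size=(2, 6)))
Wg2 = (rng.normal(size=(6, 2)), rng.normal(size=(2, 6)))

A = -9.0 * np.eye(2)  # any stable matrix

def identifier_step(Yhat, Y, u, dt=1e-3):
    """One Euler step of (2):
    Yhat' = A Yhat + f_hat(Y) - A Y + g1_hat(Y) u1 + g2_hat(Y) u2."""
    f_hat = mlp(*Wf, Y)          # linear output layer
    g1_hat = mlp(*Wg1, Y, sig)
    g2_hat = mlp(*Wg2, Y, neg)
    dYhat = A @ Yhat + f_hat - A @ Y + g1_hat * u[0] + g2_hat * u[1]
    return Yhat + dt * dYhat

Y = np.array([0.98, 0.02])        # nominal compositions
u = np.array([2.28625, 2.78625])  # nominal reflux and boil-up flows
Yhat = Y.copy()
for _ in range(2000):
    Yhat = identifier_step(Yhat, Y, u)
```

With A stable and Y, u held fixed, the identifier state settles to a constant; it is the weight update law introduced later in this section that drives the identifier output toward the plant output.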
where A is any stable matrix, and Y and \hat{Y} are the output vectors of the column and the identifier, respectively. \hat{f} and \hat{g}_i (i = 1, 2) are all MFNNs, which are the estimates of f and g_i in (1), respectively. The activation function of the output layer of the MFNN \hat{f} is linear. The activation function of \hat{g}_1 is the sigmoid function 1/(1 + exp(-kx)), and the activation function of \hat{g}_2 is -1/(1 + exp(-kx)), where k is a tunable parameter satisfying k > 0. W_f and W_{g_i} are the weight matrices.

From [9], for any \epsilon > 0 and any continuous function F(x), there exists a three-layer feedforward network satisfying

    sup_x || F(x) - \hat{F}(x, W^*) || < \epsilon

where W^* denotes the optimal weight matrix and \hat{F}(x, W^*) denotes the output of the network.
Let W_f^* and W_{g_i}^* be the corresponding optimal weight matrices; system (1) can then be rewritten as

    \dot{Y} = AY + \hat{f}(Y, W_f^*) - AY + \sum_{i=1}^{2} \hat{g}_i(Y, W_{g_i}^*) u_i(t) + \epsilon(Y)        (3)

where \epsilon(Y) is the modeling error of the optimal identifier, satisfying

    sup_{Y \in \Omega_Y} || \epsilon(Y) || <= \bar{\epsilon},    \Omega_Y = { Y | 0 < Y_1, Y_2 < 1 }

which can be made as small as desired by adjusting the neural network structure.
Let \theta denote the extended column vector of all the elements of a weight matrix W, and \theta^* the corresponding extended vector of the optimal weight matrix. The extended column vectors of W_f and W_{g_i} are \theta_f and \theta_{g_i}, respectively. Define the identification error as e = \hat{Y} - Y and the parameter estimation errors as \tilde{\theta}_j = \theta_j - \theta_j^* (j = f, g_i). From (2) and (3) we have

    \dot{e} = Ae + \tilde{f} + \sum_{i=1}^{2} \tilde{g}_i u_i(t) - \epsilon(Y)        (4)
where

    \tilde{f} = \hat{f}(Y, \theta_f) - \hat{f}(Y, \theta_f^*) = (\partial \hat{f} / \partial \theta_f) \tilde{\theta}_f + O_f

    \tilde{g}_i = \hat{g}_i(Y, \theta_{g_i}) - \hat{g}_i(Y, \theta_{g_i}^*) = (\partial \hat{g}_i / \partial \theta_{g_i}) \tilde{\theta}_{g_i} + O_{g_i}

and O_f, O_{g_i} are the higher-order terms of the Taylor expansions. Define

    d(t) = O_f + \sum_{i=1}^{2} O_{g_i} u_i(t) - \epsilon(Y)        (5)

and assume

    sup_t || d(t) || <= c_f + c_g sup_t || u(t) ||        (6)

where c_j (j = f, g) is a finite constant.
The updating algorithms of the parameter vectors \theta_f and \theta_{g_i} are as follows:

    \dot{\theta}_f = -2 (\partial \hat{f} / \partial \theta_f)^T P e - k_f ||Pe|| \theta_f

    \dot{\theta}_{g_i} = -2 (\partial \hat{g}_i / \partial \theta_{g_i})^T P e u_i(t) - k_{g_i} ||Pe|| \theta_{g_i}        (7)

where k_j > 0 (j = f, g_i), and P is the positive definite solution of the following Lyapunov equation:

    P A + A^T P = -Q

where Q is a positive definite matrix whose minimal eigenvalue is \lambda_0.
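The Lyapunov equation P A + A^T P = -Q needed by (7) can be solved numerically. Below is a minimal sketch using Kronecker-product vectorization; the concrete A and Q are illustrative, and scipy.linalg.solve_continuous_lyapunov would serve equally well.

```python
import numpy as np

def lyap(A, Q):
    """Solve P A + A^T P = -Q for P.

    Uses vec(P A) = (A^T kron I) vec(P) and vec(A^T P) = (I kron A^T) vec(P),
    with column-major (Fortran-order) vectorization.
    """
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    p = np.linalg.solve(M, -Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

A = -9.0 * np.eye(2)  # stable matrix used by the identifier
Q = np.eye(2)         # positive definite; minimal eigenvalue lambda_0 = 1
P = lyap(A, Q)        # for this diagonal A, P = I/18

assert np.allclose(P @ A + A.T @ P, -Q)  # verify the residual
```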
Theorem 1: If u(t) is bounded and the weight updating algorithm is (7), then the identification error e and the parameter estimation errors \tilde{\theta}_j (j = f, g_i) are uniformly ultimately bounded.

Proof: Choose the Lyapunov function

    V = e^T P e + (1/2) \tilde{\theta}_f^T \tilde{\theta}_f + (1/2) \sum_{i=1}^{2} \tilde{\theta}_{g_i}^T \tilde{\theta}_{g_i}

Differentiating V along (4) gives

    \dot{V} = -e^T Q e + 2 e^T P d(t)
              + \tilde{\theta}_f^T ( \dot{\theta}_f + 2 (\partial \hat{f} / \partial \theta_f)^T P e )
              + \sum_{i=1}^{2} \tilde{\theta}_{g_i}^T ( \dot{\theta}_{g_i} + 2 (\partial \hat{g}_i / \partial \theta_{g_i})^T P e u_i(t) )
Copyright 2011 SciRes. ICA
Substituting (7) into the above equation leads to

    \dot{V} = -e^T Q e + 2 e^T P d(t) - k_f ||Pe|| \tilde{\theta}_f^T \theta_f
              - \sum_{i=1}^{2} k_{g_i} ||Pe|| \tilde{\theta}_{g_i}^T \theta_{g_i}

Since \tilde{\theta}^T \theta = \tilde{\theta}^T (\tilde{\theta} + \theta^*) >= ||\tilde{\theta}||^2 - ||\tilde{\theta}|| ||\theta^*||, it follows that

    \dot{V} <= -\lambda_0 ||e||^2 + 2 ||Pe|| ||d(t)||
               - k_f ||Pe|| ( ||\tilde{\theta}_f||^2 - ||\tilde{\theta}_f|| ||\theta_f^*|| )
               - \sum_{i=1}^{2} k_{g_i} ||Pe|| ( ||\tilde{\theta}_{g_i}||^2 - ||\tilde{\theta}_{g_i}|| ||\theta_{g_i}^*|| )
We can further have, by completing the squares,

    \dot{V} <= -\lambda_0 ||e||^2
               - ||Pe|| [ k_f ( ||\tilde{\theta}_f|| - (1/2) ||\theta_f^*|| )^2
                          + \sum_{i=1}^{2} k_{g_i} ( ||\tilde{\theta}_{g_i}|| - (1/2) ||\theta_{g_i}^*|| )^2
                          - (1/4) k_f ||\theta_f^*||^2
                          - (1/4) \sum_{i=1}^{2} k_{g_i} ||\theta_{g_i}^*||^2
                          - 2 ||d(t)|| ]
Assuming u(t) is bounded and combining this with (6), we have

    || d(t) || <= c_f + c_g || u || <= c_m

so c_m is bounded. Define

    c_d = (1/4) k_f ||\theta_f^*||^2 + (1/4) \sum_{i=1}^{2} k_{g_i} ||\theta_{g_i}^*||^2 + 2 c_m.
It is easy to verify that \dot{V} < 0 if

    ||e|| > 2 ||P|| c_d / \lambda_0

or

    ||\tilde{\theta}_f|| > (1/2) ||\theta_f^*|| + (c_d / k_f)^{1/2}

or

    ||\tilde{\theta}_{g_i}|| > (1/2) ||\theta_{g_i}^*|| + (c_d / k_{g_i})^{1/2}.

Hence e and \tilde{\theta}_j are uniformly ultimately bounded. Since the optimal weights \theta_j^* are bounded, \theta_f, \theta_{g_i} \in L_\infty, and from (4) \dot{e} \in L_\infty as well. According to Barbalat's theorem [21], the identification error e converges to a residual set whose size, considering (5), shrinks with the approximation error of the networks.
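To see Theorem 1 at work numerically, the sketch below applies a scalar analogue of identifier (2) and update law (7) to a made-up first-order plant. Everything concrete here is an assumption for illustration: the plant, the Gaussian feature basis standing in for the networks, the gains, and the Euler discretization.

```python
import numpy as np

dt, a = 1e-3, -5.0        # Euler step; stable scalar "A"
p = 1.0 / (2.0 * abs(a))  # scalar Lyapunov solution of 2*a*p = -1 (Q = 1)
kf, kg = 0.1, 0.1         # leakage gains k_f, k_g in (7)

# Linear-in-parameter stand-ins for the networks: f_hat = th_f . phi(y), etc.
centers = np.linspace(-1.0, 1.0, 7)
phi = lambda y: np.exp(-((y - centers) ** 2) / 0.25)

th_f = np.zeros(7)
th_g = np.zeros(7)
y, yhat, t = 0.1, 0.0, 0.0
errs = []
for _ in range(20000):
    u = np.sin(t)  # bounded input, as Theorem 1 requires
    e = yhat - y
    # Identifier (scalar form of (2)): yhat' = a*yhat + f_hat - a*y + g_hat*u
    yhat += dt * (a * yhat + th_f @ phi(y) - a * y + (th_g @ phi(y)) * u)
    # Update law (scalar form of (7)) with the leakage terms
    th_f += dt * (-2.0 * phi(y) * p * e - kf * abs(p * e) * th_f)
    th_g += dt * (-2.0 * phi(y) * p * e * u - kg * abs(p * e) * th_g)
    # Unknown "true" plant, integrated alongside
    y += dt * (-y + 0.5 * np.sin(y) + (1.0 + 0.2 * y) * u)
    t += dt
    errs.append(abs(e))
```

The identification error stays bounded as the theorem predicts, though it need not vanish: the residual term d(t) in (5) keeps it inside a small set rather than at zero.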
3. Nonlinear H∞ Control

3.1. Nonlinear H∞ Control of Affine Systems

Consider an affine nonlinear system

    \dot{x} = f(x) + g_1(x) \epsilon + g_2(x) u
    y = h(x) + k(x) u        (8)

where x \in R^n, u \in R^m, \epsilon \in R^s, y \in R^p; f(x), g_1(x), g_2(x), h(x) and k(x) are smooth functions; u is the control input, \epsilon is the uncertainty (noise), and f(0) = 0, h(0) = 0.

For system (8), the design objective of nonlinear H∞ control is to guarantee the following conditions [19,20]:
1) the closed-loop system is asymptotically stable; and
2) for a given \gamma > 0, the following inequality holds for all t > 0:

    \int_0^t || y(\tau) ||^2 d\tau <= \gamma^2 \int_0^t || \epsilon(\tau) ||^2 d\tau

According to [19], defining V_x = \partial V / \partial x for x \in R^n and R(x) = k^T(x) k(x), we have:

Proposition: If R(x) is nonsingular, the nonlinear H∞ control law is expressed as

    u = -R^{-1}(x) ( (1/2) g_2^T(x) V_x^T + k^T(x) h(x) )        (9)

where V(x) >= 0 (V(0) = 0) satisfies the Hamilton-Jacobi inequality

    V_x \bar{f}(x) + \bar{h}^T(x) \bar{h}(x) + (1/4) V_x \bar{R}(x) V_x^T <= 0

where

    \bar{f}(x) = f(x) - (1/2) g_2(x) R^{-1}(x) k^T(x) h(x)
    \bar{h}(x) = ( I - k(x) R^{-1}(x) k^T(x) ) h(x)
    \bar{R}(x) = (1/\gamma^2) g_1(x) g_1^T(x) - g_2(x) R^{-1}(x) g_2^T(x).
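To make the Proposition concrete, consider a scalar linear special case (our own example, not the paper's): f(x) = a x, g_1 = g_2 = 1, h(x) = (x, 0)^T, k(x) = (0, 1)^T, so that R = k^T k = 1 and k^T h = 0. Then \bar{f} = f, \bar{h}^T \bar{h} = x^2, \bar{R} = 1/\gamma^2 - 1, and the ansatz V(x) = p x^2 turns the Hamilton-Jacobi inequality (taken with equality) into the scalar Riccati equation (1/\gamma^2 - 1) p^2 + 2 a p + 1 = 0, while (9) reduces to u = -p x.

```python
import math

a, gamma = -9.0, 1.1  # stable pole and attenuation level (illustrative)

# Solve (1/gamma^2 - 1) p^2 + 2 a p + 1 = 0; take the positive root.
c2 = 1.0 / gamma**2 - 1.0  # this is R_bar, negative since gamma > 1
p = (-2.0 * a - math.sqrt(4.0 * a * a - 4.0 * c2)) / (2.0 * c2)

residual = c2 * p * p + 2.0 * a * p + 1.0  # Hamilton-Jacobi residual
u_gain = -p                                # control law (9): u = -p x
```

With a = -9 and \gamma = 1.1 this gives p close to 0.0555, the same value that appears as P in the simulation section below.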
3.2. Nonlinear H∞ Control of the Distillation Column

With the identifier of Section 2, the column model can be written as

    \dot{Y} = AY + \hat{f} - AY + ( \hat{g}_1, \hat{g}_2 ) u(t) + e        (10)

where e is the uncertainty variable denoting the modeling error.

Assume the desired tracking trajectory Y_d satisfies \dot{Y}_d = A Y_d + s, where s is a reference variable. Define

    x = ( x_1, x_2 )^T = ( Y_1 - Y_{1d}, Y_2 - Y_{2d} )^T

From (10), the tracking error equation can be derived as below:

    \dot{x} = Ax + a(x) + G_1 e + G_2 u        (11)

where

    a(x) = \hat{f} - AY - s,    G_1 = ( 1 0 ; 0 1 ),    G_2 = ( \hat{g}_{11} \hat{g}_{12} ; \hat{g}_{21} \hat{g}_{22} ).
Choose the controlled output

    z = C x        (12)

Applying the Proposition to (11), the nonlinear H∞ tracking control law is

    u = -G_2^{-1}(Y) ( a(x) + P x )        (13)

where P is the positive definite solution of the following Riccati inequality:

    P A + A^T P + P ( (1/\gamma^2) I - I ) P + I <= 0        (14)

Remark: P can be obtained by solving a Riccati equation as below:

    P A + A^T P + P ( (1/\gamma^2) I - I ) P + C^T C = 0

where [A, C] is observable.
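A sketch of evaluating the control law (13) at one state. The numbers are purely illustrative: P is the value reported in the simulation section, while the identified gain matrix G2 and the drift a(x) are made up, with signs following the mechanistic assumptions of Section 2.

```python
import numpy as np

P = 0.0555 * np.eye(2)         # Riccati solution (as in the simulation section)
G2 = np.array([[0.8, -0.6],    # illustrative identified gain matrix G2(Y);
               [0.5, -0.9]])   # signs: g11 > 0, g12 < 0, g21 > 0, g22 < 0
x = np.array([0.005, -0.001])  # tracking error Y - Y_d
a_x = np.array([0.02, -0.01])  # illustrative drift term a(x)

# Control law (13): u = -G2^{-1} (a(x) + P x)
u = -np.linalg.solve(G2, a_x + P @ x)
```

In the closed loop, u cancels the drift a(x) and applies the stabilizing feedback -P x through the identified input gain; note that (13) is computable only where G2(Y) is nonsingular.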
4. Simulation
After The binary distillation model used in the simula
tion is the same as the one developed by [2]. The nomi
nal values of outputs are
10
0.98(mol%) = Y
(mol%)
and
20
. The nominal values of reflux flow
and boilup flow are L = 2.28625 (kmol/min) and
V = 2.78625 (kmol/min). The feed flow and feed compo
sition are F = 1 (kmol/min) and Z
f
= 0.5 (mol%).
0.02 Y =
9 0
0 9
A
 
=

\ .
1 0
, 1.1
0 1
C
 
= =

\ .
0.0555 0
0 0.0555
P
 
=

\ .
The distillation column is controlled using the control
strategy (13) developed in this paper. The tacking prop
erty and the disturbance attenuation property of the con
trol system are demonstrated through simulation. To
make the transient response be more elegant, the dy
namic neural network based identifier is trained for some
time offline.
In the simulation, we choose
,
and the positive definite P we obtained is
.
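The value of P above can be reproduced with a standard solver. The sketch below maps the Remark's Riccati equation P A + A^T P + P ((1/\gamma^2) I - I) P + C^T C = 0 onto SciPy's CARE form A^T X + X A - X B R^{-1} B^T X + Q = 0 by taking B = I and R = (1 - 1/\gamma^2)^{-1} I, which is valid because \gamma > 1.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = -9.0 * np.eye(2)
C = np.eye(2)
gamma = 1.1

B = np.eye(2)
R = np.eye(2) / (1.0 - 1.0 / gamma**2)  # so that B R^{-1} B^T = (1 - 1/gamma^2) I
Q = C.T @ C

P = solve_continuous_are(A, B, Q, R)    # expect P close to 0.0555 * I
```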
1) Servo properties

Let the set-point of the top light component change from the nominal value 0.98 to 0.995, while the bottom composition remains at its nominal value 0.02. The deviations of the system outputs from their nominal values are shown in Figure 1, and the curves of the control inputs in Figure 2.
2) Robustness

The responses of the closed-loop system when the feed flow F and the feed composition Z_f increase by 10% are illustrated in Figures 3 and 4, respectively.

Figure 1. The deviations of the outputs.

Figure 2. The curves of the control inputs.

Figure 3. The deviations of the outputs (feed flow increased by 10%).

Figure 4. The deviations of the outputs (feed composition increased by 10%).
5. Conclusions

A dynamic neural network based on-line nonlinear identifier for a binary distillation column is designed. The learning algorithm of the network weights is established in detail, which can guarantee the boundedness of the identification error. To deal with the modeling error, a nonlinear H∞ controller is synthesized based on the identified model. Simulation results demonstrate the effectiveness of the proposed control strategy.