Robust Control Using LMI Transformation and Neural-Based Identification for Regulating Singularly-Perturbed Reduced-Order Eigenvalue-Preserved Dynamic Systems

Anas N. Al-Rabadi
Computer Engineering Department, The University of Jordan, Amman, Jordan
1. Introduction
In control engineering, robust control is an area that explicitly deals with uncertainty in its
approach to the design of the system controller [7,10,24]. The methods of robust control are
designed to operate properly as long as disturbances or uncertain parameters are within a
compact set, where robust methods aim to accomplish robust performance and/or stability
in the presence of bounded modeling errors. A robust control policy is static, in contrast to an adaptive (dynamic) control policy: rather than adapting to measurements of variations, the controller is designed to function under the assumption that certain variables are unknown but, for example, bounded. An early example of a robust control method is high-gain feedback control, where the effect of any parameter variation is negligible provided the gain is sufficiently high.
The overall goal of a control system is to cause the output variable of a dynamic process to
follow a desired reference variable accurately. This complex objective can be achieved based
on a number of steps. A major one is to develop a mathematical description, called
dynamical model, of the process to be controlled [7,10,24]. This dynamical model is usually
accomplished using a set of differential equations that describe the dynamic behavior of the
system, which can be further represented in statespace using system matrices or in
transformspace using transfer functions [7,10,24].
In system modeling, sometimes it is required to identify some of the system parameters.
This objective may be achieved by the use of artificial neural networks (ANN), which are considered a new generation of information processing networks [5,15,17,28,29].
Artificial neural systems can be defined as physical cellular systems which have the
capability of acquiring, storing and utilizing experiential knowledge [15,29], where an ANN
consists of an interconnected group of basic processing elements called neurons that
perform summing operations and nonlinear function computations. Neurons are usually
organized in layers and forward connections, and computations are performed in a parallel
mode at all nodes and connections. Each connection is expressed by a numerical value
called the weight, where the conducted learning process of a neuron corresponds to the
changing of its corresponding weights.
Recent Advances in Robust Control – Novel Approaches and Design Methods
When dealing with system modeling and control analysis, there exist equations and
inequalities that require optimized solutions. An important expression which is used in
robust control is called linear matrix inequality (LMI) which is used to express specific
convex optimization problems for which there exist powerful numerical solvers [1,2,6].
The important LMI optimization technique originated with Lyapunov theory, which shows that the differential equation ẋ(t) = Ax(t) is stable if and only if there exists a positive-definite matrix [P] such that AᵀP + PA < 0 [6]. The requirement {P > 0, AᵀP + PA < 0} is known as the Lyapunov inequality on [P], which is a special case of an LMI. By picking any Q = Qᵀ > 0 and then solving the linear equation AᵀP + PA = −Q for the matrix [P], the solution [P] is guaranteed to be positive-definite if the given system is stable. The linear matrix inequalities that arise in system and control theory can generally be formulated as convex optimization problems that are amenable to computer solution using algorithms such as the ellipsoid algorithm [6].
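As an illustrative sketch (not part of the original derivation), the Lyapunov test above can be carried out numerically: pick Q = I, solve AᵀP + PA = −Q, and check that P is positive definite. The test matrix below is an arbitrary stable example, not a system from this chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A stable test matrix (eigenvalues -1 and -3) -- an illustrative choice.
A = np.array([[0.0, 1.0],
              [-3.0, -4.0]])

# Pick any Q = Q^T > 0 and solve A^T P + P A = -Q for P.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# If the system is stable, P is symmetric positive definite.
eigenvalues_P = np.linalg.eigvalsh(P)
print(np.all(eigenvalues_P > 0))  # True for a stable A
```

Note that `solve_continuous_lyapunov(a, q)` solves aX + Xaᵀ = q, so passing Aᵀ as the first argument yields exactly the Lyapunov equation above.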
In practical control design problems, the first step is to obtain a proper mathematical model
in order to examine the behavior of the system for the purpose of designing an appropriate
controller [1,2,3,4,5,7,8,9,10,11,12,13,14,16,17,19,20,21,22,24,25,26,27]. Sometimes, this
mathematical description involves a certain small parameter (i.e., perturbation). Neglecting
this small parameter results in simplifying the order of the designed controller by reducing
the order of the corresponding system [1,3,4,5,8,9,11,12,13,14,17,19,20,21,22,25,26]. A reduced
model can be obtained by neglecting the fast dynamics (i.e., nondominant eigenvalues) of
the system and focusing on the slow dynamics (i.e., dominant eigenvalues). This
simplification and reduction of system modeling leads to controller cost minimization
[7,10,13]. An example is modern integrated circuits (ICs), where increasing package density forces developers to include side effects. Since these ICs are often modeled by complex RLC-based circuits and systems, detailed modeling of the original system would be very demanding computationally [16]. In control systems, model reduction is an important issue because feedback controllers do not usually account for all of the dynamics of the functioning system [4,5,17].
The main results in this research include the introduction of a new layered method of intelligent control that can be used to robustly control the required system dynamics, where the new control hierarchy uses a recurrent supervised neural network to identify certain parameters of the transformed system matrix [Ā], and the corresponding LMI is used to determine the permutation matrix [P] so that a complete system transformation {[B̄], [C̄], [D̄]} is performed. The transformed model is then reduced using the method of singular perturbation, and various feedback control schemes are applied to enhance the corresponding system performance. It is shown that the new hierarchical control method simplifies the model of the dynamical systems and therefore uses simpler controllers that produce the needed system response for specific performance enhancements. Figure 1 illustrates the layout of the utilized new control method. Layer 1 shows the continuous modeling of the dynamical system. Layer 2 shows the discrete system model. Layer 3 illustrates the neural network identification step. Layer 4 presents the undiscretization of the transformed system model. Layer 5 includes the steps for model order reduction with and without using LMI. Finally, Layer 6 presents various feedback control methods that are used in this research.
[Figure 1: block diagram of the hierarchy — Continuous Dynamic System {[A], [B], [C], [D]} → System Discretization → Neural-Based System Transformation {[Â], [B̂]} / Neural-Based State Transformation [Ā] → LMI-Based Permutation Matrix [P] → Complete System Transformation {[B̄], [C̄], [D̄]} → System Undiscretization (continuous form) → Model Order Reduction → Closed-Loop Feedback Control via PID, State Feedback (Pole Placement), State Feedback (LQR Optimal Control), and Output Feedback (LQR Optimal Control).]

Fig. 1. The newly utilized hierarchical control method.
While a similar hierarchical method of ANN-based identification and LMI-based transformation has previously been utilized in several applications, such as the reduced-order electronic Buck switching-mode power converter [1] and reduced-order quantum computation systems [2], with relatively simple state feedback controller implementations, the method presented in this work further shows the wide applicability of the introduced intelligent control technique for dynamical systems using a broad spectrum of control methods: (a) PID-based control, (b) state feedback control using (1) pole-placement-based control and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control.
Section 2 presents background on recurrent supervised neural networks, linear matrix inequality, system model transformation using neural identification, and model order reduction. Section 3 presents a detailed illustration of the recurrent neural network identification with the LMI optimization techniques for system model order reduction. A practical implementation of the neural network identification, with the associated comparative results with and without the use of LMI optimization for dynamical system model order reduction, is presented in Section 4. Section 5 presents the application of feedback control to the reduced model using PID control, state feedback control using pole assignment, state feedback control using LQR optimal control, and output feedback control. Conclusions and future work are presented in Section 6.
2. Background
The following subsections provide important background on artificial supervised recurrent neural networks, system transformation without using LMI, state transformation using LMI, and model order reduction, which can be used for the robust control of dynamic systems and will be used in the later Sections 3-5.
2.1 Artificial recurrent supervised neural networks
The ANN is an emulation of the biological neural system [15,29]. The basic model of the neuron is established by emulating the functionality of a biological neuron, which is the basic signaling unit of the nervous system. The internal process of a neuron may be mathematically modeled as shown in Figure 2 [15,29].
Fig. 2. A mathematical model of the artificial neuron.
As seen in Figure 2, the internal activity of the neuron is produced as:

v_k = Σ_{j=1}^{p} w_kj x_j    (1)
(1)
In supervised learning, it is assumed that at each instant of time when the input is applied, the
desired response of the system is available [15,29]. The difference between the actual and the
desired response represents an error measure which is used to correct the network parameters
externally. Since the adjustable weights are initially assumed, the error measure may be used
to adapt the network's weight matrix [W]. A set of input and output patterns, called a training set, is required for this learning mode, where the training algorithm typically used identifies the direction of the negative error gradient and reduces the error accordingly [15,29].
The supervised recurrent neural network used for the identification in this research is based
on an approximation of the method of steepest descent [15,28,29]. The network tries to
match the output of certain neurons to the desired values of the system output at a specific
instant of time. Consider a network consisting of a total of N neurons with M external input
connections, as shown in Figure 3, for a 2nd-order system with two neurons and one external input. The variable g(k) denotes the (M x 1) external input vector which is applied to the
network at discrete time k, the variable y(k + 1) denotes the corresponding (N x 1) vector of individual neuron outputs produced one step later at time (k + 1), and the input vector g(k) and one-step delayed output vector y(k) are concatenated to form the ((M + N) x 1) vector u(k), whose i-th element is denoted by u_i(k). With Λ denoting the set of indices i for which g_i(k) is an external input, and β denoting the set of indices i for which u_i(k) is the output of a neuron (which is y_i(k)), the following equation is provided:

u_i(k) = g_i(k) if i ∈ Λ,  and  u_i(k) = y_i(k) if i ∈ β
Fig. 3. The utilized 2nd-order recurrent neural network architecture, where the identified matrices are given by:

Ā_d = [ A11  A12 ; A21  A22 ],  B̄_d = [ B11 ; B21 ]

and [W] = [Ā_d  B̄_d].
The (N x (M + N)) recurrent weight matrix of the network is represented by the variable [W]. The net internal activity of neuron j at time k is given by:

v_j(k) = Σ_{i ∈ Λ∪β} w_ji(k) u_i(k)

where Λ∪β is the union of sets Λ and β. At the next time step (k + 1), the output of neuron j is computed by passing v_j(k) through the nonlinearity φ(.), thus obtaining:

y_j(k + 1) = φ(v_j(k))
The derivation of the recurrent algorithm starts by using d_j(k) to denote the desired (target) response of neuron j at time k, and ς(k) to denote the set of neurons that are chosen to provide externally reachable outputs. A time-varying (N x 1) error vector e(k) is defined whose j-th element is given by the following relationship:
e_j(k) = d_j(k) − y_j(k) if j ∈ ς(k), and e_j(k) = 0 otherwise
The objective is to minimize the cost function E_total, which is obtained by:

E_total = Σ_k E(k), where E(k) = (1/2) Σ_{j ∈ ς} e_j²(k)
To accomplish this objective, the method of steepest descent, which requires knowledge of the gradient matrix, is used:

∇_W E_total = ∂E_total/∂W = Σ_k ∂E(k)/∂W = Σ_k ∇_W E(k)
where ∇_W E(k) is the gradient of E(k) with respect to the weight matrix [W]. In order to train the recurrent network in real time, the instantaneous estimate of the gradient ∇_W E(k) is used. For the case of a particular weight w_mℓ(k), the incremental change Δw_mℓ(k) made at time k is defined as:

Δw_mℓ(k) = −η ∂E(k)/∂w_mℓ(k)
where η is the learning-rate parameter. Therefore:

∂E(k)/∂w_mℓ(k) = Σ_{j ∈ ς} e_j(k) ∂e_j(k)/∂w_mℓ(k) = −Σ_{j ∈ ς} e_j(k) ∂y_j(k)/∂w_mℓ(k)
To determine the partial derivative ∂y_j(k)/∂w_mℓ(k), the network dynamics are derived. This derivation is obtained by using the chain rule, which provides the following equation:

∂y_j(k + 1)/∂w_mℓ(k) = [∂y_j(k + 1)/∂v_j(k)] [∂v_j(k)/∂w_mℓ(k)] = φ̇(v_j(k)) ∂v_j(k)/∂w_mℓ(k)

where φ̇(v_j(k)) = ∂φ(v_j(k))/∂v_j(k).
Differentiating the net internal activity of neuron j with respect to w_mℓ(k) yields:

∂v_j(k)/∂w_mℓ(k) = Σ_{i ∈ Λ∪β} ∂[w_ji(k) u_i(k)]/∂w_mℓ(k) = Σ_{i ∈ Λ∪β} [w_ji(k) ∂u_i(k)/∂w_mℓ(k) + (∂w_ji(k)/∂w_mℓ(k)) u_i(k)]

where ∂w_ji(k)/∂w_mℓ(k) equals "1" only when j = m and i = ℓ, and "0" otherwise. Thus:
∂v_j(k)/∂w_mℓ(k) = Σ_{i ∈ Λ∪β} w_ji(k) ∂u_i(k)/∂w_mℓ(k) + δ_mj u_ℓ(k)

where δ_mj is a Kronecker delta equal to "1" when j = m and "0" otherwise, and:
∂u_i(k)/∂w_mℓ(k) = 0 if i ∈ Λ,  and  ∂u_i(k)/∂w_mℓ(k) = ∂y_i(k)/∂w_mℓ(k) if i ∈ β
Having those equations provides that:

∂y_j(k + 1)/∂w_mℓ(k) = φ̇(v_j(k)) [ Σ_{i ∈ β} w_ji(k) ∂y_i(k)/∂w_mℓ(k) + δ_mj u_ℓ(k) ]
The initial state of the network at time (k = 0) is assumed to be zero, as follows:

∂y_j(0)/∂w_mℓ(0) = 0, for {j ∈ β, m ∈ β, ℓ ∈ Λ∪β}.
The dynamical system is described by the following triply-indexed set of variables π^j_mℓ:

π^j_mℓ(k) = ∂y_j(k)/∂w_mℓ(k)
For every time step k and all appropriate j, m and ℓ, the system dynamics are controlled by:

π^j_mℓ(k + 1) = φ̇(v_j(k)) [ Σ_{i ∈ β} w_ji(k) π^i_mℓ(k) + δ_mj u_ℓ(k) ], with π^j_mℓ(0) = 0.
The values of π^j_mℓ(k) and the error signal e_j(k) are used to compute the corresponding weight changes:

Δw_mℓ(k) = η Σ_{j ∈ ς} e_j(k) π^j_mℓ(k)    (2)
Using the weight changes, the updated weight w_mℓ(k + 1) is calculated as follows:

w_mℓ(k + 1) = w_mℓ(k) + Δw_mℓ(k)    (3)
Repeating this computation procedure minimizes the cost function, and thus the objective is achieved. Given the many advantages of the neural network, it is used for the important step of parameter identification in model transformation for the purpose of model order reduction, as will be shown in the following section.
2.2 Model transformation and linear matrix inequality
In this section, a detailed illustration of system transformation using LMI optimization will be presented. Consider the dynamical system:

ẋ(t) = Ax(t) + Bu(t)    (4)
y(t) = Cx(t) + Du(t)    (5)

The state-space system representation of Equations (4)-(5) may be described by the block diagram shown in Figure 4.
Fig. 4. Block diagram for the state-space system representation.
In order to determine the transformed [A] matrix, which is [Ā], the discrete zero-input response is obtained. This is achieved by providing the system with some initial state values and setting the system input to zero (u(k) = 0). Hence, the discrete system of Equations (4)-(5), with the initial condition x(0) = x₀, becomes:

x(k + 1) = A_d x(k)    (6)
y(k) = x(k)    (7)
We need x(k) as an ANN target to train the network to obtain the needed parameters in [Ā_d] such that the system output will be the same for [A_d] and [Ā_d]. Hence, simulating this system provides the state response corresponding to the initial values, with only the [A_d] matrix being used. Once the input-output data is obtained, transforming the [A_d] matrix is achieved using the ANN training, as will be explained in Section 3. The identified transformed [Ā_d] matrix is then converted back to the continuous form, which in general (with all real eigenvalues) takes the following form:
Ā = [ A_r  A_c ]   →   Ā = [ λ₁  Ā₁₂  …  Ā₁ₙ ]
    [ 0    A_o ]            [ 0   λ₂   …  Ā₂ₙ ]
                            [ ⋮         ⋱   ⋮  ]
                            [ 0   0    …   λₙ ]    (8)
where λ_i represents the system eigenvalues. This is an upper triangular matrix that preserves the eigenvalues by (1) placing the original eigenvalues on the diagonal and (2) finding the elements Ā_ij in the upper triangle. This upper triangular form produces the same eigenvalues, so that the fast-dynamics eigenvalues can be eliminated and the slow-dynamics eigenvalues sustained through model order reduction, as will be shown in later sections.
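As a side note (an illustration, not a step of the chapter's method), one standard numerical route to an upper-triangular, eigenvalue-preserving form like Equation (8) is the real Schur decomposition; the sketch below uses SciPy on an arbitrary test matrix whose eigenvalues are all real:

```python
import numpy as np
from scipy.linalg import schur

# An illustrative matrix with real, well-separated eigenvalues (not from the chapter).
A = np.array([[-2.0, 1.0, 0.0],
              [0.5, -5.0, 1.0],
              [0.0, 0.5, -20.0]])

# Real Schur form: A = Z T Z^T, with Z orthogonal; for real eigenvalues,
# T is upper triangular with the eigenvalues of A on its diagonal.
T, Z = schur(A, output='real')

print(np.allclose(Z @ T @ Z.T, A))   # the similarity reproduces A
print(np.sort(np.diag(T)))           # eigenvalues of A, read off the diagonal
```

Unlike the chapter's ANN/LMI route, this gives an orthogonal (rather than permutation-structured) transformation, but it demonstrates the same eigenvalue-preserving triangular target form.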
Having the [A] and [Ā] matrices, the permutation matrix [P] is determined using the LMI optimization technique, as will be illustrated in later sections. The complete system transformation can be achieved as follows. Assuming that x̄ = P⁻¹x, the system of Equations (4)-(5) can be rewritten as:

P dx̄(t)/dt = AP x̄(t) + Bu(t),  ȳ(t) = CP x̄(t) + Du(t), where ȳ(t) = y(t).
Premultiplying the first equation above by [P⁻¹], one obtains:

P⁻¹P dx̄(t)/dt = P⁻¹AP x̄(t) + P⁻¹Bu(t),  ȳ(t) = CP x̄(t) + Du(t)

which yields the following transformed model:

dx̄(t)/dt = Ā x̄(t) + B̄ u(t)    (9)
ȳ(t) = C̄ x̄(t) + D̄ u(t)    (10)

where the transformed system matrices are given by:

Ā = P⁻¹AP    (11)
B̄ = P⁻¹B    (12)
C̄ = CP    (13)
D̄ = D    (14)
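A minimal numerical sketch of Equations (11)-(14), with a toy system and an arbitrary invertible [P] (both chosen here purely for illustration), might read:

```python
import numpy as np

# Toy system matrices and an invertible transformation matrix [P]
# (illustrative values, not from the chapter).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
P = np.array([[1.0, 1.0], [0.0, 1.0]])

P_inv = np.linalg.inv(P)
A_bar = P_inv @ A @ P   # Equation (11)
B_bar = P_inv @ B       # Equation (12)
C_bar = C @ P           # Equation (13)
D_bar = D               # Equation (14)

# A similarity transformation preserves the eigenvalues of [A].
print(np.sort(np.linalg.eigvals(A_bar)))   # same eigenvalues as A: -2 and -1
```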
Transforming the system matrix [A] into the form shown in Equation (8) can be achieved based on the following definition [18].

Definition. A matrix A ∈ M_n is called reducible if either:
a. n = 1 and A = 0; or
b. n ≥ 2, there is a permutation matrix P ∈ M_n, and there is some integer r with 1 ≤ r ≤ n − 1 such that:

P⁻¹AP = [ X  Y ; 0  Z ]    (15)

where X ∈ M_{r,r}, Z ∈ M_{n−r,n−r}, Y ∈ M_{r,n−r}, and 0 ∈ M_{n−r,r} is a zero matrix.
The attractive features of the permutation matrix [P], such as being (1) orthogonal and (2) invertible, make this transformation easy to carry out. However, the permutation matrix structure narrows the applicability of this method to a limited category of applications. A form of similarity transformation can be used to correct this problem for {f: R^{n×n} → R^{n×n}}, where f is a linear operator defined by f(A) = P⁻¹AP [18]. Hence, based on [A] and [Ā], the corresponding LMI is used to obtain the transformation matrix [P], and thus the optimization problem is cast as follows:

min_P ‖P − P_o‖  subject to  ‖P⁻¹AP − Ā‖ < ε    (16)
which can be written in an LMI equivalent form as:

min_S trace(S)  subject to:

[ S          (P − P_o)ᵀ ]
[ (P − P_o)  I          ] > 0

[ ε²I              (P⁻¹AP − Ā)ᵀ ]
[ (P⁻¹AP − Ā)      I            ] > 0    (17)
where S is a symmetric slack matrix [6].
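Since the matching condition P⁻¹AP = Ā is equivalent to the linear equation AP = PĀ, a simplified substitute for the LMI machinery (an assumption of this sketch, applicable when the eigenvalues are real and distinct, and not the chapter's actual solver) is to build [P] directly from matched eigendecompositions of [A] and [Ā]:

```python
import numpy as np

# Illustrative matrices: A and an upper-triangular Abar sharing A's eigenvalues.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # eigenvalues -1 and -2
Abar = np.array([[-1.0, 1.0],
                 [0.0, -2.0]])   # same eigenvalues, in triangular form

# Diagonalize both matrices and match the eigenvalue ordering.
wA, V = np.linalg.eig(A)
wB, U = np.linalg.eig(Abar)
V = V[:, np.argsort(wA)]
U = U[:, np.argsort(wB)]

# With A = V diag(w) V^-1 and Abar = U diag(w) U^-1 (same ordering of w),
# the matrix P = V U^-1 satisfies P^-1 A P = Abar.
P = V @ np.linalg.inv(U)
print(np.allclose(np.linalg.inv(P) @ A @ P, Abar))  # True
```

The LMI formulation (16)-(17) additionally keeps [P] close to a prior P_o and tolerates a mismatch of size ε, which this direct construction does not attempt.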
2.3 System transformation using neural identification
A different transformation can be performed based on the use of the recurrent ANN while
preserving the eigenvalues to be a subset of the original system. To achieve this goal, the
upper triangular block structure produced by the permutation matrix, as shown in Equation
(15), is used. However, based on the implementation of the ANN, finding the permutation
matrix [P] does not have to be performed, but instead [X] and [Z] in Equation (15) will
contain the system eigenvalues and [Y] in Equation (15) will be estimated directly using the
corresponding ANN techniques. Hence, the transformation is obtained and the reduction is
then achieved. Therefore, another way to obtain a transformed model that preserves the
eigenvalues of the reduced model as a subset of the original system is by using ANN
training without the LMI optimization technique. This may be achieved based on the
assumption that the states are reachable and measurable. Hence, the recurrent ANN can identify the [Â_d] and [B̂_d] matrices for a given input signal, as illustrated in Figure 3. The ANN identification would lead to the following [Â_d] and [B̂_d] transformations, which (in the case of all real eigenvalues) construct the weight matrix [W] as follows:
[W] = [Â_d  B̂_d]  →  Â = [ λ₁  Â₁₂  …  Â₁ₙ ],  B̂ = [ b̂₁ ]
                          [ 0   λ₂   …  Â₂ₙ ]       [ b̂₂ ]
                          [ ⋮         ⋱   ⋮  ]       [ ⋮  ]
                          [ 0   0    …   λₙ ]       [ b̂ₙ ]
where the eigenvalues are selected as a subset of the original system eigenvalues.
2.4 Model order reduction
Linear time-invariant (LTI) models of many physical systems have fast and slow dynamics, and may be referred to as singularly perturbed systems [19]. Neglecting the fast dynamics of a singularly perturbed system provides a reduced (i.e., slow) model. This gives the advantage of designing simpler, lower-dimensionality reduced-order controllers that are based on the reduced-model information.
To show the formulation of a reduced-order system model, consider the singularly perturbed system [9]:
ẋ(t) = A11 x(t) + A12 ξ(t) + B1 u(t),  x(0) = x₀    (18)
εξ̇(t) = A21 x(t) + A22 ξ(t) + B2 u(t),  ξ(0) = ξ₀    (19)
y(t) = C1 x(t) + C2 ξ(t)    (20)
where x ∈ R^{m1} and ξ ∈ R^{m2} are the slow and fast state variables, respectively, u ∈ R^{n1} and y ∈ R^{n2} are the input and output vectors, respectively, {[A_ii], [B_i], [C_i]} are constant matrices of appropriate dimensions with i ∈ {1, 2}, and ε is a small positive constant. The singularly perturbed system in Equations (18)-(20) is simplified by setting ε = 0 [3,14,27]. In
doing so, we neglect the fast dynamics of the system and assume that the state variables ξ have reached quasi-steady state. Hence, setting ε = 0 in Equation (19), with the assumption that [A22] is nonsingular, produces:

ξ(t) = −A22⁻¹ A21 x_r(t) − A22⁻¹ B2 u(t)    (21)
where the index r denotes the remained or reduced model. Substituting Equation (21) into Equations (18)-(20) yields the following reduced-order model:

ẋ_r(t) = A_r x_r(t) + B_r u(t)    (22)
y_r(t) = C_r x_r(t) + D_r u(t)    (23)

where:

A_r = A11 − A12 A22⁻¹ A21,  B_r = B1 − A12 A22⁻¹ B2,  C_r = C1 − C2 A22⁻¹ A21,  D_r = −C2 A22⁻¹ B2.
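A small numerical sketch of the reduction formulas in Equations (22)-(23), using an illustrative partitioned system with one slow and one fast state (all values chosen here for demonstration only):

```python
import numpy as np

# Illustrative partitioned system: one slow state x and one fast state xi.
A11 = np.array([[-1.0]]); A12 = np.array([[0.5]])
A21 = np.array([[1.0]]);  A22 = np.array([[-10.0]])
B1 = np.array([[1.0]]);   B2 = np.array([[2.0]])
C1 = np.array([[1.0]]);   C2 = np.array([[0.2]])

A22_inv = np.linalg.inv(A22)

# Reduced (slow) model per Equations (22)-(23):
Ar = A11 - A12 @ A22_inv @ A21
Br = B1 - A12 @ A22_inv @ B2
Cr = C1 - C2 @ A22_inv @ A21
Dr = -C2 @ A22_inv @ B2

print(Ar, Br, Cr, Dr)
```

Note how the reduced model acquires a direct feedthrough term Dr even though the original system (20) has none; it comes from the fast states instantaneously tracking the input.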
3. Neural network identification with LMI optimization for the system model order reduction
In this work, it is our objective to search for a similarity transformation that can be used to decouple a preselected eigenvalue set from the system matrix [A]. To achieve this objective, the neural network is trained to identify the transformed discrete system matrix [Ā_d] [1,2,15,29]. For the system of Equations (18)-(20), the discrete model of the dynamical system is obtained as:

x(k + 1) = A_d x(k) + B_d u(k)    (24)
y(k) = C_d x(k) + D_d u(k)    (25)
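One common way to obtain the discrete model (24)-(25) from the continuous system is zero-order-hold discretization; the following sketch (with an illustrative system and sample time, both assumed here rather than taken from the chapter) uses SciPy:

```python
import numpy as np
from scipy.signal import cont2discrete

# Illustrative continuous-time system and sampling period (assumed values).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
Ts = 0.1  # sample time in seconds

# Zero-order hold: x(k+1) = Ad x(k) + Bd u(k), y(k) = Cd x(k) + Dd u(k)
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method='zoh')

# For ZOH, Ad = expm(A*Ts), so its eigenvalues are exp(lambda_i * Ts).
print(np.sort(np.linalg.eigvals(Ad)))
```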
The identified discrete model can be written in a detailed form (as was shown in Figure 3) as follows:

[ x̄₁(k + 1) ]   [ Ā11  Ā12 ] [ x̄₁(k) ]   [ B̄11 ]
[ x̄₂(k + 1) ] = [ Ā21  Ā22 ] [ x̄₂(k) ] + [ B̄21 ] u(k)    (26)

ȳ(k) = [ x̄₁(k) ; x̄₂(k) ]    (27)

where k is the time index, and the detailed matrix elements of Equations (26)-(27) were shown in Figure 3 in the previous section.
The recurrent ANN presented in Section 2.1 can be summarized by defining Λ as the set of indices i for which g_i(k) is an external input, defining β as the set of indices i for which y_i(k) is an internal input or a neuron output, and defining u_i(k) as the combination of the internal and external inputs for which i ∈ β∪Λ. Using this setting, training the ANN depends on the internal activity of each neuron, which is given by:

v_j(k) = Σ_{i ∈ Λ∪β} w_ji(k) u_i(k)    (28)
where w_ji is the weight representing an element in the system matrix or input matrix for j ∈ β and i ∈ β∪Λ, such that [W] = [Ā_d  B̄_d]. At the next time step (k + 1), the output (internal input) of neuron j is computed by passing the activity through the nonlinearity φ(.) as follows:

x̄_j(k + 1) = φ(v_j(k))    (29)

With these equations, based on an approximation of the method of steepest descent, the ANN identifies the system matrix [A_d], as illustrated in Equation (6) for the zero-input response. That is, an error can be obtained by matching a true state output with a neuron output as follows:

e_j(k) = x_j(k) − x̄_j(k)
Now, the objective is to minimize the cost function given by:

E_total = Σ_k E(k)  and  E(k) = (1/2) Σ_{j ∈ ς} e_j²(k)

where ς denotes the set of indices j for the output of the neuron structure. This cost function is minimized by estimating the instantaneous gradient of E(k) with respect to the weight matrix [W] and then updating [W] in the negative direction of this gradient [15,29]. In steps, this may proceed as follows:
- Initialize the weights [W] with a set of uniformly distributed random numbers. Starting at the instant (k = 0), use Equations (28)-(29) to compute the output values of the N neurons (where N = |β|).
- For every time step k and all j ∈ β, m ∈ β and ℓ ∈ β∪Λ, compute the dynamics of the system, which are governed by the triply-indexed set of variables:

π^j_mℓ(k + 1) = φ̇(v_j(k)) [ Σ_{i ∈ β} w_ji(k) π^i_mℓ(k) + δ_mj u_ℓ(k) ]

with initial conditions π^j_mℓ(0) = 0, where δ_mj is given by ∂w_ji(k)/∂w_mℓ(k), which is equal to "1" only when {j = m, i = ℓ} and otherwise "0". Notice that, for the special case of a sigmoidal nonlinearity in the form of a logistic function, the derivative φ̇(·) is given by φ̇(v_j(k)) = y_j(k + 1)[1 − y_j(k + 1)].
- Compute the weight changes corresponding to the error signal and system dynamics:

Δw_mℓ(k) = η Σ_{j ∈ ς} e_j(k) π^j_mℓ(k)    (30)

- Update the weights in accordance with:

w_mℓ(k + 1) = w_mℓ(k) + Δw_mℓ(k)    (31)

- Repeat the computation until the desired identification is achieved.
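The steps above can be sketched compactly. The following is a minimal training loop in the spirit of real-time recurrent learning, hedged as an illustration: the 2-state linear system, the learning rate, and the identity activation (a common simplification when identifying a linear [A_d]) are all assumptions of this sketch, not the chapter's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# True discrete system matrix to identify (illustrative, stable).
Ad_true = np.array([[0.9, 0.1],
                    [0.0, 0.8]])
N = 2                                      # number of neurons = number of states

W = rng.uniform(-0.5, 0.5, size=(N, N))    # weights to learn (candidate Ad)
eta = 0.05                                 # learning rate (assumed)

for epoch in range(2000):
    x = rng.uniform(-1.0, 1.0, size=N)     # random initial state (zero input)
    for k in range(10):
        x_next = Ad_true @ x               # true next state: training target
        y = W @ x                          # neuron outputs, identity activation
        e = x_next - y                     # error e_j(k)
        # With identity activation and the true states fed back as inputs,
        # pi^j_{m,l}(k) reduces to delta_{mj} * x_l(k), so the update of
        # Equations (30)-(31) collapses to this outer-product rule.
        W += eta * np.outer(e, x)
        x = x_next

print(np.allclose(W, Ad_true, atol=1e-3))  # True: the weights converge to Ad
```

With a sigmoidal activation or with the network's own outputs fed back, the full π recursion of the step list above would be needed instead of the collapsed update.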
As illustrated in Equations (6)-(7), for the purpose of estimating only the transformed system matrix [Ā_d], the training is based on the zero-input response. Once the training is completed, the obtained weight matrix [W] is the discrete identified transformed system matrix [Ā_d]. Transforming the identified system back to the continuous form yields the desired continuous transformed system matrix [Ā]. Using the LMI optimization technique illustrated in Section 2.2, the permutation matrix [P] is then determined. Hence, a complete system transformation, as shown in Equations (9)-(10), is achieved.
For the model order reduction, the system in Equations (9)-(10) can be written as:

[ dx̄_r(t)/dt ]   [ A_r  A_c ] [ x̄_r(t) ]   [ B̄_r ]
[ dx̄_o(t)/dt ] = [ 0    A_o ] [ x̄_o(t) ] + [ B̄_o ] u(t)    (32)

[ ȳ_r(t) ]                  [ x̄_r(t) ]   [ D̄_r ]
[ ȳ_o(t) ] = [ C̄_r  C̄_o ]  [ x̄_o(t) ] + [ D̄_o ] u(t)    (33)
This system transformation enables us to decouple the original system into retained (r) and omitted (o) eigenvalues. The retained eigenvalues are the dominant eigenvalues that produce the slow dynamics, and the omitted eigenvalues are the non-dominant eigenvalues that produce the fast dynamics. Equation (32) may be written as:

dx̄_r(t)/dt = A_r x̄_r(t) + A_c x̄_o(t) + B̄_r u(t)  and  dx̄_o(t)/dt = A_o x̄_o(t) + B̄_o u(t)

The coupling term A_c x̄_o(t) may be compensated for by solving for x̄_o(t) in the second equation above, setting dx̄_o(t)/dt to zero using the singular perturbation method (by setting ε = 0). Performing this yields:

x̄_o(t) = −A_o⁻¹ B̄_o u(t)    (34)
Using x̄_o(t), we get the reduced-order model given by:

dx̄_r(t)/dt = A_r x̄_r(t) + [−A_c A_o⁻¹ B̄_o + B̄_r] u(t)    (35)
ȳ(t) = C̄_r x̄_r(t) + [−C̄_o A_o⁻¹ B̄_o + D̄] u(t)    (36)

Hence, the overall reduced-order model may be represented by:

dx̄_r(t)/dt = A_or x̄_r(t) + B_or u(t)    (37)
ȳ(t) = C_or x̄_r(t) + D_or u(t)    (38)

where the details of the {[A_or], [B_or], [C_or], [D_or]} overall reduced matrices are shown in Equations (35)-(36), respectively.
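To make Equations (35)-(38) concrete, here is a small numeric sketch with an illustrative transformed system of one retained and one omitted (fast) state; all values are assumed for demonstration only:

```python
import numpy as np

# Illustrative transformed (upper block triangular) system: one retained
# state (r) and one omitted fast state (o). Values are for demonstration.
Ar = np.array([[-1.0]]); Ac = np.array([[0.3]]); Ao = np.array([[-20.0]])
Br = np.array([[1.0]]);  Bo = np.array([[4.0]])
Cr = np.array([[1.0]]);  Co = np.array([[0.5]])
D = np.array([[0.0]])

Ao_inv = np.linalg.inv(Ao)

# Overall reduced model per Equations (35)-(38):
Aor = Ar
Bor = -Ac @ Ao_inv @ Bo + Br
Cor = Cr
Dor = -Co @ Ao_inv @ Bo + D

print(Bor, Dor)
```

Because the transformed system is block triangular, only the input and feedthrough matrices are corrected by the omitted fast states; the retained dynamics A_r (and its eigenvalues) pass through unchanged.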
4. Examples for the dynamic system order reduction using neural
identification
The following subsections present the implementation of the newly proposed method of system modeling using a supervised ANN, with and without using LMI, together with model order reduction, which can be directly utilized for the robust control of dynamic systems. The presented simulations were tested on a PC platform with hardware specifications of an Intel Pentium 4 CPU at 2.40 GHz and 504 MB of RAM, and software specifications of the MS Windows XP 2002 OS and the Matlab 6.5 simulator.
4.1 Model reduction using neural-based state transformation and LMI-based complete system transformation
The following example illustrates the idea of dynamic system model order reduction using LMI, in comparison to model order reduction without using LMI. Let us consider the system of a high-performance tape transport, which is illustrated in Figure 5. As seen in Figure 5, the system is designed with a small capstan to pull the tape past the read/write heads, with the take-up reels turned by DC motors [10].
Fig. 5. The used tape drive system: (a) a front view of a typical tape drive mechanism, and (b) a schematic control model.
As can be seen, in static equilibrium, the tape tension equals the vacuum force (T_o = F) and the torque from the motor equals the torque on the capstan (K_t i_o = r_1 T_o), where T_o is the tape tension at the read/write head at equilibrium, F is the constant force (i.e., the tape tension for the vacuum column), K_t is the motor torque constant, i_o is the equilibrium motor current, and r_1 is the radius of the capstan take-up wheel.
The system variables are defined as deviations from this equilibrium, and the system equations of motion are given as follows:

J_1 (dω_1/dt) + β_1 ω_1 = -r_1 T_1 + K_t i ,    ẋ_1 = r_1 ω_1

L (di/dt) + R i + K_e ω_1 = e ,    ẋ_2 = r_2 ω_2

J_2 (dω_2/dt) + β_2 ω_2 + r_2 T_2 = 0

T_1 = K_1 (x_3 - x_1) + D_1 (ẋ_3 - ẋ_1)

T_2 = K_2 (x_2 - x_3) + D_2 (ẋ_2 - ẋ_3)

x_1 = r_1 θ_1 ,    x_2 = r_2 θ_2 ,    x_3 = (x_1 + x_2)/2
where D_{1,2} is the damping in the tape-stretch motion, e is the applied input voltage (V), i is the current into the capstan motor, J_1 is the combined inertia of the wheel and take-up motor, J_2 is the inertia of the idler, K_{1,2} is the spring constant in the tape-stretch motion, K_e is the electric constant of the motor, K_t is the torque constant of the motor, L is the armature inductance, R is the armature resistance, r_1 is the radius of the take-up wheel, r_2 is the radius of the tape on the idler, T is the tape tension at the read/write head, x_3 is the position of the tape at the head, ẋ_3 is the velocity of the tape at the head, β_1 is the viscous friction at the take-up wheel, β_2 is the viscous friction at the wheel, θ_1 is the angular displacement of the capstan, θ_2 is the tachometer shaft angle, ω_1 is the speed of the drive wheel (θ̇_1), and ω_2 is the output speed measured by the tachometer (θ̇_2).
The state-space form is derived from the system equations, where there is one input, the applied voltage; three outputs, which are (1) the tape position at the head, (2) the tape tension, and (3) the tape position at the wheel; and five states, which are (1) the tape position at the air bearing, (2) the drive-wheel speed, (3) the tape position at the wheel, (4) the tachometer output speed, and (5) the capstan motor speed. The following subsections will present the simulation results for the investigation of different system cases using transformations with and without utilizing the LMI optimization technique.
4.1.1 System transformation using neural identification without utilizing linear matrix inequality
This subsection presents simulation results for system transformation using ANN-based identification without using LMI.
Case #1. Let us consider the following case of the tape transport (matrix rows are written MATLAB-style, separated by semicolons):

ẋ(t) = [0 2 0 0 0; 1.1 1.35 1.1 3.1 0.75; 0 0 0 5 0; 1.35 1.4 2.4 11.4 0; 0 0.03 0 0 10] x(t) + [0; 0; 0; 0; 1] u(t),

y(t) = [0 0 1 0 0; 0.5 0 0.5 0 0; -0.2 -0.2 0.2 0.2 0] x(t)
The five eigenvalues are {10.5772, 9.999, 0.9814, 0.5962 ± j0.8702}, where two eigenvalues are complex and three are real; thus, since (1) not all the eigenvalues are complex and (2) the existing real eigenvalues produce the fast dynamics that we need to eliminate, model order reduction can be applied. As can be seen, two real eigenvalues produce fast dynamics {10.5772, 9.999} and one real eigenvalue produces slow dynamics {0.9814}. In order to obtain the reduced model, the reduction based on the identification of the input matrix [B̂] and the transformed system matrix [Â] was performed. This identification is achieved utilizing the recurrent ANN.
By discretizing the above system with a sampling time T_s = 0.1 sec., using a step input with learning time T_l = 300 sec., and then training the ANN for the input/output data with a learning rate η = 0.005 and with initial weights w = [[Â_d] [B̂_d]] given as:

w = [0.0059 0.0360 0.0003 0.0204 0.0307 0.0499; 0.0283 0.0243 0.0445 0.0302 0.0257 0.0482; 0.0359 0.0222 0.0309 0.0294 0.0405 0.0088; 0.0058 0.0212 0.0225 0.0273 0.0079 0.0152; 0.0295 0.0235 0.0474 0.0373 0.0158 0.0168]
produces the transformed model for the system and input matrices, [Â] and [B̂], as follows:

ẋ(t) = [0.5967 0.8701 0.1041 0.2710 0.4114; 0.8701 0.5967 0.8034 0.4520 0.3375; 0 0 0.9809 0.4962 0.4680; 0 0 0 9.9985 0.0146; 0 0 0 0 10.5764] x(t) + [0.1414; 0.0974; 0.1307; 0.0011; 1.0107] u(t)

y(t) = [0 0 1 0 0; 0.5 0 0.5 0 0; -0.2 -0.2 0.2 0.2 0] x(t)
As observed, all of the system eigenvalues have been preserved in this transformed model, with a small difference due to discretization. Using the singular perturbation technique, the following reduced 3rd-order model is obtained:

ẋ(t) = [0.5967 0.8701 0.1041; 0.8701 0.5967 0.8034; 0 0 0.9809] x(t) + [0.1021; 0.0652; 0.0860] u(t)

y(t) = [0 0 1; 0.5 0 0.5; -0.2 -0.2 0.2] x(t) + [0; 0; 0] u(t)
It is also observed in the above model that the reduced order model has preserved its eigenvalues {0.9809, 0.5967 ± j0.8701}, which are a subset of the original system's, while the reduced order model obtained using singular perturbation without system transformation has produced different eigenvalues {0.8283, 0.5980 ± j0.9304}.
Evaluations of the reduced order models (transformed and nontransformed) were obtained
by simulating both systems for a step input. Simulation results are shown in Figure 6.
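The eigenvalue-preservation property can be checked numerically: once a similarity transformation makes the system matrix (block) upper-triangular with the slow modes in the leading block, singular-perturbation reduction keeps the slow eigenvalues exactly, whereas reducing the raw model does not. Below is a small Python/NumPy sketch with assumed matrix values — not the chapter's identified models, and using an eigendecomposition in place of the chapter's ANN-based identification:

```python
import numpy as np

# Assumed stable test matrix with three real, well-separated eigenvalues.
A = np.array([[-1.0,  2.0,   0.0],
              [ 0.0, -3.0,   1.0],
              [ 1.0,  0.0, -20.0]])

lam, T = np.linalg.eig(A)
order = np.argsort(-lam.real)              # slow (least negative) first
lam, T = lam[order], T[:, order]
A_bar = np.linalg.inv(T) @ A @ T           # diagonal here, hence triangular

n = 2                                      # keep the two slow states
A_red_trans = A_bar[:n, :n]                # reduce the transformed model
Ar, Ac = A[:n, :n], A[:n, n:]              # reduce the raw model directly
As, Ao = A[n:, :n], A[n:, n:]
A_red_raw = Ar - Ac @ np.linalg.inv(Ao) @ As

slow = np.sort(lam[:n].real)
print(np.sort(np.linalg.eigvals(A_red_trans).real) - slow)  # ~ zero
print(np.sort(np.linalg.eigvals(A_red_raw).real) - slow)    # small but nonzero
```

The transformed route reproduces the slow eigenvalues to machine precision; the untransformed route gives nearby but different values, matching the behavior reported for the tape-drive cases.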
Fig. 6. Reduced 3rd-order models (.... transformed, ... non-transformed) output responses to a step input along with the non-reduced (____ original) 5th-order system output response.
Based on Figure 6, it is seen that the non-transformed reduced model provides a response that is better than that of the transformed reduced model. The cause of this is that the transformation at this point is performed only for the [A] and [B] system matrices, leaving the [C] matrix unchanged. Therefore, complete system transformation using LMI (for the full set {[A], [B], [C], [D]}) is considered in subsection 4.1.2, where the LMI-based transformation will produce better reduction-based response results than both the non-transformed case and the transformed case without LMI.
Case #2. Consider now the following case:

ẋ(t) = [0 2 0 0 0; 1.1 1.35 0.1 0.1 0.75; 0 0 0 2 0; 0.35 0.4 0.4 2.4 0; 0 0.03 0 0 10] x(t) + [0; 0; 0; 0; 1] u(t),

y(t) = [0 0 1 0 0; 0.5 0 0.5 0 0; -0.2 -0.2 0.2 0.2 0] x(t)
The five eigenvalues are {9.9973, 2.0002, 0.3696, 0.6912 ± j1.3082}, where two eigenvalues are complex, three are real, and only one eigenvalue is considered to produce fast dynamics {9.9973}. Using the discretized model with T_s = 0.071 sec. for a step input with learning time T_l = 70 sec., and through training the ANN for the input/output data with η = 3.5 × 10^-5 and initial weight matrix given by:
w = [0.0195 0.0194 0.0130 0.0071 0.0048 0.0029; 0.0189 0.0055 0.0196 0.0025 0.0053 0.0120; 0.0091 0.0168 0.0031 0.0031 0.0134 0.0038; 0.0061 0.0068 0.0193 0.0145 0.0038 0.0139; 0.0150 0.0204 0.0073 0.0180 0.0085 0.0161]
and by applying the singular perturbation reduction technique, a reduced 4th-order model is obtained as follows:

ẋ(t) = [0.6912 1.3081 0.4606 0.0114; 1.3081 0.6912 0.6916 0.0781; 0 0 0.3696 0.0113; 0 0 0 2.0002] x(t) + [0.0837; 0.0520; 0.0240; 0.0014] u(t)

y(t) = [0 0 1 0; 0.5 0 0.5 0; -0.2 -0.2 0.2 0.2] x(t)
where all the eigenvalues {2.0002, 0.3696, 0.6912 ± j1.3081} are preserved as a subset of the original system's. This reduced 4th-order model was simulated for a step input and then compared to both the reduced model without transformation and the original system response. Simulation results are shown in Figure 7, where again the non-transformed reduced order model provides a response that is better than that of the transformed reduced model. The reason for this follows closely the explanation provided for the previous case.
Fig. 7. Reduced 4th-order models (.... transformed, ... non-transformed) output responses to a step input along with the non-reduced (____ original) 5th-order system output response.
Case #3. Let us consider the following system:

ẋ(t) = [0 2 0 0 0; 0.1 1.35 0.1 4.1 0.75; 0 0 0 5 0; 0.35 0.4 1.4 5.4 0; 0 0.03 0 0 10] x(t) + [0; 0; 0; 0; 1] u(t),

y(t) = [0 0 1 0 0; 0.5 0 0.5 0 0; -0.2 -0.2 0.2 0.2 0] x(t)
The eigenvalues are {9.9973, 3.9702, 1.8992, 0.6778, 0.2055}, which are all real. Utilizing the discretized model with T_s = 0.1 sec. for a step input with learning time T_l = 500 sec., training the ANN for the input/output data with η = 1.25 × 10^-5, and initial weight matrix given by:
w = [0.0014 0.0662 0.0298 0.0072 0.0523 0.0184; 0.0768 0.0653 0.0770 0.0858 0.0968 0.0609; 0.0231 0.0223 0.0053 0.0162 0.0231 0.0024; 0.0907 0.0695 0.0366 0.0132 0.0515 0.0427; 0.0904 0.0772 0.0733 0.0490 0.0150 0.0735]
and then by applying the singular perturbation technique, the following reduced 3rd-order model is obtained:

ẋ(t) = [0.2051 1.5131 0.6966; 0 0.6782 0.0329; 0 0 1.8986] x(t) + [0.0341; 0.0078; 0.4649] u(t)

y(t) = [0 0 1; 0.5 0 0.5; -0.2 -0.2 0.2] x(t) + [0; 0; 0.0017] u(t)
Again, the eigenvalues of the reduced-order model are preserved as a subset of the original system's. However, as shown before, the reduced model without system transformation produced eigenvalues {1.5165, 0.6223, 0.2060} that differ from those of the transformed reduced order model. Simulating both systems for a step input provided the results shown in Figure 8. In Figure 8, it is also seen that the response of the non-transformed reduced model is better than that of the transformed reduced model, which is again caused by leaving the output [C] matrix without transformation.
4.1.2 LMI-based state transformation using neural identification
As observed in the previous subsection, the system transformation without using the LMI optimization method, whose objective was to preserve the system eigenvalues in the reduced model, did not provide an acceptable response as compared with either the reduced non-transformed or the original responses. As was mentioned, this was due to not transforming the complete system (i.e., neglecting the [C] matrix). In order to achieve a better response, we will now perform a complete system transformation utilizing the LMI optimization technique to obtain the permutation matrix [P] based on the transformed system matrix [Ā] resulting from the ANN-based identification. The following presents simulations for the previously considered tape drive system cases.
Fig. 8. Reduced 3rd-order models (.... transformed, ... non-transformed) output responses to a step input along with the non-reduced (____ original) 5th-order system output response.
Case #1. For the example of case #1 in subsection 4.1.1, the ANN identification is now used to identify only the transformed [Ā_d] matrix. Discretizing the system with T_s = 0.1 sec., using a step input with learning time T_l = 15 sec., and training the ANN for the input/output data with η = 0.001 and initial weights for the [Ā_d] matrix as follows:
w = [0.0286 0.0384 0.0444 0.0206 0.0191; 0.0375 0.0440 0.0325 0.0398 0.0144; 0.0016 0.0186 0.0307 0.0056 0.0304; 0.0411 0.0226 0.0478 0.0287 0.0453; 0.0327 0.0042 0.0239 0.0106 0.0002]
produces the transformed system matrix:

Ā = [0.5967 0.8701 1.4633 0.9860 0.0964; 0.8701 0.5967 0.2276 0.6165 0.2114; 0 0 0.9809 0.1395 0.4934; 0 0 0 9.9985 1.0449; 0 0 0 0 10.5764]
Based on this transformed matrix, using the LMI technique, the permutation matrix [P] was computed and then used for the complete system transformation. Therefore, the transformed {[B̄], [C̄], [D̄]} matrices were then obtained. Performing model order reduction provided the following reduced 3rd-order model:
ẋ(t) = [0.5967 0.8701 1.4633; 0.8701 0.5967 0.2276; 0 0 0.9809] x(t) + [35.1670; 47.3374; 4.1652] u(t)

y(t) = [0.0019 0 0.0139; 0.0024 0.0009 0.0088; 0.0001 0.0004 0.0021] x(t) + [0.0025; 0.0025; 0.0006] u(t)
where the objective of eigenvalue preservation is clearly achieved. Investigating the performance of this new LMI-based reduced order model shows that the new completely transformed system is better than all the previous reduced models (transformed and non-transformed). This is clearly shown in Figure 9, where the 3rd-order reduced model based on the LMI optimization transformation provided a response that is almost the same as that of the 5th-order original system.
Fig. 9. Reduced 3rd-order models (.... transformed without LMI, ... non-transformed, --- transformed with LMI) output responses to a step input along with the non-reduced (____ original) system output response. The LMI-transformed curve fits almost exactly on the original response.
Case #2. For the example of case #2 in subsection 4.1.1, for T_s = 0.1 sec., 200 input/output data learning points, and η = 0.0051 with initial weights for the [Ā_d] matrix as follows:

w = [0.0332 0.0682 0.0476 0.0129 0.0439; 0.0317 0.0610 0.0575 0.0028 0.0691; 0.0745 0.0516 0.0040 0.0234 0.0247; 0.0459 0.0231 0.0086 0.0611 0.0154; 0.0706 0.0418 0.0633 0.0176 0.0273]
the transformed [Ā] was obtained and used to calculate the permutation matrix [P]. The complete system transformation was then performed, and the reduction technique produced the following reduced 3rd-order model:

ẋ(t) = [0.6910 1.3088 3.8578; 1.3088 0.6910 1.5719; 0 0 0.3697] x(t) + [0.7621; 0.1118; 0.4466] u(t)

y(t) = [0.0061 0.0261 0.0111; 0.0459 0.0187 0.0946; 0.0117 0.0155 0.0080] x(t) + [0.0015; 0.0015; 0.0014] u(t)
with the eigenvalues preserved as desired. Simulating this reduced order model for a step input, as done previously, provided the response shown in Figure 10.
Fig. 10. Reduced 3rd-order models (.... transformed without LMI, ... non-transformed, --- transformed with LMI) output responses to a step input along with the non-reduced (____ original) system output response. The LMI-transformed curve fits almost exactly on the original response.
Here, the LMI-reduction-based technique has provided a response that is better than both the reduced non-transformed and the non-LMI reduced transformed responses, and is almost identical to the original system response.
Case #3. Investigating the example of case #3 in subsection 4.1.1, for T_s = 0.1 sec., 200 input/output data points, and η = 1 × 10^-4 with initial weights for [Ā_d] given as:

w = [0.0048 0.0039 0.0009 0.0089 0.0168; 0.0072 0.0024 0.0048 0.0017 0.0040; 0.0176 0.0176 0.0136 0.0175 0.0034; 0.0055 0.0039 0.0078 0.0076 0.0051; 0.0102 0.0024 0.0091 0.0049 0.0121]
the LMI-based transformation and then order reduction were performed. Simulation results of the reduced order models and the original system are shown in Figure 11.
Fig. 11. Reduced 3rd-order models (.... transformed without LMI, ... non-transformed, --- transformed with LMI) output responses to a step input along with the non-reduced (____ original) system output response. The LMI-transformed curve fits almost exactly on the original response.
Again, the response of the reduced order model using the complete LMIbased
transformation is the best as compared to the other reduction techniques.
5. The application of closed-loop feedback control on the reduced models
Utilizing the LMI-based reduced system models presented in the previous section, various control techniques that can be utilized for the robust control of dynamic systems are considered in this section to achieve the desired system performance. These control methods include (a) PID control, (b) state feedback control using (1) pole placement for the desired eigenvalue locations and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control.
5.1 Proportional-Integral-Derivative (PID) control
A PID controller is a generic control-loop feedback mechanism which is widely used in industrial control systems [7,10,24]. It attempts to correct the error between a measured process variable (output) and a desired setpoint (input) by calculating and then providing a corrective signal that can adjust the process accordingly, as shown in Figure 12.
Fig. 12. Closed-loop feedback single-input single-output (SISO) control using a PID controller.
In the control design process, the three parameters of the PID controller {K_p, K_i, K_d} have to be calculated for some specific process requirements such as system overshoot and settling time. It is common that, once they are calculated and implemented, the response of the system is not exactly as desired. Therefore, further tuning of these parameters is needed to provide the desired control action.
Focusing on one output of the tape-drive machine, the PID controller using the reduced order model for the desired output was investigated. Hence, the identified reduced 3rd-order model is now considered for the output of the tape position at the head, which is given as:

G_original(s) = (0.0801 s + 0.133) / (s^3 + 2.1742 s^2 + 2.2837 s + 1.0919)
Searching for suitable values of the PID controller parameters such that the system provides a faster settling time and less overshoot, it is found that {K_p = 100, K_i = 80, K_d = 90}, with a controlled system given by:

G_controlled(s) = (7.209 s^3 + 19.98 s^2 + 19.71 s + 10.64) / (s^4 + 9.383 s^3 + 22.26 s^2 + 20.8 s + 10.64)
Simulating the new PID-controlled system for a step input provided the results shown in Figure 13, where the settling time is almost 1.5 sec., while without the controller it was greater than 6 sec. It is also observed that the overshoot has decreased considerably after using the PID controller.
On the other hand, the other system outputs can be PID-controlled by cascading the current process PID with new tuning-based PIDs for each output. For the PID-controlled output of the tachometer shaft angle, the controlling scheme would be as shown in Figure 14. As seen in Figure 14, the output of interest (i.e., the 2nd output) is controlled as desired using the PID controller. However, this will affect the other outputs' performance, and therefore a further PID-based tuning operation must be applied.
Fig. 13. Reduced 3rd-order model PID-controlled and uncontrolled step responses.
Fig. 14. Closed-loop feedback single-input multiple-output (SIMO) system with a PID controller: (a) a generic SIMO diagram, and (b) a detailed SIMO diagram.
As shown in Figure 14, the tuning process is accomplished using G_1T and G_3T. For example, for the 1st output:

Y_1 = G_1T · PID · G_1 · (R - Y_2) = G_1 R    (39)

∴ G_1T = R / [PID · (R - Y_2)]    (40)

where Y_2 is the Laplace transform of the 2nd output and PID denotes the controller transfer function. Similarly, G_3T can be obtained.
5.2 State feedback control
In this section, we will investigate the state feedback control techniques of pole placement
and the LQR optimal control for the enhancement of the system performance.
5.2.1 Pole placement for the state feedback control
For the reduced order model in Equations (37) - (38), a simple pole-placement-based state feedback controller can be designed. For example, assuming that a controller is needed to provide the system with enhanced performance by relocating the eigenvalues, the objective can be achieved using the control input given by:

u(t) = -K x̄_r(t) + r(t)    (41)

where K is the state feedback gain designed based on the desired system eigenvalues. A state feedback control for pole placement can be illustrated by the block diagram shown in Figure 15.
Fig. 15. Block diagram of a state feedback control with the {[A_or], [B_or], [C_or], [D_or]} overall reduced order system matrices.
Replacing the control input u(t) in Equations (37) - (38) by the above new control input in Equation (41) yields the following reduced system equations:

ẋ̄_r(t) = A_or x̄_r(t) + B_or [-K x̄_r(t) + r(t)]    (42)

y(t) = C_or x̄_r(t) + D_or [-K x̄_r(t) + r(t)]    (43)

which can be rewritten as:

ẋ̄_r(t) = A_or x̄_r(t) - B_or K x̄_r(t) + B_or r(t)  →  ẋ̄_r(t) = [A_or - B_or K] x̄_r(t) + B_or r(t)

y(t) = C_or x̄_r(t) - D_or K x̄_r(t) + D_or r(t)  →  y(t) = [C_or - D_or K] x̄_r(t) + D_or r(t)

as illustrated in Figure 16.
Fig. 16. Block diagram of the overall state feedback control for pole placement.
The overall closed-loop system model may then be written as:

ẋ̄_r(t) = A_cl x̄_r(t) + B_cl r(t)    (44)

y(t) = C_cl x̄_r(t) + D_cl r(t)    (45)

such that the closed-loop system matrix [A_cl] will provide the new desired system eigenvalues.
For example, for the system of case #3, state feedback was used to reassign the eigenvalues to {1.89, 1.5, 1}. The state feedback gain was then found to be K = [1.2098 0.3507 0.0184], which placed the eigenvalues as desired and enhanced the system performance as shown in Figure 17.
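The gain in Equation (41) can be obtained by pole placement. The chapter does not state which placement algorithm was used; the sketch below uses Ackermann's formula for a single-input system, with an assumed controllable [A, b] pair rather than the chapter's reduced tape-drive model, and desired eigenvalues {-1.89, -1.5, -1} (stable counterparts of the magnitudes quoted above):

```python
import numpy as np

def ackermann_gain(A, b, desired_poles):
    """State-feedback gain K (Ackermann's formula) placing eig(A - b K)."""
    n = A.shape[0]
    # Controllability matrix [b, Ab, ..., A^{n-1} b].
    Cm = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    # Desired characteristic polynomial evaluated at A: phi(A).
    coeffs = np.poly(desired_poles)        # [1, a1, ..., an]
    phi = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return e_n @ np.linalg.inv(Cm) @ phi

# Illustrative 3rd-order controllable pair (assumed values):
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -4.0]])
b = np.array([[0.0], [0.0], [1.0]])
K = ackermann_gain(A, b, [-1.89, -1.5, -1.0])
print(np.sort(np.linalg.eigvals(A - b @ K).real))  # ~ [-1.89, -1.5, -1.0]
```

The closed-loop matrix A - bK then plays the role of [A_cl] in Equation (44).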
Fig. 17. Reduced 3rd-order state feedback control (for pole placement) output step response (...) compared with the original (____) full-order system output step response.
5.2.2 Linear-Quadratic Regulator (LQR) optimal control for the state feedback control
Another method for designing a state feedback control for system performance enhancement is based on minimizing the cost function given by [10]:

J = ∫_0^∞ (x^T Q x + u^T R u) dt    (46)

which is defined for the system ẋ(t) = A x(t) + B u(t), where Q and R are weighting matrices for the states and the input commands. This is known as the LQR problem, which has received special attention due to the fact that it can be solved analytically and that the resulting optimal controller is an easy-to-implement state feedback control [7,10]. The feedback control law that minimizes the cost is given by:

u(t) = -K x(t)    (47)
where K = R^-1 B^T q, and [q] is found by solving the algebraic Riccati equation described by:

A^T q + q A - q B R^-1 B^T q + Q = 0    (48)

where [Q] is the state weighting matrix and [R] is the input weighting matrix. A direct solution for the optimal control gain may be obtained using the MATLAB statement K = lqr(A, B, Q, R), where in our example R = 1, and the [Q] matrix was found from the output [C] matrix as Q = C^T C.
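For reference, the Riccati equation (48) can also be solved without MATLAB's lqr, via the stable invariant subspace of the associated Hamiltonian matrix. The following Python/NumPy sketch uses an assumed second-order system, not the chapter's reduced model:

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Solve the algebraic Riccati equation (48) via the stable invariant
    subspace of the Hamiltonian matrix, then K = R^{-1} B^T q (Eq. 47)."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # n eigenvectors with Re(lambda) < 0
    q = np.real(stable[n:, :] @ np.linalg.inv(stable[:n, :]))
    return Rinv @ B.T @ q, q

# Illustrative stable 2nd-order system (assumed values):
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = C.T @ C                              # state weighting Q = C'C, as in the text
R = np.array([[1.0]])
K, q = lqr_gain(A, B, Q, R)
# Riccati residual A'q + qA - qB R^{-1} B'q + Q should vanish:
res = A.T @ q + q @ A - q @ B @ np.linalg.inv(R) @ B.T @ q + Q
print(np.max(np.abs(res)))  # ~ 0
```

The resulting A - BK is stable, and the gain matches what MATLAB's lqr would return for the same (A, B, Q, R).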
The LQR optimization technique is applied to the reduced 3rd-order model of case #3 in subsection 4.1.2 for system behavior enhancement. The state feedback optimal control gain was found to be K = [0.0967 0.0192 0.0027], which, when the complete system was simulated for a step input, provided the normalized output response (with a normalization factor γ = 1.934) shown in Figure 18.
Fig. 18. Reduced 3rd-order LQR state feedback control output step response (...) compared with the original (____) full-order system output step response.
As seen in Figure 18, the optimal state feedback control has enhanced the system performance, which is essentially achieved by selecting new proper locations for the system eigenvalues.
5.3 Output feedback control
Output feedback control is another way of controlling the system to achieve a certain desired performance, as shown in Figure 19, where the feedback is taken directly from the output.
Fig. 19. Block diagram of an output feedback control.
The control input is now given by u(t) = -K y(t) + r(t), where y(t) = C_or x̄_r(t) + D_or u(t). By applying this control to the considered system, the system equations become [7]:
ẋ̄_r(t) = A_or x̄_r(t) + B_or [-K (C_or x̄_r(t) + D_or u(t)) + r(t)]
        = A_or x̄_r(t) - B_or K C_or x̄_r(t) - B_or K D_or u(t) + B_or r(t)
        = [A_or - B_or K C_or] x̄_r(t) - B_or K D_or u(t) + B_or r(t)
        = [A_or - B_or K [I + D_or K]^-1 C_or] x̄_r(t) + B_or [I + K D_or]^-1 r(t)    (49)

y(t) = C_or x̄_r(t) + D_or [-K y(t) + r(t)]
     = C_or x̄_r(t) - D_or K y(t) + D_or r(t)
     = [I + D_or K]^-1 C_or x̄_r(t) + [I + D_or K]^-1 D_or r(t)    (50)
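The algebra in Equations (49) - (50) can be verified numerically using the push-through identity K[I + DK]^-1 = [I + KD]^-1 K. Below is a short Python/NumPy check with assumed matrices, not the chapter's reduced model (the nonzero feedthrough D_or is what makes the inverses appear):

```python
import numpy as np

# Assumed illustrative matrices for u = -K y + r with feedthrough D.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.5]])
K = np.array([[2.0]])

I_y = np.eye(C.shape[0])
I_u = np.eye(B.shape[1])
M = np.linalg.inv(I_y + D @ K)           # [I + D K]^{-1}

A_cl = A - B @ K @ M @ C                 # closed-loop A of Eq. (49)
B_cl = B @ np.linalg.inv(I_u + K @ D)    # closed-loop B of Eq. (49)
C_cl = M @ C                             # closed-loop C of Eq. (50)
D_cl = M @ D                             # closed-loop D of Eq. (50)

# Cross-check by eliminating u directly: u = (I + K D)^{-1} (-K C x + r).
N = np.linalg.inv(I_u + K @ D)
A_chk = A + B @ (-N @ K @ C)
print(np.allclose(A_cl, A_chk))  # True
```

Both routes give the same closed-loop matrix, confirming that the two inverse factors in Equations (49) - (50) are consistent.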
This leads to the overall block diagram seen in Figure 20.
Fig. 20. An overall block diagram of an output feedback control.
Considering the reduced 3rd-order model of case #3 in subsection 4.1.2 for system behavior enhancement using output feedback control, the feedback control gain is found to be K = [0.5799 2.6276 11]. The normalized controlled system step response is shown in Figure 21, where one can observe that the system behavior is enhanced as desired.
Fig. 21. Reduced 3rd-order output feedback controlled step response (...) compared with the original (____) full-order system uncontrolled output step response.
6. Conclusions and future work
In control engineering, robust control is an area that explicitly deals with uncertainty in its approach to the design of the system controller. The methods of robust control are designed to operate properly as long as disturbances or uncertain parameters are within a compact set, where robust methods aim to accomplish robust performance and/or stability in the presence of bounded modeling errors. A robust control policy is static, in contrast to the adaptive (dynamic) control policy where, rather than adapting to measurements of variations, the system controller is designed to function assuming that certain variables will be unknown but, for example, bounded.
This research introduced a new method of hierarchical intelligent robust control for dynamic systems. In order to implement this control method, the order of the dynamic system was reduced. This reduction was performed by the implementation of a recurrent supervised neural network to identify certain elements [A_c] of the transformed system matrix [Ā], while the other elements [A_r] and [A_o] are set based on the system eigenvalues such that [A_r] contains the dominant eigenvalues (i.e., slow dynamics) and [A_o] contains the non-dominant eigenvalues (i.e., fast dynamics). To obtain the transformed matrix [Ā], the zero-input response was used in order to obtain output data related to the state dynamics based only on the system matrix [A]. After the transformed system matrix was obtained, the linear matrix inequality optimization algorithm was utilized to determine the permutation matrix [P], which is required to complete the system transformation matrices {[B̄], [C̄], [D̄]}. The reduction process was then applied using the singular perturbation method, which neglects the faster-dynamics eigenvalues and leaves the dominant slow-dynamics eigenvalues to control the system. The comparison simulation results show clearly that modeling and control of the dynamic system using LMI is superior to that without using LMI. Simple feedback control methods using PID control, state feedback control utilizing (a) pole assignment and (b) LQR optimal control, and output feedback control were then applied to the reduced model to obtain the desired enhanced response of the full order system.
Future work will involve the application of new control techniques, utilizing the control
hierarchy introduced in this research, such as using fuzzy logic and genetic algorithms.
Future work will also involve the fundamental investigation of achieving model order
reduction for dynamic systems with all eigenvalues being complex.
7. References
[1] A. N. Al-Rabadi, "Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator," American Institute of Physics (AIP), In: IAENG Transactions on Engineering Technologies, Special Edition of the International MultiConference of Engineers and Computer Scientists 2009, AIP Conference Proceedings 1174, Editors: Sio-Iong Ao, Alan Hoi-Shou Chan, Hideki Katagiri and Li Xu, Vol. 3, pp. 202-216, New York, U.S.A., 2009.
[2] A. N. Al-Rabadi, "Intelligent Control of Singularly-Perturbed Reduced Order Eigenvalue-Preserved Quantum Computing Systems via Artificial Neural Identification and Linear Matrix Inequality Transformation," IAENG Int. Journal of Computer Science (IJCS), Vol. 37, No. 3, 2010.
[3] P. Avitabile, J. C. O’Callahan, and J. Milani, “Comparison of System Characteristics
Using Various Model Reduction Techniques,” 7
th
International Model Analysis
Conference, Las Vegas, Nevada, February 1989.
[4] P. Benner, “Model Reduction at ICIAM'07,” SIAM News, Vol. 40, No. 8, 2007.
[5] A. Bilbao-Guillerna, M. De La Sen, S. Alonso-Quesada, and A. Ibeas, “Artificial
Intelligence Tools for Discrete Multiestimation Adaptive Control Scheme with
Model Reduction Issues,” Proc. of the International Association of Science and
Technology, Artificial Intelligence and Application, Innsbruck, Austria, 2004.
[6] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System
and Control Theory, Society for Industrial and Applied Mathematics (SIAM), 1994.
[7] W. L. Brogan, Modern Control Theory, 3rd Edition, Prentice Hall, 1991.
[8] T. Bui-Thanh and K. Willcox, “Model Reduction for Large-Scale CFD Applications
Using the Balanced Proper Orthogonal Decomposition,” 17th American Institute of
Aeronautics and Astronautics (AIAA) Computational Fluid Dynamics Conf., Toronto,
Canada, June 2005.
[9] J. H. Chow and P. V. Kokotovic, “A Decomposition of Near-Optimal Regulators for
Systems with Slow and Fast Modes,” IEEE Trans. Automatic Control, AC-21, pp.
701–705, 1976.
[10] G. F. Franklin, J. D. Powell, and A. Emami-Naeini, Feedback Control of Dynamic Systems,
3rd Edition, Addison-Wesley, 1994.
[11] K. Gallivan, A. Vandendorpe, and P. Van Dooren, “Model Reduction of MIMO Systems
via Tangential Interpolation,” SIAM Journal of Matrix Analysis and Applications, Vol.
26, No. 2, pp. 328–349, 2004.
Recent Advances in Robust Control – Novel Approaches and Design Methods
[12] K. Gallivan, A. Vandendorpe, and P. Van Dooren, “Sylvester Equation and Projection-
Based Model Reduction,” Journal of Computational and Applied Mathematics, 162, pp.
213–229, 2004.
[13] G. Garcia, J. Daafouz, and J. Bernussou, “H2 Guaranteed Cost Control for Singularly
Perturbed Uncertain Systems,” IEEE Trans. Automatic Control, Vol. 43, pp. 1323–
1329, 1998.
[14] R. J. Guyan, “Reduction of Stiffness and Mass Matrices,” AIAA Journal, Vol. 6, No. 7,
pp. 1313–1319, 1968.
[15] S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan Publishing
Company, New York, 1994.
[16] W. H. Hayt, J. E. Kemmerly, and S. M. Durbin, Engineering Circuit Analysis, McGraw-
Hill, 2007.
[17] G. Hinton and R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural
Networks,” Science, pp. 504–507, 2006.
[18] R. Horn, and C. Johnson, Matrix Analysis, Cambridge University Press, New York, 1985.
[19] S. H. Javid, “Observing the Slow States of a Singularly Perturbed System,” IEEE Trans.
Automatic Control, AC-25, pp. 277–280, 1980.
[20] H. K. Khalil, “Output Feedback Control of Linear Two-Time-Scale Systems,” IEEE
Trans. Automatic Control, AC-32, pp. 784–792, 1987.
[21] H. K. Khalil and P. V. Kokotovic, “Control Strategies for Decision Makers Using
Different Models of the Same System,” IEEE Trans. Automatic Control, AC-23, pp.
289–297, 1978.
[22] P. Kokotovic, R. O'Malley, and P. Sannuti, “Singular Perturbation and Order Reduction
in Control Theory – An Overview,” Automatica, 12(2), pp. 123–132, 1976.
[23] C. Meyer, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied
Mathematics (SIAM), 2000.
[24] K. Ogata, Discrete-Time Control Systems, 2nd Edition, Prentice Hall, 1995.
[25] R. Skelton, M. Oliveira, and J. Han, Systems Modeling and Model Reduction, Invited Chapter
of the Handbook of Smart Systems and Materials, Institute of Physics, 2004.
[26] M. Steinbuch, “Model Reduction for Linear Systems,” 1st International MACSI-net
Workshop on Model Reduction, Netherlands, October 2001.
[27] A. N. Tikhonov, “On the Dependence of the Solution of Differential Equation on a
Small Parameter,” Mat. Sbornik (Moscow), 22(64):2, pp. 193–204, 1948.
[28] R. J. Williams and D. Zipser, “A Learning Algorithm for Continually Running Fully
Recurrent Neural Networks,” Neural Computation, 1(2), pp. 270–280, 1989.
[29] J. M. Zurada, Artificial Neural Systems, West Publishing Company, New York, 1992.
When dealing with system modeling and control analysis, there exist equations and inequalities that require optimized solutions. An important expression used in robust control is the linear matrix inequality (LMI), which expresses specific convex optimization problems for which powerful numerical solvers exist [1,2,6]. The LMI optimization technique originated with Lyapunov theory, which shows that the differential equation ẋ(t) = Ax(t) is stable if and only if there exists a positive definite matrix [P] such that AᵀP + PA < 0 [6]. The requirement {P > 0, AᵀP + PA < 0} is known as the Lyapunov inequality on [P], which is a special case of an LMI. By picking any Q = Qᵀ > 0 and then solving the linear equation AᵀP + PA = −Q for the matrix [P], the obtained [P] is guaranteed to be positive definite if the given system is stable. The linear matrix inequalities that arise in system and control theory can generally be formulated as convex optimization problems that are amenable to computer solution and can be solved using algorithms such as the ellipsoid algorithm [6].

In practical control design problems, the first step is to obtain a proper mathematical model in order to examine the behavior of the system for the purpose of designing an appropriate controller [1,2,3,4,5,7,8,9,10,11,12,13,14,16,17,19,20,21,22,24,25,26,27]. Sometimes, this mathematical description involves a certain small parameter (i.e., perturbation). Neglecting this small parameter results in simplifying the order of the designed controller by reducing the order of the corresponding system [1,3,4,5,8,9,11,12,13,14,17,19,20,21,22,25,26]. A reduced model can be obtained by neglecting the fast dynamics (i.e., the nondominant eigenvalues) of the system and focusing on the slow dynamics (i.e., the dominant eigenvalues). This simplification and reduction of system modeling leads to controller cost minimization [7,10,13].
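As a concrete check of the Lyapunov argument above, the short sketch below (plain NumPy; the function name and numerical values are illustrative, not from the chapter) picks Q = I, solves AᵀP + PA = −Q through the Kronecker-product vectorization of the equation, and confirms that the resulting [P] is positive definite for a stable [A]:

```python
import numpy as np

def lyapunov_solve(A, Q):
    """Solve A^T P + P A = -Q via the vectorized (Kronecker) form:
    (I kron A^T + A^T kron I) vec(P) = vec(-Q), with column-major vec()."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    vec_P = np.linalg.solve(K, -Q.flatten(order="F"))
    return vec_P.reshape((n, n), order="F")

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # stable: eigenvalues -1 and -2
Q = np.eye(2)                  # any Q = Q^T > 0

P = lyapunov_solve(A, Q)
assert np.allclose(A.T @ P + P @ A, -Q)    # Lyapunov equation satisfied
assert np.all(np.linalg.eigvalsh(P) > 0)   # P > 0, so A is stable
```

Running the same check with an unstable [A] typically yields a [P] with a non-positive eigenvalue, which is exactly the Lyapunov test stated above.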
An example is modern integrated circuits (ICs), where increasing package density forces developers to include side effects. Knowing that these ICs are often modeled by complex RLC-based circuits and systems, simulation would be very demanding computationally due to the detailed modeling of the original system [16]. In control systems, model reduction is an important issue because feedback controllers do not usually consider all of the dynamics of the functioning system [4,5,17].

The main results of this research include the introduction of a new layered method of intelligent control that can be used to robustly control the required system dynamics, where the new control hierarchy uses a recurrent supervised neural network to identify certain parameters of the transformed system matrix [Ā], and the corresponding LMI is used to determine the permutation matrix [P] so that a complete system transformation {[B̄], [C̄], [D̄]} is performed. The transformed model is then reduced using the method of singular perturbation, and various feedback control schemes are applied to enhance the corresponding system performance. It is shown that the new hierarchical control method simplifies the model of the dynamical systems and therefore uses simpler controllers that produce the needed system response for specific performance enhancements. Figure 1 illustrates the layout of the utilized new control method: Layer 1 shows the continuous modeling of the dynamical system; Layer 2 shows the discrete system model; Layer 3 illustrates the neural network identification step; Layer 4 presents the undiscretization of the transformed system model; Layer 5 includes the steps for model order reduction with and without using LMI; and, finally, Layer 6 presents the various feedback control methods that are used in this research.
Fig. 1. The newly utilized hierarchical control method.

While a similar hierarchical method of ANN-based identification and LMI-based transformation has been previously utilized within several applications, such as for the reduced-order electronic Buck switching-mode power converter [1] and for the reduced-order quantum computation systems [2] with relatively simple state feedback controller implementations, the method presented in this work further shows the wide applicability of the introduced intelligent control technique for dynamical systems using a broad spectrum of control methods, such as (a) PID-based control, (b) state feedback control using (1) pole placement-based control and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control.

Section 2 presents background on recurrent supervised neural networks, linear matrix inequality, system model transformation using neural identification, and model order reduction. Section 3 presents a detailed illustration of the recurrent neural network identification with the LMI optimization techniques for system model order reduction. A practical implementation of the neural network identification, and the associated comparative results with and without the use of LMI optimization for the dynamical system model order reduction, is presented in Section 4. Section 5 presents the application of feedback control to the reduced model using PID control, state feedback control using pole assignment, state feedback control using LQR optimal control, and output feedback control. Conclusions and future work are presented in Section 6.
2. Background
The following subsections provide important background on artificial supervised recurrent neural networks, linear matrix inequality, system model transformation using neural identification, system transformation without using LMI, state transformation using LMI, and model order reduction, which can be used for the robust control of dynamic systems and which will be used in the later Sections 3-5.

2.1 Artificial recurrent supervised neural networks
The ANN is an emulation of the biological neural system [15,29]. The basic model of the neuron is established by emulating the functionality of a biological neuron, which is the basic signaling unit of the nervous system. The internal process of a neuron may be mathematically modeled as shown in Figure 2 [15,29].

Fig. 2. A mathematical model of the artificial neuron, with input signals {x1, ..., xp}, synaptic weights {wk1, ..., wkp}, threshold θk, summing junction, activation function φ(.), and output yk.

As seen in Figure 2, the internal activity of the neuron is produced as:

vk = Σ_{j=1..p} wkj xj   (1)

In supervised learning, it is assumed that, at each instant of time when the input is applied, the desired response of the system is available [15,29]. The difference between the actual and the desired response represents an error measure, which is used to correct the network parameters externally. Since the adjustable weights are initially assumed, the error measure may be used to adapt the network's weight matrix [W]. A set of input and output patterns, called a training set, is required for this learning mode. The network tries to match the output of certain neurons to the desired values of the system output at a specific instant of time, where the usually used training algorithm identifies directions of the negative error gradient and reduces the error accordingly [15,28,29].
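Equation (1) above can be exercised directly. The fragment below (NumPy; the weights and inputs are arbitrary illustrative values, and the logistic activation anticipates the one used for training later in the chapter) computes a single neuron's internal activity and output:

```python
import numpy as np

def neuron_output(w_k, x):
    """Internal activity v_k = sum_j w_kj x_j (Equation (1)),
    passed through a logistic activation."""
    v_k = np.dot(w_k, x)                  # Equation (1)
    return 1.0 / (1.0 + np.exp(-v_k))     # activation function

w_k = np.array([0.5, -0.25, 1.0])         # synaptic weights w_k1..w_k3
x = np.array([1.0, 2.0, 0.5])             # input signals x_1..x_3
y_k = neuron_output(w_k, x)               # v_k = 0.5, y_k = phi(0.5)
assert np.isclose(y_k, 1.0 / (1.0 + np.exp(-0.5)))
```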
The supervised recurrent neural network used for the identification in this research is based on an approximation of the method of steepest descent [15,28,29]. Consider a network consisting of a total of N neurons with M external input connections, as shown in Figure 3 for a 2nd-order system with two neurons and one external input. The variable g(k) denotes the (M x 1) external input vector applied to the network at discrete time k, and the variable y(k+1) denotes the corresponding (N x 1) vector of individual neuron outputs produced one step later at time (k+1). The input vector g(k) and the one-step delayed output vector y(k) are concatenated to form the ((M+N) x 1) vector u(k), whose ith element is denoted by ui(k). With Λ denoting the set of indices i for which gi(k) is an external input, and β denoting the set of indices i for which ui(k) is the output of a neuron (which is yi(k)), the following holds:

ui(k) = gi(k) if i ∈ Λ,  ui(k) = yi(k) if i ∈ β

Fig. 3. The utilized 2nd-order recurrent neural network architecture, where the identified matrices are given by Ad = [A11 A12; A21 A22] and Bd = [B11; B21], such that W = [[Ad] [Bd]].

The (N x (M+N)) recurrent weight matrix of the network is represented by the variable [W]. The net internal activity of neuron j at time k is given by:

vj(k) = Σ_{i∈Λ∪β} wji(k) ui(k)

where Λ∪β is the union of the sets Λ and β. At the next time step (k+1), the output of neuron j is computed by passing vj(k) through the nonlinearity φ(.), thus obtaining:

yj(k+1) = φ(vj(k))

The derivation of the recurrent algorithm starts by using dj(k) to denote the desired (target) response of neuron j at time k, and ς(k) to denote the set of neurons that are chosen to provide externally reachable outputs. A time-varying (N x 1) error vector e(k) is defined whose jth element is given by the following relationship:

ej(k) = dj(k) − yj(k) if j ∈ ς(k), and ej(k) = 0 otherwise

The objective is to minimize the cost function Etotal obtained by:

Etotal = Σk E(k), where E(k) = (1/2) Σ_{j∈ς} ej²(k)

To accomplish this objective, the method of steepest descent, which requires knowledge of the gradient matrix, is used:

∇W Etotal = ∂Etotal/∂W = Σk ∂E(k)/∂W = Σk ∇W E(k)

where ∇W E(k) is the gradient of E(k) with respect to the weight matrix [W]. In order to train the recurrent network in real time, the instantaneous estimate of the gradient, ∇W E(k), is used. For the case of a particular weight wmℓ(k), the incremental change Δwmℓ(k) made at time k is defined as:

Δwmℓ(k) = −η ∂E(k)/∂wmℓ(k)

where η is the learning-rate parameter. Therefore:

∂E(k)/∂wmℓ(k) = Σ_{j∈ς} ej(k) ∂ej(k)/∂wmℓ(k) = −Σ_{j∈ς} ej(k) ∂yj(k)/∂wmℓ(k)

To determine the partial derivative ∂yj(k)/∂wmℓ(k), the network dynamics are derived. This derivation is obtained by using the chain rule, which provides the following equation:

∂yj(k+1)/∂wmℓ(k) = (∂yj(k+1)/∂vj(k)) (∂vj(k)/∂wmℓ(k)) = φ'(vj(k)) ∂vj(k)/∂wmℓ(k),  where φ'(vj(k)) = ∂φ(vj(k))/∂vj(k)

Differentiating the net internal activity of neuron j with respect to wmℓ(k) yields:

∂vj(k)/∂wmℓ(k) = Σ_{i∈Λ∪β} ∂(wji(k) ui(k))/∂wmℓ(k) = Σ_{i∈Λ∪β} [ wji(k) ∂ui(k)/∂wmℓ(k) + (∂wji(k)/∂wmℓ(k)) ui(k) ]

where ∂wji(k)/∂wmℓ(k) equals "1" only when j = m and i = ℓ, and "0" otherwise. Also:

∂ui(k)/∂wmℓ(k) = 0 if i ∈ Λ,  ∂ui(k)/∂wmℓ(k) = ∂yi(k)/∂wmℓ(k) if i ∈ β

Thus:

∂vj(k)/∂wmℓ(k) = Σ_{i∈β} wji(k) ∂yi(k)/∂wmℓ(k) + δmj uℓ(k)

where δmj is a Kronecker delta equal to "1" when j = m and "0" otherwise.
Having those equations provides that:

∂yj(k+1)/∂wmℓ(k) = φ'(vj(k)) [ Σ_{i∈β} wji(k) (∂yi(k)/∂wmℓ(k)) + δmj uℓ(k) ]

The initial state of the network at time (k = 0) is assumed to be zero, so that:

∂yi(0)/∂wmℓ(0) = 0

The dynamical system is thus described by the following triply-indexed set of variables π^j_mℓ:

π^j_mℓ(k) = ∂yj(k)/∂wmℓ(k)

For every time step k and all appropriate {j ∈ β, m ∈ β, ℓ ∈ Λ ∪ β}, the system dynamics are controlled by:

π^j_mℓ(k+1) = φ'(vj(k)) [ Σ_{i∈β} wji(k) π^i_mℓ(k) + δmj uℓ(k) ],  with π^j_mℓ(0) = 0

The values of π^j_mℓ(k) and the error signal ej(k) are used to compute the corresponding weight changes:

Δwmℓ(k) = η Σ_{j∈ς} ej(k) π^j_mℓ(k)   (2)

Using the weight changes, the updated weight wmℓ(k+1) is calculated as follows:

wmℓ(k+1) = wmℓ(k) + Δwmℓ(k)   (3)

Repeating this computation procedure provides the minimization of the cost function, and the objective is thus achieved. With the many advantages that the neural network has, it is used for the important step of parameter identification in model transformation for the purpose of model order reduction, as will be shown in the following section.
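The training procedure of Equations (2)-(3), together with the recursion for π above, can be sketched as a small real-time recurrent learning loop. The code below (NumPy) uses two neurons and one constant external input; the network sizes, learning rate, and target vector are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 2, 1                                  # neurons, external inputs
W = 0.1 * rng.standard_normal((N, M + N))    # recurrent weight matrix [W]

def phi(v):
    return 1.0 / (1.0 + np.exp(-v))          # logistic nonlinearity

def rtrl_step(W, pi, y, g, d, eta=0.05):
    """One real-time recurrent learning step (Equations (2)-(3));
    pi[j, m, l] holds the sensitivity dy_j(k)/dw_ml(k)."""
    u = np.concatenate([g, y])               # u(k): external inputs, then fed-back outputs
    v = W @ u                                # net internal activities v_j(k)
    y_next = phi(v)                          # y(k+1)
    dphi = y_next * (1.0 - y_next)           # logistic derivative phi'(v_j(k))

    # sensitivity recursion: sum over fed-back neuron outputs plus delta term
    pi_next = np.einsum("ji,iml->jml", W[:, M:], pi)
    for m in range(N):
        pi_next[m, m, :] += u                # Kronecker-delta term delta_mj * u_l(k)
    pi_next *= dphi[:, None, None]

    e = d - y                                # error e_j(k)
    dW = eta * np.einsum("j,jml->ml", e, pi) # Equation (2)
    return W + dW, pi_next, y_next           # Equation (3)

# drive the two outputs toward a constant target
y = np.zeros(N)
pi = np.zeros((N, N, M + N))
d = np.array([0.8, 0.2])
for k in range(200):
    W, pi, y = rtrl_step(W, pi, y, np.array([1.0]), d)
```

Here pi[j, m, l] stores ∂yj(k)/∂wmℓ(k); the Kronecker-delta term and the multiplication by the logistic derivative follow the recursion above, and the weight update is applied in the negative gradient direction at every time step.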
2.2 Model transformation and linear matrix inequality
In this section, the detailed illustration of system transformation using LMI optimization is presented. Consider the dynamical system:

ẋ(t) = A x(t) + B u(t)   (4)
y(t) = C x(t) + D u(t)   (5)

The state-space representation of Equations (4)-(5) may be described by the block diagram shown in Figure 4.

Fig. 4. Block diagram for the state-space system representation.

In order to determine the transformed [Ā] matrix, the discrete zero input response is obtained. This is achieved by providing the system with some initial state values and setting the system input to zero (u(k) = 0). Hence, the discrete form of the system of Equations (4)-(5), with the initial condition x(0) = x0, becomes:

x(k+1) = Ad x(k)   (6)
y(k) = x(k)   (7)

We need x(k) as an ANN target in order to train the network to obtain the needed parameters in [Ād] such that the system output is the same for [Ad] and [Ād]. Hence, simulating this system provides the state response corresponding to the given initial values, with only the [Ad] matrix being used. Once the input-output data is obtained, transforming the [Ad] matrix is achieved using the ANN training, as will be illustrated in later sections. The identified transformed [Ād] matrix is then converted back to the continuous form, which in general (with all real eigenvalues) takes the following form:

Ā = [λ1 Ā12 ... Ā1n; 0 λ2 ... Ā2n; ... ; 0 0 ... λn] = [Ar Ac; 0 Ao]   (8)

where λi represents the system eigenvalues. This is an upper triangular matrix that preserves the eigenvalues by (1) placing the original eigenvalues on the diagonal and (2) finding the elements Āij of the upper triangle. This upper triangular form produces the same eigenvalues for the purpose of eliminating the fast-dynamics eigenvalues and sustaining the slow-dynamics eigenvalues through model order reduction, as will be shown in later sections. Having the [A] and [Ā] matrices, the permutation matrix [P] is determined using the LMI optimization technique. The complete system transformation can then be achieved, assuming that x̄ = P⁻¹x with ȳ(t) = y(t), as will be explained in Section 3.
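The zero input response of Equations (6)-(7) and the eigenvalue-preserving upper-triangular target of Equation (8) can be illustrated numerically. In this sketch (NumPy; all numerical values are illustrative), the upper-triangular similar matrix is built explicitly from an eigenvector-based similarity transformation purely for illustration; the chapter instead obtains it through ANN identification:

```python
import numpy as np

Ad = np.array([[0.8, 0.3],
               [0.1, 0.6]])         # discrete system matrix [Ad]; eigenvalues 0.9 and 0.5

# zero input response (Equations (6)-(7)): the data the ANN is trained on
x = np.array([1.0, -1.0])           # initial condition x(0), with u(k) = 0
targets = [x.copy()]
for k in range(20):
    x = Ad @ x                      # x(k+1) = Ad x(k)
    targets.append(x.copy())

# an eigenvalue-preserving upper-triangular form (Equation (8)), built here
# from the eigenvector of the dominant eigenvalue purely for illustration
P = np.array([[3.0, 0.0],           # first column: eigenvector for lambda = 0.9
              [1.0, 1.0]])
Abar = np.linalg.inv(P) @ Ad @ P    # = [[0.9, 0.1], [0.0, 0.5]]

assert np.allclose(np.diag(Abar), [0.9, 0.5])   # eigenvalues preserved on the diagonal
assert np.allclose(Abar[1, 0], 0.0)             # upper triangular, as in Equation (8)
```

The diagonal of the similar matrix carries the original eigenvalues unchanged, which is exactly the property the reduction step relies on.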
Hence, the system of Equations (4)-(5) can be rewritten as:

P (dx̄(t)/dt) = A P x̄(t) + B u(t),  ȳ(t) = C P x̄(t) + D u(t)

Premultiplying the first equation above by [P⁻¹], one obtains:

P⁻¹ P (dx̄(t)/dt) = P⁻¹ A P x̄(t) + P⁻¹ B u(t),  ȳ(t) = C P x̄(t) + D u(t)

which yields the following transformed model:

dx̄(t)/dt = Ā x̄(t) + B̄ u(t)   (9)
ȳ(t) = C̄ x̄(t) + D̄ u(t)   (10)

where the transformed system matrices are given by:

Ā = P⁻¹ A P   (11)
B̄ = P⁻¹ B   (12)
C̄ = C P   (13)
D̄ = D   (14)

Transforming the system matrix [A] into the form shown in Equation (8) can be achieved based on the following definition [18].

Definition. A matrix A ∈ Mn is called reducible if either:
a. n = 1 and A = 0; or
b. n ≥ 2 and there exist a permutation matrix P ∈ Mn and some integer r with 1 ≤ r ≤ n−1 such that:

P⁻¹ A P = [X Y; 0 Z]   (15)

where X ∈ Mr,r, Y ∈ Mr,n−r, Z ∈ Mn−r,n−r, and 0 ∈ Mn−r,r is a zero matrix.

The attractive features of the permutation matrix [P], such as being (1) orthogonal and (2) invertible, have made this transformation easy to carry out. However, the permutation matrix structure narrows the applicability of this method to a limited category of applications. A form of a similarity transformation can be used to correct this problem for {f : ℜn×n → ℜn×n}, where f is a linear operator defined by f(A) = P⁻¹AP [18]. Hence, based on [A] and [Ā], the corresponding LMI is used to obtain the transformation matrix [P], and the optimization problem is cast as follows:

min_P ‖P − Po‖  subject to  ‖P⁻¹ A P − Ā‖ < ε   (16)

which can be written in an LMI equivalent form as:

min trace(S)  subject to  [S (P − Po); (P − Po)ᵀ I] > 0,  [εI (P⁻¹AP − Ā); (P⁻¹AP − Ā)ᵀ I] > 0   (17)

where S is a symmetric slack matrix [6].
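The block inequalities of Equation (17) rest on the Schur-complement lemma: with the identity block positive definite, the block matrix [S M; Mᵀ I] is positive definite exactly when S − M Mᵀ is, which is what converts the norm bounds of Equation (16) into LMIs. A small numerical check (NumPy; the matrices are arbitrary illustrative values):

```python
import numpy as np

def is_pos_def(X):
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) > 0))

M = np.array([[0.3, -0.1],
              [0.2,  0.4]])          # plays the role of (P - Po) in Equation (17)
S = M @ M.T + 0.5 * np.eye(2)        # chosen so that S - M M^T = 0.5 I > 0
block = np.block([[S, M],
                  [M.T, np.eye(2)]])
assert is_pos_def(block)             # the block LMI holds

S_bad = M @ M.T - 0.1 * np.eye(2)    # violates S - M M^T > 0 ...
block_bad = np.block([[S_bad, M],
                      [M.T, np.eye(2)]])
assert not is_pos_def(block_bad)     # ... and the block LMI fails with it
```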
2.3 System transformation using neural identification
A different transformation can be performed based on the use of the recurrent ANN while preserving the eigenvalues to be a subset of the original system. In this way of obtaining a transformed model, the ANN training is used without the LMI optimization technique, and finding the permutation matrix [P] does not have to be performed: instead, [X] and [Z] in Equation (15) will contain the system eigenvalues, and [Y] in Equation (15) will be estimated directly using the corresponding ANN techniques. This may be achieved based on the assumption that the states are reachable and measurable. To achieve this goal, the recurrent ANN can identify the [Âd] and [B̂d] matrices for a given input signal, as illustrated in Figure 3. The ANN identification leads to the following [Âd] and [B̂d] transformations, which (in the case of all real eigenvalues) construct the weight matrix [W] as follows:

W = [[Âd] [B̂d]]  →  Â = [λ1 Â12 ... Â1n; 0 λ2 ... Â2n; ... ; 0 0 ... λn],  B̂ = [b̂1; b̂2; ... ; b̂n]

where the eigenvalues are selected as a subset of the original system eigenvalues. Hence, the transformation is obtained and the reduction is then achieved.

2.4 Model order reduction
Linear time-invariant (LTI) models of many physical systems have fast and slow dynamics, which may be referred to as singularly perturbed systems [19]. Neglecting the fast dynamics of a singularly perturbed system provides a reduced (i.e., slow) model. This gives the advantage of designing simpler lower-dimensionality reduced-order controllers that are based on the reduced-model information. To show the formulation of a reduced order system model, consider the singularly perturbed system [9]:

ẋ(t) = A11 x(t) + A12 ξ(t) + B1 u(t),  x(0) = x0   (18)
ε ξ̇(t) = A21 x(t) + A22 ξ(t) + B2 u(t),  ξ(0) = ξ0   (19)
y(t) = C1 x(t) + C2 ξ(t)   (20)

where x ∈ ℜm1 and ξ ∈ ℜm2 are the slow and fast state variables, respectively, u ∈ ℜn1 and y ∈ ℜn2 are the input and output vectors, respectively, {[Aii], [Bi], [Ci]} are constant matrices of appropriate dimensions with i ∈ {1, 2}, and ε is a small positive constant.
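The two-time-scale structure of Equations (18)-(20) can be seen directly in the spectrum. The scalar example below (NumPy; the coefficient values and ε are illustrative) shows the fast eigenvalue growing like 1/ε while the slow one stays close to the reduced-model matrix Ar = A11 − A12 A22⁻¹ A21:

```python
import numpy as np

# scalar two-time-scale instance of Equations (18)-(20); values are illustrative
A11, A12, A21, A22 = -1.0, 0.5, 0.5, -2.0
eps = 0.01                                   # small positive constant

# full system: d/dt [x; xi] = [[A11, A12], [A21/eps, A22/eps]] [x; xi]
A_full = np.array([[A11,       A12],
                   [A21 / eps, A22 / eps]])
lam = np.sort(np.linalg.eigvals(A_full).real)

# the spectrum splits: one eigenvalue of order A22/eps, one close to the
# reduced-model matrix Ar = A11 - A12 * A22^{-1} * A21
Ar = A11 - A12 * (1.0 / A22) * A21           # = -0.875
assert lam[0] < -100                         # fast mode, removed by the reduction
assert abs(lam[1] - Ar) < 0.01               # slow mode, kept by the reduced model
```

Shrinking ε pushes the fast eigenvalue further out while the slow one converges to Ar, which is the mechanism the singular perturbation method exploits.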
The singularly perturbed system in Equations (18)-(20) is simplified by setting ε = 0 [3,14]. In doing so, we are neglecting the fast dynamics of the system and assuming that the state variables ξ have reached the quasi-steady state. Hence, setting ε = 0 in Equation (19), with the assumption that [A22] is nonsingular, produces:

ξ(t) = −A22⁻¹ A21 xr(t) − A22⁻¹ B2 u(t)   (21)

where the index r denotes the remained (reduced) model. Substituting Equation (21) in Equations (18)-(20) yields the following reduced order model:

ẋr(t) = Ar xr(t) + Br u(t)   (22)
y(t) = Cr xr(t) + Dr u(t)   (23)

where {Ar = A11 − A12 A22⁻¹ A21, Br = B1 − A12 A22⁻¹ B2, Cr = C1 − C2 A22⁻¹ A21, Dr = −C2 A22⁻¹ B2}.

3. Neural network identification with LMI optimization for the system model order reduction
In this work, for the system of Equations (18)-(20), it is our objective to search for a similarity transformation that can be used to decouple a preselected eigenvalue set from the system matrix [A]. To achieve this objective, training the neural network to identify the transformed discrete system matrix [Ād] is performed [1,2,15,29]. The recurrent ANN presented in Section 2.1 can be summarized by defining Λ as the set of indices i for which gi(k) is an external input, defining β as the set of indices i for which yi(k) is an internal input or a neuron output, and defining ui(k) as the combination of the internal and external inputs for which i ∈ β ∪ Λ. Using this setting, the discrete model of the dynamical system is obtained as:

x(k+1) = Ad x(k) + Bd u(k)   (24)
y(k) = Cd x(k) + Dd u(k)   (25)

The identified discrete model can be written in a detailed form (as was shown in Figure 3) as follows:

[x1(k+1); x2(k+1)] = [A11 A12; A21 A22] [x1(k); x2(k)] + [B11; B21] u(k)   (26)
y(k) = [x1(k); x2(k)]   (27)

where k is the time index, and the detailed matrix elements of Equations (26)-(27) were shown in Figure 3 in the previous section. Training the ANN depends on the internal activity of each neuron, which is given by:

vj(k) = Σ_{i∈Λ∪β} wji(k) ui(k)   (28)

where wji is the weight representing an element in the system matrix or input matrix for j ∈ β and i ∈ β ∪ Λ, such that W = [[Ād] [B̄d]]. At the next time step (k+1), the output (internal input) of neuron j is computed by passing the activity through the nonlinearity φ(.), as follows:

x̄j(k+1) = φ(vj(k))   (29)

With these equations, an error can be obtained by matching a true state output with a neuron output:

ej(k) = xj(k) − x̄j(k)

Now, the objective is to minimize the cost function given by:

Etotal = Σk E(k), where E(k) = (1/2) Σ_{j∈ς} ej²(k)

where ς denotes the set of indices j for the output of the neuron structure. This cost function is minimized by estimating the instantaneous gradient of E(k) with respect to the weight matrix [W] and then updating [W] in the negative direction of this gradient [15,29]. In steps, based on an approximation of the method of steepest descent, this proceeds as follows:
- Initialize the weights [W] by a set of uniformly distributed random numbers. Starting at the instant (k = 0), use Equations (28)-(29) to compute the output values of the N neurons (where N = |β|).
- For every time step k and all j ∈ β, m ∈ β and ℓ ∈ β ∪ Λ, compute the dynamics of the system, which are governed by the triply-indexed set of variables:

π^j_mℓ(k+1) = φ'(vj(k)) [ Σ_{i∈β} wji(k) π^i_mℓ(k) + δmj uℓ(k) ]

with initial conditions π^j_mℓ(0) = 0, where the term ∂wji(k)/∂wmℓ(k) is equal to "1" only when {j = m, i = ℓ} and is "0" otherwise. Notice that, for the special case of a sigmoidal nonlinearity in the form of a logistic function, the derivative φ'(.) is given by φ'(vj(k)) = yj(k+1)[1 − yj(k+1)].
- Compute the weight changes corresponding to the error signal and system dynamics:

Δwmℓ(k) = η Σ_{j∈ς} ej(k) π^j_mℓ(k)   (30)

- Update the weights in accordance with:

wmℓ(k+1) = wmℓ(k) + Δwmℓ(k)   (31)

- Repeat the computation until the desired identification is achieved. Since the training is performed on the zero input response, the ANN identifies the system matrix [Ad] as illustrated in Equation (6).
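As a consistency check on the identification step, note that for the zero input data of Equation (6) the weights that minimize the cost correspond to the true [Ad]. The sketch below (NumPy) recovers [Ad] from simulated zero input trajectories by least squares, which stands in here for the ANN's steepest-descent training loop (an assumption of this sketch, not the chapter's method):

```python
import numpy as np

Ad_true = np.array([[0.8, 0.3],
                    [0.1, 0.6]])

# zero input responses (u(k) = 0) from a few random initial states
rng = np.random.default_rng(1)
X, Xn = [], []
for _ in range(5):
    x = rng.standard_normal(2)
    for k in range(10):
        X.append(x)
        x = Ad_true @ x             # x(k+1) = Ad x(k), Equation (6)
        Xn.append(x)
X, Xn = np.array(X), np.array(Xn)

# least squares on Xn ~ X Ad^T; with noise-free data this recovers [Ad]
# exactly, which is the fixed point the ANN training approaches
Ad_est = np.linalg.lstsq(X, Xn, rcond=None)[0].T
assert np.allclose(Ad_est, Ad_true)
```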
As illustrated in Equations (6)-(7), for the purpose of estimating only the transformed system matrix [Ād], the training is based on the zero input response. Once the training is completed, the obtained weight matrix [W] is the discrete identified transformed system matrix [Ād]. Transforming the identified system back to the continuous form yields the desired continuous transformed system matrix [Ā]. Using the LMI optimization technique, which was illustrated in Section 2.2, the permutation matrix [P] is then determined. By performing this, a complete system transformation, as shown in Equations (9)-(10), is achieved. For the model order reduction, the system in Equations (9)-(10) can be written as:

[ẋr(t); ẋo(t)] = [Ar Ac; 0 Ao] [xr(t); xo(t)] + [Br; Bo] u(t)   (32)
[yr(t); yo(t)] = [Cr Co] [xr(t); xo(t)] + [Dr; Do] u(t)   (33)

This system transformation enables us to decouple the original system into the retained (r) and omitted (o) eigenvalues. The retained eigenvalues are the dominant eigenvalues that produce the slow dynamics, and the omitted eigenvalues are the nondominant eigenvalues that produce the fast dynamics. Equation (32) may be written as:

ẋr(t) = Ar xr(t) + Ac xo(t) + Br u(t),  ẋo(t) = Ao xo(t) + Bo u(t)

The coupling term Ac xo(t) may be compensated for by solving for xo(t) in the second equation above, with ẋo(t) set to zero using the singular perturbation method (by setting ε = 0). By doing so, the following equation is obtained:

xo(t) = −Ao⁻¹ Bo u(t)   (34)

Using xo(t), we get the reduced order model given by:

ẋr(t) = Ar xr(t) + [−Ac Ao⁻¹ Bo + Br] u(t)   (35)
y(t) = Cr xr(t) + [−Co Ao⁻¹ Bo + D] u(t)   (36)

Hence, the overall reduced order model may be represented by:

ẋr(t) = Aor xr(t) + Bor u(t)   (37)
y(t) = Cor xr(t) + Dor u(t)   (38)

where the details of the {[Aor], [Bor], [Cor], [Dor]} overall reduced matrices were shown in Equations (35)-(36).

4. Examples for the dynamic system order reduction using neural identification
The following subsections present the implementation of the new proposed method of system modeling using supervised ANN, with and without using LMI, and using model order reduction.
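The reduction formulas of Equations (34)-(38) can be verified on a small transformed model. In the sketch below (NumPy; all numerical values are illustrative), the overall reduced matrices are formed and the reduced model is checked to keep the slow eigenvalue and to match the full transformed model's DC gain exactly:

```python
import numpy as np

# transformed block-triangular model (Equations (32)-(33)); values illustrative
Ar = np.array([[-1.0]])      # retained (slow) block
Ao = np.array([[-50.0]])     # omitted (fast) block
Ac = np.array([[2.0]])       # coupling block
Br, Bo = np.array([[1.0]]), np.array([[3.0]])
Cr, Co = np.array([[1.0]]), np.array([[0.5]])
D = np.array([[0.0]])

# overall reduced model (Equations (34)-(38))
Aor = Ar
Bor = Br - Ac @ np.linalg.inv(Ao) @ Bo
Cor = Cr
Dor = D - Co @ np.linalg.inv(Ao) @ Bo

# the reduced model keeps the slow eigenvalue and matches the full
# transformed model's DC gain
A_full = np.block([[Ar, Ac], [np.zeros((1, 1)), Ao]])
B_full = np.vstack([Br, Bo])
C_full = np.hstack([Cr, Co])
dc_full = D - C_full @ np.linalg.inv(A_full) @ B_full
dc_red = Dor - Cor @ np.linalg.inv(Aor) @ Bor
assert np.allclose(dc_full, dc_red)
assert np.allclose(np.linalg.eigvals(Aor), [-1.0])
```

Matching DC gains is the expected effect of the correction term −Ac Ao⁻¹ Bo: it compensates the steady-state contribution of the omitted fast states.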
that can be directly utilized for the robust control of dynamic systems. 5.1 Model reduction using neuralbased state transformation and lmibased complete system transformation The following example illustrates the idea of dynamic system model order reduction using LMI with comparison to the model order reduction without using LMI. and software specifications of MS Windows XP 2002 OS and Matlab 6.72 Recent Advances in Robust Control – Novel Approaches and Design Methods order reduction. (a) (b) Fig. As seen in Figure 5. the system is designed with a small capstan to pull the tape past the read/write heads with the takeup reels turned by DC motors [10]. The used tape drive system: (a) a front view of a typical tape drive mechanism. and 504 MB of RAM. . Let us consider the system of a highperformance tape transport which is illustrated in Figure 5. 4.5 simulator.40 GHz. The presented simulations were tested on a PC platform with hardware specifications of Intel Pentium 4 CPU 2. and (b) a schematic control model.
As can be shown, in static equilibrium the tape tension equals the vacuum force (T0 = F) and the torque from the motor equals the torque on the capstan (Kt i0 = r1 T0), where T0 is the tape tension at the read/write head at equilibrium and i0 is the equilibrium motor current. The system variables are defined as deviations from this equilibrium, and the system equations of motion are given as follows:

J1 dω1/dt + β1 ω1 - r1 T = Kt i,    x1 = r1 θ1,  ẋ1 = r1 ω1

L di/dt + R i + Ke ω1 = e

J2 dω2/dt + β2 ω2 + r2 T = 0,    x2 = r2 θ2,  ẋ2 = r2 ω2

T = K1 (x3 - x1) + D1 (ẋ3 - ẋ1)

T = K2 (x2 - x3) + D2 (ẋ2 - ẋ3)

x3 = (x1 + x2)/2

where D1,2 is the damping in the tape-stretch motion, K1,2 is the spring constant in the tape-stretch motion, ω1 is the speed of the drive wheel (dθ1/dt), θ1 is the angular displacement of the capstan, J1 is the combined inertia of the wheel and take-up motor, β1 is the viscous friction at the take-up wheel, r1 is the radius of the capstan take-up wheel, i is the current into the capstan motor, Kt is the torque constant of the motor, Ke is the electric constant of the motor, L is the armature inductance, R is the armature resistance, e is the applied input voltage (V), J2 is the inertia of the idler, β2 is the viscous friction at the wheel, r2 is the radius of the tape on the idler, θ2 is the tachometer shaft angle, ω2 is the output speed measured by the tachometer (dθ2/dt), x3 is the position of the tape at the head, ẋ3 is the velocity of the tape at the head, T is the tape tension at the read/write head, and F is the constant force (i.e., the tape tension for the vacuum column).

The state-space form is derived from the system equations, where there is one input, which is the applied voltage; three outputs, which are (1) tape position at the head, (2) tape tension, and (3) tape position at the wheel; and five states, which are (1) tape position at the air bearing, (2) drive wheel speed, (3) tape position at the wheel,
(4) tachometer output speed, and (5) capstan motor speed. The following subsections will present the simulation results for the investigation of different system cases using transformations with and without utilizing the LMI optimization technique.

4.1.1 System transformation using neural identification without utilizing linear matrix inequality

This subsection presents simulation results for system transformation using ANN-based identification without using LMI.

Case #1. Let us consider the following case of the tape transport:

ẋ(t) = A x(t) + B u(t)

where [A] is the numerical 5x5 system matrix and [B] is the 5x1 input matrix of the tape-transport model,
together with the output equation

y(t) = C x(t)

where [C] is the 3x5 output matrix. By discretizing the above system with a sampling time Ts = 0.1 sec, using a step input with learning time Tl = 300 sec, and then training the ANN on the input/output data with a learning rate η = 0.005 and with initial weights w = [[Ad] [Bd]] set to small values, the training produces the transformed model for the system and input matrices, [Â] and [B̂]. In order to obtain the reduced model, the reduction is based on the identification of the input matrix [B̂] and the transformed system matrix [Â], and this identification is achieved utilizing the recurrent ANN.

As can be seen, all of the system eigenvalues have been preserved in this transformed model, with a little difference due to discretization. Two eigenvalues are complex and three are real: the complex pair together with one real eigenvalue produce the slow dynamics, while the two remaining real eigenvalues produce the fast dynamics. Thus, since (1) not all the eigenvalues are complex and (2) the existing real eigenvalues produce the fast dynamics that we need to eliminate, model order reduction can be applied. Using the singular perturbation technique, the reduced 3rd order model of the form ẋr(t) = Aor xr(t) + Bor u(t), y(t) = Cor xr(t) + Dor u(t) is obtained, with the numerical reduced matrices as identified.
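The supervised training used above can be sketched on a first-order example, where two weights play the role of w = [[Ad] [Bd]] and are updated from input/output samples with a delta rule; the plant, input sequence, and learning rate below are illustrative assumptions, not the tape-drive values:

```python
import math

# Identify the discrete model x[k+1] = ad*x[k] + bd*u[k] from input/output data,
# mirroring the chapter's supervised training of w = [[Ad] [Bd]] (scalar sketch).
ad_true, bd_true = 0.9, 0.1          # "unknown" plant used only to generate data
x, data = 0.0, []
for k in range(200):
    u = math.sin(k)                  # varying input keeps the regressors exciting
    x_next = ad_true * x + bd_true * u
    data.append((x, u, x_next))
    x = x_next

ad, bd, eta = 0.0, 0.0, 0.1          # initial weights and learning rate
for _ in range(200):                 # training epochs over the recorded data
    for xk, uk, xk1 in data:
        err = xk1 - (ad * xk + bd * uk)   # one-step prediction error
        ad += eta * err * xk              # delta-rule (LMS) weight updates
        bd += eta * err * uk

print(ad, bd)
```

After training, (ad, bd) approximate the data-generating values, just as the converged weight matrix [W] is taken as the identified [Ad] above.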
Written back in the continuous form, the identified transformed model is ẋ(t) = Ā x(t) + B̄ u(t), with the transformed output equation y(t) = C̄ x(t). The five eigenvalues of the model, of which two are complex and three are real, match those of the original system, as observed.
It is also observed in the above model that the reduced order model has preserved its eigenvalues {0.9809, 0.5967 ± j0.8701} as a subset of the original system, while the reduced order model obtained using the singular perturbation without system transformation has provided different eigenvalues {0.9973, 0.5980 ± j0.8283}. Evaluations of the reduced order models (transformed and non-transformed) were obtained by simulating both systems for a step input. Simulation results are shown in Figure 6.

Fig. 6. Reduced 3rd order models (--- transformed, ... non-transformed) output responses to a step input, along with the non-reduced model (___ original) 5th order system output response.

Based on Figure 6, it is seen that the non-transformed reduced model provides a response that is better than that of the transformed reduced model. The cause of this is that the transformation at this point is performed only for the [A] and [B] system matrices, leaving the [C] matrix unchanged. Therefore, the system transformation is further considered for a complete system transformation using LMI (for {[A], [B], [C], [D]}), as will be seen in subsection 4.1.2, where the LMI-based transformation will produce better reduction-based response results than both the non-transformed and the transformed-without-LMI cases.

Case #2. In this case, two of the five eigenvalues are complex, three are real, and only one eigenvalue is considered to produce fast dynamics. The discretized model with Ts = 0.071 sec is used, for a step input with learning time Tl = 70 sec, through training the ANN on the input/output data with η = 3.5 × 10^-5.
Consider now the following case:

ẋ(t) = A x(t) + B u(t),    y(t) = C x(t)

with modified numerical tape-transport system matrices. The five eigenvalues again include a complex pair (near 0.6912 ± j1.3082), with the remaining eigenvalues real and only one of them fast. The ANN is trained starting from the initial weight matrix given by:
a 5x6 matrix w of small initial values. By applying the singular perturbation reduction technique, a reduced 4th order model is obtained, where all of its eigenvalues {2.0002, 0.3696, 0.6912 ± j1.3081} are preserved as a subset of the original system. This reduced 4th order model is simulated for a step input and then compared to both the reduced model without transformation and the original system response. Simulation results are shown in Figure 7, where again the non-transformed reduced order model provides a response that is better than that of the transformed reduced model. The reason for this follows closely the explanation provided for the previous case.

Fig. 7. Reduced 4th order models (… transformed, ... non-transformed) output responses to a step input, along with the non-reduced (___ original) 5th order system output response.
Case #3. Let us consider the following system:

ẋ(t) = A x(t) + B u(t),    y(t) = C x(t)

a further case of the tape transport with modified numerical system matrices. The eigenvalues again include one fast real mode together with slower modes. Utilizing the discretized model with Ts = 0.1 sec, for a step input with learning time Tl = 500 sec, and with the initial weight matrix set to small values,
and training the ANN on the input/output data with η = 1.25 × 10^-5, the transformed model is identified, and by applying the singular perturbation technique the reduced 3rd order model is obtained, with the numerical reduced matrices as computed. Again, it is seen here that the eigenvalues of the reduced order model are preserved as a subset of the original system, while the reduced model without system transformation provided different eigenvalues, all of them real, differing from those of the transformed reduced order model. Simulating both systems for a step input provided the results shown in Figure 8, where it is also seen that the response of the non-transformed reduced model is better than that of the transformed reduced model, which is again caused by leaving the output [C] matrix without transformation. As was mentioned, the transformed reduced model did not provide an acceptable response as compared with either the reduced non-transformed or the original responses, due to the fact of not transforming the complete system (i.e., neglecting the [C] matrix).

4.1.2 LMI-based state transformation using neural identification

As observed in the previous subsection, the system transformation without using the LMI optimization method, whose objective was to preserve the system eigenvalues in the reduced model, did not provide an acceptable response. In order to achieve a better response, we will now perform a
complete system transformation utilizing the LMI optimization technique to obtain the permutation matrix [P] based on the transformed system matrix [Ā] that results from the ANN-based identification. The following presents simulations for the previously considered tape drive system cases.

Fig. 8. Reduced 3rd order models (… transformed, ... non-transformed) output responses to a step input, along with the non-reduced (___ original) 5th order system output response.

Case #1. For the example of case #1 in subsection 4.1.1, the system is discretized with Ts = 0.1 sec, using a step input with learning time Tl = 15 sec, and the ANN is trained on the input/output data with η = 0.001 and small initial weights for the [Ad] matrix. The ANN identification is now used to identify only the transformed [Ad] matrix, which produces the transformed system matrix [Ā]. Based on this transformed matrix, and using the LMI technique, the permutation matrix [P] was computed and then used for the complete system transformation, from which the transformed {[B̄], [C̄], [D̄]} matrices were obtained. Model order reduction was then performed.
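The key property of the complete transformation can be checked on a small example: for any invertible matrix standing in for the LMI-computed [P], the transformed system matrix Ā = P A P^(-1) is similar to A and therefore keeps its eigenvalues, while B̄ = P B and C̄ = C P^(-1). The 2x2 values below are illustrative:

```python
# Similarity (state) transformation A~ = P A P^(-1) on an illustrative 2x2 system.
def mul(X, Y):                       # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):                          # closed-form 2x2 inverse
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

A = [[0.0, 1.0], [-2.0, -3.0]]       # eigenvalues -1 and -2
P = [[1.0, 1.0], [0.0, 1.0]]         # stand-in for the LMI-derived matrix

At = mul(mul(P, A), inv(P))          # transformed system matrix

# Similar matrices share trace and determinant, hence the same eigenvalues.
print(At[0][0] + At[1][1])                        # trace: matches trace(A) = -3
print(At[0][0] * At[1][1] - At[0][1] * At[1][0])  # det: matches det(A) = 2
```

This eigenvalue preservation is what allows the subsequent reduction step to keep the retained modes exactly.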
The resulting reduced 3rd order model, with the numerical reduced matrices as computed, clearly achieves the objective of eigenvalue preservation. Investigating the performance of this new LMI-based reduced order model shows that the new completely transformed system is better than all the previous reduced models (transformed and non-transformed). This is clearly shown in Figure 9, where the 3rd order reduced model based on the LMI optimization transformation provided a response that is almost the same as the 5th order original system response.

Fig. 9. Reduced 3rd order models (… non-transformed, --- transformed without LMI, -.- transformed with LMI) output responses to a step input, along with the non-reduced (___ original) system output response. The LMI-transformed curve fits almost exactly on the original response.

Case #2. For the example of case #2 in subsection 4.1.1, with Ts = 0.1 sec, 200 input/output data learning points, and η = 0.0051, with small initial weights for the [Ad] matrix,
05 0. The LMItransformed curve fits almost exactly on the original response.0061 ⎢ y(t ) = ⎢ 0. with LMI. the LMIreductionbased technique has provided a response that is better than both of the reduced nontransformed and nonLMIreduced transformed responses and is almost identical to the original system response.6910 1. for Ts = 0. Simulating this reduced order model to a step input.01 0. and η = 1 x 104 with initial weights for [ Ad ] given as: . ___ Original.04 System Output 0. Investigating the example of case #3 in subsection 4.07 0.0946 ⎥ x(t ) + ⎢ 0.. The complete system transformation was then performed and the reduction technique produced the following 3rd order reduced model: ⎡ 0.8578 ⎤ ⎡ 0.0080 ⎥ ⎦ ⎣ ⎦ with eigenvalues preserved as desired..0261 0.1.06 0.1 sec.. 200 input/output data points.7621⎤ ⎢ ⎥ ⎢ ⎥ x(t ) = ⎢ 1. ..5719 ⎥ x(t ) + ⎢ 0. Case #3.80 Recent Advances in Robust Control – Novel Approaches and Design Methods the transformed [ A ] was obtained and used to calculate the permutation matrix [P].0459 ⎢ 0.0117 ⎣ 0.3088 3. 10.01 0 0. without LMI 0. transformed without LMI.02 0..03 0.None Trans. as done previously. Reduced 3rd order models (…... Trans.0014 ⎥ 0.6910 1.nontransformed. .0015 ⎥ u(t ) ⎢ 0. .0111 ⎤ ⎡ 0.4466 ⎥ 0 0.02 0 2 4 6 8 10 Time[s] 12 14 16 18 20 Fig..transformed with LMI) output responses to a step input along with the non reduced ( ____ original) system output response.0155 0.Trans. . Here.3697 ⎥ ⎣ ⎦ ⎣ ⎦ ⎡ 0.1.3088 0.1118 ⎥ u(t ) ⎢ 0 ⎢ 0. provided the response shown in Figure 10.. .0187 0.0015 ⎤ ⎥ ⎢ ⎥ 0..
the LMI-based transformation and then the order reduction were performed. Simulation results of the reduced order models and the original system are shown in Figure 11. Again, the response of the reduced order model using the complete LMI-based transformation is the best as compared to the other reduction techniques.

Fig. 11. Reduced 3rd order models (… non-transformed, --- transformed without LMI, -.- transformed with LMI) output responses to a step input, along with the non-reduced (___ original) system output response. The LMI-transformed curve fits almost exactly on the original response.

5. The application of closed-loop feedback control on the reduced models

Utilizing the LMI-based reduced system models that were presented in the previous section, various control techniques that can be utilized for the robust control of dynamic systems are considered in this section to achieve the desired system performance. These control methods include (a) PID control, (b) state feedback control using (1) pole placement for the desired eigenvalue locations and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control.

5.1 Proportional-Integral-Derivative (PID) control

A PID controller is a generic control loop feedback mechanism which is widely used in industrial control systems [7,10,24]. It attempts to correct the error between a measured
process variable (output) and a desired setpoint (input) by calculating and then providing a corrective signal that can adjust the process accordingly, as shown in Figure 12.

Fig. 12. Closed-loop feedback single-input single-output (SISO) control using a PID controller.

In the control design process, the three parameters of the PID controller {Kp, Ki, Kd} have to be calculated for specific process requirements such as the system overshoot and the settling time. It is normal that, once they are calculated and implemented, they will affect the other outputs' performance, and therefore a further PID-based tuning operation must be applied.

Focusing on one output of the tape-drive machine, the PID controller using the reduced order model for the desired output was investigated. For the PID-controlled output of the tachometer shaft angle, the identified reduced 3rd order model is considered for the output of the tape position at the head, with the transfer function G(s)original given as a 2nd order numerator over a 3rd order denominator with the numerical coefficients as identified. Searching for suitable values of the PID controller parameters, such that the system provides a faster response settling time and less overshoot, it is found that {Kp = 100, Ki = 80, Kd = 90}, with the resulting controlled closed-loop transfer function G(s)controlled. Simulating the new PID-controlled system for a step input provided the results shown in Figure 13, where the settling time is almost 1.5 sec, while without the controller it was greater than 6 sec. Also, as observed, the overshoot has much decreased after using the PID controller. As seen in Figure 14, the output of interest (i.e., the 2nd output) is controlled as desired using the PID controller. However, the response of the rest of the system is not actually as desired. On the other hand,
the other system outputs can be PID-controlled using the cascading of the current process PID and new tuning-based PIDs for each output. Therefore, the controlling scheme would be as shown in Figure 14.
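The PID law discussed above can be sketched as a simple simulation loop; the plant and the gains below are illustrative stand-ins (a first-order process rather than the tape-drive transfer function):

```python
# Discrete-time PID loop on an illustrative first-order plant dx/dt = -x + u.
Kp, Ki, Kd = 2.0, 1.0, 0.1
dt, setpoint = 0.01, 1.0
x, integ = 0.0, 0.0
prev_err = setpoint - x              # seed to avoid a spurious derivative kick
for _ in range(5000):                # 50 s of simulated time
    err = setpoint - x
    integ += err * dt                # integral of the error
    deriv = (err - prev_err) / dt    # backward-difference derivative
    u = Kp * err + Ki * integ + Kd * deriv
    prev_err = err
    x += (-x + u) * dt               # forward-Euler plant update
print(x)
```

The integral term drives the steady-state error to zero, while Kd damps the transient; tuning {Kp, Ki, Kd} trades settling time against overshoot, which is the same trade-off described for the tape-drive outputs.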
Fig. 13. Reduced 3rd order model PID-controlled and uncontrolled step responses.

Fig. 14. Closed-loop feedback single-input multiple-output (SIMO) system with a PID controller: (a) a generic SIMO diagram, and (b) a detailed SIMO diagram.

As shown in Figure 14, the tuning process is accomplished using G1T and G3T. For example, for the 1st output:

Y1 = G1T · PID · (R - Y2) ≡ G1 R    (39)

∴ G1T = G1 R / (PID · (R - Y2))    (40)

where Y2 is the Laplace transform of the 2nd output. Similarly, G3T can be obtained.

5.2 State feedback control

In this section, we will investigate the state feedback control techniques of pole placement and the LQR optimal control for the enhancement of the system performance.

5.2.1 Pole placement for the state feedback control

For the reduced order model in the system of Equations (37)-(38), a simple pole-placement-based state feedback controller can be designed, assuming that a controller is
needed to provide the system with an enhanced performance by relocating the eigenvalues. A state feedback control for pole placement can be illustrated by the block diagram shown in Figure 15, where the objective can be achieved using the control input given by:

u(t) = -K xr(t) + r(t)    (41)

where K is the state feedback gain designed based on the desired system eigenvalues.

Fig. 15. Block diagram of a state feedback control with the {[Aor], [Bor], [Cor], [Dor]} overall reduced order system matrices.

Replacing the control input u(t) in Equations (37)-(38) by the above new control input in Equation (41) yields the following reduced system equations:

ẋr(t) = Aor xr(t) + Bor [-K xr(t) + r(t)]    (42)

y(t) = Cor xr(t) + Dor [-K xr(t) + r(t)]    (43)

which can be rewritten as:

ẋr(t) = [Aor - Bor K] xr(t) + Bor r(t)

y(t) = [Cor - Dor K] xr(t) + Dor r(t)

as illustrated in Figure 16.

Fig. 16. Block diagram of the overall state feedback control for pole placement.
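For a system already in controllable canonical form, the gain K in Equation (41) can be computed by matching characteristic-polynomial coefficients, which is the essence of pole placement; the numbers below are illustrative, not the reduced tape-drive model:

```python
# Pole placement for x' = A x + B u with A = [[0, 1], [-a0, -a1]], B = [[0], [1]].
# The closed-loop characteristic polynomial of A - B K with K = [k1, k2] is
# s^2 + (a1 + k2) s + (a0 + k1), so K follows by matching coefficients.
a0, a1 = 2.0, 3.0                 # open loop: s^2 + 3 s + 2 (poles -1, -2)
p1, p2 = -4.0, -5.0               # desired closed-loop poles
d1, d0 = -(p1 + p2), p1 * p2      # desired polynomial s^2 + 9 s + 20
k1, k2 = d0 - a0, d1 - a1         # state feedback gain K = [18, 6]
print(k1, k2)
```

For general state-space models, MATLAB's `place` (or `acker`) function computes the same gain numerically.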
The overall closed-loop system model may then be written as:

ẋr(t) = Acl xr(t) + Bcl r(t)    (44)

y(t) = Ccl xr(t) + Dcl r(t)    (45)

such that the closed-loop system matrix [Acl] will provide the new desired system eigenvalues. For example, for the system of case #3, the state feedback was used to reassign the eigenvalues, and the state feedback gain K was then found (with the numerical values as computed), which placed the eigenvalues as desired and enhanced the system performance as shown in Figure 17.

Fig. 17. Reduced 3rd order state feedback control (for pole placement) output step response, compared with the original (___) full order system output step response.

5.2.2 Linear-Quadratic Regulator (LQR) optimal control for the state feedback control

Another method for designing a state feedback control for system performance enhancement may be achieved based on minimizing the cost function given by [10]:

J = ∫₀^∞ (x^T Q x + u^T R u) dt    (46)

which is defined for the system ẋ(t) = A x(t) + B u(t), where Q and R are weight matrices for the states and input commands, respectively. This is known as the LQR problem, which has received much special attention due to the fact that it can be solved analytically and that the resulting optimal controller is expressed in an easy-to-implement state feedback form [7,10]. The feedback control law that minimizes the value of the cost is given by:

u(t) = -K x(t)    (47)
where K is the solution of K = R^(-1) B^T q, and [q] is found by solving the algebraic Riccati equation, which is described by:

A^T q + q A - q B R^(-1) B^T q + Q = 0    (48)

where [Q] is the state weighting matrix and [R] is the input weighting matrix. A direct solution for the optimal control gain may be obtained using the MATLAB statement K = lqr(A, B, Q, R).

The LQR optimization technique is applied to the reduced 3rd order model in case #3 of subsection 4.1.2 for the enhancement of the system behavior. In our example, R = 1 and the [Q] matrix was found using the output matrix [C] such that Q = C^T C. The state feedback optimal control gain was found to be K = [0.0967 0.0192 0.0027], which, when the complete system is simulated for a step input, provided the normalized output response (with a normalization factor γ = 1.934) shown in Figure 18. As seen in Figure 18, the optimal state feedback control has enhanced the system performance.

Fig. 18. Reduced 3rd order LQR state feedback control output step response, compared with the original (___) full order system output step response.
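In the scalar case, the algebraic Riccati equation (48) can be solved in closed form, which makes the structure of the LQR gain transparent; all numerical values below are illustrative:

```python
import math

# Scalar LQR for dx/dt = a x + b u with cost J = integral of (q x^2 + r u^2) dt.
# The scalar Riccati equation 2 a p - (b p)^2 / r + q = 0 has the positive root:
a, b, q, r = 1.0, 1.0, 3.0, 1.0
p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
K = b * p / r                         # optimal gain for u = -K x (Eq. 47)

residual = 2 * a * p - (b * p) ** 2 / r + q   # Riccati residual, ~0 at the root
print(p, K, a - b * K)                # closed-loop pole a - b*K is negative
```

The closed-loop pole works out to a - bK = -sqrt(a^2 + b^2 q / r), so increasing the state weight q moves the pole further left at the price of larger control effort.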
5799 2. The control input is now given by u(t ) = −K y(t ) + r (t ) . [ I + Dor K ]−1 Dor Bor [ I + KDor ]−1 + + xr (t ) r(t) ∫ xr (t ) + [ I + Dor K ] C or −1 + y(t) Aor − Bor K [ I + Dor K ]−1 C or Fig. 20. . An overall block diagram of an output feedback control. Considering the reduced 3rd order model in case #3 of subsection 4.6276 11]. The normalized controlled system step response is shown in Figure 21.2 for system behavior enhancement using the output feedback control. By applying this control to the considered system. 19.Robust Control Using LMI Transformation and NeuralBased Identification for Regulating SingularlyPerturbed Reduced Order EigenvaluePreserved Dynamic Systems 87 Dor r(t) + u(t) +  Bor xr (t ) ∫ Aor xr ( t ) + y(t) C or + + K Fig. where one can observe that the system behavior is enhanced as desired. the feedback control gain is found to be K = [0.1. the system equations become [7]: xr (t ) = Aor xr (t ) + Bor [ −K (C or xr (t ) + Dor u(t )) + r (t )] = Aor xr (t ) − Bor KC or xr (t ) − Bor KDor u(t ) + Bor r (t ) = [ Aor − Bor KC or ]xr (t ) − Bor KDor u(t ) + Bor r (t ) = [ Aor − Bor K [ I + Dor K ]−1 C or ]xr (t ) + [ Bor [ I + KDor ]−1 ]r (t ) y(t ) = C or xr (t ) + Dor [ −K y(t ) + r (t )] = C or xr (t ) − Dor Ky(t ) + Dor r (t ) = [[ I + Dor K ] C or ]xr (t ) + [[ I + Dor K ] Dor ]r (t ) −1 −1 (49) (50) This leads to the overall block diagram as seen in Figure 20. Block diagram of an output feedback control. where y(t ) = C or xr (t ) + Dor u(t ) .
6. Conclusions and future work

In control engineering, robust control is an area that explicitly deals with uncertainty in its approach to the design of the system controller. The methods of robust control are designed to operate properly as long as disturbances or uncertain parameters are within a compact set, where robust methods aim to accomplish robust performance and/or stability in the presence of bounded modeling errors. A robust control policy is static, in contrast to the adaptive (dynamic) control policy where, rather than adapting to measurements of variations, the system controller is designed to function assuming that certain variables will be unknown but, for example, bounded.

This research introduces a new method of hierarchical intelligent robust control for dynamic systems. In order to implement this control method, the order of the dynamic system was reduced. This reduction was performed by the implementation of a recurrent supervised neural network to identify certain elements [Ac] of the transformed system matrix [Ā]. To obtain the transformed matrix [Ā], the zero input response was used in order to obtain output data related to the state dynamics based only on the system matrix [A], while the other elements [Ar] and [Ao] are set based on the system eigenvalues, such that [Ar] contains the dominant eigenvalues (i.e., slow dynamics) and [Ao] contains the non-dominant eigenvalues (i.e., fast dynamics). After the transformed system matrix was obtained, the optimization algorithm of linear matrix inequality was utilized to determine the permutation matrix [P], which is required to complete the system transformation matrices {[B̄], [C̄], [D̄]}. The reduction process was then applied using the singular perturbation method, which operates on neglecting the faster-dynamics eigenvalues and leaving the dominant slow-dynamics eigenvalues to control the system. The comparison simulation results show clearly that modeling and control of the dynamic system using LMI is superior to that without using LMI.

Fig. 21.
Reduced 3rd order output feedback controlled step response, compared with the original (___) full order system uncontrolled output step response.
Willcox. Prentice Hall. Powell. 202 – 216. Canada. Vol. No. Linear Matrix Inequalities in System and Control Theory. Franklin. 37. El Ghaoui. Toronto. Vol. Modern Control Theory. Benner. [7] W. AlonsoQuesada. 8. [8] T. No. 1976. Editors: SioIong Ao. and output feedback control were then implemented to the reduced model to obtain the desired enhanced response of the full order system. AIP Conference Proceedings 1174. Future work will also involve the fundamental investigation of achieving model order reduction for dynamic systems with all eigenvalues being complex. such as using fuzzy logic and genetic algorithms. state feedback control utilizing (a) pole assignment and (b) LQR optimal control. “Model Reduction at ICIAM'07. C. Las Vegas. 3. and V. Kokotovic. and P. J. References [1] A. U.” Proc. L. 3. BuiThanh. “Model Reduction of MIMO System via Tangential Interpolation. Journal of Computer Science (IJCS). June 2005. 26. New York. Automatic Control. and A. February 1989. [4] P.” 7th International Model Analysis Conference. S. N. Feedback Control of Dynamic Systems.” SIAM News. utilizing the control hierarchy introduced in this research.” SIAM Journal of Matrix Analysis and Applications. Alan HoiShou Chan.S. AlRabadi. [10] G.Robust Control Using LMI Transformation and NeuralBased Identification for Regulating SingularlyPerturbed Reduced Order EigenvaluePreserved Dynamic Systems 89 to that without using LMI. 3rd Edition. Ibeas. Vol. Hideki Katagiri and Li Xu.” American Institute of Physics (AIP). Innsbruck. [5] A. “Comparison of System Characteristics Using Various Model Reduction Techniques. E. A. 1994. [2] A. Boyd. 1991. Simple feedback control methods using PID control. “Intelligent Control of SingularlyPerturbed Reduced Order EigenvaluePreserved Quantum Computing Systems via Artificial Neural Identification and Linear Matrix Inequality Transformation. No. 2. Society for Industrial and Applied Mathematics (SIAM). J. [11] K. EmamiNaeini. Chow. 328349. pp.. 
De La Sen. 2007. . D. [9] J. 2009. Milani.” 17th American Institute of Aeronautics and Astronautics (AIAA) Computational Fluid Dynamics Conf. M. Special Edition of the International MultiConference of Engineers and Computer Scientists 2009. Future work will involve the application of new control techniques. 3rd Edition. 1994.” IAENG Int. F. [3] P. and P. 2004. and A. 40. of the International Association of Science and Technology. 7. Van Dooren.A. “A Decomposition of NearOptimal Regulators for Systems with Slow and Fast Modes. Feron. pp. and J. Austria. H. BilbaoGuillerna. Nevada. Vandendorpe. 701705. [6] S. O’Callahan. 2004. In: IAENG Transactions on Engineering Technologies. Artificial Intelligence and Application. Balakrishnan. L. “Model Reduction for LargeScale CFD Applications Using the Balanced Proper Orthogonal Decomposition. Brogan. AlRabadi. and K. AddisonWesley. Gallivan. V. “Artificial Neural Identification and LMI Transformation for Model ReductionBased Control of the Buck SwitchMode Regulator.” IEEE Trans. N. AC21. 2010. Avitabile. pp. “Artificial Intelligence Tools for Discrete Multiestimation Adaptive Control Scheme with Model Reduction Issues. Vol..
[12] K. Gallivan, A. Vandendorpe, and P. Van Dooren, "Sylvester Equation and Projection-Based Model Reduction," Journal of Computational and Applied Mathematics, Vol. 162, pp. 213–229, 2004.
[13] G. Garcia, J. Daafouz, and J. Bernussou, "H2 Guaranteed Cost Control for Singularly Perturbed Uncertain Systems," IEEE Trans. Automatic Control, Vol. 43, pp. 1323–1329, 1998.
[14] R. Guyan, "Reduction of Stiffness and Mass Matrices," AIAA Journal, Vol. 3, No. 2, 1965.
[15] S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan Publishing Company, New York, 1994.
[16] W. Hayt, J. Kemmerly, and S. Durbin, Engineering Circuit Analysis, 7th Edition, McGraw-Hill, New York, 2007.
[17] G. Hinton and R. Salakhutdinov, "Reducing the Dimensionality of Data with Neural Networks," Science, Vol. 313, pp. 504–507, 2006.
[18] R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[19] S. Javid, "Observing the Slow States of a Singularly Perturbed System," IEEE Trans. Automatic Control, Vol. AC-25, pp. 277–280, 1980.
[20] H. Khalil, "Output Feedback Control of Linear Two-Time-Scale Systems," IEEE Trans. Automatic Control, Vol. AC-32, pp. 784–792, 1987.
[21] H. Khalil and P. Kokotovic, "Control Strategies for Decision Makers Using Different Models of the Same System," IEEE Trans. Automatic Control, Vol. AC-23, pp. 289–297, 1978.
[22] P. Kokotovic, R. O'Malley, and P. Sannuti, "Singular Perturbation and Order Reduction in Control Theory – An Overview," Automatica, Vol. 12(2), pp. 123–132, 1976.
[23] C. Meyer, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), 2000.
[24] K. Ogata, Discrete-Time Control Systems, 2nd Edition, Prentice Hall, 1995.
[25] R. Skelton, M. Oliveira, and J. Han, "Systems Modeling and Model Reduction," Invited Chapter of the Handbook of Smart Systems and Materials, Institute of Physics, 2004.
[26] M. Steinbuch, "Model Reduction for Linear Systems," 1st International MACSInet Workshop on Model Reduction, Netherlands, October 2001.
[27] A. Tikhonov, "On the Dependence of the Solution of Differential Equation on a Small Parameter," Mat. Sbornik (Moscow), 22(64):2, pp. 193–204, 1948.
[28] R. Williams and D. Zipser, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," Neural Computation, Vol. 1(2), pp. 270–280, 1989.
[29] J. Zurada, Artificial Neural Systems, West Publishing Company, 1992.