\tilde{\theta}(t) = \theta(t) - \theta^*, which depends on H(s) and the spectrum of u. In fact, it can be shown [201] that R(0) is related to the spectrum of u via the equation

R(0) = \sum_{i=1}^{k} H(-j\omega_i) H^T(j\omega_i) S_u(\omega_i)

which indicates the dependence of the null space of R(0) on the spectrum of the input u.
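As a numerical sanity check of this relation, the sketch below compares the time average of \phi \phi^T with the spectral-line formula. The filter H(s) = [1/(s+1), 1/(s+2)]^T and the input u = sin 2t are illustrative assumptions, not taken from the text; for a single unit-amplitude sinusoid at 2 rad/s, the formula reduces to R(0) = (1/2) Re{H(-j2) H^T(j2)}.

```python
# Numerical sketch (assumed example): phi = H(s)u with
# H(s) = [1/(s+1), 1/(s+2)]^T and u = sin(2t).
import numpy as np

dt = 1e-3
T = 40.0 * np.pi
N = int(round(T / dt))
avg_start = T - 30.0 * np.pi      # average over exactly 30 periods of sin(2t)

x = np.zeros(2)                   # states of the two first-order filters
acc = np.zeros((2, 2))
n_avg = 0
for k in range(N):
    t = k * dt
    u = np.sin(2.0 * t)
    # forward-Euler step of xdot1 = -x1 + u, xdot2 = -2 x2 + u
    x = x + dt * np.array([-x[0] + u, -2.0 * x[1] + u])
    if t >= avg_start:            # discard the transient, then accumulate
        acc += np.outer(x, x)
        n_avg += 1
R0 = acc / n_avg                  # time average of phi phi^T

H = lambda s: np.array([1.0 / (s + 1.0), 1.0 / (s + 2.0)])
R0_formula = 0.5 * np.real(np.outer(H(-2.0j), H(2.0j)))
```

For this input R(0) is full rank, consistent with a single sinusoid being sufficiently rich of order 2 for the two-dimensional \phi.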
Example 5.2.3 Consider the second-order plant

y = \frac{b_0}{s^2 + a_1 s + a_0} u

where a_1, a_0 > 0 and b_0 \neq 0 are the unknown plant parameters. We first express the plant in the form of

z = \theta^{*T} \phi

where

\theta^* = [b_0, a_1, a_0]^T, \quad z = \frac{s^2}{\Lambda(s)} y, \quad \phi = \left[ \frac{1}{\Lambda(s)} u, \; -\frac{[s, \, 1]}{\Lambda(s)} y \right]^T

and choose \Lambda(s) = (s+2)^2. Let us choose the pure least-squares algorithm from Table 4.3, i.e.,
\dot{\theta} = P \epsilon \phi, \quad \theta(0) = \theta_0

\dot{P} = -P \phi \phi^T P, \quad P(0) = p_0 I

\epsilon = z - \theta^T \phi

where \theta is the estimate of \theta^*, and select p_0 = 50. The signal vector \phi = [\phi_1, \phi_2^T]^T is generated by the state equations

\dot{\phi}_0 = \Lambda_c \phi_0 + l u, \quad \phi_1 = [0 \;\; 1] \phi_0

\dot{\phi}_2 = \Lambda_c \phi_2 - l y

z = y + [4 \;\; 4] \phi_2

where

\Lambda_c = \begin{bmatrix} -4 & -4 \\ 1 & 0 \end{bmatrix}, \quad l = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
[Simulation plots, garbled in the source, show the evolution of the estimate \theta(t) as t \to \infty: (i) for one choice of the input u, and (ii) for u = 12 \sin 3t.]
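The identifier above can be sketched in simulation. The true parameter values (b_0 = 1, a_1 = 2, a_0 = 1) and the input u = 12 sin 3t + 6 sin t (a second sinusoid is added so the input is sufficiently rich for the three unknown parameters) are illustrative assumptions, not values given in the text.

```python
# Simulation sketch of the pure least-squares identifier of Example 5.2.3.
# True parameters and the input are illustrative assumptions.
import numpy as np

dt, T = 5e-4, 80.0
N = int(T / dt)

b0, a1, a0 = 1.0, 2.0, 1.0                     # assumed true plant parameters
theta_star = np.array([b0, a1, a0])

# Lambda(s) = (s + 2)^2 in controller form: Lambda_c, l
Lc = np.array([[-4.0, -4.0], [1.0, 0.0]])
l = np.array([1.0, 0.0])

yv = np.zeros(2)                               # plant states [y, ydot]
phi0 = np.zeros(2)                             # [su/Lambda, u/Lambda]
phi2 = np.zeros(2)                             # -[sy/Lambda, y/Lambda]

theta = np.zeros(3)                            # parameter estimate
P = 50.0 * np.eye(3)                           # p0 = 50 as in the example

for k in range(N):
    t = k * dt
    u = 12.0 * np.sin(3.0 * t) + 6.0 * np.sin(t)
    y = yv[0]
    phi = np.array([phi0[1], phi2[0], phi2[1]])  # [u/L, -sy/L, -y/L]
    z = y + 4.0 * phi2[0] + 4.0 * phi2[1]        # z = s^2 y / Lambda(s)
    eps = z - theta @ phi                        # estimation error
    # forward-Euler integration of the adaptive law and all filters
    theta = theta + dt * (P @ phi) * eps
    P = P - dt * (P @ np.outer(phi, phi) @ P)
    P = 0.5 * (P + P.T)                          # keep P symmetric
    yv = yv + dt * np.array([yv[1], -a1 * yv[1] - a0 * yv[0] + b0 * u])
    phi0 = phi0 + dt * (Lc @ phi0 + l * u)
    phi2 = phi2 + dt * (Lc @ phi2 - l * y)

err = np.linalg.norm(theta - theta_star)
```

With a sufficiently rich input, the least-squares estimate approaches \theta^* asymptotically, consistent with the persistence-of-excitation discussion above.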
Remark 5.2.2 As illustrated with Example 5.2.3, one can choose any one of the adaptive laws
presented in Tables 4.2, 4.3, and 4.5 to form parameter identifiers. The complete proof of the
stability properties of such parameter identifiers follows directly from the results of Chapter
4 and Theorem 5.2.4. The reader is asked to repeat some of the stability proofs of parameter
identifiers with different adaptive laws in the problem section.
Consider the plant

\dot{x} = A x + B u

y = C^T x \qquad (5.3.1)

where x \in R^n. We assume that u is a piecewise continuous and bounded function of time, and A is a stable matrix. In addition, we assume that the plant is completely controllable and completely observable.
The problem is to construct a scheme that estimates both the parameters of the plant, i.e., A, B, C, and the state vector x, using only I/O measurements. We refer to such a scheme as an adaptive observer.
A good starting point for choosing the structure of the adaptive observer is the state
observer, known as the Luenberger observer, used to estimate the state vector x of the plant
(5.3.1) when the parameters A, B, C are known.
When the parameters A, B, C and the initial state x_0 are known, the state x can be generated by the state observer

\dot{\hat{x}} = A \hat{x} + B u, \quad \hat{x}(0) = x_0 \qquad (5.3.2)

When x_0 is unknown and A is a stable matrix, the following state observer may be used to generate the estimate \hat{x} of x:

\dot{\hat{x}} = A \hat{x} + B u, \quad \hat{x}(0) = \hat{x}_0 \qquad (5.3.3)
In this case, the state observation error \tilde{x} = x - \hat{x} satisfies \dot{\tilde{x}} = A \tilde{x}, i.e., \tilde{x}(t) = e^{At} \tilde{x}(0). Because A is a stable matrix, \tilde{x}(t) \to 0 as t \to \infty at a rate that depends on the location of the eigenvalues of A. The observers (5.3.2), (5.3.3) contain no feedback terms and are often referred to as open-loop observers.

When x_0 is unknown and A is not a stable matrix, or A is stable but the state observation error is required to converge to zero faster than the rate with which \|e^{At}\| goes to zero, the following observer, known as the Luenberger observer, may be used:

\dot{\hat{x}} = A \hat{x} + B u + K (y - \hat{y}), \quad \hat{y} = C^T \hat{x}, \quad \hat{x}(0) = \hat{x}_0 \qquad (5.3.4)
where K is a matrix to be chosen by the designer. In contrast to (5.3.2) and (5.3.3), the Luenberger observer (5.3.4) has a feedback term that depends on the output observation error \tilde{y} = y - \hat{y}.
The state observation error \tilde{x} = x - \hat{x} for (5.3.4) satisfies

\dot{\tilde{x}} = (A - K C^T) \tilde{x}, \quad \tilde{x}(0) = x_0 - \hat{x}_0 \qquad (5.3.5)

The gain K is chosen so that A - K C^T is a stable matrix. In this case, the rate of convergence of \tilde{x}(t) to zero can be arbitrarily chosen, because the eigenvalues of A - K C^T can be assigned arbitrarily by designing K appropriately [95]. Therefore, it follows from (5.3.5) that \hat{x}(t) \to x(t) exponentially fast as t \to \infty, at a rate determined by the eigenvalues of A - K C^T. This result is valid for any matrix A and any initial condition x_0 as long as (C, A) is an observable pair and A, C are known.
Example 5.3.1 Consider the plant described by

\dot{x} = \begin{bmatrix} -4 & 1 \\ -4 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 3 \end{bmatrix} u

y = [1, \, 0] x

The Luenberger observer for estimating the state x is given by

\dot{\hat{x}} = \begin{bmatrix} -4 & 1 \\ -4 & 0 \end{bmatrix} \hat{x} + \begin{bmatrix} 1 \\ 3 \end{bmatrix} u + \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} (y - \hat{y})

\hat{y} = [1, \, 0] \hat{x}

where K = [k_1, k_2]^T is chosen so that the eigenvalues of

A_0 = \begin{bmatrix} -4 & 1 \\ -4 & 0 \end{bmatrix} - \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} [1 \;\; 0] = \begin{bmatrix} -4 - k_1 & 1 \\ -4 - k_2 & 0 \end{bmatrix}

are placed so that \hat{x}(t) converges to x(t) faster than e^{-5t}. This requirement is achieved by choosing k_1, k_2 so that the eigenvalues of A_0 are real and less than -5, i.e., we choose the desired eigenvalues of A_0 to be \lambda_1 = -6, \lambda_2 = -8.
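The gain computation can be checked numerically: matching det(sI - A_0) = s^2 + (4 + k_1)s + (4 + k_2) with (s + 6)(s + 8) = s^2 + 14s + 48 gives k_1 = 10, k_2 = 44. A minimal sketch:

```python
# Verification sketch for Example 5.3.1: check that k1 = 10, k2 = 44
# place the eigenvalues of A0 = A - K C at -6 and -8.
import numpy as np

A = np.array([[-4.0, 1.0], [-4.0, 0.0]])
C = np.array([[1.0, 0.0]])              # y = [1, 0] x
k1, k2 = 10.0, 44.0
K = np.array([[k1], [k2]])
A0 = A - K @ C                          # observer error matrix
eigs = np.sort(np.linalg.eigvals(A0).real)
```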
A natural choice for the structure of the adaptive observer is the Luenberger observer (5.3.4), with the unknown parameters A, B, C replaced by their estimates \hat{A}, \hat{B}, \hat{C}, respectively, generated by some adaptive law. The problem we face with this procedure is the inability to estimate uniquely the n^2 + 2n parameters of A, B, C from input/output data. As explained in Section 5.2.3, the best we can do in this case is to estimate the n + m + 1 \leq 2n coefficients of the plant transfer function and use them to calculate \hat{A}, \hat{B}, \hat{C}. These calculations, however, are not always possible because the mapping from the transfer function coefficients to \hat{A}, \hat{B}, \hat{C} is not unique unless (A, B, C) satisfies certain structural constraints. One such constraint is that (A, B, C) is in the observer form, i.e., the plant is represented as
\dot{x}_\alpha = \left[ -a_p \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] x_\alpha + b_p u

y = [1 \; 0 \; \cdots \; 0] x_\alpha \qquad (5.3.6)

where a_p = [a_{n-1}, a_{n-2}, \ldots, a_0]^T, b_p = [b_{n-1}, b_{n-2}, \ldots, b_0]^T, and I_{n-1} \in R^{(n-1) \times (n-1)} is the identity matrix. The elements of a_p and b_p are the coefficients of the denominator and numerator, respectively, of the plant transfer function

\frac{y}{u} = \frac{b_{n-1} s^{n-1} + b_{n-2} s^{n-2} + \cdots + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_0} \qquad (5.3.7)

and can be estimated on-line from input/output data by using the techniques described in Chapter 4.
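A quick sketch, with assumed illustrative coefficients, verifying that the observer form (5.3.6) realizes the transfer function (5.3.7) for n = 2:

```python
# Sketch: the observer form realizes the transfer function (5.3.7).
# The n = 2 coefficients a_p = [3, 2], b_p = [1, 0.5] are assumed.
import numpy as np

a1, a0 = 3.0, 2.0
b1, b0 = 1.0, 0.5
A = np.array([[-a1, 1.0], [-a0, 0.0]])   # [-a_p | I_{n-1}; 0]
B = np.array([b1, b0])
C = np.array([1.0, 0.0])

def tf_state_space(s):
    # C (sI - A)^{-1} B evaluated at a complex frequency s
    return C @ np.linalg.solve(s * np.eye(2) - A, B.astype(complex))

def tf_coeffs(s):
    return (b1 * s + b0) / (s**2 + a1 * s + a0)

pts = [1j, 0.5 + 2j, 3.0 + 0j]
ok = all(np.isclose(tf_state_space(s), tf_coeffs(s)) for s in pts)
```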
Because both (5.3.1) and (5.3.6) represent the same plant, we can assume the plant representation (5.3.6) and estimate x_\alpha instead of x. The disadvantage is that in a practical situation x may represent physical variables that are of interest, whereas x_\alpha may be an artificial state variable.
The adaptive observer for estimating the state x_\alpha of (5.3.6) is motivated from the structure of the Luenberger observer (5.3.4) and is given by

\dot{\hat{x}} = \hat{A}(t) \hat{x} + \hat{b}_p(t) u + K(t) (y - \hat{y})

\hat{y} = [1 \; 0 \; \cdots \; 0] \hat{x} \qquad (5.3.8)

where \hat{x} is the estimate of x_\alpha,

\hat{A}(t) = \left[ -\hat{a}_p(t) \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right], \quad K(t) = a^* - \hat{a}_p(t)

and a^* \in R^n is chosen so that

A^* = \left[ -a^* \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] \qquad (5.3.9)

is a stable matrix, and \hat{a}_p(t) and \hat{b}_p(t) are the estimates of a_p and b_p, respectively, at time t.
A wide class of adaptive laws may be used to generate \hat{a}_p(t) and \hat{b}_p(t) on-line. As in Chapter 4, we rewrite the plant (5.3.6) in the form of the parametric model

z = \theta^{*T} \phi \qquad (5.3.10)

where

\phi = \left[ \frac{\alpha_{n-1}^T(s)}{\Lambda(s)} u, \; -\frac{\alpha_{n-1}^T(s)}{\Lambda(s)} y \right]^T = [\phi_1^T, \phi_2^T]^T

z = \frac{s^n}{\Lambda(s)} y = y + \lambda^T \phi_2

\Lambda(s) = s^n + \lambda^T \alpha_{n-1}(s), \quad \alpha_{n-1}(s) = [s^{n-1}, s^{n-2}, \ldots, s, 1]^T

and

\theta^* = [b_{n-1}, b_{n-2}, \ldots, b_0, a_{n-1}, a_{n-2}, \ldots, a_0]^T

is the parameter vector to be estimated and \Lambda(s) is a Hurwitz polynomial of degree n chosen by the designer. A state-space representation for \phi and z may be obtained as in (5.2.18) by using the identity

(sI - \Lambda_c)^{-1} l = \frac{\alpha_{n-1}(s)}{\Lambda(s)}

where (\Lambda_c, l) is in controller form and \det(sI - \Lambda_c) = \Lambda(s).
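The identity can be checked numerically for an assumed n = 2 and \Lambda(s) = (s + 2)^2, for which \Lambda_c, l in controller form are as in Example 5.2.3:

```python
# Sketch verifying (sI - Lambda_c)^{-1} l = alpha_{n-1}(s) / Lambda(s)
# for the assumed case n = 2, Lambda(s) = (s + 2)^2, alpha_1(s) = [s, 1]^T.
import numpy as np

Lc = np.array([[-4.0, -4.0], [1.0, 0.0]])   # controller-form Lambda_c
l = np.array([1.0, 0.0])

def lhs(s):
    return np.linalg.solve(s * np.eye(2) - Lc, l.astype(complex))

def rhs(s):
    Lam = s**2 + 4.0 * s + 4.0              # det(sI - Lambda_c) = Lambda(s)
    return np.array([s, 1.0]) / Lam

pts = [1j, 2.0 + 3.0j, -0.5 + 1.0j]         # test frequencies away from s = -2
ok = all(np.allclose(lhs(s), rhs(s)) for s in pts)
```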
In view of (5.3.10), we can choose any adaptive law from Tables 4.2, 4.3 and 4.5 of
Chapter 4 to estimate * and, therefore, ap , bp on-line. We can form a wide class of adaptive
observers by combining (5.3.8) with any adaptive law from Tables 4.2, 4.3 and 4.5 of Chapter 4
that is based on the parametric plant model (5.3.10).
We illustrate the design of such an adaptive observer by using the gradient algorithm of
Table 4.2 (A) in Chapter 4 as the adaptive law. The main equations of the observer are
summarized in Table 5.1.
The stability properties of the class of adaptive observers formed by combining the
observer equation (5.3.8) with an adaptive law from Tables 4.2 and 4.3 of Chapter 4 are given by
the following theorem.
Theorem 5.3.1 An adaptive observer for the plant (5.3.6) formed by combining the observer equation (5.3.8) and any adaptive law based on the plant parametric model (5.3.10) obtained from Tables 4.2 and 4.3 of Chapter 4 guarantees that:

(i) All signals are uniformly bounded.

(ii) The output observation error \tilde{y} = y - \hat{y} converges to zero as t \to \infty.

(iii) If u is sufficiently rich of order 2n, then the state observation error \tilde{x} = x_\alpha - \hat{x} and the parameter error \tilde{\theta} = \theta - \theta^* converge to zero exponentially fast for all the adaptive laws except the pure least-squares, for which the convergence is asymptotic.
Table 5.1 Adaptive observer with gradient algorithm

Plant:
\dot{x}_\alpha = \left[ -a_p \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] x_\alpha + b_p u, \quad x_\alpha \in R^n
y = [1 \; 0 \; \cdots \; 0] x_\alpha

Observer:
\dot{\hat{x}} = \left[ -\hat{a}_p(t) \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] \hat{x} + \hat{b}_p(t) u + (a^* - \hat{a}_p(t))(y - \hat{y})
\hat{y} = [1 \; 0 \; \cdots \; 0] \hat{x}

Adaptive law:
\dot{\theta} = \Gamma \epsilon \phi, \quad \theta = [\hat{b}_p^T(t), \hat{a}_p^T(t)]^T, \quad \Gamma = \Gamma^T > 0
\epsilon = \frac{z - \hat{z}}{m^2}, \quad \hat{z} = \theta^T \phi, \quad m^2 = 1 + \phi^T \phi
\phi = \left[ \frac{\alpha_{n-1}^T(s)}{\Lambda(s)} u, \; -\frac{\alpha_{n-1}^T(s)}{\Lambda(s)} y \right]^T, \quad z = \frac{s^n}{\Lambda(s)} y
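The observer of Table 5.1 can be sketched in simulation. The second-order plant (a_p = [3, 2], b_p = [1, 0.5]), the two-tone input, the gain \Gamma = 50 I, and the choice of a^* (placing both eigenvalues of A^* at -5) are illustrative assumptions, not values from the text.

```python
# Simulation sketch of the adaptive observer of Table 5.1 for an
# assumed second-order plant in observer form.
import numpy as np

dt, T = 1e-3, 120.0
N = int(T / dt)

a1, a0 = 3.0, 2.0                       # assumed a_p
b1, b0 = 1.0, 0.5                       # assumed b_p
A = np.array([[-a1, 1.0], [-a0, 0.0]])
b = np.array([b1, b0])
astar = np.array([10.0, 25.0])          # A* has both eigenvalues at -5

# Lambda(s) = (s + 2)^2 in controller form
Lc = np.array([[-4.0, -4.0], [1.0, 0.0]])
l = np.array([1.0, 0.0])

x = np.array([1.0, 1.0])                # plant state x_alpha
xh = np.zeros(2)                        # observer state
phiu = np.zeros(2)                      # [su/L, u/L]
phiy = np.zeros(2)                      # -[sy/L, y/L]
theta = np.zeros(4)                     # [b1^, b0^, a1^, a0^]
gamma = 50.0

ytil = np.zeros(N)
for k in range(N):
    t = k * dt
    u = 3.0 * np.sin(t) + 2.0 * np.sin(2.5 * t)   # rich of order 4 = 2n
    y, yh = x[0], xh[0]
    ytil[k] = y - yh                              # output observation error
    phi = np.concatenate([phiu, phiy])
    z = y + 4.0 * phiy[0] + 4.0 * phiy[1]         # z = s^2 y / Lambda(s)
    eps = (z - theta @ phi) / (1.0 + phi @ phi)   # normalized error
    bh, ah = theta[:2], theta[2:]
    Ah = np.array([[-ah[0], 1.0], [-ah[1], 0.0]])
    K = astar - ah
    # forward-Euler integration of plant, observer, filters, adaptive law
    x = x + dt * (A @ x + b * u)
    xh = xh + dt * (Ah @ xh + bh * u + K * (y - yh))
    phiu = phiu + dt * (Lc @ phiu + l * u)
    phiy = phiy + dt * (Lc @ phiy - l * y)
    theta = theta + dt * gamma * eps * phi

head = np.mean(np.abs(ytil[: int(10 / dt)]))      # early output error
tail = np.mean(np.abs(ytil[-int(10 / dt):]))      # late output error
```

Consistent with Theorem 5.3.1, the output observation error decays as the adaptive law converges.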
Proof (i) The adaptive laws of Tables 4.2 and 4.3 of Chapter 4 guarantee that \theta \in L_\infty and \epsilon, \epsilon m, \dot{\theta} \in L_2 \cap L_\infty. Because the plant is stable and u \in L_\infty, we have x_\alpha, y, \phi, m \in L_\infty and, therefore, \epsilon, \epsilon m, \dot{\theta} \to 0 as t \to \infty (see Lemma 3.2.5). It remains to establish that \hat{x} \in L_\infty. We write the observer equation (5.3.8) as
\dot{\hat{x}} = A^* \hat{x} + \hat{b}_p(t) u + (a^* - \hat{a}_p(t)) y \qquad (5.3.11)

Because A^* is a stable matrix, \theta = [\hat{b}_p^T(t), \hat{a}_p^T(t)]^T \in L_\infty, and u, y \in L_\infty, it follows from (5.3.11) that \hat{x} \in L_\infty; hence, all signals are bounded. The state observation error \tilde{x} = x_\alpha - \hat{x} satisfies

\dot{\tilde{x}} = A^* \tilde{x} + \tilde{a}_p y - \tilde{b}_p u \qquad (5.3.12)

where \tilde{b}_p \triangleq \hat{b}_p - b_p and \tilde{a}_p \triangleq \hat{a}_p - a_p. From (5.3.12), we obtain
\tilde{y} = C^T \tilde{x}, \quad \tilde{y}(s) = C^T (sI - A^*)^{-1} (\tilde{a}_p y - \tilde{b}_p u)(s) + \epsilon_t

where \epsilon_t = \mathcal{L}^{-1} \{ C^T (sI - A^*)^{-1} \tilde{x}(0) \} is an exponentially decaying term due to the initial condition \tilde{x}(0). Because of the form of A^* and C^T = [1, 0, \ldots, 0], we have

C^T (sI - A^*)^{-1} = \frac{\alpha_{n-1}^T(s)}{\det(sI - A^*)} = \frac{[s^{n-1}, s^{n-2}, \ldots, s, 1]}{\det(sI - A^*)}

Letting \Lambda^*(s) \triangleq \det(sI - A^*), we have
\tilde{y}(s) = \frac{1}{\Lambda^*(s)} \sum_{i=1}^{n} s^{n-i} \left[ \tilde{a}_{n-i} y - \tilde{b}_{n-i} u \right] + \epsilon_t

where \tilde{b}_{n-i}, \tilde{a}_{n-i} are the elements of \tilde{b}_p and \tilde{a}_p, respectively. Multiplying and dividing each term under the summation by \Lambda(s), we write

\tilde{y}(s) = \frac{\Lambda(s)}{\Lambda^*(s)} \, \frac{1}{\Lambda(s)} \sum_{i=1}^{n} s^{n-i} \left[ \tilde{a}_{n-i} y - \tilde{b}_{n-i} u \right] + \epsilon_t \qquad (5.3.13)

where \Lambda(s) is the Hurwitz polynomial of degree n defined in (5.3.10). We now apply Lemma A.1 (see Appendix A) to each term under the summation in (5.3.13) to obtain
\frac{s^{n-i}}{\Lambda(s)} \left( \tilde{a}_{n-i} y - \tilde{b}_{n-i} u \right) = \tilde{a}_{n-i} \frac{s^{n-i}}{\Lambda(s)} y - \tilde{b}_{n-i} \frac{s^{n-i}}{\Lambda(s)} u + W_{ci}(s) \left( (W_{bi}(s) y) \dot{\tilde{a}}_{n-i} - (W_{bi}(s) u) \dot{\tilde{b}}_{n-i} \right)

and, therefore,

\tilde{y}(s) = \frac{\Lambda(s)}{\Lambda^*(s)} \sum_{i=1}^{n} \left[ \tilde{a}_{n-i} \frac{s^{n-i}}{\Lambda(s)} y - \tilde{b}_{n-i} \frac{s^{n-i}}{\Lambda(s)} u + W_{ci}(s) \left( (W_{bi}(s) y) \dot{\tilde{a}}_{n-i} - (W_{bi}(s) u) \dot{\tilde{b}}_{n-i} \right) \right] + \epsilon_t \qquad (5.3.14)
where the elements of W_{ci}(s), W_{bi}(s) are strictly proper transfer functions with the same poles as \Lambda(s). Using the definition of \phi and of the parameter error \tilde{\theta} = \theta - \theta^*, we rewrite (5.3.14) as

\tilde{y} = \frac{\Lambda(s)}{\Lambda^*(s)} \left[ -\tilde{\theta}^T \phi + \sum_{i=1}^{n} W_{ci}(s) \left( (W_{bi}(s) y) \dot{\tilde{a}}_{n-i} - (W_{bi}(s) u) \dot{\tilde{b}}_{n-i} \right) \right] + \epsilon_t