

The proof of Theorem 5.2.5 is given in Section 5.6.


Theorem 5.2.5 does not imply that θ(t) converges to a constant, let alone to θ*. It does imply, however, that

\tilde{\theta}(t) = \theta(t) - \theta^*

converges to the null space of the autocovariance R(0) of φ, which depends on H(s) and the spectrum of u. In fact, it can be shown [201] that R(0) is related to the spectrum of u via the equation

R(0) = \sum_{i=1}^{k} H(-j\omega_i) H^T(j\omega_i) S_u(\omega_i)

which indicates the dependence of the null space of R(0) on the spectrum of the input u.
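As an added illustration of this dependence: each distinct frequency contained in u contributes a term of rank at most two to R(0), so an input with a single sinusoid (k = 1) and three or more unknown parameters necessarily leaves R(0) singular, and Theorem 5.2.5 then only confines θ̃(t) to a nontrivial subspace. With sufficiently many distinct frequencies in u (and H(s) satisfying the usual richness conditions), R(0) is positive definite and θ̃(t) → 0. Example 5.2.3 below exhibits both situations.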
Example 5.2.3 Consider the second-order plant

y = \frac{b_0}{s^2 + a_1 s + a_0} u

where a_1, a_0 > 0 and b_0 ≠ 0 are the unknown plant parameters. We first express the plant in the form of

z = \theta^{*T} \phi

where

\theta^* = [b_0, a_1, a_0]^T

z = \frac{s^2}{\Lambda(s)} y, \qquad \phi = \left[ \frac{1}{\Lambda(s)} u, \; -\frac{[s, \, 1]}{\Lambda(s)} y \right]^T

and choose Λ(s) = (s + 2)^2. Let us choose the pure least-squares algorithm from Table 4.3, i.e.,

\dot{\theta} = P \varepsilon \phi, \qquad \theta(0) = \theta_0
\dot{P} = -P \phi \phi^T P, \qquad P(0) = p_0 I
\varepsilon = z - \theta^T \phi

where θ is the estimate of θ* and we select p_0 = 50. The signal vector φ = [φ_1, φ_2^T]^T is generated by the state equations

\dot{\phi}_0 = \Lambda_c \phi_0 + l u
\phi_1 = [0 \;\; 1] \phi_0
\dot{\phi}_2 = \Lambda_c \phi_2 - l y
z = y + [4 \;\; 4] \phi_2

where

\Lambda_c = \begin{bmatrix} -4 & -4 \\ 1 & 0 \end{bmatrix}, \qquad l = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

The reader can demonstrate via computer simulations that: (i) for u = 5 sin t + 12 sin 3t, θ(t) → θ* as t → ∞; (ii) for u = 12 sin 3t, θ(t) → θ̄ as t → ∞, where θ̄ is a constant vector that depends on the initial condition θ(0).
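Since the example does not specify numerical values for b_0, a_1, a_0, the following simulation sketch assumes b_0 = 1, a_1 = 2, a_0 = 1 purely for illustration and implements the pure least-squares identifier above with forward-Euler integration; it is only a sketch of how cases (i) and (ii) can be reproduced, not a definitive implementation.

```python
# Simulation sketch of Example 5.2.3 (pure least-squares identifier).
# The plant parameters b0, a1, a0 are illustrative assumptions; the example
# does not specify numerical values.
import numpy as np

b0, a1, a0 = 1.0, 2.0, 1.0
theta_star = np.array([b0, a1, a0])

dt, T = 1e-3, 200.0

# Plant y = b0/(s^2 + a1 s + a0) u in controller canonical form
Ap = np.array([[-a1, -a0], [1.0, 0.0]])
Bp = np.array([1.0, 0.0])
Cp = np.array([0.0, b0])

# (Lambda_c, l) for Lambda(s) = (s + 2)^2
Lc = np.array([[-4.0, -4.0], [1.0, 0.0]])
l = np.array([1.0, 0.0])

xp = np.zeros(2)          # plant state
phi0 = np.zeros(2)        # [s, 1]^T u / Lambda(s)
phi2 = np.zeros(2)        # -[s, 1]^T y / Lambda(s)
theta = np.zeros(3)       # estimate of theta*
P = 50.0 * np.eye(3)      # p0 = 50

for k in range(int(T / dt)):
    t = k * dt
    u = 5.0 * np.sin(t) + 12.0 * np.sin(3.0 * t)   # case (i); use 12*sin(3t) for case (ii)
    y = Cp @ xp

    phi = np.array([phi0[1], phi2[0], phi2[1]])    # phi = [phi_1, phi_2^T]^T
    z = y + np.array([4.0, 4.0]) @ phi2            # z = y + [4 4] phi_2
    eps = z - theta @ phi

    # pure least-squares: theta_dot = P eps phi, P_dot = -P phi phi^T P
    theta = theta + dt * (P @ (eps * phi))
    P = P + dt * (-P @ np.outer(phi, phi) @ P)

    # forward-Euler integration of plant and filters
    xp = xp + dt * (Ap @ xp + Bp * u)
    phi0 = phi0 + dt * (Lc @ phi0 + l * u)
    phi2 = phi2 + dt * (Lc @ phi2 - l * y)

print("theta(T) =", theta, " theta* =", theta_star)
```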

Remark 5.2.2 As illustrated with Example 5.2.3, one can choose any one of the adaptive laws
presented in Tables 4.2, 4.3, and 4.5 to form parameter identifiers. The complete proof of the
stability properties of such parameter identifiers follows directly from the results of Chapter
4 and Theorem 5.2.4. The reader is asked to repeat some of the stability proofs of parameter
identifiers with different adaptive laws in the problem section.

5.3 Adaptive Observers


Consider the LTI SISO plant

\dot{x} = A x + B u, \qquad x(0) = x_0
y = C^T x
(5.3.1)

where x ∈ R^n. We assume that u is a piecewise continuous and bounded function of time, and A is a stable matrix. In addition, we assume that the plant is completely controllable and completely observable.
The problem is to construct a scheme that estimates both the plant parameters A, B, C and the state vector x using only I/O measurements. We refer to such a scheme as an adaptive observer.

A good starting point for choosing the structure of the adaptive observer is the state
observer, known as the Luenberger observer, used to estimate the state vector x of the plant
(5.3.1) when the parameters A, B, C are known.

5.3.1 The Luenberger Observer


If the initial state vector x_0 is known, the estimate x̂ of x in (5.3.1) may be generated by the state observer

\dot{\hat{x}} = A \hat{x} + B u, \qquad \hat{x}(0) = x_0
(5.3.2)

where x̂ ∈ R^n. Equations (5.3.1) and (5.3.2) imply that x̂(t) = x(t) for all t ≥ 0. When x_0 is unknown and A is a stable matrix, the following state observer may be used to generate the estimate x̂ of x:

\dot{\hat{x}} = A \hat{x} + B u, \qquad \hat{x}(0) = \hat{x}_0
(5.3.3)

In this case, the state observation error x̃ = x − x̂ satisfies the equation

\dot{\tilde{x}} = A \tilde{x}, \qquad \tilde{x}(0) = x_0 - \hat{x}_0

which implies that x̃(t) = e^{At} x̃(0). Because A is a stable matrix, x̃(t) → 0, i.e., x̂(t) → x(t) as t → ∞, exponentially fast with a rate that depends on the location of the eigenvalues of A. The observers (5.3.2), (5.3.3) contain no feedback terms and are often referred to as open-loop observers.
When x_0 is unknown and A is not a stable matrix, or A is stable but the state observation error is required to converge to zero faster than the rate with which e^{At} goes to zero, the following observer, known as the Luenberger observer, is used:

\dot{\hat{x}} = A \hat{x} + B u + K (y - \hat{y}), \qquad \hat{x}(0) = \hat{x}_0
\hat{y} = C^T \hat{x}
(5.3.4)

where K is a matrix to be chosen by the designer. In contrast to (5.3.2) and (5.3.3), the Luenberger observer (5.3.4) has a feedback term that depends on the output observation error ỹ = y − ŷ.
The state observation error x̃ = x − x̂ for (5.3.4) satisfies

\dot{\tilde{x}} = (A - K C^T) \tilde{x}, \qquad \tilde{x}(0) = x_0 - \hat{x}_0
(5.3.5)

Because (C, A) is an observable pair, we can choose K so that A − KC^T is a stable matrix. In fact, the eigenvalues of A − KC^T, and, therefore, the rate of convergence of x̃(t) to zero, can be arbitrarily chosen by designing K appropriately [95]. Therefore, it follows from (5.3.5) that x̂(t) → x(t) exponentially fast as t → ∞, with a rate that depends on the matrix A − KC^T. This result is valid for any matrix A and any initial condition x_0 as long as (C, A) is an observable pair and A, C are known.
Example 5.3.1 Consider the plant described by

\dot{x} = \begin{bmatrix} -4 & 1 \\ -4 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 3 \end{bmatrix} u
y = [1, \; 0] \, x

The Luenberger observer for estimating the state x is given by

\dot{\hat{x}} = \begin{bmatrix} -4 & 1 \\ -4 & 0 \end{bmatrix} \hat{x} + \begin{bmatrix} 1 \\ 3 \end{bmatrix} u + \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} (y - \hat{y})
\hat{y} = [1, \; 0] \, \hat{x}

where K = [k_1, k_2]^T is chosen so that

A_0 = \begin{bmatrix} -4 & 1 \\ -4 & 0 \end{bmatrix} - \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} [1 \;\; 0] = \begin{bmatrix} -4 - k_1 & 1 \\ -4 - k_2 & 0 \end{bmatrix}

is a stable matrix. Let us assume that x̂(t) is required to converge to x(t) faster than e^{-5t}. This requirement is achieved by choosing k_1, k_2 so that the eigenvalues of A_0 are real and less than −5, i.e., we choose the desired eigenvalues of A_0 to be λ_1 = −6, λ_2 = −8 and design k_1, k_2 so that

\det(sI - A_0) = s^2 + (4 + k_1) s + 4 + k_2 = (s + 6)(s + 8)

which gives k_1 = 10, k_2 = 44.
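As a quick numerical check of this gain calculation (an added sketch, not part of the original example), the coefficients of det(sI − A_0) can be matched against (s + 6)(s + 8) and the resulting eigenvalues verified:

```python
# Numerical check of the gain calculation in Example 5.3.1.
import numpy as np

A = np.array([[-4.0, 1.0], [-4.0, 0.0]])
C = np.array([[1.0, 0.0]])                 # C^T

# det(sI - A0) = s^2 + (4 + k1) s + (4 + k2) is matched with
# (s + 6)(s + 8) = s^2 + 14 s + 48:
k1 = 14.0 - 4.0                            # = 10
k2 = 48.0 - 4.0                            # = 44
K = np.array([[k1], [k2]])

A0 = A - K @ C
print("eig(A - K C^T) =", np.linalg.eigvals(A0))   # expect -6 and -8
```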

5.3.2 The Adaptive Luenberger Observer


Let us now consider the problem where both the state x and parameters A,B,C are to be estimated
on-line simultaneously using an adaptive observer.
A straightforward procedure for choosing the structure of the adaptive observer is to use the same equation as the Luenberger observer in (5.3.4), but replace the unknown parameters A, B, C with their estimates Â, B̂, Ĉ, respectively, generated by some adaptive law. The problem we face with this procedure is the inability to estimate uniquely the n² + 2n parameters of A, B, C from input/output data. As explained in Section 5.2.3, the best we can do in this case is to estimate the n + m + 1 ≤ 2n parameters of the plant transfer function and use them to calculate Â, B̂, Ĉ. These calculations, however, are not always possible because the mapping of the estimated transfer function parameters to the n² + 2n parameters of Â, B̂, Ĉ is not unique unless (A, B, C) satisfies certain structural constraints. One such constraint is that (A, B, C) is in the observer form, i.e., the plant is represented as

\dot{x}_\alpha = \left[ -a_p \;\Big|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] x_\alpha + b_p u
y = [1 \;\; 0 \;\; \cdots \;\; 0] \, x_\alpha
(5.3.6)


where a_p = [a_{n-1}, a_{n-2}, …, a_0]^T and b_p = [b_{n-1}, b_{n-2}, …, b_0]^T are vectors of dimension n and I_{n-1} ∈ R^{(n-1)×(n-1)} is the identity matrix. The elements of a_p and b_p are the coefficients of the denominator and numerator, respectively, of the transfer function

\frac{y(s)}{u(s)} = \frac{b_{n-1} s^{n-1} + b_{n-2} s^{n-2} + \cdots + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_0}
(5.3.7)

and can be estimated on-line from input/output data by using the techniques described in Chapter 4.

Because both (5.3.1) and (5.3.6) represent the same plant, we can assume the plant representation (5.3.6) and estimate x_α instead of x. The disadvantage is that in a practical situation x may represent physical variables that are of interest, whereas x_α may be an artificial state variable.
The adaptive observer for estimating the state x_α of (5.3.6) is motivated by the structure of the Luenberger observer (5.3.4) and is given by

\dot{\hat{x}} = \hat{A}(t) \hat{x} + \hat{b}_p(t) u + K(t)(y - \hat{y})
\hat{y} = [1 \;\; 0 \;\; \cdots \;\; 0] \, \hat{x}
(5.3.8)

where x̂ is the estimate of x_α,

\hat{A}(t) = \left[ -\hat{a}_p(t) \;\Big|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right], \qquad K(t) = a^* - \hat{a}_p(t)

a* ∈ R^n is chosen so that

A^* = \left[ -a^* \;\Big|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right]
(5.3.9)

is a stable matrix, and â_p(t), b̂_p(t) are the estimates of the vectors a_p and b_p, respectively, at time t.
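To make the structure of (5.3.8) and (5.3.9) concrete, the following sketch assembles Â(t), A*, and K(t) from the current estimates â_p(t), b̂_p(t) and performs one integration step of the observer. The function names, the forward-Euler step, and the particular choice of a* are illustrative assumptions, not part of the text.

```python
# Sketch of the adaptive observer structure (5.3.8)-(5.3.9).
# observer_form, observer_step and the particular a_star are illustrative.
import numpy as np

def observer_form(first_col, n):
    """A = [ -first_col | [I_{n-1}; 0] ] as in (5.3.6), (5.3.9)."""
    A = np.zeros((n, n))
    A[:, 0] = -first_col
    A[:n - 1, 1:] = np.eye(n - 1)
    return A

def observer_step(x_hat, a_hat, b_hat, a_star, u, y, dt):
    """One forward-Euler step of (5.3.8) with the current estimates."""
    n = x_hat.size
    A_hat = observer_form(a_hat, n)        # \hat A(t)
    K = a_star - a_hat                     # K(t) = a* - \hat a_p(t)
    y_hat = x_hat[0]                       # \hat y = [1 0 ... 0] \hat x
    x_dot = A_hat @ x_hat + b_hat * u + K * (y - y_hat)
    return x_hat + dt * x_dot

# Example choice of a* for n = 2: det(sI - A*) = s^2 + 14 s + 48 (roots -6, -8)
a_star = np.array([14.0, 48.0])
print("eig(A*) =", np.linalg.eigvals(observer_form(a_star, 2)))
```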
A wide class of adaptive laws may be used to generate â_p(t) and b̂_p(t) on-line. As an example, we can start with (5.3.7) to obtain, as in Section 2.4.1, the parametric model

z = \theta^{*T} \phi
(5.3.10)

where

\phi = \left[ \frac{\alpha_{n-1}^T(s)}{\Lambda(s)} u, \; -\frac{\alpha_{n-1}^T(s)}{\Lambda(s)} y \right]^T = [\phi_1^T, \phi_2^T]^T

z = \frac{s^n}{\Lambda(s)} y = y + \lambda^T \phi_2, \qquad \Lambda(s) = s^n + \lambda^T \alpha_{n-1}(s), \qquad \alpha_{n-1}(s) = [s^{n-1}, s^{n-2}, \ldots, s, 1]^T

and

\theta^* = [b_{n-1}, b_{n-2}, \ldots, b_0, a_{n-1}, a_{n-2}, \ldots, a_0]^T

is the parameter vector to be estimated and Λ(s) is a Hurwitz polynomial of degree n chosen by the designer. A state-space representation for φ and z may be obtained as in (5.2.18) by using the identity (sI − Λ_c)^{-1} l = α_{n-1}(s)/Λ(s), where (Λ_c, l) is in the controller canonical form and det(sI − Λ_c) = Λ(s).
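The sketch below illustrates one possible state-space realization of the φ- and z-filters in (5.3.10) using (Λ_c, l) in controller canonical form; the choice Λ(s) = (s + 2)^n, the Euler discretization, and the function names are assumptions made for illustration.

```python
# Sketch of the filters generating phi and z in (5.3.10), with
# Lambda(s) = (s + 2)^n assumed for illustration.
import numpy as np

def controller_form(lam):
    """(Lambda_c, l) with det(sI - Lambda_c) = s^n + lam . [s^{n-1}, ..., 1]
    and (sI - Lambda_c)^{-1} l = alpha_{n-1}(s)/Lambda(s)."""
    n = lam.size
    Lc = np.zeros((n, n))
    Lc[0, :] = -lam
    Lc[1:, :n - 1] = np.eye(n - 1)
    l = np.zeros(n)
    l[0] = 1.0
    return Lc, l

n, dt = 2, 1e-3
lam = np.poly([-2.0] * n)[1:]              # Lambda(s) = (s+2)^n -> lam = [4, 4]
Lc, l = controller_form(lam)

phi_u = np.zeros(n)                        # alpha_{n-1}(s)/Lambda(s) u
phi_y = np.zeros(n)                        # alpha_{n-1}(s)/Lambda(s) y

def filter_step(phi_u, phi_y, u, y):
    """One Euler step of the filters; returns updated states, phi, and z."""
    phi_u = phi_u + dt * (Lc @ phi_u + l * u)
    phi_y = phi_y + dt * (Lc @ phi_y + l * y)
    phi = np.concatenate([phi_u, -phi_y])  # phi = [phi_1^T, phi_2^T]^T
    z = y - lam @ phi_y                    # z = s^n/Lambda(s) y = y + lambda^T phi_2
    return phi_u, phi_y, phi, z
```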

In view of (5.3.10), we can choose any adaptive law from Tables 4.2, 4.3, and 4.5 of Chapter 4 to estimate θ* and, therefore, a_p, b_p on-line. We can form a wide class of adaptive observers by combining (5.3.8) with any adaptive law from Tables 4.2, 4.3, and 4.5 of Chapter 4 that is based on the parametric plant model (5.3.10).
We illustrate the design of such an adaptive observer by using the gradient algorithm of Table 4.2(A) in Chapter 4 as the adaptive law. The main equations of the observer are summarized in Table 5.1.
The stability properties of the class of adaptive observers formed by combining the
observer equation (5.3.8) with an adaptive law from Tables 4.2 and 4.3 of Chapter 4 are given by
the following theorem.
Theorem 5.3.1 An adaptive observer for the plant (5.3.6) formed by combining the observer equation (5.3.8) and any adaptive law based on the plant parametric model (5.3.10) obtained from Tables 4.2 and 4.3 of Chapter 4 guarantees that
(i) All signals are uniformly bounded (u.b.);
(ii) The output observation error ỹ = y − ŷ converges to zero as t → ∞;
(iii) If u is sufficiently rich of order 2n, then the state observation error x̃ = x_α − x̂ and the parameter error θ̃ = θ − θ* converge to zero. The rate of convergence is exponential for all adaptive laws except for the pure least-squares, where the convergence is asymptotic.
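For example, an input consisting of n sinusoids with distinct frequencies, u = Σ_{i=1}^{n} A_i sin ω_i t with A_i ≠ 0, is sufficiently rich of order 2n, so the convergence condition in (iii) can be met by the choice of input whenever u is at the designer's disposal.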
Table 5.1 Adaptive observer with gradient algorithm

Plant
\dot{x}_\alpha = \left[ -a_p \;\Big|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] x_\alpha + b_p u, \qquad x_\alpha \in R^n
y = [1 \;\; 0 \;\; \cdots \;\; 0] \, x_\alpha

Observer
\dot{\hat{x}} = \left[ -\hat{a}_p(t) \;\Big|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] \hat{x} + \hat{b}_p(t) u + (a^* - \hat{a}_p(t))(y - \hat{y})
\hat{y} = [1 \;\; 0 \;\; \cdots \;\; 0] \, \hat{x}

Adaptive law
\theta = [\hat{b}_p^T(t), \hat{a}_p^T(t)]^T, \qquad \dot{\theta} = \Gamma \varepsilon \phi
\varepsilon = \frac{z - \hat{z}}{m^2}, \qquad \hat{z} = \theta^T \phi, \qquad \Gamma = \Gamma^T > 0
\phi = \left[ \frac{\alpha_{n-1}^T(s)}{\Lambda(s)} u, \; -\frac{\alpha_{n-1}^T(s)}{\Lambda(s)} y \right]^T, \qquad z = \frac{s^n}{\Lambda(s)} y

Design variables
a* is chosen so that A* in (5.3.9) is stable; m^2 = 1 or m^2 = 1 + φ^T φ; Λ(s) is a monic Hurwitz polynomial of degree n
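A minimal sketch of the adaptive-law block of Table 5.1 is given below, assuming the normalization m² = 1 + φ^T φ and a diagonal Γ for illustration; in a complete adaptive observer this update would be driven by the φ, z filters and would supply â_p(t), b̂_p(t) to the observer equation (5.3.8).

```python
# Sketch of the gradient adaptive law of Table 5.1:
#   theta = [b_hat_p^T, a_hat_p^T]^T, theta_dot = Gamma eps phi,
#   eps = (z - theta^T phi)/m^2, with m^2 = 1 + phi^T phi assumed here.
import numpy as np

def gradient_step(theta, phi, z, Gamma, dt):
    m2 = 1.0 + phi @ phi                   # normalization m^2 = 1 + phi^T phi
    eps = (z - theta @ phi) / m2           # normalized estimation error
    theta = theta + dt * (Gamma @ (eps * phi))
    n = phi.size // 2
    b_hat, a_hat = theta[:n], theta[n:]    # estimates passed to the observer (5.3.8)
    return theta, b_hat, a_hat

# Example usage: n = 2 (four parameters), Gamma = 10 I, dummy regressor value
theta = np.zeros(4)
Gamma = 10.0 * np.eye(4)
phi = np.array([0.1, 0.0, -0.2, 0.0])
theta, b_hat, a_hat = gradient_step(theta, phi, z=0.05, Gamma=Gamma, dt=1e-3)
```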

Proof (i) The adaptive laws of Tables 4.2 and 4.3 of Chapter 4 guarantee that θ ∈ L_∞ and ε, εm, θ̇ ∈ L_2 ∩ L_∞, independent of the boundedness of u, y. Because u ∈ L_∞ and the plant is stable, we have x_α, y, φ, m ∈ L_∞. Because of the boundedness of φ and y, we can also establish that ε, εm → 0 as t → ∞ by showing that ε̇, \frac{d}{dt}(εm) ∈ L_∞ (which follows from φ, φ̇, m ∈ L_∞), which, together with ε, εm ∈ L_2, implies that ε, εm → 0 as t → ∞ (see Lemma 3.2.5). Because θ̇ is proportional to εφ with a bounded adaptation gain and φ ∈ L_∞, the convergence of εm, θ̇ to zero follows.

The proof of (i) is complete if we establish that x̂ ∈ L_∞. We rewrite the observer equation (5.3.8) in the form

\dot{\hat{x}} = A^* \hat{x} + \hat{b}_p(t) u + (\hat{A}(t) - A^*) x_\alpha
(5.3.11)

Because θ = [b̂_p^T(t), â_p^T(t)]^T ∈ L_∞, u, x_α ∈ L_∞, and A* is a stable matrix, it follows that x̂ ∈ L_∞. Hence, the proof of (i) is complete.


(ii) Let x̃ = x_α − x̂ be the state observation error. It follows from (5.3.11), (5.3.6) that

\dot{\tilde{x}} = A^* \tilde{x} - \tilde{b}_p u + \tilde{a}_p y, \qquad \tilde{x}(0) = x_\alpha(0) - \hat{x}(0)
(5.3.12)

where b̃_p ≜ b̂_p − b_p, ã_p ≜ â_p − a_p. From (5.3.12), we obtain

\tilde{y}(s) = C^T \tilde{x}(s) = C^T (sI - A^*)^{-1} (-\tilde{b}_p u + \tilde{a}_p y) + \varepsilon_t

where ε_t = \mathcal{L}^{-1}\{C^T (sI - A^*)^{-1}\} \tilde{x}(0) is an exponentially decaying to zero term.

Because (C, A*) is in the observer canonical form, we have

C^T (sI - A^*)^{-1} = \frac{[s^{n-1}, s^{n-2}, \ldots, s, 1]}{\det(sI - A^*)} = \frac{\alpha_{n-1}^T(s)}{\det(sI - A^*)}

Letting Λ*(s) ≜ det(sI − A*), we have

\tilde{y}(s) = \frac{1}{\Lambda^*(s)} \sum_{i=1}^{n} s^{n-i} \left[ -\tilde{b}_{n-i} u + \tilde{a}_{n-i} y \right] + \varepsilon_t

where b̃_{n−i}, ã_{n−i} are the ith elements of b̃_p and ã_p, respectively, which may be written as

\tilde{y}(s) = \frac{\Lambda(s)}{\Lambda^*(s)} \sum_{i=1}^{n} \frac{s^{n-i}}{\Lambda(s)} \left[ -\tilde{b}_{n-i} u + \tilde{a}_{n-i} y \right] + \varepsilon_t
(5.3.13)

where Λ(s) is the Hurwitz polynomial of degree n defined in (5.3.10). We now apply Lemma A.1 (see Appendix A) to each term under the summation in (5.3.13) to obtain

\tilde{y}(s) = \frac{\Lambda(s)}{\Lambda^*(s)} \sum_{i=1}^{n} \left\{ -\frac{s^{n-i}}{\Lambda(s)} u \, \tilde{b}_{n-i} + \frac{s^{n-i}}{\Lambda(s)} y \, \tilde{a}_{n-i} - W_{ci}(s)\big( (W_{bi}(s) u) \dot{\tilde{b}}_{n-i} \big) + W_{ci}(s)\big( (W_{bi}(s) y) \dot{\tilde{a}}_{n-i} \big) \right\} + \varepsilon_t
(5.3.14)

where the elements of W_{ci}(s), W_{bi}(s) are strictly proper transfer functions with the same poles as Λ(s). Using the definition of φ and the parameter error θ̃ ≜ θ − θ*, we rewrite (5.3.14) as

\tilde{y} = \frac{\Lambda(s)}{\Lambda^*(s)} \left[ -\tilde{\theta}^T \phi + \sum_{i=1}^{n} W_{ci}(s)\big( -(W_{bi}(s) u) \dot{\tilde{b}}_{n-i} + (W_{bi}(s) y) \dot{\tilde{a}}_{n-i} \big) \right] + \varepsilon_t
