


Gildas Besançon, Qinghua Zhang, Hassan Hammouri

Laboratoire d'Automatique de Grenoble,
ENSIEG, BP 46, 38402 Saint-Martin d'Hères, France

IRISA-INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France

Université Claude Bernard Lyon I, 69622 Villeurbanne Cedex, France

Abstract: Motivated by constant parameter estimation in nonlinear models, as well as state observation in dynamical systems, an adaptive observer is proposed for a class of nonlinear systems depending linearly on unknown parameters. This observer combines the well-known high-gain nonlinear design on the one hand with an adaptive linear design on the other hand, building on previous results in that direction (Xu, 2002). Global exponential convergence is established under appropriate excitation conditions, and the results are illustrated by simulations. Copyright 2002

Keywords: Nonlinear systems, nonlinear input-output models, nonlinear observers, adaptive observers, high gain.

1. INTRODUCTION

The problem of parameter estimation in nonlinear systems has motivated a lot of work, for system identification or fault detection for instance. In practice one generally has to cope, in addition, with the lack of measurements on internal dynamics, which amounts to a problem of state reconstruction. In many cases, high-gain techniques have proved very efficient for state estimation, leading to the now well-known high-gain observer (Gauthier et al., 1992).

When the system further depends on some unknown parameters, the observer design has to be modified so that both state variables and parameters can be estimated, leading to so-called adaptive observers. Various results in that respect can be found, going back to (Luders and Narendra, 1973; Carroll and Lindorff, 1973; Kreisselmeier, 1977) for linear systems, or (Bastin and Gevers, 1988; Marino, 1990) for nonlinear ones, but with nonlinearities depending only on the measured input/output. Some extensions to more general cases have been proposed under particular passivity-like conditions (Cho and Rajamani, 1997; Besançon, 2000; Besançon and Zhang, 2002) or using a variable-structure approach (Martinez and Poznyak, 2001).

The purpose of the present note is to show how high-gain techniques can also be useful in the context of joint state and parameter estimation. Indeed, on the basis of a recent result on adaptive observation for linear time-varying systems (Zhang, 2002), an adaptive observer has been proposed for a class of nonlinear systems admitting a high-gain observer but further depending on unknown parameters (Zhang et al., 2003). In that previous study, the unknown parameters enter the system through known time-varying vector fields. One contribution of the present note is to extend this result to the case where those vector fields can depend on the whole state, namely where the coefficients of the parameters are not directly known. The assumptions on the considered class of systems are basically that, if all the parameters were known, a high-gain observer could be designed in a classical way, and that the systems are sufficiently excited, in a sense close to the assumptions usually required in adaptive systems¹ (signals must be rich enough so that the unknown parameters can indeed be identified). The result is then extended to a class of multi-output nonlinear systems, which in particular includes possible state-space representations of linearly parameterized input-output models. This makes it of particular interest for parameter identification in nonlinear systems.

The considered class of systems and its motivation are presented in section 2, and the corresponding estimation scheme for both the state and the parameter vectors is given in section 3. Some examples are presented in section 4 as an illustration of the proposed results, and conclusions end the paper in section 5.

2.1 Background results on high-gain observers

High-gain observer design is related to systems of a specific structure (Gauthier et al., 1992), roughly corresponding to the property that the system is observable for any input (Gauthier and Bornard, 1981). In short, this structure is as follows:

    \dot x(t) = A_0 x(t) + \varphi(x(t), u(t))
    y(t) = C_0 x(t)                                                    (1)

where x ∈ IRⁿ denotes the state vector, u ∈ IRᵐ the input vector, and y ∈ IR the measured output, while A₀, C₀ and φ satisfy:

    A_0 = \begin{pmatrix} 0 & 1 & & 0 \\ & \ddots & \ddots & \\ & & & 1 \\ 0 & & & 0 \end{pmatrix}, \qquad C_0 = (\,1\ 0\ \cdots\ 0\,)            (2)

and

    \varphi(x,u) = \big(\ \varphi_1(x_1,u)\ \ \varphi_2(x_1,x_2,u)\ \ \cdots\ \ \varphi_n(x,u)\ \big)^T        (3)

if x = (x₁ x₂ ⋯ xₙ)ᵀ.

Using some Lipschitz property of φ in x uniformly in u, an observer for (1) can be designed as follows (Gauthier et al., 1992):

    \dot{\hat x} = A_0 \hat x + \varphi(\hat x, u) - \lambda\,\Lambda(\lambda)^{-1} K_0\,(C_0 \hat x - y)      (4)

where K₀ is such that A₀ − K₀C₀ is stable, Λ(λ)⁻¹ = diag(1, λ, λ², …, λⁿ⁻¹), and λ > 0 is chosen large enough.

Notice that such high-gain techniques also apply to multi-output systems (Bornard and Hammouri, 1991), whose characterization has recently been revisited (Bornard and Hammouri, 2002). In the case of two outputs for instance, a simple structure to which observer (4) can easily be extended is as follows (Besançon and Hammouri, 1998):

    \dot x(t) = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} x(t) + \begin{pmatrix} \varphi_1(x(t),u(t)) \\ \varphi_2(x(t),u(t)) \end{pmatrix}
    y(t) = \begin{pmatrix} C_1 & 0 \\ 0 & C_2 \end{pmatrix} x(t)                (5)

where A₁, A₂ (resp. C₁, C₂) are of the same form as A₀ (resp. C₀) in (2), and, if x = (x[1]ᵀ x[2]ᵀ)ᵀ with x[i] of the same dimension as Aᵢ, the functions φ₁, φ₂ satisfy the same structure as in (3) w.r.t. x[i] respectively, but with their respective last components possibly depending in addition on x[j], j ≠ i. In that case indeed, one can design an observer for (5) by combining observers of the form (4) respectively designed for x[1] and x[2] (Besançon and Hammouri, 1998).

2.2 A class of parameter-affine systems

The problem considered here is that of state and parameter estimation for multi-output systems of a structure like (5), which can be described as follows:

    \dot x(t) = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} x(t) + \begin{pmatrix} \varphi_1(x(t),u(t)) \\ \varphi_2(x(t),u(t)) \end{pmatrix} + \begin{pmatrix} \Psi_1(x(t),u(t)) \\ \Psi_2(x(t),u(t)) \end{pmatrix} \theta
    y(t) = \begin{pmatrix} C_1 & 0 \\ 0 & C_2 \end{pmatrix} x(t)                (6)

where θ ∈ IR^q is a vector of unknown parameters, and Ψ₁, Ψ₂ are nᵢ×q matrices (for some n₁, n₂) satisfying:

    \Psi_i(x,u) = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \\ \Psi_{i\,n_i,1}(x,u) & \cdots & \Psi_{i\,n_i,q}(x,u) \end{pmatrix}        (7)

Considering such a class of state-space representations subject to unknown parameters first extends previous results on adaptive high-gain observers (Zhang et al., 2003), obtained for single-output systems and when Ψ(x,u) = Ψ(t) is known.

¹ See e.g. (Sastry and Bodson, 1989).
² Here v⁽ᵏ⁾ = dᵏv/dtᵏ for any integer k and any function of time v(t).
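To make the background design concrete, here is a minimal Python simulation sketch of the high-gain observer (4) on a simple two-state system of the form (1)-(3). The plant, the gain K₀, the value of λ and the explicit-Euler step are illustrative choices of this sketch, not values taken from the paper:

```python
import numpy as np

# High-gain observer (4) on a two-state triangular system (1)-(3):
#   x1' = x2,  x2' = -sin(x1) + u,  y = x1.
lam = 5.0                          # high-gain parameter lambda
dt, T = 1e-3, 5.0                  # Euler step and horizon (illustrative)

A0 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
K0 = np.array([2.0, 1.0])          # A0 - K0*C0 has both eigenvalues at -1
Lam_inv = np.diag([1.0, lam])      # Lambda(lambda)^{-1} = diag(1, lambda)

def phi(x, u):                     # triangular nonlinearity as in (3)
    return np.array([0.0, -np.sin(x[0]) + u])

x = np.array([1.0, -1.0])          # plant state
xh = np.zeros(2)                   # observer state

for k in range(int(T / dt)):
    u = np.sin(k * dt)
    y = x[0]                       # measured output y = C0 x
    x = x + dt * (A0 @ x + phi(x, u))
    # observer (4): copy of the plant + high-gain output correction
    xh = xh + dt * (A0 @ xh + phi(xh, u) - lam * Lam_inv @ K0 * (xh[0] - y))

err = np.linalg.norm(xh - x)       # estimation error after T seconds
```

Increasing λ speeds up convergence at the price of a larger transient peak and a higher sensitivity to measurement noise, which is the trade-off behind choosing λ "large enough".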
It is further motivated by the fact that a general nonlinear input-output model of a (siso) system linear in the unknown parameters, of the following form²:

    y^{(n)} = \varphi_0(y^{(n-1)},\ldots,\dot y, y, u^{(m)},\ldots,\dot u, u) + \varphi_1(y^{(n-1)},\ldots,\dot y, y, u^{(m)},\ldots,\dot u, u)^T\,\theta        (8)

can be re-written as (6), for instance whenever the input function u satisfies (as in (Ciccarella et al., 1993)):

    u^{(r+1)}(t) = 0 almost everywhere for t ≥ 0, and some r ∈ IN,

which happens e.g. for piecewise polynomial input functions. Just consider indeed the time derivatives of y as the state variables corresponding to x[1], and the time derivatives of u as those corresponding to x[2], with y and u as the output variables: then we clearly end up with a state-space representation of the form (6). Notice also that input-output regressions which are linear in the parameters - as in (8) - have been shown to roughly characterize globally identifiable models (Ljung and Glad, 1994).

3.1 Single output case

Let us first consider a system of the following form:

    \dot x(t) = A_0 x(t) + \varphi(x(t),u(t)) + \Psi(x(t),u(t))\,\theta
    y(t) = C_0 x(t)                                                    (9)

where x ∈ IRⁿ denotes the state vector, assumed to start in χ₀ ⊂ IRⁿ, u ∈ IRᵐ is the input vector, y ∈ IR the measured output, and θ ∈ IR^q a vector of unknown parameters, i.e. Ψ(x,u) ∈ IRⁿˣ^q.

Let us assume that a high-gain observer could be designed as in (4) whenever θ is known, by considering that the following conditions hold:

[A1] A₀, C₀, φ are as in (2)-(3), and Ψ(x,u) is an n×q matrix of the same form as Ψᵢ in (7).

[A2] φ, Ψ are smooth functions w.r.t. their arguments, and u is bounded, generating bounded states ‖x(t)‖ ≤ X for any x(0) ∈ χ₀, while ‖θ‖ ≤ Θ.

The main result of this section is that, under an additional condition of persistent excitation - as usually required in adaptive systems - an adaptive version of the available high-gain observer can be designed:

[A3] Given some K₀ making A₀ − K₀C₀ a stable matrix, the inputs u are such that the state vector satisfies the following property: for any x(0) ∈ χ₀ and any Υ(0) ∈ IRⁿˣ^q, the solution Υ(t) of:

    \dot\Upsilon(t) = \lambda(A_0 - K_0C_0)\Upsilon(t) + \lambda\,\Psi(x(t),u(t))        (10)

is such that, for some t₀ ≥ 0, there exist δ, T independent of λ such that, for λ large enough, ∀t ≥ t₀:

    \int_t^{t+T} \Upsilon(\tau)^T C_0^T C_0\,\Upsilon(\tau)\,d\tau \ \ge\ \delta I        (11)

where I stands for the identity.

Remark 3.1. Assumption [A3] to some extent corresponds to Ψ(x(t),u(t)) being persistently exciting, since C₀Υ is obtained by filtering Ψ through a linear stable minimum-phase transfer. The fact that δ, T must be independent of λ can for instance hold whenever the limiting Υ when λ → ∞ (namely −[A₀ − K₀C₀]⁻¹Ψ) satisfies the inequality of (11).

The result is as follows:

Theorem 3.1. Given a system (9) satisfying assumptions [A1], [A2], [A3], for λ large enough, the system below is an asymptotic observer for (9), in the sense that for any initial condition x(0) ∈ χ₀ and any θ̂(0), x̂(0) respectively bounded by Θ and X, ‖x̂(t) − x(t)‖ and ‖θ̂(t) − θ‖ exponentially go to zero:

    \dot{\hat\Upsilon}(t) = \lambda(A_0 - K_0C_0)\hat\Upsilon(t) + \lambda\,\Psi(\bar x(t), u(t))
    \dot{\hat x}(t) = A_0\hat x(t) + \varphi(\bar x(t),u(t)) + \Psi(\bar x(t),u(t))\bar\theta(t) + \lambda\Lambda(\lambda)^{-1}\big[K_0 + \hat\Upsilon(t)\hat\Upsilon(t)^T C_0^T\big]\big[y(t) - C_0\hat x(t)\big]
    \dot{\hat\theta}(t) = \lambda^n\,\hat\Upsilon(t)^T C_0^T\big[y(t) - C_0\hat x(t)\big]        (12)

with

    x̄ = x̂ if ‖x̂‖ ≤ X,  x̄ = X x̂/‖x̂‖ otherwise;
    θ̄ = θ̂ if ‖θ̂‖ ≤ Θ,  θ̄ = Θ θ̂/‖θ̂‖ otherwise,

where K₀ and Λ(λ) are as in (4).

The proof is based on the following lemmas:

Lemma 3.1. ∀ε > 0, ∀t₁ > 0, ∃λ₁ > 0 such that ∀λ ≥ λ₁, x̂ given by (12) satisfies:

    ‖x̂(t) − x(t)‖ ≤ ε,  ∀t ≥ t₁.

In other words, whatever θ̂ is (under [A2]), observer (12) can provide an arbitrarily accurate estimate of x by appropriately tuning λ (see e.g. (Besançon, 2003)).

Lemma 3.2. If [A3] holds and Υ̂ is given by (12), then ∀t₂ ≥ 0, ∃ε > 0, ∃λ₂ > 0 such that ∀λ ≥ λ₂:

    ‖x̂(t) − x(t)‖ ≤ ε for t ≥ t₂  ⟹  Υ̂ satisfies (11) for t ≥ t₂.

This means that, for some estimation x̂ of x accurate enough (which can always be obtained in view of lemma 3.1), Υ̂ satisfies the same excitation property as Υ.

Lemma 3.3. If Υ̂ given by (12) satisfies (11), then there exists λ₃ > 0 such that, for all λ ≥ λ₃ in (12), θ̃ := λ⁻ⁿ(θ̂ − θ) and ε := Λ(λ)[x̂ − x] − Υ̂θ̃ exponentially go to zero.

Lemma 3.3 clearly states that if Υ̂ is sufficiently exciting (which can be made true by choosing λ large enough in view of the previous lemmas), then the result of theorem 3.1 is indeed achieved. The proofs of lemmas 3.1, 3.2, 3.3 are as follows:

Proof of lemma 3.1: Set ε₀ := Λ(λ)[x̂ − x]; then one can check that:

    \dot\varepsilon_0 = \lambda(A_0-K_0C_0)\varepsilon_0 - \lambda\hat\Upsilon\hat\Upsilon^TC_0^TC_0\,\varepsilon_0 + \Lambda(\lambda)\Delta\varphi(\lambda) + \Lambda(\lambda)\Delta\Psi(\lambda)\,\theta + \Lambda(\lambda)\Psi(\bar x,u)(\bar\theta - \theta)

where

    \Delta\varphi(\lambda) = \varphi(\bar x,u) - \varphi(x,u),\qquad \Delta\Psi(\lambda) = \Psi(\bar x,u) - \Psi(x,u).        (13)

Notice that, from [A2] and the definition of x̄, Δφ(λ) and ΔΨ(λ) are bounded, and from [A1]:

    \|\Lambda(\lambda)\Delta\varphi(\lambda)\| \le k_\varphi\|\varepsilon_0\|,\qquad \|\Lambda(\lambda)\Delta\Psi(\lambda)\| \le k_\Psi\|\varepsilon_0\|        (14)

for Lipschitz constants k_φ, k_Ψ. Notice also that, from its definition, Υ̂ is upper bounded independently of λ, and that from [A2] and the definitions of x̄, θ̄: ‖Λ(λ)Ψ(x̄,u)(θ̄ − θ)‖ ≤ μ/λⁿ⁻¹ for some μ independent of λ. Now, by choosing V₀ := ε₀ᵀP₀ε₀ with P₀ such that:

    P_0(A_0 - K_0C_0) + (A_0 - K_0C_0)^T P_0 = -I,        (15)

one can check that:

    \dot V_0 \le -(\lambda - a)\|\varepsilon_0\|^2 + \frac{b}{\lambda^{n-1}}\|\varepsilon_0\|

where a, b are constants independent of λ; from this, ‖ε₀‖ is ultimately made smaller than b/(λⁿ⁻¹(λ − a)), which gives the result for ‖x̂ − x‖.

Proof of lemma 3.2: Set E := Υ̂ − Υ. Then:

    \dot E = \lambda(A_0 - K_0C_0)E + \lambda\,\Delta\Psi(\lambda)

Considering, for each column Eᵢ of E, a function Vᵢ := EᵢᵀP₀Eᵢ with P₀ as in (15), one can check that, for ‖x̂ − x‖ ≤ ε: V̇ᵢ ≤ −λ‖Eᵢ‖² + λγε‖Eᵢ‖ for some γ independent of λ. Hence ‖E‖ can become arbitrarily small according to ε. Moreover, notice that Υ is by definition upper bounded independently of λ, say by Ῡ. Now, considering the problem of Υ̂ = Υ + E satisfying (11), we have:

    \int_t^{t+T} (\Upsilon+E)(\tau)^T C_0^TC_0 (\Upsilon+E)(\tau)\,d\tau        (16)
    \ =\ \int_t^{t+T} \Upsilon(\tau)^T C_0^TC_0 \Upsilon(\tau)\,d\tau        (17)
    \ +\ \int_t^{t+T} \big[E(\tau)^T C_0^TC_0 \Upsilon(\tau) + \Upsilon(\tau)^T C_0^TC_0 E(\tau)\big]\,d\tau        (18)
    \ +\ \int_t^{t+T} E(\tau)^T C_0^TC_0 E(\tau)\,d\tau        (19)

First, by [A3], (17) ≥ δI, and clearly (19) ≥ 0. Then, if ‖E‖ ≤ Ē and ‖Υ‖ ≤ Ῡ, we have:

    \|(18)\| \le \int_t^{t+T} 2\,\|E(\tau)^T C_0^TC_0 \Upsilon(\tau)\|\,d\tau \le 2\bar E\bar\Upsilon T

and thus (18) ≥ −2ĒῩT·I. Finally, (16) ≥ (δ − 2ĒῩT)I, and thus, for Ē small enough, namely ‖x̂ − x‖ small enough, (11) holds for Υ̂.

Proof of lemma 3.3: Set θ̃ := λ⁻ⁿ(θ̂ − θ) and ε := Λ(λ)[x̂ − x] − Υ̂θ̃. Then one can check that:

    \dot\varepsilon = \lambda(A_0 - K_0C_0)\varepsilon + \Lambda(\lambda)\Delta\varphi(\lambda) + \Lambda(\lambda)\Delta\Psi(\lambda)\,\theta
    \dot{\tilde\theta} = -\hat\Upsilon^T C_0^TC_0\hat\Upsilon\,\tilde\theta - \hat\Upsilon^T C_0^TC_0\,\varepsilon        (20)

Now one has simply to choose some appropriate Lyapunov function for (20). On the one hand, one can take again P₀ as in (15). On the other hand, noting that if Υ̂ satisfies (11), then ż = −Υ̂ᵀC₀ᵀC₀Υ̂ z is classically exponentially stable (Narendra and Annaswamy, 1989), one can consider some positive definite matrix P satisfying:

    \dot P = [\hat\Upsilon^T C_0^TC_0\hat\Upsilon]^T P + P\,[\hat\Upsilon^T C_0^TC_0\hat\Upsilon] - I

One can check that, Υ̂ being upper bounded independently of λ, P classically satisfies p₁I ≤ P(t) ≤ p₂I for any t and some p₁, p₂ independent of λ. Finally, one can choose V(ε, θ̃) := εᵀP₀ε + θ̃ᵀPθ̃. We indeed get:

    \dot V = -\lambda\,\varepsilon^T\varepsilon + 2\varepsilon^T P_0\Lambda(\lambda)\big[\Delta\varphi(\lambda) + \Delta\Psi(\lambda)\theta\big] - \tilde\theta^T\tilde\theta - 2\tilde\theta^T P\,\hat\Upsilon^T C_0^TC_0\,\varepsilon

and, by using (14) and the fact that Υ̂ and P are upper bounded independently of λ, for λ large enough we can easily obtain:

    \dot V \le -c\|\varepsilon\|^2 - d\|\tilde\theta\|^2

for some c, d > 0, and thus V̇ ≤ −νV for some ν > 0, which gives the conclusion.

3.2 The two-output case

Now let us come to the case of a system (6) written in compact form as:

    \dot x = Ax + \varphi(x,u) + \Psi(x,u)\,\theta
    y = Cx                                                    (21)
with x = (x[1]ᵀ x[2]ᵀ)ᵀ, x[i] = (x₁[i] … x_{nᵢ}[i])ᵀ ∈ IR^{nᵢ}, y ∈ IR², and A, C, φ, Ψ given by (6).

Assumptions [A1] to [A3] become:

[A1′] For i = 1, 2, Aᵢ, Cᵢ are as A₀, C₀ in (2),

    \varphi_i(x,u) = \big(\varphi_{i1}(x_1[i],u),\ \ldots,\ \varphi_{i\,n_i-1}(x_1[i],\ldots,x_{n_i-1}[i],u),\ \varphi_{i\,n_i}(x[i],x[j],u)\big)^T,\quad j \ne i,

and Ψᵢ(x,u) is as in (7).

[A2′] The same as [A2].

[A3′] Given some Kᵢ making Aᵢ − KᵢCᵢ a stable matrix for i = 1, 2, the inputs u are such that the state vector satisfies the following property: for any x(0) ∈ χ₀ and any Υᵢ(0) ∈ IR^{nᵢ×q}, the solution Υᵢ(t) of:

    \dot\Upsilon_i = \lambda_i(A_i - K_iC_i)\Upsilon_i + \lambda_i\,\Psi_i(x,u)        (22)

is such that, for some t₀ ≥ 0, there exist δᵢ, Tᵢ independent of λᵢ such that, for λᵢ large enough, ∀t ≥ t₀:

    \int_t^{t+T_i} \Upsilon_i(\tau)^T C_i^T C_i\,\Upsilon_i(\tau)\,d\tau \ \ge\ \delta_i I.        (23)

Then, by similar arguments as in the previous subsection, one can prove the following:

Theorem 3.2. Given a system (21) satisfying assumptions [A1′], [A2′], [A3′], for λ large enough, the system below is an asymptotic observer for (21), in the sense that for any initial condition x(0) ∈ χ₀ and any θ̂(0), x̂(0) respectively bounded by Θ and X, ‖x̂(t) − x(t)‖ and ‖θ̂(t) − θ‖ exponentially go to zero:

    \dot{\hat\Upsilon}(t) = \lambda(A - KC)\hat\Upsilon(t) + \lambda\,\Psi(\bar x(t), u(t))
    \dot{\hat x}(t) = A\hat x(t) + \varphi(\bar x(t),u(t)) + \Psi(\bar x(t),u(t))\bar\theta(t) + \lambda\Lambda(\lambda)^{-1}\big[K + \hat\Upsilon(t)\hat\Upsilon(t)^T C^T\big]\big[y(t) - C\hat x(t)\big]
    \dot{\hat\theta}(t) = I_\lambda\,\hat\Upsilon(t)^T C^T\big[y(t) - C\hat x(t)\big]        (24)

with x̄ = x̂ if ‖x̂‖ ≤ X, x̄ = X x̂/‖x̂‖ otherwise; θ̄ = θ̂ if ‖θ̂‖ ≤ Θ, θ̄ = Θ θ̂/‖θ̂‖ otherwise, and

    K = \begin{pmatrix} K_1 & 0 \\ 0 & K_2 \end{pmatrix},\quad \Lambda = \begin{pmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{pmatrix},\quad I_\lambda = \begin{pmatrix} \lambda^{n_1} I & 0 \\ 0 & \lambda^{n_2} I \end{pmatrix},

Λᵢ being as in (4) and Kᵢ making Aᵢ − KᵢCᵢ stable.

Lemma 3.1 indeed clearly applies to each subvector x[i] of x. Notice that, by choosing λ = max(λ₁, λ₂), Υ = (Υ₁ᵀ Υ₂ᵀ)ᵀ with Υᵢ as in (22) satisfies a property (23), and, with similar arguments as in the proof of lemma 3.2, so does Υ̂ of (24). Finally, by a similar transformation as in the proof of lemma 3.3 (with here θ̃ = I_λ⁻¹(θ̂ − θ)), one still obtains error equations of the form (20), and by combining Lyapunov functions respectively associated to ε and θ̃, the conclusion follows in the same way.

4. EXAMPLES

As a first example, let us consider the single-output system described by:

    \dot x_1 = x_2 + u
    \dot x_2 = -x_1 - 2\sin(x_2) + \arctan(x_2)\,\theta_1 + \cos(x_1x_2)\,\theta_2
    y = x_1

where u(t) = sin(2t) + 10 cos(10t), x(0) = (1 1)ᵀ, and θ = (θ₁ θ₂)ᵀ = (1 2)ᵀ.

Here the system is clearly in the form (9), and thus one can design an observer (12). Results obtained with λ = 10 are shown in figure 1 for state estimation and in figure 2 for parameter estimation. From those figures, it can be seen that the expected estimation is achieved.

Fig. 1. State estimation errors (x̂₁ − x₁, x̂₂ − x₂).

Fig. 2. Parameter estimation (θ̂₁ vs θ₁, θ̂₂ vs θ₂).
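The first example can also be reproduced numerically. The sketch below implements the adaptive observer (12) for it with an explicit Euler scheme; the step size, horizon, initial estimates and saturation radii are illustrative choices of this sketch, and the observer equations follow the form of (12) recalled in section 3:

```python
import numpy as np

# Adaptive high-gain observer (12) on the first example of section 4:
#   x1' = x2 + u
#   x2' = -x1 - 2 sin(x2) + arctan(x2) theta1 + cos(x1 x2) theta2,  y = x1
lam = 10.0                           # high-gain parameter lambda
dt, T = 2e-4, 60.0                   # Euler step and horizon (illustrative)
theta = np.array([1.0, 2.0])         # true parameters

A0 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
C0 = np.array([1.0, 0.0])
K0 = np.array([2.0, 1.0])            # A0 - K0*C0 has both eigenvalues at -1
Lam_inv = np.diag([1.0, lam])        # Lambda(lambda)^{-1} = diag(1, lambda)

def u_fun(t):
    return np.sin(2 * t) + 10 * np.cos(10 * t)

def phi(x, u):                       # known drift, triangular as in (3)
    return np.array([u, -x[0] - 2 * np.sin(x[1])])

def Psi(x):                          # parameter coefficients, last-row form (7)
    return np.array([[0.0, 0.0],
                     [np.arctan(x[1]), np.cos(x[0] * x[1])]])

def sat(v, r):                       # projection on the ball of radius r
    n = np.linalg.norm(v)
    return v if n <= r else r * v / n

x = np.array([1.0, 1.0])             # plant state, x(0) = (1, 1)
xh = np.zeros(2)                     # state estimate
th = np.zeros(2)                     # parameter estimate
Ups = np.zeros((2, 2))               # filter state (Upsilon-hat)

for k in range(int(T / dt)):
    t = k * dt
    u, y = u_fun(t), x[0]
    xs, ths = sat(xh, 10.0), sat(th, 5.0)    # saturated arguments
    e = y - xh[0]                            # output error y - C0 xh
    Ups = Ups + dt * (lam * (A0 - np.outer(K0, C0)) @ Ups + lam * Psi(xs))
    gain = lam * Lam_inv @ (K0 + Ups @ Ups.T @ C0)
    xh = xh + dt * (A0 @ xh + phi(xs, u) + Psi(xs) @ ths + gain * e)
    th = th + dt * (lam**2 * Ups.T @ C0 * e)  # lambda^n with n = 2
    x = x + dt * (A0 @ x + phi(x, u) + Psi(x) @ theta)

err_x = np.linalg.norm(xh - x)       # final state estimation error
err_th = np.linalg.norm(th - theta)  # final parameter estimation error
```

As in the paper's figures, the state error settles quickly while the parameter estimates converge more slowly, at a rate governed by the excitation level of the filtered regressor Υ̂.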
As a second example, let us consider the problem of identifying parameters in a system represented by the following input-output relationship:

    y^{(2)} = \sin(y\dot y) + \cos(yu)\,\theta_1 + \frac{\dot u^2}{1 + u\dot u}\,\theta_2.

Here, by considering for instance some piecewise linear approximation of the previous input function u(t) = sin(2t) + 10 cos(10t), the system can be written in the form (6) (with x[1] = (y ẏ)ᵀ and x[2] = (u u̇)ᵀ). An observer (24) can then be designed so as to get estimates of θ₁, θ₂ (and y, ẏ) from the only measurements of y, u. Simulation results obtained with λ = 40 are given in figure 3 (where θ₁ = 10, θ₂ = 5).

Fig. 3. Parameter identification (θ̂₁ vs θ₁, θ̂₂ vs θ₂).

5. CONCLUSION

In this paper, an adaptive version of the well-known high-gain observer for nonlinear systems has been proposed, as an extension of previous results of (Zhang, 2002; Zhang et al., 2003) in two directions: on the one hand, the unknown parameters enter the system through state-dependent functions; on the other hand, a particular multi-output case has been considered, motivated by a possible state-space representation of nonlinear input-output models.

REFERENCES

Bastin, G. and M.R. Gevers (1988). Stable adaptive observers for nonlinear time-varying systems. IEEE Trans. on Automatic Control 33(7), 650–658.
Besançon, G. (2000). Remarks on nonlinear adaptive observer design. Systems & Control Letters 41, 271–280.
Besançon, G. (2003). High gain observation with disturbance attenuation and application to robust fault detection. Automatica 39, 1095–1102.
Besançon, G. and H. Hammouri (1998). On observer design for interconnected systems. Journal of Mathematical Systems, Estimation, & Control.
Besançon, G. and Q. Zhang (2002). Further developments on nonlinear adaptive observers with application to fault detection. In: IFAC World Congress, Barcelona, Spain.
Bornard, G. and H. Hammouri (1991). A high gain observer for a class of uniformly observable systems. In: Proc. 30th IEEE Conf. on Decision and Control, Brighton, England. pp. 1494.
Bornard, G. and H. Hammouri (2002). A graph approach to uniform observability of nonlinear multi-output systems. In: 41st IEEE Conf. on Decision and Control, Las Vegas, USA. pp. 701.
Carroll, R.L. and D.P. Lindorff (1973). An adaptive observer for single input single output linear systems. IEEE Trans. on Automatic Control 18(5), 428–435.
Cho, Y.M. and R. Rajamani (1997). A systematic approach to adaptive observer synthesis for nonlinear systems. IEEE Trans. on Automatic Control 42(4), 534–537.
Ciccarella, G., M. Dalla Mora and A. Germani (1993). A Luenberger-like observer for nonlinear systems. Int. Journal of Control 57, 537–556.
Gauthier, J.P. and G. Bornard (1981). Observability for any u(t) of a class of nonlinear systems. IEEE Trans. on Automatic Control 26(4), 922–926.
Gauthier, J.P., H. Hammouri and S. Othman (1992). A simple observer for nonlinear systems - applications to bioreactors. IEEE Trans. on Automatic Control 37(6), 875–880.
Kreisselmeier, G. (1977). Adaptive observers with exponential rate of convergence. IEEE Trans. on Automatic Control 22, 2–8.
Ljung, L. and T. Glad (1994). On global identifiability for arbitrary model parametrizations. Automatica 30, 265–276.
Luders, G. and K.S. Narendra (1973). An adaptive observer and identifier for a linear system. IEEE Trans. on Automatic Control 18(5), 496–499.
Marino, R. (1990). Adaptive observers for single output nonlinear systems. IEEE Trans. on Automatic Control 35(9), 1054–1058.
Martinez, J. Correa and A. Poznyak (2001). Switching structure robust state and parameter estimator for MIMO nonlinear systems. Int. J. Control 74(2), 175–189.
Narendra, K.S. and A. Annaswamy (1989). Stable Adaptive Systems. Prentice Hall Int., NJ.
Sastry, S. and M. Bodson (1989). Adaptive Control - Stability, Convergence and Robustness. Prentice Hall Int., NJ.
Xu, A. (2002). Observateurs adaptatifs non linéaires et diagnostic de pannes. PhD thesis. Université de Rennes I.
Zhang, Q. (2002). Adaptive observer for MIMO linear time-varying systems. IEEE Trans. on Automatic Control 47(3), 525–529.
Zhang, Q., A. Xu and G. Besançon (2003). An efficient nonlinear adaptive observer with global convergence. In: 13th IFAC Symp. on System Identification, Rotterdam, The Netherlands.