
ASYMPTOTIC STABILITY AND FEEDBACK STABILIZATION

R. W. Brockett

Abstract. We consider the local behavior of control problems described
by (˙ = d/dt)

    ẋ = f(x,u) ;   f(x₀,0) = 0

and more specifically, the question of determining when there exists a


smooth function u(x) such that x = x₀ is an equilibrium point which is
asymptotically stable. Our main results are formulated in Theorems 1
and 2 below. Whereas it might have been suspected that controllability
would insure the existence of a stabilizing control law, Theorem 1 uses
a degree-theoretic argument to show this is far from being the case.
The positive result of Theorem 2 can be thought of as providing an
application of high gain feedback in a nonlinear setting.

1. Introduction

In this paper we establish general theorems which are strong


enough to imply, among other things, that

a) there is a continuous control law (u,v) = (u(x,y,z), v(x,y,z))
   which makes the origin asymptotically stable for

    ẋ = u
    ẏ = v
    ż = xy

and that

b) there exists no continuous control law (u,v) = (u(x,y,z), v(x,y,z))
   which makes the origin asymptotically stable for

    ẋ = u
    ẏ = v
    ż = yu − xv

The first of these implies that the null solution of Euler's
angular velocity equations can be made asymptotically stable with two
control torques aligned with principal axes. (See [1,2] for a general
discussion, but not this particular result.) The second provides a
counterexample to the oft repeated conjecture asserting that a
reasonable form of local controllability implies the existence of a
stabilizing control law. Section 2 gives certain background material
in control theory. In Section 3 we formulate our nonexistence result
and in Section 4 we give a criterion for the existence of stabilizing
control laws.

Sussmann [3] gives an example of a system in R² which is controllable
in a strong sense and yet fails to have a continuous feedback control
law yielding global asymptotic stability. His example involves both
bounds on the controls and nonlocal considerations; ours involves
neither.

2. Control Systems

We intend to work locally in this paper, but even so it is perhaps
worthwhile to make a few remarks about a global formulation. A more
detailed and systematic account can be found in [4], but in any case
the reader familiar with control theory can go directly to Section 3.

Let X be a differentiable manifold and let π : E → X be a vector
bundle over X. Let TX denote the tangent bundle of X and let π*TX
denote the pullback of TX over E. A section of π*TX is then an
assignment of a velocity vector in TX for each point in E. If we
choose a local trivialization of E and pick coordinates then a section
γ ∈ Γ(E, π*TX) is equivalent (in an obvious notation) to

    ẋ = f(x,u)

Such a γ is called a control system.


A section α ∈ Γ(X,E) is an assignment of a pair (x,u) corresponding
to each x and so locally it is given by a function α(x). We denote by
γ_α the section of Γ(E, π*TX) defined in coordinates by

    ẋ = f(x, u + α(x))

and say that γ_α is obtained from γ by the application of the feedback
control law α. Associated with every control system γ there is a vector
field which is obtained by setting u = 0. This vector field will be
denoted by γ₀ and is called the drift. We now state a precise problem.
Given γ ∈ Γ(E, π*TX) and x₀ ∈ X, under what circumstances does there
exist α ∈ Γ(X,E) such that x₀ is an equilibrium point of γ_α which is
locally asymptotically stable? We call this the local feedback
stabilization problem.

Now the fibers of E and the fibers of π*TX are both vector spaces
and so it makes sense to ask if the mapping of Γ(X,E) × Γ(E, π*TX) →
Γ(E, π*TX) defined by (α, γ) ↦ γ_α is affine with respect to α. If so
we call the system input-linear.


Observe that input-linear systems have a local description of the
form

    ẋ = f(x) + Σ uᵢgᵢ(x)

By a standard linear control system we understand a section
γ ∈ Γ(E, π*TX) with X = Rⁿ, E = Rᵐ × Rⁿ (the trivial bundle) with γ
being given by

    ẋ = Ax + Σᵢ₌₁ᵐ bᵢuᵢ

3. Nonexistence of Stabilizing Control Laws

There is one situation in which the stabilization problem has
been completely understood for some time. We formulate the results
here in such a way as to provide a guide to the later developments.

Remark: Consider the standard linear control system

    ẋ = Ax + Bu ,   x(t) ∈ Rⁿ                                    (*)

A necessary and sufficient condition for there to exist a control law
α which makes x = 0 an asymptotically stable equilibrium point is that
there exists a neighborhood N of x = 0 such that for each ξ ∈ N there is
a function of time u_ξ(·) defined on [0,∞) such that the solution of (*)
with initial condition x(0) = ξ and control u(·) = u_ξ(·) goes to zero as
t goes to infinity.

Proof: This condition is clearly a necessary one since solutions
starting near an asymptotically stable equilibrium point must approach
it as t goes to infinity. To see that this condition is sufficient
one observes that Range (B, AB, ..., Aⁿ⁻¹B) is an invariant subspace for
A and by a change of basis (*) can be written as

    [ẋ₁]   [A₁₁  A₁₂] [x₁]   [B₁]
    [ẋ₂] = [ 0   A₂₂] [x₂] + [ 0 ] u

with rank (B₁, A₁₁B₁, ..., A₁₁^{n₁−1}B₁) = dim x₁. Clearly the eigenvalues of
A₂₂ must have negative real parts if x₂ is to go to zero as t goes to
infinity. On the other hand, it is well known and easily proven that
if (B, AB, ..., Aⁿ⁻¹B) is of rank n then there exists an m by n matrix K
such that (A + BK) has its eigenvalues in the open left half-plane.
The remark then follows.


The rank condition just mentioned is necessary and sufficient for
ẋ = Ax + Bu to have a certain controllability property. If for any given
pair x₁ and x₂ and any given T > 0, there is a control u(·) defined on
[0,T] such that u(·) steers (*) from x₁ at t = 0 to x₂ at t = T we say
that the control system (*) is controllable. The rank condition is
necessary and sufficient for controllability in this sense. A short
summary of all this reads as follows. The null solution of (*) is
stabilizable if and only if all the modes associated with eigenvalues
with non-negative real parts are controllable.
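
As an illustrative aside, both the rank test and the construction of a
gain K placing the eigenvalues of A + BK in the open left half-plane can
be carried out numerically. The following is a minimal sketch, assuming
numpy and scipy are available and using an arbitrarily chosen pair (A,B)
for illustration.

    import numpy as np
    from numpy.linalg import matrix_rank
    from scipy.signal import place_poles

    # An arbitrary controllable pair (A, B) chosen for illustration.
    A = np.array([[0.0, 1.0], [1.0, 0.0]])   # one eigenvalue in the right half-plane
    B = np.array([[0.0], [1.0]])
    n = A.shape[0]

    # Rank of the controllability matrix (B, AB, ..., A^{n-1} B).
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    assert matrix_rank(C) == n                # the rank condition holds

    # place_poles returns F with eig(A - B F) at the requested locations,
    # so K = -F puts the eigenvalues of A + B K in the open left half-plane.
    F = place_poles(A, B, [-1.0, -2.0]).gain_matrix
    K = -F
    print(np.linalg.eigvals(A + B @ K))       # approximately -1 and -2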

It is, in view of this background, not completely unreasonable to
hope that for nonlinear systems such as

    ẋ = f(x) + Σ uᵢgᵢ(x)

something similar might happen. A specific question in this direction
is, "If every initial state in a neighborhood of x₀ can be steered to
x₀ by a control defined on [0,∞) does there necessarily exist a feedback
control law which makes x₀ asymptotically stable?" In this paper
we show that the answer is no provided that we want a control law with
some smoothness. In general we need something more than just a
controllability condition since, as we will see, the controls which
steer the trajectories to zero cannot always be patched together in a
smooth way.
We mention one more well known fact. If we have

    ẋ = f(x,u) ,   f(0,0) = 0

with f(·,·) continuously differentiable with respect to both arguments,
and if we define A = (∂f/∂x)₀ and B = (∂f/∂u)₀, then the control system

    ẋ = Ax + Bu

is called the linearized system at (0,0). If the linearized system
satisfies the condition rank (B, AB, ..., Aⁿ⁻¹B) = n then there exists a
linear control law u = Kx such that A + BK has its eigenvalues in the
open left half-plane. Moreover, if we recall that ẋ = f̄(x) with f̄(0) = 0
has 0 as an asymptotically stable equilibrium point provided the
eigenvalues of (∂f̄/∂x) have real parts which are negative, then we see
that feedback stabilization is possible for ẋ = f(x,u) provided the
linearized system is controllable. In view of the previous discussion
this can be stated still more precisely. There exists a stabilizing
control law for ẋ = f(x,u) with f(0,0) = 0 provided the unstable modes of
the linearized system are controllable; there is no stabilizing control
law if the linearized system has an unstable mode which is
uncontrollable. The negative result here depends on the well known
result of Liapunov asserting that an equilibrium point is unstable if
(∂f/∂x) at that point has any eigenvalue with a real part which is
positive.
From these remarks we see that insofar as local asymptotic
stability is concerned, the only difficult problems involve cases where
(∂f/∂x) has eigenvalues on the imaginary axis which correspond to
uncontrollable modes of the associated linearized system and all
other uncontrollable modes of the linearized system correspond to
asymptotically stable behavior. In the study of stability, the cases
where (∂f/∂x) has one or more eigenvalues with a real part which
vanishes at the equilibrium point are called critical cases. The
study of the critical cases is still far from complete. (See, e.g., the
remarks of Arnold in [5], page 59.) We are then, in this paper,
primarily concerned with understanding certain features of the critical
cases.

The following theorem gives a necessary condition for the existence
of a stabilizing control law which, under (i) and (ii), summarizes
our previous discussion and in (iii) introduces a new element which
is decisive as far as a large class of problems, including the second
example of the introduction, are concerned.

Theorem 1: Let ẋ = f(x,u) be given with f(x₀,0) = 0 and f(·,·)
continuously differentiable in a neighborhood of (x₀,0). A necessary
condition for the existence of a continuously differentiable control
law which makes (x₀,0) asymptotically stable is that:

(i) the linearized system should have no uncontrollable modes
    associated with eigenvalues whose real part is positive.

(ii) there exists a neighborhood N of (x₀,0) such that for each
    ξ ∈ N there exists a control u_ξ(·) defined on [0,∞) such
    that this control steers the solution of ẋ = f(x, u_ξ) from
    x = ξ at t = 0 to x = x₀ at t = ∞.

(iii) the mapping

    γ : A × Rᵐ → Rⁿ

defined by γ : (x,u) ↦ f(x,u) should be onto an open set
containing 0.

Proof: We prove the necessity of (iii), the necessity of (i) and (ii)
having been explained above. If x₀ is an equilibrium point of ẋ = a(x)
which is asymptotically stable we know from the work of Wilson [6] that
there exists a Liapunov function v such that v is positive for x ≠ x₀,
vanishes at x₀, is continuously differentiable, and has level sets
v⁻¹(c) which are homotopy spheres. The compactness of v⁻¹(c) implies
that there exist c and ε > 0 such that on v⁻¹(c), ‖∂v/∂x‖ < 1/ε and
⟨∂v/∂x, a(x)⟩ < −ε. This implies that if ‖ξ‖ is sufficiently small the
vector field associated with ẋ = a(x) + ξ points inward on v⁻¹(c).
By evaluating at time t = 1 the solution of ẋ = a(x) + ξ which passes
through x at t = 0, we get a continuous map of {x | v(x) ≤ c} into itself.
Applying the Lefschetz fixed point formula we see that this map has a
fixed point which must be an equilibrium point of ẋ = a(x) + ξ.
Alternatively, we could use a version of the Poincaré-Hopf Theorem
which applies to manifolds with boundary (see [7], page 41) to finish
off the proof. This, in turn, implies that we can solve a(x) = ξ
for all ξ sufficiently small. Now if

    a(x) = f(x, u(x))

is to have x₀ as an equilibrium point, and if x₀ is to be asymptotically
stable, it is clearly necessary that ξ = f(x,u) be solvable for ξ small.

Remark: If the control system is of the form

    ẋ = f(x) + Σ uᵢgᵢ(x) ,   x(t) ∈ N ⊂ Rⁿ

then condition (iii) implies that the stabilization problem cannot have
a solution if there is a smooth distribution D which contains f(·) and
g₁(·), ..., g_m(·) with dim D < n. One further special case: If we have

    ẋ = Σ uᵢgᵢ(x) ,   x(t) ∈ N ⊂ Rⁿ

with the vectors gᵢ(x) being linearly independent at x₀ then there
exists a solution to the stabilization problem if and only if m = n.
In this case we must have as many control parameters as we have
dimensions of X. Of course the matter is completely different if the
set {gᵢ(x)} drops dimension precisely at x₀. In this sense,
distributions with singularities are the only interesting kind.

Remark: There is no stabilizing control law for ẋ = u, ẏ = v, ż = xv − yu.
This system satisfies conditions (i) and (ii) of the theorem, but it
fails to satisfy condition (iii).
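
As an illustrative aside, the failure of (iii) can be seen directly: here
f(x,y,z,u,v) = (u, v, xv − yu), so whenever the first two components of an
image point are small the third is small as well, and points of the form
(0,0,ε) with ε ≠ 0 are never attained. A minimal numerical check of this,
assuming numpy is available and sampling over |x|, |y| ≤ 0.01:

    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([0.0, 0.0, 0.1])        # a point (0, 0, eps) near the origin

    # Sample the image of (x,y,z,u,v) |-> (u, v, xv - yu) over |x|, |y| <= 0.01.
    N = 200_000
    x, y = rng.uniform(-0.01, 0.01, (2, N))
    u, v = rng.uniform(-1.0, 1.0, (2, N))
    image = np.stack([u, v, x * v - y * u], axis=1)

    # Making u and v small forces xv - yu to be small as well, so the sampled
    # image never comes close to (0, 0, 0.1).
    print(np.linalg.norm(image - target, axis=1).min())   # stays near 0.1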

4. Existence of Stabilizing Control Laws

As mentioned above, in the study of asymptotic stability of the
null solution of ẋ = f(x); f(0) = 0 one singles out for special attention
the critical cases, i.e. those for which (∂f/∂x) has an eigenvalue with
a zero real part. Liapunov showed that in a noncritical situation
there always exists a quadratic function whose derivative is negative
definite in a neighborhood of zero. In fact, it may be chosen so
that the derivative has a leading term which is a negative definite
quadratic form. In such cases, be they stable or unstable, there
exists in a neighborhood of 0 a function v(x) such that ⟨∂v/∂x, f(x)⟩
is negative for x ≠ 0 and which has the further property that it remains
negative definite for some reasonable class of perturbations.
Consider the control system ẋ = f(x,u) with f(x₀,0) = 0. We will
say that this system has finite gain at x₀ if there exists a function
v(·) mapping a neighborhood of x₀ into R such that v(x₀) = 0, v(x) > 0
for x ≠ x₀, and for some k

    ⟨∂v/∂x, f(x,u)⟩ ≤ −θ(x) + k⟨u,u⟩

with θ(x) ≥ 0 and 0 only when x = x₀.

Remark: Note that if f is continuously differentiable with f(0,0),
f_x(0,0) and f_u(0,0) all zero then

    ẋ = Ax + Bu + f(x,u)

has finite gain at zero provided the eigenvalues of A have negative
real parts. (We can take v(x) = x'Qx with QA + A'Q = −I.) More
generally, if ẋ = f(x) has zero as an asymptotically stable equilibrium
point then ẋ = f(x) + ug(x) has finite gain at zero provided that there
exists a Liapunov function v(x) for ẋ = f(x) whose rate of decay
satisfies v̇ ≤ −Mv^p, M > 0, with ⟨∇v, g⟩/v^{p/2} bounded in a neighborhood
of 0.

To establish this last assertion we note that setting
2β = ⟨∇v, g⟩/v^{p/2} allows us to write

    ⟨∇v, f(x) + ug(x)⟩ − ku²  ≤  [v^{p/2}  u] [ −M   β ] [ v^{p/2} ]
                                              [  β  −k ] [    u    ]

and this quadratic form can be made negative definite by taking k
large enough.
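
As a numerical aside, the quadratic Liapunov function used in the first
part of the remark can be produced directly. The sketch below, assuming
numpy and scipy are available and using an arbitrary Hurwitz matrix A
chosen for illustration, solves QA + A'Q = −I and verifies the result.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # An arbitrary matrix with eigenvalues in the open left half-plane.
    A = np.array([[-1.0, 1.0], [0.0, -2.0]])

    # solve_continuous_lyapunov(a, q) solves a X + X a' = q, so passing A'
    # and -I returns the Q of the remark:  A'Q + QA = -I.
    Q = solve_continuous_lyapunov(A.T, -np.eye(2))

    # With v(x) = x'Qx the derivative along x' = Ax + Bu + f(x,u) is
    # -<x,x> plus input and higher order terms, as used in the remark.
    print(np.allclose(A.T @ Q + Q @ A, -np.eye(2)))   # True
    print(np.all(np.linalg.eigvalsh(Q) > 0))          # Q is positive definite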

We now use this definition to study stability in some critical


cases.

Lemma: Consider the coupled differential equations

    ẋ = Ax + By + g(x,y)
    ẏ = h(x,y)

with g(·,·), h(·,·), g_x(·,·) and g_y(·,·) all continuous in a neighborhood
of (0,0). Suppose that the eigenvalues of A have negative real
parts and suppose that g, h, g_x and g_y all vanish at (0,0). Let ψ be
the continuous solution of Aψ(y) + By + g(ψ(y), y) = 0 which vanishes at
y = 0. Then this pair of equations has (0,0) as an asymptotically
stable critical point provided ẏ = h(x + ψ(y), y) has finite gain at y = 0.

Proof: Change variables according to x̃ = x − ψ(y), ỹ = y. Then

    d/dt x̃ = Ax̃ + Aψ(ỹ) + Bỹ + g(x̃ + ψ(ỹ), ỹ)
            = Ax̃ + g(x̃ + ψ(ỹ), ỹ) − g(ψ(ỹ), ỹ)
            = Ax̃ + g̃(x̃, ỹ)

where g̃ not only vanishes together with its first derivatives at x̃ = 0,
ỹ = 0 but, in fact, vanishes when x̃ = 0. We have, then

    d/dt x̃ = Ax̃ + g̃(x̃, ỹ)

    d/dt ỹ = h(x̃ + ψ(ỹ), ỹ)

Let η be the Liapunov function which establishes the finite gain
property for the ỹ equation. Using the Liapunov function

    v = αx̃'Qx̃ + η(ỹ)

where QA + A'Q = −I, we compute

    v̇ = −α⟨x̃, x̃⟩ + 2α⟨Qx̃, g̃(x̃, ỹ)⟩ + ⟨∇η, h(x̃ + ψ(ỹ), ỹ)⟩

Since g̃(x̃, ỹ) is second order and vanishes when x̃ does, α⟨x̃, x̃⟩
dominates the second term and, by virtue of the finite gain hypothesis,
v̇ is negative definite for α sufficiently large. Asymptotic stability
then follows from standard Liapunov arguments.
Notice that the effect of this lemma is to reduce the study of
the stability problem to the study of a lower dimensional problem (the
ỹ equation) by elimination of the "uninteresting" noncritical part.

This can also be interpreted in terms of time scales. The solution of
the x̃ part goes quickly to zero whereas the ỹ part represents motion
which occurs much more slowly in the manifold defined by x̃ = 0.
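
As an illustrative aside, the time-scale picture can be seen numerically
on a toy instance of the lemma (the particular A, B, g and h below are
chosen here for illustration and are not taken from the text): with
A = −1, B = 0, g(x,y) = y² and h(x,y) = −xy one has ψ(y) = y², and the
slow equation ẏ = −y³ − x̃y has finite gain at y = 0 with η(y) = y².
A minimal simulation sketch, assuming numpy and scipy are available:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy instance of the lemma (chosen for illustration):
    #   x' = -x + y**2     (A = -1, B = 0, g(x,y) = y**2, so psi(y) = y**2)
    #   y' = -x*y          (h(x,y) = -x*y; slow equation y' = -y**3 on x = psi(y))
    def rhs(t, s):
        x, y = s
        return [-x + y**2, -x * y]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 0.5], dense_output=True)

    for t in (1.0, 10.0, 100.0, 200.0):
        x, y = sol.sol(t)
        # x approaches the slow manifold x = y**2 on the fast time scale,
        # while y decays only slowly (roughly like 1/sqrt(t)).
        print(f"t = {t:6.1f}   x - y^2 = {x - y*y:+.2e}   y = {y:+.3e}")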

Remark: If there exists a control law which makes x = 0 an
asymptotically stable equilibrium point for

    ẋ = Ax + Bu

then there exists a control law which makes x₀ an asymptotically
stable critical point, provided Ax₀ + Bu₀ = 0 can be solved for u₀. In
fact, if u = Kx makes x = 0 an asymptotically stable equilibrium point
then u = K(x − x₀) + u₀ makes x₀ an asymptotically stable equilibrium
point. Thus if x = 0 can be made asymptotically stable then there is a
whole subspace U = {x | Ax ∈ Range B} of points which can be made
asymptotically stable. Incidentally, in view of part (iii) of Theorem 1
we see that we can apply Sard's theorem to conclude something similar
for ẋ = f(x,u). Namely, if there exists a feedback control law which
makes x = 0 asymptotically stable then for some neighborhood N₁ of ξ = 0
and some neighborhood N of x = 0, for all ξ ∈ N₁ except a possible set of
measure zero, {(x,u) | f(x,u) = ξ} ∩ N defines a manifold in (x,u) space.

The idea behind the following theorem is that it makes sense to
divide up the question of finding a stabilizing control law into two
parts, one being the choice of a slow mode behavior which is
asymptotically stable on a submanifold and the other being the choice
of a linear control law to drive the system to the slow mode regime.

Theorem 2: Let f and g be continuously differentiable in a neighborhood
of (0,0,0), and suppose they vanish at (0,0,0) together with
their first derivatives. A sufficient condition for

    ẋ = Ax + Fy + Bu + f(x,y,u)
    ẏ = Cy + g(x,y,u)

to be stabilizable at (0,0,0) is that there exist a pair (K, u₀(·)) such
that A + BK has its eigenvalues in the open left half-plane and, for ψ
the continuous solution of

    (A + BK)ψ(y) + Fy + Bu₀(y) + f(ψ(y), y, u₀(y) + Kψ(y)) = 0

which vanishes at 0,

    ẏ = Cy + g(x + ψ(y), y, u₀(y) + Kψ(y))

has finite gain at (0,0) with x regarded as the input.

Proof: This is an immediate application of the lemma with u being
taken to be Kx + u₀(y).

Remark: To apply this to the first example of the introduction,
ẋ = u; ẏ = v; ż = xy, we let u = −x + z², v = −y − z. The slow mode
equation is then ż = −z³ + yz² − xz + xy and η(z) = z² shows that this
equation has finite gain at 0.
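
As a numerical aside, the closed loop obtained from this choice can be
checked by direct simulation. The sketch below, assuming numpy and scipy
are available, integrates ẋ = u, ẏ = v, ż = xy with the feedback of the
remark from a small initial condition.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Closed loop for example (a) with u = -x + z**2, v = -y - z.
    def rhs(t, s):
        x, y, z = s
        return [-x + z**2, -y - z, x * y]

    sol = solve_ivp(rhs, (0.0, 400.0), [0.2, -0.1, 0.3], dense_output=True)

    for t in (1.0, 10.0, 100.0, 400.0):
        x, y, z = sol.sol(t)
        # (x, y) approach the slow manifold (z**2, -z) on the fast time scale;
        # z itself decays only algebraically, since z' is roughly -z**3 there.
        print(f"t = {t:6.1f}   |(x,y,z)| = {np.linalg.norm([x, y, z]):.3e}")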

5. Acknowledgements

It is a pleasure to thank Chris Byrnes, John Baillieul and Peter
Crouch for helpful discussions on this material.

This work was supported in part by the U.S. Army Research Office
under Grant No. DAAG29-79-C-0147, Air Force Grant No. AFOSR-81-7401,
the Office of Naval Research under JSEP Contract No. N00014-75-C-0648,
and the National Science Foundation under Grant No. ECS-81-21428.

6. References

"The Geometry of Homogeneous Polynomial Dynanical


tf] J. Ball1ieu1,
Systems," Nonlinear Analvsis, Theory Methods and Applications,
Vo1. 4 (1980) pp. 879-900.
"Spacecraft AtEltude Control and Stabilization"
12) P.E. Crouch,
(suboitted for publication) .

t3] H.J. Sussmann, "Subanalytic Sets and Feedback Control, J. of Diff.


Eqs. Vol. 31, No. 1, Jan. L979, pp. 3I-52.

t4] R.W. Brockett, A Geonetrical Fraurework for Nonlinear Control and


Estimation, N o t e s f o r C B M SC o n f e r e n e e ( t o a p p e a r ) .

15] Matheuratical Developments Arising frorn Hilbertrs Probleurs (Felix


Browder, ed.), A.elil4n_Xggh.._Sog., Providence, RI , L976.

t6] F.W. Wilson, Jr.n "The Structure of the Level Surfaces of a


Lyapunov Function," J. of Diff. Eqs. Vol. 4 (1967) 323-329.

l7l J.W. Milnor, Topology, A Differentiable Viewpoint, University


Press of Virginia, Charlottesville, VA, 1965.
