Interdisciplinary Applied Mathematics
Volume 10
Editors
J.E. Marsden L. Sirovich S. Wiggins
The purpose of this series is to meet the current and future needs for the interaction between various science and technology areas on the one hand and mathematics on the other. This is done, firstly, by encouraging the ways that mathematics may be applied in traditional areas, as well as pointing toward new and innovative areas of application; secondly, by encouraging other scientific disciplines to engage in a dialog with mathematicians, outlining their problems to both access new methods and suggest innovative developments within mathematics itself.
The series will consist of monographs and high-level texts from researchers
working on the interplay between mathematics and other fields of science and
technology.
Interdisciplinary Applied Mathematics
Nonlinear Systems
Analysis, Stability, and Control
Springer
Shankar Sastry
Department of Electrical Engineering
and Computer Science
University of California, Berkeley
Berkeley, CA 94720-1770
USA
Editors
J.E. Marsden
Control and Dynamical Systems
Mail Code 107-81
California Institute of Technology
Pasadena, CA 91125
USA

L. Sirovich
Division of Applied Mathematics
Brown University
Providence, RI 02912
USA
S. Wiggins
Control and Dynamical Systems
Mail Code 107-81
California Institute of Technology
Pasadena, CA 91125
USA
Mathematics Subject Classification (1991): 93-01, 58F13, 34-01, 34Cxx, 34H05, 93C73, 93C95
There has been a great deal of excitement in the last ten years over the emergence of new mathematical techniques for the analysis and control of nonlinear systems: witness the emergence of a set of simplified tools for the analysis of bifurcations, chaos, and other complicated dynamical behavior, and the development of a comprehensive theory of geometric nonlinear control. Coupled with this set of analytic advances has been the vast increase in computational power available both for the simulation and visualization of nonlinear systems and for the real-time implementation of sophisticated nonlinear control laws. Thus, technological advances have bolstered the impact of analytic advances and produced a tremendous variety of new problems and applications that are nonlinear in an essential way. Nonlinear control laws have been implemented for sophisticated flight control systems on board helicopters and vertical take-off and landing aircraft; adaptive nonlinear control laws have been implemented for robot manipulators operating either singly or in cooperation on a multi-fingered robot hand; adaptive control laws have been implemented for jet engines and automotive fuel injection systems, as well as for automated highway systems and air traffic management systems, to mention a few examples. Bifurcation theory has been used to explain and understand the onset of flutter in the dynamics of aircraft wing structures, the onset of oscillations in nonlinear circuits, surge and stall in aircraft engines, and voltage collapse in power transmission networks. Chaos theory has been used to predict the onset of noise in Josephson junction circuits and thresholding phenomena in phase-locked loops. More recently, analog computation on nonlinear circuits reminiscent of some simple models of neural networks holds out the possibility of rethinking parallel computation, adaptation, and learning.
I will assume these prerequisites for the book as well. The analysis prerequisite is easily met by Chapters 1-7 of Marsden's Elementary Classical Analysis (W. H. Freeman, 1974) or similar books. The linear systems prerequisite is met by Callier and Desoer's Linear System Theory (Springer-Verlag, 1991), Rugh's Linear System Theory (Prentice Hall, 1993), Chen's Linear System Theory and Design (Holt, Rinehart and Winston, 1984), Kailath's Linear Systems (Prentice Hall, 1980), or the recent Linear Systems by Antsaklis and Michel (McGraw-Hill, 1998).
I have never succeeded in covering all of the material in this book in one semester (45 classroom hours), but here are some packages that I have covered, along with a description of the style of the course.
Alternatively, it is possible to use all the material in this book for a two-semester course (90 classroom hours) on nonlinear systems as follows:
California, Irvine, Claire Tomlin at Berkeley and Stanford, and Shahram Shahruz and George Pappas at Berkeley. I am grateful to them for their painstaking comments. I would also like to thank Claire Tomlin, Nanayaa Twum Danso, George Pappas, Yi Ma, John Koo, Claudio Pinello, Jin Kim, Jana Kosecka, and Joao Hespanha for their help with proofreading the manuscript. I thank Christine Colbert for her superb drafting of the figures, and thank Achi Dosanjh of Springer-Verlag for her friendly management of the writing and her patience with getting the manuscript reviewed.
Colleagues who have worked with me and inspired me to learn about new areas and new directions abound. Especially fresh in my mind are the set of lectures on converse Lyapunov theorems given by M. Vidyasagar in Spring 1979 and the short courses on nonlinear control taught by Arthur Krener in Fall 1984 and by Alberto Isidori in Fall 1989 at Berkeley that persuaded me of the richness of nonlinear control. I have fond memories of joint research projects in power systems and nonlinear circuits with Aristotle Arapostathis, Andre Tits, Fathi Salam, Eyad Abed, Felix Wu, John Wyatt, Omar Hijab, Alan Willsky, and George Verghese. I owe a debt of gratitude to Dorothee Normand-Cyrot, Arthur Krener, Alberto Isidori, Jessy Grizzle, L. Robert Hunt, and Marica di Benedetto for sharing their passion in nonlinear control with me. Richard Montgomery and Jerry Marsden painstakingly taught me the amazing subtleties of nonholonomic mechanics. I thank Georges Giralt, Jean-Paul Laumond, Ole Sordalen, John Morten Godhavn, Andrea Balluchi, and Antonio Bicchi for their wonderful insights about nonholonomic motion planning. Robert Hermann, Clyde Martin, Hector Sussmann, Christopher Byrnes, and the early Warwick lecture notes of Peter Crouch played a big role in shaping my interests in algebraic and geometric aspects of nonlinear control theory. P. S. Krishnaprasad, John Baillieul, Mark Spong, N. Harris McClamroch, Gerardo Lafferriere, T. J. Tarn, and Dan Koditschek were co-conspirators in unlocking the mysteries of nonlinear problems in robotics. Stephen Morse, Brian Anderson, Karl Astrom, and Bob Narendra played a considerable role in my understanding of adaptive control.
The research presented here would not have been possible without the very consistent support, both technical and financial, of George Meyer of NASA Ames, who has had faith in the research operation at Berkeley and has painstakingly explained to me and the students here over the years the subtleties of nonlinear control and flight control. Jagdish Chandra, and then Linda Bushnell, at the Army Research Office have supported my work with both critical technical and financial inputs over the years, for which I am most grateful. Finally, Howard Moraff, at the National Science Foundation, believed in nonholonomic motion planning when most people thought that non-holonomy was a misspelled word and supported our research into parking cars! The list of grants that supported our research and the writing of this book is: NASA under grants NAG 2-243 (1983-1995) and NAG 2-1039 (1995 onwards), ARO under grants DAAL-88-K0106 (1988-1991), DAAL-91-G0171 (1991-1994), DAAH04-95-1-0588 (1995-1998), and DAAH04-96-1-0341 (1996 onwards), and NSF under grant IRI-9014490 (1990-1995).
Finally, on a personal note, I would like to thank my mother and late father for
the courage of their convictions, selfless devotion, and commitment to excellence.
Shankar Sastry
Berkeley, Califomia
March 1999
Contents
Preface vii
Acknowledgments xi
Standard Notation xxiii
3 Mathematical Background 76
3.1 Groups and Fields 76
3.2 Vector Spaces, Algebras, Norms, and Induced Norms 77
3.3 Contraction Mapping Theorems 82
3.3.1 Incremental Small Gain Theorem 84
3.4 Existence and Uniqueness Theorems for Ordinary Differential Equations 86
3.4.1 Dependence on Initial Conditions on Infinite Time Intervals 90
3.4.2 Circuit Simulation by Waveform Relaxation 90
3.5 Differential Equations with Discontinuities 93
3.6 Carleman Linearization 99
3.7 Degree Theory 101
3.8 Degree Theory and Solutions of Resistive Networks 105
3.9 Basics of Differential Topology 107
3.9.1 Smooth Manifolds and Smooth Maps 107
3.9.2 Tangent Spaces and Derivatives 110
3.9.3 Regular Values 113
3.9.4 Manifolds with Boundary 115
3.10 Summary 118
3.11 Exercises 119
References 645
Index 661
Standard Notation
The following notation is standard and is used throughout the text. Other nonstandard notation is defined when introduced in the text and is referenced in the index. A word about the numbering scheme: not all equations are numbered, but those that are frequently referenced are. Theorems, Claims, Propositions, Corollaries, Lemmas, Definitions, and Examples are numbered consecutively in the order in which they appear, and they are all numbered. Their text is presented in an emphasized font. If the theorems, claims, propositions, etc., are specifically noteworthy, they are named in bold font before the statement. Exercises are at the end of each chapter and are all numbered consecutively, and if especially noteworthy are named like the theorems, claims, propositions, etc. Proofs in the text end with the symbol □ to demarcate the proof from the following text.
Sets
a ∈ A        a is an element of the set A
A ⊂ B        set A is contained in set B
A ∪ B        union of set A with set B
A ∩ B        intersection of set A with set B
             such that
p ⇒ q        p implies q
p ⇐ q        q implies p
p ⇔ q        p is equivalent to q
M°           interior of a set M
M̄            closure of M
]a, b[       open subset of the real line
[a, b]       closed subset of the real line
[a, b[       subset of the real line closed at a and open at b
a → b        a tends to b
a ↓ b        a decreases towards b
a ↑ b        a increases towards b
⊕            direct sum of subspaces
Algebra
ℕ            set of non-negative integers, namely {0, 1, 2, ...}
ℝ            field of real numbers
ℤ            ring of integers, namely {..., −1, 0, 1, ...}
j            square root of −1
ℂ            field of complex numbers
ℝ₊ (ℝ₋)      set of non-negative (non-positive) reals
ℂ₊ (ℂ₋)      set of complex numbers in the right (left) half plane, including the imaginary axis
jω axis      set of purely imaginary complex numbers
ℂ₋°          {s ∈ ℂ : Re s < 0} = interior of ℂ₋
ℂ₊°          {s ∈ ℂ : Re s > 0} = interior of ℂ₊
Aⁿ           set of n-tuples of elements belonging to the set A (e.g., ℝⁿ, ℝ[s]ⁿ)
A^{m×n}      set of m × n arrays with entries in A
σ(A)         set of eigenvalues (spectrum) of a square matrix A
(x_k)_{k∈K}  family of elements with K an index set
F[x]         ring of polynomials in one variable x with coefficients in a field F
F(x)         field of rational functions in one variable x with coefficients in a field F
Analysis
f : A → B    f maps the domain A into the codomain B
f(A)         range of f = {y ∈ B : y = f(x) for some x ∈ A}
Å            interior of A
Ā            closure of A
∂A           boundary of A
• Digital circuits for binary logic have at least two stable states,
• Chemical reaction kinetics allow for multiple equilibria,
2 1. Linear vs. Nonlinear
• Power flow equations modeling the flow of real and reactive power in a transmission network have multiple steady state operating points,
• A buckled beam has two stable buckled states,
• In population ecology, there are multiple equilibrium populations of
competing species.
ẋ = Ax (1.1)
• Differential equations for modeling the heartbeat and nerve impulse generation are of the so-called van der Pol kind, which we will study later. The cyclic nature of these phenomena is obtained from a single stable limit cycle. Other related examples model periodic muscular contractions in the oesophagus and intestines.
• Digital clock circuits or astable multivibrators exhibit cyclic variation between the logical 1 and 0 states and may also be modeled as a degenerate limit of the van der Pol equation mentioned above (this is investigated in detail in Chapter 7).
• Josephson junction circuits can be shown to have a limit cycle (of very high frequency) when the biasing current is greater than a certain critical value, and this oscillation marks the transition away from the superconducting to the conducting region (this is studied in detail in Chapter 2).
By way of contrast, consider once again the linear system of (1.1). If A ∈ ℝ^{n×n} has eigenvalues on the imaginary axis (referred to as the jω axis), then the linear system admits a continuum of periodic solutions. However, small perturbations of the entries of A will cause the eigenvalues to be displaced off the jω axis and destroy the existence of periodic solutions. Thus, a small parameter variation in linear system models will destroy the continuum of periodic solutions and instead produce either an unstable or a stable equilibrium point. Thus, linear dynamical systems with periodic solutions are nonrobust models.
As in the case of multiple equilibrium points, there exist systems with multiple periodic solutions. These can be finite in number, in contrast to the infinite number for linear systems with jω-axis eigenvalues, and consequently cannot be described by a linear differential equation model.
3. Bifurcations
There are many examples of systems whose qualitative features, such as the number of equilibrium points, the number of limit cycles, and the stability of these features, change with parametric variation in the model. For example:
a. A rod under axial loading has one unbuckled state as its equilibrium state till the loading reaches a critical value, at which point it acquires two stable buckled states and an unstable unbuckled state.
b. The compressor of a jet engine changes from steady state operation to a mode of operation where there is rotating stall when the angle of attack of the blades reaches a critical value.
c. The Josephson junction changes from the superconducting state, or zero resistance state (when there is a single stable equilibrium), to the conducting state (when there is a periodic orbit and an equilibrium point or just a periodic orbit) as the amount of current forcing goes through a critical value.
d. Hysteretic behavior in the current values of certain one-port resistors, such as tunnel diodes, occurs when the voltage across them is raised or lowered.
e. As the angle of attack of an aircraft changes, a constant flight path angle trajectory becomes unstable and is replaced by a cyclic mode with constant roll rate.
Linear systems with parameters have behavior that is considerably less subtle. As parameters of the system change, the system can change from stable to unstable. The number of equilibria can go through infinity if any of the eigenvalues go through the origin. However, none of the changes described above can be captured by a parameterized linear model.
4. Synchronization and Frequency Entrainment
Oscillators, when coupled weakly, pull into frequency and phase synchronism, for example in
a. heart muscle cells, and muscular cells causing peristalsis in the intestines and the oesophagus;
b. phase-locked loops for tracking.
In addition, as parameters change, the loss of synchronism or frequency entrainment is characterized by the onset of complicated behavior, for example, arrhythmias (for the heart muscle) or loop skipping (for the phase-locked loop). There is no linear model for this phenomenon.
5. Complex Dynamical Behavior
Let us contemplate the dynamical behavior of the linear system of (1.1). It is extremely simple: the responses are sums of exponentials, with exponents given by the eigenvalues of A.
The function f models the fact that x_{t+1} > x_t when the population is small, so that the population increases (abundance of food and living space). Also, x_{t+1} < x_t when the population is large (competition for food and increased likelihood of disease). Thus f(x) > x for x small and f(x) < x for x large. Figure 1.2 shows how one solves the equation (1.3) graphically: one starts with the initial state x₀ on the horizontal axis and reads off x₁ = f(x₀) on the vertical axis. The horizontal line intersecting the 45° line corresponds to the iterative process of replacing x₀ on the horizontal axis by x₁. The iteration now continues by progressively reading off x_{t+1} on the vertical axis and using the intersection of the horizontal line through x_{t+1} with the 45° line to reflect x_{t+1} onto the horizontal axis. The consequent appearance of the evolution of the population state variable is that of a cobweb. This term is, in fact, the terminology used by economists for simple one-dimensional models of macroeconomic GNP growth and certain microeconomic phenomena [315].
We will study the effect of raising the height of the hump without increasing its support (i.e., the range of values of x for which f(x) ≠ 0) on the dynamics of the blowfly population (modeling the effect of an increase in the food supply). For definiteness, this corresponds to increasing the parameter h in the function of (1.3).
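To make the cobweb construction concrete, here is a minimal Python sketch of the iteration it encodes, assuming the one-hump map of (1.3) has the standard logistic form f(x) = h·x·(1 − x); the function names and parameter values are illustrative:

```python
def logistic(x, h):
    """One-hump map in the assumed form of (1.3): the height of the hump scales with h."""
    return h * x * (1.0 - x)

def cobweb(x0, h, steps):
    """Iterate x_{t+1} = f(x_t) and return the trajectory.

    Graphically this is the cobweb: read x_{t+1} = f(x_t) off the curve, then
    reflect it back to the horizontal axis via the 45-degree line.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], h))
    return xs

# For h = 2.8 the iterates settle on the nonzero equilibrium 1 - 1/h.
traj = cobweb(0.1, 2.8, 200)
```

For h = 2.8 the trajectory spirals into the fixed point x₀* = 1 − 1/h ≈ 0.643, exactly the point the graphical cobweb converges to.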
equilibrium is unstable. The parameter value at which the zero population becomes unstable is h₀ = 1. It should be kept in mind that the stable population value x₀* is a function of h, given by

x₀* = 1 − 1/h.

Thus, at h = 2 the equilibrium point moves from the left side of the hump to the right side.
• The case 3 < h ≤ 1 + √6 = 3.449.
Increasing the height of the hump even further, a little graphical experimentation shows that when the slope of the function f near the intersection point becomes sufficiently negative, more precisely, less than or equal to −1, a limit cycle of period 2 shows up, as in Figure 1.4. Additionally, Figure 1.4 shows the cobweb at h = 3.25, demonstrating the onset of the period 2 limit cycle.
The value h₁ is called a period-doubling bifurcation point. A period 2 limit cycle is equivalent to a closed square in the cobweb in Figure 1.4, involving the population alternating between two values x₁* and x₂*.
Further, the period 2 limit cycle is stable; that is, all nonzero initial populations tend to an asymptotic pattern of alternating between the two values x₁* and x₂*.
Another way of understanding these results is by examining the dynamics of the two-step evolution, namely,

x_{t+2} = f(f(x_t)) = f²(x_t). (1.4)
The form of the function f²(x) = f ∘ f(x), that is, f composed with f (not to be confused with the square of f), is a two-humped curve when f is steep enough, as shown in Figure 1.5. In fact, f² acquires its double-hump character precisely when the value of the parameter h = 3.
Note that the 45° line intersects the two-humped curve of Figure 1.5 in three points for h > 3. The middle intersection corresponds to the period 1 solution, which is now unstable in the sense that all populations not exactly equal to x₀* tend away from it (as may be verified graphically) and towards the period 2 solution (see also Problem 1.4). Also, note that both the points x₁*, x₂* are equilibria for the system of equation (1.4). Thus, the interpretation of the dynamics of the system of (1.4) is one of strobing or sampling the dynamics of the system of (1.3), and more visually the portrait of Figure 1.4, every 2 time steps.
• The case 3.449 < h ≤ 3.570.
Extrapolating from these observations, one may conjecture that as the height of the original hump is increased, the period 2 limit cycle becomes unstable and is replaced by a stable period 4 limit cycle involving four points. Associated with this response is the system corresponding to f⁴, which would be four-humped for this case. Also, the 45° line would intersect this curve in seven points, corresponding respectively to one unstable period 1 point, two unstable period 2 points, and four stable period 4 points (forming a period 4 limit cycle). This intuition is indeed correct, and a period 4 limit cycle appears at h₂ = 3.449, but its formal verification needs systematic calculation and is a considerable undertaking. Thus, h₂ = 3.449 is the second period doubling bifurcation point.
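The conjectured period-doubling sequence is easy to probe numerically. The sketch below (again assuming the logistic form f(x) = h·x·(1 − x); the transient length and tolerance are ad hoc choices) discards a long transient and then searches for the smallest period that repeats the orbit:

```python
def f(x, h):
    """Assumed logistic form of the one-hump map (1.3)."""
    return h * x * (1.0 - x)

def asymptotic_period(h, x0=0.3, transient=4000, max_period=64, tol=1e-6):
    """Iterate past the transient, then return the smallest p with x_{t+p} == x_t (within tol)."""
    x = x0
    for _ in range(transient):
        x = f(x, h)
    orbit = [x]
    for _ in range(2 * max_period):
        orbit.append(f(orbit[-1], h))
    for p in range(1, max_period + 1):
        if all(abs(orbit[t + p] - orbit[t]) < tol for t in range(max_period)):
            return p
    return None

# Period 1 below h = 3, period 2 just above, period 4 above h = 3.449.
periods = [asymptotic_period(h) for h in (2.8, 3.25, 3.5)]
```

The three sample heights straddle the first two period-doubling points quoted in the text, so the detected periods step through 1, 2, 4.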
If this program is carried forward even further, points of period 2ᵏ start appearing increasingly more frequently in h; for example, eight period 8 = 2³ points appear at h₃ = 3.544, sixteen period 16 = 2⁴ points appear at h₄ = 3.564, and so on. In fact, Feigenbaum [96] has shown that the ratio of spacings between successive bifurcation values converges, and the limit

lim_{k→∞} (h_k − h_{k−1}) / (h_{k+1} − h_k) = 4.6692...

holds not only for the specific logistic map that we have considered, but also for other one-parameter maps that are "close" to it.
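Using only the bifurcation values quoted above (h₁ = 3, h₂ = 3.449, h₃ = 3.544, h₄ = 3.564), one can already see the ratios of successive intervals approaching the Feigenbaum constant:

```python
# Period-doubling thresholds quoted in the text (h1 .. h4).
h = [3.0, 3.449, 3.544, 3.564]

# Ratio of successive bifurcation intervals; Feigenbaum's limit is 4.6692...
ratios = [(h[k] - h[k - 1]) / (h[k + 1] - h[k]) for k in range(1, len(h) - 1)]
```

Both ratios come out near 4.7, consistent with convergence toward 4.6692... as k grows.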
• The case 3.570 < h < 3.839.
In this range of the parameter h, the dynamics of the logistic map are quite complicated. For some initial conditions, the iterates are aperiodic and the trajectories appear to wander. In fact, the logistic map has been used as a random number generator in this regime (in fact, in the next regime as well). There are also all the periodic points of period 2ⁿ as well as some other periodic points.
• The case 3.839 ≤ h ≤ 4.
At h = 3.839, there is for the first time a periodic point of period 3, i.e., a fixed point of f³ = f ∘ f ∘ f (see Problem 1.5). The cobweb diagram showing the period 3 point is shown in Figure 1.6. In fact, at this value of h, there are periodic points of arbitrary period as well as some aperiodic points. The discovery of this value was the title of a landmark paper by Li and Yorke, "Period Three Implies Chaos" [182]. The period 3 orbit is the only orbit which is stable to small perturbations and is the only one that is easy to obtain from numerical experiments. As the value of h is increased, there is a period doubling bifurcation of the period 3 orbit to a stable period 6 orbit. This, in turn, changes by period doubling to a stable period 12 orbit, and so on; remarkably, the ratio of the parameter values for the period doubling bifurcations is once again the Feigenbaum number: 4.6692.... As it turns out, the result of Li and Yorke was a special case of an earlier result of Sharkovskii, which we will describe shortly.
FIGURE 1.7. Stable and unstable equilibrium points as a function of the height of the hump
A plot of the sequence of bifurcations of the logistic map showing the period-doubling bifurcations is given in Figure 1.7. In terms of population ecology, it is not surprising that wild populations seem to lie well within the nonchaotic region; chaotic population variations do not seem conducive to survival. It is, however, possible to produce conditions for chaotic population fluctuation, as has been done by May and Oster in a laboratory using blowflies.
It is of obvious interest to understand which qualitative features of this incredibly delicate and complex behavior are preserved in one-hump maps more general than the logistic map. Surprisingly, most of the qualitative characteristics are preserved, and the sequence of bifurcations described above is referred to as the period doubling route to chaos. In an interesting generalization of the analysis of the logistic map, which actually preceded the work of Li and Yorke, Sharkovskii ordered the integers as

3 ▷ 5 ▷ 7 ▷ ··· ▷ 2·3 ▷ 2·5 ▷ 2·7 ▷ ··· ▷ 2²·3 ▷ 2²·5 ▷ ··· ▷ 2ⁿ ▷ 2ⁿ⁻¹ ▷ ··· ▷ 2² ▷ 2 ▷ 1,

that is, the odd integers except 1, followed by twice the odd integers except 1, followed by 2² times the odd integers except 1, followed by 2³ times the odd integers except 1, and so on. The tail end of the ordering is made up of the decreasing powers of 2. He then showed that for an arbitrary system of the form of (1.2), if the discrete time system has a point of a certain period, it has points of all periods lower than that one in the Sharkovskii ordering. Thus, in particular, if the system has a point of period 3, it has points of arbitrary period. In the context of our example of (1.3), this happens at h = 3.839. Thus, at this value of h or higher, the population x_t can vary in a very complicated manner with great sensitivity to initial conditions.
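The sensitivity to initial conditions can be demonstrated in a few lines; h = 3.9 is an illustrative value in the chaotic range (again assuming the logistic form f(x) = h·x·(1 − x)). Two populations started a billionth apart separate to order one:

```python
def f(x, h=3.9):
    """Assumed logistic form at an illustrative chaotic parameter value."""
    return h * x * (1.0 - x)

def diverge(x0, delta=1e-9, steps=80):
    """Track two trajectories started delta apart; return their largest separation."""
    a, b, widest = x0, x0 + delta, 0.0
    for _ in range(steps):
        a, b = f(a), f(b)
        widest = max(widest, abs(a - b))
    return widest

gap = diverge(0.2)
```

The initial offset of 10⁻⁹ is amplified roughly exponentially, so within a few dozen iterations the two populations bear no resemblance to each other.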
1. Generally speaking, one can obtain closed form solutions for linear systems. This is seldom the case for nonlinear systems. Consequently, there is a need for both qualitative insight and repeated simulation for quantitative verification. Qualitative understanding is important since, even if we had the most powerful computers at our disposal, exhaustive simulation would be prohibitively expensive. (The reader may wish to amuse herself by considering how much time would be needed to simulate an equation with about 10 states and a discretization grid of initial conditions of about 10 per unit along each axis for the initial conditions.)
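For the reader's amusement, the count suggested in the parenthetical works out as follows, under the assumed (and optimistic) figure of one millisecond per simulation run:

```python
states, points_per_axis = 10, 10
initial_conditions = points_per_axis ** states    # 10**10 grid points to sweep

seconds_per_run = 1e-3                            # assumed: 1 ms per simulation
total_days = initial_conditions * seconds_per_run / 86400.0   # about 116 days
```

Ten grid points along each of ten state axes already give 10¹⁰ initial conditions, i.e., months of computation even at a millisecond per run; this is why qualitative analysis cannot be replaced by exhaustive simulation.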
2. The analysis involves mathematical tools that are more advanced in concept
and involved in detail.
Consider, for example, nonlinear systems whose dynamics can be described by first order vector differential equations of the form

ẋ = f(x, u, t), x(0) = x₀. (1.5)

Here x ∈ ℝⁿ is the state vector, x₀ is the state of the system at time 0, and u ∈ ℝᵐ is the control or forcing function. If equation (1.5) were affine in x and u for each t, one would expect that corresponding to each input u(·):
• (1.5) has at least one solution belonging to some reasonable class of functions (existence of a solution).
• (1.5) has exactly one solution in the same class as above (uniqueness of the solution).
• (1.5) has exactly one solution for all time, i.e., on [0, ∞[ (extension of the solution up to t = ∞).
Note that the latter requirements in this list successively subsume the former: they get more demanding. In the full generality of (1.5), none of the above statements are true for nonlinear systems. We will now give examples of systems that violate the three preceding requirements.
1. Lack of existence of solutions.

ẋ = −sign(x), x(0) = 0. (1.6)

Here sign(x) is defined to be 1 if x ≥ 0, and sign(x) = −1 if x < 0. Consequently, there can be no continuously differentiable function satisfying (1.6). Nevertheless, the system is an acceptable model of the dynamics of a thermostat about a set point temperature, modeled by x = 0 (one can imagine the furnace turned on full blast when the temperature drops below the set point and the air conditioner turned on full strength when the temperature rises above the set point). Indeed, you may have noticed that some thermostats tend to chatter about their fixed points. Have you thought about how you might quench this chattering?
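A forward-Euler discretization makes the chattering visible: with the convention sign(0) = 1 from (1.6), the discretized state cannot rest at zero and instead oscillates in a band the width of the step size. This is an illustrative sketch, not a model of any particular thermostat:

```python
def sign(x):
    # Convention of (1.6): sign(x) = 1 for x >= 0, -1 for x < 0.
    return 1.0 if x >= 0.0 else -1.0

def euler_relay(x0, dt, steps):
    """Forward Euler on xdot = -sign(x); return the trajectory and the number of sign switches."""
    x, xs, switches = x0, [x0], 0
    for _ in range(steps):
        x_new = x + dt * (-sign(x))
        if sign(x_new) != sign(x):
            switches += 1
        x, xs = x_new, xs + [x_new]
    return xs, switches

# Descend from x = 1; once near zero, the state chatters every step or two.
xs, switches = euler_relay(1.0, 0.01, 300)
```

One common way to quench the chattering, not pursued here, is to add hysteresis: switch the relay only when |x| exceeds a small threshold ε.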
2. Lack of uniqueness of solutions.
Consider the differential equation

ẋ = 3x^{2/3}, x(0) = 0. (1.7)
x_a(t) = 0 for t ≤ a, and x_a(t) = (t − a)³ for t > a,

satisfies the differential equation and initial condition for arbitrary values of a ≥ 0.
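The whole family of solutions can be checked directly: on each branch, the exact derivative of x_a matches 3x^{2/3}. The sampling grid and choices of a below are arbitrary:

```python
def rhs(x):
    # Right-hand side of (1.7); x >= 0 along the solutions checked here.
    return 3.0 * x ** (2.0 / 3.0)

def x_family(t, a):
    """The family of solutions of (1.7): identically zero until time a, then cubic."""
    return 0.0 if t <= a else (t - a) ** 3

def residual(a, ts):
    """Largest mismatch between dx/dt and 3 x^(2/3) along x_a (exact derivatives used)."""
    worst = 0.0
    for t in ts:
        dxdt = 0.0 if t <= a else 3.0 * (t - a) ** 2
        worst = max(worst, abs(dxdt - rhs(x_family(t, a))))
    return worst

ts = [k * 0.1 for k in range(51)]            # sample times on [0, 5]
residuals = [residual(a, ts) for a in (0.0, 1.0, 2.5)]
```

Every member of the family satisfies the equation to machine precision, even though they all share the same initial condition x(0) = 0: uniqueness fails.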
3. Finite escape time.
Consider the system

ẋ = 1 + x², x(0) = 0. (1.8)

It has a solution x(t) = tan(t), so that there is no solution defined outside of the interval [0, π/2[.
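A quick numerical integration shows the blow-up: the computed solution tracks tan(t) and grows without bound as t approaches π/2. The integrator (classical Runge-Kutta) and step size are illustrative choices:

```python
import math

def rk4_step(f, t, x, dt):
    """One classical fourth-order Runge-Kutta step for xdot = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(t_end, dt=1e-3):
    """Integrate xdot = 1 + x**2, x(0) = 0, of (1.8) up to t_end."""
    t, x = 0.0, 0.0
    while t < t_end - dt / 2:
        x = rk4_step(lambda t, x: 1.0 + x * x, t, x, dt)
        t += dt
    return x

# Already at t = 1.5 (still short of pi/2 = 1.5708) the state is above 10.
x_near_escape = integrate(1.5)
```

The numerical solution agrees with tan(t) until the escape time, where any integrator with a fixed accuracy must eventually give up: there is simply no solution past t = π/2.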
We shall not resolve these issues immediately, but rather draw from them the lesson that the preceding requirements on the existence, uniqueness, and extension of solutions of (1.5), while obvious for linear systems, require more careful consideration in a nonlinear context.
ẋ₂ = −k₁x₂ − k₂ sin(x₁).

In these equations k₁ is proportional to the frictional damping and k₂ to the length of the pendulum. The system is autonomous and has equilibria at

x₂ = 0,  x₁ = nπ,  n = 0, ±1, ±2, ....

Note that the system has multiple equilibria.
f(x) = 0. (1.10)

Ax = 0.
To give the reader an early feel for the methods of nonlinear analysis, we show how sufficient conditions for the existence of isolated equilibria in a system of the form (1.9) can be given. The proof will use concepts of norms (|·| on ℝⁿ) and inequalities associated with norms: these are developed more completely in Chapter 3. Assume that x₀ is an equilibrium point of (1.9) for all time, assume that f(x, t) is a C¹ function (i.e., a function that is continuously differentiable in x), and define its linearization at t₀ as a matrix in ℝ^{n×n}:

A(t₀) := (∂f/∂x)(x₀, t₀).

Here A_{ij}(t₀) = (∂f_i/∂x_j)(x₀, t₀).

|A(t₀)x| ≤ c|x|

Further, since f(x, t₀) is C¹, we may write down its Taylor series about x = x₀, where f(x₀, t₀) is zero, since x₀ is an equilibrium, the linear term is A(t₀)(x − x₀), and the remainder r(x, t₀) is the sum of the quadratic, cubic, and higher order terms, i.e., the "tayl"¹ of the Taylor series. r(x, t₀) is of order |x − x₀|², i.e.,

¹This misspelling of "tail" is standard in the literature for comic intent.

Using this estimate in (1.11) along with the bound on |A(t₀)x| yields
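Stepping back from the derivation, the claim that the remainder r(x, t₀) is of order |x − x₀|² can be checked numerically for the pendulum equations shown earlier. The parameter values k₁, k₂ below are illustrative; the ratio |r(x)| / |x − x₀|² should stay bounded as x → x₀:

```python
import math

K1, K2 = 0.5, 1.0   # illustrative damping and length parameters for the pendulum

def f(x):
    """Pendulum vector field from the earlier example: x1dot = x2, x2dot = -k1 x2 - k2 sin x1."""
    return (x[1], -K1 * x[1] - K2 * math.sin(x[0]))

# Linearization A = (df/dx)(x0) at the equilibrium x0 = (0, 0), computed by hand.
A = ((0.0, 1.0), (-K2, -K1))

def remainder_ratio(eps):
    """Return |r(x)| / |x - x0|^2 at x = (eps, eps); bounded as eps -> 0 iff r = O(|x - x0|^2)."""
    x = (eps, eps)
    fx = f(x)
    lin = (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])
    r = math.hypot(fx[0] - lin[0], fx[1] - lin[1])
    return r / (x[0] ** 2 + x[1] ** 2)

ratios = [remainder_ratio(10.0 ** (-k)) for k in range(1, 5)]
```

For the pendulum the remainder is k₂|x₁ − sin x₁|, which is actually of third order, so the ratios not only stay bounded but shrink as the equilibrium is approached.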
C v̇_C = −i_R(v_C) + i_L,
L i̇_L = −v_C − R i_L + E. (1.13)
1.3 Some Classical Examples
FIGURE 1.8. A tunnel diode circuit with the tunnel diode characteristic
Here C, L, and R stand for the linear capacitance, inductance, and resistance values, respectively, and E stands for the battery voltage. The only nonlinear element in the circuit is the tunnel diode with characteristic i_R(v_R). Since v_R = v_C, equation (1.13) uses as characteristic i_R(v_C). The equilibria of the circuit are the values at which the right-hand side vanishes; namely, when i_R = i_L and v_C = E − R i_L. Combining these two equations gives v_C = E − R i_R(v_C). This equation may be solved graphically as shown in Figure 1.9. The resistive characteristic of a tunnel diode has a humped part followed by an increasing segment, as shown in the figure. Thus, as seen from the figure, there are either three, two, or even one solutions of the equation, depending on the value of E. The situation corresponding to two solutions is a special one and is "unstable" in the sense that small perturbations of E or the nonlinear characteristic will cause the formation of either one solution or three solutions.
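Counting the equilibria of v_C = E − R·i_R(v_C) is easy to do numerically. The cubic characteristic below is a made-up stand-in for a tunnel diode (a hump followed by a rising segment), and R and the grid are likewise illustrative; for this choice, small and large E give one equilibrium and intermediate E gives three:

```python
def i_r(v):
    """Hypothetical tunnel-diode-like characteristic: a hump followed by a rising segment."""
    return v ** 3 - 3.0 * v ** 2 + 2.2 * v

def equilibria(E, R=3.0, v_lo=-1.0, v_hi=4.0, n=100000):
    """Count sign changes of g(v) = v + R*i_r(v) - E on a fine grid (one per equilibrium)."""
    count, dv = 0, (v_hi - v_lo) / n
    g_prev = v_lo + R * i_r(v_lo) - E
    for k in range(1, n + 1):
        v = v_lo + k * dv
        g = v + R * i_r(v) - E
        if g_prev * g < 0.0:
            count += 1
        g_prev = g
    return count

# One equilibrium for small or large E, three in between: the load-line picture of Figure 1.9.
counts = [equilibria(E) for E in (0.5, 1.6, 2.5)]
```

The three values of E play the role of the load lines in Figure 1.9 sweeping below, through, and above the hump of the characteristic.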
Consider the thought experiment of increasing the voltage E and letting the circuit equilibrate. For small values of E, the equilibrium current follows the left branch of the i-v characteristic of the tunnel diode till E = E₃, at which point there is no longer an equilibrium point on the left leg of the characteristic, but there is one on the right branch of the characteristic. Now, as the voltage is reduced, the equilibrium will stay on the right branch of the characteristic till E = E₂, at which point it will jump back to the left-hand branch. The hysteresis in the equilibrium values in this quasi-static experiment is characteristic of nonlinear systems.
FIGURE 1.9. Equilibrium points of a tunnel diode circuit shown by the method of load lines
0 = −i_R(v_C) + i_L,
L i̇_L = v_C. (1.15)
Thus the equation spends most of its time on segments where i_L = i_R(v_C), except when it jumps (i.e., makes an instantaneous transition) from one leg of the characteristic to the other. In Chapter 6 we will see why this system of equations does not have solutions that are continuous functions of time; see also [337].
the θ direction with period 2π. This again is not surprising, given that the right
hand side of the differential equation is periodic in the variable θ with period
2π. Further, the equilibrium points are surrounded by a continuum of trajectories
that are closed orbits corresponding to periodic orbits in the state space. A little
numerical experimentation shows that the closed orbits are of a progressively lower
frequency as one progresses away from the equilibrium point. One particularly
curious trajectory is the trajectory joining the two saddles, which are 2π radians
apart. This trajectory corresponds to one where a slight displacement from the
equilibrium position (θ = -π, θ̇ = 0) results in the pendulum building up kinetic
energy as it swings through θ = 0 and just back up to another equilibrium point at
θ = π, trading the kinetic energy for potential energy. Actually, since the equations
of the pendulum are periodic in the θ variable with period 2π, it is instructive to
redraw the phase portrait of (1.14) on a state space that is cylindrical, i.e., the θ
variable evolves on the interval [0, 2π] with the end points identified, as shown
in Figure 1.15. Note that, on this state space, the points θ = -π and θ = π are
identical. Also, as mentioned above, each of the equilibrium points of the pendulum
is surrounded by a set of closed orbits of increasing amplitude and decreasing
frequency. The trajectory joining the saddle at (π, 0) to the saddle at (-π, 0)
which lies in the upper half plane (that is, with θ̇ ≥ 0), and its mirror image about
the θ axis, taken together may be thought of as constituting an infinite period closed
orbit. This simple example manifests many of the features of a nonlinear system
that we mentioned at the outset: multiple equilibrium points, periodic orbits (in fact,
a continuum of periodic orbits), and an interesting dynamic feature corresponding
to the union of the upper and lower saddle connection trajectories, which bears
some resemblance to an infinite period closed orbit.
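The numerical experimentation mentioned above is easy to set up. The sketch below assumes the normalized pendulum θ̈ + sin θ = 0 (the text's (1.14) may carry physical constants) and times a quarter swing to estimate the period of the closed orbit released from rest at amplitude θ0:

```python
# A quick numerical experiment (not from the text): the closed orbits of
# the pendulum have lower frequency, i.e., longer period, the farther
# they lie from the center. Assumes the normalization θ'' + sin θ = 0.
import math

def pendulum(y):
    theta, omega = y
    return [omega, -math.sin(theta)]

def rk4_step(f, y, h):
    # one classical Runge-Kutta step for a 2-state system
    k1 = f(y)
    k2 = f([y[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([y[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def period(theta0, h=5e-4):
    """Release from rest at amplitude theta0 (0 < theta0 < π); time the
    quarter swing down to θ = 0 and multiply by four."""
    y, t = [theta0, 0.0], 0.0
    while y[0] > 0:
        y = rk4_step(pendulum, y, h)
        t += h
    return 4 * t

print(period(0.1))   # close to the small-oscillation value 2π ≈ 6.283
print(period(2.0))   # noticeably longer
print(period(3.0))   # longer still: the period grows without bound near θ = π
```

The saddle connection through (π, 0) is exactly the limit of these orbits as the period tends to infinity.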
This equation shows up in several other contexts as well. In Chapter 2 we study
the version of this system with nonzero damping in the context of the dynamics
of a Josephson junction circuit, and in Chapter 5 we study it in the context of the so-
called swing equations modeling the dynamics of an electrical generator coupled
to a power transmission network. The continuum limit of this equation also appears
in many problems of classical and quantum physics, and this partial differential
equation is referred to as the sine-Gordon equation.
the existence of the three equilibrium points of this system at ẋ = 0, x = 0, ±1, and
the orbits connecting the equilibrium point x = 0, ẋ = 0 to itself, and a continuum
of closed orbits between the equilibrium point x = -1, ẋ = 0 and the trajectory
connecting the saddle at x = 0, ẋ = 0 to itself, as well as between the equilibrium
point x = 1, ẋ = 0 and the other trajectory connecting the saddle to itself. Each of
FIGURE 1.17. A bifurcation diagram for the Euler beam buckling showing the transition
from the unbuckled state to the buckled state
the saddle trajectories bears resemblance to the preceding example, though each
of them connects the saddle to itself, rather than a different saddle. Again the
continuum of closed orbits contained inside each of the saddle connections is of
progressively decreasing frequency away from the equilibrium point towards the
"infinite" period saddle connections. A bifurcation diagram showing the transition
from the unbuckled state to the buckled state with μ as the bifurcation parameter
is shown in Figure 1.17. There are several qualitative features of the phase portrait
of the buckling beam that are like those of the undamped pendulum. These features
are common to Hamiltonian systems, that is, systems of the form
ẋ1 = ∂H(x1, x2)/∂x2,
                                                        (1.18)
ẋ2 = -∂H(x1, x2)/∂x1,
for some Hamiltonian function H(x1, x2). In the case of the undamped beam
(d = 0), with x = x1, ẋ = x2, we have that

ẋ = ax - bxy,
                                                        (1.19)
ẏ = cxy - dy.
Here a, b, c, d are all positive (as are the populations x, y). These equations follow
from noting that the prey multiply faster as they increase in number (the term ax)
and decrease in proportion to large numbers of both the predator and prey (the term
bxy). The predator population is assumed to increase at a rate proportional to the
number of predator-prey encounters, modeled as c(x - d/c)y. Here c is the constant
of proportionality, and the number of predator-prey encounters should be xy.
However, to model the fact that a steady state prey population is required to sustain
growth, this term is modified using an offset in x as (x - d/c). Admittedly, the model
is oversimplified but is nevertheless ecologically meaningful. A more meaningful
model is given in the exercises as a refinement of this one. A representative phase
portrait of this system is given in Figure 1.18. Other scenarios could occur as well:
for some values of the parameters, the Volterra-Lotka equations predict conver-
gence of the solutions to a single stable equilibrium population. These are explored
in the exercises (Problems 1.11-1.13). In particular, Problem 1.13 explores how
robust the model is to uncertainties in modeling.
FIGURE 1.18. Phase portrait of the Volterra-Lotka equations showing cyclic variation of
the populations
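The cyclic variation is easy to reproduce numerically. The parameter values below are arbitrary choices; the quantity V(x, y) = cx - d ln x + by - a ln y, a standard first integral of these equations, is used to check that the computed trajectory stays on a closed curve.

```python
# A quick numerical look at the Volterra-Lotka cycles. The conserved
# quantity V(x, y) = c x - d ln x + b y - a ln y is (numerically)
# constant along trajectories, which is one way to see that the orbits
# close up. Parameter values are arbitrary illustrative choices.
import math

a, b, c, d = 1.0, 1.0, 1.0, 1.0   # equilibrium at (d/c, a/b) = (1, 1)

def f(x, y):
    return a * x - b * x * y, c * x * y - d * y

def rk4(x, y, h):
    k1 = f(x, y)
    k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def V(x, y):
    return c * x - d * math.log(x) + b * y - a * math.log(y)

x, y = 2.0, 1.0            # start away from the equilibrium
v0 = V(x, y)
for _ in range(20000):     # integrate up to t = 20
    x, y = rk4(x, y, 0.001)
# V is conserved, so the trajectory stays on a closed level curve of V:
print(abs(V(x, y) - v0) < 1e-6)   # -> True
```

One can verify directly that dV/dt = (c - d/x)ẋ + (b - a/y)ẏ = (cx - d)(a - by) + (by - a)(cx - d) = 0 along solutions.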
1.4 Other Classics: Musical Instruments
FIGURE 1.19. Phase portrait of the clarinet model, with different spring constants and
different blowing characteristics
by having a friction characteristic between the body and the conveyor belt, that
is, sticky friction or stiction. The friction force between the block and the conveyor
belt has the qualitative form shown in Figure 1.20. The equations of the system are
given by

M ẍ + kx + f(ẋ - b) = 0.                                (1.21)

The equilibrium of the system is given by ẋ = 0, x = -f(-b)/k. It is unstable in
the sense that initial conditions close to it diverge, as shown in the phase portrait
of Figure 1.21. The phase portrait, however, has an interesting limit cycle to which
trajectories starting from other initial conditions appear to be attracted. The limit
cycle shows four distinct regimes, with the following physical interpretation:
1. Sticking. The block is stuck to the conveyor, and the friction force matches the
spring force. The velocity of the mass is equal to the velocity of the conveyor
belt. Thus, the block sticks to the conveyor belt and moves forward.
2. Beginning to slip. The spring has been extended sufficiently far that the force on
the block is strong enough to cause the block to start slipping. That is, the spring
FIGURE 1.21. Phase portraits of a violin string starting from different initial conditions
force exceeds the frictional force. Hence the acceleration becomes negative and
the velocity starts to decrease.
3. Slipping. The sudden drop of friction caused by the onset of slipping yields
the tug of war to the spring, causing the block to slip faster: the acceleration
becomes negative, the velocity becomes negative, and the block starts moving
to the left.
4. Grabbing. When the leftward motion of the block has decreased the spring
force to a value at which it is less than the slipping friction, the motion to the
left slows, and the velocity starts going positive. When the velocity reaches a
critical value we return to the sticking phase.
Convince yourself by simulation that initial conditions starting from far away
also converge to this limit cycle. It is once again of interest to see that a stiffer string
may be modeled by a stronger spring with a consequent greater tone or richness
of the note. A broader friction characteristic of the stiction or higher velocity of
the belt will correspond to a louder note.
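A rough simulation of (1.21) shows the limit cycle forming. The friction law used below is a smooth, made-up stand-in for the stiction characteristic of Figure 1.20 (a high static peak decaying toward a lower Coulomb level); M, k, b and all coefficients are arbitrary illustrative choices.

```python
# Crude simulation of the stick-slip oscillator M x'' + k x + f(x' - b) = 0.
# The friction law f is a smoothed, hypothetical stiction characteristic,
# not the one from the text's Figure 1.20.
import math

M, k, b = 1.0, 1.0, 0.2

def f(u):
    """Smoothed stiction: magnitude peaks near u = 0 and decays toward a
    lower Coulomb level as the relative speed |u| grows."""
    return math.tanh(20 * u) * (0.3 + 0.7 / (1 + 5 * abs(u)))

def accel(x, v):
    return (-k * x - f(v - b)) / M

def simulate(t_end, h=1e-3):
    # start just off the (unstable) equilibrium x* = -f(-b)/k, v = 0
    x, v = -f(-b) / k + 0.01, 0.0
    xs = []
    for _ in range(int(t_end / h)):
        # classical RK4 on the pair (x, v)
        k1x, k1v = v, accel(x, v)
        k2x, k2v = v + 0.5 * h * k1v, accel(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
        k3x, k3v = v + 0.5 * h * k2v, accel(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
        k4x, k4v = v + h * k3v, accel(x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        xs.append(x)
    return xs

xs = simulate(200.0)
x_eq = -f(-b) / k
amp = lambda seg: max(abs(p - x_eq) for p in seg)
# oscillation amplitude in two late time windows: the small initial
# perturbation has grown and then settled onto a limit cycle
a1, a2 = amp(xs[100000:150000]), amp(xs[150000:])
print(a1, a2)
```

The negative slope of f near zero relative velocity makes the equilibrium unstable (energy is pumped in), while the roughly constant Coulomb level at large relative speed dissipates energy, so trajectories settle onto a cycle in between.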
1.5 Summary
In this chapter, we have seen the richness of dynamic behavior predicted by non-
linear differential equations and difference equations. To warm up for the sort of
detailed mathematical analysis that will be necessary in the chapters to come, we
have given, by way of example, sufficient conditions for the existence of isolated
equilibria. The origins of the endeavor to build simple nonlinear models of physi-
cal phenomena are old. Here we have tried to give a modern justification for some
important classical equations. We encourage the reader to begin his own voyage of
discovery in the literature and to use a simulation package to play with the models,
especially in the two-dimensional case.
Some excellent references are Abraham and Shaw [3], Hale and Kocak [126],
Wiggins [329], Hirsch and Smale [141], Arnold [11], Nemytskii and Stepanov
[229], and Guckenheimer and Holmes [122].
1.6 Exercises
Problem 1.1 Alternative version of the logistic map. Find the transformation
required to convert the logistic map of (1.2), (1.3) into the form

y_{k+1} = μ - y_k².

Find the transformation from x to y and the relationship between h and μ.
Problem 1.2 Computing the square root of 2. Your friend tells you that a good
algorithm to compute the square roots of 2 is to iterate on the map

x_{k+1} = x_k/2 + 1/x_k.

Do you believe her? Try the algorithm out with some random guesses of initial
conditions. Do you get both the square roots of 2?
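A few lines suffice to try the iteration out (the map is, in fact, Newton's method applied to x² - 2 = 0):

```python
# Trying the iteration x_{k+1} = x_k/2 + 1/x_k from Problem 1.2.
import math, random

def iterate(x0, n=50):
    x = x0
    for _ in range(n):
        x = x / 2 + 1 / x
    return x

random.seed(0)
for _ in range(3):
    x0 = random.uniform(0.1, 10)
    print(x0, "->", iterate(x0))     # positive starts converge to +sqrt(2)
print(-3.0, "->", iterate(-3.0))     # negative starts converge to -sqrt(2)
```

The map is odd, so the sign of the initial condition selects which of the two square roots the iteration finds; x0 = 0 is the only start for which it is undefined.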
Problem 1.3. It is instructive to derive conditions for the existence of limit cycles
for linear systems of the form (1.1). Indeed, let x0 ∈ ℝⁿ be a point lying on such a
limit cycle, and further assume that the period of the limit cycle is T. Then derive
the equation that x0 should satisfy and derive necessary and sufficient conditions
for the existence of such a limit cycle in terms of the eigenvalues of A. Further,
show that if the conditions are satisfied, there is a continuum of x0 satisfying the
equation you have derived. Characterize this set (subspace).
Problem 1.4. Consider the one hump function of (1.3). Derive the dynamical
system for the two step evolution of this function. Now determine the minimum
value of the parameter h for which the period 2 limit cycle exists by looking for
equilibrium points of f². Note that equilibrium points of f will also be equilibrium
points of f². Thus you are looking for the parameter value h at which the equation

f²(x) = x

has 4 solutions. Determine the values of the equilibria explicitly. The critical value
of h at which the period 2 limit cycle appears is the value of h at which the map
f²(x) becomes two humped. Explain this and use this as an alternate method of
doing this problem.
Problem 1.5 Period 3 points. Consider the one hump equation of (1.3). Find
the value of h for which it has a period three point. That is, find the value of h for
which there exist a < b < c, such that f(a) = b, f(b) = c, f(c) = a. (Hint: try
h = 3.839.)
Problem 1.6. Read more about the details of the dynamics of the one-hump map;
you may wish to study either the original paper of Li and Yorke [182] or read about
it in the book of Devaney [84].
Problem 1.7 One Hump Map Game. Consider the discrete time system of the
one hump map, namely

x_{k+1} = h x_k (1 - x_k).

For 0 < h < 1 find all the fixed points of the equation, that is, points for which
x_{k+1} = x_k, and determine which are stable and which are unstable (if any!).
Let 1 < h < 3. Find all the equilibrium points. Show that

1. If x0 < 0, then x_k → -∞ as k → ∞.
2. If x0 > 1, then x_k → -∞ as k → ∞.
3. For some interval I ⊂ ]0, 1[, if x0 ∈ I, then x_k → x_h := (h - 1)/h.

Determine the open interval of attraction of x_h. For a game, program the above
recursion on your calculator or computer for h = 4 and note that initial conditions
starting in ]0, 1[ stay in ]0, 1[. Watch the sequence x_k wander all over the interval
except for some special initial conditions x0 corresponding to periodic solutions.
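For the game with h = 4, a minimal script (the one hump map is taken here in the logistic form x_{k+1} = h x_k(1 - x_k)):

```python
# The "game" of Problem 1.7: iterate the one hump map for h = 4 and
# watch the orbit wander over the unit interval.
def one_hump(x, h=4.0):
    return h * x * (1 - x)

x = 0.123          # a generic initial condition in ]0, 1[
orbit = []
for _ in range(1000):
    x = one_hump(x)
    orbit.append(x)

# The orbit stays in [0, 1] ...
print(all(0 <= v <= 1 for v in orbit))            # -> True
# ... and spreads over essentially the whole interval:
print(min(orbit) < 0.05, max(orbit) > 0.95)       # -> True True
```

Special rational-looking seeds (e.g., x0 = 0.75, the fixed point) stay put, but almost every other start wanders; this is the chaotic regime discussed for the one hump map.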
Problem 1.8. Use your favorite simulation environment to simulate the pendulum
equation, the buckling beam equation, the Volterra-Lotka predator-prey equations,
the Rayleigh equation, and the van der Pol equation with several of the parameters
of the equations being varied. Write a small macro to run the simulation for a set of
initial conditions and plot the resulting phase diagram. Then, think about how you
may look for interesting features in the phase portraits. How do you know where
to look for periodic solutions? How do you get at unstable equilibrium points?
Remember that you can integrate equations for t ≤ 0 by changing the sign of the
right hand side. What does this do to the stability of equilibrium points?
Problem 1.9. Consider the equations of the pendulum and the buckling beam,
namely (1.16) and (1.17), with positive nonzero damping d. Use a numerical
simulation package to draw phase portraits for these two cases. What happens to
the continuum of limit cycles and the orbits connecting the saddles? The phase
portrait of Figure 1.22 is that of the damped buckling beam equation with m = 1,
μ = 1.2, λ = 0.2, d = 0.3. Use it to stimulate your imagination!
FIGURE 1.22. Segments of the phase portrait of the damped buckling beam equation with
d = 0.3. The reader is invited to fill in the location of the stable manifold of the saddle at
the origin of the picture.
FIGURE 1.23. Showing a pendulum with two magnets of unequal strength
how many equilibria do you expect (for -π ≤ θ < π)? How many of them are
stable and how many unstable? What can you say about what volume of initial
conditions gets attracted to each equilibrium point? How does this phase portrait
relate to that of the Euler buckled beam?
population. How could you modify the model to have trajectories converging to a
steady state non-zero population of predator and prey?

ẋ = (a - by - λx)x,
                                                        (1.22)
ẏ = (cx - d - μy)y.

Study the equilibria of this system and try to simulate it for various values of the
parameters a, b, c, d, λ, μ > 0.
Problem 1.13 Volterra-Lotka generalized. A more meaningful Volterra-
Lotka model is obtained by considering the equations

ẋ = xM(x, y),
                                                        (1.23)
ẏ = yN(x, y).

Think about what minimum requirements M, N should have in order for the
equations to model predator-prey characteristics (with the bounded growth phe-
nomenon discussed above). From drawing the curves {(x, y) : M(x, y) = 0}
and {(x, y) : N(x, y) = 0} qualitatively determine the nature (stability char-
acteristics) of the equilibrium points. Further, in the positive quadrant draw the
qualitative shape of the direction of population evolution in the four regions ob-
tained by drawing the curves above. Give conditions for convergence of all prey
populations to zero, all predator populations to zero, and for the nonexistence of
periodic solutions. Simulate the systems for some guesses of functions M, N.
Problem 1.14 Competing Populations. Let us say that x and y model the pop-
ulations of two competing species. We will consider a somewhat different kind of
population dynamics for them. Let a, b, c, d be positive, where a and d represent
the unimpeded population growth, and b, c represent the competition: Species 1
eats species 2 and vice versa. Thus, we have

ẋ = ax - bxy,
                                                        (1.24)
ẏ = dy - cxy.

Find all the equilibrium points and sketch a plausible phase portrait. Try to
generalize this example as in the previous one to the case when the right hand
sides are respectively xM(x, y) and yN(x, y), with appropriate hypotheses on
M, N.
[Figure 1.24: dynamic model of the SR latch]
Problem 1.15 The SR Latch. The figure (1.24) shows a dynamic model of a
standard SR (Set-Reset) latch. Assume that the NAND gates are identical, and that
S, R are both high (represented in the figure as the binary 1), so that the NAND
gates function as inverters. Let the state variables x1, x2 be the voltages on the
capacitors and draw an approximate phase portrait showing the (three!) equilibria
of the system. Two of them are the logic one and logic zero output of the latch.
What can you say about the third?
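A plausible (entirely assumed) continuous model makes the three equilibria easy to find numerically: let each capacitor voltage relax toward the output of an inverter driven by the other voltage. The sigmoid g below is a made-up inverter characteristic, not taken from the text.

```python
# A crude dynamic model of the latch with S = R = 1 (both NAND gates
# acting as inverters): x1' = -x1 + g(x2), x2' = -x2 + g(x1).
# The inverter characteristic g is a hypothetical smooth sigmoid.
import math

def g(v):
    """Inverter: output near 1 for low input, near 0 for high input."""
    return 0.5 * (1 - math.tanh(4 * (v - 0.5)))

def step(x1, x2, h=0.01):
    # forward-Euler step of the RC relaxation dynamics
    return x1 + h * (-x1 + g(x2)), x2 + h * (-x2 + g(x1))

def settle(x1, x2, n=5000):
    for _ in range(n):
        x1, x2 = step(x1, x2)
    return x1, x2

print(settle(0.9, 0.1))   # near (1, 0): the logic-one state
print(settle(0.1, 0.9))   # near (0, 1): the logic-zero state
# The symmetric point x1 = x2 is the third equilibrium; it is a saddle,
# so even a slight asymmetry decides which way the latch falls:
print(settle(0.51, 0.49))
```

Linearizing at the symmetric equilibrium gives one positive and one negative eigenvalue in this model, which is the answer the problem is fishing for: the third equilibrium is unstable, and its stable manifold separates the basins of the two logic states.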
2
Planar Dynamical Systems
2.1 Introduction
In the previous chapter, we saw several classical examples of planar (or 2-
dimensional) nonlinear dynamical systems. We also saw that nonlinear dynamical
systems can show interesting and subtle behavior and that it is important to be
careful when talking about solutions of nonlinear differential equations. Before
dedicating ourselves to the task of building up in detail the requisite mathematical
machinery, we will first study in semi-rigorous but detailed fashion the dynam-
ics of planar dynamical systems; that is, systems with two state variables. We
say semi-rigorous because we have not yet given precise mathematical conditions
under which a system of differential equations has a unique solution and have
not yet built up some necessary mathematical prerequisites. These mathematical
prerequisites are deferred to Chapter 3.
The study of planar dynamical systems is interesting, not only for its relative
simplicity but because planar dynamical systems, continuous and discrete, contain
in microcosm a great deal of the variety and subtlety of nonlinear dynamics. The
intuition gained in this study will form the heart of the general theory of dynamical
systems which we will develop in Chapter 7.
is to study the eigenvalues of A. Recall that in order for the system (2.1) to have
an isolated equilibrium point at the origin, it should have no eigenvalues at the
origin. Now, consider the similarity transformation T ∈ ℝ^{2×2},

T⁻¹AT = J,

that determines the real Jordan form¹ of the matrix A. Applying the coordinate
transformation z = T⁻¹x, the system is transformed into the form

ż = T⁻¹ATz = Jz.
The following three cases, having qualitatively different properties, now arise.

• Real Eigenvalues. A diagonal Jordan form

J = [λ1 0; 0 λ2],

with λ1, λ2 ∈ ℝ, results in the following formula for the relationship between
z1, z2, obtained by eliminating time from the formulas

z1(t) = z1(0)e^{λ1 t},   z2(t) = z2(0)e^{λ2 t}.
¹A real Jordan form for a real-valued matrix is one for which the matrix T is constrained
to be real. Consequently, real-valued matrices with complex eigenvalues cannot be diag-
onalized, but rather only block diagonalized, by using the real and imaginary parts of the
eigenvectors associated with a complex eigenvalue as columns of T. For details on real
Jordan forms, see Chapter 6 of [141].
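The construction in the footnote can be carried out concretely. The sketch below (the matrix A is an arbitrary example with complex eigenvalues) builds T from the real and imaginary parts of a complex eigenvector and recovers the 2×2 real Jordan block.

```python
# Real Jordan form of a 2x2 matrix with complex eigenvalues α ± jβ:
# use Re(v) and Im(v) of a complex eigenvector v as the columns of T.
import cmath

A = [[0.0, 1.0], [-1.0, -1.0]]          # char. poly λ² + λ + 1

a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
tr, det = a + d, a * d - b * c
lam = (tr + cmath.sqrt(tr * tr - 4 * det)) / 2     # λ = α + jβ
alpha, beta = lam.real, lam.imag

# An eigenvector for λ is v = (1, (λ - a)/b) when b ≠ 0
v = (1.0 + 0j, (lam - a) / b)
T = [[v[0].real, v[0].imag],
     [v[1].real, v[1].imag]]            # columns: Re v, Im v

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    dt = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / dt, -X[0][1] / dt],
            [-X[1][0] / dt, X[0][0] / dt]]

J = matmul(inv2(T), matmul(A, T))
print([[round(e, 6) for e in row] for row in J])
# block form [[α, β], [-β, α]] with α = -0.5, β = √3/2 ≈ 0.866
```

The identity A(u + jw) = (α + jβ)(u + jw) gives Au = αu - βw and Aw = βu + αw, which is exactly the statement T⁻¹AT = [α β; -β α].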
2.2 Linearization About Equilibria of Second-Order Nonlinear Systems
Node Saddle
FIGURE 2.1. Phase portraits for a node and saddle in the z coordinates
formulas for z1, z2. The stable node has particularly simple dynamics: All the
trajectories decay to the origin. An unstable node, corresponding to the case
that λ1, λ2 > 0, has equally simple dynamics: All trajectories take off from
the neighborhood of the origin exponentially (alternatively, one could say that
trajectories tend to the origin in reverse time, as t → -∞). Thus, the
phase portrait is the same as that of the stable node of Figure 2.1 but with the
sense of time reversed. The z1, z2 axes are the eigenspaces associated with the
two eigenvalues. They are invariant under the flow of (2.1), in the sense that
trajectories starting on the z1 or z2 axes stay on the z1 or z2 axes, respectively.
The term saddle is particularly evocative in the description of the dynamics
at an equilibrium point which has one positive (unstable) and one negative
(stable) eigenvalue. The only initial conditions that are attracted to the origin
asymptotically are those on the z1 axis (the bow of the saddle). All other initial
conditions diverge asymptotically along the hyperbolas described by (2.3). Of
course, when λ1 > 0 > λ2, the equilibrium point is still a saddle. (Think about
the use of the terminology saddle and justify it in your mind by rolling small
marbles from different points on a horse's saddle.)
In the original x coordinates, the portraits of Figure 2.1 are only slightly mod-
ified, since the relationship between x and z is linear, as shown in Figure
2.2.
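The case analysis above (together with the focus and center cases treated below) can be folded into a small classifier for ż = Az. The naming of the degenerate case follows the text's usage; the tolerance 1e-12 is an arbitrary numerical choice.

```python
# Classifying the equilibrium at the origin of ż = Az from the
# eigenvalues of the 2x2 matrix A.
import cmath

def classify(A):
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(l1.imag) > 1e-12:                 # complex pair α ± jβ
        if abs(tr) < 1e-12:
            return "center"
        return "stable focus" if tr < 0 else "unstable focus"
    l1, l2 = l1.real, l2.real
    if l1 * l2 < 0:
        return "saddle"
    if l1 < 0 and l2 < 0:
        return "stable node"
    if l1 > 0 and l2 > 0:
        return "unstable node"
    return "degenerate (eigenvalue at the origin)"

print(classify([[-1.0, 0.0], [0.0, -2.0]]))   # stable node
print(classify([[1.0, 0.0], [0.0, -2.0]]))    # saddle
print(classify([[0.0, 1.0], [-1.0, 0.0]]))    # center
```

Repeated real eigenvalues fall into the node branches here; distinguishing proper from improper nodes would additionally require checking whether A has one or two independent eigenvectors.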
• Single repeated eigenvalue with only one eigenvector. In this case the Jordan
form J is not diagonal but has the form

J = [λ 1; 0 λ].                                         (2.4)

FIGURE 2.2. Phase portraits for a node and saddle in the x coordinates

Observe that by eliminating t, the form of the trajectories in the phase plane is as
shown in Figure 2.3 and is given by

z1 = (z10/z20) z2 + (1/λ) z2 ln(z2/z20).

Thus, the trajectories converge to the origin as in Figure 2.1, but they do so in a
more complex fashion. This kind of an equilibrium is referred to as an improper
stable node (or simply improper node). For λ > 0, the improper node is referred to
as an improper unstable node. In this case the z1 axis is the eigenspace associated
with the eigenvalue λ and is invariant. However, the z2 axis is not invariant, as
evidenced by the fact that trajectories cross the z2 axis in the phase portrait of
Figure 2.3.
• Complex pair of eigenvalues α ± jβ. In this case the real Jordan form is

J = [α β; -β α].                                        (2.5)

To obtain solutions for this case, it is useful to perform a polar change of coordinates,
under which the system takes the form

ṙ = αr,
φ̇ = -β.

The phase portraits are now easy to visualize: The angular variable φ increments
at a fixed rate (the trajectory spins around in a clockwise direction if β is positive
and counterclockwise otherwise). Also, the trajectory spirals inward toward the
origin if α < 0, in which case the equilibrium is referred to as a stable focus. When
α > 0, trajectories spiral away from the origin, and the equilibrium is called an
unstable focus. If α = 0, the origin is surrounded by an infinite number of circular
closed orbits, and the equilibrium is referred to as a center. Figure 2.4 shows a stable
focus and a center. When the phase portrait is drawn in the original x coordinates,
it is rotated as in the instance of the node and saddle.
ẋ1 = f1(x1, x2),
                                                        (2.6)
ẋ2 = f2(x1, x2).

Let

A(x0) = (∂f/∂x)(x0) = [∂f1/∂x1 ∂f1/∂x2; ∂f2/∂x1 ∂f2/∂x2](x0) ∈ ℝ^{2×2}

be the Jacobian matrix of f at x0 (the linearization of the vector field f(x) about x0).
A basic theorem of Hartman and Grobman establishes that the phase portrait of
the system (2.6) resembles that of the linearized system described by

ż = A(x0)z                                              (2.7)

if none of the eigenvalues of A(x0) are on the jω axis. Such equilibrium points are
referred to as hyperbolic equilibria. A precise statement of the theorem follows:
Theorem 2.1 Hartman-Grobman Theorem. If the linearization of the system
(2.6), namely A(x0), has no zero or purely imaginary eigenvalues, then there exists
a homeomorphism (that is, a continuous map with a continuous inverse) from a
neighborhood U of x0 into ℝ²,

h : U → ℝ²,

taking trajectories of the system (2.6) and mapping them onto those of (2.7). In
particular, h(x0) = 0. Further, the homeomorphism can be chosen to preserve the
parameterization by time.
The proof of this theorem is beyond the scope of this chapter, but some remarks
are in order.
Remarks:
1. Recall from elementary analysis that a homeomorphism is a continuous one
to one and onto map with a continuous inverse. Thus, the Hartman-Grobman
theorem asserts that it is possible to continuously deform all trajectories of the
nonlinear system (2.6) onto the trajectories of (2.7). What is most surprising
about the theorem is that there exists a single map that works for every one of
a continuum of trajectories in a neighborhood of x0. In general, it is extremely
difficult to compute the homeomorphism h of the theorem. Hence, the primary
interest of this theorem is conceptual; it gives a sense of the dynamical behavior
of the system near x0. In modern dynamical systems parlance, this is referred
to as the qualitative dynamics about x0. The theorem is illustrated in Figure 2.5
for the case that the linearized system has a saddle equilibrium point.
2. To state the Hartman-Grobman theorem a little more precisely, we will need
the following definition:
Definition 2.2 Flow. The state of the system (2.6) at time t starting from x
at time 0 is called the flow and is denoted by φt(x).

Using this definition, the Hartman-Grobman theorem can be expressed as
follows: If x ∈ U ⊂ ℝ² and φt(x) ∈ U, then

h(φt(x)) = e^{A(x0)t} h(x).                             (2.8)

The right hand side is the flow of the linearized system (2.7), starting from
the initial condition h(x) at time 0. The preceding equation states that, so long
as the trajectory of (2.6) stays inside B(x0, δ) and the homeomorphism h is
known, it is not necessary to integrate the nonlinear equation; it is possible
simply to use the linear flow e^{A(x0)t} as given in (2.8) above. Equation (2.8) may
be rewritten as

φt(x) = h⁻¹(e^{A(x0)t} h(x)).
3. The Hartman-Grobman theorem says that the qualitative properties of nonlinear
systems in the vicinity of isolated equilibria are determined by their linearization
if the linearization has no eigenvalues on the jω axis.
4. When the linearization has an eigenvalue at the origin, the linearization pre-
dicts the existence of a continuum of equilibria. However, the original nonlinear
system may or may not have a continuum of equilibria. Also, the qualitative
behavior may vary greatly depending on the higher order nonlinear terms. The
following example shows what can happen when there is an eigenvalue at the
origin:
Example 2.3. The system

ẋ = x²

is also a scalar system with a zero linearization and also a single equilibrium
point at 0. However, only negative initial conditions converge to the origin as
t → ∞.
Thus, when the linearization has an eigenvalue at the origin, the linearization is
inconclusive in determining the behavior of the nonlinear system.
5. When the linearization has nonzero eigenvalues on the jω axis, the linearization
predicts the flow in the neighborhood of the equilibrium point to resemble
the flow around a focus. However, the original nonlinear system may have
trajectories that either spiral towards the origin or away from it depending
on the higher order nonlinear terms. Thus, the linearization is inconclusive
in determining the qualitative behavior close to the equilibrium point. The
following example illustrates this point:
Example 2.4. Consider the nonlinear system

ẋ1 = x2,
ẋ2 = -x1 - ε x1² x2.

The linearized system has eigenvalues at ±j, but for ε > 0 the system has
all trajectories spiraling in to the origin (much like a sink), and for ε < 0 the
trajectories spiral out (like a source). Think about how one might prove these
statements! (Hint: Determine the rate of change of x1² + x2² along trajectories
of the equation.)
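Following the hint, the computation (spelled out here; it is not carried out in the text) is one line:

```latex
\frac{d}{dt}\bigl(x_1^2 + x_2^2\bigr)
  = 2x_1\dot x_1 + 2x_2\dot x_2
  = 2x_1 x_2 + 2x_2\bigl(-x_1 - \varepsilon x_1^2 x_2\bigr)
  = -2\varepsilon\, x_1^2 x_2^2 ,
```

so for ε > 0 the squared distance to the origin is nonincreasing (and strictly decreasing away from the coordinate axes), while for ε < 0 it is nondecreasing, matching the sink and source behavior claimed above.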
It is important to try to understand what the Hartman-Grobman theorem means.
If the linearization of an equilibrium point is a saddle, the conclusions of the
theorem are as shown in Figure 2.5. As before, note that the same homeomorphism
h maps all the trajectories of the nonlinear system onto those of the linearized sys-
tem. Thus the homeomorphism h has the effect of straightening out the trajectories
of the nonlinear system.
Another question that comes to mind immediately in the context of the Hartman-
Grobman theorem is: What do the eigenvectors of the linearization correspond to?
For the linearized system (2.1) they correspond to the eigenspaces, which are invari-
ant under the flow of the differential equation (2.1), as was discussed in the previous
section. The nonlinear system (2.6) also has "nonlinear eigenspaces": invariant
manifolds. Invariant manifolds are the nonlinear equivalents of subspaces.² It is
quite striking that these manifolds are, in fact, locally tangent to the eigenspaces
of the linearized system at x0. To make this more precise, consider the following

²A more precise definition of a manifold will be given in Chapter 3; for now, think of
manifolds as being bent or curved pieces of subspaces.
definitions:
Here φt(x) is the flow of (2.6) starting from x(0) = x, i.e., the location of the
state variable x(t) at time t starting from x(0) = x at time 0, and U is an open
set containing x0. The set W^s(x0) is referred to as the local inset of x0, since it
is the set of initial conditions in U that converge to x0 as t → ∞, while the
set W^u(x0) is referred to as the local outset of x0, since it is the set of points in
U which is anti-stable, i.e., the set of initial conditions which converge to x0 as
t → -∞. A theorem called the stable-unstable manifold theorem states that, when
the equilibrium point is hyperbolic (that is, the eigenvalues of the linearization are
off the jω axis), the inset and outset are the nonlinear versions of the eigenspaces
and are called, respectively, the stable invariant manifold and unstable invariant
manifold. In Chapter 7 we will see that if the equilibrium is not hyperbolic, the
inset and outset need not be manifolds (this distinction is somewhat more technical
than we have machinery to handle here; we will develop the concept of manifolds
in Chapter 3). Figure 2.6 shows the relationship between the subspaces associated
with the eigenvalues of the linearized system and W^s(x0), W^u(x0) close to x0. The
eigenspaces of the linearized system are tangent to W^s(x0), W^u(x0) at x0.
The tangency of the sets W^s, W^u to the eigenspaces of the linearization needs
proof that is beyond the level of mathematical sophistication of this chapter. We
will return to this in Chapter 7. For now, the preceding discussion is useful in
determining the behavior around hyperbolic equilibrium points of planar systems.
As the following example shows, qualitative characteristics of W^s, W^u are also
useful in determining some plausible global phase portraits:
Example 2.5 Unforced Duffing Equation. Consider the second order system

ẍ + δẋ - x + x³ = 0.

FIGURE 2.6. Showing the relationship between the eigenspaces and stable, unstable
manifolds
ẋ1 = x2,
                                                        (2.9)
ẋ2 = x1 - x1³ - δx2.

The equilibrium points are (0, 0), (±1, 0). The Jacobian of the linearization is

A(x) = [0 1; 1 - 3x1² -δ].

A small calculation shows that the eigenvalues of the linearization at the equilibria
(±1, 0) are

(-δ ± √(δ² - 8))/2.

Thus, for δ > 0 they are both stable. The eigenvalues of the linearization at (0, 0)
are

(-δ ± √(δ² + 4))/2.

For δ > 0, one of these is positive and the other negative, corresponding to a saddle
equilibrium. The corresponding eigenvectors are given by

(1, (-δ ± √(δ² + 4))/2)^T,

and for δ = 0 we can plot the phase portrait as the level sets of the Hamiltonian

H(x1, x2) = ½x2² - ½x1² + ¼x1⁴.
Thus a few plausible phase portraits in the vicinity of these three equilibria are
shown in Figure 2.7. The only part that we are certain about in these portraits is
the behavior near the (hyperbolic) equilibria. The existence of the closed orbits
in some of the portraits is more conjectural and is drawn only to illustrate their
possible existence. This is the topic of the next section.
FIGURE 2.7. Plausible phase portraits for the Duffing equation from the linearization
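The eigenvalue computations in Example 2.5 are easy to double check numerically; the snippet below is an added verification (with δ = 0.3 as a sample value) that evaluates the characteristic roots at the three equilibria.

```python
# Eigenvalues of the Duffing Jacobian [[0, 1], [1 - 3 x1², -δ]]
# at the equilibria x1 = 0, ±1, for a sample damping δ = 0.3.
import cmath

delta = 0.3

def eigs(x1):
    # characteristic polynomial: λ² + δλ - (1 - 3 x1²) = 0
    q = 1 - 3 * x1 ** 2
    disc = cmath.sqrt(delta ** 2 + 4 * q)
    return (-delta + disc) / 2, (-delta - disc) / 2

for x1 in (0.0, 1.0, -1.0):
    print(x1, eigs(x1))
# (0, 0): one positive and one negative eigenvalue -> saddle
# (±1, 0): complex pair with negative real part -> stable
```

For δ = 0.3 the roots at (±1, 0) are −0.15 ± 1.41j, so those equilibria are in fact stable foci, which is consistent with the plausible spiraling portraits of Figure 2.7.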
By a closed orbit is meant a trajectory in the phase space that is periodic with a
finite period. Thus, a collection of trajectories connecting a succession of saddle
points is not referred to as a closed orbit; consider for example the trajectories
connecting the saddles in Figure 1.14. These trajectories are, roughly speaking,
infinite period closed orbits (reason about this before you read on, by contemplating
what happens on them). In the sequel, we will refer to such trajectories as saddle
connections, and not as closed orbits. Consider a simple example of a system with
a closed orbit:
ṙ = αr(β² - r²),
                                                        (2.10)
φ̇ = -1.

The resultant system has a closed orbit which is a perfect circle at r = β. It also
has an unstable source type equilibrium point at the origin, since its linearization
there is

[αβ² 1; -1 αβ²],

which has unstable eigenvalues at αβ² ± j. In fact, the equations can be integrated
to give
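The convergence of nearby trajectories to the circle r = β can also be seen numerically. The radial equation used below, ṙ = αr(β² - r²), is an assumed form chosen to be consistent with the circular orbit at r = β and the eigenvalues αβ² ± j quoted for the origin; α = 1, β = 1.5 are arbitrary sample values.

```python
# Numerical check of the attracting circle at r = β for the (assumed)
# radial dynamics ṙ = α r (β² - r²); the angle φ just rotates uniformly.
alpha, beta = 1.0, 1.5

def rdot(r):
    return alpha * r * (beta ** 2 - r ** 2)

def integrate(r, t, h=1e-3):
    # classical RK4 on the scalar radial equation
    for _ in range(int(t / h)):
        k1 = rdot(r)
        k2 = rdot(r + 0.5 * h * k1)
        k3 = rdot(r + 0.5 * h * k2)
        k4 = rdot(r + h * k3)
        r += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return r

# Radii starting inside and outside both approach β = 1.5:
print(integrate(0.1, 10.0), integrate(3.0, 10.0))
```

Since ṙ > 0 for 0 < r < β and ṙ < 0 for r > β, every trajectory other than the equilibrium at the origin winds onto the circle, which is therefore a (stable) limit cycle.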
But div f does not change sign and is not identically zero in S by assumption. This establishes the contradiction.⁴ □
3 A simply connected region in the plane is one that can be (smoothly) contracted to a
point. Thus, a simply connected region cannot have more than one "blob", and that blob
should not have any holes in its interior.
⁴We have been a little cavalier with the smooth dependence of f(x) on x needed to use Gauss's theorem. We will be more careful about stating such assumptions more precisely
2.3 Closed Orbits of Planar Dynamical Systems 43
after the next chapter. It is also important to notice that in the hypotheses of Bendixson's
theorem div f may be allowed to vanish on sets of measure 0 in D , since it will not change
the surface integral of div f : this indeed is the import of the assumption on div f not
vanishing on any sub-region of D.
2. Planar Dynamical Systems
Definition 2.11 ω Limit Set. A point z ∈ ℝ² is said to be an ω limit point of a trajectory φₜ(x) of (2.6) if there exists a sequence of times tₙ, n = 1, ..., ∞, with tₙ ↑ ∞ as n → ∞, for which lim_{n→∞} φ_{tₙ}(x) = z. The set of all limit points of a trajectory is called the ω limit set of the trajectory. This is abbreviated as ω(x).
Remarks:
1. The preceding definition was stated for dynamical systems in the plane, but we will have occasion to use the same definition in ℝⁿ in Chapter 7. A similar definition can be made for α limit sets using a sequence of times tending to −∞. The terms α and ω, the first and last letters of the Greek alphabet, are used to connote the limits at −∞ and +∞, respectively. While we have defined ω and α limit sets of trajectories, the notation ω(x) or α(x) will be used to mean the ω and α limit sets of the trajectory going through x at t = 0. We will also refer to ω(x) and α(x) as ω (respectively, α) limit sets of the point x.
2. If x₀ is an equilibrium point, then ω(x₀) = x₀ and α(x₀) = x₀. Also, if x₀ ∈ γ is a point on a closed orbit γ, then ω(x₀) = γ and α(x₀) = γ.
3. The ω and α limit sets can sometimes be quite complicated and non-intuitive, as the next example, a flow on the torus,
$$\dot\theta_1 = p, \qquad \dot\theta_2 = q,$$
shows, where p, q are both constants. Verify that, if the ratio p/q is rational, the ω limit set of a trajectory is the trajectory itself. If, however, the ratio is irrational, the ω limit set is all of the torus. (Be sure that you can convince yourself of this fact. Can you determine the α limit sets in these two instances?)
4. If the α and ω limit sets of a point are the same, they are referred to simply as the limit set.
Using these definitions, a distinction is sometimes made between a closed orbit and a limit cycle as follows.
Definition 2.13 Limit Cycle. A limit cycle is a closed orbit γ for which there exists at least one x not in γ such that either γ is the α limit set of x, or γ is the ω limit set of x. Alternatively, this means that
$$\lim_{t \to +\infty} \phi_t(x) \to \gamma \quad \text{or} \quad \lim_{t \to -\infty} \phi_t(x) \to \gamma.$$
1. ω(p) ≠ ∅, that is, the ω limit set of a point is not empty.
2. ω(p) is closed.
3. ω(p) is an invariant set.
4. ω(p) is connected.
A further definition which is used in the construction is a transverse section. A transverse section Σ is a continuous, connected arc such that the dot product of the unit normal to Σ and the vector field is not zero and does not change sign on Σ. That is, the vector field has no equilibrium points on Σ and is never tangent to Σ.
Fact 2: Let Σ be a transverse section in M. The flow of any point p ∈ M, φₜ(p), intersects Σ in a monotone sequence for t ≥ 0. Thus, if pᵢ is the i-th intersection of φₜ(p) with Σ, then pᵢ ∈ [pᵢ₋₁, pᵢ₊₁].
Proof of Fact 2: Consider the piece of the orbit φₜ(p) from pᵢ₋₁ to pᵢ along with the segment [pᵢ₋₁, pᵢ] ⊂ Σ (see Figure 2.10). Of course, if the orbit φₜ(p) intersects Σ only once, then we are done. If it intersects more than once, consider the piece of φₜ(p) from pᵢ₋₁ to pᵢ. This piece along with the segment of Σ from pᵢ₋₁ to pᵢ forms the boundary of a positively invariant region D as shown in Figure 2.10. Hence, φₜ(pᵢ) ⊂ D, and it follows from the invariance of D that pᵢ₊₁ (if it exists) lies in D. Thus, pᵢ ∈ [pᵢ₋₁, pᵢ₊₁].
Fact 3: The ω limit set of a point p intersects Σ in at most one point.
Proof of Fact 3: The proof is by contradiction. Suppose that ω(p) intersects Σ in two points, q₁, q₂. Then, by the definition of ω limit sets, we can find two sequences of points along φₜ(p), say pₙ and p̃ₙ, such that as n → ∞, pₙ ∈ Σ, pₙ → q₁ and p̃ₙ ∈ Σ, p̃ₙ → q₂. However, this would contradict the monotonicity of intersections of φₜ(p) with Σ.
Fact 4: In the plane, if the ω limit set of a point does not contain equilibrium points, then it is a closed orbit.
Proof of Fact 4: The proof of Fact 4 proceeds by choosing a point q ∈ ω(p) and showing that the orbit of q is the same as ω(p). Let x ∈ ω(q); then x is not a fixed point, since ω(q) ⊂ ω(p) (this actually uses the fact that ω(p) is closed).
FIGURE 2.10. Construction for the proof of Fact 2 in the Poincaré-Bendixson Theorem
Construct the line section S transverse (i.e., not tangential) to the vector field at x, as shown in Figure 2.11. Now, there exists a sequence of tₙ ↑ ∞ such that φ(tₙ, q) ∈ S → x as n → ∞. By Fact 3 above, this sequence is a monotone sequence tending to x. But, since φ(tₙ, q) ∈ ω(p), we must have by Fact 3 above that φ(tₙ, q) = x. Thus, the orbit of q must be a closed orbit. It remains to be shown that the orbits of q and ω(p) are the same. But this follows from an argument identical to the one given above, by choosing a local section transverse to the trajectory at q and noting from Fact 3 above that ω(p) can intersect this section only at q. Since ω(p) is an invariant set, contains no fixed points, and is connected, it follows that the orbit of q is ω(p).⁵
To finish the proof of the theorem we simply put the above facts together. First, since M is invariant for the flow, p ∈ M ⇒ ω(p) ⊂ M. Further, since M has no equilibrium points, by Fact 4 it follows that ω(p) is a closed orbit.
⁵The arguments involving crossing a local section transverse to a flow are specific to planar systems and are also used in Remark 3. (What about the plane is so specific? Is it the fact that a closed curve in the plane divides the plane into an inside and an outside?)
⁶Compact sets in ℝⁿ are closed and bounded sets.
equilibrium points, i.e., a union of finitely many saddle connections (see Figure 2.13 for an example of this). As we will see in Section 7.5, a theorem of Andronov establishes that these are the only possible ω limit sets for dynamical systems in the plane.
3. Let γ be a closed orbit of the system enclosing an open set U. Then U contains either an equilibrium point or a closed orbit.
Proof: Define the set D = U ∪ γ. Then D is a compact invariant set. Assume, for the sake of contradiction, that there are no equilibrium points or closed orbits inside D. Since the vector field on the boundary of D is tangential to γ, it follows that the ω limit set of each point inside U is in D. By the Poincaré-Bendixson theorem, then, the ω limit set is γ. By applying the same reasoning to the system
$$\dot x = -f(x)$$
we may conclude that for each x₀ ∈ U, there exist sequences tₙ ↑ ∞, sₙ ↓ −∞ such that x(tₙ) → z ∈ γ and x(sₙ) → z ∈ γ as n → ∞, starting from the same initial condition x₀. To see that this results in a contradiction, draw a line section S transverse to the solution starting from x₀ (i.e., not tangent to the trajectory). We now claim that the sequences x(tₙ), x(sₙ) are monotone on the section S, i.e., if x(s₁) is above x₀, then so are x(s₂), x(s₃), .... Also, in this instance x(t₁), x(t₂), ... lie below x₀ on S. (This is intuitively clear from studying
2.4 Counting Equilibria: Index Theory 49
Figure 2.14.) Thus, the region U should contain either an equilibrium point or
a closed orbit.
4. Actually, in the previous application we can say more: not only does the region U
contain either a closed orbit or an equilibrium point, but it certainly includes an
equilibrium point. Thus, every closed orbit in the plane encloses an equilibrium
point. The proof of this result uses a very interesting technique called index
theory, which is the topic of the next section.
$$I_f(D) = \frac{1}{2\pi} \oint_J d\theta_f, \tag{2.11}$$
where
$$\theta_f(x_1, x_2) := \tan^{-1}\frac{f_2(x_1, x_2)}{f_1(x_1, x_2)}.$$
More explicitly, we have
$$I_f(D) = \frac{1}{2\pi} \oint_J \frac{f_1\, df_2 - f_2\, df_1}{f_1^2 + f_2^2}. \tag{2.12}$$
7 A simple curve means one which can be contracted to a point. This definition of a
simple curve is reminiscent of the definition of a simply connected region in the statement
of the Bendixson theorem, Theorem 2.7.
Remarks:
1. θ_f is the angle made by f with the x₁ axis, and I_f(D) is the net change in the direction of f as one traverses J, divided by 2π. It is easy to verify that I_f(D) is an integer.
2. If x₀ is an equilibrium point inside D and D encloses no other equilibrium points, then I_f(D) is called the index of the equilibrium point x₀ and is denoted I_f(x₀).
3. Prove that I_f of a center, focus, and node is 1 and that of a saddle is −1. The best way to verify this is to keep track of the rotation of the vector field f as one traverses the curve J on the boundary of D as shown in Figure 2.15. That is, if the linearization of the vector field f at an equilibrium point has no eigenvalues at the origin, then one can write its index by inspection of the eigenvalues, and it is ±1. The exercises explore situations in which equilibrium points have index other than ±1. Here is a simple example of an equilibrium point of index 2:
$$\dot x_1 = x_1^2 - x_2^2,$$
$$\dot x_2 = 2x_1x_2.$$
The utility of index theory is that it enables one to predict the existence of equilibrium points in a region D without doing detailed calculations. In particular, one never does the calculation of the line integral of formula (2.12) for computing the index of a region. Rather, one does the sort of graphical calculation of Figure 2.15. For instance, if the vector field f points inwards everywhere on the boundary of a region D, then one can verify graphically that the index of D is 1. This could result from either of the situations shown in Figure 2.17. It is a basic theorem that if a region D has non-zero index, then it has at least one equilibrium point. If J is a closed orbit enclosing a region D, one can verify graphically that the index of D is 1.
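The graphical calculation can also be mechanized: sample the angle θ_f around a curve enclosing the equilibrium and accumulate its net change. A sketch (Python; the circle radius and sample count are arbitrary choices):

```python
import math

def index(f, radius=0.5, samples=20000):
    # Winding number of the planar vector field f around a circle centered at 0
    total = 0.0
    prev = None
    for k in range(samples + 1):
        t = 2 * math.pi * k / samples
        f1, f2 = f(radius * math.cos(t), radius * math.sin(t))
        ang = math.atan2(f2, f1)
        if prev is not None:
            d = ang - prev
            # unwrap the angle jump into (-pi, pi]
            while d <= -math.pi: d += 2 * math.pi
            while d > math.pi: d -= 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

assert index(lambda x1, x2: (x1**2 - x2**2, 2 * x1 * x2)) == 2  # the example above
assert index(lambda x1, x2: (x1, -x2)) == -1                    # a saddle
assert index(lambda x1, x2: (-x1, -x2)) == 1                    # a stable node
```

The three assertions reproduce the index values claimed in the remarks: 2 for the example above, −1 for a saddle, and 1 for a node.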
2.5 Bifurcations
The term bifurcation was originally used by Poincaré to describe branching of equilibrium solutions in a family of differential equations
$$\dot x = f_\mu(x), \qquad x \in \mathbb{R}^n, \ \mu \in \mathbb{R}. \tag{2.13}$$
Here, we will restrict ourselves to the rudiments of the theory for n = 2. The equilibrium solutions satisfy
$$f_\mu(x) = 0. \tag{2.14}$$
As the parameter μ is varied, the solutions x*(μ) of (2.14) are smooth functions of μ so long as the Jacobian matrix D_x f_μ(x*(μ)) is nonsingular, by the implicit function theorem. Consider, for example, the scalar equation
$$\dot x = \mu x - x^3. \tag{2.15}$$
Here D_x f_μ = μ − 3x², and the only bifurcation point is (0, 0). It is easy to verify that the equilibrium point x = 0 is stable for μ ≤ 0 and that it is unstable for μ > 0. Also, the new bifurcating equilibria at ±√μ for μ > 0 are stable. We may show all of these solution branches in the bifurcation diagram of Figure 2.18. Note that although the bifurcation diagram is intended only to be a plot of stable and unstable equilibrium points, it is possible to show the entire family of phase portraits in Figure 2.18. This kind of a bifurcation is referred to as a pitchfork bifurcation (for obvious reasons). Bifurcations of equilibria only happen at points where D_x f_μ has zero eigenvalues. However, sometimes solution branches do not bifurcate at these points. Consider, for example:
$$\dot x = \mu x - x^2. \tag{2.16}$$
At (0, 0) this system has D_x f_μ = 0. However, its solution branches do not change in number and only exchange stability, as shown in Figure 2.19. This phenomenon is referred to simply as transcritical or exchange of stability. In some sense, the canonical building block for bifurcations is the saddle-node bifurcation described
by
$$\dot x = \mu - x^2. \tag{2.17}$$
For μ > 0 this equation has two distinct equilibrium points, one stable and the other unstable. These two equilibria fuse together and disappear for μ < 0, as shown in Figure 2.20. The pitchfork of Figure 2.18 appears to be different, but it is not hard to visualize that if the system of equation (2.15) is perturbed on the right-hand side with additional terms, then the pitchfork changes into one unbifurcated branch and one fold as shown in Figure 2.21. Further, though the transcritical appears not to resemble the fold, it is not difficult to visualize that under perturbation the transcritical changes into one of the two portraits shown in Figure 2.21. Note that under perturbation we either get two folds or else a fold and one unbifurcated branch.
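The fold (2.17) is simple enough to check in a few lines of code. The sketch below (Python) enumerates the equilibria of ẋ = μ − x² and classifies them by the sign of D_x f = −2x:

```python
import math

def fold_equilibria(mu):
    # Equilibria of x' = mu - x**2, classified by the sign of D_x f = -2x
    if mu < 0:
        return []                       # no equilibria below the fold
    if mu == 0:
        return [(0.0, "degenerate")]    # the fold point itself
    r = math.sqrt(mu)
    return [(r, "stable"), (-r, "unstable")]

assert fold_equilibria(-1.0) == []
assert [s for _, s in fold_equilibria(1.0)] == ["stable", "unstable"]
```

The two equilibria that exist for μ > 0 carry opposite stability, and their indices (+1 and −1) cancel, consistent with the conservation of index noted below equation (2.17) in the planar setting.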
The preceding discussion has focused exclusively on scalar examples of bifurcations. While we will have to defer a detailed discussion of a study of the forms of bifurcations to Chapter 7, a few brief observations about bifurcations of planar dynamical systems can be made here. First, bifurcations like the fold, pitchfork, and transcritical may be embedded in planar dynamical systems. When the eigenvalues of the linearization of an equilibrium point cross the jω axis at the origin as a function of the bifurcation parameter μ, then solution branching may occur. Folds are characterized by the appearance or disappearance of a pair of equilibria, one a saddle and the other a (stable or unstable) node. They are referred to as saddle node bifurcations. Pitchforks are characterized by the appearance of either two nodes and a saddle from a node, or two saddles and a node from a saddle. (It is useful to notice the conservation of index as one passes through a point of
FIGURE 2.21. The appearance of the fold under perturbation of the pitchfork and
transcritical bifurcations
This equation has one equilibrium for μ ≤ 0, namely (x₁ = 0, x₂ = 0), and three equilibria for μ > 0, namely (x₁ = 0, x₂ = 0), (x₁ = ±√μ, x₂ = 0). This is a pitchfork bifurcation. Indeed, it is not difficult to see how equation (2.15) is embedded in it. By checking the linearization at the equilibria, one observes that the stable node for μ < 0 bifurcates into a saddle and two stable nodes for μ > 0.
One other new phenomenon that does not occur in scalar dynamical systems appears when the eigenvalues of the linearization of an equilibrium point cross from ℂ₋ to ℂ₊, and not through the origin. A canonical example of such a system is furnished by
$$\dot r = r(\mu - r^2), \qquad \dot\theta = 1. \tag{2.19}$$
Note that the eigenvalues of the linearization of (2.18) around the equilibrium at the origin are μ ± j. Thus, as μ goes through zero, the equilibrium point at the origin changes from being a stable node to being an unstable node. A simple polar change of coordinates also establishes that the equilibrium point is surrounded by a stable circular closed orbit of radius √μ for μ > 0. This is an instance of what is called a Hopf bifurcation.
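The birth of the closed orbit at radius √μ can be verified numerically. A sketch (Python; ẋ = μx − y − x(x² + y²), ẏ = x + μy − y(x² + y²) is a Cartesian form consistent with the polar equations (2.19), and μ = 0.25 is an illustrative value):

```python
import math

MU = 0.25  # any mu > 0; a stable closed orbit is predicted at radius sqrt(mu)

def f(x, y):
    # Cartesian form consistent with r' = r*(mu - r**2), theta' = 1
    r2 = x*x + y*y
    return MU*x - y - x*r2, x + MU*y - y*r2

def rk4(x, y, dt):
    k1 = f(x, y)
    k2 = f(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
    k3 = f(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
    k4 = f(x + dt*k3[0], y + dt*k3[1])
    return (x + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

for x0 in (0.05, 1.0):       # start inside and outside the predicted orbit
    x, y = x0, 0.0
    for _ in range(100000):  # integrate to t = 100 with dt = 1e-3
        x, y = rk4(x, y, 1e-3)
    assert abs(math.hypot(x, y) - math.sqrt(MU)) < 1e-6
```

Both trajectories spiral onto the circle of radius √μ while rotating at unit angular rate, as the polar form predicts.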
2.6 Bifurcation Study of Josephson Junction Equations 55
Through the inductor, the "tunneling current" is given by I₀ sin(φ), where I₀ is the maximum tunneling current. (In the quantum mechanical model of the Josephson junction the quantity φ has an interpretation in terms of the difference of phase of the wave function across the junction.) Josephson's basic result was that the voltage across the junction is given by
$$v = \frac{\hbar}{2e}\dot\phi,$$
where ℏ is Planck's constant (divided by 2π) and e is the charge of an individual electron [310]. A circuit diagram of the junction is as shown in Figure 2.22. The circuit equations are given by
$$i = I_0 \sin(\phi) + \frac{\hbar}{2eR}\dot\phi + \frac{\hbar C}{2e}\ddot\phi. \tag{2.22}$$
Here i is the forcing current, which we will assume to be constant. Following the convention in the Josephson junction literature [310], define
$$\alpha = \frac{1}{RC}, \qquad \beta = \frac{2eI_0}{\hbar C}, \qquad \gamma = \frac{2ei}{\hbar C}$$
to get the following equation:
$$\ddot\phi + \alpha\dot\phi + \beta\sin(\phi) = \gamma. \tag{2.23}$$
To express these equations in the form of a planar dynamical system, define x₁ = φ and x₂ = φ̇:
$$\dot x_1 = x_2,$$
$$\dot x_2 = -\alpha x_2 - \beta\sin(x_1) + \gamma.$$
We will draw phase portraits for a fixed junction, i.e., α, β are fixed, as the parameter γ is increased from zero in the positive direction, and keep track of the average voltage at the junction terminals, that is, the average value of (ℏ/2e)φ̇ at the terminals. The same experiment can also be repeated for negative values of γ and yields similar conclusions. Thus, we will derive the i-v characteristics of the junction intuitively.
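Before walking through the phase portraits, the two extreme regimes can be previewed numerically. The sketch below (Python) integrates ẋ₁ = x₂, ẋ₂ = −αx₂ − β sin(x₁) + γ by RK4 with the illustrative values α = β = 1 (not from the text) and compares the long-run average of x₂ for small and large forcing:

```python
import math

ALPHA, BETA = 1.0, 1.0  # illustrative junction parameters

def average_voltage(gamma, t_end=120.0, dt=1e-3):
    # Integrate x1' = x2, x2' = -alpha*x2 - beta*sin(x1) + gamma from rest;
    # return the average of x2 over the second half of the run.
    def f(x1, x2):
        return x2, -ALPHA * x2 - BETA * math.sin(x1) + gamma
    x1 = x2 = 0.0
    n = int(t_end / dt)
    acc = 0.0
    for i in range(n):
        k1 = f(x1, x2)
        k2 = f(x1 + 0.5*dt*k1[0], x2 + 0.5*dt*k1[1])
        k3 = f(x1 + 0.5*dt*k2[0], x2 + 0.5*dt*k2[1])
        k4 = f(x1 + dt*k3[0], x2 + dt*k3[1])
        x1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        x2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        if i >= n // 2:
            acc += x2 * dt
    return acc / (t_end / 2)

assert abs(average_voltage(0.2)) < 1e-3  # superconducting: zero average voltage
assert average_voltage(2.0) > 1.0        # resistive: nonzero average voltage
```

For γ between γ_c and β the answer depends on the initial condition, which is exactly the hysteresis discussed at the end of this section.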
1. Small forcing. When γ is small, the phase portrait has the form shown in Figure 2.23. Note the presence of the two kinds of equilibria with x₂ = 0 and x₁ = sin⁻¹(γ/β) and x₁ = π − sin⁻¹(γ/β). The first is a stable focus and the second a saddle (verify!). These are also repeated every 2π, because the right hand side of (2.6) is periodic with period 2π in x₁. The phase portrait shows the appearance of a "washboard" like stable manifold for the saddle; of course, only the local behavior of the stable manifold of the saddle is obtained from the eigenvector of the linearization: the exact nature of the stable manifold is determined only by painstaking simulation. Also, almost all initial conditions will converge to one of the stable foci, and the resulting steady state voltage (proportional to x₂) across the junction will be zero. This is the superconducting region of the junction, since a nonzero applied current γ produces zero average voltage across the junction. This qualitative feature persists for γ small, though as γ increases, the stable manifold of the saddle equilibrium at π − sin⁻¹(γ/β) dips closer to the stable manifold of the saddle equilibrium at −π − sin⁻¹(γ/β) in the upper half plane.
2. Saddle connection. As γ is increased, the upper part of the "washboard" shaped stable manifold of the saddle dips lower and lower toward the preceding saddle till it finally touches it at a critical value γ = γ_c. The critical value γ_c is an involved function of the parameters α, β of the junction, but γ_c(α, β) < β, so that the two kinds of equilibria of the previous region persist. At this point we have the phase portrait of Figure 2.24. Note the cap shaped regions of attraction of the stable foci and the saddle connection. The caps are in the upper half plane for positive forcing γ > 0, which we have considered first. For negative forcing, γ < 0, the caps would have been in the lower half plane. Finally, note that if this phase portrait had been drawn on the cylinder, the saddle would have been connected to itself. Trajectories starting inside the cap reach the stable equilibrium point. Trajectories starting above the saddle connection converge to the saddle connection; that is, their ω limit set is the set of trajectories joining the (infinitely many) saddle equilibrium points along with the saddle equilibria. If
the phase portrait were drawn on the cylinder, then the ω limit set would be a single saddle and a trajectory connecting the saddle to itself. This trajectory represents the confluence of one leg of the stable manifold of the saddle with a leg of the unstable manifold of the saddle: the saddle connection. This saddle connection trajectory takes infinite time to complete. Thus it resembles a closed orbit, but it has infinite period and so is not technically one. However, the average value of x₂ on this ω limit set is zero. Thus, for all initial conditions, the steady state value of the voltage across the junction is 0.
3. Coexistent limit cycle and equilibria. As the magnitude of the forcing increases beyond γ_c, the saddle connection bifurcates into the trajectory L₁. This trajectory L₁ is a limit cycle for the trajectory on a cylindrical state space, since it is periodic in the x₁ variable, as shown in Figure 2.25. Note the cap shaped domains of attraction for the stable foci. Initial conditions in the cap regions tend to the focus, where the steady state value of the voltage is zero. Other initial conditions tend to the limit cycle, which has nonzero average value, since it is in a region where x₂ is positive.
4. Saddle node bifurcation. As γ increases up to β, the saddle and the focus move closer together and finally coincide at γ = β. At this point, the saddle and node
FIGURE 2.25. Phase portrait showing coexistent limit cycle and equilibria
fuse together and pinch the cap shaped domain of attraction to a line, as shown in Figure 2.26. Almost all initial conditions now converge to the limit cycle, with its nonzero average value of x₂ and attendant nonzero voltage across the junction.
5. Limit cycle alone. For values of γ > β the system of equation (2.22) has only a limit cycle and no equilibria, as shown in Figure 2.27. All initial conditions are attracted to the limit cycle with non-zero average value of the voltage. As γ increases the average value increases. This is the resistive region of the Josephson junction.
Using this succession of phase portraits, one can estimate the form of the i-v characteristic of the junction. It is as shown in Figure 2.28. In an experiment in which the current is gradually or quasi-statically increased from zero, i.e., the current is increased and the system allowed to settle down to its equilibrium value, the steady state voltage stays zero for low values of current forcing i. Since i is a scaled version of γ, we will conduct the discussion in terms of γ. After γ exceeds γ_c, a new steady state solution with a nonzero voltage appears, but small initial conditions still converge to the zero voltage solution until γ becomes equal to β. For γ > β the voltage is nonzero and increases with γ. We now consider the consequences of quasi-statically reducing γ. When γ is decreased, the voltage decreases, since the average value of the voltage is proportional to the size of the limit cycle. Even when γ < β, the initial conditions will converge to the limit cycle rather than the equilibrium points. Finally, when γ < γ_c, the limit cycle disappears in the saddle connection and all trajectories converge to the equilibrium points with attendant zero average voltage. Thus, there is hysteresis in the i-v characteristic of the Josephson junction. This hysteresis can be used to make the junction a binary logic storage device: The ON state corresponds to high values of γ, and the OFF state corresponds to low γ.
$$\dot x_1 = x_2, \qquad 0 = x_2 - x_2^3 - x_1. \tag{2.24}$$
The model of equation (2.24) is not a differential equation in the plane; it is a combination of a differential equation and an algebraic equation. The algebraic equation constrains the x₁, x₂ values to lie on the curve shown in Figure 2.30. In the region where x₂ ≥ 0, the variable x₁ is constrained to increase, as shown in the figure. From an examination of the figure, it follows that there are difficulties with drawing the phase portrait at x₂ = ±1/√3. At these points, it does not appear to be
2.7 The Degenerate van der Pol Equation
FIGURE 2.29. Nonlinear RC circuit model for the degenerate van der Pol oscillator
$$\dot x_1 = x_2, \qquad \epsilon\dot x_2 = x_2 - x_2^3 - x_1. \tag{2.25}$$
Here ε > 0 is a small parameter modeling the small parasitic inductance. This system is well defined everywhere in the plane. Qualitatively, it is important to note that, when ε is small, then outside a small region where x₂ − x₂³ − x₁ is of the order of ε, ẋ₂ is much larger than ẋ₁, making the trajectories almost vertical. The phase portrait of this system for ε = 0.1 is shown in Figure 2.31, with an unstable equilibrium point at the origin surrounded by a limit cycle. The limit cycle is exceptional in that it has two slow segments close to the curve x₁ = x₂ − x₂³ and two fast, almost vertical segments close to x₁ = ±2/(3√3).
To return to the original system of (2.24), we define its trajectories to be the limit trajectories of (2.25) as ε ↓ 0. These trajectories are as shown in Figure 2.31. Several points about this portrait are worth noting:
FIGURE 2.31. Showing the trajectories of the RC circuit obtained from the limit trajectories of the augmented RLC circuit
1. The sharp transitions of a hysteresis are modeled as arising from taking the limit as the dynamics of a subset of the state variables (referred to as parasitic or fast state variables, x₂ in the example) become infinitely fast.
2. The lack of symmetry of the transitions, i.e., hysteresis, for increasing and decreasing values of the slow state variable (x₁) arises from bifurcations associated with the dynamics of the fast state variables (x₂), with the slow state variable treated as the bifurcation parameter.
A simple variant of this model may be used to explain the dynamics of a clock circuit (astable multivibrator), where the oscillation is considered to jump between the 0 and 1 states. It is also used as a model for the heartbeat and nerve impulse [337]. Generalizations of this example to ℝⁿ provide models for jump behavior in many classes of circuits and systems (see, for example, [263]).
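The relaxation oscillation of (2.25) is easy to reproduce numerically. A sketch (Python; it assumes the sign convention εẋ₂ = x₂ − x₂³ − x₁ used above, with ε = 0.1 and an arbitrary initial condition):

```python
EPS = 0.1

def f(x1, x2):
    # x1' = x2,  eps * x2' = x2 - x2**3 - x1
    return x2, (x2 - x2**3 - x1) / EPS

x1, x2 = 0.5, 0.5
dt = 2e-4
hi = lo = 0.0
n = 150000  # integrate to t = 30 by RK4
for i in range(n):
    k1 = f(x1, x2)
    k2 = f(x1 + 0.5*dt*k1[0], x2 + 0.5*dt*k1[1])
    k3 = f(x1 + 0.5*dt*k2[0], x2 + 0.5*dt*k2[1])
    k4 = f(x1 + dt*k3[0], x2 + dt*k3[1])
    x1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    x2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    if i > n // 2:  # after transients, record the swing of the fast variable
        hi, lo = max(hi, x2), min(lo, x2)

# the fast variable jumps between the two outer branches of x1 = x2 - x2**3
assert hi > 1.0 and lo < -1.0
```

The recorded swing of x₂ past ±1 shows the trajectory alternating between the two attracting outer branches of the slow curve, with the fast jumps occurring near the folds at x₂ = ±1/√3.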
2.8 Planar Discrete-Time Systems 63
Thus an equilibrium point of (2.26) is a fixed point of the map G. It is not surprising, therefore, that the study of planar dynamical systems is closely associated with the study of maps (especially invertible ones). Just as in the continuous time case, the linearization of the dynamical system at an equilibrium point provides clues to the qualitative behavior in a vicinity of the point. Thus, if z = x − x̄, then the linearization of (2.26) is the system
$$z_{n+1} = DG(\bar x)\, z_n. \tag{2.27}$$
⁹Gⁿ, the n-th iterate of the map G, should not be confused with the n-th power of G.
However, one can mimic the discussion for continuous time linear systems to get the stable and unstable invariant subspaces (corresponding to eigenvalues in the interior or exterior of the unit disc). A version of the Hartman-Grobman theorem also applies for nonlinear systems with hyperbolic equilibria.
Theorem 2.18 Hartman-Grobman for Maps. Consider the discrete time planar dynamical system of equation (2.26). Assume that x̄ is a fixed point of the system, and let its linearization be given by (2.27). If the linearization is hyperbolic, that is to say that it has no eigenvalues on the boundary of the unit disk, then there exists a homeomorphism
$$h: B(\bar x, \delta) \to \mathbb{R}^2$$
taking trajectories of (2.26) onto those of (2.27). Further, the homeomorphism can be chosen to preserve the parametrization by time.
Exactly the same comments as in the case of the differential equation apply. Thus, if the linearization is A := DG(x̄), the homeomorphism maps x̄ into the origin, and for Gⁿ(x) ∈ B(x̄, δ) it follows that
$$h(G^n(x)) = A^n h(x).$$
Also, when the linearization is not hyperbolic, the linearization does not accurately predict the behavior of the nonlinear system.
As in the continuous time case, the eigenvectors of the linearization correspond to tangents of the "nonlinear eigenspaces." Mimicking the definitions of the continuous time case, we define:
$$W^s_{loc}(\bar x) = \{x \in U : G^n(x) \to \bar x \text{ as } n \to \infty, \ G^n(x) \in U \ \forall n \geq 0\},$$
$$W^u_{loc}(\bar x) = \{x \in U : G^{-n}(x) \to \bar x \text{ as } n \to \infty, \ G^{-n}(x) \in U \ \forall n \geq 0\}.$$
Here U is a neighborhood of x̄. The local inset W^s_{loc} and the local outset W^u_{loc} are respectively the local stable, invariant manifold and the local unstable, invariant manifold, and are tangent to the eigenspaces of the linearization A = DG(x̄), provided none of the eigenvalues lie on the boundary of the unit disk, i.e., the equilibrium point is hyperbolic.
Example 2.19 Delayed Logistic Map. Consider the logistic map (one-hump map) of Chapter 1 with a one step delay as an example of a planar dynamical system:
$$x_{n+1} = \lambda x_n(1 - y_n), \qquad y_{n+1} = x_n, \tag{2.28}$$
with λ > 0 corresponding to the delayed logistic equation; compare this with (1.3):
This map has two fixed points, one at the origin and the other at (1 − 1/λ, 1 − 1/λ). The linearization of the map at the origin is
$$\begin{pmatrix} \lambda & 0 \\ 1 & 0 \end{pmatrix},$$
with eigenvalues 0 and λ. Thus, the origin is asymptotically stable if λ < 1 and unstable if λ > 1. The linearization at the other equilibrium is
$$\begin{pmatrix} 1 & 1 - \lambda \\ 1 & 0 \end{pmatrix},$$
with eigenvalues ½(1 ± √(5 − 4λ)). This equilibrium is a saddle for 0 < λ < 1 and is a stable node for 1 < λ < 2. The phase portrait of this system at λ close to 2 is particularly pretty: Generate it for yourself!
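The stability claims are easy to test by iteration. A sketch (Python; the planar form x_{n+1} = λxₙ(1 − yₙ), y_{n+1} = xₙ is the assumed reading of (2.28), and λ = 1.5 is an illustrative value in the stable window):

```python
LAM = 1.5  # in the window 1 < lambda < 2 where the nonzero fixed point is stable

def G(x, y):
    # delayed logistic map as a planar system
    return LAM * x * (1 - y), x

x, y = 0.1, 0.1
for _ in range(500):
    x, y = G(x, y)

fp = 1 - 1 / LAM  # the fixed point (1 - 1/lambda, 1 - 1/lambda)
assert abs(x - fp) < 1e-10 and abs(y - fp) < 1e-10
```

At λ = 1.5 the eigenvalues ½(1 ± √(5 − 4λ)) are complex with modulus less than 1, and the iteration spirals into the fixed point, as the assertions confirm.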
Note that each point xᵢ of a period N orbit is a fixed point of the map G^N(x). Thus, if one considers the new map F(x) = G^N(x), its fixed points are points of period N or submultiples thereof (thus, if N = 6, the fixed points of F are points of period 1, 2, 3, and 6 of G). The big simplification that one encounters in discrete time systems is that the stability of period N points is determined by linearizing F about xᵢ. It should be of immediate concern that the stability of the period N points may be different. Indeed, even the linearization of F is different at the points xᵢ, i = 1, ..., N. In Problem 2.20 you will show that though the Jacobians are different, they have the same eigenvalues. In fact, more subtly, one can establish that the stability of the xᵢ is the same even when they are not hyperbolic! Since period N points are discrete orbits of the system (2.26), there is no hope of finding discrete counterparts to the Bendixson and Poincaré-Bendixson theorems.
Also, it is worth noting that it is not automatic that even linear maps with eigenvalues on the boundary of the unit disk have period N points. Consider the linear map
$$z_{n+1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} z_n \tag{2.30}$$
with eigenvalues cos θ ± j sin θ on the unit circle. Verify that the system has period N points if and only if
$$\frac{\theta}{2\pi} = \frac{p}{q}, \qquad p, q \in \mathbb{Z},$$
is a rational number. In this case, all initial conditions are points of period q (provided p, q are relatively prime). Further, every set of period q points lies on a circle of constant radius. When this number is not rational, points wind around on the edge of each disk, densely covering it!
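The periodicity condition can be checked directly. A sketch (Python) iterates the rotation (2.30) with θ/2π = 2/5 and confirms that a point returns after q = 5 steps and no fewer:

```python
import math

THETA = 2 * math.pi * 2 / 5  # theta / 2*pi = p/q = 2/5, p and q relatively prime

def rotate(z):
    # one step of the linear map (2.30): rotation by THETA
    c, s = math.cos(THETA), math.sin(THETA)
    return (c * z[0] + s * z[1], -s * z[0] + c * z[1])

z0 = (1.0, 0.25)
z = z0
for k in range(1, 6):
    z = rotate(z)
    moved = abs(z[0] - z0[0]) + abs(z[1] - z0[1])
    if k < 5:
        assert moved > 1e-6  # no period smaller than q = 5
assert moved < 1e-12         # back to the start after q = 5 steps
```

Replacing THETA by an irrational multiple of 2π makes the `moved < 1e-12` test fail for every k, which is the dense winding described above.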
period 2 point and one unstable fixed point emerging from the stable fixed point. The period doubling continues at μ = 3.449, 3.570, .... Period-doubling is best studied by considering the fixed points of f(x, μ), f²(x, μ), .... The calculations of these fixed points can be quite involved, and so we consider, as a (simpler) prototypical example, the map x ↦ f(x, μ) = −x − μx + x³. Easy calculations show that for this system:
1. The fixed point x = 0 is unstable for μ ≤ −2, μ > 0 and stable for −2 < μ ≤ 0.
2. x = ±√(2 + μ) are two fixed points defined for μ ≥ −2 which are unstable.
The map has a pitchfork bifurcation at μ = −2, when two unstable fixed points and a stable fixed point emerge from an unstable fixed point. However, at μ = 0, the stable fixed point becomes unstable, so that for μ ≥ 0 the map has exactly three fixed points, all of which are unstable. To explore the map further, consider the second iterate of f, namely the map
[Figure: bifurcation diagram of the map, showing the fixed point branches μ = x² − 2 and the period 2 branches]
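The two items above, and the period 2 orbit born at μ = 0, can be verified in a few lines (Python; μ = 0.5 is an illustrative value):

```python
import math

MU = 0.5

def f(x):
    # the prototype map f(x, mu) = -x - mu*x + x**3
    return -x - MU * x + x**3

# fixed points: x = 0 and x = +-sqrt(2 + mu), i.e., the branch mu = x**2 - 2
for x in (0.0, math.sqrt(2 + MU), -math.sqrt(2 + MU)):
    assert abs(f(x) - x) < 1e-12

# {+sqrt(mu), -sqrt(mu)} is a period 2 orbit: f swaps the two points
p = math.sqrt(MU)
assert abs(f(p) + p) < 1e-12 and abs(f(f(p)) - p) < 1e-12
```

The period 2 orbit here satisfies f(x) = −x, so its points solve x² = μ; they are fixed points of f² but not of f.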
the circle given by
Example 2.21 Saddle Node in the Hénon Map. A celebrated planar map is the Hénon map, given by
$$x_{n+1} = 1 + y_n - ax_n^2, \qquad y_{n+1} = bx_n. \tag{2.32}$$
Thus, it follows that when a < −(1 − b)²/4, there are no fixed points. There are two fixed points, one stable and the other unstable, when a > −(1 − b)²/4, and there is a saddle node at a = −(1 − b)²/4.
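The fixed point count follows from the quadratic ax² + (1 − b)x − 1 = 0 obtained by setting (x_{n+1}, y_{n+1}) = (xₙ, yₙ) in (2.32). A quick check (Python; a = 1.4, b = 0.3 are the classical Hénon values):

```python
import math

A, B = 1.4, 0.3  # classical Henon parameter values; here a > -(1 - b)**2 / 4

def henon(x, y):
    return 1 + y - A * x**2, B * x

disc = (1 - B)**2 + 4 * A  # discriminant of a*x**2 + (1 - b)*x - 1 = 0
assert disc > 0            # two fixed points in this parameter range
for x in ((-(1 - B) + math.sqrt(disc)) / (2 * A),
          (-(1 - B) - math.sqrt(disc)) / (2 * A)):
    fx, fy = henon(x, B * x)  # fixed points have y = b*x
    assert abs(fx - x) < 1e-12 and abs(fy - B * x) < 1e-12
```

The discriminant (1 − b)² + 4a changes sign exactly at a = −(1 − b)²/4, which is where the two fixed points fuse in the saddle node.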
2.9 Summary
In this chapter, we described some important techniques for the analysis of planar continuous time and discrete time dynamical systems. Some of the techniques and theorems generalize readily to higher dimensions, for example, the Hartman-Grobman theorem, bifurcations (see Chapter 7), and index theory (see degree theory in Chapter 3). However, some of the main results of this chapter, such as the Bendixson theorem and the Poincaré-Bendixson theorem, are unique to two dimensional continuous time systems, since they rely on the Jordan curve theorem. For further reading on the material in this chapter, see Wiggins [329], Hale and Kocak [126], Abraham and Shaw [3], Guckenheimer and Holmes [122], and Hirsch and Smale [141].
2.10 Exercises
Problem 2.1. Consider scalar differential equations of the form

ẋ = kxⁿ.

For all integral values of n > 0 they have an isolated equilibrium at the origin.
Also, except for n = 1, their linearization is inconclusive. What is the qualitative
behavior of trajectories for different n ∈ ℤ and k ∈ ℝ?
Use this to conjecture the behavior of trajectories of planar dynamical systems
when the linearization has a single, simple zero eigenvalue. More precisely, think
through what would happen in the cases that the other eigenvalue is positive or
negative.
Problem 2.2 Weak attractors and repellors. Consider the system of the
previous problem

ẋ = kxⁿ

for n > 1. The equation can be integrated explicitly. Use this to determine the rate
of convergence to and divergence from the origin for these systems. Reconcile this
result with the title of this problem.
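As a numerical illustration of what the explicit integration gives (a sketch under our own naming, for k < 0, odd n > 1, and x₀ > 0): separation of variables yields only algebraic decay, which is the sense in which the origin is a "weak" attractor.

```python
import numpy as np

def weak_attractor(t, x0, k, n):
    """Closed-form solution of x' = k*x**n for odd n > 1, k < 0, x0 > 0:
    separation of variables gives x(t) = x0*(1 + (n-1)*|k|*x0**(n-1)*t)**(-1/(n-1)),
    i.e., algebraic t**(-1/(n-1)) decay rather than exponential decay."""
    return x0 * (1 + (n - 1) * abs(k) * x0**(n - 1) * t) ** (-1.0 / (n - 1))

# forward-Euler spot check of the closed form for n = 3, k = -1
k, n, x0, dt = -1.0, 3, 1.0, 1e-4
x, t = x0, 0.0
for _ in range(int(10 / dt)):
    x += dt * k * x**n
    t += dt
print(x, weak_attractor(10.0, x0, k, n))   # both near (1 + 20)**(-1/2)
```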
Problem 2.3 Slight generalization of Bendixson's theorem due to Dulac. Let
q : ℝ² → ℝ be a given smooth function. For a planar dynamical system ẋ = f(x), show that
if div(qf) does not change sign or vanish on a region D, there are no closed orbits in that
region. Apply Dulac's theorem to the system

ẋ₁ = −x₂ + x₁² − x₁x₂,
ẋ₂ = x₁ + x₁x₂,

with a choice of the Dulac function
Actually, div(qf) does change sign at x₂ = −1 and x₁ = 1, but find appropriate
invariant sets to rule out periodic orbits that cross from one region where
div(qf) has one sign into another region where it has a different sign.
Problem 2.4. Consider the system of equation (3) with its equilibrium point of
index 2. How would you construct equilibrium points of index +n and -n?
Problem 2.5. Investigate the bifurcations of the equilibria for the scalar
differential equations:

f_μ(x) = μ − x²,
f_μ(x) = μ²x − x³,
f_μ(x) = μ²x + x³,
f_μ(x) = μ²ax + 2μx³ − x⁵.

For the last example, use different values of a as well as of μ.
Problem 2.6 Modification of Duffing's equation [329]. Consider the modified
Duffing equation

ẋ₁ = x₂,
ẋ₂ = x₁ − x₁³ − δx₂ + x₁²x₂.

Find its equilibria. Linearize about the equilibria. Apply Bendixson's theorem to
rule out regions of closed orbits. Use all this information to conjecture plausible
phase portraits of the equation.
Problem 2.7 Examples from Wiggins [329]. Use all the techniques you have
seen in this chapter to conjecture phase portraits for the following systems:
1.
ẋ₁ = −x₁ + x₁³,
ẋ₂ = x₁ + x₂.
2.
3.
ẋ₁ = −δx₁ + μx₂ + x₁x₂,
ẋ₂ = −μx₁ − δx₂ + x₁²/2 − x₁³/2.
Assume that δ, μ > 0.
[Figure 2.33: phase portraits (a), (b), (c).]
Problem 2.8 Plausible phase portraits. Use all the techniques that you have
learned in this chapter to make the phase portraits of Figure 2.33 plausible. You are
allowed only to change the directions of some of the trajectories, or to add trajectories,
including closed orbits.
Problem 2.9 First integrals. One way of drawing phase portraits for planar
dynamical systems is to find a first integral of the system. A first integral of (2.6)
is a function H : ℝ² → ℝ, conserved along the flow, that is,

Ḣ(x₁, x₂) = (∂H/∂x₁) f₁(x₁, x₂) + (∂H/∂x₂) f₂(x₁, x₂) = 0.

Hence H is constant along the trajectories of the differential equation. How do first
integrals help draw trajectories of the system? Find first integrals of the following
systems:
1. Volterra–Lotka model:

ẋ₁ = ax₁ − bx₁x₂,
ẋ₂ = cx₁x₂ − dx₂.

2.
ẋ₁ = x₂,
ẋ₂ = μx₁ − x₁³.
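For the Volterra–Lotka model, the classical first integral H = cx₁ − d log x₁ + bx₂ − a log x₂ can be checked numerically; the sketch below (our own code, not from the text) integrates the system with a fourth-order Runge–Kutta step and watches the drift in H:

```python
import numpy as np

def H(x1, x2, a, b, c, d):
    """A first integral of the Volterra-Lotka system:
    H = c*x1 - d*log(x1) + b*x2 - a*log(x2); dH/dt = 0 along trajectories."""
    return c * x1 - d * np.log(x1) + b * x2 - a * np.log(x2)

a, b, c, d = 1.0, 1.0, 1.0, 1.0
f = lambda z: np.array([a * z[0] - b * z[0] * z[1], c * z[0] * z[1] - d * z[1]])
z, dt = np.array([2.0, 1.0]), 1e-3
H0 = H(z[0], z[1], a, b, c, d)
for _ in range(5000):                  # classical RK4 steps
    k1 = f(z); k2 = f(z + dt/2*k1); k3 = f(z + dt/2*k2); k4 = f(z + dt*k3)
    z += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
print(abs(H(z[0], z[1], a, b, c, d) - H0))   # tiny: H is numerically conserved
```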
Problem 2.10 Method of Isoclines. A simple but useful technique for the approximation
of solution curves in the phase plane is provided by the method of
isoclines. Given the nonlinear system

ẋ₁ = f₁(x₁, x₂),    (2.33)
ẋ₂ = f₂(x₁, x₂),    (2.34)

we can write (assume for the moment that f₂(x₁, x₂) ≠ 0)

dx₁/dx₂ = f₁(x₁, x₂)/f₂(x₁, x₂).    (2.35)

We seek curves x₂ = h(x₁) on which the slope dx₂/dx₁ = c is constant. Such
curves, called isoclines, are given by solving the equation

f₂(x₁, x₂) = c f₁(x₁, x₂).    (2.36)
Now, consider the following system:

ẋ₁ = x₁² − x₁x₂,
ẋ₂ = −x₂² + x₁.
• Find the equilibria of this system and show that the linearization of the system
around the equilibria is inconclusive in terms of determining the stability type.
• Show that the x₂-axis is invariant and that the slope dx₂/dx₁ is infinite (vertical)
on this line; find other lines in the plane on which the slope dx₂/dx₁ is infinite.
• Now seek isoclines on which dx₂/dx₁ = c for finite c (try c = 0, 0.5, 1, 2).
• Sketch these curves, and the associated slopes dx₂/dx₁ on top of these curves,
in the (x₁, x₂) plane.
• Conjecture the phase portrait from this information, and verify by simulation.
This problem was formulated by C. Tomlin; see also [122; 317].
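The isocline computation can be illustrated on an even simpler system than the one in the problem (ẋ₁ = x₂, ẋ₂ = −x₁, chosen here only for illustration), where the isoclines come out as straight lines through the origin:

```python
# Method of isoclines on an illustrative system (not the exercise's system):
# x1' = x2, x2' = -x1, whose orbits are circles.  The slope field is
# dx2/dx1 = f2/f1 = -x1/x2, so the isocline for slope c is the line x1 = -c*x2.
def slope(x1, x2):
    return -x1 / x2

for c in (0.0, 0.5, 1.0, 2.0):
    x2 = 1.0                  # pick a point on the candidate isocline x1 = -c*x2
    x1 = -c * x2
    print(c, slope(x1, x2))   # the slope there equals c, confirming the isocline
```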
Problem 2.11. Try using the Hartman–Grobman theorem and some numerical experimentation
to determine the insets of the three equilibria of the Duffing equation
with damping:

ẋ₁ = x₂,
ẋ₂ = −kx₂ + x₁ − x₁³.
contains a closed orbit. (Hint: consider the rate of change of V(x₁, x₂) = x₁² + x₂²
on the boundary of the annulus.)
Problem 2.13. Use the techniques of this chapter to draw the phase portrait of the
system

ẋ₁ = (2 − x₁ − 2x₂)x₁,
ẋ₂ = (2 − 2x₁ − x₂)x₂

in the positive quadrant. Find lines in the x₁–x₂ plane which are invariant, locate
equilibria, and go forward from there.
Problem 2.14 Recurrent points. A point x* ∈ ℝ² is said to be recurrent for
the flow φₜ(x) if x* belongs either to the α-limit set or to the ω-limit set of the
trajectory φₜ(x), for some x. Show that for systems in ℝ² all recurrent points are
either equilibrium points or belong to a closed orbit.
Problem 2.15 Minimal sets. A set M ⊂ ℝ² is said to be minimal for the flow
φₜ(x) if it is a non-empty, closed, invariant set with the property that no non-empty,
closed, proper subset of it is invariant. Show that if a minimal set is unbounded, then it
consists of a single trajectory that has empty α- and ω-limit sets.
Problem 2.16 Hyperbolic linear maps. Consider the following hyperbolic
linear maps on ℝ²:
1. A = diag(λ, μ).
Determine some sample trajectories (they will all be discrete sequences) for
|λ| < 1, |μ| > 1. Distinguish between the cases when λ, μ > 0, and λ, μ < 0.
Repeat the problem for |λ|, |μ| < 1, and assorted choices for the signs of λ, μ.
2.
Distinguish between the cases λ² + ω² < 1 and λ² + ω² > 1, and λ + jω = e^{jα}.

G = ( λ 1 ; 0 λ )

with |λ| < 1. Draw some sample trajectories.
2.
Problem 2.18. Give examples of nonlinear maps from ℝ² → ℝ² where the
linearization does not accurately predict the behavior of the nonlinear system.
Problem 2.19. Identify nodes, saddles, and foci for linear discrete time dynamical
systems with λ₁, λ₂ both real, and then consider the case of a Jordan form with one real
eigenvalue. Finally, consider the case that A has two complex eigenvalues α ± jβ,
so that it can be converted into its real Jordan form.
Pay special attention to the case where one or both of the eigenvalues is negative,
or α < 0.
Problem 2.21. Study the bifurcations of the following planar maps as a function
of the parameter λ near λ = 0:
1.
(x, y) ↦ (λ + x + λy + x², 0.5y + λx + x²).
2.
(x, y) ↦ (λ − x + xy + y², 2x − xy − y²).
3.
(x, y) ↦ (λ + x + x² − y², λ + x + y³).
3
Mathematical Background
In this chapter we will review very briefly some definitions from algebra and
analysis that we will periodically use in the book. We will spend a little more
time on existence and uniqueness theorems for differential equations, fixed point
theorems, and some introductory concepts from degree theory. We end the chapter
with an introduction to differential topology, especially the basics of the definitions
and properties of manifolds. More details about differential geometry may be found
in Chapter 8.
Definition 3.4 Field. A field K is a set with two binary operations, addition
(+) and multiplication (·), such that:

for {x₁, x₂, y₁, y₂} in ℝ, is not a field. Why not? If it were, we would have

(1, 0) · (0, 1) = (0, 0),
(1, 0)⁻¹ · (1, 0) · (0, 1) = (1, 0)⁻¹ · (0, 0),
(0, 1) = (0, 0).
This is clearly a contradiction. ℝ² can be made into a field if we define (·) as
(x₁, x₂) · (y₁, y₂) = (x₁y₁ − x₂y₂, x₁y₂ + x₂y₁). We denote this field as ℂ, the
set of complex numbers, with the understanding that (x₁, x₂) = x₁ + ix₂.
3. The quaternions ℍ comprise the set of 4-tuples (x₁, x₂, x₃, x₄) = x₁ + ix₂ +
jx₃ + kx₄, with addition defined in the usual way, and multiplication defined
according to the following table:

(·) | 1   i   j   k
 1  | 1   i   j   k
 i  | i  −1   k  −j
 j  | j  −k  −1   i
 k  | k   j  −i  −1

Note that K − {0} in the definition of the field is here only a group, which is not
abelian under multiplication. The skew field of quaternions is denoted ℍ, for
Hamiltonian field.
In the rest of this section, we will be defining similar constructions for each of
the fields ℝ, ℂ, and ℍ. For ease of notation, we denote a generic one of these fields by
𝕂 ∈ {ℝ, ℂ, ℍ}. We write 𝕂ⁿ for the set of all n-tuples whose elements are in 𝕂.
commutative group under the addition operation (+), with a scalar multiplication
operation defined from (V, K) → V and distributing over the addition.
Examples of vector spaces are: ℝⁿ, n-tuples of reals over the field ℝ; ℂⁿ, n-tuples
of complex numbers over the complex field ℂ; the set of real-valued functions on
an interval [a, b] over the field ℝ; and the set of maps from a set M ⊂ ℝᵐ to ℝⁿ (over
ℝ). More generally, 𝕂ⁿ is a vector space over the field 𝕂 for each of the fields
defined in the previous section. If ψ : 𝕂ⁿ → 𝕂ⁿ is a linear map, then
ψ has a matrix representation in Mₙ(𝕂) ⊂ 𝕂^{n×n}. We note that 𝕂ⁿ and Mₙ(𝕂) are both
vector spaces over 𝕂.
Definition 3.6 Algebra. An algebra is a vector space with a multiplication
operation that distributes over addition.
Mₙ(𝕂) is an algebra with multiplication defined as the usual multiplication of
matrices: for A, B, C ∈ Mₙ(𝕂),

A(B + C) = AB + AC,
(B + C)A = BA + CA.

Definition 3.7 Unit. If A is an algebra, x ∈ A is a unit if there exists y ∈ A
such that xy = yx = 1.
If A is an algebra with an associative multiplication operation, and U ⊂ A is the
set of units in A, then U is a group with respect to this multiplication operation.
More details on algebras are given in Chapter 12.
In order to define distances in a vector space, one introduces the concept of a
normed vector space:
Definition 3.8 Normed Vector Space. A vector space V over the field of reals
is said to be a normed vector space if it can be endowed with a norm, denoted by
|·|, a function from V → ℝ, satisfying:
1. |x| ≥ 0 ∀x ∈ V, and |x| = 0 ⇔ x = 0.
2. |αx| = |α||x| ∀α ∈ ℝ.
3. |x + y| ≤ |x| + |y|.
Here are some examples of norms on ℝⁿ, a vector space over the reals (ℝ):
1. |x|_∞ = max(|xᵢ|, i = 1, ..., n).
2. |x|₁ = Σᵢ₌₁ⁿ |xᵢ|.
3. |x|_p = (Σᵢ₌₁ⁿ |xᵢ|^p)^{1/p} for 1 ≤ p < ∞. This norm with p = 2 is referred to
as the Euclidean norm.
Definition 3.9 Independence and Basis Sets. A set of vectors (elements)
v₁, ..., vₘ ∈ V, a vector space, is said to be independent if there exists no set of
scalars αᵢ, not all zero, in the associated field F such that

Σᵢ₌₁ᵐ αᵢvᵢ = 0.
It follows from the requirement that every v be represented uniquely that a basis set
is an independent set of vectors. A vector space is said to be finite dimensional if
it has a finite basis set. It is a basic fact that all basis sets of a vector space have
the same cardinality. Thus the dimension of the vector space is the cardinality of a
(any) basis set. It may be shown that all finite dimensional real vector spaces are
isomorphic to ℝⁿ. (Isomorphic to ℝⁿ means that one can find a linear map from
ℝⁿ to the space which is one-to-one and onto.) Thus, ℝⁿ will be our prototype finite
dimensional vector space, and when we state a property of ℝⁿ, we will mean that
it holds for all finite dimensional vector spaces. It will frequently be of interest
to study infinite dimensional vector spaces, i.e., those with no finite basis set.
Consider the space of all continuous functions from [0, 1] to ]−∞, ∞[. We will
denote elements in this space by f(·) to emphasize the fact that they are functions.
The space is a real vector space denoted by C[0, 1] and may be endowed with a
norm.
Similarly, the space of continuous maps from [0, 1] to ℝⁿ, denoted Cⁿ[0, 1], may
be endowed with norms by mixing norms on ℝⁿ and the function space norms just
presented. For example, we may have:
¹If the basis set is not countable, then the summation in the definition needs to be
specified to be a countable sum. In all of the examples that we will consider, the basis sets
are at most countable.
Proposition 3.10 Equivalence of Norms. All norms are equivalent on ℝⁿ; i.e.,
if |·|_α and |·|_β are two norms on ℝⁿ, then there exist k₁, k₂ such that for all x ∈ ℝⁿ,

k₁|x|_α ≤ |x|_β ≤ k₂|x|_α,

and there exist k₃, k₄ such that

k₃|x|_β ≤ |x|_α ≤ k₄|x|_β.
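For the particular pair |·|_∞ and |·|₂ the constants can be made explicit: |x|_∞ ≤ |x|₂ ≤ √n |x|_∞ for every x ∈ ℝⁿ, so k₁ = 1 and k₂ = √n work. A quick numerical spot check (a sketch, not part of the text):

```python
import numpy as np

# Equivalence of norms on R^n: |x|_inf <= |x|_2 <= sqrt(n)*|x|_inf for all x.
rng = np.random.default_rng(0)
n = 5
for _ in range(1000):
    x = rng.standard_normal(n)
    inf_norm, two_norm = np.abs(x).max(), np.linalg.norm(x)
    assert inf_norm <= two_norm <= np.sqrt(n) * inf_norm
print("bounds hold on all samples")
```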
The preceding proposition allows us to be somewhat careless in ℝⁿ about the
choice of norms, especially in checking continuity, convergence of sequences, and
other such qualitative properties of functions and sequences. To illustrate this point,
consider the following definition:
Definition 3.11 Convergence of a Sequence. A sequence (xₙ, n = 1, 2, 3, ...)
in a normed linear space is said to converge to an element x₀ ∈ X if for every
ε > 0 there exists N(ε) such that
norm if all Cauchy sequences converge to limit points in the space. The virtue of
the definition of Cauchy sequences is that it enables us to check the convergence
of sequences without knowing a priori their limits. Note that sequences that are
Cauchy under one norm need not be Cauchy under another norm, unless the two
norms are equivalent in the sense of the proposition above.
The space of continuous functions Cⁿ[0, 1] is not a Banach space under any of
the norms defined above, namely

|f(·)|_p := (∫₀¹ |f(t)|^p dt)^{1/p}.

The completion of this space under this norm is referred to as L_pⁿ[0, 1] for p =
1, 2, .... We are sloppy about what norm is used at a fixed value of t, namely on
f(t), since all norms are equivalent on ℝⁿ.
A complete inner product space is called a Hilbert space. Some examples of inner
product spaces are:
1. On ℝⁿ we may define the inner product of two vectors x, y as

⟨x, y⟩ = Σᵢ₌₁ⁿ xᵢyᵢ.

2. On Cⁿ[0, 1] one may define the inner product of two elements f(·), g(·) as
|·|_X, |·|_Y, then these two norms induce a norm on the vector space of linear maps
from X to Y, denoted by L(X, Y).
Definition 3.14 Induced Norms. Let X, Y be normed linear spaces with norms
|·|_X, |·|_Y respectively. Then the space of linear maps from X to Y, L(X, Y), has a
norm induced by the norms on X, Y as follows. Let A ∈ L(X, Y). Then

|A|ᵢ = sup{ |Ax|_Y / |x|_X : 0 ≠ x ∈ X }
     = sup{ |Ax|_Y : |x|_X = 1 }.
Since m × n matrices are representations of linear operators from ℝⁿ to ℝᵐ, norms
on ℝⁿ and ℝᵐ induce matrix norms on ℝ^{m×n}. Consider the following examples of
matrix norms on ℝ^{m×n}. For the purpose of these examples we will use the same
norms on ℝᵐ and ℝⁿ.

Norm on ℝⁿ, ℝᵐ          Induced norm on ℝ^{m×n}
|x|_∞ = maxᵢ |xᵢ|       |A|_{i∞} = maxᵢ Σⱼ |aᵢⱼ|
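The row of the table above, the maximum absolute row sum, can be verified numerically: the supremum in Definition 3.14 is attained at the sign pattern of the worst row. A sketch in Python (variable names are ours):

```python
import numpy as np

# The norm induced on R^{m x n} by |.|_inf is the maximum absolute row sum:
# |A|_{i,inf} = max_i sum_j |a_ij|.  The sup over |x|_inf = 1 is attained at
# x = sign of the row with the largest absolute sum.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
row_sum_norm = np.abs(A).sum(axis=1).max()

i = np.abs(A).sum(axis=1).argmax()
x = np.sign(A[i])                          # |x|_inf = 1
print(row_sum_norm, np.abs(A @ x).max())   # the two values agree
```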
As for the uniqueness of x*, we assume for the sake of contradiction that x** is
another fixed point. Then we have

|x* − x**| = |Tx* − Tx**| ≤ ρ|x* − x**|.

Since ρ < 1, this establishes that x* = x**. □
Remark: It is important to note that it is not possible to weaken the hypothesis
of the theorem to

|Tx − Ty| < |x − y|  ∀x, y ∈ X.    (3.6)

Indeed, consider the function from ℝ → ℝ given by T(x) = x + π/2 − tan⁻¹x.
By noting that its derivative is given by x²/(1 + x²), which is less than 1 for all x, it may
be verified that T satisfies the condition of (3.6). However, there is no finite value
of x at which T(x) = x. The hypothesis on T in the preceding theorem is referred
to as a global contraction condition. The condition is called global since it holds
uniformly in the whole space. Under suitable conditions, this hypothesis may be
weakened to give the following local version of the theorem:
Theorem 3.18 Local Contraction Mapping Theorem. Consider a subset M
of a Banach space (X, |·|). Let T : X → X be such that for some ρ < 1,

|Tx − Ty| ≤ ρ|x − y|  ∀x, y ∈ M.    (3.7)

Then, if there exists x₀ ∈ X such that the set
[Figure: feedback interconnection of two nonlinearities N₁, N₂ with inputs u₁, u₂, errors e₁, e₂, and outputs y₁, y₂.]
e₁ = u₁ − N₂(e₂),
e₂ = u₂ + N₁(e₁),
y₁ = N₁(e₁),
y₂ = N₂(e₂).
Now fix u₁, u₂ and note that we have unique solutions y₁, y₂ if and only if there are
unique solutions e₁, e₂. In turn, the equations for e₁, e₂ can be combined to yield
(3.10)
The solution of the equation can now be equated to the existence of a fixed point
of the map:
Hence, by equation (3.9), the map T is a global contraction map on L∞ⁿ[0, ∞[ and
so has a unique fixed point. Consequently there exist unique e₁, e₂ solving the
feedback loop. The existence of unique y₁, y₂ now follows. □
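The contraction arguments above hinge on ρ being strictly less than 1; as noted in the remark after the global contraction mapping theorem, the map T(x) = x + π/2 − tan⁻¹x satisfies only the non-strict condition (3.6) and has no fixed point. A numerical sketch of that remark (the script itself is ours):

```python
import math

# T(x) = x + pi/2 - arctan(x) satisfies |Tx - Ty| < |x - y| for x != y,
# yet has no fixed point: T(x) - x = pi/2 - arctan(x) > 0 for all x.
T = lambda x: x + math.pi / 2 - math.atan(x)

x = 0.0
for _ in range(10000):
    x = T(x)
print(x)            # the iterates grow without bound instead of converging

# the non-strict contraction property, spot-checked on a few pairs:
for (a, b) in [(0.0, 1.0), (-3.0, 7.0), (10.0, 10.5)]:
    assert abs(T(a) - T(b)) < abs(a - b)
```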
The proof of this theorem needs a lemma called the Bellman–Gronwall lemma,
which is an important result in its own right.

Since the exponential is positive and s(·) ≥ 0, the inequality (3.15) follows directly.
□
Proof of Existence and Uniqueness Theorem: Let x₀(·) be the function in
Cⁿ[0, δ] defined, by abuse of notation, as

x₀(t) ≡ x₀ for t ∈ [0, δ].

Let S denote the closed ball of radius r about the center x₀(·) in Cⁿ[0, δ], defined
by

S = {x(·) ∈ Cⁿ[0, δ] : |x(·) − x₀(·)| ≤ r}.

≤ (kr + h)δ,
using hypotheses (3.13). For δ ≤ r/(kr + h), P maps S into itself. Thus we choose
δ ≤ min(r/(kr + h), ρ/k) for some ρ < 1 to guarantee that P is a contraction on S. Hence, (3.11) has
exactly one solution in S. But does it have only one solution on Cⁿ[0, δ]? Indeed,
if x(·) is a solution of (3.12), we have

By the Bellman–Gronwall lemma (with a(t) ≡ k, z(t) = |x(t) − x₀|, and u(t) = M
in the notation of the lemma), we have

|x(t) − x₀| ≤ Me^{kt} ≤ Me^{kδ}.    (3.17)

Thus, if δ is chosen such that δ ≤ min(r/(kr + h), ρ/k) and Me^{kδ} ≤ r, then any
solution to (3.11) has to lie in S, so that the unique fixed point of P in S is indeed
the unique solution in Cⁿ[0, δ] (actually also L∞ⁿ[0, δ]). □
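The fixed-point argument in the proof is constructive: the iterates of the map P (the Picard–Lindelöf iteration) can be computed numerically. A sketch for the simple case f(x) = −x, x₀ = 1 (our own example, not from the text), where the iterates converge to e^{−t}:

```python
import numpy as np

# Picard-Lindelof iteration for x' = f(x), x(0) = x0:
#   x_{k+1}(t) = x0 + \int_0^t f(x_k(s)) ds,
# here with f(x) = -x, whose exact solution is e^{-t}.
t = np.linspace(0.0, 1.0, 2001)
f = lambda x: -x
x0 = 1.0

xk = np.full_like(t, x0)                 # initial guess: the constant function x0
for _ in range(30):
    integrand = f(xk)
    # cumulative trapezoidal integral of f(x_k) from 0 to each t
    integral = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
    xk = x0 + integral

print(np.max(np.abs(xk - np.exp(-t))))   # tiny: iterates converge to e^{-t}
```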
Theorem 3.22 Global Existence and Uniqueness Theorem. Consider the system
(3.11) and assume that f(x, t) is piecewise continuous with respect to t, and that
for each T ∈ [0, ∞[ there exist finite constants k_T, h_T such that for all t ∈ [0, T],
Proof: Let T be specified. Then by the preceding theorem, the system (3.11)
has a solution on some interval [0, ρ/k_T] for some ρ < 1. If T ≤ ρ/k_T, we are done.
If not, set δ := ρ/k_T, define y₁(·) to be the solution on [0, δ], and consider the
system

The right hand side of (3.19) satisfies the conditions of (3.18) above, so we have
a unique solution on the interval [0, δ]. Call this y₂(·). Then the solution x(t) =
y₁(t) for t ∈ [0, δ] and x(t) = y₂(t − δ) for t ∈ [δ, 2δ] is the unique solution of
(3.11) on [0, 2δ]. Continue this process till mδ > T. □
rem. Let x(·), y(·) be two solutions of this system starting from x₀, y₀, respectively.
Then, for given ε > 0 there exists δ(ε, T) such that

|x₀ − y₀| ≤ δ ⟹ |x(·) − y(·)| ≤ ε.    (3.20)

Proof: Since x(t), y(t) are both solutions of (3.11), we have

As a consequence, given ε > 0, one may choose δ = ε/e^{k_T T} to prove the theorem.
□
The most important hypothesis in the proof of the existence-uniqueness theorems
given above is the assumption that f(x, t) satisfies the first condition of
(3.13), referred to as Lipschitz continuity.
Definition 3.24 Lipschitz Continuity. The function f is said to be locally
Lipschitz continuous in x if for some h > 0 there exists l ≥ 0 such that

|f(x₁, t) − f(x₂, t)| ≤ l|x₁ − x₂|    (3.21)

for all x₁, x₂ ∈ B_h, t ≥ 0. The constant l is called the Lipschitz constant. A
definition for globally Lipschitz continuous functions follows by requiring equation
(3.21) to hold for x₁, x₂ ∈ ℝⁿ. The definition of semi-globally Lipschitz continuous
functions holds as well, by requiring that equation (3.21) hold in B_h for arbitrary
h but with l possibly a function of h. The Lipschitz property is by default assumed
to be uniform in t.
If f is Lipschitz continuous in x, it is continuous in x. On the other hand, if f has
bounded partial derivatives in x, then it is Lipschitz. Formally, if

D₁f(x, t) := [∂fᵢ/∂xⱼ]

denotes the partial derivative matrix of f with respect to x (the subscript 1 stands
for the first argument of f(x, t)), then |D₁f(x, t)| ≤ l implies that f is Lipschitz
continuous with Lipschitz constant l (again locally, globally, or semi-globally, depending
on the region in x on which the bound on |D₁f(x, t)| is valid). The reader
may also now want to revisit the contraction mapping theorem (Theorem 3.17)
to note that the hypothesis of the theorem requires that T : X → X be Lipschitz
continuous with Lipschitz constant less than 1! In light of the preceding discussion,
if X = ℝⁿ, a sufficient condition for T to be a contraction map is that the
norm of the matrix of partial derivatives of T with respect to its arguments be less
than 1.
FIGURE 3.3. Showing lack of continuous dependence on noncompact time intervals
λ⁺ = lim sup_{t→∞} (1/t) log( |x(t) − y(t)| / |x₀ − y₀| ).    (3.22)

From the preceding theorem it follows that if k_T ≤ k for all T, then

−k ≤ λ⁺ ≤ k.
In general, the estimates of (3.20) are quite crude. Usually, two trajectories will
not diverge at the exponential rate predicted by this estimate, especially if both
trajectories are in the basin of the same attracting set. If trajectories in the basin
of the same attractor do in fact diverge at an exponential rate, in the sense that λ⁺
as defined above is close to the upper bound k, then the system shows extremely
sensitive dependence on initial conditions. This is taken to be indicative of chaotic
behavior. In fact, some definitions of chaos are based precisely on the closeness of
the Lyapunov exponent to the Lipschitz constant in the basin of attraction of the
same attracting set.
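For the linear system ẋ = kx, two trajectories separate at exactly the rate e^{kt}, so the exponent λ⁺ of (3.22) attains the Lipschitz bound k; a numerical sketch (our own example, not from the text):

```python
import numpy as np

# Estimate lambda^+ of (3.22) for x' = k*x, where two nearby trajectories
# separate exactly as e^{k t}, so lambda^+ equals the Lipschitz constant k.
k, T, dt = 0.7, 20.0, 1e-3
x, y = 1.0, 1.0 + 1e-8
for _ in range(int(T / dt)):           # forward Euler for both trajectories
    x += dt * k * x
    y += dt * k * y
lam = np.log(abs(x - y) / 1e-8) / T
print(lam)                             # close to k = 0.7
```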
of the application to circuits is involved; we only sketch the facts most relevant to
illustrate the theory.
The dynamics of a large class of nonlinear circuits may be written in the so-called
tableau form and read as

ẋ = ψ(ẋ, x, u, t).    (3.23)

In equation (3.23) the state variable x is a list of all the state variables in the circuit
(usually, the inductor fluxes and capacitor charges), and the equations are Faraday's
law and Coulomb's law with the resistive constitutive relationships built into the
right hand side. The input u represents the independent voltage and current sources
to the circuit. It is instructive to note that these equations are not explicit differential
equations like (3.11), but are implicitly defined. That is, ẋ is not explicitly specified
as a function of x(t), u(t), t. In general, implicitly defined differential equations
are an involved story in and of themselves, since there is the possibility that there
are many, none, or even uncountably many solutions for ẋ in (3.23), and it
is not clear which solution is to be chosen when there are many. Here, we use the
contraction mapping theorem to conclude the following proposition:

Then if k₁ < 1, the implicit system (3.23) can be transformed to an explicit system
of the usual form, (3.11).
Proof: See Exercises. □
Proposition 3.25 may be used to convert an implicit differential equation into
an explicit one, namely one of the form (3.11). Further, the uniqueness part of
the contraction mapping theorem guarantees that there is only one solution for ẋ
in (3.23).
The waveform relaxation algorithm for solving the system (3.23) starting from
an initial condition x₀ is based on the following iteration, which is in the spirit of
the Picard–Lindelöf iteration of the preceding section:

xᵏ(t) = x₀ + ∫₀ᵗ ⋯    (3.24)
The initial function x⁰(t) is arbitrary except that x⁰(0) = x₀. We will assume that
the conditions of Proposition 3.25 hold, so that equation (3.23) may be written
as a normal form equation of the form (3.11), namely

ẋ = f(x, u, t),    (3.25)

and the iteration then reads

xᵏ(t) = x₀ + ∫₀ᵗ f(xᵏ⁻¹(τ), u(τ), τ) dτ.    (3.26)
Since the proof of that theorem involved proving that the map P is a contraction
map, the sequence constructed by the Picard–Lindelöf construction is exactly
that of the proof of the contraction mapping theorem. Hence, we are guaranteed
convergence of the iterates. The only extra feature of waveform relaxation beyond
what was elucidated in the previous section is the convergence of the iterates on a
given interval [0, T]. It was shown in the previous section that P is a contraction
map on an interval of length δ for δ sufficiently small. However, this situation may
be improved to get convergence on the entire interval [0, T] by defining the new
norm

|x(·)|_w = sup_{t∈[0,T]} |x(t)|e^{−wt}.    (3.27)
Norms of the form of (3.27) are called (exponentially) weighted norms. It is instructive
to verify that Cⁿ[0, T] is a Banach space under this norm. Now, if the
hypotheses of the global existence and uniqueness theorem of the previous
section hold for the normal form system, so that equation (3.23) may be transformed
into (3.25) under the Lipschitz continuity hypothesis on f, uniformly in u, t, of
Proposition 3.25, then the iteration map P of that theorem (implemented by the
iteration of (3.26)) may be shown to be a contraction under the norm of (3.27).
Indeed, using the estimate of (3.16),

|Px(t) − Py(t)| ≤ ∫₀ᵗ k|x(τ) − y(τ)| dτ,

we have that

|Px(t) − Py(t)| ≤ ∫₀ᵗ k e^{wτ}|x(·) − y(·)|_w dτ.    (3.29)

Multiplying by e^{−wt} and evaluating the integral,

e^{−wt}|Px(t) − Py(t)| ≤ e^{−wt} ∫₀ᵗ k e^{wτ}|x(·) − y(·)|_w dτ ≤ (k/w)|x(·) − y(·)|_w,

so that P is a contraction in the norm |·|_w whenever w > k. Since |x(·)|_w ≤ |x(·)| ≤ e^{wT}|x(·)|_w,
where |x(·)| is the sup or L∞ norm on C[0, T], it follows that the sequence of
Picard–Lindelöf iterates converges in the L∞ norm as well!
In practice, the way that waveform relaxation is used in the simulation (timing
analysis) of digital VLSI circuits is as follows: A logic level simulator provides
a digital waveform for the response of different state variables in the circuit. The
output of the logic level simulator is used as the initial guess for the waveform
relaxation algorithm to provide the final analog waveform for the circuit simulation.
Of course, in a practical context, the large dimensional state variable (i.e., x ∈
ℝⁿ with n large for VLSI circuits) is partitioned according to subsystems of the
original system and each subsystem is iterated to convergence; thus the simulation
is rippled through from stage to stage to speed up the computation. For this and
other interesting details on how the circuit equations can be made to satisfy the
conditions of the contraction mapping theorem, see [177] and Problem 3.7.
the trajectories of f₊ point toward S₀ and those of f₋ away from it. There appears
to be no problem with continuing the solution trajectories in this instance through
S₀. In the bottom left figure, the trajectories of f₊, f₋ both point away from S₀.
It would appear that initial conditions on S₀ could follow either one of the
trajectories of f₊ or f₋, and which one specifically appears to be ambiguous. In
the last figure, on the bottom right, we have a combination of the circumstances
represented in the three other figures. The standard technique in the differential
equations literature for dealing with this and other breakdowns of the assumptions
needed to guarantee the existence and uniqueness of solutions is to regularize the
system. This means adding a small perturbation to the given system so as to make
the system a well-defined differential equation (satisfying the standard existence
and uniqueness conditions) and then studying the behavior of the well-defined systems
in the limit that the perturbation goes to zero. One common regularization for
the case of step discontinuities in the differential equation is to assume that (3.31)
is the limit as Δ ↓ 0 of the hysteretic switching mechanism shown in Figure 3.5.
The variable y represents the switching variable: when y = +1, the dynamics
are described by f₊, and when y = −1 they are described by f₋. Applying this
regularization yields the phase portraits shown in Figure 3.6 for the scenario of
Figure 3.4(a) for successively smaller values of Δ. The frequency of crossing S₀
(chattering) increases as Δ ↓ 0. Also, it appears that in the limit Δ = 0, the
trajectory is confined to the surface S₀. Other forms of regularization for (3.31), representing
various imperfections in the switching mechanism such as, for instance,
time delays associated with the switching or neglected "fast" dynamics associated
with the switching mechanism, may also be used. Consistent with the foregoing
intuition, Filippov [102] proposed a solution concept for differential equations
with discontinuous right hand sides. His definition was for general discontinuous
differential equations, of which (3.31) is a special case (only step discontinuous),
of the form

ẋ = f(x).    (3.32)
FIGURE 3.5. Showing the hysteretic switching mechanism: y switches between +1 and −1 at the hysteresis thresholds ±Δ in s(x)
right hand side, which are the systems of greatest current interest.
2. The reason for the sets N of measure 0 in the definition is to be able to exclude
sets on which f(x) is not defined, such as S₀.
3. It is of interest to note that the definition requires only that ẋ belong to a set.
Thus, equation (3.33) is called a set valued differential equation or differential
inclusion.
We will now study the application of our definition to the system (3.31).
Denote by λ₊(x) (respectively, λ₋(x)) the time rate of change of s(x) along
trajectories of f₊(x) (respectively, f₋(x)). More precisely,

λ₊(x) = (∂s/∂x) f₊(x),  x ∈ S₊,
λ₋(x) = (∂s/∂x) f₋(x),  x ∈ S₋.    (3.34)

Since f₊(x), f₋(x) are smooth functions of x assumed to have left and right
limits, respectively, both

are well defined for x* ∈ S₀. Filippov's definition (3.33) asks that

ẋ ∈ conv{f₊(x*), f₋(x*)},

where conv is the convex hull of the set with the two points f₊(x*), f₋(x*) for x* ∈ S₀.
This convex hull is further characterized as the set of all convex combinations of
f₊(x*), f₋(x*), namely
α(x*)f₊(x*) + (1 − α(x*))f₋(x*)

for α(x*) ∈ [0, 1]. However, when λ₊(x*) < 0 and λ₋(x*) > 0, then from the
intuition of the chattering in Figure 3.6, the definition yields more (cf. Lemma 3 of
[102]): it yields that ẋ is that unique f₀(x) in the convex hull of the set containing
f₊(x), f₋(x) chosen to make the trajectory lie on S₀, that is,

(∂s/∂x)(x*) f₀(x*) = 0.

It is easy to verify that the specific convex combination of f₊(x*), f₋(x*) required
to achieve this is

f₀(x*) = [λ₋(x*)/(λ₋(x*) − λ₊(x*))] f₊(x*) + [−λ₊(x*)/(λ₋(x*) − λ₊(x*))] f₋(x*).    (3.35)
This construction is shown in Figure 3.7. The construction is consistent with the
intuition that in the limit as the regularization Δ of (3.31) goes to 0, the chattering
becomes infinitely rapid and of infinitesimal amplitude. Thus, f₀ is the average of
the chattering. Further, the trajectory of the system slides along the surface S₀
once it hits S₀. This is referred to as the sliding mode. The combination of the
conditions λ₊(x*) < 0, λ₋(x*) > 0 is realized by

(d/dt) s²(x) < 0 for x ∈ B(x*, δ) − S₀,    (3.36)
Thus, it may be shown that a system with a step discontinuity at S₀ has unique
solutions in the sense of Filippov if at each point x* ∈ S₀ at least one of the
following two inequalities is satisfied:

Furthermore, the solution depends continuously on the initial conditions. The one
case not covered by the foregoing discussion is when both λ₊, λ₋ are equal to 0.
This is the case when both f₊, f₋ are tangential to the surface S₀ at a point x.
In this case one can have more complicated behavior, referred to as higher-order
sliding; see, for example, [198].
The preceding development was for the time-invariant case, that is, s, f₊, f₋ are
not explicitly functions of time. When s, f₊, f₋ are functions of x, t, the preceding
development generalizes as follows: define the sliding surface M₀ in (x, t) space
as

is satisfied for each (x*, t*) ∈ M₀. Some authors prefer to deal not with M₀ but
with a time-varying sliding surface

S(t) := {x : s(x, t) = 0}.

Of course, this does not change any of the preceding formulation.
A \otimes B := \begin{bmatrix} a_{11}B & \cdots & a_{1m}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nm}B \end{bmatrix}.
The following properties of the Kronecker product are left as Problem 3.13.
Proposition 3.27 Properties of the Kronecker Product.
1. (A + B) ⊗ (C + D) = (A ⊗ C) + (A ⊗ D) + (B ⊗ C) + (B ⊗ D).
2. (AB) ⊗ (CD) = (A ⊗ C)(B ⊗ D).
3. A ⊗ B = 0 ⟺ A = 0 or B = 0.
4. If A, B are square and invertible, so is A ⊗ B, and
(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}.
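Properties 2 and 4 are easy to spot-check numerically with `numpy.kron` (a sketch using randomly generated matrices, which are invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
C, D = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Property 2 (mixed-product): (AB) ⊗ (CD) = (A ⊗ C)(B ⊗ D)
assert np.allclose(np.kron(A @ B, C @ D), np.kron(A, C) @ np.kron(B, D))

# Property 4: (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
assert np.allclose(np.linalg.inv(np.kron(A, D)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(D)))
```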
The Kronecker product may be used to write the Taylor series of an analytic function f : ℝ^n → ℝ^m as follows:

f(x) = \sum_{i=0}^{\infty} F_i x^{(i)}, \qquad (3.37)

where x ∈ ℝ^n and F_i ∈ ℝ^{m \times n^i}. By convention, x ⊗ ⋯ ⊗ x (repeated i times) is abbreviated x^{(i)}. The Taylor series of an analytic function is convergent in the domain of convergence U ⊂ ℝ^n. Thus, given U, ε > 0, there exists N(U, ε) such that
Some notational savings may be gained by noting that some of the entries of x^{(i)} are repeated. For example, if x ∈ ℝ^3 and i = 2, then x^{(2)} ∈ ℝ^9 contains x_1 x_2 and x_2 x_1 as separate entries; keeping only the distinct monomials (denoted x^{[i]}) allows the matrices F_i to be redefined with correspondingly fewer columns.
Now, consider the linear differential equation in ℝ^n

\dot{x} = A x. \qquad (3.38)

Then, by the product rule and the mixed-product property of the Kronecker product,

\frac{d}{dt}(x \otimes x) = \dot{x} \otimes x + x \otimes \dot{x} = (A \otimes I + I \otimes A)(x \otimes x).

Thus, x^{(2)} satisfies a linear differential equation as well. More generally, it may be shown (see Problem 3.14) that x^{(i)} and x^{[i]} satisfy linear differential equations as well.
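The identity d/dt(x ⊗ x) = (A ⊗ I + I ⊗ A)(x ⊗ x) along \dot{x} = Ax can be checked pointwise for a randomly chosen A and x:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
I = np.eye(n)

# product rule: d/dt (x ⊗ x) = (Ax) ⊗ x + x ⊗ (Ax) along xdot = A x,
# and the mixed-product property rewrites this linearly in x ⊗ x
lhs = np.kron(A @ x, x) + np.kron(x, A @ x)
rhs = (np.kron(A, I) + np.kron(I, A)) @ np.kron(x, x)
assert np.allclose(lhs, rhs)
```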
We will now use Taylor series expansions for solutions of general analytic differential equations of the form

\dot{x} = \sum_{i=1}^{\infty} A_i x^{(i)}. \qquad (3.40)

Define

A_{jk} = A_k \otimes I \otimes \cdots \otimes I + I \otimes A_k \otimes \cdots \otimes I + \cdots + I \otimes \cdots \otimes A_k,

a sum of j terms, each a Kronecker product of j factors with A_k in one slot.
Now define for given N the vector x^{\oplus} \in \mathbb{R}^{n + n^2 + \cdots + n^N} by x^{\oplus} := (x, x^{(2)}, \ldots, x^{(N)}); then

\frac{dx^{\oplus}}{dt} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ 0 & A_{21} & \cdots & A_{2,N-1} \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & \cdots & A_{N1} \end{bmatrix} x^{\oplus} + \text{h.o.t.} \qquad (3.41)
In the preceding expression h.o.t. stands for higher order terms (i.e., terms involving polynomials of degree greater than N). Formally, we can continue the process by defining x^{\oplus} to be an infinitely long vector, and then equation (3.41) will not have any higher order terms, and the resulting infinite-dimensional system (of dimension \sum_{i=1}^{\infty} n^i) is linear. This is referred to as the process of Carleman linearization. Sometimes, the truncation of (3.41) to terms of order N (i.e., dropping the higher order terms) is also referred to as the (approximate) Carleman linearization. For compact intervals of time, bounds for the distance between the solutions of the approximate Carleman linearization and the original differential equation (3.40) may be derived (see Problem 3.15).
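A minimal sketch of approximate Carleman linearization, using the hypothetical scalar equation \dot{x} = -x + x^2 (chosen because its closed-form solution is available for comparison); with y_i = x^i the truncated linear system is bidiagonal:

```python
import numpy as np

def mat_exp(M, terms=60):
    """matrix exponential by truncated power series (adequate for small M)"""
    E, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def carleman(x0, t, N):
    # y_i = x**i  =>  y_i' = -i*y_i + i*y_{i+1}; dropping y_{N+1} truncates
    M = np.zeros((N, N))
    for i in range(1, N + 1):
        M[i - 1, i - 1] = -i
        if i < N:
            M[i - 1, i] = i
    y0 = np.array([x0 ** i for i in range(1, N + 1)])
    return (mat_exp(M * t) @ y0)[0]

x0, t = 0.3, 1.0
exact = x0 / (x0 + (1 - x0) * np.exp(t))      # closed-form solution
errs = [abs(carleman(x0, t, N) - exact) for N in (2, 4, 8)]
assert errs[2] < errs[0] and errs[2] < 1e-3   # error shrinks as N grows
```

This illustrates the kind of finite-time error bound asked for in Problem 3.15: on a compact interval the truncated solution converges to the true one as N grows.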
It is required that f^{-1}(p) be a finite set, that Df(x) be nonsingular for x ∈ f^{-1}(p), and that f^{-1}(p) ∩ ∂D = ∅, that is, that there are no solutions on the boundary of D.⁶
If p is such that Df(x) is singular for some x ∈ f^{-1}(p), then perturb p to p_ε. Now, if f^{-1}(p_ε) is a finite set, define
The degree of f with respect to D at the point p is a mod 2 count of the number of solutions of the equation f(x) = p. We say a mod 2 count, since each solution is given a sign, either +1 or -1, depending on the sign of the determinant of Df(x).
There is a "volume integral" forrnula for the degree of a map, namely
1.1/!E(s)=Ofors::':E .
2.
r 1/!E(lxl)dx = 1.
JJR.II
Proof of volume integral formula: Assume that f^{-1}(p) = \{x_1, x_2, \ldots, x_k\} is the finite set of solutions to f(x) = p. There exist \epsilon and \epsilon_i > 0, i = 1, \ldots, k, such that f is a homeomorphism from each ball B(x_i, \epsilon_i) onto B(p, \epsilon). (This result, somewhat generalized, is referred to as the stack of records theorem in the Exercises.) We may rewrite (3.43) as

d(f, p, D) = \sum_{i=1}^{k} \int_{B(x_i, \epsilon_i)} \psi_\epsilon(|f(x) - p|) \det Df(x)\, dx = \sum_{i=1}^{k} \operatorname{sgn} \det Df(x_i).
⁶In the terminology of the next section, p is said to be a regular value of the function f.
3.7 Degree Theory 103
In the first step we used the change of variables formula and the fact that for \epsilon_i sufficiently small the sign of the determinant of Df(x) is constant, and in the second step we used the fact that

\int_{B(0,\epsilon)} \psi_\epsilon(|x|)\, dx = \int_{\mathbb{R}^n} \psi_\epsilon(|x|)\, dx = 1.
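The signed count that the volume formula collapses to can be illustrated with a hypothetical one-dimensional example (the map f, the set D, and the point p below are made up):

```python
import numpy as np

# f(x) = x**3 - x on D = (-2, 2), p = 0; f^{-1}(0) = {-1, 0, 1}
f  = lambda x: x**3 - x
df = lambda x: 3 * x**2 - 1

solutions = [-1.0, 0.0, 1.0]
assert all(abs(f(x)) < 1e-12 for x in solutions)

# degree = sum of sgn det Df(x_i) over the solutions of f(x) = p
degree = sum(int(np.sign(df(x))) for x in solutions)
assert degree == 1    # +1 - 1 + 1: extra pairs of solutions cancel in sign
```

Three solutions, but degree 1: the two "extra" solutions come with opposite signs, exactly as the mod 2 description above suggests.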
We may group the properties of degree into two categories:
1. Boundary Dependence.
d(f, p, D) is uniquely determined by f on ∂D, the boundary of the open set D.
2. Homotopy Invariance.
Suppose that H(x, t) = p has no solution x ∈ ∂D for any t ∈ [0, 1]; then d(H(·, t), p, D) is constant for t ∈ [0, 1].
3. Continuity.
d(f, p, D) is a continuous function of f, that is,
Proof:
• (3) follows from the volume definition of degree, since all the quantities in the integrand of (3.43) are continuous functions of the function f and the point p. Further, ε is to be chosen small enough so that no solutions leave through the boundary ∂D.
• (3) ⟹ (2). Define h(t) = d(H(·, t), p, D). Then, so long as there are no solutions to H(x, t) = p on ∂D, h(t) is a continuous function of t. Since it is an integer-valued function, it is constant.
• (2) ⟹ (1). Let g = f on ∂D. Then consider
• (2) ⟹ (4). Define H(x, t) = t(f(x) - p) + (1 - t)(g(x) - p). Since f(x) - p ≠ -α(g(x) - p) for all x ∈ ∂D and any α ∈ ℝ_+, we have H(x, t) ≠ 0 for all x ∈ ∂D, t ∈ [0, 1].
104 3. Mathematical Background
Hence, we have

d(f, p, D) = \sum_{i=1}^{k} d(f, p, D_i). \qquad (3.44)
\frac{f(x)}{|f(x)|} \neq c \quad \text{for all } x \in \partial D.

If f omits any direction c, then d(f, 0, D) = 0.
5. Let D be symmetric with respect to the origin, that is, x ∈ D ⟹ -x ∈ D, and f(x) = -f(-x) on ∂D with f ≠ 0. Then d(f, 0, D) is odd.
6. Let D be symmetric with respect to the origin and let f(x) and f(-x) not point in the same direction for all x ∈ ∂D; then d(f, 0, D) is an odd integer.
7. If 0 ∈ D and d(f, 0, D) ≠ ±1, then
(a) There exists at least one x ∈ ∂D such that f(x) and x point in the same direction.
(b) There exists at least one x_1 ∈ ∂D such that f(x_1) and x_1 point in opposite directions.
Proof:
g(z) := \begin{bmatrix} g_1(z_1) \\ \vdots \\ g_k(z_k) \end{bmatrix}.
We will assume that G ∈ ℝ^{k×k}, the constitutive relation matrix of the linear resistive network, is positive definite; that is, the linear resistors are passive. The vector s ∈ ℝ^k models the effect of the sources on the network.
Proof: The proof consists in establishing a homotopy to the identity map. Define
the homotopy map
Since the set \{z : |z| = \max(R, R_1)\} is the boundary of the open ball \{z : |z| < \max(R, R_1)\}, it follows from arithmetic property 7 above that

d(f, 0, B(0, \bar{R})) = 1

with \bar{R} = \max(R, R_1). Thus, the equation f(z) = 0 has at least one solution inside B(0, \bar{R}). □
FIGURE 3.9. Surfaces in 3-dimensional space
generally, if X ⊂ ℝ^k and Y ⊂ ℝ^l are arbitrary subsets of Euclidean spaces (not necessarily open), then f : X → Y is called smooth if there exists an open set U ⊂ ℝ^k containing X and a smooth mapping F : U → ℝ^l that coincides with f on U ∩ X. If f : X → Y and g : Y → Z are smooth, then so is g ∘ f : X → Z.
Definition 3.32 Diffeomorphism.⁷ A map f : X → Y is said to be a diffeomorphism if f is a homeomorphism (i.e., a one-to-one continuous map with continuous inverse) and if both f and f^{-1} are smooth.
The sets X and Y are said to be diffeomorphic if there exists a(ny) diffeomorphism between them. Some examples of sets diffeomorphic to and not diffeomorphic to the closed interval, circle, and sphere are shown in Figure 3.10.
Definition 3.33 Smooth Manifold of Dimension m. A subset M ⊂ ℝ^k is called a smooth manifold of dimension m if for each x ∈ M there is a neighborhood W ∩ M (W ⊂ ℝ^k) that is diffeomorphic to an open subset U ⊂ ℝ^m.
A diffeomorphism ψ from W ∩ M onto U is called a system of coordinates on W ∩ M, and its inverse ψ^{-1} is called a parameterization. The map itself is referred to as a coordinate map. These definitions are illustrated in Figure 3.11.
Examples:
1. The unit circle S^1 ⊂ ℝ^2 defined by \{(\cos\theta, \sin\theta) : \theta \in [0, 2\pi]\}.
2. The unit sphere S^2 ⊂ ℝ^3 defined by \{(x_1, x_2, x_3) : x_1^2 + x_2^2 + x_3^2 = 1\} is a smooth manifold of dimension 2. Indeed, the diffeomorphism
FIGURE 3.10. Surfaces diffeomorphic to and not diffeomorphic to the interval, circle, and
sphere
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.

In fact, since the matrix is periodic in θ, the space of these orthogonal matrices is diffeomorphic to S^1.
We next develop the tools for performing calculus on manifolds:
FIGURE 3.12. Commutative diagram of smooth maps and their derivatives
3.9 Basics of Differential Topology 111
A partial converse to this proposition is the inverse function theorem, stated here without proof (see [200]):
Theorem 3.35 Inverse Function Theorem. Consider a map f : U → V between open sets in ℝ^k. If the derivative Df_x is nonsingular, then f maps a sufficiently small neighborhood U_1 of x diffeomorphically onto an open set f(U_1).
Remark: f need not be a global diffeomorphism on noncompact sets U even if Df_x is bounded away from singularity. For example, the map f : (x_1, x_2) ↦ (e^{x_1} \cos x_2, e^{x_1} \sin x_2) has a nonsingular Jacobian at every point (verify this) but is not a global diffeomorphism, since it is periodic in the x_2 variable. However, a theorem due to Palais ([240]; for the proof refer to Wu and Desoer [333]) states that if f is a proper map, that is, the inverse image of compact sets is compact, then Df_x bounded away from singularity on ℝ^k guarantees that f is a global diffeomorphism. We may now define the tangent space TM_x to M at x.
Definition 3.36 Tangent Space TM_x. Choose a parameterization ψ : U ⊂ ℝ^m → M ⊂ ℝ^k of a neighborhood ψ(U) of x in M. Now Dψ_u : ℝ^m → ℝ^k is well defined. The tangent space to M at x = ψ(u) is defined to be the image of Dψ_u(ℝ^m).
The first order of business is to verify that TM_x as defined above is indeed well defined, i.e., it does not depend on the specific parameterization map ψ chosen. Indeed, let φ : V → M be another parameterization of M with x = φ(v). Now the map φ^{-1} ∘ ψ : U_1 → V_1, defined from some neighborhood of u ∈ U to a neighborhood of v ∈ V, is a diffeomorphism. Thus, we have the commutative diagrams of φ, ψ and their derivatives as shown in Figure 3.13. Hence by Proposition 3.34 the linear map (Dφ_v)^{-1} ∘ Dψ_u is nonsingular. Consequently, the image of Dψ_u is equal to the image of Dφ_v, and the tangent space to M at x is well defined.
The second order of business is to verify that TM_x is an m-dimensional vector space. Now ψ^{-1} is a smooth map from ψ(U) onto U. Hence, it is possible to find a map F defined on an open set W ⊂ ℝ^k containing x such that F coincides with ψ^{-1} on W ∩ ψ(U). Then the map F ∘ ψ : ψ^{-1}(W ∩ ψ(U)) → ℝ^m is an inclusion. Consequently, DF_x ∘ Dψ_u is the identity, so that the map Dψ_u has rank m, whence its image has dimension m.
We are now ready to define the derivative of a smooth map f : M ⊂ ℝ^k → N ⊂ ℝ^l with M, N smooth manifolds of dimension m, n respectively. If y = f(x), then the derivative Df_x is a linear map from TM_x to TN_y defined as follows: since f is smooth, there exists a map F defined on an open set W ⊂ ℝ^k containing x to ℝ^l which coincides with f on W ∩ M. Now define Df_x(v) to be equal to DF_x(v) for all v ∈ TM_x. To justify this definition, we must prove that DF_x(v) belongs to TN_y and that it is the same no matter which F we choose. Indeed, choose parameterizations
φ : U ⊂ ℝ^m → M and ψ : V ⊂ ℝ^n → N of neighborhoods of x and y, respectively. Together with F : W → ℝ^l and the composite ψ^{-1} ∘ f ∘ φ : U → V, the derivatives DF : ℝ^k → ℝ^l and D(ψ^{-1} ∘ f ∘ φ) : ℝ^m → ℝ^n fit into the
commutative diagram of linear maps as shown in Figure 3.15. If u = φ^{-1}(x) and v = ψ^{-1}(y), then it follows that DF_x maps TM_x = Image Dφ_u into TN_y = Image Dψ_v. The diagram 3.15 also establishes that the resulting map Df_x does not depend on F, by following the lower arrow in the diagram, which shows that

Df_x = D\psi_v \circ D(\psi^{-1} \circ f \circ \varphi)_u \circ (D\varphi_u)^{-1},

where (Dφ_u)^{-1} is the inverse of Dφ_u restricted to the image of Dφ_u. It still remains to be shown that the definition of Df_x is independent of the coordinate charts φ, ψ; the details of this are left to the reader.
As before, the derivative map satisfies two fundamental properties:
1. Chain Rule. If f : M → N and g : N → P are smooth with y = f(x), then

D(g \circ f)_x = Dg_y \circ Df_x. \qquad (3.48)

2. If M ⊂ N and i is the inclusion map, then TM_x ⊂ TN_x, with inclusion map Di_x (see Figure 3.16).
These properties are easy (though a little laborious) to verify, and in turn lead to the following proposition:
Proposition 3.37 Derivatives of Diffeomorphisms are Isomorphisms. If f : M → N is a diffeomorphism, then Df_x : TM_x → TN_y is an isomorphism of vector spaces. In particular, the two manifolds M, N have the same dimension.
Proof: See Problem 3.18.
regular value, then the set f^{-1}(y) is finite. Further, if y is a regular value, then the number of points in the set f^{-1}(y) is locally constant in a neighborhood of y. Proofs of these facts are left to the reader as exercises. A basic theorem known as Sard's theorem establishes that the set of critical values is nowhere dense and consequently a set of measure 0 in the co-domain. The impact of this theorem is that a point in the co-domain of a map f : M → N is generically a regular value, and consequently its inverse image contains only finitely many points; i.e., if a point y in the co-domain of the map is a critical value, then there are (several) regular values in arbitrarily small neighborhoods of it. It is important to keep in mind while applying this theorem that the theorem asserts that the set of critical values is nowhere dense in the co-domain of f, not in the image of f. Indeed, if the image of f is a submanifold of N of dimension less than n, then every point in the range of f is a critical value, without contradicting Sard's theorem.
We will now define regular values for maps between manifolds of arbitrary dimensions; we will be mainly interested in the case that m ≥ n. The set of critical points C ⊂ M is defined to be the set of points x such that

Df_x : TM_x → TN_y

has rank less than n, i.e., is not onto. The image of the set of critical points is the set of critical values of f. An easy extension of Sard's theorem yields that the set of regular values of f is everywhere dense in N. The following theorem is extremely useful as a way to construct manifolds:
Theorem 3.39 Inverse Images of Regular Values. If f : M → N is a smooth map between manifolds of dimension m ≥ n and if y ∈ N is a regular value, then the set f^{-1}(y) ⊂ M is a smooth manifold of dimension m - n.
Proof: Let x ∈ f^{-1}(y). Since y is a regular value, the derivative Df_x maps TM_x onto TN_y. Consequently, Df_x has an (m - n)-dimensional null space 𝒩 ⊂ TM_x. If M ⊂ ℝ^k, choose a linear map L : ℝ^k → ℝ^{m-n} that is one-to-one on the subspace 𝒩. Now define the map

F : M → N × ℝ^{m-n}

by F(ξ) = (f(ξ), Lξ). Its derivative DF_x is equal to (Df_x, L). Thus, DF_x is nonsingular; as a consequence, F maps some neighborhood U of x onto a neighborhood V of (y, Lx). Also note that F(f^{-1}(y)) ⊂ \{y\} × ℝ^{m-n}. In fact, F maps f^{-1}(y) ∩ U diffeomorphically onto (\{y\} × ℝ^{m-n}) ∩ V. Thus we have established that f^{-1}(y) is a smooth manifold of dimension m - n. □
The preceding theorem enables us to construct numerous examples of manifolds,
for example:
1. The unit sphere S^{n-1} in ℝ^n defined by

S^{n-1} := \{x \in \mathbb{R}^n : x_1^2 + \cdots + x_n^2 = 1\}.

This manifold is particularly important since it models the space of all orientations of a rigid body in space. Its generalization to the space of n × n matrices, denoted by SO(n), is a manifold of dimension n(n - 1)/2.
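Theorem 3.39 applies here because, with g(x) = x_1^2 + ⋯ + x_n^2, every point of g^{-1}(1) = S^{n-1} is a regular point: the derivative (gradient) 2x never vanishes on the sphere. A small numerical check of this regularity condition:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
for _ in range(100):
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)            # a point of g^{-1}(1) = S^{n-1}
    Dg = 2 * x                        # derivative (gradient) of g at x
    assert np.linalg.norm(Dg) > 1.0   # Dg is onto R, so x is a regular point
```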
and its boundary ∂H^n is defined to be the hyperplane ℝ^{n-1} × \{0\} ⊂ ℝ^n.
FIGURE 3.17. Construction of a cylinder and a Möbius strip
We claim that g has 0 as a regular value. Indeed, the tangent space of g^{-1}(0) at a point x ∈ g^{-1}(0) is equal to the null space of
We may now prove once again, by a different method, the Brouwer fixed point theorem, which we proved using techniques of degree theory. The proof technique is of independent interest.
Lemma 3.44. Let X be a manifold with boundary. There is no smooth map f : X → ∂X that leaves ∂X pointwise fixed.
Proof: Suppose for the sake of contradiction that f : X → ∂X is a map that leaves ∂X fixed. Let y ∈ ∂X be a regular value for f. Since y is certainly a regular value for the identity map f|_{∂X} also, it follows that f^{-1}(y) is a smooth 1-dimensional manifold, with boundary consisting of the single point
However, f^{-1}(y) is also compact, and the only compact one-dimensional manifolds are disjoint unions of circles and segments. Hence, ∂f^{-1}(y) must have an even number of points. This establishes the contradiction and proves that there is no map from X to ∂X that leaves the boundary pointwise fixed. □
Thus, by the preceding proposition it is not possible to extend the identity map on S^{n-1} to a smooth map D^n → S^{n-1}.
Lemma 3.45 Brouwer's Theorem for Smooth Maps. Any smooth map g : D^n → D^n has a fixed point x ∈ D^n with g(x) = x.
FIGURE 3.18. Construction for the proof of Brouwer's theorem for smooth maps
Proof: Suppose that g has no fixed point. For x ∈ D^n, let f(x) ∈ S^{n-1} be the point nearer to x than to g(x) on the line through x and g(x) (see Figure 3.18). Then f is a smooth map from D^n to S^{n-1} leaving S^{n-1} pointwise fixed. This is impossible by the preceding discussion. (f(x) can be written out explicitly as

f(x) = x + tu,

where

u = \frac{x - g(x)}{|x - g(x)|}, \qquad t = -x^T u + \sqrt{1 - |x|^2 + (x^T u)^2},

with t > 0.) □
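The explicit formula for f can be checked numerically. The smooth map g below is hypothetical, made up for illustration; f(x) is well defined at any x where g(x) ≠ x, and randomly sampled points of the disk avoid the fixed point of g almost surely:

```python
import numpy as np

rng = np.random.default_rng(3)

def g(x):                      # a hypothetical smooth map of D^2 into itself
    return 0.45 * np.sin(x[::-1])

def f(x):                      # point of S^1 on the ray from g(x) through x
    u = (x - g(x)) / np.linalg.norm(x - g(x))
    b = x @ u
    t = -b + np.sqrt(1 - x @ x + b * b)
    return x + t * u

for _ in range(100):
    x = rng.standard_normal(2)
    x *= rng.uniform(0, 1) / np.linalg.norm(x)       # random point of D^2
    assert abs(np.linalg.norm(f(x)) - 1) < 1e-9      # f(x) lies on S^1
```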
Theorem 3.46 Brouwer's Theorem for Continuous Maps. Any continuous map G : D^n → D^n has a fixed point.
Proof: This theorem is reduced to the preceding lemma by approximating G by a smooth mapping. Given ε > 0, according to the Weierstrass approximation theorem [90], there is a polynomial function P : ℝ^n → ℝ^n with |P(x) - G(x)| < ε for x ∈ D^n. To cut down the range of P to D^n, we define

P_1(x) := \frac{P(x)}{1 + \epsilon}.

Then P_1 : D^n → D^n and |P_1(x) - G(x)| < 2ε for x ∈ D^n. Suppose that G(x) ≠ x for all x ∈ D^n; then the function |G(x) - x|, which is continuous on D^n, must take on a minimum value on D^n, say M > 0. Choose ε < M/2. Then P_1(x) ≠ x for x ∈ D^n. But P_1 is a smooth map, since it is a polynomial. Thus the nonexistence of a fixed point for P_1 contradicts the preceding lemma. □
3.10 Summary
In this chapter we have presented a set of mathematical tools for the study of nonlinear systems. The methods have been presented in abbreviated form, and here we include some additional references. These methods included a review of linear algebra (for more on linear algebra see, for example, [287], [153], [233], [275]). We presented an introduction to the existence and uniqueness theorems for differential
equations (for more on this subject see [125], [68]), including a study of how to "regularize," i.e., to define solutions in a relaxed sense for differential equations which do not meet the existence-uniqueness conditions, in the sense of Caratheodory [102]. We did not have a detailed discussion on numerical methods of integrating differential equations; for a good review of numerical solution techniques see [65]. However, we discussed a method of solution of differential equations called waveform relaxation [177], which is useful in circuit-theoretic applications. We discussed the generalization of degree theory to ℝ^n, referred to as index theory, and gave an introduction to differential topology. Our treatment of degree theory followed that of [278]. In particular, the reader may wish to study fixed point theorems in general Banach spaces, which are invaluable in applications. The introduction to differential topology is standard, and followed the style of [211]. There are several other good introductions to differential topology, such as [221], [34], [282]. There are some very interesting generalizations of Brouwer's fixed point theorem, such as the Kakutani fixed point theorem, which is of great importance in mathematical economics; see, for example, [80]. Also, the infinite-dimensional counterpart of the Brouwer fixed point theorem, called the Rothe theorem, is useful for proving certain kinds of nonlinear controllability results [260].
3.11 Exercises
Problem 3.1. Show that the norms |·|_p, |·|_∞ are equivalent on ℝ^n.
Problem 3.2 Measure of a matrix [83]. Given a matrix A ∈ ℂ^{n×n}, the measure of the matrix A is defined as

\mu(A) := \lim_{s \downarrow 0} \frac{|I + sA| - 1}{s}.

Using the bound

-|A| \le \frac{|I + sA| - 1}{s} \le |A| \quad \text{for all } s > 0,

show that the measure is well defined. Also, note that the measure of a matrix is the (one-sided) directional derivative of the |·| function along the direction A at I. Now prove that
Problem 3.3 Matrix measures for 1, 2, ∞ induced norms. Prove that for the matrix norms induced by the 1, 2, ∞ norms, we have
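The closed forms usually quoted for these measures are, for a matrix A = (a_{ij}): μ_1(A) = max_j (Re a_{jj} + Σ_{i≠j}|a_{ij}|), μ_2(A) = λ_max((A + A*)/2), and μ_∞(A) = max_i (Re a_{ii} + Σ_{j≠i}|a_{ij}|); these are standard facts, stated here as the intended answers. They can be checked against the limit definition numerically:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
I = np.eye(n)

def mu_limit(A, p, s=1e-8):
    # one-sided difference quotient from the definition of the measure
    return (np.linalg.norm(I + s * A, p) - 1) / s

mu1   = max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
            for j in range(n))
mu2   = np.linalg.eigvalsh((A + A.T) / 2).max()
muinf = max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i)
            for i in range(n))

assert abs(mu_limit(A, 1) - mu1) < 1e-5
assert abs(mu_limit(A, 2) - mu2) < 1e-5
assert abs(mu_limit(A, np.inf) - muinf) < 1e-5
```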
Problem 3.4. Use the definition of the matrix measure to prove that the solution of a linear time-varying differential equation

\dot{x} = A(t)x

satisfies
Problem 3.7 Waveform relaxation for implicit systems. Work out the details required to verify that the Picard-Lindelöf iteration on (3.24) actually converges when ψ is a contraction map in its first argument. Assume that ψ is Lipschitz continuous in both its arguments, and use a norm for functions on [0, T] given by
Problem 3.8. Proposition 3.25 gave conditions under which implicit differential equations of the form of (3.23) could be reduced to normal-form equations. Here we explore what can happen if the implicit equation cannot be made explicit. Consider:

(3.50)

Draw a phase portrait of this system as follows: represent the algebraic equation as a curve in the (x_1, x_2) plane. From the first equation, trace the behavior of the state variable on the curve. Conjecture as to what the differential equation does at the
state values (± 3~ ' ± JJ).
The preceding differential equation could be thought of as a longhand version of

\dot{x} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ -a_1(t) & -a_2(t) & \cdots & & -a_n(t) \end{bmatrix} x + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u. \qquad (3.51)
The a_i(t) are not known exactly; we know only that α_i ≤ a_i(t) ≤ β_i. Also, we have a bound ν on the magnitude of y_d^{(n)}. The control objective is to get the output of the system, y(t) = x_1(t), to track a specified n-times differentiable function y_d(t). We define a sliding surface S(t) ⊂ ℝ^n by

s(x, t) = (x_n - y_d^{(n-1)}(t)) + c_1 (x_{n-1} - y_d^{(n-2)}(t)) + \cdots + c_{n-1} (x_1 - y_d(t)) \qquad (3.52)
where sgn stands for the sign of s(x, t), defined to be +1 when s(x, t) > 0 and -1 when s(x, t) < 0. Choose the constants γ_i(x), k_i(x, t) so as to make S(t) a sliding surface. Thus, choose u so that the system of (3.51) satisfies the global sliding condition, namely,

\frac{d}{dt} s^2(x, t) \le -\epsilon |s(x, t)|

for some ε > 0. This will guarantee that we reach the sliding surface in finite time. Explain why. Now show that once on the sliding surface, we can make the tracking error e(t) = x_1(t) - y_d(t) go to zero asymptotically by a clever choice of the c_i. Simulate this example for several choices of a_i(t) and the bounds α_i and β_i. What do you notice if the bounds are far apart, that is, β_i - α_i is large? Can you comment on this?
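A hedged simulation sketch for the n = 2 case (the particular plant, gains, and reference below are all made up; the control cancels a nominal model and switches on sgn(s) to enforce the sliding condition):

```python
import numpy as np

# hypothetical uncertain plant: x1' = x2, x2' = -a(t)*x1 + u, with a(t) in [1, 3]
a_true = lambda t: 2.0 + np.sin(3 * t)       # "true" uncertain coefficient
a_hat = 2.0                                  # nominal value used by the controller
c1, k = 2.0, 8.0                             # surface slope and switching gain
yd, yd1, yd2 = np.sin, np.cos, lambda t: -np.sin(t)   # y_d and its derivatives

def u_control(x, t):
    s = (x[1] - yd1(t)) + c1 * (x[0] - yd(t))         # cf. (3.52)
    # cancel the nominal dynamics, then switch so that (d/dt) s^2 <= -eps*|s|
    return yd2(t) + a_hat * x[0] - c1 * (x[1] - yd1(t)) - k * np.sign(s)

x, dt, T = np.array([1.0, 0.0]), 1e-3, 10.0
for i in range(int(T / dt)):                 # forward Euler simulation
    t = i * dt
    u = u_control(x, t)
    x = x + dt * np.array([x[1], -a_true(t) * x[0] + u])

err = abs(x[0] - yd(T))
assert err < 0.05                            # output tracks y_d after sliding
```

On the surface s = 0 the error obeys ė + c_1 e = 0, so the tracking error decays at rate c_1; the residual error in the simulation comes from the chattering induced by the discrete-time switching.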
Problem 3.11 Peano-Baker Series. Consider a linear time-varying dynamical system

\dot{x} = A(t)x(t), \quad x(t_0) = x_0.

Verify that the solution of this differential equation is of the form x(t) = \Phi(t, t_0)x_0, where \Phi(t, t_0) \in \mathbb{R}^{n \times n}, called the fundamental matrix, is the solution of the matrix differential equation

\dot{X} = A(t)X, \quad X(t_0) = I.

Now prove that if A(·) is bounded, we have the following convergent series expansion for \Phi(t, \tau), called the Peano-Baker series:
\Phi(t, \tau) = I + \int_\tau^t A(\sigma_1)\, d\sigma_1 + \int_\tau^t A(\sigma_1) \int_\tau^{\sigma_1} A(\sigma_2)\, d\sigma_2\, d\sigma_1 + \cdots + \int_\tau^t A(\sigma_1) \int_\tau^{\sigma_1} A(\sigma_2) \cdots \int_\tau^{\sigma_{k-1}} A(\sigma_k)\, d\sigma_k \cdots d\sigma_1 + \cdots.
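The series can be summed numerically by Picard iteration, P_{k+1}(t) = I + ∫ A(s) P_k(s) ds, whose iterates are exactly the partial sums of the Peano-Baker series. A sketch comparing the partial sums against direct integration of \dot{X} = A(t)X for a made-up A(t):

```python
import numpy as np

A = lambda t: np.array([[0.0, 1.0], [-1.0 - 0.5 * np.sin(t), 0.0]])

t_f, n = 0.5, 400
ts = np.linspace(0.0, t_f, n + 1)
h = t_f / n

# reference: integrate Xdot = A(t) X, X(0) = I with RK4
X = np.eye(2)
for t in ts[:-1]:
    k1 = A(t) @ X
    k2 = A(t + h / 2) @ (X + h / 2 * k1)
    k3 = A(t + h / 2) @ (X + h / 2 * k2)
    k4 = A(t + h) @ (X + h * k3)
    X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Peano-Baker partial sums via Picard iteration with trapezoidal quadrature
P = [np.eye(2) for _ in ts]
for _ in range(8):                       # 8 iterated integrals
    cum = np.zeros((2, 2))
    newP = [np.eye(2)]
    for i in range(n):
        cum = cum + h / 2 * (A(ts[i]) @ P[i] + A(ts[i + 1]) @ P[i + 1])
        newP.append(np.eye(2) + cum)
    P = newP

assert np.linalg.norm(P[-1] - X) < 1e-4   # partial sums approach Phi(t_f, 0)
```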
Problem 3.12 Numerical solution techniques: 4th-order Runge-Kutta. The Taylor series expansion for the solution of the differential equation

\dot{x} = f(x, t), \quad x(0) = x_0 \qquad (3.53)
where
matches the first four coefficients of the Taylor series for x_{n+1}.
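A sketch of the classical 4th-order scheme, with an empirical check of its fourth-order accuracy on \dot{x} = x (whose exact solution is e^t):

```python
import numpy as np

def rk4_step(f, x, t, h):
    k1 = f(x, t)
    k2 = f(x + h / 2 * k1, t + h / 2)
    k3 = f(x + h / 2 * k2, t + h / 2)
    k4 = f(x + h * k3, t + h)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, h, steps):
    x, t = x0, 0.0
    for _ in range(steps):
        x, t = rk4_step(f, x, t, h), t + h
    return x

f = lambda x, t: x                              # exact solution: exp(t)
e1 = abs(integrate(f, 1.0, 0.1, 10) - np.e)     # h = 0.1,  t_final = 1
e2 = abs(integrate(f, 1.0, 0.05, 20) - np.e)    # h = 0.05, t_final = 1
assert 12 < e1 / e2 < 20     # error ratio ~ 2**4 for a 4th-order method
```

Halving the step size cuts the global error by roughly 2^4 = 16, which is exactly the statement that the scheme matches the Taylor series through fourth order.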
Problem 3.13. Prove Proposition 3.27.
Problem 3.14. Prove that if x ∈ ℝ^n satisfies a linear differential equation of the form of equation (3.38), then x^{(i)} also satisfies a linear differential equation. Prove the same conclusion for x^{[i]} as well.
\frac{dx^{\oplus}}{dt} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ 0 & A_{21} & \cdots & A_{2,N-1} \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & \cdots & A_{N1} \end{bmatrix} x^{\oplus} \qquad (3.54)
with the appropriate initial conditions on x^{\oplus}. Show, given T > 0, that if the initial conditions are small enough, then for t ∈ [0, T] the difference between the first entry of x^{\oplus}(t) solving (3.54) and the exact solution x(t) of (3.40) can be bounded by a growing exponential.
Problem 3.16 Carleman (bi-)linearization of control systems. Repeat the steps of the Carleman linearization for the control system with single input u ∈ ℝ given by

\dot{x} = \sum_{i=1}^{\infty} A_i x^{(i)} + \left( \sum_{i=0}^{\infty} B_i x^{(i)} \right) u

to get:

\frac{dx^{\oplus}}{dt} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ 0 & A_{21} & \cdots & A_{2,N-1} \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & \cdots & A_{N1} \end{bmatrix} x^{\oplus} + \begin{bmatrix} B_{11} & B_{12} & \cdots & B_{1N} \\ 0 & B_{21} & \cdots & B_{2,N-1} \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & \cdots & B_{N1} \end{bmatrix} x^{\oplus} u + \begin{bmatrix} B_{10} \\ 0 \\ \vdots \\ 0 \end{bmatrix} u. \qquad (3.56)

This is referred to as the Carleman bilinearization of the control system, since it is not linear in x but rather bilinear in x, u. Give formulas for the matrices B_{ij} in (3.56).
Problem 3.17. Prove that the definition of the derivative Df_x given in this chapter does not depend on the choice of coordinate charts in its definition.
\int^{\infty} m(\sigma)\, d\sigma = \infty.
that SO(n) is a Lie group. What is its dimension? What can you say about unitary matrices of determinant -1?
3. Consider matrices in ℝ^{(n+1)×(n+1)} of the form
Problem 3.29. Let (U, ψ) and (V, φ) be two different coordinate charts for neighborhoods of a point p ∈ M, a smooth n-dimensional manifold, with local coordinates x, z. Denote the coordinate transformation z = φ ∘ ψ^{-1}(x) by S(x). If a given X_p ∈ T_p M may be represented as \sum_{i=1}^n \alpha_i \frac{\partial}{\partial x_i}\big|_p and as \sum_{i=1}^n \beta_i \frac{\partial}{\partial z_i}\big|_p, then prove that the coefficients α_i and β_i are related by
In the equivalence class above, T refers to a change of basis of the subspace. Prove that in the topology inherited from ℝ^{n×m}, G(m, n) is a manifold. What is its dimension? Can you identify G(1, 2), G(n, n + 1)?
Define a Stiefel manifold S(m, n) to be a Grassmann manifold with orthogonal bases. Thus, elements of S(m, n) are an equivalence class of matrices U ∈ ℝ^{n×m} satisfying U^T U = I ∈ ℝ^{m×m}, with the equivalence class defined by
Problem 3.32 Morse lemma. Let f : ℝ^n → ℝ with f(0) = 0. Prove that in a neighborhood of x = 0 the function f may be written as

(3.59)

where the derivative and Hessian of \bar{f} vanish at y = 0.
4
Input-Output Analysis
In this chapter we introduce the reader to various methods for the input-output analysis of nonlinear systems. The methods are divided into three categories:
In some sense, this chapter represents the classical approach to input-output nonlinear systems, with many fundamental contributions by Sandberg, Zames, Popov, Desoer, and others. An excellent classical reference is the book of Desoer and Vidyasagar [83]. The current modernized treatment of the topics, while abbreviated, reveals not only the delicacy of the results but also a wealth of interesting analysis techniques.
We will now optimally approximate the nonlinear system for a given reference input u_0 ∈ C([0, ∞)) by the output of a linear system. The class of linear systems, denoted by W, which we will consider for optimal approximations is represented in convolution form as integral operators. Thus, for an input u_0 ∈ C([0, ∞)), the output of the linear system W is given by

W u_0(t) = \int_{-\infty}^{\infty} w(t - \tau) u_0(\tau)\, d\tau,

with the understanding that u_0(t) ≡ 0 for t ≤ 0. The convolution kernel is chosen to minimize the mean squared error defined by

e(w) = \lim_{T \to \infty} \frac{1}{T} \int_0^T [W u_0(\tau) - N u_0(\tau)]^2\, d\tau. \qquad (4.2)
\int_0^{\infty} |w(\tau)|\, d\tau < \infty. \qquad (4.3)

Equation (4.3) guarantees that a bounded input u(·) to the linear system W produces a bounded output.
3. Stationarity of Input. The input u_0(·) is stationary, i.e.,

\lim_{T \to \infty} \frac{1}{T} \int_s^{s+T} |u_0(t)|^2\, dt \qquad (4.4)
(this can be proved starting from the definition; see Problem 4.2). Thus, the Fourier transform of the autocovariance function R_u(τ) is a positive function S_u(ω):

S_u(\omega) = \int_{-\infty}^{\infty} e^{-j\omega\tau} R_u(\tau)\, d\tau, \qquad (4.7)

R_u(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{j\omega\tau} S_u(\omega)\, d\omega. \qquad (4.8)
Some properties of the spectral measure are explored in the exercises. Here, we will prove the following proposition relating the autocovariance of the input and output signals of a stable, linear, time-invariant system:
Proposition 4.3 Linear Filter Lemma. Let y = H(u), where H is a causal, b.i.b.o. stable linear system, i.e., its convolution kernel h(t) is identically 0 for t ≤ 0 and h(·) ∈ L_1. Then if u is stationary with autocovariance R_u(τ), so is y(t). Further, the cross correlation between u and y is given by

\phi_{yu}(\tau) = \int_{-\infty}^{\infty} h(\sigma) R_u(\tau + \sigma)\, d\sigma.
Proof: First, we establish that y(·) is stationary. By definition,

\frac{1}{T} \int_s^{s+T} y(t)\, y(t+\tau)\, dt = \frac{1}{T} \int_s^{s+T} \left[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\sigma_1) h(\sigma_2)\, u(t - \sigma_1)\, u(t + \tau - \sigma_2)\, d\sigma_1\, d\sigma_2 \right] dt.

For all T, the integral above exists absolutely, since h(·) ∈ L_1; hence, we can interchange the order of integration to obtain

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\sigma_1) h(\sigma_2) \left[ \frac{1}{T} \int_s^{s+T} u(t - \sigma_1)\, u(t + \tau - \sigma_2)\, dt \right] d\sigma_1\, d\sigma_2,

whose bracketed term converges, uniformly in s, to R_u(\tau + \sigma_1 - \sigma_2),
establishing the stationarity of y(·). To show that there exists a cross correlation between y and u, one proceeds similarly:
\frac{1}{T} \int_s^{s+T} y(t)\, u(t+\tau)\, dt = \frac{1}{T} \int_s^{s+T} \left[ \int_{-\infty}^{\infty} h(\sigma)\, u(t - \sigma)\, d\sigma \right] u(t+\tau)\, dt.

As before, for all T the integral above exists absolutely, since h(·) ∈ L_1; hence, we can interchange the order of integration to obtain

\int_{-\infty}^{\infty} h(\sigma) \left[ \frac{1}{T} \int_s^{s+T} u(t - \sigma)\, u(t+\tau)\, dt \right] d\sigma.

The expression in the brackets converges uniformly to R_u(\tau + \sigma), so that we have
Using these foregoing definitions and propositions, we have the following result:

\phi_{W^*}(\tau) = \phi_N(\tau) \quad \text{for all } \tau \ge 0, \qquad (4.10)

where \phi_{W^*}(\tau) is shorthand for the cross correlation between u_0(·) and W^* u_0(·), and \phi_N(\tau) is the cross correlation between u_0(·) and N u_0(·), defined exactly as in the previous equation with W^* u_0 replaced by N u_0.
Proof: Let w^* be the convolution kernel of the optimal linear system. We know that an optimal linear causal b.i.b.o. stable approximant exists, since the criterion of (4.2) is quadratic in the convolution kernel w(·). Now, for any w we should have e(w^*) ≤ e(w). Thus, we have that e(w) - e(w^*) is given by

\lim_{T \to \infty} \frac{1}{T} \int_0^T D u_0(t) \left( N u_0(t) - W^* u_0(t) \right) dt = 0, \qquad (4.12)

using the fact that w(t) ≡ 0 and u_0(t) ≡ 0 for t ≤ 0. Using (4.13) in (4.12) yields

\lim_{T \to \infty} \frac{1}{T} \int_0^T \left\{ \int_0^{\infty} \left( w(\tau) - w^*(\tau) \right) u_0(t - \tau)\, d\tau \right\} \left( W^* u_0(t) - N u_0(t) \right) dt \qquad (4.14)
\lim_{T \to \infty} \int_0^{\infty} \left( w(\tau) - w^*(\tau) \right) g_T(\tau)\, d\tau = 0. \qquad (4.15)
Now, since both the linear systems W and W^* are b.i.b.o. stable, we have that

\int_0^{\infty} |w(\tau) - w^*(\tau)|\, d\tau < \infty.

Further, the assumption on N guarantees that if u_0(t) < b for all t and u_0(·) is stationary, then |g_T(t)| < K for all t. Then, we may use Fatou's lemma to interchange the order of integration to yield

\lim_{T \to \infty} \frac{1}{T} \int_0^T u_0(t - \tau) \left( W^* u_0(t) - N u_0(t) \right) dt. \qquad (4.17)
\phi_N(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T k\, n(k)\, dt = k\, n(k).
Thus W^* is any causal, b.i.b.o. stable linear system that has dc gain n(k)/k, since \phi_{W^*}(\tau) ≡ k n(k). A particularly easy choice is a linear operator with convolution kernel (n(k)/k) δ(t), though this kernel is not an integrable function as required in the preceding derivation. However, here, as in other places in the engineering literature, we will use these generalized functions (the reader
4.1 Optimal Linear Approximants to Nonlinear Systems 133
We begin with the case that n(·) is an odd nonlinearity, that is, n(x) = -n(-x). In this instance, we have c_i = 0 for all i. Now, using the formula for the cross correlation and the orthogonality of sinusoids at integrally related frequencies, we get

\phi_N(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T a \sin\omega(t - \tau) \left( \sum_{i=1}^{\infty} d_i \sin(i\omega t) \right) dt = \frac{d_1 a}{2} \cos(\omega\tau). \qquad (4.19)
For an input u_0(t) = a \sin(\omega t), the optimal linear system W^* will produce an output of the form

W^* u_0(t) = \gamma_1(\omega)\, a \sin(\omega t) + \gamma_2(\omega)\, a \cos(\omega t),

where \gamma_1(\omega) + j\gamma_2(\omega) = W^*(j\omega), with W^*(j\omega) being the Fourier transform of the convolution kernel w^*(·). Thus, the cross covariance \phi_{W^*}(\tau) is

\phi_{W^*}(\tau) = \frac{\gamma_1(\omega)\, a^2}{2} \cos(\omega\tau).

By inspection it follows that the optimal linear system should have W^*(j\omega) = d_1/a + j\gamma_2(\omega), with \gamma_2 being arbitrary. One may set \gamma_2 = 0 and take as choice of convolution kernel w^*(t) = (d_1/a)\, \delta(t). It is worthy of note that the optimal causal
\phi_N(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T a \sin\omega(t - \tau)\, (N u_0)(t)\, dt = \frac{d_1 a}{2} \cos(\omega\tau) + \frac{c_1 a}{2} \sin(\omega\tau);

W^*(j\omega) = \frac{d_1}{a} + j\, \frac{c_1}{a}. \qquad (4.21)
Thus the optimal linear system maintains harmonic balance to the first harmonic of the output of the nonlinear system. If the approximating linear system is a memoryless linear system, it is characterized by its gain alone, referred to as the equivalent gain. The equivalent gain or describing function is the complex number

\eta(a, \omega) = \frac{d_1(\omega, a)}{a} + j\, \frac{c_1(\omega, a)}{a}.

Thus, the describing function is a complex number, a function of both the frequency ω and the amplitude a of the input, necessary to match the first harmonic of the output of the nonlinear system.
d_1(a) = (1/π) ∫_0^{2π} n(a sin ψ) sin ψ dψ,
                                                  (4.22)
c_1(a) = (1/π) ∫_0^{2π} n(a sin ψ) cos ψ dψ.
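As a numerical aside (not from the text), the integrals in (4.22) are easy to evaluate by quadrature; for the cubic nonlinearity n(x) = x³ they give d_1(a) = 3a³/4 and c_1(a) = 0, so η(a) = 3a²/4. The helper name below is illustrative only:

```python
import numpy as np

def first_harmonic(n, a, num=4096):
    """Evaluate d1(a), c1(a) of (4.22) on a uniform endpoint-free grid;
    for trigonometric-polynomial integrands the periodic Riemann sum
    is exact to machine precision."""
    psi = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    y = n(a * np.sin(psi))
    d1 = np.sum(y * np.sin(psi)) * (2.0 * np.pi / num) / np.pi
    c1 = np.sum(y * np.cos(psi)) * (2.0 * np.pi / num) / np.pi
    return d1, c1

a = 1.7
d1, c1 = first_harmonic(lambda x: x ** 3, a)
eta = d1 / a + 1j * c1 / a        # describing function eta(a) = d1/a + j c1/a
assert abs(eta.real - 3.0 * a ** 2 / 4.0) < 1e-9   # classical value 3a^2/4
assert abs(eta.imag) < 1e-9                        # odd nonlinearity: c1 = 0
```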
Proposition 4.6. Suppose that the function n(·) is odd and is sector bounded, i.e., it satisfies

k_1 a² ≤ a n(a) ≤ k_2 a²   ∀ a ∈ ℝ.   (4.23)
ÿ + 3y²ẏ + y = u   (4.25)
with forcing u = A sin(ωt). If the nonlinear system produces a periodic output (this is a very nontrivial assumption, since several rather simple nonlinear systems behave chaotically under periodic forcing), then one may write the solution y(t) in the form

y(t) = Σ_{k=1}^∞ Y_k sin(kωt + θ_k).
Using this solution in (4.25), simplifying, and equating first harmonic terms yields

(1 − ω²) Y_1 sin(ωt + θ_1) + (3/4) ω Y_1³ cos(ωt + θ_1) = A sin(ωt),   (4.26)

whence

((1 − ω²) Y_1)² + ((3/4) ω Y_1³)² = A²,   (4.27)

θ_1 = − tan⁻¹ [ 3ω Y_1² / (4(1 − ω²)) ].
Thus, if one were to find the optimal linear, causal, b.i.b.o. stable approximant of the nonlinear system (4.25), it would have Fourier transform at frequency ω given by what has been referred to as the describing function gain

(Y_1 / A) e^{jθ_1}.
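The amplitude equation (4.27) is a cubic in Y_1², so it can be solved directly; the following sketch (my own illustration, not the book's) recovers Y_1 and θ_1:

```python
import numpy as np

def first_harmonic_response(A, w):
    """Solve the harmonic balance equations (4.27) for Y1 and theta1
    of y'' + 3 y^2 y' + y = A sin(wt)."""
    # (4.27) in z = Y1^2:  (9/16) w^2 z^3 + (1 - w^2)^2 z - A^2 = 0
    coeffs = [9.0 / 16.0 * w ** 2, 0.0, (1.0 - w ** 2) ** 2, -A ** 2]
    roots = np.roots(coeffs)
    z = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    Y1 = np.sqrt(z)
    theta1 = -np.arctan2(3.0 * w * Y1 ** 2, 4.0 * (1.0 - w ** 2))
    return Y1, theta1

Y1, th = first_harmonic_response(A=1.0, w=1.0)
# At w = 1 the linear terms cancel, so (4.27) reduces to (3/4) Y1^3 = A:
assert abs(0.75 * Y1 ** 3 - 1.0) < 1e-9
assert abs(th + np.pi / 2) < 1e-9
```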
Definition 4.7 Describing Function Gain. Consider a bounded-input bounded-output stable nonlinear system with sinusoidal input u_0(t) = a sin ωt. Under the
assumption that the output of the nonlinear system is periodic with Fourier series

y(t) = Σ_{k=1}^∞ ( d_k sin(kωt) + c_k cos(kωt) ),

the describing function gain is defined to be η(a, ω) = d_1/a + j c_1/a.   (4.29)
The key point of the definition is the major assumption that the output of the system is periodic with the same period as the input. Nonlinear systems that have sub-harmonic components thus do not qualify for this definition. The describing function defined above should more properly be referred to as the sinusoidal input describing function. Examples of other describing functions may be studied in Gelb and van der Velde [109]. Note that the definition of the describing function has built into it the notion of harmonic balance.
One of the major uses of the describing function is in predicting (nearly sinusoidal) oscillations in nonlinear control systems. Consider the class of unity feedback systems consisting of a nonlinear system and a linear system, as shown in Figure 4.2. The two systems N and G are described respectively by
u(t) = (Ne)(t),   y(t) = (Gu)(t).
The problem to be addressed is whether (4.30) has a fixed point e( ·). The first
approach is to try
e(t) = a sin(wt).
FIGURE 4.2. A unity feedback system with linear and nonlinear systems
4.1 Optimal Linear Approximants to Nonlinear Systems 137
u_0(a, b) + b = 0,
1 + u_1(a, b, ω) = 0,   (4.33)
u_2(a, b, ω) = 0.

Equation (4.33) has three unknowns, a, b, and ω, and three equations. There is, however, no easy graphical way to mechanize solutions of the system (4.33).
(Ne)(t) = Σ_{k=−∞}^∞ d_k(e, ω) e^{jωkt},   (4.35)
where e = (e_1, e_2, ...) is the vector of Fourier series coefficients of e(t). From (4.34) and (4.35) it follows that
Now, the second equation of (4.38) is Lipschitz continuous in P̃_m e with Lipschitz constant εγ < 1, and has a solution with

|P̃_m e| ≤ ( εγ / (1 − εγ) ) |P_m e|.   (4.41)
Using this solution in the first equation of (4.38), we get

P_m e = −P_m (G N(P_m e)) =: −U(P_m e).   (4.43)
The idea is to show that P_m e + U(P_m e) and P_m e + V(P_m e) have the same degree with respect to zero in a bounded region in the space of P_m e, namely ℝ^{2m+1}. The details are as follows: We will assume that we represent P_m e by the set of m complex numbers e_1, …, e_m and one real number e_0, since

P_m e = Σ_{k=−m}^m e_k e^{jωkt}.

We will refer to the space of the e_k as ℝ^{2m+1}, though it consists of m complex numbers and one real number. We will assume that the "describing function" equation (4.43) has a single solution in a set B ⊂ ℝ^{2m+1} for some ω = 2π/T ∈ [ω_l, ω_h]. The assumption that |P̃_m G| ≤ ε can then be re-expressed as
To establish that the degree of P_m e + U(P_m e) is the same as that of P_m e + V(P_m e), we construct the homotopy

P_m e + t U(P_m e) + (1 − t) V(P_m e)
   = (P_m e + U(P_m e)) + (1 − t)(V(P_m e) − U(P_m e)).   (4.44)

If ε is small enough that on the boundary of the set B the right hand side of this equation is bounded away from 0, then, by the homotopy invariance of degree, we have that
Example 4.8 Van der Pol's Equation. Consider a form of the van der Pol equation, modeled as a nonlinear feedback system as in Figure 4.2 with

ĝ(s) = μs / (s² − μs + 1)   (4.46)
with N being represented by the memoryless nonlinearity n(x) = x³. Since the nonlinearity is odd, the first harmonic is the only one that needs to be accounted for, and η(a) = 3a²/4. The Nyquist locus and the describing function locus of −1/η(a) are shown in Figure 4.4. The Nyquist locus does not change in shape with μ, though the scaling with frequency does indeed change. However, the intersection with the imaginary axis is always at ω = 1, so that the frequency of the oscillation, and the amplitude a predicted by the intersection is 2/√3 (also independent of μ). Let us compare these conclusions with an analysis of what happens in the limit μ → ∞. Indeed, in the state space we may write the equations in the scaled time τ = t/μ as
dx_1/dτ = x_2,
                                           (4.47)
(1/μ²) dx_2/dτ = −x_1 + x_2 − x_2³.
The formal limit of this system is the degenerate van der Pol system (see Problem 3.8):

dx_1/dτ = x_2,
                                           (4.48)
0 = −x_1 + x_2 − x_2³,
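As a hedged numerical illustration of the describing function prediction in this example (assuming ĝ(s) = μs/(s² − μs + 1) and η(a) = 3a²/4 as above), one can locate the real-axis crossing of the Nyquist locus by bisection and then solve the harmonic balance condition ĝ(jω) = −1/η(a):

```python
import numpy as np

mu = 0.8                          # the prediction should not depend on mu

def g(w):                         # g(jw) for g(s) = mu*s/(s^2 - mu*s + 1)
    s = 1j * w
    return mu * s / (s ** 2 - mu * s + 1.0)

# Bisect for the real-axis crossing of the Nyquist locus (Im g(jw) = 0):
lo, hi = 0.5, 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo).imag * g(mid).imag > 0:
        lo = mid
    else:
        hi = mid
w_osc = 0.5 * (lo + hi)

# Harmonic balance g(jw) = -1/eta(a) with eta(a) = 3a^2/4:
a_osc = np.sqrt(-4.0 / (3.0 * g(w_osc).real))

assert abs(w_osc - 1.0) < 1e-9                  # crossing at w = 1
assert abs(a_osc - 2.0 / np.sqrt(3.0)) < 1e-9   # amplitude 2/sqrt(3)
```

Changing `mu` moves the shape of the locus but not the predicted frequency or amplitude, as the text asserts.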
[Figures 4.4 and 4.5: the Nyquist locus and describing function locus −1/η(a), and the oscillation of the degenerate van der Pol system in the (x_1, x_2) plane.]
which has an oscillation in the x_1, x_2 state space (the (y, ẏ) space) of the form shown in Figure 4.5. The amplitude of this oscillation is not close to 2/√3. Of course, the describing function is not expected to give the correct answer as μ gets to be large, since the transfer function ĝ(jω) is close to 1 over a very large frequency band as μ gets to be large, as may be verified by writing ĝ(jω) as
4.2 Input-Output Stability 143
ẋ = f(t, x, u),
y = h(t, x, u)

or

x_{k+1} = f(k, x_k, u_k),
y_k = h(k, x_k, u_k).
One can also think of this equation from the input-output point of view. Thus, for example, given an initial condition x(0) = x_0 and an input u(·) defined on the interval [0, ∞[, say piecewise continuous, and with suitable conditions on f(·, ·, ·) to make the differential equation have a unique solution on [0, ∞[ with no finite escape time, it follows that there is a map N_{x_0} from the input u(·) to the output y(·). It is important to remember that the map depends on the initial state x_0 of the system. Of course, if the vector field f and function h are affine in x, u, then the response of the system can be broken up into a part depending on the initial state and a part depending on the input. More abstractly, one can just define a nonlinear system as a map (possibly dependent on the initial state) from a suitably defined input space to a suitable output space. The input and output spaces are suitably defined vector spaces. Thus, the first topic in formalizing and defining the notion of a nonlinear system as a nonlinear operator is the choice of input and output spaces. We will deal with the continuous time and discrete time cases together. To do so, recall that a function g(·) : [0, ∞[ → ℝ (respectively, g(·) : ℤ_+ → ℝ) is said to belong to L_p[0, ∞[ (respectively, l_p(ℤ_+)) if it is measurable and in addition

∫_0^∞ |g(t)|^p dt < ∞   (respectively, Σ_{n=0}^∞ |g(n)|^p < ∞).
Also, the set of all bounded² functions is referred to as L_∞[0, ∞[ (respectively, l_∞(ℤ_+)). The L_p (l_p) norm of a function g ∈ L_p[0, ∞[ (respectively, l_p(ℤ_+)) is defined to be

|g|_p = ( ∫_0^∞ |g(t)|^p dt )^{1/p}.

²Actually, essentially bounded; essentially bounded means that the function is bounded except on a set of zero measure.
• f ∈ L_1 but f ∉ L_2, L_∞:  t ↦ 1/(t^{1/2}(t + 1)).
• f ∈ L_2 but f ∉ L_1, L_∞:  t ↦ 1/(t^{1/4}(1 + t)^{1/2}).
• f ∈ L_∞ but f ∉ L_1, L_2:  t ↦ sin t.

For p ∈ [1, ∞], L_p is a Banach space, that is, a complete normed vector space; this result is sometimes referred to as the Riesz-Fischer theorem. Some inequalities about L_p spaces are useful; we quote a few here (for more details, see Brezis [42]). The first is easy to verify.
Proposition 4.10 Hölder, (Schwarz), and Minkowski Inequalities. Given p ∈ [1, ∞], define its conjugate q ∈ [1, ∞] to satisfy

1/p + 1/q = 1.

Then, given functions f(·), g(·) such that f ∈ L_p, g ∈ L_q (l_p, l_q, respectively), the following inequality, called the Hölder inequality, holds:

∫_0^∞ |f(t) g(t)| dt ≤ |f|_p |g|_q.   (4.49)

Further, if f, g ∈ L_p, the following inequality, called the Minkowski inequality, holds:

|f + g|_p ≤ |f|_p + |g|_p.   (4.50)
Proof: We start with the proof of the Hölder inequality. The inequality (4.49) is trivial when either |f(·)|_p = 0 or |g(·)|_q = 0; hence we may define

A = { ∫_0^∞ |f(t)|^p dt }^{1/p},   B = { ∫_0^∞ |g(t)|^q dt }^{1/q}.

Since 1/p + 1/q = 1, from the convexity of the exponential function we have that
Now, since (f + g)^{p−1} ∈ L_q([0, ∞[) (the reader should verify this), we may use the Hölder inequality on each of the terms on the right hand side of the preceding inequality

∫_0^∞ |f + g|^p dt ≤ ∫_0^∞ |f + g|^{p−1} |f| dt + ∫_0^∞ |f + g|^{p−1} |g| dt   (4.52)

to obtain

∫_0^∞ |f + g|^p dt ≤ [ ∫_0^∞ |f + g|^p dt ]^{1/q} [ ( ∫_0^∞ |f(t)|^p dt )^{1/p} + ( ∫_0^∞ |g(t)|^p dt )^{1/p} ].   (4.53)

Now, the Minkowski inequality (4.50) follows by dividing both sides of (4.53) by the first term on the right hand side of (4.53). □
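A quick discrete check of the Hölder and Minkowski inequalities (my own illustration, under the obvious Riemann-sum approximation of the L_p norms):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 5001)
dt = t[1] - t[0]
f = rng.standard_normal(t.size)
g = rng.standard_normal(t.size)

def lp_norm(h, p):
    """Riemann-sum approximation of the L_p norm on [0, 10]."""
    return (np.sum(np.abs(h) ** p) * dt) ** (1.0 / p)

for p in (1.5, 2.0, 3.0):
    q = p / (p - 1.0)            # conjugate exponent: 1/p + 1/q = 1
    # Hölder (4.49); the case p = q = 2 is the Schwarz inequality:
    assert np.sum(np.abs(f * g)) * dt <= lp_norm(f, p) * lp_norm(g, q) + 1e-12
    # Minkowski (4.50), the triangle inequality for the L_p norm:
    assert lp_norm(f + g, p) <= lp_norm(f, p) + lp_norm(g, p) + 1e-12
```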
The function spaces L_p contain many bizarre functions. For instance, it is not true that every L_2 or L_1 function has to tend to 0 as t → ∞, or that every L_2 or L_1 function is even bounded. However, if a function f ∈ L_1 or L_2 and its derivative ḟ is bounded, it follows that f(t) → 0 as t → ∞. This fact is referred to as Barbalat's lemma and is useful in Lyapunov analysis; it is an exercise (Problem 5.9 in Chapter 5). Another interesting fact is that if f ∈ L_1 ∩ L_∞, then f ∈ L_p for p ∈ [1, ∞] (see Problem 4.10).
We may also define truncations of functions as follows: given a function f(·) ∈ L_p[0, ∞[ (respectively, l_p(ℤ_+)), the truncation up to time T, denoted f_T(·), is defined to be

f_T(t) = f(t) for t ≤ T,   f_T(t) = 0 for t > T.
are specified as arising from differential (or difference) equation descriptions, it follows readily that the output at time t given x_0 at time 0 depends only on u(·) causally, i.e., on its restriction to [0, t]. When we specify a nonlinear map more abstractly, however, it is important to insist that it be causal. In the development that follows we will state definitions for the continuous time case and refer to the analogous results for the discrete time case only when they are different. Formally, we define causal nonlinear operators as follows:
Further, assume that the loop is well posed in the sense that for given u_1, u_2 in L_pe there are unique e_1, e_2, y_1, y_2 in L_pe satisfying the loop equations, namely

e_1 = u_1 − N_2(e_2),   y_1 = N_1(e_1),
                                           (4.55)
e_2 = u_2 + N_1(e_1),   y_2 = N_2(e_2).

Then, the closed loop system is also finite gain stable from u_1, u_2 to y_1, y_2 if
Proof: Consider inputs u_1, u_2 ∈ L_pe. From the well posedness assumption it follows that there are unique solutions e_1, e_2, y_1, y_2 to the feedback loop. Further, from the feedback loop equations (4.55) it follows that
From these estimates it follows that bounded inputs, i.e., u_1, u_2 ∈ L_p, produce bounded outputs. Also, this establishes that the closed loop system is finite gain stable. □
Remarks:

1. The preceding theorem is called a small gain theorem because it gives conditions under which the closed loop system is finite gain stable when the loop gain k_1 k_2 is less than 1 (small).

2. The small gain theorem does not guarantee the existence or uniqueness of solutions to the closed loop system, i.e., the existence and uniqueness of e_1, e_2, y_1, y_2 given u_1, u_2 in their respective L_pe spaces. This is one of the hypotheses of the theorem.
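The small gain estimate can be illustrated with a pair of memoryless finite gain operators; the particular maps below are my own examples, chosen so that the loop gain k_1 k_2 < 1:

```python
import numpy as np

# Memoryless finite gain operators: N1 has gain k1, N2 has gain k2.
k1, k2 = 0.7, 0.9                 # loop gain k1*k2 = 0.63 < 1
N1 = lambda e: k1 * np.tanh(e)    # |N1(e)| <= k1 |e|
N2 = lambda e: k2 * np.sin(e)     # |N2(e)| <= k2 |e|

u1, u2 = 1.3, -0.4
e1, e2 = 0.0, 0.0
for _ in range(200):              # Picard iteration on the loop (4.55);
    e1 = u1 - N2(e2)              # it contracts because k1*k2 < 1
    e2 = u2 + N1(e1)

# Consistency with the loop equations after convergence ...
assert abs(e1 - (u1 - N2(e2))) < 1e-9
# ... and with the small gain estimate
# |e1| <= (|u1| + k2 |u2|) / (1 - k1 k2):
bound = (abs(u1) + k2 * abs(u2)) / (1.0 - k1 * k2)
assert abs(e1) <= bound
```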
e_1 = u_1 − N_2(e_2),   y_1 = N_1(e_1),
e_2 = u_2 + N_1(e_1),   y_2 = N_2(e_2).

Now, fix u_1, u_2 and note that we have unique solutions e_1, e_2 if and only if there are unique solutions y_1, y_2. Combining the two equations above yields

e_2 = u_2 + N_1(u_1 − N_2(e_2)) =: T(e_2).   (4.56)

This equation has a unique solution if and only if the map T : L_pe → L_pe has a unique fixed point. To establish this we use the contraction mapping theorem (Theorem 3.17). Note that
the composite system; let u_1, u_2 and ũ_1, ũ_2 be two sets of inputs for the system with corresponding outputs e_1, e_2 and ẽ_1, ẽ_2. From (4.56) it follows that

Analogously, similar bounds can be obtained for |e_2 − ẽ_2|, |y_1 − ỹ_1|, |y_2 − ỹ_2|. This completes the proof. □
The small gain theorems, while simple in statement, are extremely useful in proving several interesting results:
H_eu = [I + G + Δ]^{−1}
     = {(I + Δ(I + G)^{−1})(I + G)}^{−1}
     = (I + G)^{−1} [I + Δ(I + G)^{−1}]^{−1}.
Note that the preceding calculations involve nonlinear operators rather than matrices. Now, consider the small gain condition that the product of the (incremental) gains of Δ and (I + G)^{−1} is less than 1. Under these conditions it follows that H_eu is also (incremental) finite gain stable.³ In fact, the reader should verify that the incremental finite gain of H_eu may be bounded above by

k_{(I+G)^{−1}} · 1 / (1 − k_Δ k_{(I+G)^{−1}}).
If the unperturbed system has open loop incremental gain k_G less than 1, then it follows that it is stable by the small gain theorem. Further, we can get a bound on k_{(I+G)^{−1}} as follows:

k_{(I+G)^{−1}} ≤ 1 / (1 − k_G).

This result is weaker (more conservative) than the one above, since

1 − k_G ≤ 1 / k_{(I+G)^{−1}}.

The small gain condition has an interpretation in terms of redrawing the perturbed feedback loop as shown in Figure 4.9.
FIGURE 4.9. Redrawing the perturbed feedback loop to illustrate the small gain condition
"There is aversion of this result that also holds for the case that G is only finite gain
stable and not incremental finitegain stable; also ß, with the added provisothat ß(O) = O.
The interested reader should work this out.
u_1 = e_1 + G_2 e_2,   (4.57)
u_2 = e_2 − G_1 e_1.

Subtracting K times the second equation of (4.57) from the first and using (by the assumption that K is linear)

K(e_2 − G_1 e_1) = K e_2 − K G_1 e_1,

we get

(4.58)

Define

η_1 = (I + K G_1) e_1

and assume that the map (I + K G_1)^{−1} maps L_pe to L_pe. Then the equations of the transformed loop are given by

e_1 = (I + K G_1)^{−1} η_1,

and it follows that η_1 ∈ L_p ⟹ e_1 ∈ L_p. □
power) does not exceed the change in the amount of energy stored in the circuit. If the nonlinear circuit starts with all its storage elements, namely inductors and capacitors, uncharged, it follows that the circuit is passive if

∫_0^T y^T(t) u(t) dt ≥ 0

for all T. This thinking is generalized as follows:

1. N is said to be passive if there exists β ∈ ℝ such that for all T and u ∈ L_2e,

∫_0^T (Nu)^T(t) u(t) dt ≥ β.
2. N is strictly passive if there exist α > 0, β ∈ ℝ such that for all T, u ∈ L_2e,

∫_0^T (Nu)^T(t) u(t) dt ≥ α ∫_0^T u^T(t) u(t) dt + β.

3. N is said to be incrementally passive if for all T, u_1, u_2 ∈ L_2e, we have

∫_0^T (Nu_1 − Nu_2)^T(t) (u_1 − u_2)(t) dt ≥ 0.

4. N is said to be incrementally strictly passive if there exists a δ > 0 such that for all T, u_1, u_2 ∈ L_2e, we have

∫_0^T (Nu_1 − Nu_2)^T(t) (u_1 − u_2)(t) dt ≥ δ ∫_0^T (u_1 − u_2)^T(t) (u_1 − u_2)(t) dt.
In the sequel, we will use the notation ⟨·, ·⟩_T to denote the integral up to time T of the inner product between two functions:

⟨u_1, u_2⟩_T = ∫_0^T u_1^T(t) u_2(t) dt.   (4.60)
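For a memoryless nonlinearity with σφ(σ) ≥ 0, the passivity integral of item 1 above is nonnegative for every T; a small numerical illustration (the signal and the choice φ = tanh are my own, arbitrary examples):

```python
import numpy as np

# phi(sigma) = tanh(sigma) satisfies sigma*phi(sigma) >= 0, so the
# memoryless operator (N u)(t) = phi(u(t)) is passive.
phi = np.tanh
rng = np.random.default_rng(3)
t, dt = np.linspace(0.0, 20.0, 20001, retstep=True)
u = np.cumsum(rng.standard_normal(t.size)) * 0.05   # arbitrary input signal

integrand = phi(u) * u                # pointwise >= 0
for T_index in (100, 5000, 20000):    # <u, Nu>_T >= 0 for every truncation T
    assert np.sum(integrand[:T_index]) * dt >= 0.0
```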
e_2 = G_1 e_1,

with G_1, G_2 : L_2e → L_2e. Then, if G_1 is passive and G_2 is strictly passive, the closed loop is L_2 finite gain stable.
Proof: Using (4.60) and the following representation of the passivity and strict passivity of G_1, G_2, respectively, we have

It follows that

⟨e_1, G_1 e_1⟩_T + ⟨e_2, G_2 e_2⟩_T ≥ ε_1 |e_1|_T² + δ_1 |G_1 e_1|_T² + ε_2 |e_2|_T² + δ_2 |G_2 e_2|_T².

Using this bound and the Schwarz inequality in the first bound, we have

Using the hypothesis that δ_1 + ε_2 =: μ_1 > 0 and δ_2 + ε_1 =: μ_2 > 0, we have

≤ ( (δ_2 + 1)² / (μ_1 δ_2) ) |u_1|_T² + ( (δ_1 + 1)² / (μ_2 δ_1) ) |u_2|_T².
f(t) = Σ_{i=1}^∞ f_i δ(t − t_i) + f_a(t),   (4.64)

where f_i ∈ ℝ, t_i ∈ ℝ_+, δ(·) is the unit delta distribution, and f_a(·) ∈ L_1([0, ∞[), is said to belong to the algebra of stable convolution functions A, provided that

Σ_{i=1}^∞ |f_i| < ∞.
The set A with the norm |·|_A is a Banach space (see Problem 4.16). It is made into an algebra by defining the binary convolution operation between two functions f, g in A as

f ∗ g(t) := ∫_0^∞ f(t − τ) g(τ) dτ.

It may be verified that convolution makes A into an algebra (Problem 4.16). The Laplace transform of a function in A belongs to the set Â. Thus, f̂(s) ∈ Â is analytic in ℂ_+.
Remarks:
1. The set Â includes the space of proper rational transfer functions (proper means that the degree of the numerator is less than or equal to that of the denominator) which are analytic in ℂ_+ and have no poles on the jω axis. On the other hand, functions in A of the kind

f(t) = Σ_{i=1}^∞ (1/(i + 1)²) δ(t − i)

have Laplace transforms that belong to Â but are not rational transfer functions.

2. For sequences a(·), we can define the analogue of the set A to be the space of l_1 sequences.

3. The extension of the set A is defined to be A_e, the set of all functions f(·) such that for all finite truncations T, f_T(·) ∈ A.
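Remark 2 can be illustrated concretely: for l_1 sequences the convolution product satisfies the submultiplicative norm bound |a ∗ b|_1 ≤ |a|_1 |b|_1, which is what makes the set a Banach algebra. A minimal sketch (my own example sequences):

```python
import numpy as np

# Two l1 impulse responses (geometrically decaying, hence summable):
rng = np.random.default_rng(1)
a = rng.standard_normal(50) * 0.9 ** np.arange(50)
b = rng.standard_normal(50) * 0.8 ** np.arange(50)

c = np.convolve(a, b)            # the algebra product a * b
norm = lambda x: np.abs(x).sum() # the l1 norm

# Submultiplicativity: |a * b|_1 <= |a|_1 |b|_1.
assert norm(c) <= norm(a) * norm(b) + 1e-12
```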
Given a function h(·) ∈ A_e, we define a zero initial condition single-input single-output (SISO) linear system with input u ∈ L_pe to be y(·) defined by

y(t) = (h ∗ u)(t) = ∫_0^t h(t − τ) u(τ) dτ.   (4.65)
The following theorem gives stability conditions for SISO linear systems. Note that for linear systems finite gain stability and incremental finite gain stability are the same concept. We will use the term L_p stable for linear systems to mean finite gain or incremental finite gain L_p stable.
Theorem 4.24 Input-Output Stability of SISO Linear Time Invariant Systems. Consider the SISO linear time-invariant system H described by equation (4.65) with the convolution kernel h(·) ∈ A_e. Then we have that for all p = 1, 2, …, ∞, H is L_pe finite gain stable if and only if h ∈ A. Moreover, in this case we have that

|Hu|_p ≤ |h|_A |u|_p.   (4.66)
Proof: We begin with the proof that h ∈ A implies L_pe bounded-input bounded-output stability. We start with p = 1, 2, … < ∞; the p = ∞ case needs to be done separately. Let u(·) ∈ L_p. Then, defining as before the q such that

1/p + 1/q = 1,

we see that

|y(t)| ≤ ∫_0^t |h(t − τ)|^{1/q} |h(t − τ)|^{1/p} |u(τ)| dτ ≤ |h|_A^{1/q} ( ∫_0^t |h(t − τ)| |u(τ)|^p dτ )^{1/p},

using the Hölder inequality. Taking the p-th power of both sides we get that

|y(t)|^p ≤ |h|_A^{p/q} ∫_0^t |h(t − τ)| |u(τ)|^p dτ.   (4.68)

Integrating over t and interchanging the order of integration yields

∫_0^∞ |y(t)|^p dt ≤ |h|_A^{p/q} ∫_0^∞ ∫_0^∞ |h(t − τ)| |u(τ)|^p dt dτ
   = |h|_A^{p/q} ∫_0^∞ [ ∫_0^∞ |h(t − τ)| dt ] |u(τ)|^p dτ   (4.69)
   = |h|_A^{p/q} |h|_A |u|_p^p.

The last equation establishes that |y|_p ≤ |h|_A |u|_p, so that H is L_p finite gain stable. For p = ∞, write
y(t) = Σ_{i=1}^∞ h_i u(t − t_i) + ∫_0^t h_a(t − τ) u(τ) dτ,

so that

|y(t)| ≤ sup_t |u(t)| [ Σ_{i=1}^∞ |h_i| + ∫_0^∞ |h_a(τ)| dτ ] = |h|_A |u(·)|_∞.
For the converse, that is, to show that L_p stability implies that h ∈ A, we note that if the linear system H is stable for all L_p, it is in particular L_1 stable. Also, the linear system H is a continuous linear operator from L_1 to L_1 (the finite gain is actually the Lipschitz constant of continuity!). Since every distribution in A can be obtained as a limit (in the sense of distributions or in the A norm) of functions in L_1, it follows that H also maps functions in A to functions in A. In particular, this implies that H(δ(t)) = h(t) ∈ A, completing the proof. □
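In discrete time the analogue of the bound in Theorem 4.24 (with |h|_A replaced by the l_1 norm of the impulse response) can be checked directly; this is Young's inequality ‖h ∗ u‖_p ≤ ‖h‖_1 ‖u‖_p. A sketch with my own example data:

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.standard_normal(40) * 0.85 ** np.arange(40)  # summable kernel
u = rng.standard_normal(500)
y = np.convolve(h, u)[: u.size]                      # causal output y = h * u

h_norm = np.abs(h).sum()                             # discrete analogue of |h|_A
for p in (1, 2, 4):
    yp = (np.abs(y) ** p).sum() ** (1.0 / p)
    up = (np.abs(u) ** p).sum() ** (1.0 / p)
    assert yp <= h_norm * up + 1e-9                  # |y|_p <= |h|_1 |u|_p
assert np.abs(y).max() <= h_norm * np.abs(u).max() + 1e-9   # p = infinity
```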
|f|_2² = (1/2π) ∫_{−∞}^∞ |f̂(jω)|² dω.   □
The bound (4.70) is a better bound than (4.66), since it can be shown that sup_ω |ĥ(jω)| is the L_2 induced norm of the linear operator H. It has the physical interpretation of being the maximum amplitude of the frequency domain transfer function ĥ(jω). The idea of the proof is to follow this intuition. The proof would be completely straightforward if sinusoidal signals belonged to L_2, but since they do not, the idea is to approximate a sinusoidal signal at the frequency ω_sup at which the supremum is achieved by

e^{−εt} sin(ω_sup t)

for progressively smaller ε (see [83] for details).
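The gap between the two bounds can be seen on a concrete kernel (my own choice): for h(t) = e^{−t} − 2e^{−2t} the A-norm is 1/2, while sup_ω |ĥ(jω)| = 1/3, so the L_2 induced norm estimate is strictly sharper than |h|_A:

```python
import numpy as np

# Kernel h(t) = e^{-t} - 2 e^{-2t}; its transform is h^(s) = -s/((s+1)(s+2)).
t, dt = np.linspace(0.0, 40.0, 400001, retstep=True)
h = np.exp(-t) - 2.0 * np.exp(-2.0 * t)
h_A = np.sum(np.abs(h)) * dt             # A-norm; exact value is 1/2

w = np.linspace(0.0, 50.0, 200001)
H = -1j * w / ((1j * w + 1.0) * (1j * w + 2.0))
gain_L2 = np.abs(H).max()                # exact value is 1/3, at w = sqrt(2)

assert abs(h_A - 0.5) < 1e-3
assert abs(gain_L2 - 1.0 / 3.0) < 1e-6
assert gain_L2 < h_A                     # the L2 bound beats the A-norm bound
```

For nonnegative kernels the two quantities coincide; the sign change in h is what opens the gap.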
The multiple-input multiple-output (MIMO) generalization of this theorem to systems of the form H : L_pe^{n_i} → L_pe^{n_o} described by

y(t) = ∫_0^t h(t − τ) u(τ) dτ,   (4.71)

where u(t) ∈ ℝ^{n_i}, y(t) ∈ ℝ^{n_o}, h(t) ∈ ℝ^{n_o×n_i}, is as follows. The proof is quite procedural and is left to the exercises.
Theorem 4.26 Input-Output Stability of MIMO Linear Time Invariant Systems. Consider the MIMO linear time invariant system H described by equation (4.71) with the convolution kernel h(·) ∈ A_e^{n_o×n_i}. Then we have that for all p = 1, 2, …, ∞, H is L_pe stable if and only if h ∈ A^{n_o×n_i}. Moreover, in this case we have that
systems, with some linear systems in the feedback loop. The results that we derive frequently have some very nice graphical interpretations. We begin with a lemma known as the Paley-Wiener lemma. Its proof, while straightforward, is not in the flow of this section, and we refer the interested reader to Rudin [246].

Lemma 4.27 Paley-Wiener Lemma. Let f ∈ A. Then the function 1/f̂ ∈ Â if and only if

inf_{Re s ≥ 0} |f̂(s)| > 0.   (4.74)

In the preceding equation, the notation Re refers to the real part of a complex number.
Remarks:
1. The elements of the algebra A that satisfy the Paley-Wiener condition (4.74) are the units of the algebra.

2. As a consequence of the Paley-Wiener lemma, it is easy to see (Problem 4.18) that for a matrix F̂(s) ∈ Â^{n×n}, F̂^{−1}(s) ∈ Â^{n×n} if and only if

inf_{Re s ≥ 0} |det F̂(s)| > 0.
Now, consider the feedback interconnection of two linear time invariant MIMO systems as shown in Figure 4.12, which is essentially the same as Figure 4.7 with the nonlinear operators N_1, N_2 replaced by linear operators G_1, G_2. The inputs to the system are u_1(·), u_2(·) ∈ L_pe, and the outputs are y_1(·), y_2(·) ∈ L_pe. The loop equations are given by

e_1 = u_1 − y_2,
e_2 = u_2 + y_1,   (4.75)
y_1 = G_1(e_1),
y_2 = G_2(e_2).
Since Â is an algebra, it follows that all the entries of the matrix in (4.76) are in Â, establishing the theorem. □
[Figure 4.13: the Lur'e system, a feedback interconnection of a linear system ĝ(s) and a memoryless nonlinearity φ.]

Φ(·) on L_2e given by (Φe)(t) = φ(e(t), t). The nonlinearity is said to belong to the sector [a, b] if φ(0, t) ≡ 0 and

a σ² ≤ σ φ(σ, t) ≤ b σ²   ∀ σ.
Theorem 4.29 Circle Criterion. Consider the Lur'e system of Figure 4.13, with g ∈ A and φ in the sector [−r, r]. Then the closed loop system is finite gain L_2 stable if

sup_{ω ∈ ℝ} |ĝ(jω)| < r^{−1}.   (4.77)

In the instance that φ belongs to the sector [a, b] for arbitrary a, b, define

k = (a + b)/2   and   r = (b − a)/2,

and the transfer function

ĝ_c = ĝ / (1 + k ĝ).
Proof: The proof of the first part of the theorem is a simple application of the incremental small gain theorem (Theorem 4.17). The proof of the second part of the theorem uses the loop transformation theorem (Theorem 4.18) to transform the linear time invariant system g and nonlinearity Φ to

ĝ_c = ĝ / (1 + k ĝ),   φ_c(σ, t) = φ(σ, t) − k σ.

Note that Φ_c lies in the sector [−r, r], so that the first part of the theorem can be applied. The details are left to the exercises. □
Remarks: The foregoing theorem is called the circle criterion because of its graphical interpretation. If ĝ is a rational transfer function, then the Nyquist criterion may be used to give a graphical interpretation of the conditions of the theorem:
1. Condition (4.77) requires that the Nyquist locus of the open loop transfer function ĝ(jω) lie inside a disk of radius r^{−1}, as shown in Figure 4.14.

2. Condition (4.78) has the following three cases, shown in Figure 4.15. We define D(a, b) to be the disk in the complex plane passing through the points −1/a, −1/b on the real axis.
FIGURE 4.14. Interpretation of the circle criterion for symmetric sector bounded nonlinearities
a. ab > 0. With the jω axis indented around the purely imaginary open loop poles of ĝ(s), the Nyquist locus of the open loop transfer function ĝ(jω) is bounded away from the disk D(a, b) and encircles it counterclockwise ν_+ times as ω goes from −∞ to ∞, where ν_+ is the number of ℂ_+ poles of the open loop transfer function ĝ(s).

b. 0 = a < b. Here ĝ(s) has no ℂ_+ poles and the Nyquist locus of ĝ(jω) satisfies

inf_{ω ∈ ℝ} Re ĝ(jω) > −1/b.

c. a < 0 < b. Here ĝ(s) has no ℂ_+ poles and the Nyquist locus of ĝ(jω) is contained inside the disk D(a, b), bounded away from the circumference of the disk.
[Figure 4.15: the three cases of the circle criterion in the complex plane, showing the disk D(a, b) through the points −1/a and −1/b on the real axis.]
We will revisit the circle criterion in Chapter 6, where we will derive the result using Lyapunov techniques. It is worth noting that the statement of the theorem actually applies to some cases which are not covered by the Lyapunov techniques of the next two chapters, since it allows for the possibility of the linear system being a distributed parameter or infinite dimensional linear system. Of course, in this case the extension of the graphical interpretation using the Nyquist criterion just described needs to be done with care.
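A minimal numerical check of condition (4.77), using the illustrative choice g(s) = 1/((s+1)(s+2)) (my own example plant):

```python
import numpy as np

w = np.linspace(0.0, 100.0, 100001)
g = 1.0 / ((1j * w + 1.0) * (1j * w + 2.0))   # g(s) = 1/((s+1)(s+2)), g in A
sup_gain = np.abs(g).max()                    # attained at w = 0: g(0) = 1/2

assert abs(sup_gain - 0.5) < 1e-9
# (4.77) holds for any odd sector [-r, r] with r < 2, e.g.:
r = 1.9
assert sup_gain < 1.0 / r
```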
A multivariable circle criterion can also be derived (see Problem 4.20) when Ĝ(s) ∈ Â^{n×n} and the nonlinearity Φ satisfies

(4.79)
The passivity theorem of the previous section can also be applied to the Lur'e
system to give the following theorem:
Theorem 4.30 Popov Criterion. Consider the Lur'e feedback system of Figure 4.13. Assume in addition that the nonlinearity is time invariant, i.e.,

Φ(e_2)(t) := φ(e_2(t)),

and belongs to the sector [0, k]. Also assume that both g and sg ∈ A. Then, if there exist q ∈ ℝ, β > 0 such that

inf_{ω ∈ ℝ} Re [(1 + jωq) ĝ(jω)] + 1/k =: β > 0,   (4.80)

then the closed loop system is L_2 finite gain stable from the inputs u_1, u_2, u̇_2 to the outputs y_1, y_2; that is, there exists γ < ∞ such that
Proof: The proof of the Popov criterion uses the following trick, known as introducing "multipliers." We redraw Figure 4.13 as Figure 4.16 by introducing the multiplier 1 + qs. In this process u_2 gets modified to v_2 = u_2 + q u̇_2. Define ĝ_1(s) = (1 + qs) ĝ(s), the corresponding linear operator G_1, and the nonlinear block G_2 as the concatenation of 1/(1 + qs) and the nonlinearity Φ. The input u_1 and the signals e_1, e_2, y_1, y_2 are unchanged, but they may not be present as the outputs or inputs of blocks as shown in the figure. The output of the G_1 block is labeled z_1 (its input is e_1), and the input to the G_2 block is labeled w_2 (its output is y_2). It is important to specify the initial condition of the block 1/(1 + qs) in G_2 as 0. Now,

⟨e_1, G_1 e_1⟩_T = ∫_0^T e_1^T(t)(G_1 e_1)(t) dt = ∫_0^∞ e_{1T}^T(t)(G_1 e_{1T})(t) dt,

where we have used the causality of G_1 in the last step to equate (G_1(e_1))_T to G_1(e_{1T}). Now, by hypothesis, G_1 is an L_2 finite gain stable operator; hence G_1(e_{1T})
FIGURE 4.16. Showing the loop transformation required for the Popov criterion
⟨e_1, G_1(e_{1T})⟩ = (1/2π) ∫_{−∞}^∞ Re ĝ_1(jω) |ê_{1T}(jω)|² dω.
⟨z_2, y_2⟩_T = ∫_0^T φ(e_2(t)) e_2(t) dt + q ∫_0^T φ(e_2(t)) ė_2(t) dt.

The second term above can be shown to be nonnegative, since

∫_0^T φ(e_2(t)) ė_2(t) dt = ∫_{e_2(0)}^{e_2(T)} φ(σ) dσ = ∫_0^{e_2(T)} φ(σ) dσ ≥ 0.

Hence

⟨z_2, y_2⟩_T ≥ ∫_0^T φ(e_2(t)) e_2(t) dt.
Combining this equation with equation (4.81), we see from Proposition 4.22 that the proposition holds. □

Remarks:

1. The Popov criterion reduces to the circle criterion if the q in equation (4.80) is zero. Of course, the circle criterion is more general in this instance, since it allows for arbitrary sector bounded nonlinearities and time varying nonlinearities.

2. The Popov criterion does not quite prove L_2 stability of the closed loop system, since it requires also that u̇_2 ∈ L_2.

3. In the instance that ĝ, sĝ are proper rational functions, there is a very pleasing graphical interpretation of the condition (4.80), shown in Figure 4.17, which shows a plot of Re ĝ(jω) against ω Im ĝ(jω) as ω goes from 0 to ∞. The plot need not be drawn for negative ω, since both Re ĝ(jω) and ω Im ĝ(jω) are even functions of ω. The inequality (4.80) means that there exists a straight line through −1/k with slope 1/q which lies above the plot.
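Condition (4.80) and its graphical reading can be checked numerically; the plant g(s) = 1/(s+1)² and multiplier q = 1 below are my own illustrative choices:

```python
import numpy as np

w = np.linspace(0.0, 100.0, 100001)
g = 1.0 / (1j * w + 1.0) ** 2       # g(s) = 1/(s+1)^2; g and s*g are in A
k, q = 10.0, 1.0                    # sector [0, k], multiplier 1 + qs

# With q = 1:  Re[(1 + jwq) g(jw)] = 1/(1 + w^2) >= 0, so (4.80) holds
# with beta close to 1/k for every finite k.
popov_lhs = (g.real - q * w * g.imag).min() + 1.0 / k
assert popov_lhs > 0.09

# Graphically: the Popov plot (Re g(jw), w Im g(jw)) lies below the
# straight line through -1/k with slope 1/q.
assert np.all(w * g.imag < (g.real + 1.0 / k) / q)
```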
[Figure 4.17: the Popov plot, with a straight line of slope 1/q through −1/k lying above the plot.]
the form

y(t) = ∫_{−∞}^t h(t − τ) u(τ) dτ.   (4.82)

Here the fact that the integral has upper limit t models a causal linear system, and the lower limit of −∞ models the lack of an initial condition in the system description (hence, the entire past history of the system). In contrast to previous sections, where the dependence of the input-output operator on the initial condition was explicit, here we will replace this dependence on the initial condition by having the limits of integration run from −∞ rather than 0. In this section, we will explore the properties of a nonlinear generalization of (4.82) of the form

(we will assume that h_i(·) ∈ L_1, so that the integral above is well defined as a map from L_∞ → L_∞). The overall input-output system is a cubic or degree 3
4.7 Volterra Input-Output Representations 169
y(t) = ∫_{−∞}^t ⋯ ∫_{−∞}^t h_n(t, σ_1, σ_2, …, σ_n) u(σ_1) u(σ_2) ⋯ u(σ_n) dσ_1 ⋯ dσ_n   (4.84)

with the assumption that h_n(·) ∈ L_1, that is,

∫_{−∞}^∞ |h_n(t, σ_1, …, σ_n)| dt dσ_1 ⋯ dσ_n =: k < ∞.   (4.85)

We will also assume that the system of (4.84) is time-invariant, so that the kernel h_n(t, σ_1, σ_2, …, σ_n) is actually a function of t − σ_1, σ_1 − σ_2, …. Thus, while we could continue the development of what follows for general time-varying systems, we will restrict attention in this section to time-invariant Volterra descriptions, and will emphasize the dependence of h_n on the difference of time arguments. An input-output system is said to be polynomial of degree N if it can be written as
y(t) = N(u)(t).   (4.87)
Remarks:

1. A Volterra system with the series of (4.87) truncated to length k is a good approximation of the full Volterra system for those inputs u(·) for which

Σ_{n=k+1}^∞ k_n |u(·)|_∞^n

is small.

2. A Volterra system is incrementally finite gain L_∞ stable (see Problem 4.26 for a proof).

3. While Volterra series operators are usually viewed as maps on L_∞, we can also view them as operators on L_p spaces provided we exert care about their domain of definition (which may not include open subsets of L_p). However, if for some u(·) ∈ L_p we have that N(u)(·) ∈ L_p, we can derive incremental finite gain bounds of the kind derived on L_∞ (see Problem 4.27).
Here A(t), D(t) ∈ ℝ^{n×n}, b(t) ∈ ℝ^n, c^T(t) ∈ ℝ^n, and the input u(t) and output y(t) are scalars. Then

y(t) = c(t)Φ(t, 0)x_0 + Σ_{k=1}^∞ ∫_0^t ∫_0^{σ_1} ⋯ ∫_0^{σ_{k−1}} c(t)Φ(t, σ_1)D(σ_1)Φ(σ_1, σ_2)D(σ_2) ⋯   (4.89)

Here Φ(t, τ) stands for the state transition matrix of the unforced system ẋ = A(t)x(t).
Proof: The proof follows along the lines of the Peano-Baker series of Problem 3.11. More precisely, define z(t) = Φ(t, 0)^{−1} x(t) and verify that

where

D̃(t) = Φ^{−1}(t, 0) D(t) Φ(t, 0),   b̃(t) = Φ^{−1}(t, 0) b(t),   c̃(t) = c(t) Φ(t, 0).   (4.91)

The Peano-Baker series for (4.90) follows from the recursive formula
Now use the fact that Φ(t, 0) is bounded for given t, along with the boundedness of D(t), b(t), c(t), to prove that as i → ∞ the series of (4.92) is uniformly convergent for bounded u(·). The fact that the limit z_∞(·) is the solution of the differential equation (4.90) follows by differentiating the limit of equation (4.92) and verifying the initial condition (by the existence-uniqueness theorem of differential equations). The desired formula (4.89) now follows by substituting for D̃(t), b̃(t), c̃(t), with z_0 = x_0. □
Note that the Volterra series expansion (4.89) of the preceding proposition actually does depend on the initial condition at time 0. As a consequence, the integrals run from 0 to t rather than from −∞ to t. For more general nonlinear control systems of the form

ẋ = f(x) + g(x)u,
                                           (4.93)
y = h(x),

the derivation of the Volterra series can be involved. It is traditional in this approach to append an extra state x_{n+1} = y(t) to the dynamics of x(t) so as to be able to
assurne that the output is a linear function of the (augmented) states. This having
been done, the typical existence theorem for a Volterra series expansion uses the
variational technique of considering the solution of the driven control system as
a variation of the unforced system and proving that for smalI enough lu(·) 100 the
Volterra series converges (see Chapter 3 of [248]). However, such proofs are not
constructive. An interesting approach is the Carleman (bi)linearization of control
systems discussed in Chapter 3. Indeed, recalI from Problem 3.16 that the state
trajectory of an arbitrary linear analytic system of the form
ẋ = Σ_{i=1}^∞ A_i x^(i) + Σ_{i=0}^∞ B_i x^(i) u,
                                                  (4.94)
y = Σ_{i=1}^∞ C_i x^(i)
may be approximated by a bilinear control system with state x⊕ ∈ ℝ^{n+n²+···+n^N}:

dx⊕/dt = [A_11 A_12 ⋯ A_1N ; 0 A_22 ⋯ A_2N ; ⋯ ; 0 0 ⋯ A_NN] x⊕
         + [B_11 B_12 ⋯ B_1N ; 0 B_22 ⋯ B_2N ; ⋯ ; 0 0 ⋯ B_NN] x⊕ u
         + [B_10 ; 0 ; ⋯ ; 0] u,    (4.95)

y = [C_1 C_2 ⋯ C_N] x⊕,
for suitably defined A_ij, B_ij. In principle, the Volterra series of (4.95) may be computed using Proposition 4.34. We say "in principle" because the dimension of the system grows rather dramatically with the parameter N of the approximation. Because of the upper triangular form of the Carleman bilinear approximation there is some simplification in the form of the Volterra kernels. Also, the Volterra series expansion of the approximate system of (4.95) agrees exactly with the first N terms of the Volterra series of the original system (that is, the terms that are polynomial of order less than or equal to N). The proof of this statement, while straightforward, is notationally involved.
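The agreement between a truncated Carleman approximation and the original system is easy to see numerically. The sketch below is an illustration, not the book's equation (4.95): it takes the hypothetical input-free scalar system ẋ = −x + x², whose augmented states z_k = x^k satisfy ż_k = −k z_k + k z_{k+1}, truncates at order N (an upper bidiagonal matrix), and compares with the closed-form solution.

```python
import numpy as np
from scipy.linalg import expm

# Carleman linearization of xdot = -x + x^2 (illustrative scalar example).
# Augmented states z_k = x^k obey zdot_k = -k z_k + k z_{k+1}; truncate z_{N+1} = 0.
N = 8
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = -(k + 1)
    if k + 1 < N:
        A[k, k + 1] = k + 1          # upper-triangular coupling to z_{k+1}

x0, t = 0.2, 1.0
z0 = x0 ** np.arange(1, N + 1)        # initial augmented state (x0, x0^2, ...)
x_carleman = (expm(A * t) @ z0)[0]    # first component approximates x(t)

x_exact = x0 / (x0 + (1 - x0) * np.exp(t))   # closed-form solution
x_linear = x0 * np.exp(-t)                   # N = 1: plain linearization

print(abs(x_carleman - x_exact) < abs(x_linear - x_exact))
```

Because the truncated matrix is upper triangular, the first N Taylor coefficients in x₀ are reproduced exactly, so the error is of order x₀^{N+1}, far smaller than the error of the plain linearization.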
4.7 Volterra Input-Output Representations 173
y(t) = Σ_{m=−∞}^∞ β_m e^{jω_m t}
with
(4.97)
Proof: Using the form of the multitone signal in the formula for the output y(t) yields

y(t) = Σ_{n=1}^∞ ∫ h_n(τ₁, τ₂, …, τ_n) ∏_{i=1}^n ( Σ_{k=−M}^M α_k e^{jω_k(t−τ_i)} ) dτ_i
     = Σ_{n=1}^∞ Σ_{−M ≤ k₁,…,k_n ≤ M} α_{k₁} ⋯ α_{k_n} ĥ_n(jω_{k₁}, …, jω_{k_n}) e^{j(ω_{k₁}+⋯+ω_{k_n})t}.    (4.98)
The term α_{k₁} ⋯ α_{k_n} ĥ_n(jω_{k₁}, …, jω_{k_n}) e^{j(ω_{k₁}+⋯+ω_{k_n})t} is often called an n-th order intermodulation product, and ĥ_n(jω_{k₁}, …, jω_{k_n}) is the intermodulation distortion measure.¹ Since the terms in equation (4.98) are absolutely summable, it follows that we may evaluate the m-th Fourier coefficient of y(t)
may equal ∞ even in some very elementary situations. Thus, to extend formula (4.97) to more general periodic inputs and almost periodic inputs requires care. In [37] some sufficient conditions are given to establish the absolute convergence of the series above for periodic inputs: For example, if

ĥ_n(jω_{k₁}, …, jω_{k_n}) = O(1/(k₁ ⋯ k_n)),

then

y(t) = Σ_k β_k e^{jω_k t}

with
For more general stationary input signals of the kind discussed earlier in this
chapter, frequency domain formulas for the output of a Volterra system remain an
open problem.
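Intermodulation products are easy to exhibit numerically. The snippet below is an illustrative construction, not an example from the text: a memoryless first- plus second-order polynomial nonlinearity stands in for a simple Volterra operator, driven by a two-tone input. The output spectrum shows energy at the sum, difference, and double frequencies, exactly as (4.98) predicts.

```python
import numpy as np

fs = n = 1000                      # 1 s of data, so FFT bins are 1 Hz apart
t = np.arange(n) / fs
f1, f2 = 50.0, 70.0
u = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)   # two-tone input

y = u + 0.5*u**2                   # first- plus second-order Volterra-like terms

spec = np.abs(np.fft.rfft(y)) / n  # one-sided amplitude spectrum (scaled)
peaks = [k for k in range(len(spec)) if spec[k] > 0.05]
print(peaks)  # [0, 20, 50, 70, 100, 120, 140]: dc, f2-f1, f1, f2, 2f1, f1+f2, 2f2
```

The second-order kernel alone generates the components at 0, f₂ − f₁, 2f₁, f₁ + f₂, and 2f₂ Hz; a third-order term would add products such as 2f₁ − f₂.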
4.8 Summary
This chapter gave an introduction to some frequency domain techniques for the
study of nonlinear systems. Several kinds of techniques have been presented:
¹This term is frequently used to rate high-quality amplifiers and loudspeakers.
approximate techniques such as describing functions, with a proof of when they predict oscillations in a nonlinear system. To this day, describing function techniques remain an important practical tool of engineers; several engineering applications are in [109], [206], and [69]. Likewise, Volterra series, with their frequency domain and state space realization theory, remain an extremely popular tool for the analysis of nonlinear systems, especially slightly nonlinear ones ([112], [267], [248], and [39] are excellent references for more details on this topic). There is also a large literature on realization theory for bilinear systems; see for example Isidori and Ruberti [151] and Evans [95].
Finally, input-output methods for the study of stability of nonlinear systems, their applications to passivity and to the circle and Popov criteria, and robustness analysis of linear and nonlinear control systems represent a large body of literature. A comprehensive treatment is in the book of Desoer and Vidyasagar [83]; see also Vidyasagar [317], Willems [330], and Safonov [249] for the application to robustness of control schemes.
4.9 Exercises
Problem 4.1. Consider u(t) = cos t + cos(πt). Find the autocovariance of u(t). Is u(t) periodic? Is it almost periodic in any sense? A function u(t) is said to be almost periodic if given any ε > 0, there exists T(ε) such that |u(t + T(ε)) − u(t)| ≤ ε for all t. Consider functions of the form

u(t) = Σ_{i=1}^∞ c_i cos ω_i t  with  Σ_{i=1}^∞ |c_i|² < ∞.
Problem 4.2. Prove that the autocovariance of a stationary signal u(·) is positive definite, that is, given t₁, …, t_k ∈ ℝ and c₁, …, c_k ∈ ℝ, then

Σ_{i,j=1}^k c_i c_j R_u(t_i − t_j) ≥ 0.
In this equation T > 0, B > 0, and it models the variation of orders in the
shipbuilding industry (crudely).
Problem 4.8 Calcium release in cells [206]. The following equations model the release of calcium in certain cells:

ẋ₁ = K/(1 + x_n) − b₁x₁,
ẋ_j = x_{j−1} − b_j x_j,  j = 2, 3, …, n.

Use the describing function to predict limit cycles, say for n = 10. Verify your result using a simulation package.
Problem 4.9 Friction Controlled Backlash. Find the sinusoidal input describing function for friction controlled backlash, say in gears that do not quite mesh. This is shown in Figure 4.18 along with the hysteretic input-output characteristic.
Problem 4.10. Prove that if f ∈ L₁ ∩ L_∞, then f ∈ L_p for p ∈ [1, ∞].
Problem 4.11. Show that if A : L_pe^n → L_pe^n is a causal operator and for two functions u₁, u₂ ∈ L_pe^n we have that (u₁)_T = (u₂)_T, then it follows that (A(u₁))_T = (A(u₂))_T.
Problem 4.12. Consider two nonlinear finite gain operators G₁ : L_pe^{n₁} → L_pe^{n₂} and G₂ : L_pe^{n₂} → L_pe^{n₃}. Prove that G₂ ∘ G₁ is a finite gain operator as well. Repeat this also for the case that G₁, G₂ are both incrementally finite gain stable.
Problem 4.13 Feedback interconnection of two systems. Consider the feedback interconnection of two nonlinear operators G₁, G₂ shown in Figure 4.19 with
FIGURE 4.19. Feedback interconnection of the operators G₁ and G₂, with inputs u₁, u₂, errors e₁, e₂, and outputs y₁, y₂.
inputs u₁, u₂, errors e₁, e₂, and two sets of outputs y₁, y₂. Assume that the operators G₁, G₂ are defined from L_pe^{n₁} → L_pe^{n₂} and L_pe^{n₂} → L_pe^{n₁}. Now show that the closed loop system from the inputs u₁, u₂ to the outputs y₁, y₂ is finite gain L_p stable iff the closed loop system defined from the inputs u₁, u₂ to e₁, e₂ is. Specialize this problem to the instance that u₂ = 0 to prove that

e₁ = (I + G₂G₁)⁻¹u₁,  y₁ = G₁(I + G₂G₁)⁻¹u₁.
Problem 4.14 Simple memoryless feedback system [83]. Consider the simple single loop system shown in Figure 4.20 consisting of a memoryless nonlinearity φ : ℝ → ℝ and a linear gain k. The aim of this problem is to choose a φ(·) and k such that
1. For some input u, the feedback loop has no solution.
2. For some input u, the feedback loop has multiple solutions.
3. For some input u, there are infinitely many solutions.
4. The output y of the feedback loop does not depend continuously on the input u.
It is especially interesting to choose φ(·), k such that they satisfy the hypothesis of the small gain theorem (they cannot, however, satisfy the hypothesis of the incremental small gain theorem).
Problem 4.15. Consider the feedback loop of Figure 4.19. Set the input u₂ ≡ 0. Assume that e₁ ∈ L_pe and define u₁ by

u₁ = e₁ + G₂(G₁(e₁)).

Suppose that there exist constants k₂₁, k₁, β₂₁, β₁ with k₂₁, k₁ ≥ 0 such that

Prove that if k₂₁ < 1, then the system has finite gain from u₁ to e₁, y₁.
Problem 4.16 Algebra of stable transfer functions. Prove that the space A of functions of the form

f(t) = Σ_{i=1}^∞ f_i δ(t − t_i) + f_a(t),  f_a ∈ L₁,

is a normed linear space with norm

|f(·)|_A = Σ_{i=1}^∞ |f_i| + |f_a(·)|₁.

Further show that it is a commutative algebra under convolution (please review the definition from Chapter 3). Note that the unit element is the unit delta distribution.
Show also that for the linear time-varying system

y(t) = Σ_{i=1}^∞ g_i(t)u(t − t_i) + ∫₀^t g_a(t, τ)u(τ) dτ,    (4.99)
Since u(t) = 0 for t ≤ 0, it follows that if I(t) := {i : t_i ≤ t}, then the preceding equation can be written with the sum running over I(t) alone. Define

C_∞ := sup_t [ Σ_{i∈I(t)} |g_i(t)| + ∫₀^t |g_a(t, τ)| dτ ] < ∞,

and let C₁ denote the analogous constant with the roles of the two arguments of g_a interchanged. Show that this implies that the linear system described by equation (4.99) is L_pe finite gain stable if C₁, C_∞ < ∞. Relate this to Theorem 4.24.
Problem 4.21 MIMO finite gain stability conditions. Derive the necessary and sufficient conditions for L_p finite gain stability of MIMO linear systems described by (4.71), given in Theorem 4.26.
This is a generalization of the definition of passivity and strict passivity. Now prove that if Q is negative definite, then the system is L₂ finite gain stable, that is, there exist k, β such that

‖Gu‖₂ ≤ k‖u‖₂ + β.
Problem 4.23 Stability of interconnected systems [220]. Consider a collection of N nonlinear operators G_i : L_pe^{n_i} → L_pe^{m_i} that are respectively Q_i, S_i, R_i dissipative. Now define the input vector u = (u₁, u₂, …, u_N)ᵀ and the output vector y = (y₁, y₂, …, y_N)ᵀ and the interconnection of the systems by

u = v − Hy.    (4.101)
2. The equations of a dc motor rotating at speed ω(t) (which is the output y(t)) when excited by the field current (which is the input u(t)) are modeled by writing an electrical circuit equation in the armature with state variable i(t) and a mechanical equation for the variation of the speed ω(t) (the units in the equation below are all normalized, and we have assumed that the load varies linearly with ω) as

(4.103)
y(t) = ω(t).
5.1 Introduction
The study of the stability of dynamical systems has a very rich history. Many famous mathematicians, physicists, and astronomers worked on axiomatizing the concepts of stability. A problem that attracted a great deal of early interest was the problem of stability of the solar system, generalized under the title "the N-body stability problem." One of the first to state formally what he called the principle of "least total energy" was Torricelli (1608-1647), who said that a system of bodies was at a stable equilibrium point if it was a point of (locally) minimal total energy.
In the middle of the eighteenth century, Laplace and Lagrange took the Torricelli principle one step further: They showed that if the system is conservative (that is, it conserves total energy, kinetic plus potential), then a state corresponding to zero kinetic energy and minimum potential energy is a stable equilibrium point. In turn, several others showed that Torricelli's principle also holds when the systems are dissipative, i.e., total energy decreases along trajectories of the system. However, the abstract definition of stability for a dynamical system not necessarily derived from a conservative or dissipative system, and a characterization of stability, were not made until 1892 by a Russian mathematician and engineer, Lyapunov, in response to certain open problems in determining stable configurations of rotating bodies of fluids posed by Poincaré. The original paper of Lyapunov of 1892 was translated into French very shortly thereafter, but its English translation appeared only recently in [193]. The interested reader may consult this reference for many interesting details, as well as the historical and biographical introduction in this issue of the International Journal of Control by A. T. Fuller. There is another interesting
survey paper about the impact of Lyapunov's stability theorem on feedback control by Michel [208].
At heart, the theorems of Lyapunov are in the spirit of Torricelli's principle. They give a precise characterization of those functions that qualify as "valid energy functions" in the vicinity of equilibrium points and the notion that these "energy functions" decrease along the trajectories of the dynamical systems in question. These precise concepts were combined with careful definitions of different notions of stability to give some very powerful theorems. The exposition of these theorems is the main goal of this chapter.
5.2 Definitions
This chapter is concerned with general differential equations of the form
(5.2)
denotes the partial derivative matrix of f with respect to x (the subscript 1 stands for the first argument of f(x, t)); then |D₁f(x, t)| ≤ l implies that f is Lipschitz continuous with Lipschitz constant l (again locally, globally, or semi-globally depending on the region in x on which the bound on |D₁f(x, t)| is valid). We have seen in Chapter 3 that if f is locally bounded and Lipschitz continuous in x, then the differential equation (5.1) has a unique solution on some time interval (so long as x ∈ B_h).
Using the Bellman-Gronwall lemma twice (refer back to Chapter 3 and work out the details for yourself) yields equation (5.3), provided that the trajectory stays in the ball B_h where the Lipschitz condition holds. □
5.2 Definitions 185
The preceding proposition implies that solutions starting inside B_h will stay in B_h for at least a finite time. Also, if f(x, t) is globally Lipschitz, it guarantees that the solution has no finite escape time, that is, it is finite at every finite instant. The proposition also establishes that solutions x(t) cannot converge to zero faster than exponentially.
We are now ready to make the stability definitions. Informally, x = 0 is a stable equilibrium point if trajectories x(t) of (5.1) remain close to the origin if the initial condition x₀ is close to the origin. More precisely, we have the following definition.
Definition 5.4 Stability in the sense of Lyapunov. The equilibrium point x = 0 is called a stable equilibrium point of (5.1) if for all t₀ ≥ 0 and ε > 0, there exists δ(t₀, ε) such that
FIGURE 5.2. An equilibrium point which is not stable, but for which all trajectories tend to the origin if one includes the point at infinity.
The previous definitions are local, since they concern neighborhoods of the equilibrium point. Global asymptotic stability and global uniform asymptotic stability are defined as follows:
(5.8)
for all x₀ ∈ B_h, t ≥ t₀ ≥ 0. The constant α is called (an estimate of) the rate of convergence.
Global exponential stability is defined by requiring equation (5.8) to hold for all x₀ ∈ ℝ^n. Semi-global exponential stability is also defined analogously except that m, α are allowed to be functions of h. For linear (possibly time-varying) systems it will be shown that uniform asymptotic stability is equivalent to exponential stability, but in general, exponential stability is stronger than asymptotic stability.
188 5. Lyapunov Stability Theory
An l.p.d.f. is locally like an "energy function." Functions that are globally like "energy functions" are called positive definite functions (p.d.f.s) and are defined as follows:
In the preceding definitions of l.p.d.f.s and p.d.f.s, the energy was not bounded above as t varied. This is the topic of the next definition.
1. v(x, t) = |x|²: p.d.f., decrescent.
2. v(x, t) = xᵀPx, with P ∈ ℝ^{n×n} > 0: p.d.f., decrescent.
3. v(x, t) = (t + 1)|x|²: p.d.f.
4. v(x, t) = e^{−t}|x|²: decrescent.
5. v(x, t) = sin²(|x|²): l.p.d.f., decrescent.
6. v(x, t) = e^t xᵀPx with P not positive definite: not in any of the above classes.
7. v(x, t) not explicitly depending on time t: decrescent.
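A quick numeric check of example 5 (a throwaway script, not from the text) shows why v(x, t) = sin²(|x|²) is only locally positive definite: it is positive for small nonzero |x| but vanishes on the whole sphere |x|² = π, so no class-K lower bound can hold globally.

```python
import math

def v(x):                       # v(x, t) = sin^2(|x|^2); no explicit t-dependence
    r2 = sum(xi * xi for xi in x)
    return math.sin(r2) ** 2

print(v((0.3, 0.4)))                   # positive near the origin (l.p.d.f.)
print(v((math.sqrt(math.pi), 0.0)))    # essentially zero at |x|^2 = pi: not a p.d.f.
```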
Proof:
1. Since v is an l.p.d.f., we have that for some α(·) ∈ K,
Such a δ always exists, since β(t₀, δ) is a continuous function of δ and α(ε₁) > 0. We now claim that |x(t₀)| ≤ δ implies that |x(t)| < ε₁ ∀t ≥ t₀. The proof is
it follows that |x(t₀)| < ε₁. Now, if it is not true that |x(t)| < ε₁ for all t, let t₁ > t₀ be the first instant such that |x(t)| ≥ ε₁. Then
(5.15)
But this is a contradiction, since v̇(x(t), t) ≤ 0 for all |x| < ε₁. Thus,
2. Since v is decrescent,
(5.16)
The hypotheses guarantee that there exist functions α(·), β(·), γ(·) ∈ K such that ∀t ≥ t₀, ∀x ∈ B_r. Choose

T = α(r)/γ(δ₂).

This choice is explained in Figure 5.3. We now show that there exists at least one instant t₂ ∈ [t₁, t₁ + T] when |x(t₂)| < δ₂. The proof is by contradiction. Recall the notation that φ(t, x₀, t₀) stands for the trajectory of (5.1) starting from x₀ at time t₀. Indeed, if
5.3 Basic Stability Theorems of Lyapunov 191
α(δ₂) ≤ v(x(t₁ + T), t₁ + T)
      ≤ β(δ₁) − Tγ(δ₂)
      ≤ β(δ₁) − α(r)
      < 0,

a contradiction. For t ≥ t₂ we then have

α(|φ(t, x(t₂), t₂)|) ≤ v(φ(t, x(t₂), t₂), t)
                     ≤ β(δ₂)
                     < α(ε).
However, we emphasize that this correlation is not perfect, since v(x, t) being an l.p.d.f. and −v̇(x, t) being an l.p.d.f. does not guarantee local asymptotic stability. (See Problem 5.1 for a counterexample from Massera [227].)
2. The proof of the theorem, while seemingly straightforward, is subtle in that it is an exercise in the use of contrapositives.
(5.17)
The Lyapunov function candidate is the total energy of the system, namely,

v(x) = x₂²/2 + ∫₀^{x₁} g(σ) dσ.

FIGURE. Mass-spring system with friction f(x₂).

The first term is the energy stored in the inductor (kinetic energy of the body) and the second term the energy stored in the capacitor (potential energy stored in the spring). The function v(x) is an l.p.d.f., provided that g(x₁) is not identically zero on any interval (verify that this follows from the passivity of g). Also,

v̇(x) = −x₂ f(x₂) ≤ 0

when |x₂| is less than σ₀. This establishes the stability but not asymptotic stability of the origin. In point of fact, the origin is actually asymptotically stable, but this needs the LaSalle principle, which is deferred to a later section.
2. Swing Equation
The dynamics of a single synchronous generator coupled to an infinite bus are given by

θ̇ = ω,
ω̇ = −M⁻¹Dω + M⁻¹(P − B sin(θ)).    (5.18)

Here θ is the angle of the rotor of the generator measured relative to a synchronously spinning reference frame, and its time derivative is ω. Also M is the moment of inertia of the generator and D its damping, both in normalized units; P is the exogenous power input to the generator from the turbine and B the susceptance of the line connecting the generator to the rest of the network, modeled as an infinite bus (see Figure 5.5). A choice of Lyapunov function is

v(θ, ω) = ½Mω² − Pθ − B cos(θ).

The equilibrium point is θ = sin⁻¹(P/B), ω = 0. By translating the origin to this equilibrium point it may be verified that v(θ, ω) − v(θ₀, 0) is an l.p.d.f. around it. Further it follows that

v̇(θ, ω) = −Dω² ≤ 0,

yielding the stability of the equilibrium point. As in the previous example, one cannot conclude asymptotic stability of the equilibrium point from this analysis.
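The energy argument can be checked in simulation. The sketch below assumes the sign convention ω̇ = −M⁻¹Dω + M⁻¹(P − B sin θ) with v(θ, ω) = ½Mω² − Pθ − B cos θ, and all parameter values are made up for illustration; it integrates the swing equation with RK4 and confirms that v never increases and that the state settles at θ = sin⁻¹(P/B), ω = 0.

```python
import numpy as np

M, D, P, B = 1.0, 0.5, 0.5, 1.0        # illustrative normalized parameters

def f(s):                               # swing dynamics, s = (theta, omega)
    th, w = s
    return np.array([w, (-D*w + P - B*np.sin(th)) / M])

def v(s):                               # energy: (1/2) M w^2 - P theta - B cos theta
    th, w = s
    return 0.5*M*w**2 - P*th - B*np.cos(th)

s = np.array([1.0, 0.5])
dt = 1e-3
vs = [v(s)]
for _ in range(60000):                  # RK4 integration over 60 s
    k1 = f(s); k2 = f(s + dt/2*k1); k3 = f(s + dt/2*k2); k4 = f(s + dt*k3)
    s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    vs.append(v(s))

# vdot = -D w^2 <= 0, so v is nonincreasing; with damping the state in fact
# settles at the equilibrium theta = arcsin(P/B), omega = 0
print(vs[-1] <= vs[0], abs(s[1]), abs(np.sin(s[0]) - P/B))
```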
FIGURE 5.5. The generator is connected through a line of admittance jB to an infinite bus; P is the input power.

3. Damped Mathieu Equation
This equation models the dynamics of a pendulum whose parameters vary sinusoidally in time:

ẋ₁ = x₂,    (5.19)
ẋ₂ = −x₂ − (2 + sin t)x₁.
The total energy is again a candidate for a Lyapunov function, namely,

v(x, t) = x₁² + x₂²/(2 + sin t).

It is an l.p.d.f., since

x₁² + x₂² ≥ x₁² + x₂²/(2 + sin t) ≥ x₁² + x₂²/3.

Further, a small computation yields

v̇(x, t) = −x₂²(4 + 2 sin t + cos t)/(2 + sin t)² ≤ 0.
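The "small computation" can be delegated to a computer algebra system. The following verification script (not part of the text) differentiates v along the trajectories of (5.19) and confirms the formula for v̇:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
v = x1**2 + x2**2/(2 + sp.sin(t))

# derivative of v along x1' = x2, x2' = -x2 - (2 + sin t) x1
vdot = (sp.diff(v, t)
        + sp.diff(v, x1)*x2
        + sp.diff(v, x2)*(-x2 - (2 + sp.sin(t))*x1))

target = -x2**2*(4 + 2*sp.sin(t) + sp.cos(t))/(2 + sp.sin(t))**2
print(sp.simplify(vdot - target))   # 0
```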
2. There exists a function v(x, t) and some constants h, α₁, α₂, α₃, α₄ > 0 such that for all x ∈ B_h, t ≥ 0,

α₁|x|² ≤ v(x, t) ≤ α₂|x|²,
v̇(x, t) ≤ −α₃|x|²,    (5.20)
|∂v(x, t)/∂x| ≤ α₄|x|.
Proof:
(1) ⇒ (2): We prove the three inequalities of (5.20) in turn, starting from the definition of v(t, x): define

v(x, t) = ∫_t^{t+T} |φ(τ, x, t)|² dτ,    (5.21)

where T will be defined later. From the exponential stability of the system at rate α and the lower bound on the rate of growth given by Proposition 5.3, we have

m|x|e^{−α(τ−t)} ≥ |φ(τ, x, t)| ≥ |x|e^{−l(τ−t)}    (5.22)

for x ∈ B_h for some h. Also, the Lipschitz constant l of f(x, t) exists because of the assumption that f(x, t) has continuous first partial derivatives with respect to x. This, when used in (5.21), yields the first inequality of (5.20) for x ∈ B_{h'} (where h' is chosen to be h/m) with

α₁ := (1 − e^{−2lT})/(2l),  α₂ := m²(1 − e^{−2αT})/(2α).    (5.23)
• Differentiating (5.21) with respect to t yields
Note that d/dt is the derivative with respect to the initial time t along the trajectories of (5.1). However, since for all Δt the solution satisfies
we have that (d/dt)(|φ(τ, x(t), t)|²) ≡ 0. Using the fact that φ(t, x, t) = x and the exponential bound on the solution, we have that
using this and the bound for exponential convergence in (5.25) yields
This completes the proof, but note that v(x, t) is only defined for x ∈ B_{h'} with h' = h/m, to guarantee that φ(τ, x, t) ∈ B_h for all τ ≥ t (convince yourself of this point).
Using the lower bound for v(t, x(t)) and the upper bound for v(t₀, x(t₀)) we get
with
□
= lim_{n→∞} φ(t + t_n − t₁, x₀, t₀),
and let M be the largest invariant set in S. Then, whenever x₀ ∈ Ω_c, φ(t, x₀, 0) approaches M as t → ∞.
Proof: Let x₀ ∈ Ω_c. Now, since v(φ(t, x₀, 0)) is a nonincreasing function of time, we see that φ(t, x₀, 0) ∈ Ω_c ∀t. Further, since Ω_c is bounded, v(φ(t, x₀, 0)) is also bounded below. Let

c₀ = lim_{t→∞} v(φ(t, x₀, 0))

and let L be the ω-limit set of the trajectory. Then v(y) = c₀ for y ∈ L. Since L is invariant, we have that v̇(y) = 0 ∀y ∈ L, so that L ⊂ S. Since M is the largest invariant set inside S, we have that L ⊂ M. Since φ(t, x₀, 0) approaches L as t → ∞, we have that φ(t, x₀, 0) approaches M as t → ∞. □
Theorem 5.23 LaSalle's Principle to Establish Asymptotic Stability. Let v : ℝ^n → ℝ be such that on Ω_c = {x ∈ ℝ^n : v(x) ≤ c}, a compact set, we have v̇(x) ≤ 0. As in the previous proposition define

S = {x ∈ Ω_c : v̇(x) = 0}.

Then, if S contains no trajectories other than x = 0, then 0 is asymptotically stable.
(5.30)
ẋ₂ = −f(x₂) − g(x₁).
If f, g are locally passive, i.e.,

σf(σ) ≥ 0,  σg(σ) ≥ 0  ∀σ ∈ [−σ₀, σ₀],

then it may be verified that a suitable Lyapunov function (l.p.d.f.) is

v(x₁, x₂) = x₂²/2 + ∫₀^{x₁} g(σ) dσ.
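A numerical illustration of LaSalle's principle on a system of this form, with the hypothetical passive choices f(σ) = σ and g(σ) = σ³ picked purely for illustration: here v̇ = −x₂f(x₂) vanishes on the set {x₂ = 0}, which contains no trajectory other than the origin, and the simulation shows the state converging even though v̇ is not negative definite.

```python
import numpy as np

def f(s): return s          # friction term, passive: s*f(s) >= 0
def g(s): return s**3       # spring term, passive: s*g(s) >= 0

def rhs(x):
    return np.array([x[1], -f(x[1]) - g(x[0])])

def v(x):                   # Lyapunov function: x2^2/2 + integral of g
    return 0.5*x[1]**2 + 0.25*x[0]**4

x = np.array([1.2, -0.4])
v0 = v(x)
dt = 1e-3
for _ in range(100000):     # RK4 integration over 100 s
    k1 = rhs(x); k2 = rhs(x + dt/2*k1); k3 = rhs(x + dt/2*k2); k4 = rhs(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# vdot = -x2*f(x2) = 0 only on {x2 = 0}; LaSalle then gives convergence to 0
print(v(x) < v0, np.linalg.norm(x))
```

The convergence is slow (the cubic spring gives only algebraic decay of x₁), which is consistent with v̇ being merely negative semidefinite.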
Now choose
Here, M, D stand for the moment of inertia and damping of the generator rotor; P_m stands for the exogenous power input, and P_e stands for the local power demand. This equation can be written in state space form with x₁ = θ, x₂ = ω. The equilibrium points of the system are at θ = sin⁻¹((P_m − P_e)/B), ω = 0. Note that the equilibria repeat every 2π radians. Consider a choice of

v(θ, ω) = Mω²/2 + (P_e − P_m)θ − B cos(θ)    (5.32)
with
for i = 1, …, n. Here M_i, D_i stand for the moment of inertia and damping of the i-th generator, and P_{mi}, P_{ei} stand for the mechanical power input and electrical power output of the i-th generator. The total energy function for this system is given by

v(θ, ω) = ½ωᵀMω − Σ_{i<j} B_ij cos(θ_i − θ_j) + θᵀ(P_e − P_m).    (5.34)
FIGURE. A Hopfield neural network: neuron i receives the currents T_{i1}x₁, …, T_{iN}x_N, and the amplifier characteristic is the sigmoid x = g(u).
C_i du_i/dt = Σ_{j=1}^N T_ij x_j − G_i u_i + I_i,    (5.36)
x_i = g(u_i).

Here C_i > 0 is the capacitance of the i-th capacitor, and G_i > 0 the conductance of the i-th amplifier; T_ij stands for the strength of the interconnection between x_j and the i-th capacitor; I_i stands for the current injection at the i-th node. Using
5.4 LaSalle's Invariance Principle 203
the relationship between u_i and x_i, we get the equations of the Hopfield network:
where

h_i(x) = (1/C_i) (dg/du)|_{u=g⁻¹(x)} > 0  ∀x.

v̇ = 0 ⇒ ∂v/∂x_i = 0 ∀i ⇒ ẋ = 0.
Thus, to apply LaSalle's principle to conclude that trajectories converge to one of the equilibrium points, it is enough to find a region which is either invariant or on which v is bounded. Consider the set

Ω_ε = {x ∈ ℝ^n : |x_i| ≤ 1 − ε}.

We will prove that the region Ω_ε is invariant for ε small enough. Note that

d(x_i²)/dt = 2x_i h_i(x_i) ( Σ_{j=1}^N T_ij x_j − g⁻¹(x_i) + I_i ).
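The convergence of the network can be watched directly in simulation. The 3-neuron weights, injections, and C_i = G_i = 1, g = tanh below are made-up values for illustration; with a symmetric interconnection matrix T the trajectory settles at an equilibrium of (5.36), as LaSalle's principle predicts.

```python
import numpy as np

N = 3
T = np.array([[0.0, 0.5, -0.3],
              [0.5, 0.0, 0.2],
              [-0.3, 0.2, 0.0]])      # symmetric interconnection strengths
I = np.array([0.1, -0.2, 0.05])       # current injections
C = G = 1.0                           # normalized capacitances and conductances

u = np.zeros(N)
dt = 0.01
for _ in range(20000):                # forward-Euler integration, 200 s
    x = np.tanh(u)                    # sigmoid x_i = g(u_i)
    u = u + dt * (T @ x - G*u + I) / C

# at an equilibrium of (5.36) the right-hand side vanishes
residual = np.linalg.norm(T @ np.tanh(u) - G*u + I)
print(residual)
```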
Further, let v(x, t) be a p.d.f. which is also periodic in t with period T. Define

S = {x ∈ ℝ^n : v̇(x, t) = 0 ∀t ≥ 0}.

Then if v̇(x, t) ≤ 0 ∀t ≥ 0, ∀x ∈ ℝ^n and the largest invariant set in S is the origin, then the origin is globally (uniformly) asymptotically stable.
Then for all |x(t₀)| ≤ α₂⁻¹(α₁(r)), the trajectories x(·) are bounded and
Proof: The proof of this theorem needs a fact from analysis called Barbalat's lemma (which we have left as an exercise: Problem 5.9; see also [259]), which states that if φ(·) : ℝ → ℝ is a uniformly continuous function with

∫₀^∞ φ(t) dt < ∞,

then

lim_{t→∞} φ(t) = 0.

The requirement of uniform continuity of φ(·) is necessary for this lemma, as easy counterexamples will show. We will use this lemma in what follows.
First, note that a simple contradiction argument shows that for any ρ < r,
Thus |x(t)| < r for all t ≥ t₀, so that v(x(t), t) is monotone nonincreasing. This yields that

∫_{t₀}^t w(x(τ)) dτ ≤ −∫_{t₀}^t v̇(x(τ), τ) dτ,

so that

∫_{t₀}^∞ w(x(τ)) dτ < ∞.

By Barbalat's lemma,

lim_{t→∞} w(x(t)) = 0. □
Remarks: The preceding theorem implies that x(t) approaches a set E defined by

E := {x ∈ B_r : w(x) = 0}.

However, the set E is not guaranteed to be an invariant set, so one cannot, for example, assert that x(t) tends to the largest invariant set inside E. It is difficult, in general, to show that the set E is invariant. However, this can be shown to be the case when f(x, t) is autonomous, T-periodic, or asymptotically autonomous. Another generalization of LaSalle's theorem is given in the exercises (Problem 5.10).
|x(t₁)| ≥ ε.
Instability is of necessity a local concept. One seldom has a definition for uniform instability. Note that the definition of instability does not require every initial condition starting arbitrarily near the origin to be expelled from a neighborhood of the origin; it just requires one from each arbitrarily small neighborhood of the origin to be expelled away. As we will see in the context of linear time-invariant systems in the next section, the linear time-invariant system

ẋ = Ax

is unstable if just one eigenvalue of A lies in the open right half plane ℂ₊! The instability theorems have the same flavor: They insist on v̇ being an l.p.d.f. so as to have a mechanism for the increase of v. However, since we do not need to guarantee that every initial condition close to the origin is repelled from the origin, we do not need to assume that v is an l.p.d.f.
We state and prove two examples of instability theorems:
Theorem 5.29 Instability Theorem. The equilibrium point 0 is unstable at time t₀ if there exists a decrescent function v : ℝ^n × ℝ₊ → ℝ such that
1. v̇(x, t) is an l.p.d.f.
2. v(0, t) = 0 and there exist points x arbitrarily close to 0 such that v(x, t₀) > 0.
Proof: We are given that there exists a function v(x, t) such that

v(x, t) ≤ β(|x|)  for x ∈ B_r,
v̇(x, t) ≥ α(|x|)  for x ∈ B_s.

We need to show that for some ε > 0, there is no δ such that
Now choose ε = min(r, s). Given δ > 0, choose x₀ with |x₀| < δ and v(x₀, t₀) > 0. Such a choice is possible by the hypothesis on v(x, t₀). So long as φ(t, x₀, t₀) lies in B_ε we have v(x(t), t) ≥ v(x₀, t₀) > 0, which shows that
This implies that |x(t)| is bounded away from 0. Thus v̇(x(t), t) is bounded away from zero. Thus, v(x(t), t) will exceed β(ε) in finite time. In turn this will guarantee that |x(t)| will exceed ε in finite time. □
Proof: Choose ε = r and given δ > 0 pick x₀ such that |x₀| < δ and v(x₀, t₀) > 0. When |x(t)| ≤ r, we have

v̇(x, t) = λv(x, t) + v₁(x, t) ≥ λv(x, t)

for some function of class K, so that for some t_δ, v(x(t_δ), t_δ) > β(ε), establishing that |x(t_δ)| > ε. □
Definition 5.31 State Transition Matrix. The state transition matrix Φ(t, t₀) ∈ ℝ^{n×n} associated with A(t) is by definition the unique solution of the matrix differential equation

Φ̇(t, t₀) = A(t)Φ(t, t₀),  Φ(t₀, t₀) = I.    (5.45)

The flow of (5.44) can be expressed in terms of the solutions to (5.45) as

x(t) = Φ(t, t₀)x(t₀).    (5.46)

In particular, this expression shows that the trajectories are proportional to the size of the initial conditions, so that local and global properties are identical. Two important properties of the state transition matrix are as follows:
1. Group Property
we have

d/dt₀ Φ(t, t₀) = d/dt₀ [(Φ(t₀, t))⁻¹]
             = −Φ(t₀, t)⁻¹ A(t₀) Φ(t₀, t) Φ(t₀, t)⁻¹
             = −Φ(t, t₀) A(t₀).
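Both the defining equation (5.45) and the group property can be checked numerically. The sketch below uses an arbitrary made-up A(t); it integrates the matrix differential equation with RK4 and verifies that Φ(t₂, t₀) = Φ(t₂, t₁)Φ(t₁, t₀).

```python
import numpy as np

def A(t):                       # a sample time-varying system matrix
    return np.array([[0.0, 1.0],
                     [-1.0, -1.0 - 0.5*np.sin(t)]])

def phi(t1, t0, dt=1e-4):       # integrate dPhi/dt = A(t) Phi, Phi(t0, t0) = I
    P = np.eye(2)
    n = int(round((t1 - t0) / dt))
    for k in range(n):
        t = t0 + k*dt
        k1 = A(t) @ P
        k2 = A(t + dt/2) @ (P + dt/2*k1)
        k3 = A(t + dt/2) @ (P + dt/2*k2)
        k4 = A(t + dt) @ (P + dt*k3)
        P = P + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return P

# group property: Phi(t2, t0) = Phi(t2, t1) Phi(t1, t0)
lhs = phi(2.0, 0.0)
rhs = phi(2.0, 1.0) @ phi(1.0, 0.0)
print(np.max(np.abs(lhs - rhs)))    # essentially zero
```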
|Φ(t, τ)| ≤ m₀,  t ≥ τ.

Uniform convergence of Φ(t, t₀) to 0 implies that

|Φ(t, τ)| ≤ ½  ∀t ≥ τ + T.

Now consider t with t₀ + kT ≤ t ≤ t₀ + (k + 1)T. Since

Φ(t, t₀) = Φ(t, t₀ + kT) ∏_{j=1}^k Φ(t₀ + jT, t₀ + jT − T),

it follows that

|Φ(t, t₀)| ≤ |Φ(t, t₀ + kT)| ∏_{j=1}^k |Φ(t₀ + jT, t₀ + jT − T)|
          ≤ m₀ 2^{−k}
          ≤ 2m₀ 2^{−(t−t₀)/T}
          ≤ m e^{−λ(t−t₀)}
ẋ = Ax    (5.49)

with x ∈ ℝ^n. Consider the Lyapunov function candidate

v(x) = xᵀPx.

An easy calculation yields that

v̇(x) = xᵀ(AᵀP + PA)x.    (5.50)

If there exists a symmetric, positive definite Q ∈ ℝ^{n×n} such that

AᵀP + PA = −Q,    (5.51)

then we see that −v̇(x) is a p.d.f. Equation (5.51) is a linear equation of the form

L(P) = Q,

where L is a map from ℝ^{n×n} to ℝ^{n×n}; (5.51) is referred to as a Lyapunov equation. Given a symmetric Q ∈ ℝ^{n×n}, it may be shown that (5.51) has a unique symmetric solution P ∈ ℝ^{n×n} when L is invertible, that is, iff all the n² eigenvalues of the linear map L are different from 0:

λ_i + λ_j ≠ 0  ∀ λ_i, λ_j ∈ σ(A).
Proof: Define

S(t) = ∫₀^t e^{Aᵀτ} Q e^{Aτ} dτ = ∫₀^t e^{Aᵀ(t−τ)} Q e^{A(t−τ)} dτ.

Then

Ṡ(t) = AᵀS + SA + Q.
5.7 Stability of Linear Time-Varying Systems 211
Also, since A has eigenvalues in ℂ₋ (the open left half plane), the eigenvalues of the linear operator

S ↦ AᵀS + SA

all have negative real parts, so that S(∞) := lim_{t→∞} S(t) exists and satisfies

AᵀS(∞) + S(∞)A + Q = 0.

Hence the P of the claim satisfies equation (5.51). □
Theorem 5.36 Lyapunov Lemma. For A ∈ ℝ^{n×n} the following statements are equivalent:
1. The eigenvalues of A are in ℂ₋.
2. There exists a symmetric Q > 0 such that the unique symmetric solution to (5.51) satisfies P > 0.
3. For all symmetric Q > 0 the unique symmetric solution to (5.51) satisfies P > 0.
4. There exists C ∈ ℝ^{m×n} (with m arbitrary) with the pair (A, C) observable² such that there exists a unique symmetric solution P > 0 of the Lyapunov equation with Q = CᵀC ≥ 0:

AᵀP + PA = −CᵀC.    (5.52)

5. For all C ∈ ℝ^{m×n} (with m arbitrary) such that the pair (A, C) is observable, there exists a unique solution P > 0 of (5.52).
Proof: (iii) ⇒ (ii) and (v) ⇒ (iv) are obvious.
(ii) ⇒ (i): Choose v(x) = xᵀPx, a p.d.f., and note that −v̇(x) = xᵀQx is also a p.d.f.
(iv) ⇒ (i): Choose v(x) = xᵀPx, a p.d.f., and note that −v̇(x) = xᵀCᵀCx. By LaSalle's principle, the trajectories starting from arbitrary initial conditions converge to the largest invariant set in the null space of C, where v̇(x) = 0. By the assumption of observability of the pair (A, C) it follows that the largest invariant set inside the null space of C is the origin.
(i) ⇒ (iii): By the preceding claim it follows that the unique solution to (5.51) is

P = ∫₀^∞ e^{Aᵀt} Q e^{At} dt.

Writing Q = MᵀM, if Px = 0, then

0 = xᵀPx = ∫₀^∞ xᵀe^{Aᵀt} MᵀM e^{At} x dt = ∫₀^∞ |M e^{At} x|² dt.
²Recall from linear systems theory that a pair (A, C) is said to be observable iff the largest A-invariant subspace in the null space of C is the origin.
(i) ⇒ (v): By the preceding claim it follows that the unique solution to (5.51) is

P = ∫₀^∞ e^{Aᵀt} CᵀC e^{At} dt.
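In computational practice the Lyapunov equation (5.51) is solved directly rather than through the integral formula. For example, with SciPy (whose solver uses the convention M X + X Mᵀ = Y, so one passes Aᵀ and −Q; the matrix below is an illustrative Hurwitz example):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # eigenvalues -1 and -2: Hurwitz
Q = np.eye(2)

# solve A^T P + P A = -Q  (SciPy solves M X + X M^T = Y, hence M = A^T, Y = -Q)
P = solve_continuous_lyapunov(A.T, -Q)

print(np.allclose(A.T @ P + P @ A, -Q))      # residual check
print(np.all(np.linalg.eigvalsh(P) > 0))     # P is positive definite
```

This illustrates the Lyapunov Lemma: with A Hurwitz and Q > 0, the unique symmetric solution P is positive definite.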
This estimate may be used to show that P(t) as defined in (5.53) is well defined and bounded. □
and A(t) is bounded, say by m, then P(t) of (5.53) is uniformly positive definite for t ≥ 0, i.e., there exists β > 0 such that

βxᵀx ≤ xᵀP(t)x.
Proof:

Consider the system ẋ = f(x) with f(0) = 0, so that 0 is an equilibrium point of the system. Define A = (∂f/∂x)|_{x=0} ∈ ℝ^{n×n} to be the Jacobian matrix of f(x). The system

ż = Az
is referred to as the linearization of the system (5.1) around the equilibrium point
0. For non-autonomous systems, the development is similar: Consider

ẋ = f(x, t)

with f(0, t) = 0 for all t ≥ 0. With

A(t) = (∂f(x, t)/∂x)|_{x=0}

it follows that the remainder

f₁(x, t) = f(x, t) − A(t)x
is o(|x|) for each fixed t ≥ 0. It may not, however, be true that

lim_{|x|→0} sup_{t≥0} |f₁(x, t)|/|x| = 0,    (5.54)

that is, f(x, t) may be a function for which the remainder is not uniformly of order o(|x|). For a scalar example of this, consider the function

f(x, t) = −x + tx².

In the instance that the higher order terms are uniformly o(|x|), i.e., if (5.54) holds, then the system (5.55)

ż = A(t)z    (5.55)

is referred to as the linearization of (5.1) about the origin. It is important to note that analysis of the linearization (5.55) yields conclusions about the nonlinear system (5.1) only when the uniform higher-order condition (5.54) holds.
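For autonomous systems the Jacobian linearization is easily computed numerically. The sketch below uses a damped pendulum, chosen purely as an illustration: it forms A by central differences at the origin and checks that its eigenvalues lie in the open left half plane, so the indirect method gives local asymptotic stability.

```python
import numpy as np

def f(x):                        # damped pendulum: x1' = x2, x2' = -sin x1 - x2
    return np.array([x[1], -np.sin(x[0]) - x[1]])

def jacobian(f, x0, eps=1e-6):   # central-difference Jacobian at x0
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2*eps)
    return J

A = jacobian(f, np.zeros(2))     # linearization z' = A z at the origin
print(np.round(A, 6))            # [[0, 1], [-1, -1]]
print(np.max(np.linalg.eigvals(A).real))   # negative: locally asymptotically stable
```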
5.8 The Indirect Method of Lyapunov 215
and

|2xᵀP(t)f₁(x, t)| ≤ 2|x|²/3  ∀x ∈ B_r,

so that

v̇(x, t) ≤ −xᵀx/3  ∀x ∈ B_r.

Thus −v̇(x, t) is an l.p.d.f., so that 0 is a locally (since −v̇ is an l.p.d.f. only in B_r) uniformly asymptotically stable equilibrium point of (5.1). □
Remarks:
1. The preceding theorem requires uniform asymptotic stability of the linearized system to prove uniform asymptotic stability of the nonlinear system. Counterexamples to the theorem exist if the linearized system is not uniformly asymptotically stable.
2. The converse of this theorem is also true (see Problem 5.18).
3. If the linearization is time-invariant, so that A(t) ≡ A with σ(A) ⊂ ℂ₋, then the nonlinear system is uniformly asymptotically stable.
A₀ᵀP + PA₀ = I.

Assume, temporarily, that A₀ has no eigenvalues on the jω axis. Then by the
Taussky lemma the Lyapunov equation has a unique solution, and furthermore, if
A₀ has at least one eigenvalue in C⁺ (the open right half plane), then P has at least one positive eigenvalue.
Then v(x, t) = xᵀPx takes on positive values arbitrarily close to the origin and
is decrescent. Further,

v̇(x, t) = xᵀ(A₀ᵀP + PA₀)x + 2xᵀPf₁(x, t)
        = xᵀx + 2xᵀPf₁(x, t).
Using the assumption of (5.54), it is easy to see that there exists r such that

v̇(x, t) ≥ xᵀx/3   ∀x ∈ B_r,
so that it is an l.p.d.f. Thus, the basic instability theorem yields that 0 is unstable.
In the instance that A₀ has some eigenvalues on the jω axis in addition to at
least one in the open right half plane, the proof follows by continuity. □
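The construction in this proof is easy to reproduce numerically; the sketch below (Python/SciPy, my own illustration) takes an A₀ with one eigenvalue in the open right half plane, solves A₀ᵀP + PA₀ = I, and checks that P has a positive eigenvalue, so v = xᵀPx takes positive values arbitrarily near the origin:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A0 with eigenvalues {+1, -2}: at least one in the open right half plane,
# none on the j-omega axis, so A0^T P + P A0 = I has a unique solution.
A0 = np.array([[1.0, 0.0],
               [0.0, -2.0]])

# solve_continuous_lyapunov(M, Q) solves M X + X M^H = Q; take M = A0^T.
P = solve_continuous_lyapunov(A0.T, np.eye(2))

eigs_P = np.linalg.eigvalsh((P + P.T) / 2.0)
```

Here P works out to diag(1/2, −1/4): indefinite, with the positive eigenvalue the instability argument needs.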
Remarks:
1. We have shown using Lyapunov's basic theorems that if the linearization
of a nonlinear system is time invariant, then:
5.9 Domainsof Attraction 217
• Having all eigenvalues in the open left half plane guarantees local uniform
asymptotic stability of the origin for the nonlinear system.
• If at least one of the eigenvalues of the linearization lies in the open right
half plane, then the origin is unstable.
The only situation not accounted for is the question of stability or instability
when the eigenvalues of the linearization lie in the closed left half plane and
include at least one on the jω axis. The study of stability in these cases is
delicate and relies on higher order terms than those of the linearization. This is
discussed in Chapter 7 as stability theorems on the center manifold.
2. The technique provides only local stability theorems. To show global stability
for equilibria of nonlinear systems there are no short cuts to the basic theorems
of Lyapunov.
ẋ = f(x)   (5.56)
lim_{t→∞} ∇v(φ_t(x)) · f(φ_t(x)) = 0,
Mᵢθ̈ᵢ + Dᵢθ̇ᵢ + Σ_{j=1}ⁿ B_{ij} sin(θᵢ − θⱼ) = P_{m_i} − P_{e_i}.
Here θᵢ stands for the angle of the i-th generator bus, Mᵢ the moment of inertia
of the generator, Dᵢ the generator damping, and P_{m_i}, P_{e_i} the mechanical input power
and the electrical output power at the i-th bus, respectively. During normal
operation of the power system, the system dynamics are located at an equilibrium
point of the system, θ_e ∈ ℝⁿ. In the event of a system disturbance, usually in the
form of a lightning strike on a bus, say B_{ij}, protective equipment on the bus opens,
interrupting power transfer from generator i to generator j. In this instance, the
dynamics of the system move away from θ_e, since θ_e is no longer an equilibrium point
of the modified system dynamics. Of interest to power utility companies is the
so-called reclosure time for the bus hit by lightning. This is the maximum time
before the trajectory of the system under the disturbance drifts out of the domain
As we have shown earlier in this section, A(x_s) is an open set containing x_s. We
will be interested in characterizing the boundary of the set A(x_s). It is intuitive
that the boundary of A(x_s) contains other, unstable equilibrium points, labeled xᵢ.
We define their stable and unstable manifolds (in analogy to Chapter 2) as

W_s(xᵢ) = {x : lim_{t→∞} φ_t(x) = xᵢ},
W_u(xᵢ) = {x : lim_{t→−∞} φ_t(x) = xᵢ}.   (5.58)
A1. All equilibrium points of the system are hyperbolic. Actually, we only need
the equilibrium points on the boundary of A(x_s) to be hyperbolic.

A2. If the stable manifold W_s(xᵢ) of one equilibrium point intersects the unstable
manifold W_u(xⱼ) of another equilibrium point, they intersect transversally.
The reader may wish to review the definition of transversal intersection from
Chapter 3, namely, if x ∈ W_s(xᵢ) ∩ W_u(xⱼ), then
Proof: The proof relies on the following lemma, which we leave as an exercise to
the reader (taking the help of [62] if necessary):
Lemma 5.47.
For the sufficiency, we first start with the case that xᵢ has an unstable manifold
of dimension 1 on the boundary of A(x_s). We would like to make sure that W_u(xᵢ)
is not entirely contained in ∂A(x_s). Indeed, if this were true, since all points on
the boundary of A(x_s) converge to one of the equilibrium points on the boundary of
A(x_s), say x, then we have that

In going through the steps of the previous argument, we see that x is either stable
or has an unstable manifold of dimension 1. Since x cannot be stable, it must
have an unstable manifold of dimension 1. But then, by the last claim of Lemma 5.47,
it follows that

(W_u(xᵢ) − {xᵢ}) ∩ (W_s(x) − {x}) ≠ ∅   and   W_u(x) ∩ A(x_s) ≠ ∅,

implying that

W_u(xᵢ) ∩ A(x_s) ≠ ∅.
By induction on the dimension of the unstable manifold of xᵢ, we finish the proof.
□

Using this proposition, we may state the following theorem.
Theorem 5.48 Characterization of the Stability Boundary. For the time-invariant nonlinear system ẋ = f(x) satisfying assumptions A1–A3 above, let xᵢ,
i = 1, …, k, be the unstable equilibrium points on the boundary of the domain of
attraction ∂A(x_s) of a stable equilibrium point x_s. Then

∂A(x_s) = ⋃_{i=1}^k W_s(xᵢ).   (5.60)
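Theorem 5.48 can be visualized on a concrete planar example. The sketch below (my own illustration, not from the text) uses a damped Duffing system, which has sinks at (±1, 0) and a saddle at the origin; the two branches of W_u(0) fall into the two different sinks, so the saddle and its stable manifold separate the two domains of attraction, consistent with (5.60):

```python
import numpy as np
from scipy.integrate import solve_ivp

delta = 0.5  # damping

def duffing(t, x):
    # x1' = x2, x2' = x1 - x1^3 - delta*x2:
    # saddle at the origin, stable equilibria at (+1, 0) and (-1, 0).
    return [x[1], x[0] - x[0]**3 - delta * x[1]]

# Unstable eigendirection of the Jacobian [[0, 1], [1, -delta]] at the saddle.
lam_u = (-delta + np.sqrt(delta**2 + 4.0)) / 2.0
v_u = np.array([1.0, lam_u])

def limit_point(x0):
    sol = solve_ivp(duffing, (0.0, 200.0), x0, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

# Initial conditions on the two branches of W_u(0) settle into different
# attractors: the saddle sits on the boundary of both basins.
p_plus = limit_point(1e-4 * v_u)
p_minus = limit_point(-1e-4 * v_u)
```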
To show the reverse inclusion, note from the previous proposition that
was proposed by Brayton and Tong in [41]. While we have shown that quadratic
Lyapunov functions are the most natural for linear systems, there has been a recent
resurgence in methods for obtaining piecewise quadratic Lyapunov functions for
nonlinear systems [156]. In this paper, the search for piecewise quadratic Lyapunov functions is formulated as a convex optimization problem using linear
matrix inequalities. In turn, algorithms for optimization with linear matrix inequality constraints have made a great deal of progress in recent years (see, for
example, the papers of Packard [238], [19] and the monograph by Boyd, Balakrishnan, El Ghaoui, and Feron [36]). We refer the reader to this literature for what
is a very interesting new set of research directions.
5.10 Summary
This chapter was an introduction to the methods of Lyapunov in determining the
stability and instability of equilibria of nonlinear systems. The reader will enjoy
reading the original paper of Lyapunov to get a sense of how much was spelled
out over a hundred years ago in this remarkable paper. There are several textbooks
which give a more detailed version of the basic theory with theorems stated in
greater generality. An old classic is the book of Hahn [124]. Other more modern
textbooks with a nice treatment include Michel and Miller [209], Vidyasagar [317],
and Khalil [162]. The recent book by Liu and Michel [210] contains some further
details on the use of Lyapunov theory to study Hopfield neural networks and
generalizations to classes of nonlinear systems with saturation nonlinearities.
5.11 Exercises
Problem 5.1 A p.d.f. function v with −v̇ a p.d.f. Consider a function g²(t)
with the form shown in Figure 5.9 and the scalar differential equation

ẋ = (ġ(t)/g(t)) x.
FIGURE 5.9. A bounded function g²(t) converging to zero but not uniformly.
v(x, t) = (x²/g²(t)) [3 − ∫₀ᵗ g²(τ) dτ].

Verify that v(x, t) is a p.d.f. and so is −v̇(x, t). Also, verify that the origin is stable
but not asymptotically stable. What does this say about the missing statement in
the table of Lyapunov's theorems in Theorem 5.16?
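The differentiation requested here can be checked symbolically; the sketch below (SymPy, my own verification) treats g as a generic positive function and writes G(t) for ∫₀ᵗ g²(τ)dτ, so that G′ = g², and finds that along ẋ = (ġ/g)x the total derivative collapses to v̇ = −x²:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
g = sp.Function('g', positive=True)
G = sp.Function('G')   # G(t) stands for Integral_0^t g(tau)^2 dtau

v = x**2 / g(t)**2 * (3 - G(t))
xdot = sp.diff(g(t), t) / g(t) * x            # x' = (g'/g) x

vdot = sp.diff(v, t) + sp.diff(v, x) * xdot   # total derivative along the flow
vdot = vdot.subs(sp.Derivative(G(t), t), g(t)**2)   # use G' = g^2
vdot = sp.simplify(vdot)
```

The g-dependent terms cancel identically, which is why the example works for any g of the stated form.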
Show that the linearization is inconclusive in determining the stability of the origin.
Use the direct method of Lyapunov and your own creative instincts to pick a v to
study the stability of the origin for −1 ≤ ε ≤ 1.
Problem 5.5 Rayleigh's equation. Consider the second-order Rayleigh system:

ẋ₁ = −ε(x₁³/3 − x₁ + x₂),
This system is known to have one equilibrium and one limit cycle for ε ≠ 0. Show
that the limit cycle does not lie inside the strip −1 ≤ x₁ ≤ 1. Show, using a suitable
Lyapunov function, that it lies outside a circle of radius √3.
1. x = 0 is exponentially stable.
2. |λᵢ| < 1 for all the eigenvalues λᵢ of A.
3. Given any positive definite symmetric matrix Q ∈ ℝⁿˣⁿ, there exists a unique
positive definite symmetric matrix P ∈ ℝⁿˣⁿ satisfying
If ∫₀^∞ φ(τ) dτ exists and φ is uniformly continuous, then

lim_{t→∞} φ(t) = 0.
Give a counterexample to show the necessity of uniform continuity.
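The matrix part of the earlier problem can be exercised numerically; the sketch below (Python/SciPy, my own illustration, assuming the standard discrete-time Lyapunov equation AᵀPA − P = −Q) solves the equation for an A with spectral radius below one and checks that P is positive definite:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.4],
              [0.0, 0.3]])   # eigenvalues 0.5 and 0.3, both inside the unit disk
Q = np.eye(2)

# solve_discrete_lyapunov(M, Q) solves M X M^H - X = -Q; take M = A^T
# to obtain A^T P A - P = -Q.
P = solve_discrete_lyapunov(A.T, Q)

residual = A.T @ P @ A - P + Q
eigs_P = np.linalg.eigvalsh((P + P.T) / 2.0)
```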
Now show that for |x(t₀)| < α₂⁻¹(α₁(r)) the trajectories converge
uniformly and asymptotically to 0.
Problem 5.11 Power system swing dynamics on Tⁿ × ℝⁿ [9]. Consider the
swing equations of the power system (5.33). Since the right-hand side is periodic
in each of the θᵢ with period 2π, we can assume that the state space of the system
is S¹ × ⋯ × S¹ × ℝⁿ, or Tⁿ × ℝⁿ. Note that the function v(θ, ω) of (5.34) is not
a well-defined function from Tⁿ × ℝⁿ → ℝ unless P_{m_i} − P_{e_i} = 0. What are the
implications of the assertion in the text that bounded trajectories (on ℝⁿ × ℝⁿ)
converge to equilibrium points of the swing dynamics on the manifold Tⁿ × ℝⁿ?
Prove that trajectories of the swing equations are bounded on the manifold
Tⁿ × ℝⁿ. Can you give counterexamples to the assertion that bounded trajectories
on Tⁿ × ℝⁿ converge to equilibria (you may wish to review the Josephson junction
circuit of Chapter 2 for hints in the case that n = 1)? Prove that all trajectories
of the swing equation converge to equilibria for P_{m_i} − P_{e_i} sufficiently small for
i = 1, …, n.
ẋ = A(t)x.
W_o(t, t + δ) := ∫_t^{t+δ} Φᵀ(τ, t)Cᵀ(τ)C(τ)Φ(τ, t) dτ ≥ kI   ∀t.   (5.63)
Now use Problem 5.10 to show that if there exists a uniformly positive definite
bounded matrix P(·) : ℝ → ℝⁿˣⁿ satisfying
ẋ = A(t)x(t),
ẇ = (A(t) + K(t)C(t))w(t),
with the same initial condition at time t₀, and give conditions on the terms of the
second observability Gramian using a perturbation analysis. You may wish to refer
to the cited reference for additional help.
∫_t^{t+δ} (dv(x(τ), τ)/dτ) dτ ≤ −α₃|x(t)|²

along the trajectories of (5.1),
then x(t) converges exponentially to zero. This problem is very interesting in that
it allows for a decrease in v(x, t) proportional to the norm squared of x(t) over a
window of length δ rather than instantaneously.
Show that 0 is an exponentially stable equilibrium point of (5.68) for μ small enough.
The moral of this exercise is that exponential stability is robust!
Problem 5.17 Bounded-input bounded-output or finite gain stability of nonlinear systems. Another application of Problem 5.16 is to prove L∞ bounded-input bounded-output stability of nonlinear systems (you may wish to review the
definition from Chapter 4). Consider the nonlinear control system

Prove that if 0 is an exponentially stable equilibrium point of f(x, t), |g(x, t)| ≤ μ|x|, and finally

h(0) = 0,   |h(x)| ≤ β|x|   ∀x,

then the control system (5.69) is L∞ finite gain stable. Give the bound on the L∞
gain.
Problem 5.18 Local exponential stability implies exponential stability of the
linearization. Prove the converse of Theorem 5.41, that is, if 0 is a locally exponentially stable equilibrium point of the nonlinear system (5.1), then it is a (globally)
exponentially stable equilibrium point of the linearized system, provided that the
linearization is uniform in the sense of Theorem 5.41. Use the exponential converse
theorem of Lyapunov.
Problem 5.19 Interconnected systems. The methods of Problem 5.16 may
be adapted to prove stability results for interconnected systems. Consider an
interconnection of N systems on ℝ^{n_i}, i = 1, …, N, given by

ẋᵢ = fᵢ(xᵢ, t) + Σ_{j=1}^N g_{ij}(x₁, …, x_N, t)   (5.70)

with fᵢ(xᵢ, t), g_{ij}(x₁, …, x_N, t) vector fields on ℝ^{n_i} for i = 1, …, N. Now
assume that in each of the decoupled systems

ẋᵢ = fᵢ(xᵢ, t)   (5.71)
the origin is an exponentially stable equilibrium point, and that for i, j = 1, …, N,

|g_{ij}(x₁, …, x_N, t)| ≤ Σ_{j=1}^N γ_{ij}|xⱼ|.
Using the converse Lyapunov theorem for exponentially stable systems to generate
Lyapunov functions vᵢ(xᵢ, t) satisfying

α₁ⁱ|xᵢ|² ≤ vᵢ(xᵢ, t) ≤ α₂ⁱ|xᵢ|²,

∂vᵢ/∂t + (∂vᵢ/∂xᵢ) fᵢ(xᵢ, t) ≤ −α₃ⁱ|xᵢ|²,   (5.72)

|∂vᵢ/∂xᵢ| ≤ α₄ⁱ|xᵢ|,
give conditions on the α₁ⁱ, α₂ⁱ, α₃ⁱ, α₄ⁱ, γ_{ij} to prove the exponential stability of the
origin. Hint: Consider the Lyapunov function

Σ_{i=1}^N dᵢ vᵢ(xᵢ),

where dᵢ > 0 and vᵢ(xᵢ) is the converse Lyapunov function for each of the systems
(5.71).
See [162] for a generalization of this problem to the case when the stability
conditions of equations (5.72) are modified to replace |xᵢ| by φᵢ(xᵢ), where φᵢ(·) is
a positive definite function, and

|g_{ij}(x₁, …, x_N)| ≤ Σ_{j=1}^N γ_{ij} φⱼ(xⱼ).
Problem 5.21 Satellite stabilization. Consider the satellite rigid body B with
an orthonormal body frame attached to its center of mass with axes aligned with
the principal axes of inertia as shown in Figure 5.10. Let b₀ ∈ ℝ³ be a unit vector
representing the direction of an antenna on the satellite and d₀ ∈ ℝ³ a fixed unit
vector representing the direction of the antenna of the receiving station on the
ground. Euler's equations of motion say that if ω ∈ ℝ³ is the angular velocity
vector of the satellite about the body axes shown in Figure 5.10, then

Iω̇ + ω × Iω = τ.   (5.73)

Here I = diag(I₁, I₂, I₃). The aim of this problem is to find a control law to align
b₀ with −d₀. In the body frame, d₀ is not fixed. It rotates with angular velocity −ω,
so that

ḋ₀ = −ω × d₀.   (5.74)

Now consider the control law

τ = −αω + d₀ × b₀
applied to (5.73), (5.74). What are the equilibrium points of the resulting system in
ℝ³⁺³? Linearize the system about each equilibrium point and discuss the stability
of each. Is it possible for a system in ℝ⁶ to have this number and type of equilibria?
Do the system dynamics actually live on a submanifold of ℝ⁶? If so, find the state
space of the system. Prove that the control law causes all trajectories to converge to
one of the equilibrium points. It may help to use the Lyapunov function candidate

v(ω, d₀) = ½ ωᵀIω + ½ |d₀ + b₀|².
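The candidate can be tested in simulation; the sketch below (my own numerical illustration — the inertias, α, and initial data are arbitrary choices) integrates (5.73)–(5.74) under τ = −αω + d₀ × b₀ and checks that v is nonincreasing along the trajectory (indeed v̇ = −α|ω|²):

```python
import numpy as np
from scipy.integrate import solve_ivp

I = np.diag([1.0, 2.0, 3.0])      # principal moments of inertia
I_inv = np.linalg.inv(I)
b0 = np.array([0.0, 0.0, 1.0])    # antenna direction in the body frame
alpha = 0.8

def rhs(t, s):
    w, d0 = s[:3], s[3:]
    tau = -alpha * w + np.cross(d0, b0)              # the control law
    wdot = I_inv @ (tau - np.cross(w, I @ w))        # Euler's equations (5.73)
    return np.concatenate([wdot, -np.cross(w, d0)])  # kinematics (5.74)

def v(s):
    w, d0 = s[:3], s[3:]
    return 0.5 * w @ I @ w + 0.5 * np.dot(d0 + b0, d0 + b0)

s0 = np.array([0.3, -0.2, 0.1, 1.0, 0.0, 0.0])       # |d0| = 1
sol = solve_ivp(rhs, (0.0, 60.0), s0, rtol=1e-10, atol=1e-12,
                dense_output=True)

vals = [v(sol.sol(t)) for t in np.linspace(0.0, 60.0, 400)]
```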
Another approach to satellite stabilization is in a recent paper by d'Andréa-Novel
and Coron [77].
where W(x, t) denotes the displacement of the string at location x at time t, with
x ∈ ]0, 1[, t ≥ 0. The quantities m and T are the mass per unit length of the string
and the tension in the string, respectively. Let the string be fixed at x = 0 as shown
in Figure 5.11. Thus, W(0, t) = 0 for all t ≥ 0. Let a vertical control force u(·)
be applied to the free end of the string at x = 1. The balance of the forces in the
vertical direction (at x = 1) yields:

u(t) = T ∂W(x, t)/∂x |_{x=1}.   (5.76)
E(t) = ½ ∫₀¹ m (∂W(x, t)/∂t)² dx + ½ ∫₀¹ T (∂W(x, t)/∂x)² dx,   (5.77)

which consists of the kinetic energy (first integral in the expression above) and the
strain energy (second integral above). Show that if a boundary control
is applied, where k > 0, then Ė(t) ≤ 0 for all t ≥ 0. See also [274].
Problem 5.23 Brockett's H dot equation [26], [132].
1. Consider the matrix differential equation (H(t) ∈ ℝⁿˣⁿ)

Ḣ = [H, G(t)].   (5.79)

Here G(t) ∈ ℝⁿˣⁿ is a skew-symmetric matrix. The notation [A, B] := AB −
BA refers to the Lie bracket, or commutator, between A and B. Show that if
H(0) is symmetric, then H(t) is symmetric for all t. Further, show in this case
that the eigenvalues of H(t) are the same as the eigenvalues of H(0). Equation
(5.79) is said to generate an iso-spectral flow on the space of symmetric matrices
SS(n).
2. Consider the so-called double bracket equation

Ḣ = [H, [H, N]]   (5.80)

with N ∈ ℝⁿˣⁿ a constant matrix. The Lie bracket is defined as above. Show
that if H(0) and N are symmetric, then (5.80) generates an iso-spectral flow as
well on SS(n).
3. Now using the Lyapunov function candidate

v(H) = −trace(HᵀN)
v̇(H) = −‖[H, N]‖²
with λ₁ > λ₂ > ⋯ > λₙ > 0. Use the iso-spectrality of (5.80) and the
previous calculations to characterize the equilibrium points of (5.80). How
many are there?
5. Linearize (5.80) about each equilibrium point and try to determine the stability
from the linearization. What can you say if some of the eigenvalues of N, H(0)
are repeated?
Discuss how you might use the H dot equation to sort a given set of n numbers
λ₁, …, λₙ in descending order. How about sorting them in ascending order?
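The sorting interpretation can be demonstrated numerically; the sketch below (my own illustration) integrates the double bracket flow (5.80) with N = diag(3, 2, 1) from a random symmetric H(0) with spectrum {5, 1, −2} and checks that H(t) approaches the diagonal matrix with those numbers sorted in descending order, matching the ordering of N:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 3
N = np.diag([3.0, 2.0, 1.0])
spectrum = np.array([5.0, 1.0, -2.0])

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
H0 = Q @ np.diag(rng.permutation(spectrum)) @ Q.T   # symmetric, known spectrum

def bracket(A, B):
    return A @ B - B @ A

def rhs(t, h):
    H = h.reshape(n, n)
    return bracket(H, bracket(H, N)).reshape(-1)    # H' = [H, [H, N]]

sol = solve_ivp(rhs, (0.0, 40.0), H0.reshape(-1), rtol=1e-10, atol=1e-12)
H_final = sol.y[:, -1].reshape(n, n)
```

Because v(H) = −trace(HᵀN) decreases along the flow, the stable equilibrium maximizes Σᵢ hᵢᵢ nᵢᵢ, which forces the descending arrangement.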
where the body angular velocity is given by (ω₁, ω₂, ω₃)ᵀ ∈ ℝ³, and I₁, I₂, I₃ are
the satellite moments of inertia, J₁ = J₂, J₃ the rotor moments of inertia, and
α the relative angle of the rotor. Furthermore, λᵢ = Iᵢ + Jᵢ. It is easy to verify
that regardless of the choice of u we have an equilibrium manifold ω₁ = ω₃ = 0
and ω₂ = ω̄ arbitrary. This corresponds to the satellite spinning freely about the
intermediate axis. Prove that the control law

stabilizes the satellite about each of these equilibria. Note that you are not required
to prove anything about the variables α, α̇.
FIGURE 5.12. Satellite with a spinning rotor aligned with its central principal axis as
controller.
Here, A ∈ ℝⁿˣⁿ has its eigenvalues in C⁻, and |f(x, t)| ≤ γ|x|. Now prove that
the perturbed linear system is stable if γ < −μ(A).
Problem 5.26 Stability conditions based on the measure of a matrix. Recall
the definition of the measure of a matrix μ(A) from Problem 3.2. Use this definition
to show that if for some matrix measure μ(·) there exists an m > 0 such that

∫_{t₀}^{t₀+T} μ(A(τ)) dτ ≤ −m   ∀ t₀ ≥ 0,

then the origin of the linear time-varying system of (5.44) is exponentially stable.
Apply this test to various kinds of matrix measures (induced by different norms).
Problem 5.27 Smooth stabilizability [279]. Assume that the origin x = 0 of
the system

ẋ = f(x, u)
is stabilizable by smooth feedback, that is, there exists u = k(x) such that 0 is
a locally asymptotically stable equilibrium point of ẋ = f(x, k(x)). Now prove
that the extended system
ẋ = f(x, z),
ż = h(x, z) + u   (5.83)
is also stabilizable by smooth feedback. Hint: Use the converse Lyapunov function
associated with the x system and try to get z to converge to k(x). Repeat the problem
when the original system is respectively globally or exponentially stabilizable.
By setting h(x, z) ≡ 0, we see that smooth stabilizability is a property which is
not lost when integrators are added to the input!
Problem 5.28. Prove Lemma 5.47.
Problem 5.29. Prove Theorem 5.49. Now apply this theorem to the following
generalization of the power system swing dynamics:

ẋ₁ = x₂,
ẋ₂ = −Dx₂ − ∇w(x₁).   (5.84)
Here x₁, x₂ ∈ ℝⁿ, w(x₁) is a proper function with isolated nondegenerate critical
points, and D ∈ ℝⁿˣⁿ is a symmetric positive definite matrix. Is this "second order"
system gradient-like? Characterize the domains of attraction of its stable equilibria.
Show that all the equilibria of the system are either stable nodes, unstable nodes,
or saddles (i.e., that the eigenvalues of their linearizations are real).
Problem 5.30 Potential energy boundary surfaces. Prove rigorously that the
method sketched in Example 5.45, a level set of W(θ) around a local minimum
θ₀ just touching a saddle or local maximum of W, is a (conservative) estimate of
the domain of attraction of the equilibrium θ₀ of the swing dynamics. You may
wish to use the "Lyapunov" function of (5.34) to help you along with the proof.
6
Applications of Lyapunov Theory
In this chapter we will give the reader an idea of the many different ways that
Lyapunov theory can be utilized in applications. We start with some very classical
examples of stabilization of nonlinear systems through their Jacobian linearization, and the circle and Popov criteria (which we have visited already in Chapter
4) for linear feedback systems with a nonlinearity in the feedback loop. A very
considerable amount of effort has been spent in the control community in applying
Lyapunov techniques to adaptive identification and control. We give the reader an
introduction to adaptive identification techniques. Adaptive control is far too detailed an undertaking for this book, and we refer to Sastry and Bodson [259] for a
more detailed treatment of this topic. Here, we study multiple time scale systems,
singular perturbation, and averaging from the Lyapunov standpoint. A
more geometric view of singular perturbation is given in the next chapter.
Here f(x, u) : ℝⁿ × ℝ^{n_i} → ℝⁿ, and the objective is to find a feedback control
law, i.e., a law of the form

u(t) = k(x(t)),
∂/∂x [f(x, k(x))] |_{x=0} = A + BK.

Since K was chosen so that the eigenvalues of A + BK are in C⁻, we see that the
linearization of f(x, k(x)) at 0 is stable, establishing the theorem. □
Remarks:
1. The linear stabilizing control law also stabilizes the nonlinear system. Different
choices of K may, however, result in larger or smaller domains of attraction.
2. The hypothesis of the previous theorem can be weakened somewhat as follows:
Recall that a pair A, B is said to be stabilizable if the generalized eigenspace
corresponding to the unstable eigenvalues, i.e., those in C⁺, is contained in
the span of [B, AB, …, A^{n−1}B]. Colloquially, this means that the unstable
eigenmodes are controllable. In turn, this implies that there exists a K ∈ ℝ^{n_i×n}
such that σ(A + BK) ⊂ C⁻. As in the preceding theorem, the linear control
law can be used to stabilize the nonlinear system (6.1).
3. To explore the necessity of the stabilizability of A, B for stabilizing the nonlinear system, we consider the instance that A, B is not stabilizable and has at
least one unstable eigenmode in C⁺. In this instance, no matter what the choice
of K, there is at least one eigenvalue of A + BK in C⁺. As a consequence,
we may use the instability theorem of Lyapunov's indirect method to conclude
that the nonlinear system of (6.1) cannot be stabilized regardless of the choice
of u(t) = k(x(t)). (See Problem 6.2.)
4. The only situation where we cannot determine whether it is possible to stabilize
the nonlinear control system using state feedback is the one in which A, B is not
stabilizable, but the only C⁺ modes which are not controllable are on the jω
axis. In this instance, linear feedback alone will not suffice, and we will take
up this point again after we have studied center manifolds, in Problem 7.26 of
the next chapter (see Behtash and Sastry [21]).
5. When the linearization is time varying, then stabilization of the resulting time
varying linear system stabilizes the nonlinear time varying system. In turn,
uniform complete controllability of the time varying linearization guarantees
the stabilizability of the linearization. This point is explored in the Exercises,
Problem 6.3.
6. When full state information is not available, in the linear time invariant case, an
observer is constructed to reconstruct the state. In the Exercises (Problem 6.4)
we show how converse theorems may be used to construct nonlinear observers
for observer-based stabilization.
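The basic recipe of the theorem and remarks — stabilize the Jacobian linearization with a pole-placement gain, then apply the same linear law to the nonlinear plant — can be sketched numerically (my own example, a pendulum-like system, using SciPy's pole placement):

```python
import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

# Nonlinear plant: x1' = x2, x2' = sin(x1) + u (open loop unstable at 0).
# Jacobian linearization at the origin:
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # d/dx1 sin(x1) = 1 at x1 = 0
B = np.array([[0.0],
              [1.0]])

K = place_poles(A, B, [-2.0, -3.0]).gain_matrix   # eig(A - B K) = {-2, -3}

def closed_loop(t, x):
    u = -(K @ x).item()                # linear state feedback u = -K x
    return [x[1], np.sin(x[0]) + u]

sol = solve_ivp(closed_loop, (0.0, 20.0), [0.6, 0.0], rtol=1e-9, atol=1e-12)
final_norm = np.linalg.norm(sol.y[:, -1])
```

Consistent with Remark 1, the linear law also stabilizes the nonlinear system from this initial condition; a different K would change the observed domain of attraction.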
FIGURE 6.1. The system of Lur'e's problem.
input, and y(t) ∈ ℝ its output, it follows that the equations of the system are:

ẋ = Ax + bu,
y = cx + du,   (6.5)
u = −φ(y(t), t).
This is referred to as φ(·, t) being a first and third quadrant nonlinearity. The Lur'e
problem consists in finding conditions on A, b, c, d such that the equilibrium point
0 is a globally asymptotically stable equilibrium point of (6.5). This problem is also
referred to as the absolute stability problem, since we would like to prove global
asymptotic stability of the system (6.5) for a whole class of feedback nonlinearities
φ. In this section, we focus attention on the single-input single-output Popov and
circle criteria. Generalizations of these to multi-input multi-output systems are left
to the Exercises (see Problem 6.7). We place the following assumptions on the
system:
1. A, b, c is minimal, that is, A, b is completely controllable and A, c is completely
observable.
2. A is a matrix whose eigenvalues lie in C⁻.
3. φ(·, t) is said to belong to the sector [k₁, k₂] for k₂ ≥ k₁ ≥ 0, as shown in
Figure 6.2, i.e.,

k₁σ² ≤ σφ(σ, t) ≤ k₂σ².
Several researchers conjectured solutions to this absolute stability problem. We
discuss two of these conjectures here:

Aizerman's Conjecture
Suppose that d = 0 and that for each k ∈ [k₁, k₂] the matrix A − bkc is Hurwitz
(i.e., its eigenvalues lie in C⁻). Then the nonlinear system is globally asymptotically
stable. The conjecture here was that the stability of all the linear systems contained
in the class of nonlinearities admissible by the sector bound determined the global
asymptotic stability of the nonlinear system.
6.2 The Lur'e Problem, Circle and Popov Criteria 239
Kalman's Conjecture
Let φ(σ, t) satisfy for all σ

k₃ ≤ ∂φ(σ, t)/∂σ ≤ k₄.
Now suppose that d = 0 and that for each k ∈ [k₃, k₄] the matrix A − bkc is
Hurwitz (i.e., its eigenvalues lie in C⁻). Then the nonlinear system is globally
asymptotically stable. The conjecture here is that the stability of the linearization
determines the global asymptotic stability of the nonlinear system.
Both Aizerman's conjecture and Kalman's conjecture are false. However, they
stimulated a great deal of activity in nonlinear stability theory. The criteria that
can be derived both rely on the following generalization of a lemma called the
Kalman–Yakubovich–Popov lemma, due to Lefschetz. Its generalization to multiple-input multiple-output systems is called the positive real lemma and is discussed
in Problem 6.7.
Theorem 6.3 Kalman–Yakubovich–Popov (KYP) Lemma. Given a Hurwitz
matrix A ∈ ℝⁿˣⁿ, and b ∈ ℝⁿ, cᵀ ∈ ℝⁿ, d ∈ ℝ such that the pair A, b is
controllable, there exist positive definite matrices P, Q ∈ ℝⁿˣⁿ, a vector
q ∈ ℝⁿ, and a scalar ε > 0 satisfying

AᵀP + PA = −qqᵀ − εQ,
Pb − ½cᵀ = √d q,   (6.6)

iff the scalar transfer function

h(s) = d + c(sI − A)⁻¹b

is strictly positive real.
Remarks:
1. Strictly positive real, or SPR, transfer functions are so called because they are
positive, or strictly positive respectively, on the imaginary axis. In particular,
since h(∞) ≥ 0, we must have d ≥ 0. In addition, since it is required that they
are analytic in C⁺, it follows that
2. From the definition it follows that the Nyquist locus of a strictly positive real
transfer function lies exclusively in the first and fourth quadrants, as shown in
Figure 6.3. Thus the phase of an SPR transfer function is less than or equal in
magnitude to π/2 radians. Thus, if d = 0, the asymptotic phase at ω = ∞ is
π/2, so that the relative degree of h(s) is 1. Thus, SPR transfer functions have
a relative degree of either 0 (d > 0) or 1.
3. SPR transfer functions model what are referred to as passive linear systems.
In particular, network functions of linear networks with positive resistors,
inductors, and capacitors are strictly positive real.
4. SPR transfer functions are minimum phase (i.e., they have no zeros in C⁺);
the proof is left as an exercise.
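These properties are easy to check numerically on a concrete example; the transfer function below is my own illustrative choice, not one from the text:

```python
import numpy as np

def h(s):
    # h(s) = (s + 2)/(s^2 + 2 s + 2): poles at -1 +/- j, zero at -2,
    # so h is Hurwitz-stable and minimum phase with relative degree 1.
    return (s + 2.0) / (s**2 + 2.0 * s + 2.0)

omega = np.linspace(-1e3, 1e3, 200001)
H = h(1j * omega)

min_real = H.real.min()                 # positivity on the imaginary axis
max_phase = np.abs(np.angle(H)).max()   # phase magnitude bound
```

Analytically Re h(jω) = 4/|(2 − ω²) + 2jω|² > 0 for all ω, so the Nyquist locus stays in the first and fourth quadrants and the phase magnitude never exceeds π/2, consistent with Remarks 1 and 2.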
Proposition 6.4 Circle Criterion. Consider the feedback system of Figure 6.1
described by the equations of (6.5). Let the assumptions on minimality of A, b, c
and stability of A be in effect. Then the origin of (6.5) is globally asymptotically
stable if

1 + k h(s)

is strictly positive real.
Note that the requirement of stability of the closed loop system for all possible
φ(·, t) in the sector [0, k] forces the open loop system to be stable (since φ(·, t)
may be chosen to be ≡ 0). □
This proposition can now be generalized to cover the case that φ(·, t) lies in a
sector [k₁, k₂] for k₂ ≥ k₁ ≥ 0. Figure 6.5 shows how the feedback loop may be
transformed to yield a nonlinearity ψ(·, t) in the sector [0, k₂ − k₁].
The transformed loop has the same form as the loop we started with, with the
difference that the linear system in the feedforward path is

ĝ(s) = h(s)/(1 + k₁h(s))

and the nonlinearity in the feedback path is
To apply the preceding theorem, we need to guarantee that the system in the
forward path is stable. A necessary and sufficient condition for the stability of ĝ(s)
is given by the Nyquist criterion: If h(s) has no poles on the jω axis and ν poles
in the right half plane, then Nyquist's criterion states that ĝ(s) is a stable transfer
function iff the Nyquist contour of h(jω) does not intersect −1/k₁ and encircles it
counterclockwise ν times. If h(s) has poles on the jω axis, the Nyquist contour
is indented to the left (or right) of the pole and the criterion applied in the limit
that the indentation goes to zero. If the indentation is made to the right of a pole,
the number of encirclements does not include the count of that pole; if it is made
to the left, it includes the count of that pole. Using this, along with the estimate of
equation (6.7), gives the following condition, called the circle criterion:
Theorem 6.5 Circle Criterion. Let h(s) have ν poles in C⁺ and let the nonlinearity φ(·, t) be in the sector [k₁, k₂]. Then the origin of the system of (6.5) is
globally asymptotically stable iff
1. The Nyquist locus of h(s) does not touch −1/k₁ and encircles it ν times in a
counterclockwise sense.
2.
(6.15)
The inequality of equation (6.15) says that the Nyquist locus of h(jω) does not
enter a disk in the complex plane passing through −1/k₁ and −1/k₂, as shown in
Figure 6.6. Combining this with the encirclement condition, we may restate
the conditions of the preceding theorem as follows: The Nyquist locus of h(jω)
encircles the disk D(k₁, k₂) exactly ν times without touching it.
2. k₁ < 0 < k₂. A necessary condition for absolute stability is that A have all
its eigenvalues in the open left half plane, since φ(·, t) ≡ 0 is an admissible
nonlinearity. Proceeding as before from

(1 + k₂u)(1 + k₁u) + k₂k₁v² > 0

yields

(6.16)
FIGURE 6.7. The graphical interpretation of the circle criterion when k₂ ≥ 0 > k₁.
FIGURE 6.8. Graphical interpretation of the circle criterion when k₁ < k₂ < 0.
since k₁k₂ < 0. Inequality (6.16) specifies that the Nyquist locus does not touch
or encircle −1/k₁ (since ν = 0) and lies inside the disk D(k₁, k₂), as shown in
Figure 6.7.
3. k₁ < k₂ < 0. The results of the first case can be made to apply by replacing
h by −h, k₁ by −k₁, and k₂ by −k₂. The results are displayed graphically in
Figure 6.8.
The circle criterion of this section should be compared with the L₂ input-output
stability theorem, also called the circle criterion, in Chapter 4. There we proved
the input-output stability for a larger class of linear systems in the feedback loop
(elements of the algebra A were not constrained to be rational). Here we have used
the methods of Lyapunov theory and the KYP lemma to conclude that when the
linear system in the feedback loop has a finite state space realization, the state of
the system exponentially converges to zero, a conclusion that seems stronger than
input-output L₂ stability. Of course, as was established in Chapter 5 (Problem 5.17),
exponential internal (state space) stability of a nonlinear system implies bounded-input
bounded-output (L∞) stability. The added bonus of the input-output analysis
is the proof of L₂ stability. The same comments also hold for the Popov criterion,
which we derive in the next section. For more details on this connection we refer
the reader to Vidyasagar [317] (see also Hill and Moylan [139] and Vidyasagar
and Vannelli [318]).
Proposition 6.6 Popov Criterion. Consider the Lur'e system of (6.5) with the
nonlinearity φ(·) in the sector [0, k]. The equilibrium at the origin is globally
asymptotically (exponentially) stable, provided that there exists r > 0 such that
the transfer function

1 + k(1 + rs)h(s)

is strictly positive real, where h(s) = c(sI − A)⁻¹b.
Proof: Consider the Lyapunov function

v(x) = xᵀPx + r ∫₀^y kφ(σ) dσ.

Choose r small enough that 1 + rkcb > 0. Then consider the conditions for the
positive realness of the transfer function
Remarks:
1. The graphical test for checking Popov's condition is based on whether there exists an $r > 0$ such that
$$\mathrm{Re}\left[(1 + rj\omega)\hat{h}(j\omega)\right] \geq -\frac{1}{k}. \qquad (6.20)$$
To check this graphically, plot the real part of $\hat{h}(j\omega)$ against $\omega \times \mathrm{Im}\, \hat{h}(j\omega)$. On this plot, called the Popov plot, inequality (6.20) is satisfied if there exists a straight line of slope $1/r$ passing through $-1/k$ on the real axis such that the Popov plot lies underneath the line, as shown in Figure 6.9. Since the slope of the line is variable (with the only constraint that it be positive, since we are required to check for the existence of an $r$), the test boils down to finding a line of positive slope through $-1/k$ on the real axis lying above the Popov plot of the open-loop transfer function.
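The graphical test can also be carried out numerically on a frequency grid. The sketch below checks inequality (6.20) for a hypothetical plant $\hat{h}(s) = 1/(s+1)^2$ with sector bound $k = 2$; the plant and numbers are illustrative assumptions, not an example from the text.

```python
# Minimal numerical check of the Popov inequality (6.20) on a grid,
# for the hypothetical plant h(s) = 1/(s+1)^2 and sector [0, k].
def h(s):
    return 1.0 / ((s + 1.0) ** 2)

def popov_holds(r, k, omegas):
    """True if Re[(1 + j*w*r) h(j*w)] >= -1/k at every grid frequency."""
    return all(((1 + 1j * w * r) * h(1j * w)).real >= -1.0 / k for w in omegas)

# logarithmic grid from 10^-3 to 10^3 rad/s
omegas = [10 ** (e / 100.0) for e in range(-300, 301)]
print(popov_holds(r=1.0, k=2.0, omegas=omegas))  # True
```

Here $(1+j\omega)\hat{h}(j\omega) = 1/(1+j\omega)$ has real part $1/(1+\omega^2) > 0$, so a line of slope $1$ through $-1/2$ clears the Popov plot.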
2. When the nonlinearity $\phi(\cdot)$ lies in a sector $[k_1, k_2]$, one proceeds as in the case of the circle criterion to transform the system so that the nonlinearity $\phi(y) - k_1 y$ lies in the sector $[0, k_2 - k_1]$. The Popov condition then reduces to requiring that, with
$$\hat{g}(s) = \frac{\hat{h}(s)}{1 + k_1 \hat{h}(s)},$$
the transfer function
$$\frac{1}{k_2 - k_1} + (1 + rs)\hat{g}(s)$$
is SPR.

FIGURE 6.9. The Popov plot (axes: $\mathrm{Re}\, \hat{h}(j\omega)$ versus $\omega\, \mathrm{Im}\, \hat{h}(j\omega)$)
$$\Sigma_\varepsilon : \quad \begin{aligned} \dot{x} &= f(x(t), y(t)), \\ \varepsilon\dot{y} &= g(x(t), y(t)). \end{aligned} \qquad (6.21)$$
In the limit $\varepsilon \to 0$ the equations degenerate to
$$\Sigma : \quad \begin{aligned} \dot{x} &= f(x(t), y(t)), \\ 0 &= g(x(t), y(t)). \end{aligned} \qquad (6.22)$$
The reduced order system is also referred to as the slow time scale system. The fast time scale system is the system obtained by rescaling time $t$ to $\tau = t/\varepsilon$ to get
$$\frac{\partial x}{\partial \tau} = \varepsilon f(x, y), \qquad \frac{\partial y}{\partial \tau} = g(x, y). \qquad (6.25)$$
248 6. Applications of Lyapunov Theory
Setting $\varepsilon = 0$ yields the system $T$:
$$\frac{\partial x}{\partial \tau} = 0, \qquad \frac{\partial y}{\partial \tau} = g(x, y). \qquad (6.26)$$
The dynamics of the sped-up system $T$ in the fast time scale $\tau$ show that the $x$ variable is "frozen." The set of equilibria of the sped-up system $T$ is precisely $M$, the configuration space manifold. The fast dynamics of
$$\frac{\partial y}{\partial \tau} = g(x, y)$$
describe whether or not the manifold $M$ is attracting or repulsive to the parasitic dynamics. Now consider linear, singularly perturbed systems of the form
$$\begin{aligned} \dot{x} &= A_{11}x + A_{12}y, \\ \varepsilon\dot{y} &= A_{21}x + A_{22}y. \end{aligned} \qquad (6.27)$$
Setting $\varepsilon = 0$ gives
$$\begin{aligned} \dot{x} &= A_{11}x + A_{12}y, \\ 0 &= A_{21}x + A_{22}y. \end{aligned} \qquad (6.28)$$
If $A_{22}$ is nonsingular, then the linear algebraic equation can be solved to yield the reduced system
$$\dot{x} = (A_{11} - A_{12}A_{22}^{-1}A_{21})\,x \qquad (6.29)$$
along with
$$y = -A_{22}^{-1}A_{21}x.$$
The configuration space manifold is $M = \{(x, y) : y = -A_{22}^{-1}A_{21}x\}$. Also, the fast time scale system in the fast time scale is
$$\frac{\partial y}{\partial \tau} = A_{21}x + A_{22}y. \qquad (6.30)$$
Thus the stability of the matrix $A_{22}$ determines whether or not the manifold $M$ is attracting for the sped-up dynamics. We generalize this notion to nonlinear systems: define
$$M_1 = \{(x, y) : g(x, y) = 0,\ \det D_2 g(x, y) \neq 0\}.$$
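The linear reduction (6.27)-(6.29) can be illustrated with a small numerical sketch. The scalar data $a_{11} = -1$, $a_{12} = a_{21} = 1$, $a_{22} = -2$ below are hypothetical, chosen so that $A_{22}$ is stable and the manifold $M$ is attracting:

```python
import math

# Hypothetical scalar instance of (6.27): A22 = -2 is stable, so M attracts.
a11, a12, a21, a22 = -1.0, 1.0, 1.0, -2.0
a_red = a11 - a12 * a21 / a22      # reduced dynamics (6.29): xdot = a_red * x

def simulate(eps, t_end=2.0, dt=1e-5):
    """Euler-integrate the full singularly perturbed system from (x, y) = (1, 0)."""
    x, y = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        x, y = x + dt * (a11 * x + a12 * y), y + (dt / eps) * (a21 * x + a22 * y)
    return x, y

x_full, y_full = simulate(eps=1e-3)
x_red = math.exp(a_red * 2.0)                  # reduced-model prediction of x(2)
print(abs(x_full - x_red) < 1e-2)              # slow state tracks the reduced model
print(abs(y_full - (-a21 / a22) * x_full) < 1e-2)  # y is slaved to M after the boundary layer
```

Flipping the sign of $a_{22}$ makes $M$ repelling, and the fast variable no longer collapses onto the manifold.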
Let $(x_0, y_0) \in M_1$. Then by the implicit function theorem there is a unique integral curve of (6.22), $\gamma(t) = (x(t), y(t))$, through $(x_0, y_0)$ defined on $[0, a[$ for some $a > 0$. Some trajectories of the original system $\Sigma_\varepsilon$ originating from arbitrary $(x(0), y(0)) \in \mathbb{R}^{n+m}$ for all $\varepsilon > 0$ tend uniformly (consistently) to $\gamma(t)$ as $\varepsilon \downarrow 0$ on all closed intervals of $]0, a[$, as follows (see also Figure 6.10). We state these theorems without proof; local versions of them were proved by Levin, Levinson
6.3 Singular Perturbation 249
FIGURE 6.10. The consistency requirement
[179], Mishchenko, Pontryagin, and Boltyanskii [212] for bounded time intervals, and Hoppensteadt [144] for infinite time intervals. A good version of the proofs is given in Khalil [162]; see also [163]. Finally, a modern geometric version of these proofs using the Center Manifold theorem of the next Chapter 7 was given by Fenichel [97].
Theorem 6.7 Consistency over Bounded Time Intervals. Given that a unique solution curve $\gamma(t) = (x(t), y(t))$ starting from $(x_0, y_0) \in M_1$ of $\Sigma$ exists on $[0, a[$, then $\exists \delta > 0$ such that solutions of $\Sigma_\varepsilon$ starting from $(x(0), y(0))$ with $|x(0) - x_0| + |y(0) - y_0| < \delta$ converge uniformly to $\gamma(t)$ on all closed subintervals of $]0, a[$ as $\varepsilon \to 0$, provided that the spectrum of $D_2g(x(t), y(t))$ over the trajectory $\gamma(t)$ of $\Sigma$ lies in the left half complex plane and is bounded away from the $j\omega$ axis.
Remarks:
1. If the pointwise consistency condition $\sigma(D_2g(x_0, y_0)) \subset \mathbb{C}_-^\circ$ holds for $(x_0, y_0) \in M$, it holds in a neighborhood of $(x_0, y_0)$. Thus, we label the subset of $M$ given by
$$P^{-1}(t)\, D_2 g(x(t), y(t))\, P(t) = \begin{pmatrix} B(t) & 0 \\ 0 & C(t) \end{pmatrix} \qquad (6.31)$$
for $0 \leq t < a$, where $B(t) \in \mathbb{R}^{m_1 \times m_1}$ has spectrum in $\mathbb{C}_-^\circ$ bounded away from the $j\omega$ axis and $C(t) \in \mathbb{R}^{(m - m_1) \times (m - m_1)}$ has spectrum in $\mathbb{C}_+^\circ$ bounded away from the $j\omega$ axis.
250 6. Applications of Lyapunov Theory
Theorem 6.8 Consistency from Stable Initial Manifolds over Bounded Time Intervals. Given that a unique solution curve $\gamma(t) = (x(t), y(t))$ starting from $(x_0, y_0) \in M_1$ of $\Sigma$ exists on $[0, a[$, then there exists an $m_1$-dimensional manifold $S(\varepsilon) \subset \mathbb{R}^m$ depending on $\varepsilon$ for $0 < \varepsilon \leq \varepsilon_0$ such that if the initial vector for $\Sigma_\varepsilon$ is $(x_0, y_1(0))$ with $y_1(0) \in S(\varepsilon)$, then the solution of (6.21) satisfies the inequalities (with $b \in \mathbb{R}^{m_1}$ representing coordinates for $S(\varepsilon)$ in a neighborhood of $y_0$).
1. $\lim_{\varepsilon \to 0} S(\varepsilon)$ exists and is an $m_1$-dimensional manifold that is the local stable manifold of the equilibrium $y_0$ of the "frozen" boundary layer system $T$. This is depicted in Figure 6.11. In the sequel, we refer to the limit $S(0)$ as $S_{y_0}^{x_0}$ to show its dependence on the equilibrium $y_0$ with $x_0$ frozen.
2. The preceding theorems were stated as "local"; they can be made global in a certain sense which we will discuss in Chapter 7. Actually, the parameterized center manifold theorem of Chapter 7 has been used by Fenichel to derive "global" versions of both Theorems 6.7 and 6.8.
It is of obvious importance to understand when trajectories of the full system (6.21) converge to those of the singularly perturbed system (6.22). In particular, it is important to understand which of the solutions of $g(x, y) = 0$ are chosen for a given value of $x$ by the system.
slip off $M$ and transit infinitely rapidly (jump) in the $y$ variable to another portion of $M$ along the singularly perturbed (infinitely fast) limit of the parasitic dynamics. Thus, a valid definition of a solution concept for the singularly perturbed limit of (6.22), viewed as the limit of the augmented system of (6.21), has to allow for the possibility of jumps (in the $y$ variable; the $x$ variable is constrained to be at least an absolutely continuous function of time by the differential equation in (6.22)) from those parts of $M$ that are not attractive, that is, for which
However, to make this intuitive picture correct, we will need to be sure that the frozen boundary layer system is "gradient-like" in the sense of the previous chapter (cf. Section 5.9); that is to say, for each $x_0$, each trajectory of
$$\frac{dy}{d\tau} = g(x_0, y), \qquad y(0) = y_0 \qquad (6.32)$$
converges to one of the equilibrium points of the system (6.32). In particular this is guaranteed if $g(x, y)$ is indeed the gradient of some scalar function, that is,
$$g(x_0, y) = D_y S(x_0, y)$$
for some $S(x_0, \cdot) : \mathbb{R}^m \to \mathbb{R}$ that is a proper function for each $x_0$. The construction that we have described may be made more mathematically precise as a singular foliation of $\mathbb{R}^{n+m}$, with "fibers" of varying dimension attached to a "base space" $M$. The slow dynamics live on the base space and the infinitely fast dynamics on the fiber space. Now, define $M_h$ to be the set of points for which the equilibria of the frozen boundary layer system are hyperbolic; that is,
$$M_h = \{(x, y) : g(x, y) = 0,\ \sigma(D_2 g(x, y)) \cap j\mathbb{R} = \emptyset\}.$$
It is clear that the stability properties of the base points change on $M_h$. We refer to the points in $M_h$ as the non-singular points. Of course, as we just pointed out, if the non-singular points are not stable, there could be jumps off them onto other segments of $M$.
At this point, the reader will find it instructive to return to the degenerate van der Pol example of Chapter 2 (equation (2.25)) and relate the construction given here to that system. It follows in particular that any trajectories starting in the middle section of the curved resistor characteristic are liable to slip off the manifold $M$ and transition to one or the other of the two outer legs of the resistor characteristic (cf. Figure 2.31). In re-examining this example, it appears that the truly interesting and amazing feature of the behavior of singularly perturbed nonlinear systems comes from the changing nature of the stability of points on $M$ as $x$ changes, i.e., at points $x \in M_h$ (cf. the "impasse" points in Figure 2.30); that is to say, $x$ acts as a bifurcation parameter for the dynamics of the frozen boundary layer system. Thus, a solution concept for the singularly perturbed limit has to allow for jumps in the $y$ variable as $(x, y)$ transitions through $M_h$. We will take up this point in the next chapter, in Section 7.10.2, after we have studied bifurcations.
Here $H(z)$ is called the stored energy of the capacitors and inductors, and $h(\cdot)$ is called the constitutive equation of the capacitors and inductors. We will assume that $h$ is a diffeomorphism, or else continuous changes in the charge and flux may result in discontinuous changes in the voltage across the capacitors and current through the inductors.
[Figure: the capacitors and inductors extracted as ports ($y_1$, $y_2$, $x$) terminating a resistive $n$-port]
$$g(x, y) = 0. \qquad (6.34)$$
Here $y \in \mathbb{R}^n$ is the vector of capacitor port currents ($y_1$) and inductor port voltages ($y_2$), with reference directions chosen such that $x^T y$ represents the power into the $n$-port. Now, Coulomb's law and Faraday's law yield the equation
$$\dot{z} = -y. \qquad (6.35)$$
This is to be combined with
$$\begin{aligned} \dot{x} &= -Dh(h^{-1}(x))\, y, \\ 0 &= g(x, y). \end{aligned} \qquad (6.37)$$
then the circuit equations of (6.37) can be solved to give normal form equations. However, if the circuit equations admit some hybrid representation of the form
$$\begin{aligned} x_A &= f_A(y_A, x_B), \\ y_B &= f_B(y_A, x_B), \end{aligned} \qquad (6.38)$$
then
$$\begin{aligned} \dot{x} &= -Dh(h^{-1}(x)) \begin{bmatrix} y_A \\ f_B(y_A, x_B) \end{bmatrix}, \\ 0 &= x_A - f_A(y_A, x_B). \end{aligned} \qquad (6.39)$$
FIGURE 6.13. The introduction of parasitic capacitors and inductors to normalize the nonlinear circuit
$x \in \mathbb{R}^n$ (if multiple solutions exist), the understanding is that the second of these two equations is the singularly perturbed limit of
(6.40)
1. $\dot{v}(x, y_A) < 0$ for $y_A \notin \{y_A : x_A = f_A(y_A, x_B)\}$.
2. $\{y_A : \dot{v}(x, y_A) = 0\}$ is the set of equilibrium points.
3. $v(x, y_A)$ is a proper function of $y_A$ for each value of $x$; that is, the inverse image of compact sets is compact.

Note that the statement above has $\mathbb{C}_-^\circ$ only because the boundary layer system (6.40) has $-f_A$ in its right-hand side.
6.4 Dynamics of Nonlinear Systems and Parasitics 255
$$\begin{aligned} M_i\dot{\omega}_i + D_i\omega_i &= P_i^m - P_i^e, \\ \dot{\delta}_i &= \omega_i, \end{aligned} \qquad (6.41)$$
where for $i = 1, \ldots, g$, $M_i$ ($D_i$) represents the moment of inertia (damping constant), $\omega_i$ is the departure of the generator frequency from synchronous frequency, and $\delta_i$ is the generator rotor angle measured relative to a synchronously rotating reference. Moreover, $P_i^m$ represents the exogenously specified mechanical input power, and $P_i^e$ is the electrical power output at the $i$-th node. Now let $\theta_i$ for $i = 1, \ldots, l$ be the angles of a set of $l$ nodes which have no generation, and let $P_i^l$ be the power demanded at these nodes. Now define $B_{ij}^G$ to be the admittance of the line joining nodes $i$ and $j$ for the generator nodes, and let $B_{ij}^{GL}$, $B_{ij}^L$ be similarly defined. If two buses are not connected, then $B_{ij} = 0$. Then the so-called load flow equations are given by
(6.42)
Combining equations (6.41) and (6.42), we get
$$\begin{aligned} \dot{\delta} &= \omega, \\ \dot{\omega} &= -M^{-1}D\omega + M^{-1}P^m - M^{-1}f^G(\delta, \theta), \\ 0 &= f^l(\delta, \theta) - P^l. \end{aligned} \qquad (6.43)$$
function explicitly by
$$\hat{P}(s) = \frac{\alpha_n s^{n-1} + \cdots + \alpha_1}{s^n + \beta_n s^{n-1} + \cdots + \beta_1} = \frac{k_p n_p}{d_p}, \qquad (6.44)$$
where the $2n$ coefficients $\alpha_1, \ldots, \alpha_n$ and $\beta_1, \ldots, \beta_n$ are unknown. This expression is a parameterization of the plant, that is, a model for which only a finite number of parameters need to be determined. If $u_p$ and $y_p$ stand for the input and output of the plant and the initial condition at $t = 0$ is zero, then it follows from algebraic manipulation that
$$s^n y_p = (\alpha_n s^{n-1} + \cdots + \alpha_1)u_p - (\beta_n s^{n-1} + \cdots + \beta_1)y_p.$$
The foregoing is an expression that is linear in the parameters, but unfortunately involves differentiation of the signals $u_p$, $y_p$, an inherently noisy operation. In order to get around this problem, we define a monic Hurwitz polynomial $\hat{\lambda}(s) \in \mathbb{R}[s] := s^n + \lambda_n s^{n-1} + \cdots + \lambda_1$. Then we rewrite the previous equation as
then the new representation of the zero initial condition response of the plant can be written as
$$\hat{y}_p(s) = \frac{a^*(s)}{\hat{\lambda}(s)}\hat{u}_p(s) + \frac{b^*(s)}{\hat{\lambda}(s)}\hat{y}_p(s), \qquad (6.47)$$
and also
$$\hat{P}(s) = \frac{a^*(s)}{\hat{\lambda}(s) - b^*(s)}.$$
The parameterization is unique if the given plant numerator and denominator are coprime. Indeed, if there exist two different polynomials $a^* + \Delta a(s)$, $b^* + \Delta b(s)$ of degree at most $n - 1$ each so as to yield the same plant transfer function, then an easy calculation shows that we would have
$$\frac{\Delta a(s)}{\Delta b(s)} = -\frac{k_p n_p}{d_p} = -\hat{P}(s).$$
This contradicts the coprimeness of $n_p$, $d_p$, since $\Delta b(s)$ has degree at most $n - 1$. The basis for the identifier is a state space realization of equation (6.46).
$$\Lambda = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\lambda_1 & -\lambda_2 & -\lambda_3 & \cdots & -\lambda_n \end{pmatrix}, \qquad b_\lambda = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix},$$
so as to yield
$$(sI - \Lambda)^{-1} b_\lambda = \frac{1}{\hat{\lambda}(s)} \begin{pmatrix} 1 \\ s \\ \vdots \\ s^{n-1} \end{pmatrix}.$$
$$\begin{aligned} \dot{\omega}_1 &= \Lambda\omega_1 + b_\lambda u_p, \\ \dot{\omega}_2 &= \Lambda\omega_2 + b_\lambda y_p, \end{aligned} \qquad (6.52)$$
to reconstruct the states of the plant. It is interesting to note that the error $\omega - \omega_p$ decays to zero regardless of the stability of the plant. Thus, the generalized state of the plant can be reconstructed without knowledge of the parameters of the plant. Thus, the (true) plant output may be written as
$$y_p(t) = \theta^{*T}\omega(t) + \epsilon(t),$$
where the signal $\epsilon(t)$ stands for additive exponentially decaying terms caused by initial conditions in the observer and also in the plant parameterization. We will first neglect these terms, but later we will show that they do not affect the properties of the identifier. In analogy with the expression for the plant output, we define the output of the identifier to be
$$y_i(t) = \theta^T(t)\omega(t). \qquad (6.53)$$
We define the parameter error to be
$$\phi(t) := \theta(t) - \theta^* \in \mathbb{R}^{2n}.$$
sometimes written in terms of the parameter error as $\dot{\theta} = \dot{\phi}$, since $\phi = \theta - \theta^*$ with $\theta^*$ constant. Two classes of algorithms are commonly utilized for identification: gradient algorithms and least-squares algorithms.
Gradient Algorithms
The update law
$$\dot{\theta} = \dot{\phi} = -g\, e_i\, \omega, \qquad g \in \mathbb{R},\ g > 0, \qquad (6.55)$$
is the so-called standard gradient algorithm, so named because the right hand side is the negative of the gradient of the identifier output error squared, i.e., $e_i^2(\theta)$, with respect to $\theta$, with $\theta$ assumed frozen. The parameter $g$ is a fixed positive gain called the adaptation gain, which enables a variation in the speed of the adaptation. A variation of this algorithm is the normalized gradient algorithm
$$\dot{\theta} = \dot{\phi} = -\frac{g\, e_i\, \omega}{1 + \gamma\omega^T\omega}, \qquad g, \gamma \in \mathbb{R},\ g, \gamma > 0. \qquad (6.56)$$
The advantage of the normalized gradient algorithm is the fact that it may be used even when the signals $\omega$ are unbounded, as is the case with unstable plants.
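A minimal simulation of the normalized gradient law (6.56) illustrates parameter convergence; the two-parameter regressor $\omega(t) = (\sin t, \cos t)$ and true parameter $\theta^* = (2, -1)$ below are assumptions chosen for illustration:

```python
import math

# Normalized gradient identifier (6.56) on a fictitious two-parameter problem:
# y_p(t) = theta*^T w(t), with w(t) = (sin t, cos t) persistently exciting.
theta_star = (2.0, -1.0)
g, gamma, dt = 5.0, 1.0, 1e-3
theta = [0.0, 0.0]
t = 0.0
for _ in range(200_000):                      # simulate 200 s by forward Euler
    w = (math.sin(t), math.cos(t))
    e = (theta[0] - theta_star[0]) * w[0] + (theta[1] - theta_star[1]) * w[1]
    norm = 1.0 + gamma * (w[0] ** 2 + w[1] ** 2)
    theta[0] -= dt * g * e * w[0] / norm      # theta_dot = -g e_i w / (1 + gamma w^T w)
    theta[1] -= dt * g * e * w[1] / norm
    t += dt
print(abs(theta[0] - 2.0) < 1e-2 and abs(theta[1] + 1.0) < 1e-2)
```

Note that Theorems 6.9-6.11 below guarantee only boundedness and $e_i \in L_2$ in general; the parameter convergence seen here relies on the regressor being persistently exciting in the sense of (6.63).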
Least-Squares Algorithms
These are derived from writing down the Kalman-Bucy filter for the state estimation problem for the state $\theta^*$ of a (fictitious) system representing the plant, with state variables being the unknown parameters:
$$\dot{\theta}^*(t) = 0$$
with output $y_p(t) = \omega^T(t)\theta^*(t)$. Assuming that the right hand sides of these equations are perturbed by zero mean, white Gaussian noise of spectral intensities
6.5 Single-Input Single-Output Linear Time-Invariant Systems 261
$$\begin{aligned} \dot{\theta} &= -g P\omega e_i, \\ \dot{P} &= Q - g P\omega\omega^T P, \qquad P > 0,\ Q > 0,\ g > 0. \end{aligned} \qquad (6.57)$$
The matrix $P$ is called the covariance matrix² and acts on the update law as a time varying directional adaptation gain. It is common to use a version of the least squares algorithm with $Q = 0$, since the parameter $\theta^*$ is assumed to be constant.
In this case the preceding update law for the covariance matrix has the form
$$\frac{dP}{dt} = -g P\omega\omega^T P \iff \frac{dP^{-1}}{dt} = g\,\omega\omega^T, \qquad (6.58)$$
where $g > 0$ is a constant. This equation shows that $dP^{-1}/dt \geq 0$, so that $P^{-1}$ may grow without bound. Thus, some directions of $P$ will become arbitrarily small, and the update in those directions will be arbitrarily slow. This is referred to as the covariance wind-up problem. One way of preventing covariance wind-up is to use
a least squares with forgetting factor algorithm, defined by
$$\frac{dP}{dt} = -g(-\lambda P + P\omega\omega^T P), \qquad \frac{dP^{-1}}{dt} = g(-\lambda P^{-1} + \omega\omega^T), \qquad g, \lambda > 0. \qquad (6.59)$$
Yet another remedy is to use a least squares algorithm with covariance resetting, where $P$ is reset to some predetermined positive definite value whenever $\lambda_{\min}(P)$ falls under some threshold. A normalized least-squares algorithm may be defined as
$$\begin{aligned} \dot{\theta} &= -\frac{g P\omega e_i}{1 + \gamma\omega^T P\omega}, \qquad g, \gamma > 0, \\ \dot{P} &= -\frac{g P\omega\omega^T P}{1 + \gamma\omega^T P\omega}, \\ \frac{dP^{-1}}{dt} &= \frac{g\,\omega\omega^T}{1 + \gamma\omega^T P\omega}. \end{aligned} \qquad (6.60)$$
Again, $g$, $\gamma$ are fixed parameters and $P(0) > 0$. The same modifications as before may be made to avoid covariance wind-up. We will assume that we will use resetting in what follows, that is,
²In the theory of Kalman-Bucy filtering, $P$ has the interpretation of being the covariance of the error in the state estimate at time $t$.
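A scalar sketch of the normalized least-squares law (6.60) with covariance resetting follows; the plant $y_p = \theta^*\omega$ with $\omega(t) = \sin t$, the gains, and the reset threshold are all hypothetical choices for illustration:

```python
import math

# Scalar normalized least squares (6.60) with covariance resetting:
# P is reset to P0 whenever it falls below a threshold, avoiding wind-up.
theta_star, g, gamma, dt = 3.0, 1.0, 1.0, 1e-3
theta, P, P0 = 0.0, 10.0, 10.0
resets, t = 0, 0.0
for _ in range(200_000):                      # 200 s of forward Euler
    w = math.sin(t)
    e = (theta - theta_star) * w              # identifier error e_i = phi * w
    denom = 1.0 + gamma * w * P * w
    theta -= dt * g * P * w * e / denom
    P -= dt * g * (P * w * w * P) / denom     # P shrinks monotonically
    if P < 0.1:                               # covariance resetting
        P = P0
        resets += 1
    t += dt
print(abs(theta - theta_star) < 1e-2, resets >= 1)
```

Without the reset, $P \to 0$ and the update in that direction becomes arbitrarily slow, which is exactly the covariance wind-up phenomenon described above.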
• $e_i \in L_2$.
• $\phi \in L_\infty$.
Proof: The differential equation describing the evolution of $\phi$ is
$$\dot{\phi} = -g\,\omega\omega^T\phi.$$
With Lyapunov function $v(\phi) = \phi^T\phi$, the derivative along trajectories is given by
$$\dot{v} = -2g(\phi^T\omega)^2 = -2g e_i^2 \leq 0.$$
Hence $0 \leq v(t) \leq v(0)$ for all $t \geq 0$, so that $v, \phi \in L_\infty$. Further, since $v(\infty)$ is well defined, we have
$$2g\int_0^\infty e_i^2\, dt = v(0) - v(\infty) < \infty,$$
so that $e_i \in L_2$.
• $\dfrac{e_i}{\sqrt{1 + \gamma\omega^T\omega}} \in L_2 \cap L_\infty$.
• $\phi \in L_\infty$, $\dot{\phi} \in L_2 \cap L_\infty$.
Proof: With the same Lyapunov function $v(\phi) = \phi^T\phi$, we get
$$\dot{v} = -\frac{2g\, e_i^2}{1 + \gamma\omega^T\omega} \leq 0.$$
Hence $0 \leq v(t) \leq v(0)$ for all $t \geq 0$, so that from the same logic as before we have $v, \phi, e_i/\sqrt{1 + \gamma\omega^T\omega} \in L_\infty$. From the formula
$$\dot{\phi} = -g\,\frac{\omega\omega^T}{1 + \gamma\omega^T\omega}\,\phi$$
1. $k_0 I \leq P(t) \leq k_1 I$,
2. $k_1^{-1} I \leq P^{-1}(t) \leq k_0^{-1} I$.
In addition, the time between resets is bounded below, since it is easy to estimate from (6.60) that
$$\frac{dP^{-1}}{dt} \leq \frac{g}{\gamma\,\lambda_{\min}(P(t))}\, I.$$
Thus, the set of resets is of measure 0. Now, choose the decrescent p.d.f. $v(\phi, t) = \phi^T P^{-1}(t)\phi$, and verify that between resets
$$\dot{v} = -\frac{g\, e_i^2}{1 + \gamma\omega^T P\omega} \leq 0.$$
Thus, we have $0 \leq v(t) \leq v(0)$, and given the bounds on $P(t)$, it follows that $v, \phi \in L_\infty$. From the bounds on $P(t)$, it also follows that
$$\frac{e_i}{\sqrt{1 + \gamma\omega^T P\omega}} \in L_\infty.$$
Also,
$$-\int_0^\infty \dot{v}\, dt < \infty \implies \frac{e_i}{\sqrt{1 + \gamma\omega^T P\omega}} \in L_2.$$
This establishes the first claim of the theorem. Now, writing
$$\dot{\phi} = -g\,\frac{e_i}{\sqrt{1 + \gamma\omega^T P\omega}}\,\frac{P\omega}{\sqrt{1 + \gamma\omega^T P\omega}},$$
we see that the first term on the right hand side is in $L_2$ and the second term is in $L_\infty$, so that we have $\dot{\phi} \in L_2$. □
Theorems 6.9-6.11 state general properties of the identifiers with gradient, normalized gradient, and normalized least-squares update laws, respectively. They appear to be somewhat weak in that they do not guarantee that the parameter error $\phi$ approaches 0 as $t \to \infty$, or, for that matter, that the identifier error $e_i$ tends to 0. However, the theorems are extremely general in that they do not assume either boundedness of the regressor signal $\omega(\cdot)$ or stability of the plant. The next theorem gives conditions under which it can be proven that $e_i(\cdot) \to 0$ as $t \to \infty$. This property is called the stability of the identifier.
Theorem 6.12 Stability of the Identifier. Consider the identifier with the error equation (6.54), with either the gradient update law of (6.55), the normalized gradient update law of (6.56), or the normalized least squares of (6.60) with covariance resetting given by (6.61). Assume that the input reference signal $u_p(\cdot)$ is piecewise continuous and bounded. In addition, assume that the plant is either stable or is embedded in a stabilizing loop so that the output $y_p(\cdot)$ is bounded. Then the estimation error $e_i \in L_2 \cap L_\infty$, with $e_i(t)$ converging to 0 as $t \to \infty$. In addition, $\dot{\phi} \in L_\infty \cap L_2$, and $\dot{\phi}(t)$ converges to 0 as $t \to \infty$.
The time-varying linear system of (6.62) has a negative semi-definite $A(t)$. The condition for parameter convergence of this update law (and the others as well) relies on the following definition of persistency of excitation: the regressor $\omega(\cdot)$ is said to be persistently exciting if there exist $\alpha, \sigma > 0$ such that, for all $t_0 \geq 0$,
$$\alpha I \leq \int_{t_0}^{t_0 + \sigma} \omega(\tau)\omega^T(\tau)\, d\tau. \qquad (6.63)$$
This may be interpreted as a uniform observability condition for the system
$$\begin{aligned} \dot{\phi} &= 0, \\ y(t) &= \omega^T(t)\phi(t). \end{aligned} \qquad (6.64)$$
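The persistency of excitation condition (6.63) can be checked numerically by computing the windowed grammian and its smallest eigenvalue; the two regressors below (one sinusoidal pair, one decaying signal) are hypothetical examples:

```python
import math

# Windowed grammian int_{t0}^{t0+sigma} w(t) w(t)^T dt for a 2-vector regressor,
# integrated by the midpoint rule; its smallest eigenvalue tests PE (6.63).
def grammian(w, t0, sigma, n=20000):
    dt = sigma / n
    s11 = s12 = s22 = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * dt
        w1, w2 = w(t)
        s11 += w1 * w1 * dt
        s12 += w1 * w2 * dt
        s22 += w2 * w2 * dt
    return s11, s12, s22

def min_eig(s11, s12, s22):
    # smallest eigenvalue of a symmetric 2x2 matrix
    tr, det = s11 + s22, s11 * s22 - s12 * s12
    return tr / 2 - math.sqrt(tr * tr / 4 - det)

pe = min_eig(*grammian(lambda t: (math.sin(t), math.cos(t)), 5.0, 2 * math.pi))
not_pe = min_eig(*grammian(lambda t: (math.exp(-t), 0.0), 5.0, 2 * math.pi))
print(pe > 1.0, not_pe < 1e-3)
```

For $\omega(t) = (\sin t, \cos t)$ the grammian over one period is $\pi I$ for every $t_0$, so (6.63) holds with $\alpha = \pi$; the decaying regressor excites only one direction and its grammian is singular.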
6.6 Averaging
Averaging is a method of approximating the dynamics of a (slowly) time-varying system by the dynamics of a time invariant system. More precisely, consider the system
$$\dot{x} = \varepsilon f(x, t, \varepsilon), \qquad x(0) = x_0. \qquad (6.66)$$
Here $\varepsilon > 0$ is a small parameter which models the fact that the dynamics of $x$ are slowly varying with respect to the time variation of the right hand side of (6.66).
The averaged system corresponding to (6.66) is defined to be
$$\dot{x}_{av} = \varepsilon f_{av}(x_{av}), \qquad x_{av}(0) = x_0, \qquad (6.67)$$
where
$$f_{av}(x) := \lim_{T \to \infty} \frac{1}{T}\int_{t_0}^{t_0 + T} f(x, \tau, 0)\, d\tau, \qquad (6.68)$$
assuming that the limit exists. The method was proposed initially by Bogoliubov and Mitropolsky [33], developed by several authors including Volosov and Sethna [271], and Hale [125], and stated in geometric form by Arnold [12] and Guckenheimer and Holmes [122]. Our treatment closely follows Chapter 4 of the book by Sastry and Bodson [259].
$$\left|\frac{1}{T}\int_{t_0}^{t_0 + T} f(x, \tau, 0)\, d\tau - f_{av}(x)\right| \leq \gamma(T) \qquad (6.69)$$
for all $t_0 \geq 0$, $T \geq 0$, $x \in B_h$. The function $\gamma(T)$ is called the convergence function.
The function $f(x, t, 0)$ has mean value $f_{av}(x)$ if and only if the function
$$d(x, t) := f(x, t, 0) - f_{av}(x)$$
has zero mean value. If $f(x, t, 0)$ is periodic, then $d(x, t)$ is periodic and integrable. If $d(x, t)$ is bounded in addition, $\gamma(T)$ in (6.69) is of the form $a/T$ for some $a$. If $f(x, t, 0)$ is almost periodic in $t$ (review the definition from Problem 4.1) and $d(x, t)$ is bounded, it may be verified (see [125]) that $d(x, t)$ has zero mean value. However, $\gamma(T)$ may not be of order $1/T$ (it could, for example, be of order $1/\sqrt{T}$).
We now address the following two issues:
1. The closeness of the trajectories of the unaveraged system (6.66) and the averaged system (6.67) on intervals of the form $[0, T/\varepsilon]$.
2. The relationship between the stability properties of the two systems.
for all $t \in \mathbb{R}_+$, $x \in B_h$. Moreover, if $\gamma(T) = a/T^r$ for some $a \geq 0$, $r \in\, ]0, 1]$, then $\xi(\varepsilon) = 2a\varepsilon^r$.
Remarks:
1. The main point of this lemma is that although the exact integral with respect to $t$ of $d(x, t)$ may be an unbounded function of time, there exists a function $w_\varepsilon(x, t)$ whose first partial derivative with respect to $t$ is arbitrarily close to $d(x, t)$.
2. The preceding lemma may be proved with weaker hypotheses (in particular, without the hypotheses on $\partial w(x, t)/\partial x$) if no estimate on the derivative of the approximate integral $w_\varepsilon(x, t)$ with respect to $x$ is needed (see [259] for details). We choose the stronger hypotheses to be able to determine stability of the unaveraged system from the stability of the averaged system.
Proof: Define
$$w_0(x, t) := \int_0^t d(x, \tau)\, d\tau.$$
Note that
$$w_\varepsilon(x, t) = w_0(x, t) - \varepsilon\int_0^t e^{-\varepsilon(t - \tau)}\, w_0(x, \tau)\, d\tau,$$
so that we have
$$w_\varepsilon(x, t) = \int_0^t e^{-\varepsilon(t - \tau)}\, d(x, \tau)\, d\tau. \qquad (6.71)$$
Using the fact that if $d(x, t)$ is bounded, say by $\beta$, then $\gamma(\cdot)$ is also bounded by $\beta$, a simple calculation yields⁴
$$|\varepsilon w_\varepsilon(x, t)| \leq \xi(\varepsilon)$$
for some $\xi(\varepsilon)$ of Class $K$. In the instance that $\gamma(T) = a/T^r$, equation (6.71) can be integrated explicitly; since $\Gamma(2 - r) \leq 1$ when $r \in\, ]0, 1]$, where $\Gamma$ is the standard gamma function, it follows that we may choose $\xi(\varepsilon) = 2a\varepsilon^r$. This gives the first estimate of (6.70). By construction, we may now check that
$$\frac{\partial w_\varepsilon(x, t)}{\partial t} - d(x, t) = -\varepsilon w_\varepsilon(x, t),$$
so that the second estimate of (6.70) is verified. Using the hypotheses on
$$\frac{\partial d(x, t)}{\partial x}$$
yields the third estimate of (6.70). The last estimate follows simply because $d(x, t)$ is bounded. □
We are ready to state the conditions for the basic averaging theorem. The assumptions are as follows, for some $h > 0$, $\varepsilon_0 > 0$:
A1. $x = 0$ is an equilibrium point of the unaveraged system, i.e., $f(0, t, 0) \equiv 0$. Actually, note that the conditions on $f$ are only required to hold at $\varepsilon = 0$.
A2. The function $f(x, t, \varepsilon)$ is Lipschitz continuous in $x \in B_h$, uniformly in $t$, $\varepsilon \in [0, \varepsilon_0]$, i.e.,
⁴The interval $[0, \infty[$ needs to be broken into the intervals $[0, \sqrt{\varepsilon}]$ and $[\sqrt{\varepsilon}, \infty[$ to produce this estimate.
A3. The function $d(x, t) = f(x, t, 0) - f_{av}(x)$ satisfies the conditions of Lemma 6.16.
A4. $|x_0|$ is chosen small enough so that the trajectories of (6.66) and (6.67) lie in $B_h$ for $t \in [0, T/\varepsilon_0]$ (such a choice is possible because of the Lipschitz continuity assumptions).
Theorem 6.17 Basic Averaging Theorem. If the original unaveraged system of (6.66) and the averaged system of (6.67) satisfy assumptions A1-A4 above, then there exists a function $\psi(\varepsilon)$ of Class $K$ such that, given $T > 0$, there exist $b_T > 0$, $\varepsilon_T > 0$ such that for $t \in [0, T/\varepsilon]$ and $0 < \varepsilon \leq \varepsilon_T$,
$$|x(t) - x_{av}(t)| \leq b_T\,\psi(\varepsilon). \qquad (6.72)$$
Proof: Under the change of coordinates $x = z + \varepsilon w_\varepsilon(z, t)$, equation (6.66) becomes
$$\left[I + \varepsilon\frac{\partial w_\varepsilon(z, t)}{\partial z}\right]\dot{z} = -\varepsilon\frac{\partial w_\varepsilon(z, t)}{\partial t} + \varepsilon f(z + \varepsilon w_\varepsilon(z, t), t, \varepsilon). \qquad (6.73)$$
The right-hand side of this equation may be written to group terms as follows:
$$\dot{z} = \left[I + \varepsilon\frac{\partial w_\varepsilon(z, t)}{\partial z}\right]^{-1}\left[\varepsilon f_{av}(z) + \varepsilon p(z, t, \varepsilon)\right] = \varepsilon f_{av}(z) + \varepsilon p_1(z, t, \varepsilon), \qquad z(0) = x_0, \qquad (6.74)$$
where
$$p_1(z, t, \varepsilon) = \left[I + \varepsilon\frac{\partial w_\varepsilon(z, t)}{\partial z}\right]^{-1}\left[p(z, t, \varepsilon) - \varepsilon\frac{\partial w_\varepsilon(z, t)}{\partial z}\, f_{av}(z)\right],$$
$$|z(t) - x_{av}(t)| \leq \varepsilon l_{av}\int_0^t |z(\tau) - x_{av}(\tau)|\, d\tau + \varepsilon\psi(\varepsilon)\int_0^t |z(\tau)|\, d\tau. \qquad (6.76)$$
To use the bounds above we need $z(\cdot), x_{av}(\cdot) \in B_h$; this is guaranteed by A4 above by choosing $\varepsilon$ small enough. Using the Bellman-Gronwall lemma on equation (6.76) we get
Proof: From the converse Lyapunov theorem for exponential stability (Theorem 5.17) it follows that there exists a converse Lyapunov function $v(x_{av})$ for the averaged system (6.67) satisfying
$$a_1|x_{av}|^2 \leq v(x_{av}) \leq a_2|x_{av}|^2, \qquad \dot{v}\big|_{(6.67)} \leq -\varepsilon a_3|x_{av}|^2, \qquad \left|\frac{\partial v}{\partial x_{av}}\right| \leq a_4|x_{av}|.$$
Note the presence of the $\varepsilon$ in $\dot{v}|_{(6.67)}$ to account for the multiplication by $\varepsilon$ in the right hand side of (6.67). We will now apply this Lyapunov function to study the stability of the unaveraged system transformed into the form of the second equation of (6.74), namely,
$$\dot{v}\big|_{(6.79)}(z) = \dot{v}\big|_{(6.67)}(z) + \frac{\partial v}{\partial z}\,\varepsilon p_1(z, t, \varepsilon) \leq -\varepsilon a_3|z|^2 + \varepsilon a_4\psi(\varepsilon)|z|^2. \qquad (6.80)$$
Define
$$\alpha(\varepsilon) := \frac{a_3 - \psi(\varepsilon)a_4}{a_2}.$$
We may choose $\varepsilon_1$ such that $\alpha(\varepsilon) > 0$ for $\varepsilon \leq \varepsilon_1$. Then, using the methods of Section 5.3.4, it may be verified that
$$|z(t)| \leq \sqrt{\frac{a_2}{a_1}}\; e^{-\varepsilon\alpha(\varepsilon)(t - t_0)/2}\, |z(t_0)|.$$
Using the estimate for $|x(t)|$ in terms of $|z(t)|$ from the Jacobian of the transformation, we get that
(6.81)
□
6.7 Adaptive Control 273
Remarks:
1. The preceding theorem is a local exponential stability theorem. It is easy to see that if the hypotheses of Lemma 6.16 are satisfied globally and the averaged system is exponentially stable, then the unaveraged system is also globally exponentially stable.
2. The proof of Theorem 6.18 gives useful bounds on the rate of convergence of the unaveraged system. The rates of convergence of the averaged and unaveraged systems are, respectively,
$$\lambda_{av} = \varepsilon\frac{a_3}{2a_2}, \qquad \lambda_\varepsilon = \varepsilon\frac{a_3 - \psi(\varepsilon)a_4}{2a_2}.$$
Since the averaged system is autonomous, it is often easy to find Lyapunov functions for it and use them in turn to get estimates for the convergence of the unaveraged system (see Problem 6.16 for an example of this).
3. The conclusions of Theorem 6.18 are quite different from the conclusions of Theorem 6.17. Since both $x(t)$ and $x_{av}(t)$ go to zero exponentially, it follows that the error $x(t) - x_{av}(t)$ also goes to zero exponentially. Theorem 6.18 does not, however, relate this error bound to $\varepsilon$. However, it is possible to combine the two theorems to give a composite bound on the approximation error for all time $[0, \infty[$ of the form of equation (6.72).
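The averaging theorems can be illustrated on the scalar system $\dot{x} = -\varepsilon(\sin^2 t)\,x$, whose averaged system is $\dot{x}_{av} = -(\varepsilon/2)x_{av}$; this standard example is an assumption for illustration, not one worked in the text:

```python
import math

# Unaveraged system xdot = -eps * sin(t)**2 * x versus its averaged
# system x_av_dot = -(eps/2) * x_av, compared on the interval [0, T/eps].
def simulate(eps, t_end, dt=1e-3):
    x, t = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (-eps * math.sin(t) ** 2) * x
        t += dt
    return x

eps = 0.05
t_end = 10.0 / eps                         # T/eps with T = 10
x_final = simulate(eps, t_end)
x_av_final = math.exp(-eps * t_end / 2)    # averaged prediction
print(abs(x_final - x_av_final) < 10 * eps)
```

Both trajectories decay exponentially (as in Theorem 6.18), and the gap between them shrinks with $\varepsilon$ (as in Theorem 6.17).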
$$\begin{aligned} \dot{y}_p &= -a_p y_p + k_p u_p, \\ \dot{y}_m &= -a_m y_m + k_m r. \end{aligned} \qquad (6.82)$$
Now note that the second error equation is of the same kind encountered in the context of identification in Section 6.5, and any one of the set of identification laws of that section, such as the gradient law or the least squares law or any of its variants, may be applied. For example, a gradient law is given by
$$\begin{aligned} \dot{c}_0 &= \dot{\phi}_1 = -g\, e_i\, y_p, \\ \dot{d}_0 &= \dot{\phi}_2 = -g\, e_i\, M(y_p). \end{aligned} \qquad (6.86)$$
However, the proof of stability is now more involved, since after $e_i$ and $\phi$ have been shown to be bounded, the proof that $y_p$ is bounded requires a form of "bounded-output bounded-input" stability for the equation (6.85).
The two algorithms discussed here are direct algorithms, since they concentrate on driving the output error $e_0$ to zero. An alternate approach is the indirect approach, where any of the identification procedures discussed in Section 6.5 are used to provide estimates of the plant parameters $k_p$, $a_p$. These estimates are, in turn, used to compute the controller parameters. This is a very intuitive approach but may run into problems if the plant is unstable. The generalization of the preceding discussion of both input-error and output-error adaptive control to general linear systems and also multi-input multi-output linear systems is an interesting and detailed topic in itself and is the subject of the monograph by Sastry and Bodson [259]. In this monograph it is also shown how to apply the averaging theorems of the preceding section to analyze the stability and robustness of adaptive control schemes. Output error schemes were first proven by Morse and Feuer [101], Narendra, Valavani, and Annaswamy [228], and input error schemes were introduced by Goodwin and Mayne [116]. See also the discrete time treatment in Goodwin and Sin [117] and Åström and Wittenmark [14].
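The first-order model reference scheme of (6.82) can be sketched in simulation. The plant and model data, the controller structure $u = c_0 r + d_0 y_p$, and the gradient update on the output error below are hypothetical choices (with $\mathrm{sign}(k_p) > 0$ assumed known), not the exact scheme derived in the text:

```python
import math

# First-order output-error MRAC sketch: plant ydot_p = -y_p + 2u,
# model ydot_m = -2y_m + 2r, control u = c0*r + d0*y_p,
# gradient updates c0dot = -g*e0*r, d0dot = -g*e0*y_p (e0 = y_p - y_m).
a_p, k_p, a_m, k_m = 1.0, 2.0, 2.0, 2.0   # matched values: c0* = 1, d0* = -0.5
g, dt = 2.0, 1e-3
y_p = y_m = c0 = d0 = 0.0
t = 0.0
for _ in range(500_000):                  # 500 s of forward Euler
    r = math.sin(t)
    u = c0 * r + d0 * y_p
    e0 = y_p - y_m
    y_p += dt * (-a_p * y_p + k_p * u)
    y_m += dt * (-a_m * y_m + k_m * r)
    c0 += dt * (-g * e0 * r)
    d0 += dt * (-g * e0 * y_p)
    t += dt
print(abs(y_p - y_m) < 0.05)              # output error has converged
```

With the sinusoidal reference the regressor $(r, y_p)$ is rich enough that the output error decays; driving the parameter errors themselves to zero again requires persistency of excitation.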
$$\begin{aligned} \dot{x}_1 &= x_2 + f_1(x_1), \\ \dot{x}_2 &= x_3 + f_2(x_1, x_2), \\ &\;\;\vdots \\ \dot{x}_i &= x_{i+1} + f_i(x_1, x_2, \ldots, x_i), \end{aligned} \qquad (6.87)$$
form (6.87) are given in Chapter 9. The idea behind back-stepping is to consider the state $x_2$ as a sort of "pseudo-control" for $x_1$. Thus, if it were possible to make $x_2 = -x_1 - f_1(x_1)$, the $x_1$ state would be stabilized (as may be verified by the simplest Lyapunov function $v_1(x_1) = (1/2)x_1^2$). Since $x_2$ is not available, we define
$$z_1 = x_1, \qquad z_2 = x_2 - \alpha_1(x_1),$$
with $\alpha_1(x_1) = -x_1 - f_1(x_1)$, so that
$$\dot{z}_1 = -z_1 + z_2, \qquad \dot{z}_2 = x_3 + f_2(x_1, x_2) - \frac{\partial\alpha_1}{\partial x_1}\left(x_2 + f_1(x_1)\right) := x_3 + \bar{f}_2(z_1, z_2),$$
and
$$\dot{v}_1 = -z_1^2 + z_1 z_2.$$
$$z_3 = x_3 - \alpha_2(z_1, z_2), \qquad v_2 = v_1 + \frac{1}{2}z_2^2.$$
To derive the formula for $\alpha_2(z_1, z_2)$, we verify that
$$\dot{z}_1 = -z_1 + z_2, \qquad \dot{z}_2 = -z_1 - z_2 + z_3,$$
$$\dot{v}_2 = -z_1^2 - z_2^2 + z_2 z_3.$$
Proceeding recursively, at the $i$-th step we define
$$v_i = \frac{1}{2}(z_1^2 + z_2^2 + \cdots + z_i^2)$$
to get
$$\dot{z}_i = -z_{i-1} - z_i + z_{i+1},$$
$$\dot{v}_i = -z_1^2 - \cdots - z_i^2 + z_i z_{i+1}.$$
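The recursion can be exercised on a two-state example where the control enters at the last equation; the choice $f_1(x_1) = x_1^3$ and the control channel $\dot{x}_2 = u$ below are assumptions for illustration:

```python
# Two-state backstepping sketch for x1dot = x2 + x1**3, x2dot = u.
# With alpha1 = -x1 - x1**3 and u = (dalpha1/dx1)(x2 + x1**3) - z1 - z2,
# the z-dynamics are z1dot = -z1 + z2, z2dot = -z1 - z2 (exponentially stable).
def simulate(x1, x2, t_end=10.0, dt=1e-4):
    for _ in range(int(t_end / dt)):
        f1 = x1 ** 3
        alpha1 = -x1 - f1
        z1, z2 = x1, x2 - alpha1
        dalpha1 = -1.0 - 3.0 * x1 ** 2        # dalpha1/dx1
        u = dalpha1 * (x2 + f1) - z1 - z2
        x1, x2 = x1 + dt * (x2 + f1), x2 + dt * u
    return x1, x2

x1_f, x2_f = simulate(0.5, -0.5)
print(abs(x1_f) < 1e-3 and abs(x2_f) < 1e-3)
```

Note that $\frac{d}{dt}(z_1^2 + z_2^2) = -2(z_1^2 + z_2^2)$ exactly for this choice, so $|z(t)|$ decays like $e^{-t}$ and the original states follow.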
6.9 Summary
In this chapter we have given the reader a snapshot of a number of very interesting and exciting areas of application of Lyapunov theory: stabilization of nonlinear systems, the circle and Popov criteria, adaptive identification, averaging, and singular perturbations. We gave a very brief introduction to a very large and interesting body of literature in adaptive control as well. For adaptive control there is a great deal more to be studied in terms of proofs of convergence (see, for example, [259]). In recent years there has been a resurgence of interest in new and streamlined proofs of stability using some unified concepts of detectability and stabilizability (see, for example, Morse [219], Ljung and Glad [186], and Hespanha and Morse [138]).
There is a very interesting new body of literature on the use of control Lyapunov functions; we gave the reader a sense of the kind of results that are obtained in Section 6.8. However, there is a lot more on tracking for nonlinear systems using Lyapunov-based approaches, of which back-stepping is one constructive example. Of special interest is an approach to control of uncertain nonlinear systems developed by Corless and Leitmann [70]. For more details on this topic we refer to two excellent survey papers, one by Teel and Praly [297], the other by Coron [71], and the recent books of Krstić, Kanellakopoulos, and Kokotović [171] and Freeman and Kokotović [106]. The use of passivity based techniques for stabilizing nonlinear systems has also attracted considerable interest in the literature; for more on this we refer the reader to the work of Ortega and co-workers [235; 236], Lozano, Brogliato, and Landau [189], and the recent book by Sepulchre, Janković, and Kokotović [269]. An elegant geometric point of view on the use of passivity based techniques in the control of robots and exercise machines is in the work of Li and Horowitz [180; 181].
6.10 Exercises
Problem 6.1 Stabilizing using the linear quadratic regulator. Consider the linear time-invariant control system
$$\dot{x} = Ax + Bu \qquad (6.88)$$
with the feedback law $u = -B^T P x$, where $P \in \mathbb{R}^{n \times n}$ is the unique positive definite solution of the steady state Riccati equation (with $(A, C)$ completely observable)
$$A^T P + PA - PBB^T P + C^T C = 0. \qquad (6.89)$$
Prove that the closed loop system is exponentially stable using the Lyapunov function $x^T P x$. What does this imply for stabilization of nonlinear systems of the form
$$\dot{x} = f(x, u)?$$
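In the scalar case the Riccati equation (6.89) reduces to a quadratic and the Lyapunov claim can be checked directly; the data $a = b = c = 1$ below are a hypothetical sketch:

```python
import math

# Scalar LQR check: for xdot = a x + b u, the Riccati equation (6.89) reads
# 2 a P - b**2 P**2 + c**2 = 0; with u = -b P x the closed loop is a - b**2 P,
# and v = P x**2 gives vdot = 2 P (a - b**2 P) x**2 = -(c**2 + b**2 P**2) x**2.
def lqr_scalar(a, b, c):
    """Positive root of b^2 P^2 - 2 a P - c^2 = 0."""
    return (2 * a + math.sqrt(4 * a * a + 4 * b * b * c * c)) / (2 * b * b)

a, b, c = 1.0, 1.0, 1.0                  # open loop unstable (a > 0)
P = lqr_scalar(a, b, c)
a_cl = a - b * b * P                     # closed-loop pole
vdot_coeff = 2 * P * a_cl                # coefficient of x^2 in vdot
print(P > 0, a_cl < 0, vdot_coeff < 0)
```

The identity $2Pa_{cl} = -(c^2 + b^2P^2)$, which follows from the Riccati equation, is exactly the Lyapunov decrease the problem asks you to prove.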
x= f(x, u) .
Show that if the linearization has some unstable modes that are not controllable,
i.e., values of A E C~ at which the rank ofthe matrix [AI - A , B] is 1ess than n, that
no nonlinear feedback can stabilize the origin. Also show that if the linearization
is stabilizable, i.e there are no eigenva1ues A E C+ for which rank [AI - A , B] is
1ess than n, then the nonlinear system is stabilizable by linear state feedback.
Finally, for the case in which there are some jto axis eigenvalues that are not
controllable give an example of a control system that is not stabilizable by linear
feedback but is stabilizable by nonlinear feedback. Look ahead to the next chapter
for hints on this part of the problem.
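As a numerical sanity check on the first part of the problem, the sketch below (an illustration, not part of the problem statement; it assumes NumPy and uses a double integrator as the example plant) solves the steady state Riccati equation via the stable invariant subspace of the Hamiltonian matrix and verifies that u = −BᵀPx stabilizes:

```python
import numpy as np

# Double integrator example: (A, C) observable with C = I.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)

# Solve A^T P + P A - P B B^T P + C^T C = 0 via the stable invariant
# subspace of the 4x4 Hamiltonian matrix.
H = np.block([[A, -B @ B.T], [-C.T @ C, -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]           # basis of the stable subspace of H
P = np.real(stable[2:, :] @ np.linalg.inv(stable[:2, :]))

Acl = A - B @ B.T @ P               # closed loop under u = -B^T P x
residual = A.T @ P + P @ A - P @ B @ B.T @ P + C.T @ C
```

With P symmetric positive definite and the Riccati residual zero, the Lyapunov argument gives AclᵀP + PAcl = −(CᵀC + PBBᵀP) < 0, so all closed-loop eigenvalues lie in the open left half plane.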
Here α > 0. Define P(t) := W(t, t + δ)⁻¹. Show that the feedback control law

u = −Bᵀ(t)P(t)x

about a given nominal trajectory u⁰(·), x⁰(·), that is, to drive the tracking error

x(·) − x⁰(·)

to zero. Give the appropriate conditions on the linearization and rate of convergence
of the nonlinear system to its nominal trajectory.
Problem 6.4 Observer based stabilization. Consider the control system with
x ∈ ℝⁿ, u ∈ ℝ^{n_i}, y ∈ ℝ^{n_o}

ẋ = f(x, u),
y = g(x). (6.90)

Assume that the state feedback control law

u = h(x)

locally exponentially stabilizes the closed-loop system. Assume further that the
system is exponentially detectable, i.e., that there exists an "observer"

ż = r(z, u, y) (6.91)

such that r(0, 0, 0) = 0 and there exists a function W : ℝⁿ × ℝⁿ → ℝ satisfying,
for x, z ∈ B_p, u ∈ B_p,
Problem 6.5. Use the KYP Lemma to show that the zeros of an SPR transfer
function lie in the open left half plane (ℂ₋).
Problem 6.6 Circle and Popov Criterion examples. Using either the circle
criterion or the Popov criterion, determine ranges of sector-bounded nonlinearities
in the feedback loop that can be tolerated by the following linear time invariant
systems:

1. 1/(s(s + 1)²),
2. 1/(s(s + 1)),
3. 1/((s − 1)(s + 2)²). Note that the open loop system is unstable here.
Problem 6.7 Multivariable circle and Popov criteria [162]. Consider the
multi-input multi-output generalization of the Lur'e system

ẋ = Ax + Bu,
y = Cx + Du, (6.92)
u = −φ(y, t).

The multivariable generalization of the KYP lemma called the PR lemma [5]
states that a transfer function Z(s) ∈ ℝ^{p×p} with minimal realization C(sI −
A)⁻¹B + D is strictly positive real (that is, it is analytic in ℂ₊, satisfies Z*(s) =
Z(s*), and for s ∈ ℂ₊, Zᵀ(s*) + Z(s) is positive definite), if and only if there
exist P, Q symmetric positive definite matrices, W ∈ ℝ^{p×p}, L ∈ ℝ^{p×n}, satisfying

AᵀP + PA = −LᵀL − εQ,
PB − Cᵀ = −LᵀW, (6.94)
D + Dᵀ = WᵀW.
Problem 6.10. For the identifier structure of this chapter, we chose Λ, b_λ to
be in controllable canonical form so as to get w¹, w² to be dirty derivatives of
the input and output. Verify that other choices of Λ, b_λ would also give unique
parameterizations of the parameters a*, b*, for example, filters of the form

1/(s + a), a > 0,

or
Problem 6.11. Consider φ(t) = [cos(log(1 + t)), sin(log(1 + t))]ᵀ ∈ ℝ². Show
that it satisfies the conclusions of Theorem 6.12 on dφ/dt, φ but does not converge.
Can you contrive an example plant P and input u_p(·) for which this is in fact the
parameter error vector?
Problem 6.12 Parameter convergence for the normalized gradient update
law. Prove that if the input u_p is bounded and either the plant is stable or is
embedded in a feedback loop so as to stabilize the output, and the regressor w(t)
is persistently exciting, then the parameter error converges to zero exponentially.
How do you expect the rate of convergence of the normalized gradient update law
to compare with the rate of convergence of the gradient update law?
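The exponential convergence claim is easy to observe numerically. The sketch below (an illustration, not part of the problem; it assumes NumPy and that the parameter error dynamics take the standard gradient form φ̇ = −g w(t)w(t)ᵀφ) uses the persistently exciting regressor w(t) = [cos t, sin t]ᵀ, for which the average of w wᵀ is I/2 and the expected decay rate is g/2:

```python
import numpy as np

# Gradient update law  dphi/dt = -g w(t) w(t)^T phi  with the
# persistently exciting regressor w(t) = [cos t, sin t]^T.
g, dt, T = 1.0, 1e-3, 30.0
phi = np.array([2.0, -1.0])
norms = [np.linalg.norm(phi)]
for k in range(int(T / dt)):        # forward Euler integration
    t = k * dt
    w = np.array([np.cos(t), np.sin(t)])
    phi = phi - dt * g * np.outer(w, w) @ phi
    norms.append(np.linalg.norm(phi))

# For this w the averaged system is phi_dot = -(g/2) phi, so the
# empirical decay rate should be close to g/2 = 0.5.
rate = np.log(norms[0] / norms[-1]) / T
```

Note that w(t)wᵀ(t) is only rank one at each instant; it is the persistency of excitation over a window that produces the uniform exponential decay.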
Problem 6.13 Parameter convergence for the least squares update law. Consider
first the least squares update law of (6.57) with Q = 0 applied to the identifier
of (6.54). Assume that the input u_p is bounded and either the plant is stable or is
embedded in a feedback loop so as to stabilize the output, and the regressor w(t)
is persistently exciting. Now consider the Lyapunov function

v(φ, t) = φᵀ P⁻¹(t) φ.

For the system

φ̇ = −g P(t) w(t) wᵀ(t) φ,
Ṗ = −g P w(t) wᵀ(t) P (6.96)
it follows that v(φ, t) is a p.d.f. but is not decrescent when w is persistently exciting
(verify this!). Now, show that the limit exists uniformly in t. Prove that if the
plant is stable, then the regressor w(·) is also stationary. You may wish to use the
linear filter lemma 4.3 to prove this and establish the connection between the
power spectral density of u_p and w.

Now, show that the persistency of excitation condition for a stationary regressor
w is equivalent to the statement that its autocovariance at 0, R_w(0) ∈ ℝ^{2n×2n}, is
positive definite. Use the inverse Fourier transform formula

R_w(0) = ∫_{−∞}^{∞} S_w(ω) dω

and the formula for S_w(ω) that you have derived above to prove that w is persistently
exciting if and only if S_{u_p}(ω) ≠ 0 at at least 2n points (i.e., that the input signal
is sufficiently rich). You will need to use the coprimeness of the numerator and
denominator polynomials of the plant transfer function in an essential way.
(6.97)

Consider the following observer based on the use of the full state x given by

Give conditions under which this nonlinear observer cum identifier (first proposed
in [172], [168]) has parameter convergence as well as x̂ → x as t → ∞. Apply
this to a model of the induction motor [192] with x = [i_d, i_q, φ_d, φ_q]ᵀ ∈ ℝ⁴, where
i_d, φ_d stand for the direct axis stator current and flux, and i_q, φ_q for the quadrature
axis stator current and flux.
Here

α = R_s/(σL_s),  β = M²/(σL_sL_r²),  σ = 1 − M²/(L_sL_r),

with R_s, L_s standing for stator resistance and inductance and R_r, L_r for rotor
resistance and inductance. M stands for the mutual inductance. The drift vector
field is linear in x,

f(x) = [ −(α + β)      0        β/L_s     ω/(σL_s) ;
             0      −(α + β)  −ω/(σL_s)    β/L_s   ;
          −ασL_s       0          0           ω     ;
             0      −ασL_s       −ω           0     ] x,

and the input vector fields are

g₁(x) = (x₂/(σL_s), −x₁/(σL_s), x₄, −x₃)ᵀ,
g₂(x) = (1/(σL_s), 0, 1, 0)ᵀ,
g₃(x) = (0, 1/(σL_s), 0, 1)ᵀ.
Assume that α, β are unknown parameters. Simulate the system with parameters
α = 27.23, β = 17.697, σ = 0.064, L_s = 0.179. How many input sinusoids did
you need to guarantee parameter convergence?

Repeat this exercise for the normalized gradient update law (6.56), the
least-squares update law (6.57) with Q = 0, and the normalized least-squares update law (6.60).
Compare simulations of the unaveraged and averaged systems for simple choices
of the plant and the other parameters of the identifier. You may be surprised at how
close they are for even moderately large values of ε.
Problem 6.17 Discrete time averaging [15]. Develop a theory of averaging for
discrete time nonlinear systems with a slow update, that is, for the case that

x_{k+1} = x_k + εf(x_k, k)

with ε small. Give conditions under which the evolution of the system may be
approximated by a differential equation.
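The expected answer can be previewed numerically. In the sketch below (an illustration, not part of the problem; NumPy assumed, and the particular f is my own choice), the update has a zero-mean periodic perturbation, so the averaged model is the differential equation dx/dτ = −x in the slow time τ = εk:

```python
import numpy as np

# Slow discrete update  x_{k+1} = x_k + eps * f(x_k, k)  with
# f(x, k) = -x + cos(2*pi*k/7): the cosine term is periodic in k with
# zero mean, so the averaged model is dx/dtau = -x.
eps, x0, K = 0.01, 1.0, 1000
x = x0
for k in range(K):
    x = x + eps * (-x + np.cos(2 * np.pi * k / 7))

x_avg = x0 * np.exp(-eps * K)   # averaged prediction at tau = eps*K
err = abs(x - x_avg)            # should be O(eps)
```

The gap between the iterate and the averaged flow stays of order ε over the slow time scale, which is exactly the kind of estimate the problem asks you to prove.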
Problem 6.18 Circuit dynamics. Sometimes resistors need not be either voltage
or current controlled. Consider the series interconnection of a 1 F capacitor
with a resistor whose i–v characteristic is as shown in Figure 6.17, characterized
by a function ψ(i, v) = 0. Discuss how you would obtain a solution concept for
the circuit equations for this simple circuit, that is,

v̇ = i,
0 = ψ(i, v).
Problem 6.19. Consider the swing equation for the power system with PV loads
of (6.43). Prove that the fast frozen system is a gradient system. Characterize the
stable and unstable equilibria of this system in the limit ε → 0.
Problem 6.20. Stabilize the nonlinear system

ẋ₁ = x₂ − x₁³,
ẋ₂ = u.

Use back-stepping, but note that it may not be necessary to cancel the term x₁³.
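One possible back-stepping design (a sketch of a solution, not the book's; NumPy assumed) takes the virtual control α(x₁) = −x₁ and deliberately does not cancel −x₁³, since that term is already stabilizing. With z₁ = x₁, z₂ = x₂ + x₁ and V = (z₁² + z₂²)/2, the choice u = −z₁ − z₂ − (x₂ − x₁³) gives V̇ = −z₁² − z₁⁴ − z₂²:

```python
import numpy as np

# Back-stepping for x1' = x2 - x1^3, x2' = u with virtual control
# alpha(x1) = -x1 (the term -x1^3 is left uncancelled).
def u(x):
    z1, z2 = x[0], x[1] + x[0]
    return -z1 - z2 - (x[1] - x[0] ** 3)

x = np.array([2.0, -1.0])
h = 1e-3
for _ in range(20000):              # Euler simulation out to t = 20
    x = x + h * np.array([x[1] - x[0] ** 3, u(x)])
```

The simulated state converges to the origin, consistent with the Lyapunov inequality V̇ ≤ −2V.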
Problem 6.21 Adaptive back-stepping. Consider the back-stepping controller
for systems in strict feedback form as described in Section 6.8. Assume that the
nonlinearities f_i(x₁, x₂, …, x_i) may be expressed as W_iᵀ(x₁, x₂, …, x_i)θ*, where
W_i(x₁, x₂, …, x_i) ∈ ℝ^p. Here the functions W_i are known, and the parameters
θ* ∈ ℝ^p are unknown. In the back-stepping controller replace θ* by its estimate
θ̂(t) and derive the equations for the z_i in terms of the parameter errors φ = θ̂ − θ*.
Give a parameter update law for the estimates to guarantee adaptive stabilization.
Use the usual quadratic Lyapunov function.
7
Dynamical Systems and Bifurcations
The zeros of f are the fixed points of the flow φ_t. They are also referred to as
equilibria or stationary solutions. Let x̄ be one such equilibrium and define the
linearization of (7.1) as

ż = Df(x̄)z. (7.2)
Proposition 7.2. The linearization of the flow of (7.1) at x̄ is the flow of the
linearization (7.2), i.e.,

Theorem 7.3 Hartman–Grobman Theorem. Consider the system of (7.1) with
the equilibrium point x̄. If Df(x̄) has no zero or purely imaginary eigenvalues,
there is a homeomorphism h defined on a neighborhood U of x̄ taking orbits of the
flow φ_t to those of the linear flow e^{tDf(x̄)} of (7.2). The homeomorphism preserves
the sense of the orbits and is chosen to preserve parameterization by time.
Remarks:
Example 7.4. Consider the system described by the second order equation

ẍ + εx²ẋ + x = 0.

The linearization has eigenvalues ±j for all ε. However, the origin is attracting
for ε > 0 and repelling for ε < 0. (Prove this using the Lyapunov function
v(x, ẋ) = x² + ẋ².)
The next result builds on the theme of the Hartman–Grobman theorem. We need
the definitions of local insets and local outsets of an equilibrium point x̄:

¹Roughly speaking, that no integer linear combinations of the eigenvalues sum to zero.
7.1 Qualitative Theory
The insets and outsets of this system are as shown in Figure 7.1. Note that the
inset is a closed half space (not a manifold) and the outset a closed half line, the
positive x₂ axis (also not a manifold).

Yet another example pictured in Figure 7.1 is the system

ẋ₁ = x₁² − x₂²,
ẋ₂ = 2x₁x₂.

Here the inset is the plane minus the positive x₁ axis, but including the origin (not a
manifold), and the outset is the positive x₁ axis, also including the origin (also not
a manifold).
In the instance that the equilibrium point x̄ is hyperbolic, the following theorem
establishes that they are manifolds. First define E^s ⊂ ℝⁿ, E^u ⊂ ℝⁿ to be the stable
and unstable eigenspaces of the linear system (7.2), that is, E^s, E^u are the spans
of the eigenvectors and generalized eigenvectors corresponding to the eigenvalues
in ℂ₋, ℂ₊ respectively. Further, let their dimensions be n_s, n_u respectively.
Remarks:
1. The Hartman-Grobman theorem and the stable-unstable manifold theorem are
depicted in Figure 7.2.
2. The manifolds W^s_loc, W^u_loc are both invariant (think about why this may be
true). They are, indeed, the nonlinear analogues of the stable and unstable
eigenspaces, E^s, E^u, of the linearization.

The local invariant manifolds W^s_loc(x̄), W^u_loc(x̄) have global analogues W^s(x̄),
W^u(x̄), which may be defined as follows:

W^s(x̄) = ⋃_{t≤0} φ_t(W^s_loc(x̄)),
W^u(x̄) = ⋃_{t≥0} φ_t(W^u_loc(x̄)).
The existence of unique solutions to the differential equation (7.1) implies that
two stable (or unstable) manifolds of distinct fixed points x̄₁, x̄₂ cannot intersect.
However, stable and unstable manifolds of the same fixed point can intersect and are
in fact a source of much of the complex behavior encountered in dynamical systems.
It is, in general, difficult to compute W^s, W^u in closed form for all but some
textbook examples; for others it is an experimental procedure on the computer. At
any rate, here is an example where we can do the calculation by hand:

ẋ₁ = x₁,
ẋ₂ = −x₂ + x₁².
7.2 Nonlinear Maps
Finally, note that if x₁(0) = 0, then x₁(t) ≡ 0, so that W^s(0) = E^s. Nonlinear
systems possess limit sets other than fixed points, for example limit cycles. It is
easy to study their stability by first studying nonlinear maps.
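For this example the unstable manifold can also be written down: on the parabola x₂ = x₁²/3 the quantity s = x₂ − x₁²/3 satisfies ṡ = −s, so the parabola is invariant. The following sketch (an illustration, not from the text; NumPy assumed) verifies the invariance numerically:

```python
import numpy as np

# x1' = x1, x2' = -x2 + x1^2.  The unstable manifold of the origin is
# the invariant parabola x2 = x1^2 / 3.
def f(x):
    return np.array([x[0], -x[1] + x[0] ** 2])

def rk4_step(x, h):
    k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

x = np.array([0.1, 0.1 ** 2 / 3])   # start on the parabola
for _ in range(2000):               # integrate forward to t = 2
    x = rk4_step(x, 1e-3)

defect = abs(x[1] - x[0] ** 2 / 3)  # distance from the manifold
```

The defect stays at the level of the integration error, while x₁(t) = 0.1 e^t grows along the manifold.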
z_{n+1} = Bz_n.

We define the stable and unstable eigenspaces for B having no eigenvalues of
modulus 1 as follows:

E^s = span{generalized eigenvectors corresponding to eigenvalues of modulus < 1},
E^u = span{generalized eigenvectors corresponding to eigenvalues of modulus > 1}.

If there are no multiple eigenvalues, then the rates of contraction (expansion) in
E^s (E^u) are bounded by geometric series, i.e., there exist c > 0, a < 1 such that
If DG(x̄) has no eigenvalues of unit modulus, its eigenvalues determine the stability
of the fixed point x̄ of G. The linearization theorem of Hartman–Grobman
and the stable–unstable manifold theorem apply to maps just as they do to
flows.

It is important to remember that flows and maps differ crucially in that, while the
orbit or trajectory of a flow φ_t(x) is a curve in ℝⁿ, the orbit Gⁿ(x) is a sequence
of points. (The notation Gⁿ(x) refers to the n-th iterate of x under G, that is,
Gⁿ(x) = G ∘ ⋯ ∘ G(x), n times.) This is illustrated in Figure 7.3.
The insets and outsets of a fixed point are defined as before:
As before, the inset and outset are not necessarily manifolds unless the fixed point
is hyperbolic.
Remark: As for flows, the global stable and unstable manifolds W^s(x̄), W^u(x̄)
are defined by

W^s(x̄) = ⋃_{n≤0} Gⁿ(W^s_loc(x̄)),
W^u(x̄) = ⋃_{n≥0} Gⁿ(W^u_loc(x̄)).
Note that a period-k point is a fixed point of G^k(x). The stability of such a
period-k point p₀ is determined by the linearization DG^k(p₀) ∈ ℝ^{n×n}. By the
chain rule,

Proposition 7.12. If p₀, …, p_{k−1} constitute an orbit of period k, then the set of
eigenvalues of each of the k matrices DG^k(p₀), DG^k(p₁), …, DG^k(p_{k−1}) is the
same.
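Proposition 7.12 is easy to see concretely. In the sketch below (an example of my own construction, not from the text; NumPy assumed), the map G(x, y) = (y, x²) has the explicit period-2 orbit (0, 1) → (1, 0) → (0, 1); the chain rule gives DG²(p₀) = DG(p₁)DG(p₀) and DG²(p₁) = DG(p₀)DG(p₁), two different but similar matrices:

```python
import numpy as np

# G(x, y) = (y, x^2) has the period-2 orbit (0,1) -> (1,0) -> (0,1).
G = lambda p: np.array([p[1], p[0] ** 2])
DG = lambda p: np.array([[0.0, 1.0], [2 * p[0], 0.0]])

p0 = np.array([0.0, 1.0])
p1 = G(p0)
M0 = DG(p1) @ DG(p0)     # DG^2 at p0 (chain rule)
M1 = DG(p0) @ DG(p1)     # DG^2 at p1

ev0 = sorted(np.linalg.eigvals(M0).real)
ev1 = sorted(np.linalg.eigvals(M1).real)   # same spectrum: {0, 2}
```

The two linearizations differ as matrices yet share the eigenvalues {0, 2}, since AB and BA are always similar when A is invertible (and isospectral in general).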
P(q) = φ_{τ(q)}(q),

where τ(q) is the time taken for the orbit φ_t(q) based at q to first return to Σ. The
map is well defined for U sufficiently small, as depicted in Figure 7.4. Note that
in general, τ(q) ≠ T but lim_{q→p} τ(q) = T. (For a proof that the Poincaré map is
well defined see Problem 7.4.)
Now note that p is a fixed point of the Poincaré map P : U → U. It is not
difficult to see that the stability of p for the discrete-time dynamical system or
map P reflects the stability of γ for the flow φ_t. In particular, if p is hyperbolic
and DP(p), the linearized map, has n_s eigenvalues of modulus less than 1 and n_u
eigenvalues of modulus greater than 1 (n_s + n_u = n − 1), then the dimension of
W^s(p) is n_s and the dimension of W^u(p) is n_u. If we define the local insets and
outsets of p in U as

W^s_loc(p) = W^s_loc(γ) ∩ Σ,
W^u_loc(p) = W^u_loc(γ) ∩ Σ.
This is depicted in Figure 7.5. The formal proof of this is somewhat involved but
is in effect that of the Hartman–Grobman theorem. In fact, the Hartman–Grobman
theorem can be generalized to arbitrary invariant manifolds (rather than merely
equilibrium points and closed orbits) and under a hyperbolicity assumption shown
to result in stable and unstable manifolds (this is a celebrated result of Hirsch,
Pugh, and Shub [140]).
ẋ₁ = x₁ − x₂ − x₁(x₁² + x₂²),
ẋ₂ = x₁ + x₂ − x₂(x₁² + x₂²),

which in polar coordinates with r ∈ ℝ₊, θ ∈ S¹ is

ṙ = r(1 − r²),
θ̇ = 1.

Choose as section Σ = {(r, θ) ∈ ℝ₊ × S¹ : r > 0, θ = 0}. The equations above
can be explicitly integrated to give the flow.
Thus

P(r) = (1 + (1/r² − 1)e^{−4π})^{−1/2}

with the first return time τ = 2π for all r. Note that r = 1 is the location of the
limit cycle and that DP(1) = e^{−4π} < 1, so that γ, a circular limit cycle of radius
1, is an attracting closed orbit.
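The closed-form map can be checked directly. The sketch below (an illustration, not from the text; NumPy assumed) integrates the radial equation with RK4 over one return time τ = 2π and compares with the formula, then estimates DP(1) by a difference quotient:

```python
import numpy as np

# Poincare map of r' = r(1 - r^2), theta' = 1 on the section theta = 0.
def P_exact(r):
    return (1.0 + (1.0 / r**2 - 1.0) * np.exp(-4 * np.pi)) ** -0.5

def f(r):
    return r * (1.0 - r**2)

def P_numeric(r, steps=20000):
    h = 2 * np.pi / steps
    for _ in range(steps):          # RK4 on the radial equation
        k1 = f(r); k2 = f(r + h/2*k1); k3 = f(r + h/2*k2); k4 = f(r + h*k3)
        r += h / 6 * (k1 + 2*k2 + 2*k3 + k4)
    return r

gap = abs(P_numeric(0.5) - P_exact(0.5))
# central difference estimate of DP(1) = e^{-4 pi}
dP1 = (P_exact(1 + 1e-6) - P_exact(1 - 1e-6)) / 2e-6
```

The fixed point r = 1 and the tiny derivative e^{−4π} ≈ 3.5 × 10⁻⁶ show how strongly attracting this limit cycle is.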
It is not, in general, easy to compute the Poincaré map, since it involves integrating
the differential equation and computing the first return time experimentally.
However, the eigenvalues of the linearization of the Poincaré map may be computed
by linearizing the flow about the limit cycle γ. This is referred to as the
Floquet technique. The linearization of (7.1) about the flow of γ is

ξ̇ = Df(γ(t))ξ. (7.6)

Since A(t) := Df(γ(t)) is a T-periodic matrix, the equation (7.6) is a periodic
linear system with the following property for its state transition map.
Proposition 7.14. The state transition matrix of (7.6) may be written as

Φ(t, 0) = K(t)e^{Bt}. (7.7)

Now equation (7.7) follows from the definitions. To establish the periodicity of
K(·) note that

K(t + T) = Φ(t + T, 0)e^{−B(t+T)}
         = Φ(t + T, T)Φ(T, 0)e^{−B(t+T)}
         = Φ(t + T, T)e^{−Bt}
         = Φ(t, 0)e^{−Bt}
         = K(t). □
It follows from the proposition that the behavior of the solutions in the neighborhood
of γ is determined by the eigenvalues of e^{TB}. These eigenvalues λ₁, …, λₙ
are called the characteristic, or Floquet, multipliers, and the eigenvalues of B are
referred to as the characteristic exponents of γ. If v ∈ ℝⁿ is the tangent to γ at γ(0), then
v is the eigenvector corresponding to the characteristic multiplier 1. The moduli
of the remaining n − 1 eigenvalues, if none are unity, determine the stability of γ.
Choosing the basis appropriately to make the last column of e^{TB} equal to (0, …, 0, 1)ᵀ,
the matrix DP(p) of the linearized Poincaré map is simply the matrix belonging
to ℝ^{(n−1)×(n−1)} obtained by deleting the n-th row and column of e^{TB}.
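For the planar example above, the Floquet computation can be done numerically: integrate Ξ̇ = Df(γ(t))Ξ, Ξ(0) = I, over one period T = 2π along γ(t) = (cos t, sin t). Since trace Df = −2 on the cycle, the product of the multipliers is e^{−4π}, and one multiplier is always 1, so the other is e^{−4π}, matching DP(1). A sketch (an illustration, not from the text; NumPy assumed):

```python
import numpy as np

# Monodromy matrix of the linearization about the circular limit cycle
# of x1' = x1 - x2 - x1 r^2, x2' = x1 + x2 - x2 r^2.
def Df(t):
    x1, x2 = np.cos(t), np.sin(t)
    return np.array([[1 - 3*x1**2 - x2**2, -1 - 2*x1*x2],
                     [1 - 2*x1*x2, 1 - x1**2 - 3*x2**2]])

steps = 20000
h = 2 * np.pi / steps
Xi = np.eye(2)
for k in range(steps):              # RK4 for the matrix equation
    t = k * h
    k1 = Df(t) @ Xi
    k2 = Df(t + h/2) @ (Xi + h/2*k1)
    k3 = Df(t + h/2) @ (Xi + h/2*k2)
    k4 = Df(t + h) @ (Xi + h*k3)
    Xi = Xi + h/6 * (k1 + 2*k2 + 2*k3 + k4)

mults = sorted(np.abs(np.linalg.eigvals(Xi)))   # Floquet multipliers
```

The computed multipliers are {e^{−4π}, 1}, the unit multiplier corresponding to the direction tangent to the orbit.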
7.3 Closed Orbits, Poincaré Maps, and Forced Oscillations
ẋ = f(x, t),
ṫ = 1

linearization of the Poincaré map of the periodic orbit, DP(x̄), has k eigenvalues
inside the unit disk.
Proof: We give only a formal proof; a more rigorous proof is not very involved,
but it needs an application of the implicit function theorem in function spaces.
For the formal proof let us postulate that the solution which we are looking for
is of order ε. If a periodic solution of the form x̄ + εy(t) + h.o.t. were to exist,
then using this in the right hand side of the equation above with the Taylor series
expansion

f(x̄ + εy(t)) = εDf(x̄)y(t) + h.o.t.

and equating terms of order ε yields² the following linear equation for y(t):

(I − e^{AT})y(0) = ∫₀ᵀ e^{A(T−τ)}g(x̄, τ)dτ,

so that

y(0) = (I − e^{AT})⁻¹ ∫₀ᵀ e^{A(T−τ)}g(x̄, τ)dτ.

The time-T image of an initial condition ε(y(0) + δ(0)) is

ε(e^{AT}(y(0) + δ(0)) + ∫₀ᵀ e^{A(T−τ)}g(x̄, τ)dτ) = ε(y(0) + e^{AT}δ(0)).

Thus the Poincaré map P has the form

P(εδ(0)) = e^{AT}εδ(0) + h.o.t. in ε.
Thus the linearization of the Poincaré map for ε small has eigenvalues close to the
eigenvalues of e^{AT} (close in the sense that they differ by terms of order at most
ε). Thus the number of eigenvalues of the linearization of the Poincaré map inside
the unit disk is equal to the number of eigenvalues of A in ℂ₋. The statement of
the proposition thus follows. □

²This is the part of the proof that is formal, though apparently plausible, since it involves
equating terms of an asymptotic series. It can be made rigorous using the implicit function
theorem.
Figure 7.7 shows a forced system with a period-2 point. Let us pursue this point
in some detail in the context of the forced oscillation of a planar system as in
Problem 7.5 (the reader may wish to work his way through the problem now).
We consider the case of δ = 0 in equation (7.72). Our treatment follows [329]
closely. As is pointed out in the problem, there is no difficulty with computing the
Poincaré map explicitly in this linear case. The Poincaré map is well defined so
long as ω ≠ ω₀, that is, the frequency of forcing is different from the frequency of
oscillation. In this case it should not be surprising that the solution breaks down
into a superposition of solutions of frequency ω and ω₀. We will explore this case
FIGURE 7.8. The Poincaré map for a linear forced system.
now. First verify from Problem 7.5 that the Poincaré map is given by a rotation
plus the offset

( A(1 − cos(2πω₀/ω)), ω₀A sin(2πω₀/ω) )ᵀ.
x(t) = A cos(ωt),
y(t) = −Aω sin(ωt).

In the figure, we have shown the projection of this orbit on the x–y plane at t = 0
as well as the rotation of this orbit through 2π/ω seconds. The projection of
the orbit on the x–y plane is a circle, so that the rotation of the orbit through 2π/ω
seconds is a torus. A quick calculation with the Poincaré map of (7.11) reveals that
the torus is invariant. As we saw in Chapter 3, a convenient way to draw the torus
is as a rectangle with opposite edges identified. The orbit on the torus as shown
in Figure 7.9 is a line joining diagonal edges. The two axes are labeled θ for the
rotation in the x–y plane and φ for the revolution in the t direction. Both of these
angles are normalized to 2π. We now discuss subtleties of the trajectories on this
invariant torus:
Subharmonics of order m

Suppose that ω = mω₀ for some integer m > 1. From an inspection of the
Poincaré map of (7.11) it is clear that for (x, y) ≠ (A, 0) all points are period m
points. Thus for m rotations in the θ direction we have one revolution in the φ
direction. This trajectory is shown on the torus flattened out into a square in Figure
7.10.
7.3 Closed Orbits, Poincare Maps, and Forced Oscillations 30 I
Ultraharmonics of order n

Suppose that nω = ω₀ for some n > 1. Now verify from equation (7.11) that every
point is a fixed point of the Poincaré map. It is interesting to see that all trajectories
complete n revolutions in the φ variable before they complete one rotation in the
θ variable. An example for n = 2 is shown in Figure 7.11.

Ultrasubharmonics of order m, n

Suppose that mω = nω₀ for some relatively prime integers m, n > 1. Then all
points except (A, 0)ᵀ are period m points with m rotations along with n revolutions
302 7. Dynamical Systems and Bifurcations
FIGURE 7.12. An ultrasubharmonic of order 2, 3.
Quasiperiodic response

Finally assume that the ratio ω/ω₀ is irrational. This corresponds to a case
where only the point (A, 0)ᵀ is a fixed point of the Poincaré map. None of
the other points are periodic. Further, when one draws the trajectories on the
torus, one sees that any trajectory does not close on itself; it appears to wind
around indefinitely. It is possible to show that all trajectories wind around densely
on the surface of the torus. (This is an interesting exercise in analysis.) Such
trajectories are called irrational winding lines, and since each trajectory is dense
on the torus, the limit set of each trajectory is the whole torus. The trajectories
are called quasiperiodic, since approximating the irrational number arbitrarily
closely by a rational number will result in an ultrasubharmonic oscillation.
We have seen how complicated the dynamics can be for a forced linear system
when there is resonance and sub/ultra-harmonic periodic orbits. This is more
generally true. Not only is the study of the dynamics of these systems subtle and
complex, but the presence of subharmonic orbits in a forced nonlinear system is
often a symptom that parametric variation in the model will cause complicated
dynamics.

Since the definition of the Poincaré map for a forced system of the form of (7.8)
relies on a knowledge of the flow of the forced system, it is not easy to compute
unless the differential equation is easy to integrate. However, perturbation and
averaging methods have often been used to approximate the map in several cases.
we will give a precise definition and then explain its limitations. The first notion
to quantify is the closeness of two maps. Given a C^k map F : ℝⁿ → ℝⁿ, we say
that a C^r map G is a C^r ε-perturbation of F if there exists a compact set K such
that F(x) = G(x) for x ∈ ℝⁿ − K and for all i₁, …, iₙ with i₁ + ⋯ + iₙ = r ≤ k
we have

|∂^{i₁+⋯+iₙ}(F − G)/∂x₁^{i₁} ⋯ ∂xₙ^{iₙ}| < ε.

Intuitively, G is a C^k ε-perturbation of F if it and all its first k derivatives are ε
close. We then have the following definition:
Definition 7.17 C^k Equivalence. Two C^r maps F and G are C^k equivalent for
k ≤ r if there exists a C^k diffeomorphism h such that

h ∘ F = G ∘ h.

C⁰ equivalence is referred to as topological equivalence.

The preceding definition implies that h takes orbits of F, namely {Fⁿ(x)}, onto
orbits of G, namely {Gⁿ(x)}. For vector fields we define orbit equivalence as
follows:
Definition 7.18 Orbit Equivalence. Two C^r vector fields f and g are C^k orbit
equivalent if there is a C^k diffeomorphism h that takes orbits φ_t^f(x) of f onto
orbits φ_t^g(x) of g preserving sense but not necessarily parameterization by time.
Thus we have that given t₁ there exists t₂ such that

h(φ_{t₁}^f(x)) = φ_{t₂}^g(h(x)).

Also, C⁰ orbit equivalence is referred to as topological orbit equivalence.
We may now define a robustness concept for both vector fields and flows, referred
to as structural stability.
Definition 7.19 Structural Stability. A C^r map F : ℝⁿ → ℝⁿ (respectively, a
C^r vector field f) is said to be structurally stable if there exists ε > 0 such that
all C¹ ε-perturbations of F (respectively f) are topologically (respectively topologically
orbitally) equivalent to F (respectively f).
At first sight it might appear that the use of topological equivalence is weak,
and we may be tempted to use C^k equivalence with k > 0. It turns out that C^k
equivalence is too strong: In particular, it would force the linearizations of the map
(vector field) and its perturbation about their fixed (equilibrium) points to have
exactly the same eigenvalue ratio. For example, the systems

ẋ = x  and  ẋ = [1 0; 0 1 + ε] x

are not C^k orbit equivalent for any k ≥ 1, since x₂ = c₁x₁ is not diffeomorphic
to x₂ = c₂|x₁|^{1+ε} at the origin. Note, however, that C⁰ equivalence does not
distinguish between, for example, stable nodes and stable foci: the systems ẋ = Ax
with

A = [−1 0; 0 −2],  A = [−1 0; 0 −1],  and  A = [−3 −1; 1 −3]

are all topologically equivalent.
ẋ = Ax

is structurally stable if A has no eigenvalues on the jω axis. The map

z_{n+1} = Bz_n

is structurally stable if B has no eigenvalues on the unit circle. Thus, it follows that a
map (or vector field) possessing even one non-hyperbolic fixed point (equilibrium
point) cannot be structurally stable, since the fixed point can either be lost under
perturbation or be turned into a hyperbolic sink, saddle, or source (see the section
on bifurcations for an elaboration of this point). Thus, a necessary condition for
structural stability of maps and flows is that all fixed points and closed orbits be
hyperbolic.
As promised at the start of this section, we will discuss the limitations of the
definition of structural stability using as an example a linear harmonic oscillator.
Elementary considerations yield that trajectories other than the origin are closed
circular orbits. Let us now add small C^∞ perturbations to this nominal system:

Linear Dissipation. Consider the perturbed system

It is easy to see that for ε ≠ 0 the origin is a hyperbolic equilibrium point that is
stable for ε < 0 and unstable for ε > 0. The trajectories of this are not equivalent
to those of the oscillator, but perhaps this is not surprising, since the unperturbed
system did not have a hyperbolic equilibrium point. However, if the perturbed
system were required to be Hamiltonian, then the dissipation term would not be
an allowable perturbation.
Nonlinear Perturbation. Consider the perturbed system

ẋ₁ = −x₂,
ẋ₂ = x₁ + εx₁².

This system now has two equilibria, at (0, 0) and at (−1/ε, 0). The origin is still a
center, and the other equilibrium point is a saddle, which is far away for ε small.
The perturbed system is still Hamiltonian, with first integral given by

H(x₁, x₂) = x₁²/2 + x₂²/2 + εx₁³/3.

The phase portrait of this system as shown in Figure 7.13 has
a continuum of limit cycles around the origin merging into a saddle connection for
the saddle at (−1/ε, 0). The point of this example is to show that though ε is small,
for large values of x₁ the perturbation is substantial and the phase portrait changes
substantially.
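That H is a first integral follows from dH/dt = x₁ẋ₁ + x₂ẋ₂ + εx₁²ẋ₁ = −x₁x₂ + x₂(x₁ + εx₁²) − εx₁²x₂ = 0, and it can be confirmed numerically. A sketch (an illustration, not from the text; NumPy assumed):

```python
import numpy as np

# Perturbed oscillator x1' = -x2, x2' = x1 + eps*x1^2 conserves
# H = x1^2/2 + x2^2/2 + eps*x1^3/3.
eps = 0.1
f = lambda x: np.array([-x[1], x[0] + eps * x[0] ** 2])
H = lambda x: x[0]**2/2 + x[1]**2/2 + eps * x[0]**3/3

x = np.array([1.0, 0.0])            # well inside the separatrix
H0 = H(x)
h = 1e-3
for _ in range(20000):              # RK4 integration to t = 20
    k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
    x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

drift = abs(H(x) - H0)              # energy drift of the integrator
```

The orbit through (1, 0) lies on a closed level set of H well inside the separatrix of the saddle at (−1/ε, 0) = (−10, 0), so the trajectory stays bounded and H is constant up to integration error.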
The conclusions to be drawn from the preceding examples are the following:

FIGURE 7.14. (i) A homoclinic orbit or saddle loop, (ii) a double saddle loop, (iii) a
homoclinic cycle involving two saddles, and (iv) a homoclinic cycle involving three saddles.

It was shown by Andronov [122] that nonwandering points in ℝ² may be classified
as:

1. Fixed points.
2. Closed orbits.
3. Unions of fixed points (saddles) and trajectories connecting them.

Sets of the last kind in the foregoing list are called heteroclinic orbits when they
connect distinct saddles and homoclinic orbits when they connect a saddle to
itself. Closed paths made up of heteroclinic orbits are called homoclinic cycles.
Figure 7.14 shows four different kinds of homoclinic cycles. Dream up differential
equations which generate these phase portraits (see Exercises). Since saddle connections
are formed by the coincidence of stable and unstable manifolds of saddles,
it is intuitive that they are not structurally stable. An important theorem of Peixoto
characterizes structural stability in ℝ². In its original form it was stated for flows
on compact two-dimensional manifolds. To apply it to ℝ² we assume that the flow
points inwards on the boundary of a compact disk D ⊂ ℝ².
Theorem 7.21 Peixoto's Theorem. A vector field (or map) on ℝ² whose flow
is eventually confined to a compact disk D ⊂ ℝ² is structurally stable iff

1. The number of fixed points and closed orbits is finite and they are hyperbolic.
2. There are no orbits connecting saddle points.
7.5 Structurally Stable Two Dimensional Flows
FIGURE 7.15. Structurally unstable flows in the plane: (i) a saddle loop, (ii) a nonhyperbolic
fixed point, (iii) a nonhyperbolic limit cycle, and (iv) a continuum of equilibria.
When μ = 0, the system has two saddles at (0, ±1) and a heteroclinic orbit
connecting the two saddles (see Figure 7.16). For μ ≠ 0 the saddles are perturbed
to (±μ, ±1) + O(μ²) and the heteroclinic orbit disappears. It is useful to see that
the system has an odd symmetry, that is, that the equations are unchanged if x₁, x₂
are replaced by −x₁, −x₂. Notice, however, the marked difference in the behavior
of trajectories with initial conditions starting between the separatrices of the two
saddles for μ < 0 and μ > 0.
Example 7.23 Appearance and Disappearance of Periodic Orbits from
Homoclinic Connections. Consider the system

ẋ₁ = x₂,
ẋ₂ = −x₁ + x₁² + μx₂. (7.13)

The phase portraits of Figure 7.17 show two equilibria, at (0, 0) and (1, 0). The
equilibrium at (0, 0) is a stable focus for μ < 0 and an unstable focus for μ > 0.
At μ = 0 the system is Hamiltonian, and the origin is a center surrounded by a
continuum of limit cycles bounded by a homoclinic orbit of the saddle (1, 0).
Example 7.24 Emergence of a Limit Cycle from a Homoclinic Orbit. Figure
7.18 shows an example of a finite period limit cycle emerging from a saddle
connection. An example of this was shown in Chapter 2, when a finite period limit
cycle was born from a homoclinic orbit on ℝ × S¹ (note that the saddles that are
2π radians apart are coincident on ℝ × S¹).
ẋ₁ = x₂ − x₁(x₁² + x₂²),
ẋ₂ = −x₁ − x₂(x₁² + x₂²).
3. For the following system show that the x₁, x₂ axes and the lines x₂ = ±x₁ are
invariant, and draw the phase portraits in these invariant sets:

ẋ₁ = x₁³ − 3x₁x₂²,
ẋ₂ = −3x₁²x₂ + x₂³.
To some analysts, a system truly manifests nonlinear behavior only when its
linearization is not hyperbolic. For hyperbolic equilibria and maps, we had stable and
unstable manifold theorems. In the next sections, we discuss their counterparts for
non-hyperbolic equilibria.
ẋ₁ = x₁²,
ẋ₂ = −x₂, (7.14)

as shown in Figure 7.19. Note that for x₁ < 0 the solution curves of (7.14) approach
0 "flatly," that is, all derivatives vanish at the origin. For x₁ ≥ 0 the only solution
curve approaching the origin is the x₁ axis. We can thus obtain a smooth manifold
by piecing together any solution curve in the left half plane along with the positive
half of the x₁ axis.
Let us study the implications of this theorem in the instance that A has eigenvalues
only in ℂ̄₋, the closed left half plane. Further, we will assume that the
equilibrium point x̄ is at the origin. Then, one can use a change of coordinates to
represent the nonlinear system in the form

ẏ = By + g(y, z),
ż = Cz + h(y, z). (7.15)

Here B ∈ ℝ^{n_s×n_s} with eigenvalues in ℂ₋, C ∈ ℝ^{n_c×n_c} with eigenvalues on the jω
axis, and g, h are smooth functions that along with their derivatives vanish at 0.
In fact, the way that one would transform the original x system into the system of
(7.15) is by choosing a change of coordinates of the form

(y, z) = Tx

such that
7.6 Center Manifold Theorems
Thus, we can approximate π(z) arbitrarily closely by series solutions. The convergence
of the asymptotic series is not guaranteed, and further, even in the instance
of convergence, as in the Kelley example above, the convergent series is not guaranteed
to converge to the correct value. However, this is not often the case, and the
series approximation is valuable.
ẏ = −y + az²,
ż = zy.
Equation (7.16) specializes to

π′(z)(zπ(z)) + π(z) − az² = 0.

Choose a Taylor series π(z) = c₂z² + c₃z³ + c₄z⁴ + ⋯. Using this in
the preceding equation and setting the coefficient of z² to zero gives c₂ = a, so the
order-2 solution is φ(z) = az². Further, setting the coefficients of z³ and z⁴ to zero yields
c₃ = 0, c₄ = −2a², and the order-4 solution is φ(z) = az² − 2a²z⁴. Thus, the flow on
the center manifold is given up to terms of order 5 by using the order-2 solution
as

ż = zφ(z) = az³.
The stability of the flow on the center manifold is then determined by the sign of a. The flows
for the two cases are given in Figure 7.20.
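The order-by-order matching just carried out is mechanical and easy to automate. The sketch below (the helper names and coefficient labels c₂, c₃, c₄ are ours) solves π′(z)(zπ(z)) + π(z) − az² = 0 by equating powers of z and recovers the coefficients found above:

```python
# Center-manifold series matching for  y' = -y + a*z**2,  z' = z*y.
# Substituting y = pi(z) gives pi'(z)*(z*pi(z)) + pi(z) - a*z**2 = 0;
# equating powers of z determines the coefficients one order at a time.

def poly_mul(p, q):
    """Multiply two polynomials stored as {power: coefficient} dicts."""
    r = {}
    for i, ci in p.items():
        for j, cj in q.items():
            r[i + j] = r.get(i + j, 0) + ci * cj
    return r

def center_manifold_coeffs(a, order=4):
    """Solve for pi(z) = c2*z**2 + ... + c_order*z**order term by term."""
    coeffs = {}
    for n in range(2, order + 1):
        # coefficient of z**n in the residual pi'*(z*pi) + pi - a*z**2,
        # built from the already-known lower-order coefficients
        dpi = {k - 1: k * c for k, c in coeffs.items()}
        cubic = poly_mul(poly_mul(dpi, {1: 1}), coeffs)
        res_n = cubic.get(n, 0) - (a if n == 2 else 0)
        coeffs[n] = -res_n          # the linear pi term contributes c_n
    return coeffs
```

Running `center_manifold_coeffs(a)` reproduces c₂ = a, c₃ = 0, c₄ = −2a², matching the order-4 solution φ(z) = az² − 2a²z⁴.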
ẏ = B(μ)y + g(y, z, μ),
ż = C(μ)z + h(y, z, μ),  (7.21)
7. Dynamical Systems and Bifurcations
g(0, 0, 0) = 0,  Dg(0, 0, 0) = 0,
h(0, 0, 0) = 0,  Dh(0, 0, 0) = 0.
The same assumptions as in the unparameterized case hold for B, C, with the
additional assumption that at μ = 0 the eigenvalues of C lie on the jω axis and those of B in C⁰₋.
The way to deal with this situation is to make μ a state variable as follows (in a
technique known as suspension):

μ̇ = 0.
Consider the equilibrium point of this system at y = 0, z = 0, μ = 0. There are
n_o + p eigenvalues of the linearization on the jω axis and n_s of them in C⁰₋. Thus,
a center manifold is a graph from the z, μ space to the y space of the form π(z, μ),
with the dynamics on the center manifold given by
Continuing with the analogy to the system of (7.15), consider the discrete-time
system
FIGURE 7.21. Bifurcation diagram of a cubic equation; the solid line indicates sinks and
the dashed line indicates sources.
The definition is not very satisfactory, since it impels one to study the detailed
fine structure of flows for which complete descriptions do not exist, namely, the
structurally unstable flows. Consequently, construction of a systematic bifurcation theory leads to very difficult technical questions. However, we do wish to
study more than merely the bifurcation of equilibria (so called static bifurcations).
Another point to note is that a point of bifurcation need not actually represent a
change in the topological equivalence class of a flow. For example, the system
ẋ = −(μ²x + x³) has a bifurcation value μ = 0, but all of the flows in the family
have an attracting equilibrium at x = 0. The changes in the topological characteristics of the flow are represented as loci in the (x, μ) space
of the invariant sets of (7.21), for example fixed points and periodic orbits. These are
referred to as bifurcation diagrams.
7.8 Bifurcations of Equilibria of Vector Fields
This gives rise to the so called fold bifurcation shown in Figure 7.22, with two
equilibrium points for μ ≥ 0 and none for μ < 0. Thus, μ = 0 is a point at
which two equilibrium points coalesce, producing none for μ < 0.
FIGURE 7.22. Fold bifurcation
FIGURE 7.24. Pitchfork bifurcation
If the second term is positive, we have the situation of Figure 7.22, with two
equilibria for μ > 0; if it is negative, we have two equilibria for μ < 0, which
would also be a saddle node or fold bifurcation. Further, requiring that
∂g/∂μ(0, 0) ≠ 0

implies the existence of μ(z) near z = 0, μ = 0. We also have that

g(0, 0) = 0,  ∂g/∂z(0, 0) = 0.  (7.30)
Differentiating the equation

g(z, μ(z)) = 0  (7.31)

yields (see Problem 7.14)

dμ/dz(0) = − (∂g/∂z)(0, 0) / (∂g/∂μ)(0, 0),  (7.32)

and

d²μ/dz²(0) = − (∂²g/∂z²)(0, 0) / (∂g/∂μ)(0, 0).  (7.33)
The term in (7.32) is zero from the second equation in (7.30). Thus, the system
of (7.29) goes through a fold bifurcation if it is nonhyperbolic (satisfies (7.30))
and in addition,

∂g/∂μ(0, 0) ≠ 0,
∂²g/∂z²(0, 0) ≠ 0.  (7.34)
If one were to write the Taylor series of g(z, μ) in the form

g(z, μ) = a₀μ + a₁z² + a₂μz + a₃μ² + terms of order 3,  (7.35)
then the conditions (7.34) imply that a₀, a₁ ≠ 0, so that the normal form³ for
the system (7.29) is

ż = μ ± z².
As far as the original system of (7.27) is concerned, with μ ∈ ℝ, if the parameterized center manifold system g(z, μ) satisfied the conditions of (7.34), the
original system would have a saddle node bifurcation, meaning that a
saddle and a node (equilibrium points of index +1 and −1) fuse together and
annihilate each other.
2. Transcritical Bifurcation
In order to get an unbifurcated branch of equilibria at z = 0, we will need to
explicitly assume that

g(z, μ) = zG(z, μ),

where

G(z, μ) = g(z, μ)/z for z ≠ 0, and G(0, μ) = ∂g/∂z(0, μ).
The derivatives of G are related to those of g by

∂G/∂z(0, 0) = (1/2) ∂²g/∂z²(0, 0),
∂²G/∂z²(0, 0) = (1/3) ∂³g/∂z³(0, 0),  (7.36)
∂G/∂μ(0, 0) = ∂²g/∂z∂μ(0, 0).
If the last expression is not equal to zero, there exists a function μ(z) defined
in a neighborhood of 0 such that

G(z, μ(z)) = 0.

To guarantee that μ(z) ≢ 0 we require that

dμ/dz(0) ≠ 0.
³This usage is formal: There exists a transformation that will transform the system of
(7.29) into the normal form, but the details of this transformation are beyond the scope of
this book.
∂²g/∂z∂μ(0, 0) ≠ 0,  (7.38)
∂²g/∂z²(0, 0) ≠ 0.
In terms of the Taylor series of g(z, μ) of (7.35), the conditions (7.38) imply
that a₀ = 0 and a₁, a₂ ≠ 0, so that the normal form for the system (7.29) is

ż = μz ± z².
As far as the original system of (7.27) is concerned, with μ ∈ ℝ, if the parameterized center manifold system g(z, μ) satisfied the conditions of (7.38), the
original system would have a transcritical bifurcation, meaning that a
saddle and a node exchange stability type.
3. Pitchfork Bifurcation
In order to get an unbifurcated branch at z = 0, we will need to assume, as in
the transcritical bifurcation case above, that

g(z, μ) = zG(z, μ),

with the same definition of G(z, μ) as above. To guarantee the pitchfork we will
need to give conditions on G(z, μ) to make sure that it has a fold bifurcation,
since the pitchfork is a combination of an unbifurcated branch and a fold. We
leave the reader to work out the details in Problem 7.16 that the conditions for
a pitchfork are the nonhyperbolicity conditions of (7.30) along with
∂g/∂μ(0, 0) = 0,
∂²g/∂z²(0, 0) = 0,
∂²g/∂z∂μ(0, 0) ≠ 0,  (7.39)
∂³g/∂z³(0, 0) ≠ 0.
In terms of the Taylor series of g(z, μ) of (7.35), the conditions (7.39) imply
that a₀ = a₁ = 0 and a₂ ≠ 0, so that the normal form for the system (7.29) is

ż = μz ± z³.
As far as the original system of (7.27) is concerned, with μ ∈ ℝ, if the parameterized center manifold system g(z, μ) satisfied the conditions of (7.39),
the original system would have a pitchfork bifurcation, meaning that
either two saddles and a node coalesce into a saddle, or two nodes and a saddle
coalesce into a node.
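The three scalar normal forms just derived can be compared side by side. The sketch below (function names are ours; one sign is chosen from each ± pair) lists the equilibria of ż = μ − z², ż = μz − z², and ż = μz − z³, and classifies each equilibrium by the sign of the linearization:

```python
import math

# Equilibria and stability for representatives of the three scalar
# normal forms: fold, transcritical, and pitchfork.

def equilibria_fold(mu):            # z' = mu - z**2
    return [] if mu < 0 else sorted({-math.sqrt(mu), math.sqrt(mu)})

def equilibria_transcritical(mu):   # z' = mu*z - z**2
    return sorted({0.0, mu})

def equilibria_pitchfork(mu):       # z' = mu*z - z**3
    if mu <= 0:
        return [0.0]
    return [-math.sqrt(mu), 0.0, math.sqrt(mu)]

def stability(dfdz):
    """Sign of the linearization at an equilibrium: sink or source."""
    return "sink" if dfdz < 0 else "source" if dfdz > 0 else "degenerate"

# Fold: two equilibria coalesce and annihilate as mu decreases through 0.
# Transcritical: the branches z = 0 and z = mu cross and exchange
# stability at mu = 0.
# Pitchfork: a sink at 0 sheds two sinks and turns into a source as mu
# increases through 0 (the supercritical sign choice above).
```

For the pitchfork, for instance, the linearization at z = 0 is μ itself, so the origin flips from sink to source exactly at the bifurcation value.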
ẋ = f(x, μ)  (7.40)

with α(μ₀) = 0. As in the previous section, to keep the notation simple we will
(without loss of generality) set μ₀ = 0. To be able to reduce this system to a normal
form, it is convenient to conceptualize the system of (7.40) on the complex plane
z = z₁ + jz₂. It can then be shown (again, the details of this are beyond the scope
of this book; the interested reader may wish to get the details from [329]) that a
normal form for the system is of the form
Close to μ = 0, using the leading terms in the Taylor series for α(μ), a(μ), b(μ),
equation (7.42) has the form
where ′ refers to differentiation with respect to μ. Neglecting the higher-order
terms gives the system

ṙ = dμr + ar³,
θ̇ = ω + cμ + br²,  (7.44)

where d = α′(0), a = a(0), ω = ω(0), c = ω′(0), b = b(0). It follows readily
that for dμ/a < 0 and μ small, the system of (7.44) has a circular periodic orbit of
radius

r = √(−dμ/a)

and frequency ω + (c − bd/a)μ. Furthermore, the periodic orbit is stable for a < 0
and unstable for a > 0. Indeed, this is easy to see from the pitchfork for the
dynamics of r embedded in (7.44). We will now classify the four cases depending
on the signs of a, d. We will need to assume them both to be non-zero. If either
of them is zero, the bifurcation is more complex and is not referred to as the Hopf
bifurcation.
1. Supercritical Hopf Bifurcation. Supercritical Hopf (more correctly Poincaré-Andronov-Hopf) bifurcations refer to the case in which a stable periodic orbit
is present close to μ = 0.
a. d > 0, a < 0. The origin is stable for μ < 0, unstable for μ > 0, and
surrounded by a stable periodic orbit for μ > 0. This situation is shown in
Figure 7.25.
b. d < 0, a < 0. The origin is unstable for μ < 0 and stable for μ > 0. The
origin is surrounded by a stable periodic orbit for μ < 0.
2. Subcritical Hopf Bifurcation. Subcritical Hopf bifurcations refer to the case
in which an unstable periodic orbit is present close to μ = 0.
a. d > 0, a > 0. The origin is stable for μ < 0 and unstable for μ > 0. In
addition, the origin is surrounded by an unstable periodic orbit for μ < 0.
b. d < 0, a > 0. The origin is unstable for μ < 0, stable for μ > 0, and
surrounded by an unstable periodic orbit for μ > 0. This is shown in Figure
7.26.
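The orbit radius predicted by the radial equation ṙ = dμr + ar³ of (7.44) is easy to confirm numerically. A minimal sketch (our own forward-Euler integrator; d = 1, a = −1 chosen so that the supercritical case d > 0, a < 0 applies):

```python
import math

# Integrate the radial part r' = d*mu*r + a*r**3 of the Hopf normal form
# and check that trajectories settle onto the circle r = sqrt(-d*mu/a).
# Forward Euler with a small step is enough here, since the equilibria of
# the Euler map coincide with those of the flow.

def settle_radius(mu, d=1.0, a=-1.0, r0=0.05, dt=1e-3, steps=200_000):
    r = r0
    for _ in range(steps):
        r += dt * (d * mu * r + a * r**3)
    return r

# For mu > 0 the trajectory locks onto the stable periodic orbit of
# radius sqrt(-d*mu/a); for mu < 0 it decays to the stable origin.
```

With μ = 0.1 the iteration settles at r ≈ √0.1, the radius of the bifurcating stable orbit; with μ = −0.1 it decays to the origin, exactly as case (a) of the supercritical Hopf bifurcation describes.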
It may be verified using the Poincaré-Bendixson theorem that the conclusions
derived on the basis of the system (7.44) also hold for the original system (7.43)
with the higher order terms present, for μ small. The signs of the two numbers a, d
determine the type of bifurcation. The value a = a(0) is referred to as the curvature
coefficient (see Marsden and McCracken [201] for a formula for a in terms of the
original system f(x, μ)). The value a < 0 corresponds to the supercritical Hopf
bifurcation and a value a > 0 to a subcritical Hopf bifurcation. As noted before,
when a = 0, the bifurcation may be more complicated. The value

d = (dα/dμ)(0)
FIGURE 7.25. Supercritical Hopf bifurcation: a stable periodic orbit and an unstable
equilibrium arising from a stable equilibrium point
FIGURE 7.26. Subcritical Hopf bifurcation: a stable equilibrium and an unstable periodic
orbit fusing into an unstable equilibrium point
represents the rate at which the real part of the eigenvalue of the linearization
crosses the jω axis. If d > 0 the eigenvalues cross from C₋ to C₊ as μ increases,
and conversely for d < 0.
Consider the parameterized family of maps

x ↦ G(x, μ).  (7.45)
7.9 Bifurcations of Maps
Suppose that (7.45) has a fixed point at x(μ). The associated linear map at μ₀ is
given by

x ↦ D₁G(x(μ₀), μ₀)x.

If the eigenvalues of the matrix D₁G(x(μ₀), μ₀) do not lie on the boundary of
the unit disk, the Hartman-Grobman theorem yields conclusions about the stable and unstable manifolds of the equilibrium. When there are eigenvalues of
D₁G(x(μ₀), μ₀) on the boundary of the unit disk, the fixed point is not hyperbolic
and the parameterized center manifold may be used to derive a reduced dimensional
map on the center manifold
In the remainder of this section we will assume μ ∈ ℝ and in addition discuss the
simplest cases when eigenvalues of the linearization of (7.45) cross the boundary
of the unit disk:
1. Exactly one eigenvalue of D₁G(x(μ₀), μ₀) is equal to 1, and the rest do not
have modulus 1.
2. Exactly one eigenvalue of D₁G(x(μ₀), μ₀) is equal to −1, and the rest do not
have modulus 1.
3. Exactly two eigenvalues of D₁G(x(μ₀), μ₀) are complex and have modulus 1,
and the rest do not have modulus 1.
with z, μ ∈ ℝ. We will assume without loss of generality that μ₀ = 0 and that the
linearization of this system at the fixed point z = 0 has a single eigenvalue at 1;
thus, we have

G(0, 0) = 0,  ∂G/∂z(0, 0) = 1.  (7.48)
This system has two fixed points for either μ < 0 or μ > 0, and none for μ > 0 or
μ < 0, respectively, depending on the sign of the z² term in (7.49). For a general system
(7.47), the same techniques as in the continuous time (vector field) case of the previous
section can be used to verify that (7.47) undergoes a saddle node bifurcation if

∂G/∂μ(0, 0) ≠ 0,  ∂²G/∂z²(0, 0) ≠ 0.  (7.50)
Transcritical Bifurcation
The prototype map here is of the form

G(z, μ) = z + μz ± z².  (7.51)

There are two curves of fixed points passing through the bifurcation point, namely
z = 0 and z = ∓μ. The more general system of (7.47) undergoes a transcritical
bifurcation if
∂G/∂μ(0, 0) = 0,
∂²G/∂z∂μ(0, 0) ≠ 0,  (7.52)
∂²G/∂z²(0, 0) ≠ 0.
The proof of this is delegated to Problem 7.18.
Pitchfork Bifurcation
Consider the family of maps given by

G(z, μ) = z + μz ± z³.  (7.53)

There are two curves of fixed points, namely z = 0 and μ = ∓z². The system
of (7.47) undergoes a pitchfork bifurcation if, in addition to (7.48), we have (see
Problem 7.19):
∂G/∂μ(0, 0) = 0,
∂²G/∂z²(0, 0) = 0,
∂²G/∂z∂μ(0, 0) ≠ 0,  (7.54)
∂³G/∂z³(0, 0) ≠ 0.
(7.56)

we see that the map G²(z, μ) has a pitchfork at μ = 0, with three fixed points for
μ > 0 at z = 0, z = ±√(μ(2 + μ)/2). Fixed points of G² are period-two points of
the original system of (7.55). This situation is shown in Figure 7.27. Returning
to the general one parameter family of maps (7.47), the conditions for having an
eigenvalue at −1 are that

G(0, 0) = 0,  ∂G/∂z(0, 0) = −1,  (7.57)
and in addition,

∂G²/∂μ(0, 0) = 0,
∂²G²/∂z²(0, 0) = 0,
∂²G²/∂z∂μ(0, 0) ≠ 0,  (7.58)
∂³G²/∂z³(0, 0) ≠ 0.
Note that the four conditions of (7.58) are precisely the pitchfork conditions of
(7.54) applied to G². The sign of the ratio of the two nonzero terms in equation
(7.58) tells us the side of μ = 0 on which we have period-two points.
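The role of G² can be made concrete. The sketch below does not use the book's prototype (7.55); instead it assumes the standard flip prototype G(z, μ) = −(1 + μ)z + z³ (our choice), for which the second iterate acquires the pair of fixed points z = ±√μ for μ > 0, the period-two points:

```python
import math

# Flip (period-doubling) prototype:  G(z, mu) = -(1 + mu)*z + z**3.
# (Our stand-in for the text's prototype.)  For mu > 0 the fixed point
# z = 0 is unstable, since |dG/dz(0)| = 1 + mu > 1, and a stable
# period-two orbit z = +/- sqrt(mu) appears: because G is odd, the
# period-two points solve G(z) = -z, i.e. z**3 = mu*z.

def G(z, mu):
    return -(1.0 + mu) * z + z**3

def iterate(z, mu, n):
    for _ in range(n):
        z = G(z, mu)
    return z

mu = 0.2
z = iterate(0.1, mu, 400)   # settle onto the attracting period-two orbit
# the orbit now alternates between z and G(z, mu) = -z
```

After the transient the iterates alternate between +√μ and −√μ, each of which is a fixed point of G∘G but not of G, exactly the picture described above.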
r ↦ r + (dμ + ar²)r,
θ ↦ θ + φ₀ + φ₁μ + br².  (7.59)
The conditions for the existence of this normal form are a little more subtle than
in the continuous time case, in that the pair of eigenvalues at the boundary of the
unit disk at μ = 0 are each required not to be ±1 or any of the (complex) square
roots, cube roots, or fourth roots of 1. This is called avoiding strong resonances.
The amazing intricacy of what happens if there are strong resonances is discussed
in Arnold [12]. Strong resonances occur when a (continuous) system is forced at
frequencies that are commensurate in ratios of 1/1, 1/2, 1/3, 1/4 to the natural
frequency of the system. The fixed point of (7.59) at the origin is stable for dμ < 0
and unstable for dμ > 0; for dμ/a < 0, the circle of radius

r* = √(−dμ/a)
is invariant under the dynamics of (7.59). This invariant circle is said to be asymptotically stable if initial conditions "close to it" converge to it as the forward
iterations tend to ∞. Then it is easy to see that the invariant circle is asymptotically
stable for a < 0 and unstable for a > 0.
As in the case of the Hopf bifurcation, the four cases corresponding to the signs
of a, d give the supercritical and subcritical Naimark-Sacker bifurcation. We will
not repeat them here. However, we would like to stress the differences between
the situation for maps and vector fields: In the Hopf bifurcation the bifurcating
invariant set (a periodic orbit) was a single orbit, while in the Naimark-Sacker
bifurcation the bifurcating set consists of an invariant circle containing many
orbits. We may study the dynamics of (7.59) restricted to the invariant circle and
conclude that it is described by the circle map

θ ↦ θ + φ₀ + (φ₁ − bd/a)μ.

It follows that the orbits on the invariant circle are periodic if (φ₀ + (φ₁ − bd/a)μ)/2π
is rational, and if it is irrational then the orbits are dense on the invariant circle.
Thus, as μ is varied the orbits on the invariant circle are alternately periodic and
quasi-periodic.
1. When a single eigenvalue of the linearization passes through the origin, what
other bifurcations can one expect other than the fold, transcritical, and pitchfork? Is it possible to have more sophisticated branching of the number of
equilibria?
2. In Chapter 2 we saw that the transcritical and the pitchfork may be perturbed or
unfolded into folds and unbifurcated branches. Is there a theory of unfolding
of the bifurcations and a taxonomy of them?
3. What happens when more than one eigenvalue of the linearization crosses the
origin?
The answers to this set of questions fall into the domain of singularity theory. The
initial work in this area done by Thom [301] drew a great deal of press under the
title of "catastrophe theory" and also strong scientific criticism [13]. Our short
treatment follows the book by Poston and Stewart [244].
FIGURE 7.28. Two folded-over sheets of solutions of the canonical cusp catastrophe
meeting at the "cusp point" at the origin
FIGURE 7.29. Lines of cusps and fold comprising the swallowtail bifurcation (on the left)
and projections of the bifurcation surfaces on the b-c plane for a > 0, a = 0, and a < 0.
The figure on the right shows the projection of the catastrophe set into the b-c plane for differing
values of a, showing the number of real solutions of x⁴ + ax² + bx + c = 0 in the
different regions. The derivation of these regions is relegated to the exercises
(Problem 7.23). The codimension of the swallowtail is 3.
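The regions in the b-c plane are distinguished simply by how many real roots x⁴ + ax² + bx + c has. A quick numerical probe (the sample parameter values are our own choices) using NumPy's companion-matrix root finder:

```python
import numpy as np

# Count the real roots of x**4 + a*x**2 + b*x + c at sample points of the
# swallowtail's parameter space.

def n_real_roots(a, b, c, tol=1e-8):
    roots = np.roots([1.0, 0.0, a, b, c])
    return int(np.sum(np.abs(roots.imag) < tol))

# For a < 0 the b-c plane contains regions with 4, 2, and 0 real roots;
# for a > 0 only 2 or 0 real roots are possible.
```

For example, (a, b, c) = (−2, 0, 0.5) lies in a four-root region, (−2, 0, −1) in a two-root region, and (1, 0, 1) gives a quartic with no real roots at all.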
4. Canonical Butterfly Catastrophe
Consider the function f(x, a, b, c, d) = x⁵ + ax³ + bx² + cx + d. Figure 7.30
shows the bifurcation diagram of the butterfly catastrophe. The figure shows a
succession of bifurcation diagrams for various values of a, b, with each inset
corresponding to one region of parameter values.
FIGURE 7.30. Bifurcation diagram of the butterfly catastrophe as a succession of two-dimensional
bifurcation diagrams
A detailed analysis of the bifurcation set of this umbilic, shown in Figure 7.33,
is rather complicated, since it involves a hierarchy of fold points, cusp points,
swallowtails, and even lines of elliptic and hyperbolic umbilics. For details we
refer the reader to Poston and Stewart [244].
If p is the rank defect of D₁f(x₀, μ₀), choose nonsingular matrices [P_U | U] and
[P_V | V] with U, V ∈ ℝ^(n×p) such that
D₁N(q₀, μ₀) = 0.
Since both N and its first derivative vanish at (q₀, μ₀), N is referred to as the bifurcation function. The bifurcation function is a function on a lower dimensional space
(ℝᵖ), and the preceding procedure is referred to as the Lyapunov-Schmidt procedure (not unlike one you saw in Section 4.1.3). The Lyapunov-Schmidt procedure
is also a generalization of the splitting lemma of Problem 3.34 in Chapter 3.
The bifurcation functions are now analyzed by transforming them into one of
the "canonical" catastrophes described in the previous section for the case that
p = 1, 2. Whether or not the full unfolding is observed in a given application
depends on the nature of the parametric dependence of the bifurcation function
on μ. However, as a consequence of the completeness of the unfolding presented
earlier, if the first non-vanishing terms in the Taylor series of the bifurcation function match one of the canonical catastrophes described in the previous section, the
bifurcation pattern that will be observed is a section of the fold, cusp, swallowtail,
butterfly, or elliptic, hyperbolic, or parabolic umbilic.
ẋ = f(x, y),
0 = g(x, y),  (7.67)
ẋ = f(x, y),
εẏ = g(x, y),  (7.68)

where ε ∈ ℝ₊ models the parasitic dynamics associated with the system which
were singularly perturbed away to get (7.67). One sees that if one were to define
solutions of (7.67) as the limit of trajectories of (7.68), then so long as the values
(x, y) represent stable equilibria of the frozen or fast boundary layer system (x is
the frozen variable),

dy/dτ = g(x₀, y).  (7.69)
As in the previous chapter, we will assume that the frozen boundary layer system
is either a gradient system or a gradient-like system. Then the limit trajectories as
ε ↓ 0 of (7.68) will have y varying continuously as a smooth function of time.
However, once a point of singularity has been reached, the continuous variation
of y is no longer guaranteed. By assumption, we have that the frozen boundary
layer system (7.69) is gradient-like, that is, there exist finitely many equilibrium
points for the system for each x₀, and furthermore all trajectories converge to
one of the equilibrium points. This assumption is satisfied if, for example, there
exists a proper function S(x, y) : ℝⁿ × ℝᵐ → ℝ such that g(x, y) = D₂S(x, y),
and S(x, y) has finitely many stationary points, i.e., points where g(x, y) = 0, for each
x ∈ ℝⁿ. We will denote by S^{x₀}_{y_i} the inset of the equilibrium point y_i(x₀) of the frozen
boundary layer system (7.69) with x₀ frozen; the p(x₀) equilibria are indexed by
i = 0, …, p(x₀). Motivated by the analysis of stable sets of initial conditions in
singularly perturbed systems as discussed in Section 6.3 of the preceding chapter,
we will define the system to allow for a jump from y₀ if for all neighborhoods V
7.10 More Complex Bifurcations of Vector Fields and Maps
Further, if all sufficiently small neighborhoods V of (x₀, y₀) in {x₀} × ℝᵐ can be
decomposed as

V = (V ∩ S^{x₀}_{y₀}) ∪ (V ∩ S^{x₀}_{y₁}) ∪ ⋯ ∪ (V ∩ S^{x₀}_{y_q})

with V ∩ S^{x₀}_{y_i} ≠ ∅ for i = 1, …, q, then the system (7.67) allows for jumps from
(x₀, y₀) to one of the following points: (x₀, y₁), …, (x₀, y_q). It is useful to see
that by this definition a jump is optional from non-singular points if they are not
stable equilibrium points of the frozen boundary layer system. Also, the definition
does not allow for jumps from stable equilibria of the boundary layer system.
With this definition of jump behavior it is easy to see how the degenerate van der
Pol oscillator of Chapter 2 fits the definition. The reader may wish to review the
example of (2.24) and use the foregoing discussion to explain the phase portrait of
Figure 2.31. Other examples of jump behavior in circuits are given in the Exercises
(Problems 7.27 and 7.28). These problems illustrate that jump behavior is the
mechanism by which circuits with continuous elements produce digital outputs.
In general terms it is of interest to derive the bifurcation functions associated
with different kinds of bifurcations of the equilibria of the frozen boundary layer
system and classify the flow of the systems (7.67) and (7.68) close to the singularity
boundaries. For a start on this classification, the interested reader should look up
Sastry and Desoer [263] (see also Takens [293]). The drawbacks to our definition
of jumps are:
1. It is somewhat involved to prove conditions under which trajectories of the
augmented system (7.68) converge to those defined above as ε ↓ 0. The chief
difficulty is with the behavior around saddle or unstable equilibria of the frozen
boundary layer system. A probabilistic definition is more likely to be productive
(see [262]) but is beyond the scope of this book.
2. The assumption of "gradient-like" frozen boundary layer dynamics is needed to
make sure that there are no dynamic bifurcations, such as the Hopf bifurcation
discussed above and others discussed in the next section, since there are no
limits as ε ↓ 0 for closed orbits in the boundary layer system dynamics (again
a probabilistic approach is productive here; see [262]).
3. It is difficult to prove an existence theorem for a solution concept described
above, because of the possibility of an accumulation point of jump times, referred to as Zeno behavior. In this case a Filippov-like solution concept (see
Chapter 3) needs to be used.
Let γ(t, μ), t ∈ [0, T(μ)], with γ(0) = γ(T(μ)), represent a periodic orbit with period T(μ) ∈
ℝ₊ of the parameterized differential equation on ℝⁿ given by

ẋ = f(x, μ).  (7.70)
As in Section 7.3 we may define the Poincaré map associated with the stability of
this closed orbit. Denote the Poincaré map P(·, μ) : ℝⁿ⁻¹ → ℝⁿ⁻¹, where ℝⁿ⁻¹
is a local parameterization of a transverse section Σ located at γ(0). Without loss
of generality map γ(0) to the origin. Then, the stability of the fixed point at the
origin of the discrete time system consisting of this map, namely

z_{k+1} = P(z_k, μ),  (7.71)

determines the stability of the closed orbit. So long as the closed orbit is hyperbolic, that is, the fixed point at the origin of (7.71) is hyperbolic, the fixed point
changes continuously with the parameter μ, as does the periodic orbit γ(·, μ)
of the continuous time system. However, when eigenvalues of the Poincaré map
system (7.71) cross the boundary of the unit disk, the bifurcations of this system
reflect into the dynamics of the continuous-time system:
1. Eigenvalues of the Poincaré map cross the boundary of the unit disk at 1. There
are likely to be changes in the number of fixed points: for example, through fold,
transcritical, or pitchfork bifurcations of the Poincaré system. For example, a
saddle node in the Poincaré map system will be manifested as a saddle periodic
orbit merging into a stable (or unstable) periodic orbit of (7.70). This is referred
to in Abraham and Marsden [1] as dynamic creation or dual suicide depending
on whether the two periodic orbits are created or disappear. Dynamic pitchforks
consisting of the merger of three periodic orbits into one are also possible.
2. Eigenvalues of the Poincaré map cross the boundary of the unit disk at −1. A
period doubling bifurcation in the Poincaré map system (7.71) may be manifested as an attractive closed orbit of the system (7.70) becoming a saddle closed
orbit as one of the eigenvalues of the associated Poincaré map goes outside the
unit disk, and a new attractive closed orbit of twice the period is emitted from
it (see Figure 7.34). The reader should be able to fill in the details when the
closed orbit is initially a saddle orbit as well. This bifurcation is referred to as
subharmonic resonance, flip bifurcation, or subtle division [1].
3. Eigenvalues of the Poincaré map cross the boundary of the unit disk at complex locations. A Naimark-Sacker bifurcation of the system (7.71) results in the
creation of an invariant torus T² in the original system (7.70). Thus, as the
eigenvalues of the Poincaré map cross from the inside of the unit disk to the
outside, the stable closed orbit becomes a saddle closed orbit surrounded by
a stable invariant torus (see Figure 7.35). The dynamics on the stable invariant torus will consist of a finite number of closed orbits, some attracting. This
bifurcation is referred to as Naimark excitation.
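A Poincaré map for a concrete planar flow can be computed numerically. For the polar-coordinate system ṙ = μr − r³, θ̇ = 1 (our example), the section θ = 0 is revisited after exactly time 2π, so P sends r(0) to r(2π); its fixed point is the radius √μ of the periodic orbit, and |P′| < 1 there confirms the orbit is stable:

```python
import math

# Poincare map of r' = mu*r - r**3, theta' = 1 on the section theta = 0
# (our example).  The return time is 2*pi, so P(r) = r(2*pi) starting
# from r(0) = r; the radial equation is integrated with classical RK4.

def P(r, mu, n_steps=2000):
    dt = 2.0 * math.pi / n_steps
    f = lambda r: mu * r - r**3
    for _ in range(n_steps):
        k1 = f(r)
        k2 = f(r + 0.5 * dt * k1)
        k3 = f(r + 0.5 * dt * k2)
        k4 = f(r + dt * k3)
        r += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return r

def fixed_point(mu, r0=0.1, iters=50):
    """Iterate P until it settles on the fixed point (periodic orbit)."""
    r = r0
    for _ in range(iters):
        r = P(r, mu)
    return r
```

For μ = 0.25 the fixed point of P sits at r = 0.5 = √μ, and the numerical derivative of P there is about e^(−π) ≈ 0.04, well inside the unit disk: the closed orbit is hyperbolic and stable, so no bifurcation occurs at this parameter value.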
One other class of dynamic bifurcations is the saddle connection bifurcation,
which we first saw in Chapter 2 in the context of the Josephson junction, when
a saddle connection bifurcated into a finite period closed orbit on the cylinder
FIGURE 7.34. The flip bifurcation of a stable closed orbit into a saddle closed orbit (shown
dotted) surrounded by a period-two stable closed orbit
FIGURE 7.35. Naimark excitation showing the emergence of a stable invariant torus
surrounding a closed orbit changing from being stable to a saddle
ℝ × S¹. An example of saddle switching, where the outset of a saddle moved from
the domain of one attractor to another, was presented earlier in this chapter. These
bifurcations are global, in that they are difficult to predict by any linearization or
characteristic values.
One can now conceive of other more complex bifurcations: One could, for
example, conceive of a generalization of the Naimark excitation where a torus Tᵏ
gets excited to a torus Tᵏ⁺¹ (called a Takens excitation). However, the enumeration of all
possible bifurcations of dynamical systems is in its infancy: The examples that
we have given are just a start in beginning to dive deeply into the many mysteries
of nonlinear dynamical systems.
physics literature with the work of Ott, Grebogi, and Yorke, but more recently in
the control literature; see [324; 230; 188].
7.12 Exercises
Problem 7.1. Give a proof of Proposition 7.2.
Problem 7.2. Give several examples of systems that have equilibrium points
whose stable and unstable manifolds intersect.
Problem 7.3. Use the chain rule to prove Proposition 7.12.
Problem 7.4. Use the fact that solutions of a differential equation depend
smoothly on their initial conditions over a compact time interval to prove that
the Poincaré map is well defined for closed orbits.
Problem 7.5 Forced oscillation. Consider the planar second-order system

ẍ + δẋ + ω₀²x = ε cos(ωt).  (7.72)

Derive the conditions for a stable forced oscillation and the Poincaré map for the
case that ω₀ ≠ ω. Now consider the case where ω = ω₀. Show that there is no
difficulty when δ ≠ 0.
Problem 7.6 Recurrent points and minimal sets. In Chapter 2, Problem 2.14,
you showed that in ℝ², all recurrent points are either equilibrium points or lie on
closed orbits. Give a few examples of why this is false in higher dimensions. Also,
in Chapter 2, Problem 2.15, you showed that in ℝ² all unbounded minimal sets
consist of a single trajectory with empty α, ω limit sets. Give examples of the
falseness of this in higher dimensions.
Problem 7.7. Compute the center manifold for the following systems and study
their stability:
1. from [122]:

ẏ = −y − a(y + z)² + β(yz + z²),
ż = a(y + z)² − β(yz + z²).

2. from [122]:

ẋ₁ = −x₂²,
ẋ₂ = −x₂ + x₁² + x₁x₂.

3. from [122] with a ≠ 0:

ẋ₁ = ax₁² − x₂²,
ẋ₂ = −x₂ + x₁² + x₁x₂.
4.

ẋ₁ = x₁²x₂ − x₁⁵,
ẋ₂ = −x₂ + x₁².

5. from [155]:

ẋ₁ = ax₂ + u,
ẋ₂ = −x₂³ + bx₁x₂ᵐ.  (7.73)
Verify that the linearization of the control system has an uncontrollable eigenvalue
at 0. Verify that it satisfies the necessary condition for stabilization from Problem
6.9 of Chapter 6. Now, consider the state feedback law

u = −kx₁ + ψ(x₁),

where ψ(0) = 0, ψ′(0) = 0. Show that the origin is a stable (asymptotically stable,
unstable) equilibrium of the center manifold dynamics in the following cases:
1. m = 0, ab < 0: asymptotically stable center manifold dynamics for all values of k > 0.
2. m = 1: always unstable.
3. m = 2: asymptotically stable for k > max(0, ab).
4. m ≥ 3: asymptotically stable for all k > 0.
ẋ₁ = −μ₁x₁ − μ₂x₂ + x₁x₂,
ẋ₂ = μ₂x₁ − μ₁x₂ + (1/2)(x₁² − x₂²),
where μ₁ represents a damping factor and μ₂ a de-tuning factor. Note that the
system is Hamiltonian for μ₁ = 0. Draw the phase portrait to explicitly indicate
heteroclinic orbits. Is there resemblance to any of the portraits shown in Figure
7.15? What happens for μ₁ ≠ 0?
Problem 7.10 Models for surge in jet engine compressors [234]. A simple
second order model for explaining surge in jet engine compressors (which can
cause loss of efficiency and sometimes stall) is given by

ẋ = B(C(x) − y),
ẏ = (1/B)(x − f_α⁻¹(y)).  (7.74)

Here x stands for the non-dimensional compressor mass flow, y represents the
plenum pressure in the compressor unit, B stands for the non-dimensional compressor speed, and α the throttle area. For definiteness, we will assume the
functional form of the compressor characteristic C(x) = −x³ + (3/2)(b + a)x² −
3abx + (2c + 3ab² − b³)/3, and the functional form of the throttle characteristic
f_α(x) = (x²/α²)sgn(x). Here, a, b, c ∈ ℝ₊ are constants determining the qualitative shape of the compressor characteristic. The goal of this problem is to study
the bifurcations of this system for fixed a, b, c with bifurcation parameters α, B.
1. Linearize the system about its equilibria and find the number, stability, and type
of the equilibria. Classify saddle nodes and possible Hopf bifurcations in the
positive quadrant where x, y > 0. In particular show that an equilibrium at
(x*, y*) is asymptotically stable provided that x* ∉ ]a, b[. For x* ∈ ]a, b[
determine the stability as a function of the parameters B, α.
2. Use theorems and methods from Chapter 2, including the determination of an
attractive positively invariant region and the Bendixson theorem, to establish
regions in the positive quadrant where the closed orbits lie.
3. Study what happens to the dynamics of (7.74) in the limit that the compressor
speed B → ∞.
Problem 7.11. Use the center manifold stability theorem to establish the local
asymptotic stability of the origin of the system

1.
x1,n+1 = x1,n + x3,n⁴,
x2,n+1 = -x1,n - (1/2)x2,n - x1,n³,
x3,n+1 = x2,n - (1/2)x3,n + x2,n².

3.
x1,n+1 = x1,n - x3,n³,
x3,n+1 = x1,n + (1/2)x3,n + x1,n³.
Problem 7.13 Failure of series approximation of center manifolds. Find the
family of center manifolds for the system

ẏ = -y + z²,
ż = -z³,

by explicitly solving the center manifold PDE. Now evaluate its Taylor series at
the origin. Can you see what would happen if one were to try to obtain the Taylor
series approximation?
Problem 7.14 Derivation of fold conditions. By differentiating equation
(7.31), prove the formulas given in equations (7.32) and (7.33).
Problem 7.15 Derivation of transcritical conditions. Derive the conditions
(7.37) and (7.38) for the transcritical bifurcation by following the steps for the fold
bifurcation except applied to the function G(x, μ). Use the relationships (7.36) to
obtain conditions on g(x, μ).
Problem 7.16 Derivation of pitchfork conditions. Derive the conditions
(7.39) for the pitchfork bifurcation by applying the steps of the fold bifurcation
to the function G(x, μ).
Problem 7.17 Derivation of the discrete-time fold. Derive the conditions for
the discrete-time fold or saddle-node bifurcation, that is, equations (7.50).
Problem 7.18 Derivation of the discrete-time transcritical. Derive the
conditions for the discrete-time transcritical bifurcation, that is, equations (7.52).
Problem 7.19 Derivation of the discrete-time pitchfork. Derive the conditions
for the discrete-time pitchfork bifurcation, that is, equations (7.54).
Problem 7.20 Period doubling in the one hump map. Use the conditions for
period doubling (7.58) to check the successive period doublings of the one hump
map (1.3)

x_{n+1} = h x_n (1 - x_n)

of Chapter 1 at h = 3, h = 1 + √6, etc.
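As a quick numerical companion to this problem (a sketch of my own, not part of the text; the helper names are illustrative), one can iterate the one hump map and watch the attractor size change across the period-doubling values h = 3 and h = 1 + √6 ≈ 3.449:

```python
def f(x, h):
    return h * x * (1.0 - x)

def attractor(h, n_transient=2000, n_sample=64):
    # iterate past transients, then count distinct orbit values (rounded)
    x = 0.4
    for _ in range(n_transient):
        x = f(x, h)
    seen = set()
    for _ in range(n_sample):
        x = f(x, h)
        seen.add(round(x, 4))
    return sorted(seen)

# the fixed point x* = 1 - 1/h has multiplier f'(x*) = 2 - h, which crosses -1
# at h = 3 (the first period doubling); the 2-cycle in turn loses stability at
# h = 1 + sqrt(6) ~ 3.449
assert len(attractor(2.9)) == 1   # stable fixed point
assert len(attractor(3.2)) == 2   # period-2 orbit
assert len(attractor(3.5)) == 4   # period-4 orbit
print("orbit sizes:", [len(attractor(h)) for h in (2.9, 3.2, 3.5)])
```

The analytic condition f'(x*) = 2 - h = -1 reproduces h = 3 directly; the remaining doublings are most easily seen numerically as above.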
Problem 7.21 Double-zero eigenvalue bifurcations [329]. Consider the two-dimensional
system with two parameters μ1, μ2 given by

ẋ1 = x2,
ẋ2 = μ1 + μ2 x2 + x1² + x1 x2.     (7.76)

This is actually a normal form for a double-zero eigenvalue (with nontrivial Jordan
structure) bifurcation at μ1 = μ2 = 0. Draw the bifurcation diagram in the
μ1, μ2 plane, taking care to draw the bifurcation surfaces and identify the various
bifurcations. Also, repeat this problem with the last term of equation (7.76) replaced by
-x1 x2.
Problem 7.22 Bifurcation diagram of a cusp. Derive the bifurcation diagram
for the cubic equation x³ + ax + b = 0. The key is to figure out when two out of
the three real roots coincide and annihilate. You can do this by determining values
of x at which both x³ + ax + b = 0 and (d/dx)(x³ + ax + b) = 3x² + a = 0. Of course, you can
also solve the cubic equation in closed form and get the answer that way. Be sure
to derive Figure 7.28.
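The coincidence condition can be checked numerically. The following sketch (my own helper names; it assumes the standard discriminant identity 4a³ + 27b² = 0 for the fold curve of the depressed cubic) verifies that on that curve the cubic and its derivative share a root:

```python
import math

def fold_b(a):
    # on the cusp curve 4a^3 + 27b^2 = 0 (a <= 0) the two fold branches are
    # b = +-2(-a/3)^(3/2); there x^3 + a x + b has a double real root
    return 2.0 * (-a / 3.0) ** 1.5

a = -3.0
b = fold_b(a)                     # = 2.0 for a = -3
x_double = math.sqrt(-a / 3.0)    # = 1.0, the root of p'(x) = 3x^2 + a
# both p and p' vanish at the double root, and 4a^3 + 27b^2 vanishes on the curve
assert abs(x_double**3 + a * x_double + b) < 1e-12
assert abs(3.0 * x_double**2 + a) < 1e-12
assert abs(4 * a**3 + 27 * b**2) < 1e-9
print("fold point (a, b) =", (a, b), "double root at x =", x_double)
```

Sweeping a < 0 and plotting the two branches of fold_b traces out the cusp of Figure 7.28.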
Problem 7.23 Bifurcation diagram of a swallowtail. Derive the bifurcation
diagram of a swallowtail (Figure 7.29). Use the method of the previous problem
to determine the regions where the number of solutions transitions from 4 to 2 to 0.
Problem 7.24 Bifurcation diagram of an elliptic umbilic. Derive the bifurcation
diagram of an elliptic umbilic as shown in Figure 7.31. Use computer tools
judiciously to help you fill in the number and type of solutions in the different regions.
It is useful to note that the elliptic umbilic of equation (7.61) is the gradient
of the function V(x1, x2) = x1³ - 3x1x2² + a(x1² + x2²) + bx1 + cx2.
Problem 7.25 Bifurcation diagram of a hyperbolic umbilic. Derive the bifurcation
diagram of a hyperbolic umbilic as shown in Figure 7.32. Use computer
tools and elimination of one of the variables x1, x2 judiciously to help you fill in
the number and type of solutions in the different regions. It is useful to note
that the hyperbolic umbilic of equation (7.62) is the gradient of the function
V(x1, x2) = x1³ + x2³ + a x1 x2 + b x1 + c x2.
Problem 7.26 Stabilization of systems with uncontrollable linearization [21].
Consider the problem of stabilizing a nonlinear, single-input system by state feedback;
to simplify matters, assume that the control enters linearly. We consider
systems of the form

ẋ = f(x) + bu
346 7. Dynamical Systems and Bifurcations
with x ∈ ℝⁿ and f(0) = 0. In Chapter 6 we established that the system is not
stabilizable by smooth feedback if the linearization is not stabilizable, that is, if for some
s ∈ ℂ₊, rank[sI - A | b] < n. We also showed how a linear feedback locally
asymptotically stabilizes the system if rank[sI - A | b] = n for all s in the closed right half plane. In
this exercise, we study how it may be possible to use some nonlinear (smooth)
feedback law to asymptotically stabilize systems which may have some jω-axis
uncontrollable modes, that is, for some ω, rank[jωI - A | b] < n. The current
problem focuses on a very specific system; for the more general theory refer to
Behtash and Sastry [21]. Consider the system
ẋ1 = x2 - x3² + x1x3² - 2x2x3,
ẋ2 = x3² + x1x3,     (7.77)
ẋ3 = -5x3 + u.
Check that there is a double-zero eigenvalue of the linearization that is not linearly
controllable. Now, choose a feedback law u with undetermined coefficients α, β, γ, and
write the dynamics on the center manifold for this system. Determine α, β, γ
to obtain an asymptotically stable center manifold system. You may wish to use
quadratic or quartic Lyapunov functions to help you along.
To study the robustness of the control law that you derived above, perturb the
first equation of (7.77) by εx1. Then, for all positive ε and for all smooth feedback,
the origin is unstable. However, verify that the control law that you derived earlier
actually stabilizes the x1, x2 variables to a neighborhood of the origin. Can you
actually determine the ω-limit set of trajectories by doing a bifurcation analysis
of the perturbed system (7.77) with the perturbation parameter ε as bifurcation
parameter?
Problem 7.27 Two tunnel diodes [66]. Consider the circuit of Figure 7.36 with
two tunnel diodes in series with a 1 henry inductor. The constitutive equations
of the two tunnel diodes are also given in the figure along with the composite
characteristic. By introducing ε parasitic capacitors around the two voltage-controlled
resistors (as suggested in Section 6.4) as shown in the figure, give the jump
dynamics of the circuit.
FIGURE 7.36. Two tunnel diode circuit with parasitics shown dotted
transistor as given in the equations (7.78) below. The circuit equations, with the
two transistors assumed identical, are given by
v̇c2 - v̇c1 = (1/C)(I0 - i).     (7.78)
Here V_T, I0, I_S are all constants; R, C stand for the resistance and capacitance
values, respectively. Simplify the equations to get an implicit equation for v :=
vc2 - vc1:

v̇ = (1/C)(I0 - i),
0 = v - (2I0 - 2i)R - V_T log((2I0 - i)/i).     (7.79)
After suitable addition of parasitic capacitors to the circuit, prove that the circuit
oscillates in a manner reminiscent of the degenerate van der Pol oscillator.
8
Basics of Differential Geometry
The material on matrix Lie groups in this chapter is based on notes written by
Claire Tomlin, of Stanford University, and Yi Ma.
In Chapter 3 we had an introduction to differential topology with definitions of
manifolds, tangent spaces, and their topological properties. In this chapter we study
some elementary differential geometry which will provide useful background for
the study of nonlinear control. We will study tangent spaces, vector
fields, distributions, and codistributions from a differential geometric point of view.
We will prove an important generalization of the existence-uniqueness theorem
of differential equations for certain classes of distributions, called the Frobenius
theorem. This theorem is fundamental to the study of control of nonlinear systems,
as we will see in the following chapters. Finally, we study matrix Lie groups, which
are the simplest class of manifolds which are not vector spaces. The tangent spaces
to Lie groups are especially simple, and their study proves to be very instructive.
In this context we will also study left-invariant control systems on Lie groups. In
some sense these are the simplest class of nonlinear control systems, behaving like
linear systems with a Lie group as state space.
This definition is somewhat abstract, but it will help the reader to think about tangent
vectors as differential operators. For instance, if M = ℝᵐ and the coordinate
directions are ri for i = 1, ..., m, examples of tangent vectors are the differential
operators Eia = ∂/∂ri at every point a ∈ ℝᵐ. They act on smooth functions by
differentiation in the usual way as

Eia f = ∂f/∂ri.

It is intuitively clear that the tangent space to ℝᵐ at any point can be identified
with ℝᵐ. We try to make this point clear by showing that the Eia defined above
for i = 1, ..., m are a basis set for Ta ℝᵐ.
It follows that every vector field in ℝᵐ can be described as

Σ_{i=1}^m ai(x) Eia

for some choice of smooth functions ai(x). This construction can be used to
develop an understanding of general vector fields on m-dimensional manifolds. To
this end, we develop a few definitions. If F : M → N is a smooth map, we define
a map called the differential map or push forward map as follows:
Definition 8.2 Push Forward Map or Differential Map. Given a smooth map
F : M → N we define the push forward, or differential, map

F* : TpM → TF(p)N.
Indeed, it is easy to check that F*pXp ∈ TF(p)N and that it is linear over the reals. As
for checking whether F*pXp is a derivation, note that

(F⁻¹)*F(p) = (F*p)⁻¹.
By abuse of notation, these basis elements are simply referred to as ∂/∂xi|p, with the
understanding that if f ∈ C∞(p), then

(∂/∂xi)|p (f) = ∂(f ∘ ψ⁻¹)/∂ri.

Notice that the abuse of notation is consistent with the fact that f ∘ ψ⁻¹ is a local
representation of f. When the coordinate chart used to describe the neighborhood
of the manifold changes, the basis for TpM changes. For instance, a given tangent
vector Xp ∈ TpM may be represented with respect to the basis ∂/∂xi given by
the chart U, ψ as

Xp = Σ_{i=1}^m αi(p) ∂/∂xi.
Problem 3.29 gives the matrix transforming α into β. This discussion may be
extended to a description of the tangent map of a smooth map F : M → N. Indeed,
let U, ψ and V, φ be coordinate charts for M, N in neighborhoods of p ∈ M and
F(p) ∈ N. Then, with respect to the bases ∂/∂xi and ∂/∂zj, the representation of the
tangent map is the Jacobian of the local representation of F, namely φ ∘ F ∘ ψ⁻¹ =
(F1(x1, ..., xm), ..., Fn(x1, ..., xm))ᵀ. That is,

F*p(∂/∂xi|p)(zj) = ∂(zj ∘ F)/∂xi |p = ∂Fj/∂xi.
The tangent bundle of a smooth manifold is defined as

TM = ∪_{p∈M} TpM.

A vector field is a map X : M → TM with the restriction that π ∘ X is the identity on M. Here π is the natural projection
from TM onto M, and X is referred to as a smooth section of the tangent bundle
TM.
The second condition in the above definition guarantees that there is exactly one
tangent vector at each point p ∈ M. Further, the tangent vectors Xp ∈ TpM vary
with respect to p in a smooth fashion. As a consequence of this definition, one can
derive a local representation for a vector field consistent with a coordinate chart
U, ψ = U, x1, ..., xm for the manifold M. At a point p ∈ M the vector Xp is
equal to Σ_{i=1}^m αi ∂/∂xi. For other points in a neighborhood of the point p the formula
for the vector field is given by

Xp = Σ_{i=1}^m αi(p) ∂/∂xi,     (8.2)
where the αi(p) are smooth functions of p ∈ M. Vector fields represent differential
equations on manifolds. Indeed, if σ(t) : ]a, b[ → M is a curve on the manifold, it
is said to be an integral curve of the vector field X if

σ̇(t) = X(σ(t))     (8.3)
(8.4)
By the existence and uniqueness theorem for ordinary differential equations, the
existence of integral curves for a given smooth vector field is guaranteed. The vector
field is said to be complete if the domain of definition of the integral curves can be
chosen to be ]-∞, ∞[. In this case, the integral curves of a vector field define a
one-parameter (parameterized by t) family of diffeomorphisms Φt(x) : M → M,
with the understanding that Φt(x) is the point on the integral curve starting from
initial condition x at t = 0. (Make sure to convince yourself that the existence and
uniqueness theorem for the solutions of a differential equation guarantees that the
map Φt(x) is a diffeomorphism.) This one-parameter family of diffeomorphisms
is referred to as the flow of the vector field X.
Vector fields transform naturally under diffeomorphisms. First, consider coordinate
transformations: let y = T(x) be a coordinate transformation from the local
coordinates U, x to the coordinates V, y. Then, if the vector field X is represented
as Σ_{i=1}^m Xi(x) ∂/∂xi or Σ_{i=1}^m Yi(y) ∂/∂yi in the two coordinate systems, it follows that

[Y1(T(x)), ..., Ym(T(x))]ᵀ = (∂T/∂x)(x) [X1(x), ..., Xm(x)]ᵀ.
Y_{F(p)} := F*p Xp.

In the sequel we will usually deal with the case that F is a diffeomorphism, in
which case the vector field Y is said to be F-induced and is denoted by F*X. If
g : N → ℝ is a smooth function, it follows that

Y(g) = F*X(g) = (X(g ∘ F)) ∘ F⁻¹.
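The push forward formula can be seen in action numerically. In the sketch below (my own example, not from the text; helper names are illustrative), X = d/dx on ℝ and F(x) = eˣ is a diffeomorphism onto ℝ₊; the formula Y(g) = (X(g ∘ F)) ∘ F⁻¹ then yields the induced field y d/dy:

```python
import math

def X(phi):
    # the vector field d/dx acting on a smooth function (numerical derivative)
    h = 1e-6
    return lambda x: (phi(x + h) - phi(x - h)) / (2.0 * h)

F, Finv = math.exp, math.log   # diffeomorphism F : R -> R_+ and its inverse

def FstarX(g):
    # the induced field: (F_* X)(g) = (X(g o F)) o F^{-1}
    return lambda y: X(lambda x: g(F(x)))(Finv(y))

# for F(x) = e^x:  d/dx g(e^x) = e^x g'(e^x); at x = log(y) this is y g'(y),
# i.e., F_* (d/dx) acts as y d/dy on R_+
g = lambda y: y ** 3
for y in (0.5, 1.0, 2.0):
    assert abs(FstarX(g)(y) - 3.0 * y ** 3) < 1e-4   # y g'(y) = 3 y^3
print("F_*(d/dx) = y d/dy verified at sample points")
```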
The Lie derivative of a function h with respect to a vector field X is the rate of
change of h in the direction of X:

LX h(p) = X(h)(p).
Definition 8.6 Lie Bracket of Two Vector Fields. Given two smooth vector
fields X, Y on M, we define a new vector field called the Lie bracket [X, Y] by
setting, for given h : M → ℝ,

[X, Y]p(h) = Xp(Y(h)) - Yp(X(h)).     (8.5)
It is necessary to verify that the definition above does yield a vector field [X, Y].
Linearity over the reals is easy to verify for the definition of equation (8.5). To
check the derivation property we choose two functions g, h : M → ℝ and note
that

[X, Y]p(gh) = Xp(Y(gh)) - Yp(X(gh))
= Xp(Y(g)h + Y(h)g) - Yp(X(g)h + X(h)g)
= Xp(Y(g))h(p) + Xp(h)Yp(g) + Xp(Y(h))g(p) + Xp(g)Yp(h)
  - Yp(X(g))h(p) - Yp(h)Xp(g) - Yp(X(h))g(p) - Yp(g)Xp(h)
= [X, Y]p(g)h(p) + g(p)[X, Y]p(h).
8.1 Tangent Spaces 355
We may now derive the expression in local coordinates for [X, Y]. Indeed, if in
local coordinates (x1, ..., xm) the vector fields have the representations

X = Σ_{i=1}^m Xi(x) ∂/∂xi,   Y = Σ_{i=1}^m Yi(x) ∂/∂xi,

then, choosing a coordinate function xj, we get

[X, Y] = Σ_{j=1}^m ( Σ_{i=1}^m ( Xi ∂Yj/∂xi - Yi ∂Xj/∂xi ) ) ∂/∂xj.     (8.6)
To get a feel for this formula consider the following example:
Example 8.7 Lie Brackets of Linear Vector Fields. Consider two linear vector
fields with coordinates given by X = Ax, Y = Bx on ℝᵐ, that is,

Xi(x) = Σ_{j=1}^m aij xj,   Yi(x) = Σ_{j=1}^m bij xj.
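Example 8.7 can be checked numerically. Applying the coordinate formula (8.6) to the linear fields X = Ax, Y = Bx gives [X, Y](x) = (BA - AB)x; the sketch below (my own illustrative code, with Jacobians computed by finite differences) verifies this:

```python
import numpy as np

def bracket(Xf, Yf, x, h=1e-6):
    # [X, Y](x) = DY(x) X(x) - DX(x) Y(x); Jacobians by central differences
    n = len(x)
    DX, DY = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        DX[:, i] = (Xf(x + e) - Xf(x - e)) / (2 * h)
        DY[:, i] = (Yf(x + e) - Yf(x - e)) / (2 * h)
    return DY @ Xf(x) - DX @ Yf(x)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
x = rng.standard_normal(3)

lhs = bracket(lambda z: A @ z, lambda z: B @ z, x)
assert np.allclose(lhs, (B @ A - A @ B) @ x, atol=1e-5)
print("[Ax, Bx](x) = (BA - AB) x verified numerically")
```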
2. Jacobi identity:

[[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0.

3.

[fX, gY] = fg[X, Y] + f LX(g) Y - g LY(f) X.
(8.7)
Proof: Let g : N → ℝ be a smooth function. Then we have
f(x) = Σ_{i=1}^n fi(x) ∂/∂xi.
If a(x) : ℝⁿ → ℝ is a smooth function, then it is easy to see that a(x)f(x) is also
a vector field. Further, the sum of two vector fields is also a vector field. Thus, the
space of all vector fields is a module over the ring of smooth functions on U ⊂ ℝⁿ,
denoted by C∞(U). (A module is a linear space for which the scalars are elements
of a ring.) Also, the space of vector fields is a vector space over the field of reals.
We now make the following definition:
In the preceding definition, span is understood to mean over the ring of smooth
functions, i.e., elements of Δ at the point x are of the form

α1(x)X1(x) + α2(x)X2(x) + ··· + αm(x)Xm(x)

with the αi(x) all smooth functions of x.
Remarks:
1. A distribution is a smooth assignment of a subspace of ℝⁿ to each point x.
2. As a consequence of the fact that at a fixed x, Δ(x) is a subspace of ℝⁿ, we
may check that if Δ1, Δ2 are two smooth distributions, then their intersection
Δ1(x) ∩ Δ2(x), defined pointwise, i.e., at each x, is also a distribution, as is
their sum (subspace sum) Δ1(x) + Δ2(x). Also note that the union of two
distributions is not necessarily a distribution (give a counterexample to convince
yourself of this). The question of the smoothness of the sum distribution and
the intersection distribution is answered in Proposition 8.11 below.
3. If X(x) ∈ ℝ^{n×m} is the matrix obtained by stacking the Xi(x) next to each other
as columns, then

Δ(x) = Image X(x).

We may now define the rank of the distribution at x to be the rank of X(x).
This rank, denoted by m(x), is not, in general, constant as a function of x. If
the rank is locally constant, i.e., constant in a neighborhood of x, then x is
said to be a regular point of the distribution. If every point of the distribution
is regular, the distribution is said to be regular.
Proposition 8.11 Smoothness of the Sum and Intersection Distributions. Let
Δ1(x), Δ2(x) be two smooth distributions. Then their sum Δ1(x) + Δ2(x) is also a
smooth distribution. Let x0 be a regular point of the distributions Δ1, Δ2, and Δ1 ∩ Δ2.
Then there exists a neighborhood of x0 such that Δ1 ∩ Δ2 is a smooth distribution
in that neighborhood.
Proof: Let

Δ1 = span{X1, ..., X_{d1}},   Δ2 = span{Y1, ..., Y_{d2}}.

It follows that the sum distribution

Δ1 + Δ2 = span{X1(x), ..., X_{d1}(x), Y1(x), ..., Y_{d2}(x)}

is also a smooth distribution. As for the intersection, vector fields in the intersection
are obtained by solving the following equation, expressing the equality of vectors in
Δ1 and Δ2:

Σ_{i=1}^{d1} ai(x)Xi(x) - Σ_{j=1}^{d2} bj(x)Yj(x) = 0.
Indeed, consider

Δ1 = span{ (1, 0)ᵀ },   Δ2 = span{ (1, x1)ᵀ }.
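The pointwise computation in this proof — solving Σ ai Xi - Σ bj Yj = 0 at a fixed x — can be sketched numerically via a nullspace computation. This is my own illustrative code (the SVD-based nullspace is an assumption of the sketch, not the text's method):

```python
import numpy as np

def intersection_basis(X, Y, tol=1e-10):
    # columns of X span Delta1 at the point, columns of Y span Delta2;
    # solve X a - Y b = 0 through the nullspace of [X | -Y] (via SVD),
    # then the intersection is spanned by the vectors X a
    M = np.hstack([X, -Y])
    _, s, Vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    null = Vt[rank:].T                 # one column per nullspace direction
    a_part = null[:X.shape[1], :]
    return X @ a_part

# two planes in R^3 whose intersection is the line spanned by e2
D1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # span{e1, e2}
D2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # span{e2, e3}
Bint = intersection_basis(D1, D2)
v = Bint[:, 0] / np.linalg.norm(Bint[:, 0])
assert Bint.shape[1] == 1 and abs(abs(v[1]) - 1.0) < 1e-10
print("intersection spanned by", v)
```

Repeating this at each x (with x-dependent columns) is exactly the pointwise solve in the proof; smoothness of the resulting basis is the delicate issue the proposition addresses.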
Ω = span{ω1, ..., ωd}.
Remarks:
1. The rank of a codistribution may be defined just like the rank of a distribution.
Also, regular points for codistributions and regular codistributions are
analogously defined.
8.3 Frobenius Theorem 359
ω(X) ≡ 0.
3. Given a distribution Δ(x) we may construct its annihilator: the codistribution
Ω of all covectors that annihilate Δ. This is abbreviated as Ω = Δ⊥
or Δ = Ω⊥. As in the instance of intersections of distributions, care must be
taken about smoothness. It is not always true that the annihilator of a smooth
distribution (resp. codistribution) is a smooth codistribution (resp. distribution).
See, however, Proposition 8.14 below.
Proposition 8.14 Smooth Annihilators. Let x0 be a regular point of a smooth
distribution Δ(x). Then x0 is a regular point of the annihilator Δ⊥. Also, in a
neighborhood of x0, Δ⊥ is a smooth codistribution.
Proof: See Exercises.
L_{Xi} λj(x) ≡ 0

for i = 1, ..., d. Now a simple calculation establishes that

L_{[Xi, Xk]} λj ≡ 0.

Since, by assumption, the differentials {dλ1, ..., dλ_{n-d}} span Δ⊥, we deduce that
the vector field [Xi, Xk] is also in Δ. Since Xi, Xk are arbitrary, it follows that Δ
is involutive.
The proof of the sufficiency of the involutivity condition is by construction:
Denote by φ^{Xi}_{ti}(x) the flow along the vector field Xi for ti seconds starting from x
at 0. Thus, φ^{Xi}_{ti}(x) satisfies the differential equation

(d/dti) φ^{Xi}_{ti}(x) = Xi(φ^{Xi}_{ti}(x)),   φ^{Xi}_0(x) = x.
where the ψj(x) are real-valued functions. We claim that the last n - d of these
functions are independent solutions of the given partial differential equations.
Indeed, from the identity

(∂F⁻¹/∂x)|_{x = F(t1, ..., tn)} (∂F/∂t) = I

it follows that the last n - d rows of the Jacobian of F⁻¹, i.e., the differentials
dψ_{d+1}, ..., dψ_n, annihilate the first d columns of the Jacobian of F. But, by the
second claim above, the first d columns of the Jacobian of F span the distribution
Δ. Also, by construction, the differentials dψ_{d+1}, ..., dψ_n are independent,
completing the proof.
For the proof of claim (1) above, note that for all x ∈ ℝⁿ the flow φt is
well defined for ti small enough. Thus the map F is well defined for t1, ..., tn
sufficiently small. To show that F is a local diffeomorphism, we need to evaluate
the Jacobian of F. It is convenient at this point to introduce the notation for push
forwards or differentials of maps M(x) : ℝⁿ → ℝⁿ by the symbol

(M)* = ∂M/∂x.
Now by the chain rule we have that

∂F/∂ti = (φ^{X1}_{t1})* ··· (φ^{X_{i-1}}_{t_{i-1}})* (∂/∂ti)(φ^{Xi}_{ti} ∘ ··· ∘ φ^{Xn}_{tn})(x)
       = (φ^{X1}_{t1})* ··· (φ^{X_{i-1}}_{t_{i-1}})* Xi((φ^{Xi}_{ti} ∘ ··· ∘ φ^{Xn}_{tn})(x)).
are linearly dependent. To do this we will show that for small |t| and two vector
fields τ, θ ∈ Δ,

(φ^θ_{-t})* τ ∘ φ^θ_t(x) ∈ Δ(x).

We will, in fact, specialize the choice of τ to be one of the Xi and define

Vi(t) = (φ^θ_{-t})* Xi ∘ φ^θ_t(x)

for i = 1, ..., d. Since
and

(d/dt)(f ∘ φ^θ_t(x)) = (∂f/∂x) θ ∘ φ^θ_t(x),

it follows that the vector field Vi(t) satisfies the differential equation

V̇i = (φ^θ_{-t})* [θ, Xi] ∘ φ^θ_t(x).
By hypothesis, Δ is involutive, so that we may write

[θ, Xi] = Σ_{j=1}^d μij Xj,
1. The group of units of Mn(𝕂) is the set of matrices M for which det(M) ≠ 0,
where 0 is the additive identity of 𝕂. This group is called the general linear
group and is denoted by GL(n, 𝕂).
2. SL(n, 𝕂) ⊂ GL(n, 𝕂) is the subgroup of GL(n, 𝕂) whose elements have
determinant 1. SL(n, 𝕂) is called the special linear group.
3. Orthogonal matrix groups. O(n, 𝕂) ⊂ GL(n, 𝕂) is the subgroup of GL(n, 𝕂)
whose elements are matrices A which satisfy the orthogonality condition
Āᵀ = A⁻¹, where Āᵀ is the complex conjugate transpose of A. Examples
of orthogonal matrix groups are:
a. O(n) ≡ O(n, ℝ), called the orthogonal group;
b. U(n) ≡ U(n, ℂ), called the unitary group;
c. Sp(n) ≡ Sp(n, ℍ), called the symplectic group.
If, for A ∈ GL(n, ℍ), Ā denotes the quaternionic conjugate, defined by
conjugating each entry so that the conjugate of

x + iy + jz + kw   is   x - iy - jz - kw,

then

Sp(n) = {B ∈ Mn(ℍ) : ĀᵀB = I with Ā = B̄}.

An equivalent way of defining the symplectic group is as a subset of GL(2n, ℂ),
such that

Sp(n) = {B ∈ GL(2n, ℂ) : BᵀJB = J, B̄ᵀ = B⁻¹},

where the matrix J is called the infinitesimal symplectic matrix and is written
as

J = [ 0         I_{n×n}
      -I_{n×n}  0       ].
We will see in the next section that matrix groups are manifolds. Owing to their
special structure, it is possible to characterize their tangent spaces. Here we will
first give an algebraic definition of the tangent space. In the next section, we will
see that it is straightforward to verify that the following algebraic definition of the
tangent space to the matrix group is the same as the geometric definition given
earlier in this chapter.
Proof: If γ(·) and σ(·) are two curves in G, then γ'(0) and σ'(0) are in T. Also,
γσ is a curve in G with (γσ)(0) = γ(0)σ(0) = I. Since

d/du (γ(u)σ(u)) = γ'(u)σ(u) + γ(u)σ'(u),

we have

(γσ)'(0) = γ'(0)σ(0) + γ(0)σ'(0) = γ'(0) + σ'(0).

Since γσ is in G, (γσ)'(0) is in T. Therefore, γ'(0)σ(0) + γ(0)σ'(0) is in T,
and T is closed under vector addition. Also, if γ'(0) ∈ T and r ∈ ℝ, and if
we let σ(u) = γ(ru), then σ(0) = γ(0) = I and σ'(0) = rγ'(0). Therefore,
rγ'(0) ∈ T, and T is closed under scalar multiplication. □
We now introduce a family of matrices that we will use to determine the dimensions
of our matrix groups.
1. Let so(n) denote the set of skew-symmetric matrices

so(n) = {A ∈ Mn(ℝ) : Aᵀ + A = 0}.

2. Let su(n) denote the set of skew-Hermitian matrices
Now consider the orthogonal matrix group. Let γ : [a, b] → O(n) be such that
γ(u) = A(u), where A(u) ∈ O(n). Therefore, Aᵀ(u)A(u) = I. Taking the
derivative of this identity with respect to u, we have

A'ᵀ(u)A(u) + Aᵀ(u)A'(u) = 0.

Since A(0) = I,

A'ᵀ(0) + A'(0) = 0.

Thus, the vector space T_I O(n) of tangent vectors to O(n) at I is a subset of the
set of skew-symmetric matrices, so(n):

T_I O(n) ⊂ so(n).
exp(A) := e^A = I + A + A²/2! + A³/3! + ··· .

The matrix logarithm log : Mn(𝕂) → Mn(𝕂) is defined only for matrices close
to the identity matrix I:

log X = (X - I) - (X - I)²/2 + (X - I)³/3 - ··· .
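A minimal numerical sketch of these two series (helper names and truncation depths are my own choices) confirms that log inverts exp near the identity, and illustrates that the exponential of a skew-symmetric matrix lands in the orthogonal group:

```python
import numpy as np

def expm_series(A, terms=30):
    # truncated series exp(A) = I + A + A^2/2! + ...
    n = A.shape[0]
    out, term = np.eye(n), np.eye(n)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def logm_series(X, terms=60):
    # truncated series log(X) = (X-I) - (X-I)^2/2 + ..., valid near X = I
    n = X.shape[0]
    D = X - np.eye(n)
    out, term = np.zeros((n, n)), np.eye(n)
    for k in range(1, terms):
        term = term @ D
        out = out + ((-1) ** (k + 1)) * term / k
    return out

A = np.array([[0.0, 0.2], [-0.2, 0.0]])   # small skew-symmetric matrix
X = expm_series(A)
assert np.allclose(logm_series(X), A, atol=1e-10)    # log inverts exp near I
assert np.allclose(X.T @ X, np.eye(2), atol=1e-12)   # exp(skew) is orthogonal
print("exp/log series check passed")
```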
Proposition 8.19. A E so(n) implies eA E SO(n).
Proof: Define A = σ'(0), where σ(u) = log γ(u) (i.e., γ(u) = e^{σ(u)}). We
need to show that σ(u) = Au, a line through 0 in Mn(𝕂). Indeed,

σ'(u) = lim_{v→0} (σ(u + v) - σ(u))/v = lim_{v→0} (log γ(u + v) - log γ(u))/v.
Thus,

dim O(n, 𝕂) ≥ dim so(n, 𝕂).

But we have shown using our definition of the dimension of a matrix group that

dim O(n, 𝕂) ≤ dim so(n, 𝕂).

Therefore,

dim O(n, 𝕂) = dim so(n, 𝕂),
and the tangent space at I to O(n, 𝕂) is exactly the set of skew-symmetric matrices.
The dimension of so(n, ℝ) is easily computed: we simply find a basis
for so(n, ℝ). Let Eij be the matrix whose entries are all zero except the ij-th
entry, which is 1, and the ji-th entry, which is -1. Then the Eij, for i < j, form
a basis for so(n). There are n(n - 1)/2 of these basis elements. Therefore,
dim O(n) = n(n - 1)/2.
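The basis count is easy to reproduce programmatically (a sketch with my own helper name):

```python
import numpy as np

def so_basis(n):
    # basis {E_ij : i < j} of so(n): +1 in entry (i, j), -1 in entry (j, i)
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            E = np.zeros((n, n))
            E[i, j], E[j, i] = 1.0, -1.0
            basis.append(E)
    return basis

basis4 = so_basis(4)
assert len(basis4) == 4 * 3 // 2                  # n(n-1)/2 = 6 for n = 4
assert all(np.allclose(E.T, -E) for E in basis4)  # each element is skew
print("dim so(4) =", len(basis4))
```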
Similarly (see Problem 8.8), one may compute that dim SO(n) = n(n - 1)/2,
dim U(n) = n², dim SU(n) = n² - 1, and dim Sp(n) = n(2n + 1).
such that

Xg = dLg(X_I).

Note that dLg is the push forward map (Lg)*. The vector field formed by assigning
Xg ∈ TgG for each g ∈ G is called a left-invariant vector field.
Also, if X and Y are left-invariant vector fields, then X + Y and rX, r ∈ ℝ,
are also left-invariant vector fields on G. Thus, the left-invariant vector fields of G
form an algebra under [·, ·], which is called the Lie algebra of G and denoted L(G).
The Lie algebra L(G) is actually a subalgebra of the Lie algebra of all smooth
vector fields on G. With this notion of a Lie group's associated Lie algebra, we
can now look at the Lie algebras associated with some of our matrix Lie groups.
We first look at three examples, and then, in the next section, study the general
map from a Lie algebra to its associated Lie group.
Examples:
• The Lie algebra of GL(n, ℝ) is denoted by gl(n, ℝ), the set of all n × n real
matrices. The tangent space of GL(n, ℝ) at the identity can be identified with
ℝ^{n²}, since GL(n, ℝ) is an open submanifold of ℝ^{n²}. The Lie bracket operation
is simply the matrix commutator [A, B] = AB - BA.
• The special orthogonal group SO(n) is a submanifold of GL(n, ℝ), so T_I SO(n)
is a subspace of T_I GL(n, ℝ). The Lie algebra of SO(n), denoted so(n), may
thus be identified with a certain subspace of ℝ^{n²}. We have shown in the previous
section that the tangent space at I to SO(n) is the set of skew-symmetric
matrices; it turns out that we may identify so(n) with this set. For example, for
SO(3), the Lie algebra is
One can show that φ_ξ(t) is a one-parameter subgroup of G. Now the exponential
map of the Lie algebra L(G) into G is defined as exp : T_I G → G such that, for
s ∈ ℝ,

exp(ξs) = φ_ξ(s).
e^{ŵ} = I + ŵ + ŵ²/2! + ŵ³/3! + ··· ,

which can be written in closed form as

e^{ŵ} = I + (ŵ/|w|) sin|w| + (ŵ²/|w|²)(1 - cos|w|)

for w ≠ 0, and

A = I + (ŵ/|w|²)(1 - cos|w|) + (ŵ²/|w|³)(|w| - sin|w|).
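The closed-form expression for e^{ŵ} (often called the Rodrigues formula) can be compared against the raw series numerically. The sketch below uses my own helper names and a truncated series:

```python
import numpy as np

def hat(w):
    # skew-symmetric matrix such that hat(w) @ v = w x v
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w):
    # closed-form exponential on so(3), valid for w != 0
    th = np.linalg.norm(w)
    W = hat(w)
    return np.eye(3) + (W / th) * np.sin(th) + (W @ W / th**2) * (1.0 - np.cos(th))

def expm_series(A, terms=40):
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

w = np.array([0.3, -0.5, 0.4])
R = rodrigues(w)
assert np.allclose(R, expm_series(hat(w)), atol=1e-12)  # matches the series
assert np.allclose(R.T @ R, np.eye(3), atol=1e-12)      # R lies in SO(3)
assert abs(np.linalg.det(R) - 1.0) < 1e-12
print("Rodrigues formula agrees with the matrix exponential")
```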
of the first kind relative to the basis {X1, X2, ..., Xn}. A different way of writing
coordinates on a Lie group using the same basis is to define θ : ℝⁿ → G by

g = exp(θ1X1) exp(θ2X2) ··· exp(θnXn)

for g in a neighborhood of I. The functions (θ1, θ2, ..., θn) are called the Lie-Cartan
coordinates of the second kind.
An example of a parameterization of SO(3) using the Lie-Cartan coordinates
of the second kind is just the product of exponentials formula

R = exp(θ1 X) exp(θ2 Y) exp(θ3 Z),

where R ∈ SO(3) and

X = [ 0  0  0        Y = [ 0  0  1        Z = [ 0  -1  0
      0  0  -1             0  0  0              1   0  0
      0  1  0 ],          -1  0  0 ],           0   0  0 ].
Definition 8.28 Actions of a Lie Group. If M is a differentiable manifold and G
is a Lie group, we define a left action of G on M as a smooth map Φ : G × M → M
such that:
The formula given in this lemma is a measure of how much X and Y fail to
commute over the exponential: if [X, Y] = 0, then Ad_{exp(X)} Y = Y.
8.4 Matrix Groups 373
[Xi, Xj] = Σ_k c^k_{ij} X_k.

exp(p1X1) Xi exp(-p1X1) = Xi + Σ_{k=1}^∞ (ad^k_{X1} Xi / k!) p1^k.

The terms ad^k_{X1} Xi are calculated using the structure constants:

ad_{X1} Xi = Σ_{n1=1}^n c^{n1}_{1i} X_{n1},

ad²_{X1} Xi = Σ_{n1=1}^n Σ_{n2=1}^n c^{n1}_{1i} c^{n2}_{1n1} X_{n2},

so that the conjugation formula above reads Xi + Σ_k (ad^k_{X1} Xi / k!) p1^k. Further,

(ad_Y / (exp(ad_Y) - I)) X = Σ_n (B_n / n!) ad^n_Y X,

where the B_n are the so-called Bernoulli numbers. In fact, the recursive definition
of Bernoulli numbers is simply a simplification of the exponential formulas given
above.
8.5 Left-Invariant Control Systems on Matrix Lie Groups 375
ġ = g ( Σ_{i=1}^n Xi ui ),

where the ui are the inputs and the Xi are a basis for the Lie algebra L(G). In the
above equation, gXi is the notation for the left-invariant vector field associated
with Xi. The equation represents a driftless system, since if ui = 0 for all i, then
ġ = 0.
In the next subsection, the state equation describing the motion of a rigid body
on SE(3) is developed. The following subsection develops a transformation, called
the Wei-Norman formula, between the inputs ui, i.e., the Lie-Cartan coordinates
of the first kind, and the Lie-Cartan coordinates of the second kind. Applications
to steering a control system on SO(3) are given in the Exercises (8.11).
t'(s) = κ(s) n(s),
⟨n(s), n(s)⟩ = 1,

so that n'(s) ⊥ n(s). The binormal to the curve at s is denoted by b(s), where

b(s) = t(s) × n(s).

Let

n'(s) = τ(s) b(s),

where τ(s), called the torsion of the motion, measures how quickly the curve is
pulling out of the plane defined by t(s) and n(s). We thus have

α'(s) = t(s),
t'(s) = κ(s) n(s),
b'(s) = τ(s) n(s).
Since t(s), n(s), and b(s) are all orthogonal to each other, the matrix with these
vectors as its columns is an element of SO(3):

R(s) = [t(s)  n(s)  b(s)] ∈ SO(3).

Thus,
and

d/ds g(s) = g(s) [ 0      -κ(s)   0      1
                   κ(s)    0      τ(s)   0
                   0      -τ(s)   0      0
                   0       0      0      0 ].
These are known as the Frenet-Serret equations of a curve. The evolution of the
Frenet-Serret frame in ℝ³ is given by

ġ = gX,

where g ∈ SE(3) and X is an element of the Lie algebra se(3). We may regard
the curvature κ(s) and the torsion τ(s) as inputs to the system, so that if

u1 = κ(s),   u2 = -τ(s),

then the Frenet-Serret equations become a left-invariant system driven by u1, u2,
which is a special case of the general form describing the state evolution of a left-invariant
control system on SE(3).
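A crude numerical sketch of the Frenet-Serret evolution ġ = gX (my own code, with an Euler integrator, an assumed constant curvature and torsion, and assumed sign conventions for the generator) shows the rotation block staying approximately in SO(3):

```python
import numpy as np

def frenet_generator(kappa, tau):
    # element of se(3): rotation block built from curvature/torsion (sign
    # conventions assumed), unit translation along the tangent direction
    X = np.zeros((4, 4))
    X[0, 1], X[1, 0] = -kappa, kappa
    X[1, 2], X[2, 1] = tau, -tau
    X[0, 3] = 1.0
    return X

def integrate(kappa, tau, s_end=2.0, n=20000):
    # crude Euler integration of g' = g X along arclength
    g = np.eye(4)
    X = frenet_generator(kappa, tau)
    ds = s_end / n
    for _ in range(n):
        g = g + ds * (g @ X)
    return g

g = integrate(kappa=1.0, tau=0.5)
R = g[:3, :3]
assert np.allclose(R.T @ R, np.eye(3), atol=1e-3)   # frame stays ~orthonormal
assert np.allclose(g[3], [0.0, 0.0, 0.0, 1.0])      # bottom row preserved
print("frame position after arclength 2:", g[:3, 3])
```

A geometric integrator (stepping by matrix exponentials rather than Euler steps) would keep the frame exactly on SE(3); the drift visible here is purely a discretization artifact.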
An example of the general form of a left-invariant control system on SE(3) is
given by an aircraft flying in ℝ³:

ġ = g [ 0     -u3    u2    u4
        u3     0    -u1    0
       -u2     u1    0     0
        0      0     0     0 ].

The inputs u1, u2, and u3 control the roll, pitch, and yaw of the aircraft, and the
input u4 controls the forward velocity.
Specializing the above to SE(2), we have the example of the unicycle rolling
on the plane with speed u1:

ġ = g [ 0   -u2   u1
        u2   0    0
        0    0    0 ].     (8.10)
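Assuming the standard coordinate form of the unicycle kinematics, ẋ = u1 cos θ, ẏ = u1 sin θ, θ̇ = u2 (this coordinate form is an assumption of the sketch, as is every helper name), a short simulation shows that constant inputs trace a circle of radius u1/u2:

```python
import math

def step(x, y, th, u1, u2, dt):
    # assumed unicycle kinematics: xdot = u1*cos(th), ydot = u1*sin(th),
    # thdot = u2 (local coordinates of a left-invariant SE(2) model)
    return (x + dt * u1 * math.cos(th),
            y + dt * u1 * math.sin(th),
            th + dt * u2)

u1, u2, dt = 1.0, 0.5, 1e-4
n = int(round(2 * math.pi / (u2 * dt)))   # one full revolution of heading
x, y, th = 0.0, 0.0, 0.0
for _ in range(n):
    x, y, th = step(x, y, th, u1, u2, dt)
# constant inputs trace a circle of radius u1/u2, closing after one revolution
assert abs(th - 2 * math.pi) < 1e-3
assert abs(x) < 1e-2 and abs(y) < 1e-2
print("closed circle of radius", u1 / u2)
```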
and torques which drive the motion. A more realistic approach would therefore be
to formulate the steering problem with a dynamic model of the rigid body, which
uses these forces and torques as inputs. Dynamic models are more complex than
their kinematic counterparts, and the control problem is harder to solve.
where the ui are inputs and the Xi are a basis of the Lie algebra L(G). We may
express g in terms of its Lie-Cartan coordinates of the second kind:

g = exp(γ1X1) exp(γ2X2) ··· exp(γnXn).

Thus,

ġ = Σ_{i=1}^n γi'(t) ( Π_{j=1}^{i-1} exp(γjXj) ) Xi ( Π_{j=i}^n exp(γjXj) )
  = g Σ_{i=1}^n γi'(t) Σ_{k=1}^n ξ_{ki}(γ) X_k,

where the last equation results from Lemma 8.30. If we compare this equation
with the state equation, we may generate a formula for the inputs to the system in
terms of the Lie-Cartan coordinates:

u_k = Σ_{i=1}^n ξ_{ki}(γ) γi'(t),

so that

(γ1'(t), ..., γn'(t))ᵀ = Ξ(γ)⁻¹ (u1, ..., un)ᵀ.
8.6 Summary
This chapter has given the reader an abbreviated vista into differential geometry.
The treatment is rather abbreviated, especially in topics of interest in Riemannian
geometry, where an inner product structure is placed on the tangent space. There
are some excellent textbooks on differential geometry, including Boothby [34] and
[325]. For a great deal of the recent work on the intersection of differential geometry,
classical mechanics, and nonlinear control theory, see for example Murray
[224], and Kang and Marsden [159].
For more on Lie groups we refer the reader to Curtis [75]. More advanced
material is to be found in [312]. The literature on control of systems on Lie groups
is very extensive, with early contributions by Brockett [43], [44], and Baillieul
[16]. The use of Lie groups in kinematics and control of robots introduced by
Brockett was developed by Murray, Li, and Sastry in [225]. Steering of systems on
Lie groups was studied by Walsh, Sarti, and Sastry in [320], [257]. More recently,
the use of the Wei-Norman formula and averaging was studied by Leonard and
Krishnaprasad [178] and Bullo, Murray, and Sarti [51]. Interpolation problems
for systems on Lie groups can be very interesting in applications such as the
"landing tower problem" posed by Crouch; see for example Crouch and Silva-Leite
[73] and Camarinha et al. [58]. Steering of systems on some other classical
Lie groups is very important in quantum and bond-selective chemistry and is a
growing area of research; see especially the work of Tarn and coworkers [295;
294] and Dahleh, Rabitz, and co-workers [76]. Optimal control of systems on Lie
groups is the subject of a recent book by Jurdjevic [157]; see also Montgomery
[218].
8.7 Exercises
Problem 8.1. Prove Proposition 8.3.
Problem 8.2. Prove that the tangent bundle TM of a manifold is also a manifold, of dimension twice the dimension of M.
Problem 8.3. Find a geometric description of the tangent bundle of a circle $S^1$, a torus $S^1 \times S^1$, and a cylinder $S^1 \times \mathbb{R}$. More generally, what can you say about the tangent bundle of the product of two manifolds $M_1 \times M_2$?
Problem 8.4. Prove that the definition of the involutivity of a distribution is basis independent. More precisely, show that if for
$$ \Delta(x) = \mathrm{span}\{f_1(x), \ldots, f_m(x)\} $$
it has been verified that $[f_i, f_j] \in \Delta$, then for another basis
$$ \Delta(x) = \mathrm{span}\{g_1(x), \ldots, g_m(x)\} $$
it also follows that $[g_i, g_j] \in \Delta$.
Problem 8.7. Prove that the exponential map maps the Lie algebras $o(n)$, $su(n)$, $sp(n)$, $se(n)$ into their corresponding Lie groups $O(n)$, $SU(n)$, $Sp(n)$, $SE(n)$, respectively. Also prove that the logarithm map maps the same Lie groups into their respective Lie algebras.
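The first half of Problem 8.7 can be checked numerically in the case of $o(3)$: exponentiating a skew-symmetric matrix should produce an orthogonal matrix of determinant one. The sketch below uses a truncated power series for the matrix exponential (a hypothetical helper, adequate only for matrices of moderate norm), not a library routine.

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via a truncated power series (fine for small norm)."""
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

# A skew-symmetric matrix is an element of o(3) = so(3).
omega = np.array([[0.0, -0.3,  0.2],
                  [0.3,  0.0, -0.1],
                  [-0.2, 0.1,  0.0]])

R = expm_series(omega)

# exp(omega) should land in SO(3): R R^T = I and det R = 1.
print(np.allclose(R @ R.T, np.eye(3)))  # True
print(abs(np.linalg.det(R) - 1.0) < 1e-8)  # True
```

The analogous experiment works for $se(3)$ by embedding the twist in a $4 \times 4$ homogeneous matrix.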
Problem 8.8. Find the dimensions of the Lie groups $O(n)$, $SU(n)$, $Sp(n)$, $SE(n)$, $SL(n)$.
seconds. Of course, you can also steer in an arbitrary time T seconds. Equation (8.12) is a roll-pitch-roll coordinatization of SO(3), or a generalization of the Lie-Cartan coordinates of the second kind discussed in this chapter. Can you think of other strategies for steering the satellite from $g_i$ to $g_f$?
3. Consider the two-input situation of the previous example, with the input directions normalized. Consider the inputs $u_1 = a_0 + a_1 \sin 2\pi t$ and $u_2 = b_0 + b_1 \cos 2\pi t$. Can you find values of $a_0, a_1, b_0, b_1$ to steer the system from $g_i$ to $g_f$?
4. Consider the case that $b_3 = 0$ and $b_1, b_2$ are linearly independent, but $u_2 \equiv 1$. This corresponds to the case that the satellite is drifting. Derive conditions under which you can steer from $g_i$ to $g_f$ in 1 second.
Problem 8.12 Steering the Unicycle. Consider the unicycle of equation (8.10). Steer it from an initial to a final configuration in SE(2) using piecewise constant inputs $u_1, u_2$.
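One piecewise constant strategy for Problem 8.12 can be sketched numerically. The rotate-translate-rotate decomposition below is an illustrative choice (not the only one); it assumes the usual coordinate form of (8.10), $\dot x = u_1\cos\theta$, $\dot y = u_1\sin\theta$, $\dot\theta = u_2$, for which the flow under constant inputs is available in closed form.

```python
import numpy as np

def unicycle_segment(g, u1, u2, T):
    """Closed-form flow of x' = u1 cos(th), y' = u1 sin(th), th' = u2
    over time T with constant inputs."""
    x, y, th = g
    if abs(u2) < 1e-12:                      # straight-line segment
        return (x + u1 * T * np.cos(th), y + u1 * T * np.sin(th), th)
    thf = th + u2 * T                        # circular arc of radius u1/u2
    x += (u1 / u2) * (np.sin(thf) - np.sin(th))
    y += (u1 / u2) * (np.cos(th) - np.cos(thf))
    return (x, y, thf)

def steer(gi, gf):
    """Rotate toward the goal, drive straight, rotate to the final heading;
    three constant-input segments of one second each."""
    xi, yi, thi = gi
    xf, yf, thf = gf
    dx, dy = xf - xi, yf - yi
    aim = np.arctan2(dy, dx)
    d = np.hypot(dx, dy)
    return [(0.0, aim - thi), (d, 0.0), (0.0, thf - aim)]  # (u1, u2) per segment

gi, gf = (0.0, 0.0, 0.0), (1.0, 2.0, 0.5)
g = gi
for u1, u2 in steer(gi, gf):
    g = unicycle_segment(g, u1, u2, 1.0)
print(np.allclose(g, gf))  # True
```

Any configuration in SE(2) is reached this way, at the cost of stopping between segments; smoother strategies exist.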
Problem 8.13 Stiefel Manifolds. Consider the space of all matrices $U \in \mathbb{R}^{n \times m}$ with $m \le n$, such that $U^T U = I \in \mathbb{R}^{m \times m}$. Prove that the space of these matrices is a manifold. Find its dimension. Characterize the tangent space of this manifold. What can you say about this class of manifolds, called Stiefel manifolds, when $m = n$ and $m = n - 1$?
Problem 8.14 Grassmann Manifolds. Consider the space of all m-dimensional subspaces of $\mathbb{R}^n$ ($m \le n$). Prove that this is a smooth manifold (the so-called Grassmann manifold), usually denoted by G(n, m). Characterize the tangent space of this manifold and find its dimension. Consider all the orthonormal bases for each m-dimensional subspace in G(n, m). Convince yourself that G(n, m) is the quotient space $V(n, m)/O(m)$, where V(n, m) is the Stiefel manifold defined in the previous problem. Check that
dim(G(n, m)) = dim(V (n, m)) - dim(O(m)).
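The dimension bookkeeping behind this identity can be sketched as follows; the only inputs are the standard counts $\dim V(n,m) = nm - m(m+1)/2$ (orthonormality imposes $m(m+1)/2$ constraints) and $\dim O(m) = m(m-1)/2$, from which $\dim G(n,m) = m(n-m)$ follows.

```python
# Dimension counts for the Stiefel/Grassmann quotient of Problems 8.13-8.14.
def dim_V(n, m):
    return n * m - m * (m + 1) // 2   # Stiefel manifold V(n, m)

def dim_O(m):
    return m * (m - 1) // 2           # orthogonal group O(m)

def dim_G(n, m):
    return dim_V(n, m) - dim_O(m)     # Grassmann manifold G(n, m)

# dim G(n, m) = nm - m(m+1)/2 - m(m-1)/2 = m(n - m):
for n in range(1, 8):
    for m in range(1, n + 1):
        assert dim_G(n, m) == m * (n - m)

print(dim_G(4, 1))  # 3, matching dim RP^3 = dim G(4, 1) in Problem 8.15
```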
Problem 8.15 Several Representations of SO(3). Prove that the following Lie groups can be naturally identified with each other:
1. SO(3);
2. the quotient $SU(2)/\{\pm I\}$ of the group SU(2) by its center (an element in the center of a group is one which commutes with all elements in the group);
3. the quotient $S^3/\{\pm 1\}$ of the unit quaternions (see Chapter 3 for the definition) by its center.
Prove that, as manifolds, each of them is diffeomorphic to all the following manifolds, and therefore induces the same group structure on them:
1. V(3, 2), the Stiefel manifold of all $3 \times 2$ orthogonal matrices in $\mathbb{R}^{3 \times 2}$;
2. $T_1 S^2$, the set of unit tangent vectors to the unit sphere in $\mathbb{R}^3$;
3. $\mathbb{RP}^3$, the set of all lines through the origin in $\mathbb{R}^4$ (this is also the Grassmann manifold G(4, 1); see Problem 8.14).
Problem 8.16 Essential Manifold [197; 107]. This exercise gives a geometric characterization of one of the most important manifolds used in computer vision, the so-called essential manifold:
$$ E = \{ R\hat p \mid R \in SO(3),\; p \in \mathbb{R}^3,\; \|p\|_2 = 1 \}, $$
where $\hat p$ is the skew-symmetric matrix
$$ \hat p = \begin{bmatrix} 0 & -p_3 & p_2 \\ p_3 & 0 & -p_1 \\ -p_2 & p_1 & 0 \end{bmatrix} \in \mathbb{R}^{3 \times 3} $$
associated to $p = (p_1, p_2, p_3)^T \in \mathbb{R}^3$. Show that this is a smooth manifold. Following the steps given below, establish the fact that the unit tangent bundle of SO(3), i.e., the unit ball of tangent vectors at each point on the manifold, is a double covering of the essential manifold E:
1. Show that $R\hat p$ is in so(3) for $R \in SO(3)$ if and only if $R = I_{3 \times 3}$ or $R = e^{\hat p \pi}$.
2. Show that given an essential matrix $X \in E$, there are exactly two pairs $(R_1, p_1)$ and $(R_2, p_2)$ such that $R_i \hat p_i = X$, $i = 1, 2$. (Can you find an explicit formula for $R_i, p_i$? Consider the singular value decomposition of X.)
3. Show that the unit tangent bundle of SO(3) (using the metric $\tfrac{1}{2}\mathrm{trace}(A^T A)$ for $A \in so(3)$) is a double covering of the essential manifold E.
4. Conclude that E is a 5-dimensional compact connected manifold.
Problem 8.17 Conjugate Group of SO(3). In this exercise, we study the conjugate group of SO(3) in SL(3), which plays a fundamental role in camera self-calibration in computer vision [196]. Given a matrix $A \in SL(3)$, a conjugate class of SO(3) is given by:
$$ G = \{ A R A^{-1} \mid R \in SO(3) \}. $$
Show that G is also a Lie group and determine its Lie algebra. Follow the steps given below to establish necessary and sufficient conditions for determining the matrix A from given matrices in the conjugate group G:
1. Show that the matrix $A \in SL(3)$ can only be determined up to a rotation matrix, i.e., A can only be recovered as an element of the quotient space $SL(3)/SO(3)$ from given matrices in G.
2. Show that, for a given $C \in G$ with $C \ne I$, the real symmetric kernel of the linear map
$$ L : S \mapsto S - C S C^T $$
is exactly two-dimensional.
3. Show that, given $C_i = A R_i A^{-1} \in G$, $i = 1, \ldots, n$, the symmetric form $S = A A^T$ is uniquely determined if and only if at least two axes of the rotation matrices $R_i$ are linearly independent.
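Step 2 can be checked numerically for a generic conjugate $C = ARA^{-1}$: representing $L(S) = S - CSC^T$ as a $6 \times 6$ matrix on the space of symmetric matrices, its kernel should be two-dimensional. The construction below (random A, rotation by 1 rad via a series exponential) is an illustrative sketch, not part of the exercise's intended proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic rotation R: angle 1 rad about a random axis, via series exponential.
w = rng.normal(size=3); w /= np.linalg.norm(w)
W = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
R = np.eye(3); T = np.eye(3)
for k in range(1, 30):
    T = T @ W / k
    R = R + T

A = rng.normal(size=(3, 3))
d = np.linalg.det(A)
A = A * (np.sign(d) / abs(d) ** (1.0 / 3.0))   # normalize so det A = 1 (A in SL(3))
C = A @ R @ np.linalg.inv(A)

# Matrix of L(S) = S - C S C^T on the 6-dim space of symmetric 3x3 matrices.
basis = []
for i in range(3):
    for j in range(i, 3):
        E = np.zeros((3, 3)); E[i, j] = E[j, i] = 1.0
        basis.append(E)
M = np.zeros((6, 6))
for k, S in enumerate(basis):
    LS = S - C @ S @ C.T                        # LS is again symmetric
    M[:, k] = [LS[i, j] for i in range(3) for j in range(i, 3)]

sv = np.linalg.svd(M, compute_uv=False)
nullity = int(np.sum(sv < 1e-6 * sv.max()))
print(nullity)  # 2
```

The two kernel directions correspond to $S = A(\alpha I + \beta \omega\omega^T)A^T$, where $\omega$ is the rotation axis, which is why a single C cannot pin down $S = AA^T$ (step 3).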
9
Linearization by State Feedback
9.1 Introduction
In this chapter we begin with a study of the modern geometric theory of nonlinear control. The theory began with early attempts to extend results from linear control theory to the nonlinear case, such as results on controllability and observability. This work was pioneered by Brockett, Hermann, Krener, Fliess, Sussmann, and others in the 1970s. Later, in the 1980s, in a seminal paper by Isidori, Krener, Gori-Giorgi, and Monaco [150], it was shown that not only could the results on controllability and observability be extended, but that large amounts of the linear geometric control theory, as represented, say, in Wonham [331], had a nonlinear counterpart. This paper, in turn, spurred a tremendous growth of results in nonlinear control in the 1980s. On a parallel course with this one was a program begun by Brockett and Fliess on embedding linear systems in nonlinear ones. This program can be thought of as one for linearizing systems by state feedback and change of coordinates. Several breakthroughs, beginning with [45; 154; 148; 67], and continuing with the work of Byrnes and Isidori [56; 54; 55], the contents of which are summarized in Isidori's book [149], yielded a fantastic set of new tools for designing control laws for large classes of nonlinear systems (see also a recent survey by Krener [170]). This theory is what we refer to in the chapter title. It does not refer to the usual Jacobian linearization of a nonlinear system, but rather to the process of making exactly linear the input-output response of nonlinear systems. It is worthwhile noting that this material is presented not in the order of its discovery, but rather from the point of view of ease of development.
9.2 SISO Systems
Thus, we see that there exist functions $\alpha(x), \beta(x)$ such that the state feedback law $u = \alpha(x) + \beta(x)v$ renders the system input-output linear. If the original system (9.1) were minimal, i.e., both controllable and observable (here we use these terms

¹Infinitely differentiable, or $C^\infty$, is shorthand for saying that we are too lazy to count the number of times the function needs to be continuously differentiable in order to use all of the formulas in the ensuing development.
²At this point, the reader may find it useful to rethink the definitions of Lyapunov stability of Chapter 5 in the language of Lie derivatives.
³Note that this is a stronger requirement than $L_g h(x) \ne 0$ for all $x \in U$, so as to make sure that the control law does not become unbounded.
without giving the precise definitions; the reader will need to wait till Chapter 11 to get precise definitions of these notions for nonlinear systems), then the control law (9.3) has the effect of rendering $(n-1)$ of the states of the system (9.1) unobservable through state feedback. (Convince yourself that this is what is going on in the linear case, where you know precise definitions and characterizations of minimality. That is, in the case that $f(x) = Ax$, $g(x) = b$, convince yourself that the observability rather than the controllability of the original system (9.1) is being changed.)
In the instance that $L_g h(x) \equiv 0$, meaning that $L_g h(x) = 0$ $\forall x \in U$, we differentiate (9.2) to get
$$ L_g L_f^i h(x) \equiv 0 \quad \forall x \in U, \; i = 0, \ldots, \gamma - 2, \qquad L_g L_f^{\gamma-1} h(x_0) \ne 0. \tag{9.10} $$
Remarks:
1. This definition is compatible with the usual definition of relative degree for linear systems (as being the excess of poles over zeros). Indeed, consider the linear system described by
$$ \dot x = Ax + bu, \qquad y = cx. \tag{9.11} $$
Now note that the Laurent expansion of the transfer function is given by
$$ c(sI - A)^{-1} b = \frac{cb}{s} + \frac{cAb}{s^2} + \frac{cA^2 b}{s^3} + \cdots, $$
valid for $|s| > \max_i |\lambda_i(A)|$, so that the first nonzero term gives the relative degree of the system. Thus, if the system has relative degree $\gamma$, then $cb = cAb = \cdots = cA^{\gamma-2}b = 0$ and $cA^{\gamma-1}b \ne 0$. This is consistent with the preceding definition, since for $i = 0, 1, 2, \ldots$, we have that
$$ L_g L_f^i h(x) = cA^i b. $$
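The Markov-parameter test above is easy to mechanize: scan $cb, cAb, cA^2b, \ldots$ until the first nonzero value. The sketch below applies it to a transfer function with one zero and three poles, so the relative degree should be 2; the example system and tolerance are illustrative choices.

```python
import numpy as np

def linear_relative_degree(A, b, c, tol=1e-9):
    """Smallest gamma with c A^(gamma-1) b != 0, i.e., the index of the first
    nonzero term in the Laurent expansion c(sI - A)^(-1) b."""
    n = A.shape[0]
    v = b
    for gamma in range(1, n + 1):
        if abs(c @ v) > tol:
            return gamma
        v = A @ v
    return None  # all Markov parameters vanish: c(sI - A)^(-1) b == 0

# (s + 1)/(s^3 + 2 s^2 + 3 s + 1) in controllable canonical form:
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -2.0]])
b = np.array([0.0, 0.0, 1.0])
c = np.array([1.0, 1.0, 0.0])   # numerator s + 1

print(linear_relative_degree(A, b, c))  # 2 = excess of poles over zeros
```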
2. It is worthwhile noting that if $L_g L_f^{\gamma-1} h(x_0) \ne 0$, then there exists a neighborhood U of $x_0$ such that $L_g L_f^{\gamma-1} h(x)$ is bounded away from zero.
3. The relative degree of some nonlinear systems may not be defined at some points $x_0 \in U \subset \mathbb{R}^n$. Indeed, it may happen that for some $\gamma$, $L_g L_f^{\gamma-1} h(x_0) = 0$, but $L_g L_f^{\gamma-1} h(x) \ne 0$ for x near $x_0$. For example, it may be the case that $L_g L_f^{\gamma-1} h(x)$ changes sign on a surface passing through $x_0$, as shown in Figure 9.1.
Example 9.2. Consider the nonlinear system given in state space form by
it follows that
$$ L_g h(x) = \frac{\partial h}{\partial x} g(x) = 0, $$
and that
$$ L_f h(x) = \frac{\partial h}{\partial x} f(x) = x_2, $$
so that we may obtain
$$ L_g L_f h(x) = 1. $$
Thus with this choice of output the system has relative degree 2 at each value of $x_0$. However, if the output function is $y = x_2$, it is easy to see that $L_g h(x) = 1$, so that the system has relative degree 1. Also, if the output is $y = \cos x_2$, it may be verified that $L_g h(x) = -\sin x_2$. Note that the system has relative degree 1 except at points at which $x_2 = \pm n\pi$, where it is undefined.
"Normal" Form for SISO Nonlinear Systems
If a SISO nonlinear system has a relative degree y ~ n at some point Xo. it
is possible to transform the nonlinear system into a "normal" form as folIows . In
order to do this, we will recall from the previous chapter. that the Lie bracket of
two vector fields f, g denoted [f, g] is given in coordinates by
8g 8f
[f.g] = 8xf(x) - 8x g(x) . (9.12)
Denote by ad] the operator from vector fields to vector fields defined by
ad] g = [f, g]. (9.13)
We define successive applicationsof ad] by
adjg:= [f, adj-I g] for k >: 0 (9.14)
I I
Proposition 9.4 Equivalence of Conditions on h.
$$ L_g L_f^k h(x) \equiv 0, \quad 0 \le k \le \mu, \; \forall x \in U \iff L_{\mathrm{ad}_f^k g} h(x) \equiv 0, \quad 0 \le k \le \mu, \; \forall x \in U. $$
Proof: The proposition is trivial for $\mu = 0$. Now, by Claim 9.3 above,
$$ L_{\mathrm{ad}_f^2 g} h(x) = L_f L_{\mathrm{ad}_f g} h(x) - L_{\mathrm{ad}_f g} L_f h(x) = -L_f L_g L_f h + L_g L_f^2 h(x), \tag{9.16} $$
where the second equality uses $L_{\mathrm{ad}_f g} h \equiv 0$ and $L_{\mathrm{ad}_f g} L_f h = L_f L_g L_f h - L_g L_f^2 h$. Using the fact that $L_g L_f h(x) \equiv 0$, it follows that
$$ L_{\mathrm{ad}_f^2 g} h(x) \equiv 0 \iff L_g L_f^2 h(x) \equiv 0. \tag{9.20} $$
Now define
$$ \phi_1(x) = h(x), \quad \phi_2(x) = L_f h(x), \quad \ldots, \quad \phi_\gamma(x) = L_f^{\gamma-1} h(x). \tag{9.21} $$
In words, the $\phi_i(x)$ are the coordinates described by y and the first $\gamma - 1$ time derivatives of y. It is a consequence of the definition of relative degree that these $\gamma - 1$ time derivatives of y do not depend on u. We claim that these coordinates qualify as a partial (since $\gamma \le n$) change of coordinates for the system. For this purpose, we need to check the following claim:
Claim 9.5 Partial Change of Coordinates. The derivatives $d\phi_i(x)$ for $i = 1, \ldots, \gamma$ are linearly independent (over $C^\infty(\mathbb{R}^n)$) in U.
Proof: We first claim that for all $i + j \le \gamma - 2$ we have that for $x \in U$
$$ L_{\mathrm{ad}_f^j g} L_f^i h(x) = 0. \tag{9.22} $$
Thus, the proof of the equation (9.22) follows by recursion on the index i. Also, continuing the iteration one step further yields that for $i + j = \gamma - 1$,
$$ \begin{bmatrix} dh(x_0) \\ dL_f h(x_0) \\ \vdots \\ dL_f^{\gamma-1} h(x_0) \end{bmatrix}
\begin{bmatrix} g(x_0) & \mathrm{ad}_f g(x_0) & \cdots & \mathrm{ad}_f^{\gamma-1} g(x_0) \end{bmatrix}
= \begin{bmatrix}
0 & \cdots & 0 & L_{\mathrm{ad}_f^{\gamma-1} g} h(x_0) \\
\vdots & & L_{\mathrm{ad}_f^{\gamma-2} g} L_f h(x_0) & * \\
0 & & & \vdots \\
L_g L_f^{\gamma-1} h(x_0) & * & \cdots & *
\end{bmatrix}. \tag{9.23} $$
The anti-diagonal entries are all equal to $\pm L_g L_f^{\gamma-1} h(x_0)$ and are hence nonzero, establishing that the ranks of the two matrices belonging to $\mathbb{R}^{\gamma \times n}$ and $\mathbb{R}^{n \times \gamma}$ on the left hand side of equation (9.23) are each $\gamma$ (the product has rank $\gamma$, and consequently each factor must have rank $\gamma$!). Thus, the $d\phi_i(x_0)$ are linearly independent. Since these vectors are independent at $x_0$, they are also independent in a neighborhood of $x_0$. □
As a by-product of the proof of the preceding claim, we have the following claim, which also establishes that if a SISO system has strict relative degree $\gamma$, then $\gamma \le n$.
Claim 9.6 Independence of $g, \mathrm{ad}_f g, \ldots, \mathrm{ad}_f^{\gamma-1} g$. The vector fields $g, \mathrm{ad}_f g, \ldots, \mathrm{ad}_f^{\gamma-1} g$ are linearly independent. Consequently $\gamma \le n$.
Since the annihilator of $g(x)$ is $(n-1)$-dimensional, we may find functions
$$ \eta_1(x), \ldots, \eta_{n-1}(x) \tag{9.24} $$
satisfying
$$ d\eta_i(x)\, g(x) = 0 \quad \forall x \in U, \tag{9.25} $$
such that the matrix
$$ \begin{bmatrix} dh(x) \\ dL_f h(x) \\ \vdots \\ dL_f^{\gamma-1} h(x) \\ d\eta_1(x) \\ \vdots \\ d\eta_{n-1}(x) \end{bmatrix} \tag{9.26} $$
has rank n at $x_0$. (Why? Consider the definition of strict relative degree.) Using this fact as well as Claim 9.5 above, we may choose $n - \gamma$ of the $\eta_i(x)$; for notational convenience we will say the first $n - \gamma$ of them, so that the matrix
$$ \begin{bmatrix} dh(x) \\ dL_f h(x) \\ \vdots \\ dL_f^{\gamma-1} h(x) \\ d\eta_1(x) \\ \vdots \\ d\eta_{n-\gamma}(x) \end{bmatrix} \tag{9.27} $$
is nonsingular at $x_0$. In the coordinates $\xi_i = L_f^{i-1} h(x)$, $i = 1, \ldots, \gamma$, together with $\eta_1(x), \ldots, \eta_{n-\gamma}(x)$, the system equations read
$$
\begin{aligned}
\dot\xi_1 &= \xi_2, \\
\dot\xi_2 &= \xi_3, \\
&\;\;\vdots \\
\dot\xi_\gamma &= b(\xi, \eta) + a(\xi, \eta)\, u, \\
\dot\eta &= q(\xi, \eta), \\
y &= \xi_1.
\end{aligned} \tag{9.29}
$$
Note the lack of input terms in the differential equations for $\eta$: this is a consequence of (9.25). The system description (9.29) is called the "normal" form for the system (9.1).
Example 9.7. Consider the control system
$$ \dot x = \begin{bmatrix} -x_1^3 \\ \cos x_1 \cos x_2 \\ x_2 \end{bmatrix} + \begin{bmatrix} \cos x_2 \\ 1 \\ 0 \end{bmatrix} u, \qquad y = h(x) = x_3. $$
This system has relative degree 2, as may be verified by the following calculation:
$$ L_g h(x) = 0, \quad L_f h(x) = x_2, \quad L_g L_f h(x) = 1, \quad L_f^2 h(x) = \cos x_1 \cos x_2. $$
A choice of $\eta$ annihilating g is
$$ \eta(x) = x_1 - \sin x_2 = \phi_3(x). $$
Of course, other solutions are easily obtained by adding constants to $\eta(x)$. The Jacobian matrix of the transformation,
$$ \frac{\partial \Phi}{\partial x} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & -\cos x_2 & 0 \end{bmatrix}, $$
is easily seen to be nonsingular; in fact, $\Phi$ is a global diffeomorphism with inverse given by
$$ x_1 = \eta + \sin \xi_2, \qquad x_2 = \xi_2, \qquad x_3 = \xi_1. $$
In the new coordinates, the equations are described by
$$
\begin{aligned}
\dot\xi_1 &= \xi_2, \\
\dot\xi_2 &= \cos(\eta + \sin\xi_2)\cos\xi_2 + u, \\
\dot\eta &= -(\eta + \sin\xi_2)^3 - \cos^2\xi_2\, \cos(\eta + \sin\xi_2).
\end{aligned}
$$
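A quick numerical sanity check of the coordinate change in Example 9.7 (taking $\eta(x) = x_1 - \sin x_2$, as in the example) is to verify that $\Phi$ and its stated inverse really compose to the identity:

```python
import numpy as np

def Phi(x):
    """Example 9.7 coordinates: xi1 = x3, xi2 = x2, eta = x1 - sin(x2)."""
    x1, x2, x3 = x
    return np.array([x3, x2, x1 - np.sin(x2)])

def Phi_inv(z):
    """Stated inverse: x1 = eta + sin(xi2), x2 = xi2, x3 = xi1."""
    xi1, xi2, eta = z
    return np.array([eta + np.sin(xi2), xi2, xi1])

rng = np.random.default_rng(1)
x = rng.normal(size=3)
print(np.allclose(Phi_inv(Phi(x)), x))  # True
print(np.allclose(Phi(Phi_inv(x)), x))  # True
```

Because both identities hold for every x, the transformation is indeed a global diffeomorphism, as claimed.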
When the relative degree $\gamma$ equals n, there are no $\eta$ coordinates and the normal form reads
$$
\begin{aligned}
\dot\xi_1 &= \xi_2, \\
\dot\xi_2 &= \xi_3, \\
&\;\;\vdots \\
\dot\xi_n &= b(\xi) + a(\xi)\, u, \\
y &= \xi_1.
\end{aligned} \tag{9.31}
$$
The feedback law given by
$$ u = \frac{1}{a(\xi)}\,[-b(\xi) + v] \tag{9.32} $$
yields a linear n-th order system. In the original x coordinates, the transformation of the system (9.1) into a linear system with transfer function $1/s^n$ consists of a change of coordinates given by
$$ \Phi : x \mapsto \xi = (h, L_f h, \ldots, L_f^{n-1} h)^T \tag{9.33} $$
and a static (memoryless) feedback
$$ u = \frac{1}{L_g L_f^{n-1} h(x)}\,[-L_f^n h(x) + v]. \tag{9.34} $$
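The mechanism of (9.32) is easy to see in simulation: the feedback cancels $b$ and rescales by $a$, so the closed loop behaves exactly as a chain of integrators driven by v. The toy system below (with $n = 2$ and invented functions $a, b$ chosen so that $a$ never vanishes) is an illustrative sketch, integrated by forward Euler:

```python
import numpy as np

# A toy system already in the normal form (9.31) with n = 2.
def a(xi): return 1.0 + xi[0] ** 2            # a(xi) > 0 everywhere
def b(xi): return np.sin(xi[0]) * xi[1]

def u_linearizing(xi, v):
    """The feedback (9.32): u = (1/a)(-b + v)."""
    return (-b(xi) + v) / a(xi)

# With constant v, the closed loop should behave as xi'' = v.
xi = np.array([0.2, -0.1])
v, dt, T = 0.5, 1e-4, 1.0
for _ in range(int(T / dt)):
    u = u_linearizing(xi, v)
    dxi = np.array([xi[1], b(xi) + a(xi) * u])  # plant dynamics
    xi = xi + dt * dxi

# Double-integrator prediction: xi1(T) = xi1 + xi2*T + v*T^2/2, xi2(T) = xi2 + v*T.
expected = np.array([0.2 - 0.1 * T + 0.5 * v * T ** 2, -0.1 + v * T])
print(np.allclose(xi, expected, atol=1e-3))  # True
```

The cancellation $b + a\cdot u = v$ is exact at every step; the small residual error is purely the Euler discretization.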
$$ dh, \; dL_f h, \; \ldots, \; dL_f^{n-1} h $$
are linearly independent in a neighborhood of $x_0$. The closed loop system, since it is controllable from the input v, can now be made to have its poles placed at the zeros of a desired polynomial $s^n + \alpha_{n-1}s^{n-1} + \cdots + \alpha_0$ by choosing
Thus, combining (9.34) and (9.35) in the "old" x coordinates yields
Let us now ask ourselves the following question: Given the system of (9.1) with no output,
is it possible to find an $h(x)$ such that the relative degree of the resulting system of the form (9.1) is precisely n? There is a rather elegant solution to this problem due to Brockett [45] (see also Jakubczyk and Respondek [154] and Hunt, Su, and Meyer [148]).
The system (9.1) has relative degree n at $x_0$ iff
$$ L_g L_f^i h(x) \equiv 0, \quad i = 0, \ldots, n - 2, \quad \forall x \in U, \tag{9.39} $$
and $L_g L_f^{n-1} h(x_0) \ne 0$. Using the results of Proposition 9.4, it follows that the system has relative degree n at $x_0$ iff for some neighborhood U of $x_0$:
$$ \frac{\partial h}{\partial x}\left[g(x) \;\; \mathrm{ad}_f g(x) \; \cdots \; \mathrm{ad}_f^{n-2} g(x)\right] = 0. \tag{9.40} $$
Note that this is a set of $n - 1$ first-order linear partial differential equations. The last equation of (9.39) is not included, but since the results of Claim 9.5 guarantee that the vector fields
$$ g(x), \ldots, \mathrm{ad}_f^{n-2} g(x), \mathrm{ad}_f^{n-1} g(x) \tag{9.41} $$
are linearly independent at $x_0$, it follows that any nontrivial $h(x)$ which satisfies the conditions of equation (9.40) will automatically satisfy the last equation of (9.39). Now, also, the distribution spanned by the vector fields
$$ g(x), \; \mathrm{ad}_f g(x), \; \ldots, \; \mathrm{ad}_f^{n-2} g(x) \tag{9.42} $$
is (locally) of constant rank $n - 1$ (this was one of the by-products of the proof of Claim 9.5). The Frobenius theorem (Theorem 8.15) guarantees that the equations (9.40) have a solution iff the set of vector fields in (9.42) is involutive in a neighborhood of $x_0$.
Remarks and Interpretation: The conditions on (9.41), (9.42) are the necessary and sufficient conditions for the solution of (9.39). We may interpret these conditions as necessary and sufficient for the choice of an output variable h(x) so that the system is fully state linearizable by state feedback and a change of coordinates. It may be verified (see Exercises) that if $x_0 = 0$ and $f(x_0) = 0$ and
$$ A = \frac{\partial f}{\partial x}(x_0), \qquad b = g(x_0), \tag{9.43} $$
then the condition of (9.41) reduces to
$$ \mathrm{rank}[b, Ab, \ldots, A^{n-1}b] = n. \tag{9.44} $$
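The rank test (9.44) is the classical Kalman controllability condition and is mechanical to check; the example matrices below are illustrative:

```python
import numpy as np

def controllable(A, b):
    """Rank test (9.44): rank [b, Ab, ..., A^(n-1) b] = n."""
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols)) == n

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
print(controllable(A, b))                     # True
print(controllable(A, np.array([0.0, 0.0])))  # False: zero input direction
```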
$$ \dot x = f(x) + g(x)u \tag{9.37} $$
and a point $x_0 \in \mathbb{R}^n$. Assuming that the system is controllable, in the sense that the equation (9.41) holds, solve the following state space exact linearization problem: Find, if possible, a feedback of the form
$$ u = \alpha(x) + \beta(x)v $$
and a coordinate transformation $z = \Phi(x)$ so as to transform the system into the form
$$ \dot z = Az + bv, \tag{9.45} $$
which is linear and controllable. In other words, we ask for the existence of functions of x, $\alpha(x), \beta(x), \Phi(x)$, such that
$$ \left(\frac{\partial \Phi}{\partial x}\,\bigl(f(x) + g(x)\alpha(x)\bigr)\right)_{x = \Phi^{-1}(z)} = Az, $$
$$ \left(\frac{\partial \Phi}{\partial x}\,\bigl(g(x)\beta(x)\bigr)\right)_{x = \Phi^{-1}(z)} = b. $$
Theorem 9.8 State Space Exact Linearization. The state space exact linearization problem is solvable (that is, there exist functions $\alpha(x), \beta(x), \Phi(x)$ such that the system of equation (9.37) can be transformed into a linear system of the form of (9.45)) iff there exists a function $\lambda(x)$ such that the system
$$ \dot x = f(x) + g(x)u, \qquad y = \lambda(x) $$
has relative degree n at $x_0$.
Proof: In the light of the preceding discussion, the proof of the sufficiency of the condition is easy. Indeed, if the output $\lambda(x)$ results in a relative degree of n, then we may use
$$ \Phi(x) = \begin{bmatrix} \lambda(x) \\ L_f \lambda(x) \\ L_f^2 \lambda(x) \\ \vdots \end{bmatrix} \tag{9.46} $$
to bring the system to the form
$$ \dot z = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} z + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} v. \tag{9.48} $$
For the proof of the necessity, we first note that relative degree is invariant under changes of coordinates; that is, if
$$ \tilde f(z) = \left(\frac{\partial \Phi}{\partial x} f(x)\right)_{x = \Phi^{-1}(z)}, \qquad \tilde g(z) = \left(\frac{\partial \Phi}{\partial x} g(x)\right)_{x = \Phi^{-1}(z)}, $$
and
$$ \tilde h(z) = h(\Phi^{-1}(z)), $$
then an easy calculation verifies that
$$ L_{\tilde g} L_{\tilde f}^k \tilde h(z) = \left[L_g L_f^k h(x)\right]_{x = \Phi^{-1}(z)}. $$
The last step uses the fact that $L_g L_f^k h(x) \equiv 0$ for $k < \gamma - 1$. From the equation (9.49) it follows that
$$ L_{g\beta} L_{f + g\alpha}^k h(x) = 0 \quad \forall\; 0 \le k \le \gamma - 2, $$
and further, if $\beta(x_0) \ne 0$, then
$$ L_{g\beta} L_{f + g\alpha}^{\gamma-1} h(x_0) = \beta(x_0)\, L_g L_f^{\gamma-1} h(x_0) \ne 0. $$
We have shown that relative degree is invariant under state feedback and coordinate transformation. Thus, if the given nonlinear system can be fully linearized by state feedback and change of coordinates to the form of equation (9.45), then, since the pair A, b is completely controllable, we may assume, using further linear state feedback if necessary, that they are in the form of equation (9.48), that is, the controllable canonical form. To these equations append the output $y = z_1$. The resulting system has relative degree n, as an easy calculation shows. Since the relative degree is unaffected by state feedback and coordinate change, as we have just established, it must follow that the output $y = z_1$ in the original x coordinates, labeled $y = \lambda(x)$, has relative degree n. □
$$
\begin{aligned}
L_f h(x) &= x_1 - x_2, & L_g h(x) &= 0, \\
L_f^2 h(x) &= -x_1 - x_2^2, & L_g L_f h(x) &= 0, \\
L_f^3 h(x) &= -2x_2(x_1 + x_2^2), & L_g L_f^2 h(x) &= -(1 + 2x_2)e^{x_2}.
\end{aligned}
$$
We see that the system has relative degree 3 at each point, and using the feedback control with the coordinates
$$ \xi_1 = h(x) = x_3, \qquad \xi_2(x) = L_f h(x) = x_1 - x_2, \qquad \xi_3(x) = L_f^2 h(x) = -x_1 - x_2^2 $$
yields a linear system in the new coordinates.
$$ \dot x = \left[I - \frac{b\, c A^{\gamma-1}}{cA^{\gamma-1}b}\right] Ax + \frac{b}{cA^{\gamma-1}b}\, v, \qquad y = cx, $$
with transfer function $1/s^\gamma$ from v to y. Since the original linear system is minimal, it follows that $n - \gamma$ of the eigenvalues of
$$ \left(I - bcA^{\gamma-1}/cA^{\gamma-1}b\right)A $$
are located at the zeros of the original system and the remaining at the origin. Thus, we have placed $n - \gamma$ of the closed loop poles at the zeros of the original system. The exact input-output linearizing control law may be thought of as the nonlinear counterpart of this zero-canceling control law.
We will now attempt to think about the nonlinear version of zero dynamics. We have not, as yet, discussed questions of minimality for nonlinear systems. In particular, since no assumptions will be made about the observability properties of the system (9.1) to begin with, some of the variables which appear in our discussion of zero dynamics may have been unobservable prior to the application of state feedback.⁴ We can use the intuition of linear systems to define the zero dynamics of SISO nonlinear systems. We first make a slight detour to solve the so-called output-zeroing problem:
Output-Zeroing Problem
Find, if possible, an initial state $x \in U$ and an input $\bar u(t)$ such that the output y(t) is identically zero for all $t \ge 0$.
The answer to this problem may be easily given, using the normal form of (9.29). First, note that by differentiation, we have

⁴Recall that state feedback does not affect controllability of a system but may affect observability.
restricted to M, as shown in Figure 9.2. These are referred to as the zero dynamics of the system (9.1). M is parameterized by $\eta$; thus, the dynamics of
$$ \dot\eta = q(0, \eta) \tag{9.56} $$
are the internal dynamics consistent with the constraint that $y(t) \equiv 0$, and hence are referred to as the zero dynamics. The description of the zero dynamics in terms of (9.56), a system evolving in $\mathbb{R}^{n-\gamma}$, allows for a certain greater economy than (9.55), which is a system evolving in $\mathbb{R}^n$. A similar interpretation holds for the case that the output is not required to be identically zero, but is required to follow some prescribed function y(t) (see Problem 9.4). In order to define the notion of minimum phase nonlinear systems, we will need to stipulate that $\eta = 0$ is an equilibrium point of (9.56). This, in turn, is equivalent to stipulating that the point $x_0$, which is an equilibrium point of the undriven system (i.e., $f(x_0) = 0$), also corresponds to zero output. Further, the map $\Phi$ is chosen to map $x_0$ into $\xi = 0$, $\eta = 0$.
Definition 9.10 Minimum Phase. The nonlinear system (9.1) is said to be locally asymptotically (exponentially) minimum phase at $x_0$ if the equilibrium point $\eta = 0$ of (9.56) is locally asymptotically (exponentially) stable.
If the eigenvalues of the linearization of the zero dynamics,
$$ \frac{\partial q}{\partial \eta}(0, 0), $$
are in $\mathbb{C}_-^\circ$, then the system is exponentially minimum phase (locally). It is also easy to see that if even one eigenvalue of the linearization above is in $\mathbb{C}_+^\circ$, then the system is nonminimum phase. Finally, the system may be only asymptotically but not exponentially minimum phase if it has all the eigenvalues of its linearization in $\mathbb{C}_-$ but some of them are on the $j\omega$ axis. Problem (9.8) establishes the connection between the eigenvalues of the linearization and the zeros of the Jacobian linearization of the nonlinear system. It is of interest to note that if some of the eigenvalues of
$$ \frac{\partial q}{\partial \eta}(0, 0) $$
are on the $j\omega$ axis, then the linearization is inconclusive in determining the local asymptotic minimum phase characteristic of the equilibrium point. A detailed Lyapunov or center manifold analysis is needed for this (in this case, there is no hope of the system being exponentially minimum phase. Why?).
Consider the system
$$ \dot x = \begin{bmatrix} x_3 - x_2^2 \\ -x_2 \\ x_1^2 - x_3 \end{bmatrix} + g(x)\, u $$
with output $y = x_1$. Then the system has relative degree 2 for all x; the change of coordinates to get the system in "normal form" is $\xi_1 = x_1$, $\xi_2 = x_3 - x_2^2$. To choose $\eta$, note that the two covector fields orthogonal to g(x) are
$$ [1 \;\; 0 \;\; 0], \qquad [0 \;\; 1 \;\; 1]. $$
It follows that a choice of $\eta$ such that it is independent of $\xi_1, \xi_2$ and also orthogonal to g is $\eta(x) = x_2 + x_3$. In these coordinates, the system looks like
$$
\begin{aligned}
\dot\xi_1 &= \xi_2, \\
\dot\xi_2 &= b(\xi, \eta) + a(\xi, \eta)\, u, \\
\dot\eta &= \xi_1^2 - \eta,
\end{aligned}
$$
so that the zero dynamics, obtained by setting $\xi = 0$, are
$$ \dot\eta = -\eta, $$
with output $y = x_1$. The system has relative degree 2, and it is easy to see that $\xi_1 = x_1$ and $\xi_2 = x_2$. Also, we may choose $\eta_1 = x_3$ and $\eta_2 = x_4 - x_2$. The system dynamics are given by
$$
\begin{aligned}
\dot\xi_1 &= \xi_2, \\
\dot\xi_2 &= \sin\eta_1 + \xi_1 + u, \\
\dot\eta_1 &= \eta_2 + \xi_2, \\
\dot\eta_2 &= -\sin\eta_1.
\end{aligned}
$$
At the equilibrium points of the system $x_1 = 0$, $x_2 = 0$, $x_3 = n\pi$, $x_4 = 0$, the zero dynamics are given by
$$ \dot\eta_1 = \eta_2, \qquad \dot\eta_2 = -\sin\eta_1. $$
Depending on whether the integer n is even or odd, the linearization of the zero dynamics is either a center (eigenvalues on the $j\omega$ axis) or a saddle (one stable and one unstable eigenvalue).
$$ \bigl(y_d(t), \dot y_d(t), \ldots, y_d^{(\gamma-1)}(t)\bigr)^T. $$
Let us denote the right hand side of this equation, consisting of $y_d$ and its first $\gamma - 1$ derivatives, by $\xi_d$. Then it follows that the input $u_d(t)$ required to generate the output $y_d(t)$ is given by solving the equation
$$ y_d^{(\gamma)} = b(\xi_d, \eta) + a(\xi_d, \eta)\, u_d \tag{9.57} $$
to yield
$$ u_d = \frac{1}{a(\xi_d, \eta)}\bigl[y_d^{(\gamma)} - b(\xi_d, \eta)\bigr]. \tag{9.58} $$
Since $a(0, 0)$ is guaranteed to be nonzero if the system has strict relative degree, it follows that $a(\xi_d, \eta) \ne 0$ for $\xi_d, \eta$ small enough. The $\eta$ variables appear to be unconstrained by the tracking objective, and as a consequence we may use any initial condition as $\eta(0)$. As for the $\xi$ variables, exact tracking demands that
$$ \xi(0) = \xi_d(0). $$
Thus, the system generating the control law, described as
$$ \dot\eta = q(\xi_d, \eta), \tag{9.59} $$
with the tracking input given by $u_d$ as in (9.58) so long as $a(\xi_d, \eta)$ is nonzero, is referred to as the inverse system. The inverse system is shown in Figure 9.3, with a chain of $\gamma$ differentiators at the output of the plant, followed by the $\eta$ dynamics, which take values in $\mathbb{R}^{n-\gamma}$, and the read-out map of (9.58) to generate the tracking input. It should come as no surprise that the dynamics of the inverse system are the $\eta$ dynamics of (9.59), driven by $\xi_d$.
The preceding discussion has established that nonlinear SISO systems that have strict relative degree are uniquely invertible (i.e., the $u_d$ required for exact tracking is unique), at least in the instance that $y_d$ and its first $\gamma - 1$ derivatives are small enough.
$$ u = \frac{1}{L_g L_f^{\gamma-1} h(x)}\bigl[-L_f^\gamma h(x) - \alpha_{\gamma-1} L_f^{\gamma-1} h(x) - \cdots - \alpha_1 L_f h(x) - \alpha_0 h(x)\bigr] \tag{9.60} $$
Proof: Consider the system in the normal form with the feedback control law of (9.60). The closed loop system has the form
$$ \dot\xi = A\xi, \qquad \dot\eta = q(\xi, \eta), \tag{9.61} $$
with
$$ A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\alpha_0 & -\alpha_1 & -\alpha_2 & \cdots & -\alpha_{\gamma-1} \end{bmatrix}. $$
The result of the theorem now follows by checking that the eigenvalues of the linearization of equation (9.61) are the eigenvalues of
$$ \begin{bmatrix} A & 0 \\ \dfrac{\partial q}{\partial \xi}(0, 0) & \dfrac{\partial q}{\partial \eta}(0, 0) \end{bmatrix}. $$
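The spectrum of the companion matrix A is exactly the root set of $s^\gamma + \alpha_{\gamma-1}s^{\gamma-1} + \cdots + \alpha_0$, which is what makes the pole placement in (9.60) work. A two-dimensional illustration:

```python
import numpy as np

# Companion matrix for s^2 + 3 s + 2 = (s + 1)(s + 2), in the form used in (9.61):
alpha = [2.0, 3.0]                  # alpha_0, alpha_1
A = np.array([[0.0, 1.0],
              [-alpha[0], -alpha[1]]])

eigs = np.sort(np.linalg.eigvals(A).real)
print(np.allclose(eigs, [-2.0, -1.0]))  # True: poles at the chosen roots
```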
The preceding theorem is not the strongest possible; using the center manifold theorems of Chapter 7, we can extend the theorem to the case when the system is only asymptotically minimum phase (i.e., the eigenvalues of the linearization lie in $\mathbb{C}_-$, the closed left half plane, possibly on the $j\omega$ axis) and still derive the conclusion that the closed loop system is asymptotically stable (see Problem 9.9).
It is also easy to modify the control law (9.60) in the instance that the control objective is tracking rather than stabilization. Consider the problem of getting the output y(t) to track a given prespecified reference signal $y_m(t)$. Define the tracking error $e_0$ to be equal to $y - y_m$. The counterpart of the stabilizing control law of (9.60) is
(9.62)
In (9.62) above, $e_0^{(i)}$ stands for the i-th derivative of the output error $e_0$. Also, though it is not immediately obvious, the feedback law of (9.62) is a state feedback law, since
Theorem 9.14 Bounded Tracking. Consider the system of (9.1) with the assumptions of Theorem 9.13 in effect, with the strengthening of the minimum phase assumption as follows:
Now, if the reference trajectory $y_m(t)$ and its first $\gamma$ derivatives are bounded, then the control law of (9.62) results in bounded tracking, i.e., $e_0$ and its first $\gamma$ derivatives tend to zero asymptotically and the state x is bounded.
Proof: The first order of business is to write down the closed loop equations for the system using the control law of equation (9.62). For this purpose we define the error coordinates $e_{i+1} = e_0^{(i)}$ for $i = 0, \ldots, \gamma - 1$. These coordinates will be chosen to replace the $\xi$ coordinates. The $\eta$ coordinates will, however, be unchanged. It is easy to check that the form of the closed loop equation in the new coordinates is a great deal like that of equation (9.61). Thus, we have
$$ \dot e = Ae, \tag{9.64} $$
with
$$ A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\alpha_0 & -\alpha_1 & \cdots & & -\alpha_{\gamma-1} \end{bmatrix}, \qquad \dot\eta = q(e + \xi_m, \eta),
$$
where $\xi_m$ denotes the vector of $y_m$ and its first $\gamma - 1$ derivatives.
Now, since the zero dynamics of the nonlinear system described by (9.56) are exponentially stable and Lipschitz continuous, it follows by a converse theorem of Lyapunov (Theorem 5.17) that there exists a Lyapunov function satisfying the following properties:
$$
\dot V = \frac{\partial V}{\partial \eta}\, q(0, \eta) + \frac{\partial V}{\partial \eta}\bigl(q(\xi, \eta) - q(0, \eta)\bigr)
\le -\alpha_4 |\eta|^2 + \alpha_3 |\eta|\, L\, |\xi|.
$$
Thus $\dot V$ is negative for $|\eta| \ge ((1 + \epsilon)\alpha_3 L K)/\alpha_4$ for any $\epsilon > 0$. Using the bounds on V one can establish that all trajectories in $\eta$ are confined to a ball of radius
(verify this). Thus, the $\eta$ variables are bounded. Since the $\xi$ variables were already established as being bounded and the diffeomorphism $\Phi$ is smooth, it follows that the entire state x is bounded. □
Remarks:
1. If the nonlinear system is only locally exponentially stable, then the hypotheses of the theorem need to be modified to allow only for those tracking trajectories that keep the $\eta$ variables in the region where the dynamics are exponentially stable.
2. The global Lipschitz hypothesis on the zero dynamics is needed in the proof in two places: first, to invoke the specific converse Lyapunov theorem, and second, to obtain the bound on $\dot V$. This is a strong assumption, but unfortunately counterexamples to the statement of the theorem exist if it is not satisfied.
$$ y_p = h_p(x). $$
Here $x \in \mathbb{R}^n$, $u \in \mathbb{R}^p$, $y \in \mathbb{R}^p$, and $f, g_i$ are assumed to be smooth vector fields and $h_j$ to be smooth functions. There is a class of MIMO systems for which the development very closely parallels that for the SISO case of the previous section. We start with this class:
In (9.66) above, note that if each of the $L_{g_i} h_j(x) \equiv 0$, then the inputs do not appear in the equation. Define $\gamma_j$ to be the smallest integer such that at least one of the inputs appears in $y_j^{(\gamma_j)}$, that is,
$$ y_j^{(\gamma_j)} = L_f^{\gamma_j} h_j + \sum_{i=1}^{p} L_{g_i}\bigl(L_f^{\gamma_j - 1} h_j\bigr)\, u_i, \tag{9.67} $$
with at least one of the $L_{g_i}(L_f^{\gamma_j - 1} h_j) \ne 0$ for some x.
A(x) as
A(x) := : (9.68)
[
LK1L;-l hp
Using these definitions, we may give adefinition of relative degree for MIMO
systems:
Definition 9.15 Vector Relative Degree. The system (9.65) is said to have
vector relative degree YI , Y2, .. . , Yp at Xo if
[
y~~ ] =[ L~t ] + A(x) [ U;I ] . (9.70)
YP Lf s, up
Since A(xo) is non-singular, it follows that A(x) E lR. pxp is bounded away from
nonsingularity for x E U a neighborhood U of xo, meaning that A -I (x) and has
bounded norm on U . Then the state feedback controllaw
(9.71)
(9.72)
9.3 MIMO Systems
Note that the system of (9.72) is, in addition, decoupled. Thus, decoupling is a by-product of linearization. A happy consequence of this is that a large number of results concerning SISO nonlinear systems can be easily extended to this class of MIMO nonlinear systems. Thus, further control objectives, such as model matching, pole placement, and tracking, can be easily accommodated. The reader should work through the details of how both MIMO exact and asymptotic tracking control laws can be obtained for the case that the MIMO system has vector relative degree. The feedback law (9.71) is referred to as a static state feedback linearizing control law.
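The computation of the γ_j of (9.67) and the decoupling matrix A(x) of (9.68) can be mechanized symbolically. The following sketch is our own illustration, not from the text; the toy two-input, two-output system and all variable names are arbitrary choices. It differentiates each output along f until some input appears and assembles the rows of A(x):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([x2, sp.sin(x1), x3**2])              # drift vector field
g = [sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]    # input fields g1, g2
h = [x1, x3]                                        # outputs h1, h2

def lie(v, fun):
    """Lie derivative L_v fun of a scalar fun along vector field v."""
    return (sp.Matrix([fun]).jacobian(x) * v)[0, 0]

gammas, A_rows, b_rows = [], [], []
for hi in h:
    Lf = hi
    gamma = 0
    while True:
        gamma += 1
        # candidate row [L_{g_j} L_f^{gamma-1} h_i], matching (9.68)
        row = [sp.simplify(lie(gj, Lf)) for gj in g]
        if any(r != 0 for r in row):
            break
        Lf = lie(f, Lf)                 # push to the next derivative of y_i
    gammas.append(gamma)
    A_rows.append(row)
    b_rows.append(lie(f, Lf))           # L_f^{gamma} h_i, the drift term

A = sp.Matrix(A_rows)
print(gammas)        # vector relative degree (gamma_1, gamma_2)
print(A)             # decoupling matrix A(x)
print(sp.det(A))     # nonsingular => vector relative degree exists
```

For this example the loop finds γ = (2, 1) and a constant, nonsingular A(x), so the static law (9.71) applies directly.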
The normal form of the system (9.65) with vector relative degree ends with the equation

   ξ̇_{γ_p}^p = b_p(ξ, η) + Σ_{j=1}^{p} a_j^p(ξ, η) u_j.   (9.76)

Here

   b_i(ξ, η) = L_f^{γ_i} h_i ∘ Φ^{−1}(ξ, η),   a_j^i(ξ, η) = L_{g_j} L_f^{γ_i−1} h_i ∘ Φ^{−1}(ξ, η),

and the zero dynamics evolve on the manifold

   M = {x : h_i(x) = L_f h_i(x) = ⋯ = L_f^{γ_i−1} h_i(x) = 0, 1 ≤ i ≤ p}.
The dynamics of the η variables, with the linearizing control law in effect and ξ ≡ 0, are given by

   η̇ = q(0, η) − P(0, η) A^{−1}(0, η) b(0, η).   (9.79)

If f(x_0) = 0, h_1(x_0) = ⋯ = h_p(x_0) = 0, and Φ maps x_0 into (0, 0), then it follows that η = 0 is an equilibrium point of the zero dynamics (9.79).
The MIMO system (with vector relative degree) is said to be minimum phase, as before, if the equilibrium point η = 0 of (9.79) is asymptotically stable; it is exponentially minimum phase if η = 0 is exponentially stable.
Proposition 9.16 Full State MIMO Linearization. Suppose that the matrix g(x_0) has rank p. Then there exist p functions λ_1, …, λ_p such that the system

   ẋ = f(x) + g(x)u,
   y = λ(x),

has vector relative degree with

   γ_1 + γ_2 + ⋯ + γ_p = n

iff

1. For each 0 ≤ i ≤ n − 1 the distribution G_i has constant dimension in a neighborhood U of x_0.
2. The distribution G_{n−1} has dimension n.
3. For each 0 ≤ i ≤ n − 2 the distribution G_i is involutive.
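The three conditions can be checked mechanically for a given system. The sketch below is our own construction, not from the text; the single-input example is arbitrary, and we assume the standard definition of the distributions G_i in terms of iterated brackets ad_f^k g. It tests the dimensions of the G_i and their involutivity with SymPy:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([x2, x3, 0])     # toy drift vector field
g = sp.Matrix([0, 0, 1])       # toy input vector field

def bracket(a, b):
    """Lie bracket [a, b] = (Db) a - (Da) b."""
    return sp.simplify(b.jacobian(x) * a - a.jacobian(x) * b)

# ad_f^k g for k = 0, 1, 2 (spanning fields of G_0, G_1, G_2)
fields = [g]
for _ in range(2):
    fields.append(bracket(f, fields[-1]))

def dim(span_fields):
    return sp.Matrix.hstack(*span_fields).rank()

# Conditions 1 and 2: constant dimensions, and G_{n-1} of full dimension n
dims = [dim(fields[:k + 1]) for k in range(3)]
print(dims)

# Condition 3: involutivity -- brackets of spanning fields must not
# increase the rank of the distribution.
def involutive(span_fields):
    r = dim(span_fields)
    for i in range(len(span_fields)):
        for j in range(i + 1, len(span_fields)):
            if dim(span_fields + [bracket(span_fields[i], span_fields[j])]) > r:
                return False
    return True

print(involutive(fields[:1]), involutive(fields[:2]))
```

For this chain of integrators the dimensions come out [1, 2, 3] and both G_0 and G_1 are involutive, so the proposition applies.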
Proof: Since G_{n−1} has dimension n, it follows that there exists some K ≤ n such that dim G_{K−1} = n but dim G_{K−2} < n. Set dim G_{K−2} = n − p_1. Since G_{K−2} is involutive, it follows that there exist p_1 functions λ_i(x), i = 1, …, p_1, such that

   span{dλ_i : 1 ≤ i ≤ p_1} = G_{K−2}^⊥.

Hence

   dλ_i(x) ad_f^k g_j(x) ≡ 0   (9.82)

for 0 ≤ k ≤ K − 2, 1 ≤ j ≤ p, 1 ≤ i ≤ p_1. Further, the p_1 × p matrix

   A_1(x) = {a_{ij}^1(x)} = {L_{g_j} L_f^{K−1} λ_i(x)}

has rank p_1 at x_0. Otherwise, we would have c_1, …, c_{p_1} ∈ ℝ, not all zero, such that for 1 ≤ j ≤ p

   Σ_{i=1}^{p_1} c_i L_{g_j} L_f^{K−1} λ_i(x_0) = 0.
Using equation (9.82) and Proposition 9.4 we get, for 1 ≤ j ≤ p,

   Σ_{i=1}^{p_1} (−1)^{K−1} c_i dλ_i(x_0) ad_f^{K−1} g_j(x_0) = 0.   (9.83)

Again, using equation (9.82) yields for 1 ≤ j ≤ p that

   Σ_{i=1}^{p_1} c_i dλ_i(x_0) ad_f^k g_j(x_0) = 0   for all 0 ≤ k ≤ K − 2.

Also, equation (9.83) yields that the foregoing equation is also valid for k = K − 1. This shows that

   Σ_{i=1}^{p_1} c_i dλ_i(x_0) ∈ G_{K−1}^⊥(x_0).
But by assumption G_{K−1} has dimension n, and since the row vectors dλ_1(x_0), …, dλ_{p_1}(x_0) are linearly independent, we must have that all the c_i are zero. Thus, the λ_i(x) qualify as a good choice of output coordinates for our problem. Of course p_1 ≤ p, since rank A_1(x) ∈ ℝ^{p_1×p} = p_1 ≤ min(p_1, p) for each x. If p_1 = p we are done, since we have p outputs with relative degrees each given by γ_i = K. It is also clear (verify this) that in this case n = pK. If p_1 < p, then one has to continue the search for additional outputs. To this end we try to find new functions in G_{K−3}^⊥. We start by constructing a subset of G_{K−3}^⊥ described by

   Q_1 = span{dλ_1, …, dλ_{p_1}, dL_f λ_1, …, dL_f λ_{p_1}}.

To verify that Q_1 is indeed in G_{K−3}^⊥, first note that the dλ_i(x), 1 ≤ i ≤ p_1, which are in G_{K−2}^⊥ by construction, are also in G_{K−3}^⊥, since G_{K−3} ⊂ G_{K−2}. Also, since

   dL_f λ_i(x) ad_f^k g_j(x) = 0
   Σ_{i=1}^{p_1} ( c_i dλ_i(x_0) + d_i dL_f λ_i(x_0) ) = 0.
Using the linear independence of the rows of A_1(x), we see that the foregoing equation implies that the d_i are zero. Similarly, the c_i are also zero. Thus the dimension of G_{K−3}^⊥ is at least 2p_1. If it is, in fact, strictly greater than 2p_1, we define

   p_2 = dim G_{K−3}^⊥ − 2p_1.
Since, by hypothesis, G_{K−3} is involutive, there exist 2p_1 + p_2 exact differentials that span G_{K−3}^⊥. Of these, 2p_1 can be chosen to be those spanning Q_1, and the remaining p_2 can be chosen to be the differentials of new functions λ_i(x), p_1 + 1 ≤ i ≤ p_1 + p_2. By construction these functions satisfy

   L_{g_j} L_f^k λ_i(x) = 0,   0 ≤ k ≤ K − 2,  1 ≤ i ≤ p_1,
                               0 ≤ k ≤ K − 3,  p_1 + 1 ≤ i ≤ p_1 + p_2,   (9.84)

and the corresponding matrix of highest-order Lie derivatives has rank p_1 + p_2 at x_0. Now, if p_1 + p_2 = p, we are done, since we will have found p output functions with relative degrees γ_1 = γ_2 = ⋯ = γ_{p_1} = K and γ_{p_1+1} = γ_{p_1+2} = ⋯ = γ_{p_1+p_2} = K − 1. Further, γ_1 + ⋯ + γ_p = n, because
is to say that the rank of A(x) is r for all x near x_0, then we may be able to extend the system by adding integrators to certain input channels to obtain a system that does have vector relative degree. The following algorithm makes this precise. In what follows, the variables with tildes are the new state space variables being generated during the course of the extension algorithm.

Step 1 Check the rank of Ã(x):
1. If rank Ã(x) = p on Ũ, stop: the system (f̃, g̃, h̃) has vector relative degree.
2. If rank Ã(x) is not constant in a neighborhood of x_0, stop: the system cannot be extended to a system with vector relative degree.
3. If rank Ã(x) = r < p on Ũ, continue to Step 2.
Step 2 Find a matrix β(x), nonsingular on Ũ, such that the last p − r columns of

   Ã(x)β(x)

are identically zero on Ũ. This is possible, since the rank of Ã(x) is constant on Ũ. Partition β as

   β = [ β_1  β_2 ]

such that β_1 consists of the first r columns of β.
Step 3 Extend the system by adding one integrator to each of the first r redefined input channels. Specifically, define z_1 ∈ ℝ^r, w_2 ∈ ℝ^{p−r} by

   u = β_1(x) z_1 + β_2(x) w_2,   ż_1 = w_1,

so that the extended system is

   ẋ = f(x) + g(x) β_1(x) z_1 + g(x) β_2(x) w_2,
   ż_1 = w_1,
   y = h(x).

The extended state is x̃ = (x^T, z_1^T)^T and the new input is ũ = (w_1^T, w_2^T)^T. The output is unchanged, but the output function is defined to be h̃(x̃) = h(x).

Step 4 Rename x̃, ũ, f̃, g̃, and h̃ as x, u, f, g, and h respectively and return to Step 1.
Once the algorithm terminates, the input actually applied to the original system is recovered from the extension variables as

   u = β_1(x) z_1 + β_2(x) w_2.   (9.86)
Once dynamic extension has been performed successfully both exact and
asymptotic tracking can be achieved on the decoupled systems . Here is an example:
Example 9.17. Consider the planar vehicle shown in Figure 9.4, with x_1, x_2 representing the x, y coordinates of the vehicle's position and φ = x_3 the vehicle heading. Assume that the inputs are the vehicle speed and turning rate. The system equations are

   ẋ_1 = (cos x_3) u_1,
   ẋ_2 = (sin x_3) u_1,   (9.87)
   ẋ_3 = u_2,
   y_1 = x_1,
   y_2 = x_2.

Differentiating the outputs once yields

   [ ẏ_1 ]   [ cos x_3   0 ] [ u_1 ]
   [ ẏ_2 ] = [ sin x_3   0 ] [ u_2 ] .

Since A(x) has rank 1 and already has a zero second column, we set z_1 = u_1, w_2 = u_2 and differentiate the outputs once more to get
   d  [ x_1 ]   [ z_1 cos x_3 ]   [ 0 ]        [ 0 ]
   —  [ x_2 ] = [ z_1 sin x_3 ] + [ 0 ] w_1  + [ 0 ] w_2 .
   dt [ x_3 ]   [      0      ]   [ 0 ]        [ 1 ]
      [ z_1 ]   [      0      ]   [ 1 ]        [ 0 ]
Now, differentiating the outputs until the inputs w_1, w_2 appear, we get

   [ ÿ_1 ]   [ cos x_3   −z_1 sin x_3 ] [ w_1 ]
   [ ÿ_2 ] = [ sin x_3    z_1 cos x_3 ] [ w_2 ] .

Provided z_1 ≠ 0, this extended decoupling matrix is nonsingular, and we may use

   [ w_1 ]   [    cos x_3         sin x_3     ] [ v_1 ]
   [ w_2 ] = [ −(sin x_3)/z_1   (cos x_3)/z_1 ] [ v_2 ]

to decouple the system. Recall also that ż_1 = w_1, u_1 = z_1.
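A quick numerical check of this dynamic-extension controller is easy to run. The sketch below is our own illustration: the outer-loop PD gains, the integration step, and the circular reference trajectory are arbitrary choices, not from the text. It simulates the extended vehicle under the decoupling law above:

```python
import math

def simulate(T=20.0, dt=1e-3):
    # extended state: x1, x2 (position), x3 (heading), z1 (speed integrator)
    x1, x2, x3, z1 = 1.0, 0.0, math.pi / 2, 1.0
    kp, kd = 4.0, 4.0                        # outer-loop PD gains (arbitrary)
    t = 0.0
    while t < T:
        c, s = math.cos(x3), math.sin(x3)
        # unit-circle reference and its derivatives
        r1, r2 = math.cos(t), math.sin(t)
        dr1, dr2 = -math.sin(t), math.cos(t)
        ddr1, ddr2 = -math.cos(t), -math.sin(t)
        # outer loop: y_i'' = v_i gives stable linear error dynamics
        v1 = ddr1 + kd * (dr1 - z1 * c) + kp * (r1 - x1)
        v2 = ddr2 + kd * (dr2 - z1 * s) + kp * (r2 - x2)
        # decoupling law [w1; w2] = A_ext^{-1} [v1; v2], valid while z1 != 0
        w1 = c * v1 + s * v2
        w2 = (-s * v1 + c * v2) / z1
        # integrate the extended dynamics (Euler)
        x1 += dt * z1 * c
        x2 += dt * z1 * s
        x3 += dt * w2
        z1 += dt * w1
        t += dt
    return (x1 - math.cos(t)) ** 2 + (x2 - math.sin(t)) ** 2

err = simulate()
print(err)
```

Starting on the reference with matched velocity, the squared tracking error stays negligible, confirming that the extended system is exactly input-output decoupled away from z_1 = 0.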
The one question that we have not fully answered in this section is: Is it possible
to input-output linearize a MIMO system without decoupling it? For this we refer
the reader to [149].
   ξ̇_1 = ξ_2 + L_p h(x) w,
   ξ̇_2 = ξ_3 + L_p L_f h(x) w,
      ⋮                                             (9.91)
   ξ̇_γ = b(ξ, η) + a(ξ, η) u + L_p L_f^{γ−1} h(x) w,
   η̇ = q(ξ, η) + P(ξ, η) w.

In (9.91) above, P(ξ, η) ∈ ℝ^{n−γ} is the vector (L_p η_1, …, L_p η_{n−γ})^T expressed in the ξ, η coordinates. Using the condition of (9.90) yields that the first γ equations above do not contain w. Thus the usual linearizing control law

   u = (1 / a(ξ, η)) [ −b(ξ, η) + v ]   (9.92)
Remarks:

1. The conditions of Theorem 9.18 say that the relative degree of the disturbance to the output needs to be strictly greater than the relative degree of the unperturbed nonlinear system.

2. Disturbance rejection with disturbance measurement. If measurements of the disturbance w were available, then the form of the feedback law could be changed to

   u = α(x) + β(x) v + δ(x) w.   (9.98)

Then an easy modification of the preceding theorem (cf. Exercise 9.10) is that disturbance rejection with disturbance measurement is possible iff

   L_p L_f^j h(x) ≡ 0,   0 ≤ j ≤ γ − 2.   (9.99)

In words, this says that the problem of disturbance decoupling with disturbance measurement feedback is solvable if and only if the disturbance relative degree is greater than or equal to the relative degree of the unperturbed system. This problem appears to be somewhat artificial, since disturbances are seldom measurable. However, the problem is of conceptual value in solving the problem of model reference control, explained next.
3. Given a linear reference model of the form

   ż = Az + br,
   y_m = cz,   (9.100)

with z ∈ ℝ^{n_m}, r ∈ ℝ, y_m ∈ ℝ and A, b, c matrices of compatible dimensions, find, if possible, a feedback law that makes the output of the system (9.1) equal to y_m. Combining equations (9.1) and (9.100) yields a composite system in which r plays the role of the disturbance w and e_0 := h(x) − cz the role of the (error) output. We are allowed a state feedback law which depends on x, z, r.
Remark: It is important to stress that the results of Proposition 9.19 only decouple the ξ variables of the normal form from the perturbations Δf, Δg: the η variables associated with the zero dynamics may be perturbed. In particular, the zero dynamics may be destabilized by the perturbation. Further characterization of the perturbations is needed to study this aspect.

Since the disturbance functions associated with Δf, Δg are respectively 1, u, it is possible to apply the techniques associated with the disturbance decoupling problem with measurement of disturbance. We apply the result for Δf alone, since the disturbance associated with Δg is the control u that we seek. The solvability condition for this scenario is a generalization of a condition referred to in the
9.4 Robust Linearization
Compare equations (9.104) and (9.105) carefully and note that the latter conditions are weaker than the former. The generalized matching conditions are useful in devising a class of linearizing control laws called sliding mode control laws, which are the topic of the next subsection.
Before we undertake a detailed discussion of sliding mode control laws, let us note the generalizations of the preceding conditions for MIMO systems that are linearizable (decouplable) by static state feedback, namely, systems with vector relative degree. Thus we consider systems of the form (9.65) with additive disturbance w; the disturbance decoupling condition (9.107) then reads

   L_p L_f^k h_i(x) ≡ 0,   0 ≤ k ≤ γ_i − 1,   (9.107)

for i = 1, …, p.
Proof: Since the system has vector relative degree, it is possible to choose the dL_f^j h_i(x) for 0 ≤ j ≤ γ_i − 1, 1 ≤ i ≤ p and the complementary η coordinates such that we obtain the following modification of the normal form coordinates of equation (9.76):
"I
~I = ~2 + Lph, (x)w,
)
"I
~2 = ~3 + LpLfh l (x)w,
I
P
~~I =bl(~,rJ)+ La)(~ ,11)Uj+LpLj,-lhl(X)W,
j=1
·2
~I = ~2 + L ph 2(x)w,
2
"2
~2 = ~32 + L pLfh2(x)w,
P
~;l = b2(~, 11) + LaJ(~ , 11)Uj + LpLj-1 h 2(x)w ,
j=1
(9.108)
p
~~ = bp(~, 11) + LaJ(~, TI)Uj + LpLj-1 hp(x)w,
j= 1
Yp = ~r.
In equation (9.108) above, W(ξ, η) ∈ ℝ^{n−γ} stands for the vector (L_p η_1, …, L_p η_{n−γ})^T expressed in the ξ, η coordinates. Using the condition (9.107), the usual linearizing control law

   u = A^{−1}(ξ, η) [ −b(ξ, η) + v ]

renders the η states unobservable and consequently isolates y from w. This completes the "if" part of the proof.

The "only if" part of the proof proceeds as in the SISO case, by showing that for each of the y_i to be independent of w we need to have, for a given u = α(x) + β(x)v, that

   L_p L_{f+gα}^k h_i(x) ≡ 0,   0 ≤ k ≤ γ_i.

Proceeding recursively, as before, yields the conditions of (9.107). □
9.5 Sliding Mode Control
   y_p = h_p(x).   (9.109)

Applying the preceding theorem to this system yields the following result on robust linearization.

Proposition 9.21 MIMO Robust Linearization. Consider the uncertain nonlinear system (9.109). Assume that the nominal system (9.65) has vector relative degree γ_1, …, γ_p. Then, if the perturbations Δf, Δg_j satisfy, for j = 1, …, p,
We would like to devise a tracking control law for tracking y_m(t) with as few assumptions as possible on the perturbation vector fields Δf(x), Δg(x). The assumption in effect will be that the perturbation vector fields satisfy the generalized matching conditions of (9.105). Under these conditions the SISO normal form of the perturbed system can be written by choosing the coordinates to be those for the unperturbed system, namely h(x), L_f h(x), …, L_f^{γ−1} h(x), η(x), with η(x) satisfying

   L_g η(x) = 0.
In these coordinates the system can be written as

   ξ̇_1 = ξ_2,
   ξ̇_2 = ξ_3,
      ⋮                                            (9.111)
   ξ̇_γ = b(ξ, η) + Δb(ξ, η) + a(ξ, η) u,
   η̇ = q(ξ, η) + Δq(ξ, η) + Δp(ξ, η) u.

Here Δb(ξ, η) = L_{Δf} L_f^{γ−1} h, Δq(ξ, η) = L_{Δf} η, Δp(ξ, η) = L_{Δg} η. Further, we will assume that there is a magnitude bound on the nonlinear perturbation described by

   |Δb(ξ, η)| < K.
Now, given the desired output tracking function y_m(·), we define the so-called sliding surface using e_0(t) = y_m(t) − y(t):

   s(x, t) = e_0^{(γ−1)} + a_{γ−2} e_0^{(γ−2)} + ⋯ + a_0 e_0
           = y_m^{(γ−1)} − L_f^{γ−1} h + a_{γ−2}( y_m^{(γ−2)} − L_f^{γ−2} h ) + ⋯ + a_0( y_m − h(x) ),

where the coefficients a_i are chosen so that the polynomial s^{γ−1} + a_{γ−2} s^{γ−2} + ⋯ + a_0 is Hurwitz. Note that if the trajectory is confined to the sliding surface, the error tends to zero exponentially according to the zeros of this polynomial. The sliding mode control law is a discontinuous control law chosen to make the time-varying manifold described by {x : s(x, t) = 0} attractive, in finite time, for initial conditions in its neighborhood. Indeed, consider the following sliding mode control law:

   u = (1 / (L_g L_f^{γ−1} h(x))) [ y_m^{(γ)} − L_f^{γ} h(x) + a_{γ−2} e_0^{(γ−1)} + ⋯ + a_0 ė_0 − 1.1 K sgn s(x, t) ].   (9.112)
This control law is an exact linearizing control law of the form

   u = (1 / (L_g L_f^{γ−1} h(x))) [ −L_f^{γ} h(x) + v ]

with the choice of v as given in equation (9.112) above. The choice of 1.1 in the form of the control law is not magical; any number greater than 1 would suffice. The control law of (9.112) is chosen so that, using it in the normal form of (9.111), we get that

   s ṡ ≤ −0.1 K |s|,

so that the sliding surface is reached in finite time.
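The finite-time attractivity of the sliding surface is easy to observe in simulation. The sketch below is our own minimal example, not from the text: the double-integrator plant, the disturbance, and all gains are arbitrary choices. With γ = 2 we take s = ė_0 + a_0 e_0 and a discontinuous term 1.1 K sgn s against a matched disturbance bounded by K:

```python
import math

def simulate(T=5.0, dt=1e-4, K=1.0, a0=2.0):
    # plant: y'' = u + d(t), with unknown |d| <= K (matched perturbation)
    y, yd = 1.0, 0.0                      # start off the reference
    reach_time = None
    t = 0.0
    while t < T:
        ym, ymd, ymdd = math.sin(t), math.cos(t), -math.sin(t)  # reference
        e, ed = ym - y, ymd - yd          # tracking error e0 and its rate
        s = ed + a0 * e                   # sliding variable
        # feedforward + a0*ed cancels the drift of s; the switching term
        # 1.1*K*sgn(s) dominates the disturbance (any factor > 1 works)
        u = ymdd + a0 * ed + 1.1 * K * math.copysign(1.0, s)
        d = K * math.sin(3.0 * t)         # disturbance, bounded by K
        y += dt * yd
        yd += dt * (u + d)
        t += dt
        if reach_time is None and abs(s) < 1e-3:
            reach_time = t                # surface reached in finite time
    return reach_time, abs(ym - y)

reach, err = simulate()
print(reach, err)
```

Along trajectories of this loop, ṡ = −1.1 K sgn s − d, so s ṡ ≤ −0.1 K |s|: the surface is hit well before the horizon, and afterwards the error decays according to the Hurwitz polynomial s + a_0.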
9.6 Tracking for Nonminimum Phase Systems

We will need to assume that ξ_D(·) is bounded, that is to say, y_D, ẏ_D, …, y_D^{(γ−1)} are all bounded. It is assumed that the equilibrium point x_0 of the undriven system is mapped onto ξ = 0, η = 0. It remains to determine a bounded solution to the driven zero dynamics equation

   η̇_D = q(ξ_D(t), η_D).   (9.115)

Define

   Q(t) := (∂q/∂η)(ξ_D(t), 0),   (9.116)
and assume (after a change of coordinates, if necessary) that

   Q(t) = [ A_1(t)    0    ]
          [   0     A_2(t) ] ,   (9.117)

where A_1(t) ∈ ℝ^{n_s×n_s} represents an exponentially stable linear system, and the state transition matrix of A_2(t) ∈ ℝ^{n_u×n_u} is exponentially unstable; that is to say, there exist m_1, m_2, λ_1, λ_2 such that the state transition matrices of A_1(t), A_2(t), denoted respectively by Φ_1(t, t_0), Φ_2(t, t_0), satisfy
• ‖ξ_D(·)‖_∞ is small enough and the undriven zero dynamics are hyperbolic, that is,

   (∂q/∂η)(0, 0) ∈ ℝ^{(n−γ)×(n−γ)}

is hyperbolic.
• Q(t) in equation (9.116) is slowly time varying.

It is easy to verify that the transition matrix of Q(t) then inherits the exponential dichotomy of Φ_1, Φ_2, and that there is a unique bounded η(·) solving the driven zero dynamics equation (9.115),

   η_D(t) = ∫_{−∞}^{∞} ⋯ dτ,

which may be computed by applying the Picard iteration scheme of Chapter 3 to this equation, that is,
The main drawback of the control law (9.123) is that it is not causal, since the computation of η_D(t) needs future values of ξ_D(t). If y_D(·) is specified a priori ahead of time, then this calculation can be done off-line. Another way to remedy this is to use a preview of a certain duration Δ, meaning that the control law at time t is actually the one that would have been the correct one at time t − Δ, with the additional proviso that the integral in equation (9.122) runs from −∞ to Δ. Also, when the desired trajectory is only specified on the interval [0, ∞[, the integral in (9.122) runs only over [0, Δ], corresponding to setting the desired output to be identically zero for t ≤ 0. Once the feedforward control law has been chosen, the overall scheme needs to be stabilized. For this purpose the system in normal form is linearized about the nominal trajectory, i.e., ξ(t) ≡ ξ_D(t), η(t) ≡ η_D(t), u(t) ≡ u_ff(t). Of course, the linearization could also be performed in the original x coordinates, since x_D(t) = Φ^{−1}(ξ_D(t), η_D(t)), where Φ stands for the diffeomorphism into normal form (refer to equation (9.28)). In these original x coordinates define the error coordinates e(t) = x(t) − x_D(t), u_fb = u(t) − u_ff(t).
Then the linearized system is of the form

   ė = A(t) e + b(t) u_fb.   (9.124)

Now the stabilization scheme of Problem 6.3 of Chapter 6 can be applied. More precisely, we will need to assume that the following controllability Gramian for the error system of (9.124) is uniformly positive definite for some δ > 0:

   W(t, t + δ) = ∫_t^{t+δ} e^{4a(t−τ)} Ψ(t, τ) b(τ) b(τ)^T Ψ(t, τ)^T dτ.

Here a > 0, and Ψ(t, t_0) is the state transition matrix of A(t). Define P(t) := W(t, t + δ)^{−1}. Then we may show that the feedback control law

   u_fb = −b^T(t) P(t) e

exponentially stabilizes the error system of (9.124). The composite control law used for asymptotic tracking of the nonlinear system is

   u(t) = u_ff(t) + u_fb(t).
The subscript "ff" stands for feedforward or the exact tracking control law and
the subscript "fb" stands for the stabilization of the overall system. It is useful to
understand the philosophy of this approach. The computation of U ff (r) consisted in
finding the unique bounded solution of the driven zero dynamics (9.115) to obtain
the 1'JD(t) and in turn the entire desired state trajectory XD(t). The stabilization was
then applied not just to the g dynamics as in the case of input-output linearization
by state feedback, but rather to the full state. The two drawbacks of the scheme
that we have presented are:
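The Gramian-based stabilizer above is straightforward to implement numerically. The sketch below is our own illustration, not from the text: the error system is a time-invariant double integrator (so Ψ(t, τ) has a closed form), and δ, a are arbitrary. It approximates W(t, t + δ) by quadrature, sets P = W^{−1}, and checks that u_fb = −b^T P e yields a Hurwitz closed loop:

```python
import numpy as np

# time-invariant error system e' = A e + b u_fb (double integrator)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
a, delta = 0.5, 1.0

def expmA(s):
    # closed form for this nilpotent A: expm(A s) = I + A s
    return np.eye(2) + A * s

# W = \int_t^{t+delta} e^{4a(t-tau)} Psi(t,tau) b b^T Psi(t,tau)^T dtau;
# time-invariance lets us fix t = 0 and integrate tau over [0, delta]
taus = np.linspace(0.0, delta, 2001)
W = np.zeros((2, 2))
for tau in taus:
    Psi_b = expmA(-tau) @ b                 # Psi(0, tau) b
    W += np.exp(-4 * a * tau) * (Psi_b @ Psi_b.T) * (delta / (len(taus) - 1))
P = np.linalg.inv(W)

# closed loop e' = (A - b b^T P) e should be exponentially stable
Acl = A - b @ b.T @ P
eigs = np.linalg.eigvals(Acl)
print(eigs.real.max())    # negative => exponential stability
```

The exponential weight e^{4a(t−τ)} amounts to forming the Gramian of A + 2aI, which pushes the closed-loop eigenvalues left of −2a and gives the claimed stability margin.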
1. The exact formula for u_ff(t) is non-causal if y_D(t) is not specified ahead of time, and needs to be approximated.
   ẇ = s(w),
   y_D = r(w),   (9.126)

with w ∈ ℝ^{n_e}. Thus, we are required to regulate the output e of the composite system described by

   ẋ = f(x) + g(x)u,
   ẇ = s(w),   (9.127)
   e = h(x) − r(w).
We are now ready to state the solution of the feedback regulator problem: it is solvable iff there exist smooth maps π : ℝ^{n_e} → ℝ^n, u_ff : ℝ^{n_e} → ℝ such that

   (∂π/∂w) s(w) = f(π(w)) + g(π(w)) u_ff(w),
                                                   (9.131)
   h(π(w)) − r(w) = 0.
Proof: For the necessity, let us assume that there is a given feedback law α(x, w) that performs the requisite two functions. With A, b, K, L as defined in (9.129), it follows that the eigenvalues of A + bK lie in ℂ⁻. Let T ∈ ℝ^{n×n_e} be the unique solution of the linear equation (the uniqueness follows from the fact that the spectrum of A + bK is disjoint from that of S)

   T S − (A + bK) T = bL.   (9.133)
We use the Poisson stability of the exosystem for this purpose. For the sake of contradiction, assume that there exists w_0 close to zero such that |h(π(w_0)) − r(w_0)| =: ε > 0. Then there exists a neighborhood V of w_0 such that for all w ∈ V we have |h(π(w)) − r(w)| > ε/2. Now, we are guaranteed that the trajectory starting from π(w_0), w_0 results in the output error going to zero as t → ∞. Thus, there exists a T ∈ ℝ_+ such that for t ≥ T we have |h(π(w(t))) − r(w(t))| < ε/2. By the assumption of Poisson stability of the exosystem, w_0 is recurrent; hence there exists a t > T such that w(t) ∈ V. But inside V we have |h(π(w(t))) − r(w(t))| > ε/2. This establishes the contradiction.
For the sufficiency, if we have a solution to the partial differential equation (9.131), we choose K such that the eigenvalues of A + bK lie in ℂ⁻ (such a choice always exists by the stabilizability hypothesis) and define

   α(x, w) = u_ff(w) + K(x − π(w)).   (9.132)

We claim that x − π(w) = 0 is an invariant manifold under this control law. Indeed, on this manifold,

   d/dt (x − π(w)) = f(π(w)) + g(π(w))( u_ff(w) + K(x − π(w)) ) − (∂π/∂w) s(w).

The right-hand side is zero on the set x − π(w) = 0 by the first equation of (9.131). This equation also yields that the invariant manifold is locally attracting, since g(0) = b and the eigenvalues of A + bK are in ℂ⁻. On this invariant manifold we have, by the second equation of (9.131), that

   h(π(w)) − r(w) = 0,

so that the output error is zero on this invariant manifold. Consequently, the manifold {(x, w) : x = π(w)} is referred to as the stable, invariant, output zeroing manifold when equations (9.131) hold and the control law of (9.132) is used. □
Remarks:

1. The Byrnes-Isidori regulator guarantees that if we stabilize the closed loop system, then there exists an invariant manifold x = π(w) which is a center manifold of the composite system; furthermore, the center manifold can be chosen so as to have the output error be identically zero on it.

2. The drawback of the Byrnes-Isidori regulator is that it does not give explicit solutions for the partial differential equation (9.131). However, it does not assume the existence of relative degree, or minimum phase, on the part of the open loop system. Indeed, the first of the equations of (9.131) is a statement of the existence of an invariant manifold of the form x = π(w) for the composite system

   ẋ = f(x) + g(x) u_ff(w),
   ẇ = s(w),   (9.135)
   e = h(x) − r(w),

for a suitably chosen u_ff(w). The second equation asserts that this invariant manifold is contained in the output zeroing manifold

   h(π(w)) − r(w) = 0.
The stabilizing term K(x − π(w)) is designed to make this invariant manifold, contained in the output zeroing manifold, attractive. Center manifold theorems may be used to obtain solutions to the partial differential equation (9.131). It is instructive to note that in the linear case, that is, f(x) = Ax, g(x) = b, s(w) = Sw, h(x) = cx, r(w) = Qw, the equation (9.131) has a solution of the form x = Πw, u_ff(w) = Γw, with Π ∈ ℝ^{n×n_e} satisfying the matrix equations

   Π S = A Π + b Γ,
   c Π = Q,

and the stable invariant manifold is

   M_s := {(x, w) : x = π(w)}.   (9.137)
For the proof of this and other interesting details about the solvability by series of the equations (9.131), we refer the reader to the recent book of Byrnes, Delli Priscoli, and Isidori [57].
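In the linear case just described, the regulator equations become one finite linear system for (Π, Γ) and can be solved directly. The sketch below is our own illustration, not from the text: the double-integrator plant and unit-oscillator exosystem are arbitrary choices, and we stack the equations ΠS = AΠ + bΓ, cΠ = Q using Kronecker products:

```python
import numpy as np

# plant x' = Ax + bu, y = cx (a double integrator)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
# exosystem w' = Sw, reference r = Qw (a unit oscillator)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.array([[1.0, 0.0]])

n, ne = 2, 2
I = np.eye(ne)
# unknown vector z = [vec(Pi); Gamma^T], with vec() stacking columns:
#   A Pi - Pi S + b Gamma = 0  ->  (I kron A - S^T kron I) vec(Pi) + (I kron b) Gamma^T = 0
#   c Pi = Q                   ->  (I kron c) vec(Pi) = vec(Q)
top = np.hstack([np.kron(I, A) - np.kron(S.T, np.eye(n)), np.kron(I, b)])
bot = np.hstack([np.kron(I, c), np.zeros((ne, ne))])
M = np.vstack([top, bot])
rhs = np.concatenate([np.zeros(n * ne), Q.flatten(order='F')])
z = np.linalg.solve(M, rhs)
Pi = z[:n * ne].reshape((n, ne), order='F')
Gamma = z[n * ne:].reshape((1, ne))

# verify the regulator (Francis) equations
print(np.allclose(A @ Pi + b @ Gamma, Pi @ S), np.allclose(c @ Pi, Q))
```

The system is uniquely solvable here because the eigenvalues ±i of S are not zeros of the plant; for this example the solution works out to Π = I and Γ = [−1, 0].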
3. In Problem 9.31 we study the calculation of π(w), u_ff(w) when the open loop system has relative degree and has the normal form of (9.29).

4. Like the Devasia-Chen-Paden scheme, the overall scheme has two steps: one that produces a feedforward input u_ff(w) and the other that stabilizes the closed loop system, u_fb(x, w) = K(x − π(w)).
9.7 Observers with Linear Error Dynamics

Consider the nonlinear system

   ẋ = f(x),
   y = h(x).   (9.138)

Now, if there exists a change of coordinates of the form z = φ(x) such that the vector field f and the output function h become
Proof: If the observer linearization problem could be solved with the pair A, C observable, then, since a further linear transformation of coordinates and linear output injection could convert the pair into observable canonical form, i.e., there exist T ∈ ℝ^{n×n}, L ∈ ℝ^{n×1} such that

   T(A + LC)T^{−1} = [ 0 0 ⋯ 0 0 ]
                     [ 1 0 ⋯ 0 0 ]
                     [ ⋮   ⋱   ⋮ ] ,    C T^{−1} = [ 0 0 ⋯ 0 1 ],
                     [ 0 0 ⋯ 1 0 ]

we could assume without loss of generality that A, C are already in this desired form, i.e.,

   A = [ 0 0 ⋯ 0 0 ]
       [ 1 0 ⋯ 0 0 ]
       [ ⋮   ⋱   ⋮ ] ,    C = [ 0 0 ⋯ 0 1 ].
       [ 0 0 ⋯ 1 0 ]
Then, it follows that there exists φ such that

   [ (∂φ/∂x) f(x) ]_{x = φ^{−1}(z)} = [ 0 0 ⋯ 0 0 ]
                                      [ 1 0 ⋯ 0 0 ]
                                      [ ⋮   ⋱   ⋮ ]  z + k([0 0 ⋯ 0 1] z)   (9.142)
                                      [ 0 0 ⋯ 1 0 ]

and

   h(φ^{−1}(z)) = [ 0 0 ⋯ 0 1 ] z.   (9.143)
If these equations hold, then we have

   h(x) = z_n(x),
   (∂z_1/∂x) f(x) = k_1(z_n(x)),
   (∂z_2/∂x) f(x) = z_1(x) + k_2(z_n(x)),
      ⋮
   (∂z_n/∂x) f(x) = z_{n−1}(x) + k_n(z_n(x)).
Thus observe that

   L_f h(x) = (∂z_n/∂x) f(x) = z_{n−1}(x) + k_n(z_n(x)),

   L_f^2 h(x) = (∂z_{n−1}/∂x) f(x) + (∂k_n/∂z_n)(∂z_n/∂x) f(x)
              = z_{n−2} + (∂k_n/∂z_n)(∂z_n/∂x) f(x) + k_{n−1}(z_n(x)).
From the fact that a necessary condition for the observer linearization problem is equation (9.141), it follows that it is possible to define a vector field τ(x) satisfying

   [ dh(x)           ]           [ 0 ]
   [ dL_f h(x)       ]           [ 0 ]
   [     ⋮           ]  τ(x)  =  [ ⋮ ]   (9.144)
   [ dL_f^{n−2} h(x) ]           [ 0 ]
   [ dL_f^{n−1} h(x) ]           [ 1 ]
The following proposition gives a necessary and sufficient condition for the solution of the observer linearization problem.

Proposition 9.24 Observer Linearization: Necessary and Sufficient Conditions. The observer linearization problem is solvable iff

1. dim(span{dh(x_0), dL_f h(x_0), …, dL_f^{n−1} h(x_0)}) = n.
2. There exists a mapping F : V ⊂ ℝ^n ↦ U, a neighborhood of x_0, such that
Proof: We already know the necessity of the first condition from the preceding proposition. Suppose that there is a diffeomorphism φ such that the system can be transformed into the linearized observer form. Now set F(z) = φ^{−1}(z). Also, define (actually, recall from Chapter 3) the push-forward of a vector field e(z) by a diffeomorphism x = F(z) as

   F_* e = ( (∂F/∂z) e )_{z = F^{−1}(x)}.

Thus, the push-forward of a vector field is equivalent to transforming it into new coordinates. Now, define the constant vector fields

   e_1 = (1, 0, …, 0)^T,   e_2 = (0, 1, 0, …, 0)^T,   etc.

Then, define

   τ := F_* e_1.
We claim that

   ad_f^k τ = (−1)^k F_* e_{k+1},   0 ≤ k ≤ n − 1.

Indeed, we have that f is the push-forward of f̄(z), given as Az + k(cz), that is,

   f = F_* ( k_1(z_n), z_1 + k_2(z_n), …, z_{n−1} + k_n(z_n) )^T.

The proof of the claim now follows by induction: the claim is clearly true for k = 0. If it is true for k, then

   ad_f^{k+1} τ(x) = [f, ad_f^k τ] = (−1)^k F_* [ f̄(z), e_{k+1} ].

Also, since h(F(z)) = z_n, it follows that

   L_{ad_f^k τ} h(x) = 0,   0 ≤ k ≤ n − 2.
In turn this implies that τ satisfies the equations of (9.144). Since the solutions of equations (9.144) are unique, the result follows.

The proof of the sufficiency is as follows: Suppose that conclusion (1) of the proposition holds, and let τ be the solution of equation (9.144). Using the results of Proposition 9.4, it follows that the matrix

   [ dh(x)           ]
   [ dL_f h(x)       ]
   [     ⋮           ]  [ τ(x)  ad_f τ(x)  ⋯  ad_f^{n−1} τ(x) ]
   [ dL_f^{n−1} h(x) ]

has rank n near x_0. Thus, the vector fields τ, ad_f τ, …, ad_f^{n−1} τ are linearly independent near x_0. Now assume that F is a solution of the partial differential equation (9.145). Set Φ = F^{−1} and

   f̄(z) = Φ_* f(x).
Then

   Φ_* (ad_f^{k+1} τ) = Φ_* [f, ad_f^k τ] = [ f̄, (−1)^k e_{k+1} ] = (−1)^{k+1} ∂f̄/∂z_{k+1}.

In turn, this implies that for 0 ≤ k ≤ n − 2 we have

   ∂f̄_{k+2}/∂z_{k+1} = 1,   ∂f̄_i/∂z_{k+1} = 0 for i ≠ k + 2.

Using these relations, the functional form of f̄ follows. Further, since we have, for 0 ≤ k < n − 1, that
   h ∘ F = h̄(z) = z_n,
The following theorem (which has some independent interest as well) gives conditions on a given set of n vector fields that make the matrix of their columns the inverse Jacobian of a diffeomorphism:

Theorem 9.25. Let X_1(x), …, X_n(x) be n given linearly independent (over the ring of smooth functions) vector fields. Then there exists a diffeomorphism ψ : ℝ^n(x) ↦ ℝ^n(z) satisfying

   (∂ψ/∂x) [ X_1(x)  ⋯  X_n(x) ] = I

iff the vector fields commute, that is, [X_i, X_j] = 0 for 1 ≤ i, j ≤ n.

Proof: Straightening each X_i individually yields coordinates in which

   (ψ_i)_* X_i = c_i(x) (0, …, 0, 1, 0, …, 0)^T,

with the 1 in the i-th entry. Since the X_i commute, it follows that c_i ∘ Φ^{−1}(ξ) is a function of ξ_i alone. Thus, there exist functions μ_i such that

   ∂μ_i/∂ξ_i = 1 / c_i(Φ^{−1}(ξ)).

The transformation obtained by composing μ and ξ,

   z = ψ(x) = (μ_1(ξ_1), …, μ_n(ξ_n))^T,

is the required diffeomorphism. □
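The practical payoff of the observer form of this section is that the estimation error obeys exactly linear dynamics no matter how strong the output nonlinearity k is. The sketch below is our own example, not from the text: a second-order plant already in the form ż = Az + k(Cz) with A the down-shift matrix and C = [0 1], observed by a copy of the plant driven by the measured output plus linear injection, so that ė = (A − LC)e:

```python
import math

def simulate(T=10.0, dt=1e-3):
    # plant in observer form: z1' = k1(z2), z2' = z1 + k2(z2), y = z2
    k1 = lambda y: -y - 0.5 * math.sin(y)   # nonlinear output injection
    k2 = lambda y: -0.3 * y
    z1, z2 = 1.0, -1.0                      # plant state
    zh1, zh2 = 0.0, 0.0                     # observer state (wrong init)
    l1, l2 = 2.0, 3.0                       # gains: A - LC has eigs -1, -2
    t = 0.0
    while t < T:
        y = z2                              # measured output
        dz1, dz2 = k1(z2), z1 + k2(z2)
        # observer: same k(.) evaluated at the MEASURED y, plus injection,
        # so the nonlinearity cancels exactly in the error equation
        dzh1 = k1(y) + l1 * (y - zh2)
        dzh2 = zh1 + k2(y) + l2 * (y - zh2)
        z1 += dt * dz1; z2 += dt * dz2
        zh1 += dt * dzh1; zh2 += dt * dzh2
        t += dt
    return abs(z1 - zh1) + abs(z2 - zh2)

err = simulate()
print(err)
```

Subtracting the two systems gives ė_1 = −l_1 e_2, ė_2 = e_1 − l_2 e_2, a linear system with characteristic polynomial λ² + l_2 λ + l_1, so the error decays exponentially regardless of k.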
9.8 Summary

This chapter has covered an introduction to input-output linearization by state feedback for continuous-time nonlinear systems. Our treatment has been rather brief, but there are many other interesting subtleties of the theory which are discussed in Isidori [149] and Nijmeijer and van der Schaft [232]. Questions of partial feedback linearization and robust and adaptive linearization are discussed in the books by Marino and Tomei [199], Krstić, Kanellakopoulos, and Kokotović [171], and Sastry and Bodson [259] (see also the Exercises in the next section). The main omission from this chapter is the coverage of a large body of literature on techniques for the linearization of discrete-time and sampled-data nonlinear systems. One of the reasons for this is that it is not possible to do justice to the subject without a thorough rethinking of the ideas of relative degree and relative degree under sampling. The main reason why the discrete-time and sampled-data theory is so much richer than its continuous-time counterpart is that even if discrete-time systems start off affine in the input, a one-step time delay already results in a non-affine dependence of the output on the input. For more details, we refer the reader to Monaco and Normand-Cyrot [213], [214] (and their monograph in preparation), Grizzle [121], and Lee, Arapostathis, and Marcus [176]. Another active area of research is the elaboration of the construction of nonlinear observers. We gave the reader just a flavor of the results in the last section. It is of interest for nonlinear control systems to reconstruct not the full state but merely the part that is needed for the state feedback law (α(x), β(x) rather than all of x). For this and other interesting points, we refer the reader to a thriving literature in nonlinear control.
9.9 Exercises
Problem 9.1. Prove Claim 9.5.
Problem 9.4. Consider the output reproduction problem for the system (9.1): Find, if possible, a pair x_0, ū(t) such that the output y(t) is identically equal to ȳ(t). Find the internal dynamics consistent with this output and relate them to the zeros of the system. Prove that if ȳ(t) and its derivatives are bounded, then the state x is bounded as well.
   ẋ = Ax + v(x)   (9.146)

with x ∈ ℝ^n, A ∈ ℝ^{n×n}, and v(x) a vector of polynomials in x of degree greater than or equal to 2. We wish to find a transformation of the form x = y + h(y) : ℝ^n ↦ ℝ^n, with h(0) = 0, Dh(0) = 0, to transform (9.146) into

   ẏ = Ay.

1. Find the equation that h should satisfy. Use the operator L_A taking smooth vector fields in ℝ^n into smooth vector fields in ℝ^n.
2. Let us study the operator in the case that A is diagonal with real eigenvalues λ_1, …, λ_n. Show that for each choice of integers m_1, m_2, …, m_n, the vector field

   x_1^{m_1} x_2^{m_2} ⋯ x_n^{m_n} (0, …, 0, 1, 0, …, 0)^T,

with the 1 in the i-th location, is an eigenvector of L_A. Show that
In this equation ρ stands for the coefficient of wind drag, d_m the mechanical drag, and m the mass of the vehicle. Further, the engine dynamics are modeled by the first-order system

   ξ̇ = −ξ/τ(ẋ) + u/(m τ(ẋ)).

Here ξ is a state variable modeling all of the complex engine dynamics, u is the throttle input, and τ(ẋ) is a velocity-dependent time constant. Also F = mξ. Write these equations in state space form and examine the input-output linearizability of the engine from the input u to the output x. What are the relative degree and zero dynamics of the system?
Problem 9.7 Control of satellites. Consider the control of a rigid spacecraft by a single input about its center of mass. The equations of motion of this system are given by:

   Ṙ = R S(ω),
   J ω̇ = −S(ω) J ω + B u.

Here R ∈ SO(3) is a 3 × 3 rotation matrix modeling the orientation of the spacecraft relative to the inertial frame, and ω ∈ ℝ³ models the instantaneous angular velocity. J stands for the moment of inertia of the spacecraft about a frame attached to its center of mass (if the frame lines up with the principal axes, J is diagonal), and S(ω), also sometimes written ω×, is the skew-symmetric matrix representation (S(ω) ∈ so(3)) of the cross-product operator, given by S(ω)v = ω × v. Further, B ∈ ℝ³ is the input matrix. To define the scalar output y we define the roll-pitch-yaw (also known as XYZ Euler angles) representation of R as

   R = e^{S(x̂)φ} e^{S(ŷ)θ} e^{S(ẑ)ψ},

where x̂, ŷ, ẑ stand for unit vectors along the coordinate directions. The angles φ, θ, ψ are referred to as roll, pitch, and yaw, respectively. Be sure to compute R as a function of φ, θ, ψ. Now determine the relative degree of the system for each choice of output y = φ, y = θ, and y = ψ. Assume that J is diagonal and B = x̂. Think about what would happen if B = ŷ or B = ẑ.
Problem 9.8 Zeros of the linearization. Let 0 be an equilibrium point of the undriven nonlinear system of (9.1) and A = Df(0), b = g(0), c = Dh(0) its Jacobian linearization. Assume that the Jacobian linearization is minimal. Show that the eigenvalues of the linearization of the zero dynamics of (9.56) are the same as the zeros of the Jacobian linearized linear system. What can you say about the minimum phase or nonminimum phase properties of the nonlinear system when 0 is a zero of the Jacobian linearized system?
Problem 9.9. Using the center manifold methods of Chapter 7, show that the stabilizing control law of (9.60) stabilizes the nonlinear system when the zero dynamics are asymptotically stable (i.e., the center manifold, if it exists, is stable).
Problem 9.10. Show that the disturbance decoupling problem with measurement
feedback is solvable iff the conditions of (9.99) hold.
Problem 9.11. Using Problem 9.10, show that the Model Reference Control problem can be solved iff the relative degree of the reference model of (9.100) exceeds that of the system (9.1). Further, show that the control law is given by (9.102). Derive the equation satisfied by the output e_0. Modify the control law of (9.102) so that e_0 can be made to go to zero asymptotically for arbitrary initial conditions on x, z, and any input r.
Problem 9.12. Prove Proposition 9.19.
Problem 9.13. Show that the differentials

dL_f^j h_i(x), 0 ≤ j ≤ γ_i − 1, 1 ≤ i ≤ p, (9.148)

are linearly independent if the system (9.65) has well defined relative degree. Hint:
Use the nonsingularity of the decoupling matrix of (9.68).
9.9 Exercises 443
Problem 9.14 Brunovsky Normal Form. Assume that the conditions for full
state MIMO linearization have been satisfied, i.e., the conditions of Theorem 9.16
are met. Derive the normal form of the MIMO system under these conditions,
referred to as the Brunovsky normal form. The indices 0 ≤ κ_i ≤ κ of the m
outputs are referred to as the Kronecker indices.
Problem 9.15 Chained form conversion. In this chapter you have studied how
to do full state linearization. Here we will give sufficient conditions for conversion
of a driftless system into a different kind of canonical form, called chained or
Goursat normal form. This is discussed in greater detail in Chapter 12. Consider

ẋ = g1(x)u1 + g2(x)u2. (9.149)

Define the following three distributions:

Z1 = V1,
Z2 = V2,
Z3 = [Z2, V1]. (9.151)

Can you apply your results to the kinematics of a front wheel drive car (9.152),
whose last equation reads

ẋ4 = (1/cos x4) tan x3 · u1?
Problem 9.16 Rigid robot computed torque controller [225; 285]. The dy-
namics of an n link rigid robot manipulator with joint angles θ are given
by

M(θ)θ̈ + C(θ, θ̇) = u. (9.153)

Here θ ∈ ℝⁿ is the vector of joint angles and u ∈ ℝⁿ is the vector of motor torques
exerted by the motors at each joint. Further, M(θ) ∈ ℝⁿˣⁿ is a positive definite
matrix called the moment of inertia matrix and C(θ, θ̇) ∈ ℝⁿ stands for Coriolis,
gravitation, and frictional forces. Using as outputs the angles θ, linearize the system
of (9.153). What are the zero dynamics of this system? When the joint angles are
required to follow a desired trajectory θd(·), give the asymptotic tracking control
law. This is known as the computed torque control law for robot manipulators (see
[225], Chapter 4, or [285], Chapters 7-11 for details).
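As an illustration only (a single-link special case of (9.153) with our own gains and parameters; the general law replaces the scalars below by matrices), the computed torque idea can be sketched as:

```python
import numpy as np

# Single link: M * th'' + m*g*l*cos(th) = u, a scalar case of (9.153)
m, l, g = 1.0, 1.0, 9.8
M = m * l**2                                  # scalar "inertia matrix"
C = lambda th: m * g * l * np.cos(th)         # gravity term (no friction)

kp, kv = 25.0, 10.0                           # illustrative PD gains
dt, T = 1e-3, 10.0
th, thd = 0.5, 0.0                            # start off the trajectory
for k in range(int(T / dt)):
    t = k * dt
    e, ed = th - np.sin(t), thd - np.cos(t)   # error for th_d(t) = sin t
    # computed torque: cancel the dynamics, impose e'' + kv e' + kp e = 0
    u = M * (-np.sin(t) - kv * ed - kp * e) + C(th)
    thdd = (u - C(th)) / M                    # plant dynamics
    th, thd = th + dt * thd, thd + dt * thdd
```

Because the model is cancelled exactly, the tracking error obeys a linear exponentially stable equation; model mismatch is what the PD and adaptive variants in the following problems address.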
Problem 9.17 Rigid robot PD set point controller [225; 285; 20]. Consider the
rigid robot dynamics of equation (9.153). From physical considerations it follows
that the second term on the left-hand side of the dynamics actually has the form

C(θ, θ̇) = C(θ, θ̇)θ̇ + N(θ) + K1 θ̇. (9.154)

Here C(θ, θ̇) ∈ ℝⁿˣⁿ represents the contribution of the Coriolis and centripetal
terms, N(θ) ∈ ℝⁿ represents the gravitational forces on the robot links, and K1 ∈
ℝⁿˣⁿ is a positive definite matrix representing the friction terms in the robot joints.
It is a basic fact [225] that for open-link robot manipulators

Ṁ(θ) − 2C(θ, θ̇)

is a skew symmetric matrix. For the case that θd(t) ≡ θ0, consider the following
proportional derivative (PD) control law:

(9.155)

Use the Lyapunov function

v(θ, θ̇) = ½ θ̇ᵀ M(θ) θ̇ + ½ θᵀ Kp θ.
Problem 9.18 Modified PD control of rigid robots [225]. Consider once again
the rigid robot equations of (9.153) with the form of C as given in (9.154). The
control law of the previous problem (9.155) needs to be modified
when θ(·) is required to track a non-constant θd(·). Use this control law to prove
that the tracking error e(·) = θ(·) − θd(·) converges exponentially to zero. Use the
Lyapunov function

v(e, ė) = ½ ėᵀ M(θ) ė + ½ eᵀ Kp e + ε eᵀ M(θ) ė

for suitably chosen ε.
motor angles as θm ∈ ℝⁿ. The coupling between the motor and the robot joints is
assumed to be through a stiff spring:
In the equation above, Mm, Bm stand for the moment of inertia matrix and the
damping matrix associated with the motors, K is the matrix of torsional spring
constants between the motor and robot links (usually a diagonal matrix), and u is,
as before, the vector of input torques. Linearize the dynamics of (9.157) using θ
as the outputs.
Discuss what happens to the linearization in the limit that the torsional springs
become stiff, i.e., K → ∞. In particular, can you recover the linearizing control
law of the previous problem (9.16)?
Problem 9.20. Use the linearizing control law of Problem 9.16 on the dynamics
of (9.157). Show that if the control objective is for the joint angles θ to track a
reference trajectory, then the tracking error is of order 1/|K|.
y1 = x1,
y2 = x2.
Problem 9.22 MIMO sliding mode control for decouplable systems [100]. Consider the
uncertain nonlinear system of (9.109). Assume that the nominal nonlinear system
of (9.65) has vector relative degree. Further assume that the uncertainty satisfies
the generalized matching condition of (9.114). Construct multiple sliding surfaces
and a sliding control law which gives robust tracking under certain conditions on
the perturbation.
ẋ1 = x2 + f1(x1) + Δf1(x1),
ẋ2 = x3 + f2(x1, x2) + Δf2(x1, x2),
⋮
ẋn = u.

Here the Δfi(·) are unknown but are bounded with known Lipschitz constants. The
control objective is for y = x1 to track yd(·) := x1d(·). Now define the error surface
s1 = x1 − x1d. After differentiating we get

ṡ1 = ẋ1 − ẋ1d.
Now prove, using a succession of Lyapunov functions ½ si², that given the Lipschitz
constants of the fi, Δfi, there exist k1, …, kn, τ2, …, τn such that a bounded
trajectory can be tracked asymptotically arbitrarily closely. The proof uses singular
perturbation ideas, and the reader may wish to consider the limit that the τi = 0 to
get the intuition involved in this scheme.
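A two-state numerical sketch of this idea follows; everything here, from the nominal term f1 and the perturbation Δf1 to the gains k1, k2 and the filter constant τ, is our own illustrative choice. The first-order filter supplies the derivative of the virtual control, playing the role of the problem's τi:

```python
import numpy as np

# x1' = x2 + f1(x1) + df1(x1),  x2' = u;  y = x1 tracks x1d(t) = sin t
f1  = lambda x1: np.sin(x1)          # known nominal term
df1 = lambda x1: 0.5 * np.cos(x1)    # "unknown" bounded perturbation

k1, k2, tau = 5.0, 5.0, 0.01         # surface gains and filter constant
dt, T = 1e-3, 20.0
x1, x2 = 1.0, 0.0
x2f = 0.0                            # filtered virtual control
for k in range(int(T / dt)):
    t = k * dt
    s1 = x1 - np.sin(t)                      # first error surface
    x2des = -k1 * s1 - f1(x1) + np.cos(t)    # virtual control for s1
    x2f += dt * (x2des - x2f) / tau          # tau-filter gives derivative
    s2 = x2 - x2f                            # second error surface
    u = -k2 * s2 + (x2des - x2f) / tau       # drive s2 to zero
    x1 += dt * (x2 + f1(x1) + df1(x1))
    x2 += dt * u
```

Increasing k1, k2 and shrinking τ tightens the tracking, in line with the "arbitrarily closely" claim in the problem.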
Problem 9.26 Poincaré-like control system linearization. The aim of this
problem is to give necessary conditions to convert a single input nonlinear system

ẋ = f(x) + g(x)u (9.159)

into a linear system

ż = Az + bu (9.160)

using a change of coordinates z = φ(x) alone and not state feedback. Thus, define
the push forward operator φ* operating on f(x), g(x) by

φ*f = Az, φ*g = b.

Verify that

φ*[f1, f2] = [φ*f1, φ*f2].

In words, this equation says that the push forward of the Lie bracket of f1, f2
is the Lie bracket of the push forwards of f1, f2. Use this fact to give necessary
conditions on f, g, ad_f g, ad_f² g, …, to enable the conversion of the system of
(9.159) into (9.160). Show that your conditions imply the conditions for full state
linearization of (9.159). Can you say whether these conditions that you have derived
are sufficient?
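The bracket identity above can be checked numerically. The diffeomorphism φ and the vector fields below are our own toy choices, and all Jacobians are central finite differences:

```python
import numpy as np

def jac(h, x, eps=1e-5):
    # central-difference Jacobian of h at x
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        J[:, i] = (h(x + d) - h(x - d)) / (2 * eps)
    return J

def bracket(f, g, x):
    # Lie bracket [f, g](x) = Dg(x) f(x) - Df(x) g(x)
    return jac(g, x) @ f(x) - jac(f, x) @ g(x)

phi     = lambda x: np.array([x[0], x[1] + x[0] ** 2])   # a diffeomorphism
phi_inv = lambda z: np.array([z[0], z[1] - z[0] ** 2])   # its inverse

def push(f):
    # pushforward: (phi_* f)(z) = Dphi(x) f(x) with x = phi^{-1}(z)
    return lambda z: jac(phi, phi_inv(z)) @ f(phi_inv(z))

f = lambda x: np.array([x[1], -x[0]])
g = lambda x: np.array([0.0, 1.0])

z = np.array([0.3, -0.7])
lhs = push(lambda x: bracket(f, g, x))(z)   # phi_*[f, g]
rhs = bracket(push(f), push(g), z)          # [phi_* f, phi_* g]
```

The two sides agree to within the finite-difference tolerance, which is the naturality property the problem asks you to verify analytically.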
ẋ = f0(x) + g0(x)u + Σ_{i=1}^{p} θi* fi(x),
y = h(x). (9.161)

The functions f0, g0, fi(x) are considered known, but the parameters θ* ∈ ℝᵖ
are fixed and unknown. These are referred to as the true parameters. We will
explore a control law for making y track a given bounded trajectory yd(t). Assume
that L_{g0}h(x) is bounded away from zero and the control law is the following:

u = (1/L_{g0}h) ( −L_{f0}h + ẏd + α(yd − y) − Σ_{i=1}^{p} θ̂i(t) L_{fi}h ). (9.162)

Here θ̂i(t) is our estimate of θi* at time t. Prove (using the simplest quadratic
Lyapunov function of y − yd, θ̂ − θ* that you can think of) that, under the parameter
update law

dθ̂/dt = − [ L_{f1}h, …, L_{fp}h ]ᵀ (y − yd), (9.163)

for all initial conditions θ̂(0) and x(0), y(t) − yd(t) → 0 as t → ∞. Note that
you are not required to prove anything about θ̂(t)!
Sketch the proof of the fact that if the true system of (9.161) is exponentially
minimum phase, then x(·) is bounded as well. Can you modify the control law
(9.162) and the parameter update law (9.163) for the case that the right hand side
of the system (9.161) has terms of the form

Σ_{i=1}^{p} (fi(x) + gi(x)u) θi*

with fi, gi known and θ* ∈ ℝᵖ unknown? You can assume here that

L_{g0}h + Σ_{i=1}^{p} θ̂i(t) L_{gi}h

is invertible.
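A scalar instance of this scheme (with f0 = 0, g0 = 1, h(x) = x, and a regressor, gain, and trajectory of our own choosing) can be simulated directly:

```python
import numpy as np

# x' = theta_star * f(x) + u, y = x; theta_star is unknown to the controller
f = lambda x: np.sin(x)          # known regressor, unknown coefficient
theta_star = 2.0
alpha = 4.0                      # tracking gain (our choice)

dt, T = 1e-3, 30.0
x, theta_hat = 1.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    yd, yd_dot = np.sin(t), np.cos(t)
    e = x - yd
    # certainty-equivalence law in the spirit of (9.162)
    u = -theta_hat * f(x) + yd_dot - alpha * e
    # gradient parameter update in the spirit of (9.163)
    theta_hat += dt * f(x) * e
    x += dt * (theta_star * f(x) + u)
```

With V = ½e² + ½(θ̂ − θ*)², one gets V̇ = −α e², which is exactly the quadratic Lyapunov argument the problem asks for; note that nothing forces θ̂ → θ*.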
Problem 9.28 Convergence of the Picard iteration for the Devasia-Paden-Chen
method [85]. Prove that the Picard-Lindelöf iteration applied to equation (9.122)
converges by showing that the right hand side is a contraction map for k2 small
enough.
Problem 9.29 MIMO Devasia-Chen-Paden scheme. Verify that the Deva-
sia-Chen-Paden scheme can be applied to nonminimum phase MIMO systems,
provided that they are decouplable by (possibly dynamic) state feedback.
Problem 9.30 MIMO Byrnes-Isidori regulator. Assume that the open loop
system is a square MIMO system. Retrace the steps in the proof to give necessary
and sufficient conditions for the existence of the Byrnes-Isidori regulator.
Problem 9.31 Byrnes-Isidori regulator for systems which have relative degree.
First consider a SISO system with normal form given by (9.29). Note that in these
coordinates h(ξ, η) = ξ1. Use this to get a more explicit formula for π(w), u_ff(w)
for the regulator. Hint: ξ1 = π1(w), ξ2 = (∂π1/∂w) s(w) =: π2(w), ….
For the MIMO case (assuming that the system is decouplable by static state
feedback) repeat this calculation.
10
Design Examples Using Linearization
This chapter has been heavily based on the research work and joint papers with
John Hauser, of the University of Colorado; George Meyer, of NASA Ames; Petar
Kokotović, of the University of California, Santa Barbara; and Claire Tomlin, of
Stanford University.
10.1 Introduction
In this chapter we discuss the application of the theory of input-output linearization
by state feedback to a few classes of applications. We have deliberately chosen two
examples where the application of the theory is not completely straightforward.
Several more straightforward applications of the theory are covered in the Exercises
of this chapter and Chapter 9. Our motivation for choosing non-straightforward
applications is twofold:
1. To show that whenever one applies theory there is always a need to make a
number of practical "tweaks" to make the theory go.
2. To introduce some very important new ideas concerning the
a. control of nonregular systems, or nonlinear systems that fail to have relative
degree;
b. control of "slightly non-minimum phase systems," where the presence
of high speed non-minimum phase dynamics makes the application of
traditional input-output linearization infeasible.
The first example studied, the ball and beam system, is an example of a system
which fails to have relative degree. We discuss how to approximately control
450 10. Design Examples Using Linearization
10.2.1 Dynamics
Consider the ball and beam system depicted in Figure 10.1. Let the moment of
inertia of the beam be J, the mass of the ball be M, and the acceleration of gravity
be G. Choose as generalized coordinates for this system the angle, θ, of the beam
and the position, r, of the ball. Then the Lagrangian equations of motion are given
by
0 = r̈ + G sin θ − r θ̇²,
(10.1)
τ = (M r² + J) θ̈ + 2M r ṙ θ̇ + MG r cos θ,

where τ is the torque applied to the beam and there is no force applied to the ball.
Using the invertible transformation

τ = 2M r ṙ θ̇ + MG r cos θ + (M r² + J) u, (10.2)

the equations of motion take the state space form

ẋ1 = x2,
ẋ2 = x1 x4² − G sin x3,
ẋ3 = x4, (10.3)
ẋ4 = u,
y = x1 = h(x)

(so that f(x) = (x2, x1x4² − G sin x3, x4, 0)ᵀ and g(x) = (0, 0, 0, 1)ᵀ),
where x = (x1, x2, x3, x4)ᵀ := (r, ṙ, θ, θ̇)ᵀ is the state and y = h(x) := r is the
output of the system (i.e., the variable that we want to control). Note that (10.2) is
a nonlinear input transformation.
y = x1,
ẏ = x2,
ÿ = x1 x4² − G sin x3, (10.4)
y⁽³⁾ = x2 x4² − G x4 cos x3 + 2 x1 x4 u,

with b(x) := x2 x4² − G x4 cos x3 and a(x) := 2 x1 x4.
At this point, if a(x) were nonzero in the region of interest, we could use the control
law

u = (1/a(x)) [−b(x) + v] (10.5)

to obtain

y⁽³⁾ = v. (10.6)

Unfortunately, for the ball and beam, the control coefficient a(x) is zero whenever
the angular velocity x4 = θ̇ or the ball position x1 = r is zero. Therefore, the relative
degree of the ball and beam system is not well defined! This is due to the fact that
a(x) = L_g L_f² h(x) = 2 x1 x4 is neither identically zero near the origin nor nonzero
at the origin.

(10.7)
y = x1 = K,
ẏ = x2 = 0, (10.8)
ÿ = K x4² − G sin x3 = 0.

To find the dynamics that may evolve on this manifold, we note that

(10.10)

implies that

x4 = 0 or ẋ4 = (G/(2K)) cos x3. (10.11)
10.2 The Ball and Beam Example 453
Thus we have the possibility of three distinct behaviors for the zero dynamics
(which are constrained to live on M):
where ad_f^i g denotes the iterated Lie bracket [f, [f, …, [f, g] …]]. In turn, this is
given by
does not lie within the span of the first three columns (vector fields) of (10.15).
Failing this condition, we see that it is not possible to full-state linearize the ball
and beam system.
Approximation 1. Let ξ1 = φ1(x) = h(x) = x1. Then, along the system
trajectories, we have

ξ̇1 = x2, take ξ2 = φ2(x) = x2;
ξ̇2 = −G sin x3 + x1 x4², take ξ3 = φ3(x) = −G sin x3, ψ2(x) = x1 x4²;
ξ̇3 = −G x4 cos x3, take ξ4 = φ4(x) = −G x4 cos x3;
ξ̇4 = G x4² sin x3 + (−G cos x3) u =: b(x) + a(x) u =: v(x, u),
(10.18)

so that

ξ̇1 = ξ2,
ξ̇2 = ξ3 + ψ2(x),
ξ̇3 = ξ4,
ξ̇4 = b(x) + a(x) u.
FIGURE 10.3. Simulation results for yd(t) = R cos(πt/30) using the first approximation:
(a) e = yd − φ1, (b) ψ2, (c) θ, (d) r.
Approximation 2. Again, let ξ1 = φ1(x) = h(x). Then, along the system
trajectories, we have

ξ̇1 = x2, take ξ2 = φ2(x) = x2;
ξ̇2 = −G sin x3 + x1 x4², take ξ3 = φ3(x) = −G sin x3 + x1 x4²;
ξ̇3 = −G x4 cos x3 + x2 x4² + 2 x1 x4 u,
     take ξ4 = φ4(x) = −G x4 cos x3 + x2 x4², ψ3(x, u) = 2 x1 x4 u;
ξ̇4 = x1 x4⁴ + (−G cos x3 + 2 x2 x4) u =: b(x) + a(x) u =: v(x, u),
(10.19)

so that

ξ̇1 = ξ2,
ξ̇2 = ξ3,
ξ̇3 = ξ4 + ψ3(x, u),
ξ̇4 = b(x) + a(x) u.
This time the approximate system is obtained by modifying the g vector field.
Pulling back the modified g vector field (obtained by neglecting ψ3(x, u)) to the
original x coordinates, we get

(10.20)
FIGURE 10.4. Simulation results for yd(t) = R cos(πt/30) using the second approximation:
(a) e = yd − φ1, (b) ψ3, (c) θ, (d) r.
FIGURE 10.5. Simulation results for yd(t) = R cos(πt/30) using the Jacobian approxima-
tion: (a) e = yd − φ1, (b) ψ3, (c) θ, (d) r.
Jacobian Approximation
To provide a basis for comparison, we will also calculate a linear control law
based on the Jacobian approximation. Previously, we used the invertible nonlinear
transformation of (10.2) to simplify the form of ẋ4. Since we are only allowed
linear functions in the control, we must work directly with the original input τ and
the true angular acceleration θ̈ = ẋ4 given by

ẋ4 = (−MG x1 cos x3 − 2M x1 x2 x4)/(M x1² + J) + (1/(M x1² + J)) τ. (10.21)
We will linearize about x = 0, τ = 0.
Since the output is a linear function of the state, we begin with ξ1 = φ1(x) =
h(x). Then, along the system trajectories, we have

ξ̇1 = x2 (= φ2(x)),
⋮
ξ̇4 = (MG²/J) x1 + (−G/J) τ
    + (MG² x1 cos x3 + 2MG x1 x2 x4)/(M x1² + J) − (MG²/J) x1
    + (G/J − G/(M x1² + J)) τ,
(10.22)

where b(x) := (MG²/J) x1 and a(x) := −G/J.
The Jacobian approximation is, of course, obtained by replacing the f vector field
by its linear approximation and the g vector field by its constant approximation.
Figure 10.5 shows the simulation results from the Jacobian approximation. Un-
fortunately, the control system with the linear controller is not stable for R greater
than about 7.
The following table provides a direct comparison of the error e = yd − ξ1 for
the three approximations:
Note that approximation 2 provides better tracking for this class of inputs by
about an order of magnitude over approximation 1. Due to the large excursions
from the origin, the Jacobian approximation is no longer a good approximation, so
the system becomes unstable. Of course, the other approximations will eventually
become unstable as R becomes large.
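To make the comparison concrete, here is a minimal simulation (our own step size, horizon, and pole locations) of the approximation 2 tracking law applied to the true dynamics (10.3):

```python
import numpy as np

G = 9.8

def xi(x):
    # Approximation 2 coordinates phi_1..phi_4 of Section 10.2
    x1, x2, x3, x4 = x
    return np.array([x1,
                     x2,
                     -G * np.sin(x3) + x1 * x4**2,
                     -G * x4 * np.cos(x3) + x2 * x4**2])

def control(x, t, R=2.0, w=np.pi / 30):
    x1, x2, x3, x4 = x
    a = -G * np.cos(x3) + 2 * x2 * x4     # a(x) of (10.19); a(0) = -G != 0
    b = x1 * x4**4                        # b(x) of (10.19)
    # yd(t) = R cos(w t) and its first three derivatives
    yd = np.array([R * np.cos(w * t), -R * w * np.sin(w * t),
                   -R * w**2 * np.cos(w * t), R * w**3 * np.sin(w * t)])
    yd4 = R * w**4 * np.cos(w * t)
    alphas = np.array([16.0, 32.0, 24.0, 8.0])   # (s + 2)^4 is Hurwitz
    v = yd4 - alphas @ (xi(x) - yd)
    return (-b + v) / a

dt, T = 1e-3, 60.0
x = np.array([2.0, 0.0, 0.0, 0.0])       # ball at r = 2, beam level
for k in range(int(T / dt)):
    t = k * dt
    u = control(x, t)
    x1, x2, x3, x4 = x
    x = x + dt * np.array([x2, x1 * x4**2 - G * np.sin(x3), x4, u])
```

The neglected term ψ3 = 2x1x4u stays tiny along this slow trajectory, which is consistent with the good tracking reported for approximation 2 above.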
ξ̇1 = ξ2,
ξ̇2 = ξ3,
ξ̇3 = B x2 x4² − BG x4 cos x3 + 2B x1 x4 u, (10.23)
ẋ4 = u.

In the region where 2B x1 x4 ≠ 0, the exact tracking law is

u = (1/(2B x1 x4)) (−B x2 x4² + BG x4 cos x3 + v). (10.24)
10.2 The Ball and Beam Example 459
ẋ4 = (1/(2B x1 x4)) (−B x2 x4² + BG x4 cos x3 + v). (10.25)

Setting v = 0 gives the zero dynamics of the system (10.23) under the control law
(10.24), which are of the type (c) of equation (10.12) of the preceding subsection,
namely,

ẋ4 = (G cos x3)/(2K), where K x4² = G sin x3. (10.26)
These zero dynamics evolve on the one-dimensional manifold S1, causing the beam
to oscillate between x3 = 0 and x3 = π. Now consider the problem of tracking
a desired output trajectory yd(t) corresponding to a periodic motion of the ball
along the beam. State trajectories which implement yd(t) traverse the region in
which x1 x4 = 0, since x4 must necessarily change sign. In M0, the approximate
tracking law
where the αi are selected so that s⁴ + α4 s³ + α3 s² + α2 s + α1 is Hurwitz. We
expect the system to stabilize nicely to this δ-region (around x3 = 0), since the
zero dynamics correspond to a single equilibrium. In the region {x : |x1 x4| ≥ δ},
the exact tracking law of equation (10.24) may be used to stabilize the output of
the system to yd(t):

v = yd^(3) − Σ_{i=1}^{3} αi (ξi − yd^(i−1)),

where s³ + α3 s² + α2 s + α1 is Hurwitz. Because the zero dynamics in this region
are constrained to lie on S1, we cannot expect such nice behavior of the inverse
dynamics of the system in this region. Figure 10.6 shows the result of tracking
yd(t) = 1.9 sin(1.3t) + 3 using control based on approximate linearization only,
and Figure 10.7 shows the result of tracking the same trajectory using a control law
that switches between approximate and exact tracking. The approximate tracking
scheme is unstable, due to the neglect of the 2B x1 x4 term away from the singularity,
where the magnitude of x1 is large. The switched scheme is stable, yet the driven
dynamics cause the beam to continually flip upside down and right-side up.
,',
! ~,
Time (sec)
FIGURE 10.7. Tracking yd(t) = 1.9 sin(1.3t) + 3 (shown as a dashed curve) using the
switched tracking law for the same parameters as in the previous figure, and δ = 3. The
system is stable, although the driven dynamics x3 = θ flip between θ = 0 and θ = π, as
expected.
ẋ = f(x) + g(x)u,
y = h(x), (10.28)

where x ∈ ℝⁿ, u, y ∈ ℝ, f and g are smooth vector fields on ℝⁿ, and h is a
smooth function. Systems of this form which fail to have relative degree are referred
to as nonregular. We will assume, as before, that x = 0 is an equilibrium point of
the undriven system.
Thus, L_g L_f^(γ−1) h(x) = 0 at x = 0 but is not identically zero in a neighborhood U
of x = 0, i.e., L_g L_f^(γ−1) h(x) is a function that is of order O(x) rather than O(1).
Here and in the sequel, we will use the O notation. Recall that a function δ(x) is
said to be O(x)^p if

lim_{|x|→0} |δ(x)| / |x|^p exists and is ≠ 0.

Also, functions that are O(x)⁰ are referred to as O(1). By abuse of notation, we will
also use the notation O(x, u)² to mean functions of x, u which are sums of terms
of O(x)², O(xu), and O(u)². Similarly for O(x, u)^p. Then the relative degree of
the system is not well defined, and the input-output linearizing control law cannot
be derived in the usual fashion. Hence, we seek a set of functions of the state,
φi(x), i = 1, …, γ, that approximate the output and its derivatives in a special
way. The integer γ will be determined during the approximation process. Since
our control objective is tracking, the first function, φ1(x), should approximate the
output function, that is,
where ψ0(x, u) is O(x, u)² (actually, ψ0 does not depend on u, but for consistency
below we include it). Differentiating φ1(x) along the system trajectories, we get
until at some step, say γ, the control term, L_g φγ(x), is O(1), that is,
Definition 10.1 Robust Relative Degree. We say that a nonlinear system
(10.28) has a robust relative degree of γ about x = 0 if there exist smooth functions
where the functions ψi(x, u), i = 0, …, γ, are O(x, u)² and a(x) is O(1).
Remarks:
The robust relative degree is easily characterized (see Problem 10.1) to be the
relative degree of the Jacobian linearized version of the system (10.28) about
x = 0, u = 0:

ż = Az + bu,
y = cz, (10.36)
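For the ball and beam, this characterization is easy to check from the Jacobian linearization of (10.3) at the origin (a small sketch of ours):

```python
import numpy as np

G = 9.8
# Jacobian linearization of the ball and beam (10.3) at x = 0, u = 0
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, -G, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
b = np.array([0.0, 0.0, 0.0, 1.0])
c = np.array([1.0, 0.0, 0.0, 0.0])

# relative degree of (A, b, c): smallest k with c A^(k-1) b != 0
k, row = 1, c
while abs(row @ b) < 1e-12:
    row = row @ A
    k += 1
```

Here k comes out to 4, matching the robust relative degree γ = 4 used for the ball and beam approximations of Section 10.2.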
where α and β are smooth functions and α(0) = 0, while β(0) ≠ 0. In order to
show that this procedure produces an approximation of the true system, it is left
to Problem 10.3 to prove that the functions φi(·) can be used as part of a local
nonlinear change of coordinates.
Proposition 10.2. Suppose that the nonlinear system (10.28) has robust rel-
ative degree γ. Then the functions φi(·), i = 1, …, γ, are independent in a
neighborhood of the origin.
With γ independent functions φi(·) in hand, we can, by the Frobenius theo-
rem, complete the nonlinear change of coordinates with a set of functions, ηi(x),
i = 1, …, n − γ, such that
Choosing n − γ of these independent of the φi(x) as before, we can rewrite the
true system (10.28) as
y = ξ1 + ψ0(x, u),
where q(ξ, η) is L_f η expressed in (ξ, η) coordinates. Note that the form (10.39)
is a generalization of the standard normal form of Chapter 9, with the extra terms
ψi(x, u), i = 0, …, γ, of O(x, u)². Thus the control law

u = (1/a(ξ, η)) [−b(ξ, η) + v] (10.40)

approximately linearizes the system (10.28) from the input v to the output y up
to terms of O(x, u)². If the robust relative degree of the system (10.28) is γ = n,
then the system (10.28) is almost completely linearizable from input to state as
well (since there will be no η state variables). This situation was investigated by
Krener [169], who showed that the system

ẋ = f(x) + g(x)u (10.41)

is full-state linearizable to terms of O(x, u)^p if and only if the distribution
improve the quality of the approximation, one may insist on choosing these terms to
be O(x)^p for some p ≥ 2. There is less latitude in the choice of the functions ψi(x).
They must be neglected if they are O(x) or higher and not neglected if they are
O(1) (this determines γ). We cannot, in general, guarantee that an approximation
of O(x, u)^p for p > 2 can be found. At this level of generality, it is difficult to
give analytically rigorous design guidelines for the choice of the functions ψi(x).
However, from the ball and beam example of Section 10.2, it would appear that it
is advantageous to have the ψi(x) be identically zero for as long (as large an i)
as possible. We conjecture that the larger the value of the first i at which either
ψi(x) or ψi(x, u) is nonzero, the better the approximation. It is also important
to note the distinction between the nonlinear feedback control law (10.40), which
approximately linearizes the system (10.39), and the linear feedback control law
obtained from the Jacobian linearization of the original system (10.28), given by

u = (1/(c A^(γ−1) b)) [−c A^γ x + v]. (10.44)
It may be verified (reader, please verify!) that they agree up to first order at x = 0,
since c A^(γ−1) b = a(0) and c A^γ = dL_f φγ(0) (recall c = dh(0)). It is also useful
to note that the control law (10.40) is the exact input-output linearizing control law
for the approximate system

ξ̇1 = ξ2,
⋮
ξ̇_(γ−1) = ξγ, (10.45)
ξ̇γ = b(ξ, η) + a(ξ, η)u,
η̇ = q(ξ, η),
y = ξ1.
In general, we can only guarantee the existence of control laws of the form (10.40)
that approximately linearize the system up to terms of O(x, u)²; the Jacobian law
of (10.44) is such a law. In specific applications, we see that the control law (10.40)
may produce better approximations (the ball and beam of Section 10.2 was lin-
earized up to terms of O(x, u)³). Furthermore, the resulting approximations may
be valid on larger domains than the Jacobian linearization (also seen in the ball and
beam example). We try to make this notion precise by studying the properties en-
joyed by the approximately linearized system (10.28), (10.40) on a parameterized
family of operating envelopes, defined as follows:
Definition 10.3 Operating Envelopes. We call U_ε ⊂ ℝⁿ, ε > 0, a family of
operating envelopes provided that

U_δ ⊂ U_ε whenever δ < ε (10.46)

and

sup{δ : B_δ ⊂ U_ε} = ε. (10.47)
10.3 Approximate Linearization for Nonregular SISO Systems 465
• It is not necessary that each U_ε be bounded (or compact), although this might
be useful in some cases.
• Since the largest ball that fits in U_ε is B_ε, the set U_ε must get smaller in at least
one direction as ε is decreased.
The functions ψi(x, u) that are omitted in the approximation are of O(x, u)²
in a neighborhood of the origin. However, if we are interested in extending the
approximation to (larger) regions, say of the form of U_ε, we will need the following
definition:
Definition 10.4. A function ψ : ℝⁿ × ℝ → ℝ is said to be of uniformly higher
order on U_ε × B_σ ⊂ ℝⁿ × ℝ, ε > 0, if for some σ > 0 there exists a monotone
increasing function of ε, K_ε, such that
Remarks:
u = (1/a(ξ, η)) [ −b(ξ, η) + yd^(γ) + α_(γ−1)(yd^(γ−1) − ξγ) + ··· + α0(yd − ξ1) ],
(10.49)
Theorem 10.5. Let U_ε, ε > 0, be a family of operating envelopes and suppose
that:
• The zero dynamics of the approximate system (10.45) (i.e., η̇ = q(0, η)) are
exponentially stable, and q is Lipschitz in ξ and η on φ(U_ε) for each ε.
• The functions ψi(x, u) are uniformly higher order on U_ε × B_σ.
Then for ε sufficiently small and desired trajectories with sufficiently small deriva-
tives (yd, ẏd, …, yd^(γ)), the states of the closed loop system (10.28), (10.49) will
remain bounded, and the tracking error will be O(ε).
e1 = ξ1 − yd,
e2 = ξ2 − ẏd,
⋮ (10.50)
eγ = ξγ − yd^(γ−1).
Then the closed loop system (10.28), (10.49) (equivalently, (10.39), (10.49)) may
be expressed as

ėi = e_(i+1) + ψi(x, u(x, Yd)), i = 1, …, γ − 1,
ėγ = −α0 e1 − α1 e2 − ··· − α_(γ−1) eγ + ψγ(x, u(x, Yd)), (10.51)
η̇ = q(ξ, η),
or, compactly,

(10.52)

where Yd := (yd, ẏd, …, yd^(γ)). Since the zero dynamics are exponentially stable,
the converse Lyapunov theorem implies the existence of a Lyapunov function for
the system
satisfying
for some positive constants k1, k2, k3, and k4. We first show that e and η are
bounded. To this end, consider as Lyapunov function for the error system (10.52)

(10.54)

(10.55)
(10.56)

and the function ψ(x, u) is uniformly of higher order with respect to U_ε × B_σ,
and u(x, Yd) is locally Lipschitz in its arguments with u(0, 0) = 0:

(10.58)

Define

(10.61)
Then for all μ ≤ μ0 and all ε sufficiently small, we have

V̇ ≤ −(1/4)|e|² − (μ k3/2)|η|² + μ (k4 k_q b_d)² + (ε K_ε b_d)². (10.62)
Thus, V̇ < 0 whenever |η| or |e| is large, which implies that |η| and |e|, and hence |ξ|
and |x|, are bounded. The above analysis is valid for (x, u) ∈ U_ε × B_σ. Indeed, by
choosing b_d sufficiently small and appropriate initial conditions, we can guarantee
that the state remains in U_ε and the input is bounded by σ. Using this fact, we may
abuse notation and write the function ψ(x, u(x, Yd)) as εψ(t) and note that

ė = Ae + εψ(t) (10.63)

is an exponentially stable linear system driven by an order ε input. Thus, we
conclude that the tracking error will be O(ε). □
aircraft as a rigid body upon which a set of forces and moments act. Then, with
r ∈ ℝ³, R ∈ SO(3), and ω ∈ ℝ³ being the aircraft position, orientation, and
angular velocity, respectively, the equations of motion can be written as

m r̈ = R f_a + m g,
Ṙ = ω × R, (10.64)

where f_a and τ_a are the force and moment acting on the aircraft expressed relative
to the aircraft. The subscript "a" means that a quantity is expressed with respect
to the aircraft reference frame. Depending on the aircraft and its mode of flight,
the forces and moments can be generated by aerodynamics (lift, drag, and roll-
pitch-yaw moments), by momentum exchange (gross thrust vectoring and reaction
controls to generate moments), or a combination of the two. The flight envelope
of the aircraft is the set of flight conditions for which the pilot and/or the control
system can affect the forces and moments needed to remain in the envelope and
achieve the desired task.
Figure 10.8 shows the aircraft with the coordinate frame A attached at the (nominal)
center of mass. The x-axis is directed forward toward the nose of the aircraft and is also
known as the roll axis, since positive rotation about the x-axis coincides with rolling
the aircraft to the right. The y-axis is directed toward the right wing and is called
the pitch axis (positive rotation is a pitch up). The z-axis is directed downward
and is also known as the yaw axis (we yaw to the right about this axis). Also
shown in Figure 10.8 is the (inertial) runway coordinate frame R. The x-, y-, and
z-axes of the runway frame are often oriented in the north, east, and down (N-E-D)
directions, respectively. Four exhaust nozzles on the turbo-fan engine provide the
gross thrust for the aircraft. These nozzles can be rotated from the aft position (used
for conventional wing-borne flight) forward approximately 100 degrees, allowing
jet-borne flight and nozzle braking. The throttle and nozzle controls thus provide
two degrees of freedom of thrust vectoring within the x-z plane of the aircraft.
(If the line of action of the gross thrust does not pass through the aircraft center of
mass, then this thrust will also produce a net pitching moment.)
In addition to the conventional aerodynamic control surfaces (aileron, stabilator,
and rudder for roll, pitch, and yaw moments, respectively), the Harrier also has a
reaction control system (RCS) to provide moment generation during jet-borne and
transition flight. Reaction valves in the nose, tail, and wingtips use bleed air from
the high-pressure compressor of the engine to produce thrust at these points and
therefore moments (and forces) at the aircraft center of mass. The design of the
aerodynamic and reaction controls provides complete (three degree of freedom)
moment generation throughout the flight envelope of the aircraft. Since moments
are often produced by applying a single force rather than a couple, a nonzero force
(proportional to the moment) will usually be seen at the aircraft center of mass.
Using the throttle, nozzle, roll, pitch, and yaw controls, we can produce (within
physical limits) any moment and any force in the x-z plane of the aircraft. The
function F, taking the control inputs
is complex and depends upon the state of the aircraft system (airspeed, altitude,
etc.). The projection of this function taking the moments and the x and z compo-
nents of force as outputs is one-to-one and hence invertible. That is, given a desired
aircraft moment and (x-z plane) force that is achievable at the current aircraft state,
there is a unique control input vector (throttle, nozzle, roll, pitch, yaw) that will
produce that force and moment. Letting u = (f_ax, f_az, τ_a)ᵀ, this function can be
written as
where r, ṙ, R, and ω compose the aircraft state. Given the function C, we are free to
consider the desired moment and (x-z) force as the aircraft control input in place of
the true control input. The idea of inverting the algebraic nonlinearities present in
the system has been applied to real flight control problems [207]. It is now natural
to ask what form the (state dependent) function taking u to f_ay will take. Since
moments are produced by applying forces to the aircraft, one is hopeful that the
resulting y-axis force at the aircraft center of mass will be a (state dependent) linear
function of the moment acting on the aircraft. Note that this is not necessarily the
case. For example, consider the generation of a right rolling moment (from the
pilot's point of view) during jet-borne flight. Figure 10.9 shows the geometry of
the wingtip reaction control valves. For small ranges of moment, the left reaction
valve opens and blows downward, creating a net upward force . Once the left
reaction valve is fully open, the right reaction valve opens and blows upward,
which reduces the net upward force. In this case, there is a nonlinear coupling
between the rolling moment and the force in the vertical (z-axis) direction on
the aircraft. This case is easily reconciled, however, since we can directly affect
the vertical (z-axis) force using the throttle and nozzle. Clearly, forces in the x-z
plane and moments about the y-axis will not contribute to y-axis forces . Thus
we consider the y-axis forces generated by rolling (x-axis) and yawing (z-axis)
moments. Yawing moments are generated by applying a force at the tail of the
aircraft (by aerodynamic or reaction control methods). As long as this force is
effectively applied at the same point regardless of the magnitude of the moment,
there will be a (state dependent) linear relationship between the z-axis moment and
the resulting y-axis force. The coupling between rolling (x-axis) moments and y-
axis forces is more subtle and is the result of the geometry of the reaction control
system. As Figure 10.9 shows, the forces used to generate the rolling moment
are not perpendicular to the y-axis of the aircraft. Thus, when a positive rolling
moment is commanded, a negative force is generated in the y-axis direction (i.e., the
airplane will initially accelerate to the left when it is commanded to go right). Also,
as mentioned above, depending on the magnitude of the rolling moment, the right
reaction valve could be actively blowing upward or be fully closed. Fortunately,
the distance to and angle of the upward and downward reaction valve thrust vectors
are equal. For this reason, the relationship between the rolling moment and y-axis
472 10. Design Examples Using Linearization
\[
\begin{pmatrix} m\ddot r \\ J\dot\omega \end{pmatrix} = B(x)\,u, \tag{10.68}
\]
where B is the (state dependent) 6 x 5 matrix providing the full vector of aircraft
forces and moments given the control input u. In particular, B has the form
\[
B = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 0 & \beta_{\text{roll}} & 0 & \beta_{\text{yaw}} \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}, \tag{10.69}
\]
where β_roll and β_yaw are the (scalar) functions giving the coupling between the roll
and yaw moments and the y-axis force.
FIGURE 10.10. The planar vertical takeoff and landing (PVTOL) aircraft
10.4 Nonlinear Flight Control 473
FIGURE 10.11. Block diagram of the PVTOL aircraft system
where −1 is the gravitational acceleration and ε is the (small) coefficient giving
the coupling between the rolling moment and the lateral acceleration of the aircraft.
Note that ε > 0 means that applying a (positive) moment to roll left produces
an acceleration to the right (positive x). Figure 10.11 provides a block diagram
representation of this dynamical system. The PVTOL aircraft system is the natural
restriction of V/STOL aircraft to jet-borne operation (e.g., hover) in a vertical plane.
Our study of this simple planar model is but the first step in an ongoing project
to understand and develop robust methods for the control of highly maneuverable
aircraft systems.
\[
x^{(\gamma_1)} = v_1, \qquad y^{(\gamma_2)} = v_2. \tag{10.72}
\]
Here v is our new input, and z is used to denote the entire state of the system
(including compensator states, if necessary). Proceeding in the usual way, we
differentiate each output until at least one of the inputs appears. This occurs after
474 10. Design Examples Using Linearization
differentiating twice and is given by (rewriting the first two equations of (10.70))
\[
\begin{bmatrix} \ddot x \\ \ddot y \end{bmatrix}
= \begin{bmatrix} 0 \\ -1 \end{bmatrix}
+ \begin{bmatrix} -\sin\theta & \varepsilon\cos\theta \\ \cos\theta & \varepsilon\sin\theta \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}. \tag{10.73}
\]
\[
\begin{aligned}
\ddot x &= v_1, \\
\ddot y &= v_2, \tag{10.75} \\
\ddot\theta &= \frac{1}{\varepsilon}\left(\sin\theta + \cos\theta\, v_1 + \sin\theta\, v_2\right).
\end{aligned}
\]
This feedback law makes our input-output map linear, but has the unfortunate
side effect of making the dynamics of θ unobservable. In order to guarantee the
internal stability of the system, it is not sufficient to look at input-output stability;
we must also show that all internal (unobservable) modes of the system are stable
as well.
The first step in analyzing the internal stability of the system (10.75) is to look
at the zero dynamics of the system. Constraining the outputs and their derivatives to
zero by setting v_1 = v_2 = 0 (and using appropriate initial conditions), we find the
zero dynamics of (10.75) to be
\[
\ddot\theta = \frac{1}{\varepsilon}\sin\theta. \tag{10.76}
\]
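The pendulum-like character of (10.76) is easy to probe numerically. The sketch below is our own illustration (the helper names, step sizes, and tolerances are assumptions, not code from the text): it integrates θ'' = (1/ε) sin θ with a fixed-step RK4 scheme and estimates the period of a small oscillation about the inverted equilibrium θ = π, which should be close to 2π√ε.

```python
import math

def zero_dynamics_flow(theta0, omega0, eps, dt=1e-4, t_end=5.0):
    """Integrate theta'' = (1/eps)*sin(theta) with fixed-step RK4;
    returns the sampled trajectory as a list of (t, theta) pairs."""
    def rhs(th, om):
        return om, math.sin(th) / eps
    t, th, om = 0.0, theta0, omega0
    traj = [(t, th)]
    for _ in range(int(t_end / dt)):
        k1t, k1o = rhs(th, om)
        k2t, k2o = rhs(th + 0.5*dt*k1t, om + 0.5*dt*k1o)
        k3t, k3o = rhs(th + 0.5*dt*k2t, om + 0.5*dt*k2o)
        k4t, k4o = rhs(th + dt*k3t, om + dt*k3o)
        th += dt*(k1t + 2*k2t + 2*k3t + k4t)/6
        om += dt*(k1o + 2*k2o + 2*k3o + k4o)/6
        t += dt
        traj.append((t, th))
    return traj

def small_oscillation_period(eps, amp=0.01):
    """Estimate the period of a small oscillation about theta = pi by
    timing successive upward crossings of theta = pi."""
    traj = zero_dynamics_flow(math.pi + amp, 0.0, eps,
                              t_end=5*math.pi*math.sqrt(eps))
    crossings = []
    for (t0, th0), (t1, th1) in zip(traj, traj[1:]):
        if th0 < math.pi <= th1:           # upward crossing of pi
            frac = (math.pi - th0) / (th1 - th0)
            crossings.append(t0 + frac*(t1 - t0))
    return crossings[1] - crossings[0]
```

For ε = 0.1 the estimate agrees closely with 2π√ε, consistent with the family of closed orbits described below.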
FIGURE 10.13. Response of nonminimum phase PVTOL system to smoothed step input.
Panels: (a) x, (b) θ, (c) u1, (d) u2, and the phase portrait θ̇(θ).
stable, but not asymptotically stable, and is surrounded by a family of closed orbits
with periods ranging from 2π√ε up to ∞ (there is a heteroclinic orbit connecting
two saddles). Outside of these orbits is a family of unbounded trajectories. Thus,
depending on the initial condition, the aircraft will either rock from side to side
forever or roll continuously in one direction (except at the isolated equilibria).
Thus, this nonlinear system is nonminimum phase. Figure 10.13 shows the re-
sponse of the system (10.75) when (v_1, v_2) is chosen (by a stable feedback law)
so that x will track a smooth trajectory from 0 to 1 and y will remain at zero. The
bottom section of the figure shows snapshots of the PVTOL aircraft's position and
orientation at 0.2 second intervals. From the phase portrait of θ (Figure 10.13(e)),
we see that the zero dynamics certainly exhibit pendulum-like behavior. Initially,
the aircraft rolls left (positive θ) to almost 2π. Then it rolls right through four rev-
olutions before settling into a periodic motion about the −3π equilibrium point.
Since v_1 and v_2 are zero after t = 5, the aircraft continues rocking approximately
±π from the inverted position. From the above analysis and simulations, it is clear
that exact input-output linearization of a system such as (10.70) can produce un-
desirable results. The source of the problem lies in trying to control modes of the
system using inputs that are weakly (ε) coupled rather than controlling the system
in the way it was designed to be controlled and accepting a performance penalty
for the parasitic (ε) effects. For our simple PVTOL aircraft, we should control the
linear acceleration by vectoring the thrust (using moments to control this
vectoring) and adjusting its magnitude using the throttle.
\[
\begin{aligned}
\ddot x_m &= -\sin\theta\, u_1, \\
\ddot y_m &= \cos\theta\, u_1 - 1, \tag{10.77} \\
\ddot\theta &= u_2,
\end{aligned}
\]
\[
\begin{bmatrix} \ddot x_m \\ \ddot y_m \end{bmatrix}
= \begin{bmatrix} 0 \\ -1 \end{bmatrix}
+ \begin{bmatrix} -\sin\theta & 0 \\ \cos\theta & 0 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}. \tag{10.78}
\]
Now, however, the matrix multiplying u is singular, which implies that there is no
static state feedback that will linearize (10.77). Since u_2 enters the system
(10.77) through θ̈, we must differentiate (10.78) at least two more times. Let u_1 and
u̇_1 be states (in effect, placing two integrators before the u_1 input) and differentiate
(10.78) twice, giving
\[
\begin{bmatrix} x_m^{(4)} \\ y_m^{(4)} \end{bmatrix}
= \begin{bmatrix} (\sin\theta)\dot\theta^2 u_1 - 2(\cos\theta)\dot\theta\dot u_1 \\ -(\cos\theta)\dot\theta^2 u_1 - 2(\sin\theta)\dot\theta\dot u_1 \end{bmatrix}
+ \begin{bmatrix} -\sin\theta & -(\cos\theta)u_1 \\ \cos\theta & -(\sin\theta)u_1 \end{bmatrix}
\begin{bmatrix} \ddot u_1 \\ u_2 \end{bmatrix}. \tag{10.79}
\]
FIGURE 10.14. Block diagram of the augmented model PVTOL aircraft system
The matrix operating on our new inputs (ü_1, u_2)ᵀ has determinant equal to u_1
and therefore is invertible as long as the thrust u_1 is nonzero. Figure 10.14 shows a
block diagram of the model system with u_1 and u̇_1 considered as states. Note that
each input must go through four integrators to get to the output. Thus, we linearize
(10.77) using the dynamic state feedback law
\[
\begin{bmatrix} \ddot u_1 \\ u_2 \end{bmatrix}
= \begin{bmatrix} -\sin\theta & \cos\theta \\[2pt] -\dfrac{\cos\theta}{u_1} & -\dfrac{\sin\theta}{u_1} \end{bmatrix}
\left( \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
- \begin{bmatrix} (\sin\theta)\dot\theta^2 u_1 - 2(\cos\theta)\dot\theta\dot u_1 \\ -(\cos\theta)\dot\theta^2 u_1 - 2(\sin\theta)\dot\theta\dot u_1 \end{bmatrix} \right), \tag{10.80}
\]
resulting in
\[
x_m^{(4)} = v_1, \qquad y_m^{(4)} = v_2. \tag{10.81}
\]
Unlike the previous case (equation (10.75)), the linearized model system does not
contain any unobservable (zero) dynamics. Thus, using a stable tracking law for
v, we can track an arbitrary trajectory and guarantee that the (model) system will
be stable.
Of course, the natural question that comes to mind is: Will a control law based
on the model system (10.77) work well when applied to the true system (10.70)? In
the next section we will show (in a more general setting) that, if ε is small enough,
then the system will have reasonable properties (such as stability and bounded
tracking).
How small is small enough? Figure 10.15 shows the response of the true system
with ε ranging from 0 to 0.9 (0.01 is typical during jet-borne flight, i.e.,
hover, for the Harrier). As in Section 10.4.3, the desired trajectory is a smooth
lateral motion from x = 0 to x = 1 with the altitude (y) held constant at 0.
The figure also shows snapshots of the PVTOL aircraft's position and orientation at
0.2 second intervals for ε = 0.0, 0.1, and 0.3. Since the snapshots were taken
FIGURE 10.15. Response of the true PVTOL aircraft system under the approximate control
at uniform intervals, the spacing between successive pictures gives a clue to the
aircraft's velocity and acceleration. The movie of the trajectories provides an even
better sense of the system response. Interestingly, the x response is quite similar to
the step response of a nonminimum phase linear system. Note that for ε less than
approximately 0.6, the oscillations are reasonably damped. Although performance
is certainly worse at higher values of ε, stability does not appear to be lost until
ε is in the neighborhood of 0.9. A value of 0.9 for ε means that the aircraft will
experience almost 1g (the acceleration of gravity) in the wrong direction when
a rolling acceleration of one radian per second per second is applied. For the
range of ε values that will normally be expected, the performance penalty due to
approximation is small, almost imperceptible.
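The behavior reported above can be reproduced qualitatively in a few lines. The following Python sketch is our own reconstruction (the gains, setpoint, and helper names are assumptions, not the book's code): it simulates the true dynamics (10.70) under the model-based dynamic feedback law (10.80) designed for ε = 0, with an outer pole-placement loop driving x to 1 while holding y at 0. For small ε the true closed loop still settles at the commanded hover point.

```python
import math

def simulate_pvtol(eps, t_end=12.0, dt=0.005):
    """True PVTOL dynamics x'' = -sin(th)u1 + eps*cos(th)u2,
    y'' = cos(th)u1 + eps*sin(th)u2 - 1, th'' = u2, under the model-based
    dynamic feedback law (u1 driven through two integrators).
    Returns the final (x, y, th)."""
    # outer-loop gains from (s + 2)^4 = s^4 + 8s^3 + 24s^2 + 32s + 16
    k0, k1, k2, k3 = 16.0, 32.0, 24.0, 8.0
    x_ref = 1.0
    # state: x, xd, y, yd, th, thd, u1, u1d  (start at hover, u1 = 1)
    s = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]

    def rhs(s):
        x, xd, y, yd, th, thd, u1, u1d = s
        sn, cs = math.sin(th), math.cos(th)
        # model-based derivatives used by the outer loop (eps ignored)
        xdd, ydd = -sn*u1, cs*u1 - 1.0
        xddd = -cs*thd*u1 - sn*u1d
        yddd = -sn*thd*u1 + cs*u1d
        v1 = -k0*(x - x_ref) - k1*xd - k2*xdd - k3*xddd
        v2 = -k0*y - k1*yd - k2*ydd - k3*yddd
        # dynamic feedback law (10.80): invert the matrix with det u1
        d1 = sn*thd*thd*u1 - 2*cs*thd*u1d
        d2 = -cs*thd*thd*u1 - 2*sn*thd*u1d
        u1dd = -sn*(v1 - d1) + cs*(v2 - d2)
        u2 = (-cs*(v1 - d1) - sn*(v2 - d2)) / u1
        # true dynamics, including the eps coupling
        return [xd, -sn*u1 + eps*cs*u2, yd, cs*u1 + eps*sn*u2 - 1.0,
                thd, u2, u1d, u1dd]

    for _ in range(int(t_end/dt)):
        a = rhs(s)
        b = rhs([p + 0.5*dt*q for p, q in zip(s, a)])
        c = rhs([p + 0.5*dt*q for p, q in zip(s, b)])
        d = rhs([p + dt*q for p, q in zip(s, c)])
        s = [p + dt*(w + 2*x2 + 2*y2 + z2)/6
             for p, w, x2, y2, z2 in zip(s, a, b, c, d)]
    return s[0], s[2], s[4]
```

With ε = 0 the law is exact and the hover point is reached essentially perfectly; with ε = 0.1 (typical for jet-borne flight) a small, well-damped error appears, consistent with Figure 10.15.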
Note that while the PVTOL aircraft system (10.70) with the approximate control
(10.80) is stable for a large range of ε, this control allows the PVTOL aircraft to
have a bounded but unacceptable altitude (y) deviation. Since the ground is hard
and quite unforgiving, and vertical takeoff and landing aircraft are designed to be
maneuvered close to the ground, it is extremely desirable to find a control law that
provides exact tracking of altitude if possible. Now, ε enters the system dynamics
(10.70) in only one (state-dependent) direction. We therefore expect that one should
be able to modify the system (by manipulating the inputs) so that the effects of
the ε-coupling between rolling moments and aircraft lateral acceleration do not
appear in the y output of the system.
Consider the decoupling matrix of the true PVTOL system (10.70) given in
(10.73) as
\[
\begin{bmatrix} -\sin\theta & \varepsilon\cos\theta \\ \cos\theta & \varepsilon\sin\theta \end{bmatrix}. \tag{10.82}
\]
To make the y output independent of ε requires that the last row of this decoupling
matrix be independent of ε. The only legal way to do this is by multiplication on
the right (i.e., column operations) by a nonsingular matrix V, which corresponds
to multiplying the inputs by V^{-1}. In this case, we see that
\[
\begin{bmatrix} -\sin\theta & \varepsilon\cos\theta \\ \cos\theta & \varepsilon\sin\theta \end{bmatrix}
\begin{bmatrix} 1 & -\varepsilon\tan\theta \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} -\sin\theta & \dfrac{\varepsilon}{\cos\theta} \\ \cos\theta & 0 \end{bmatrix}, \tag{10.83}
\]
so that the transformed inputs are
\[
\begin{bmatrix} \bar u_1 \\ \bar u_2 \end{bmatrix}
= \begin{bmatrix} 1 & \varepsilon\tan\theta \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}. \tag{10.84}
\]
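The column operation in (10.83) and the input transformation (10.84) are purely algebraic, so they are easy to sanity-check numerically. In this short sketch (function names are ours), the (2,2) entry of the transformed decoupling matrix vanishes identically, confirming that ε no longer enters the y channel:

```python
import math

def decoupling_matrix(theta, eps):
    # A(x) from (10.82): sensitivity of (xdd, ydd) to (u1, u2)
    return [[-math.sin(theta), eps * math.cos(theta)],
            [ math.cos(theta), eps * math.sin(theta)]]

def column_transform(theta, eps):
    # V from (10.83): right-multiplication = column operations on A
    return [[1.0, -eps * math.tan(theta)],
            [0.0, 1.0]]

def matmul(A, B):
    # plain 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

The second row of A·V is (cos θ, 0) for every θ and ε, while the first row picks up the ε/cos θ term, exactly as in (10.83).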
\[
\begin{bmatrix} \ddot{\bar u}_1 \\ \bar u_2 \end{bmatrix}
= \begin{bmatrix} -\sin\theta & \cos\theta \\[2pt] -\dfrac{\cos\theta}{\bar u_1} & -\dfrac{\sin\theta}{\bar u_1} \end{bmatrix}
\left( \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
- \begin{bmatrix} (\sin\theta)\dot\theta^2 \bar u_1 - 2(\cos\theta)\dot\theta\dot{\bar u}_1 \\ -(\cos\theta)\dot\theta^2 \bar u_1 - 2(\sin\theta)\dot\theta\dot{\bar u}_1 \end{bmatrix} \right), \tag{10.86}
\]
FIGURE 10.16. Response of the true PVTOL aircraft system under the approximate control
with input transformation
\[
\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}
= \begin{bmatrix} 1 & -\varepsilon\tan\theta \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} \bar u_1 \\ \bar u_2 \end{bmatrix}. \tag{10.87}
\]
Figure 10.16 shows the response of the true system using the control law spec-
ified by equations (10.86) and (10.87) for the same desired trajectory. With this
control law, our PVTOL aircraft maintains the altitude as desired and provides
stable, bounded lateral (x) tracking for ε up to at least 0.6. Note, however, that the
system is decidedly unstable for ε = 0.9. Since we have forced the error into one
direction (i.e., the x-channel), we expect the approximation to be more sensitive to
the value of ε. In particular, compare the second column of the decoupling matrices
of (10.73) and (10.85), i.e.,
maintain level flight with a large roll angle. Since the physical limits of the aircraft
usually place constraints on the achievable trajectories, a control law analogous
to that defined by (10.86) and (10.87) can be used for systems with small ε on
reasonable trajectories.
\[
\begin{aligned}
\dot x &= f(x) + g(x)u, \\
y &= h(x),
\end{aligned} \tag{10.89}
\]
where x ∈ ℝⁿ, u, y ∈ ℝᵐ, f : ℝⁿ → ℝⁿ and g : ℝⁿ → ℝ^{n×m} are smooth
vector fields, and h : ℝⁿ → ℝᵐ is a smooth function with h(0) = 0. We will also
assume that the origin is an equilibrium point of (10.89), i.e., f(0) = 0, and will
consider x in an open neighborhood U of the origin, i.e., the analysis will be local.
All statements that we make, such as the existence of certain diffeomorphisms,
will be assumed to hold only in U. Also, when we say that a function is zero,
we mean that it vanishes on U, and when we say that it is nonzero, we mean that it is
bounded away from zero on U. While we will not precisely define slightly nonminimum
phase systems, the concept is easy enough to explain.
\[
(\xi^T, \eta^T)^T = \left(\xi_1 := h(x),\ \eta_1(x), \ldots, \eta_{n-1}(x)\right)^T, \tag{10.91}
\]
and
\[
(\bar\xi^T, \bar\eta^T)^T = \left(\bar\xi_1 := h(x),\ \bar\xi_2(x) := L_f h(x),\ \bar\eta_1(x), \ldots, \bar\eta_{n-2}(x)\right)^T, \tag{10.92}
\]
with
\[
\frac{\partial \eta_i}{\partial x}\, g(x) = 0, \quad i = 1, \ldots, n-1, \tag{10.93}
\]
and
\[
\frac{\partial \bar\eta_i}{\partial x}\, g(x) = 0, \quad i = 1, \ldots, n-2. \tag{10.94}
\]
System 1 (true system):
\[
\begin{aligned}
\dot\xi_1 &= L_f h(x) + L_g h(x)\, u, \\
\dot\eta &= q(\xi, \eta).
\end{aligned} \tag{10.95}
\]
System 2 (approximate system):
\[
\begin{aligned}
\dot{\bar\xi}_1 &= \bar\xi_2, \\
\dot{\bar\xi}_2 &= L_f^2 h(x) + L_g L_f h(x)\, u, \tag{10.96} \\
\dot{\bar\eta} &= \bar q(\bar\xi, \bar\eta).
\end{aligned}
\]
Note that the system (10.95) represents system (10.89) in normal form and the
dynamics of q(0, η) represent the zero dynamics of the system (10.89). System
(10.96) does not represent the system (10.89), since in the (ξ̄, η̄) coordinates of
(10.92), the dynamics of (10.89) are given by
\[
\begin{aligned}
\dot{\bar\xi}_1 &= \bar\xi_2 + L_g h(x)\, u, \\
\dot{\bar\xi}_2 &= L_f^2 h(x) + L_g L_f h(x)\, u, \tag{10.97} \\
\dot{\bar\eta} &= \bar q(\bar\xi, \bar\eta).
\end{aligned}
\]
System (10.89) is said to be slightly nonminimum phase if the true system
(10.95) is nonminimum phase but the approximate system (10.96) is minimum
phase. Since L_g h(x) = εψ(x), we may think of the system (10.96) as a pertur-
bation of the system (10.95) (or equivalently (10.97)). Of course, there are two
difficulties with exact input-output linearization of (10.95):
• The input-output linearization requires a large control effort, since the
linearizing control is
\[
u^*(x) = \frac{1}{L_g h(x)}\left(-L_f h(x) + v\right) = \frac{-L_f h(x) + v}{\varepsilon\psi(x)}. \tag{10.98}
\]
This could present difficulties in the instance that there is saturation at the
control inputs.
• If (10.95) is nonminimum phase, a tracking control law producing a linear
input-output response may result in unbounded η states.
Our prescription for the approximate input-output linearization of the system
(10.95) is to use the input-output linearizing control law for the approximate system
(10.96); namely,
\[
\bar u(x) = \frac{1}{L_g L_f h(x)}\left(-L_f^2 h(x) + v\right). \tag{10.99}
\]
10.5 Control of Slightly Nonminimum Phase Systems 483
\[
e_1 = \bar\xi_1 - y_d, \qquad e_2 = \bar\xi_2 - \dot y_d, \tag{10.101}
\]
yields
\[
\begin{aligned}
\dot e_1 &= e_2 + \varepsilon\psi(x)\bar u(x), \\
\dot e_2 &= -\alpha_1 e_2 - \alpha_2 e_1, \tag{10.102} \\
\dot{\bar\eta} &= \bar q(\bar\xi, \bar\eta).
\end{aligned}
\]
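The O(ε) tracking error implicit in (10.102) can be seen on a toy example of our own construction (not one from the text): the system ẋ₁ = x₂ + εu, ẋ₂ = u with output y = x₁ has true relative degree 1 with L_g h = ε, but we apply the control law designed for the approximate relative-degree-two system, as prescribed by (10.99).

```python
import math

def track_error(eps, t_end=20.0, dt=0.002):
    """Simulate x1' = x2 + eps*u, x2' = u, y = x1 under the approximate
    linearizing law u = yd'' - a1*(x2 - yd') - a2*(x1 - yd), which ignores
    the eps feedthrough; returns the worst |y - yd| over the last 5 s."""
    a1, a2 = 2.0, 1.0            # (s + 1)^2 error dynamics for the design
    x1, x2, t = 0.0, 0.0, 0.0
    worst = 0.0

    def rhs(x1, x2, t):
        yd, ydp, ydpp = math.sin(t), math.cos(t), -math.sin(t)
        u = ydpp - a1*(x2 - ydp) - a2*(x1 - yd)
        return x2 + eps*u, u

    for _ in range(int(t_end/dt)):
        k1 = rhs(x1, x2, t)
        k2 = rhs(x1 + 0.5*dt*k1[0], x2 + 0.5*dt*k1[1], t + 0.5*dt)
        k3 = rhs(x1 + 0.5*dt*k2[0], x2 + 0.5*dt*k2[1], t + 0.5*dt)
        k4 = rhs(x1 + dt*k3[0], x2 + dt*k3[1], t + dt)
        x1 += dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        x2 += dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        t += dt
        if t > t_end - 5.0:
            worst = max(worst, abs(x1 - math.sin(t)))
    return worst
```

For ε = 0 the law is exact and the error decays to numerical noise; for small ε ≠ 0 a persistent error of order ε remains, and doubling ε roughly doubles it, in line with the kε bound of Theorem-type results below.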
The preceding discussion may be generalized to the case where the difference in
the relative degrees between the true system and the approximate system is greater
than 1. For example, if
\[
L_g L_f^{k-1} h(x) = \varepsilon\psi_k(x), \quad k = 1, \ldots, \gamma - 1, \tag{10.104}
\]
but L_g L_f^{γ−1} h(x) is not of order ε, we define
\[
\begin{aligned}
\dot\xi_{\gamma-1} &= \xi_\gamma + \varepsilon\psi_{\gamma-1}(x)\, u, \\
\dot\xi_\gamma &= L_f^\gamma h(x) + L_g L_f^{\gamma-1} h(x)\, u, \tag{10.105} \\
\dot\eta &= q(\xi, \eta).
\end{aligned}
\]
\[
\begin{aligned}
\dot{\bar\xi}_1 &= \bar\xi_2, \\
&\ \vdots \\
\dot{\bar\xi}_{\gamma-1} &= \bar\xi_\gamma, \tag{10.106} \\
\dot{\bar\xi}_\gamma &= L_f^\gamma h(x) + L_g L_f^{\gamma-1} h(x)\, u, \\
\dot{\bar\eta} &= \bar q(\bar\xi, \bar\eta).
\end{aligned}
\]
The approximate tracking control law for (10.106) is defined analogously to (10.99).
When the control objective is stabilization and the approximate system has no
zero dynamics, we can do much better. In this case, one can show that the control
law that stabilizes the approximate system also stabilizes the original system (see
Problem 10.6).
Assume that the system has vector relative degree γ_1, …, γ_m, with decoupling
matrix defined to be A(x) ∈ ℝ^{m×m} with
\[
A(x) = \begin{bmatrix}
L_{g_1} L_f^{\gamma_1 - 1} h_1 & \cdots & L_{g_m} L_f^{\gamma_1 - 1} h_1 \\
\vdots & & \vdots \\
L_{g_1} L_f^{\gamma_m - 1} h_m & \cdots & L_{g_m} L_f^{\gamma_m - 1} h_m
\end{bmatrix} \tag{10.110}
\]
so that
(10.112)
(10.113)
To take up the ideas of Section 10.5.1, we first consider the case where A(x) is
nonsingular but is close to being singular, that is, its smallest singular value is
uniformly small for x ∈ U. Since A(x) is close to being singular, i.e., it is close in
norm to a matrix of rank m − 1, we may transform A(x) using elementary column
operations by a matrix V⁰(x); the corresponding transformed inputs are
\[
\begin{bmatrix} \bar u_1 \\ \vdots \\ \bar u_m \end{bmatrix}
= \left(V^0(x)\right)^{-1} \begin{bmatrix} u_1 \\ \vdots \\ u_m \end{bmatrix}. \tag{10.115}
\]
Now, the normal form of the system (10.109) is given by defining the following
local diffeomorphism of x ∈ ℝⁿ,
\[
\xi_1^i = h_i(x), \ \ldots, \ \xi_{\gamma_i}^i = L_f^{\gamma_i - 1} h_i(x), \quad i = 1, \ldots, m, \tag{10.117}
\]
together with coordinates η completing the basis, in which the zero dynamics are
\[
\dot\eta = q(0, \eta). \tag{10.118}
\]
We will assume that (10.109) is nonminimum phase, that is to say that the origin
of (10.118) is not stable.
Now, an approximation to the system is obtained by setting ε = 0 in (10.117).
The resultant decoupling matrix is singular, and the procedure for linearization by
(dynamic) state feedback (the so-called dynamic extension process) proceeds by
differentiating (10.117) and noting that
we then get
\[
\begin{pmatrix} y_1^{(\gamma_1 + 1)} \\ \vdots \\ y_{m-1}^{(\gamma_{m-1} + 1)} \\ y_m^{(\gamma_m + 1)} \end{pmatrix}
= b^1(x^1) + A^1(x^1)
\begin{pmatrix} \dot{\bar u}_1^0 \\ \vdots \\ \dot{\bar u}_{m-1}^0 \\ \bar u_m \end{pmatrix}, \tag{10.121}
\]
where
\[
x^1 = \left(x^T, \bar u_1^0, \ldots, \bar u_{m-1}^0\right)^T \tag{10.123}
\]
is the extended state. Note the appearance of terms of the form ū⁰_1, …, ū⁰_{m−1} in
(10.121). System (10.121) is linearizable (and decouplable) if A¹(x¹) is nonsin-
gular. We will assume that the singular values of A¹ are all of order 1 (i.e., A¹
is uniformly nonsingular) so that (10.121) is linearizable. The normal form for
the approximate system is determined by obtaining a local diffeomorphism of the
states x, ū⁰_1, …, ū⁰_{m−1} ∈ ℝ^{n+m−1} given by
\[
\begin{aligned}
\xi_1^1 &= h_1(x), \ \ldots, \ \xi_{\gamma_1}^1 = L_f^{\gamma_1 - 1} h_1(x), \quad
\xi_{\gamma_1 + 1}^1 = L_f^{\gamma_1} h_1(x) + \sum_{j=1}^{m-1} \bar a_{1j}^0 \bar u_j^0, \\
\xi_1^2 &= h_2(x), \ \ldots, \ \xi_{\gamma_2}^2 = L_f^{\gamma_2 - 1} h_2(x), \quad
\xi_{\gamma_2 + 1}^2 = L_f^{\gamma_2} h_2(x) + \sum_{j=1}^{m-1} \bar a_{2j}^0 \bar u_j^0, \\
&\ \vdots \\
\xi_1^m &= h_m(x), \ \ldots, \ \xi_{\gamma_m}^m = L_f^{\gamma_m - 1} h_m(x), \quad
\xi_{\gamma_m + 1}^m = L_f^{\gamma_m} h_m(x) + \sum_{j=1}^{m-1} \bar a_{mj}^0 \bar u_j^0,
\end{aligned} \tag{10.124}
\]
together with coordinates η̄ completing the basis.
is given by
(10.125)
In (10.125) above, b¹_i(ξ, η̄) and A¹_i(ξ, η̄) are the ith element and row of b¹ and A¹,
respectively, in (10.121) above (in the (ξ, η̄) coordinates). The approximate system
used for the design of the linearizing control is obtained from (10.125) by setting
ε = 0. The zero dynamics for the approximate system are obtained in the ξ = 0
subspace by linearizing the approximate system using
(10.126)
to be
Note that the dimension of η̄ is one less than the dimension of η in (10.118). It
would appear that we are actually determining the zero dynamics of the approxi-
mation to system (10.109) with dynamic extension, that is to say, with integrators
appended to the first m − 1 inputs ū⁰_1, ū⁰_2, …, ū⁰_{m−1}. While this is undoubtedly
true, it is easy to see that the zero dynamics of systems are unchanged by dynamic
extension (see also [54]). Thus, the zero dynamics of (10.127) are those of the
approximation to system (10.109). The system (10.109) is said to be slightly non-
minimum phase if the equilibrium η = 0 of (10.118) is not asymptotically stable,
but the equilibrium η̄ = 0 of (10.127) is.
It is also easy to see that the preceding discussion may be iterated if it turns
out that A¹(ξ, η̄) has some small singular values. At each stage of the dynamic
extension process m − 1 integrators are added to the dynamics of the system, and
the act of approximation reduces the dimension of the zero dynamics by 1. Also,
if at any stage of this dynamic extension process there are two, three, … singular
values of order ε, the dynamic extension involves m − 2, m − 3, … integrators.
If the objective is tracking, the approximate tracking control law is chosen
(10.128)
so that the error dynamics of each output channel are governed by the Hurwitz
polynomial
\[
s^{\gamma_i + 1} + \alpha_1^i s^{\gamma_i} + \cdots + \alpha_{\gamma_i + 1}^i, \quad i = 1, \ldots, m. \tag{10.129}
\]
Then for ε sufficiently small, the states of the system (10.125) are bounded and the
tracking errors satisfy
\[
\begin{aligned}
|e_1| &= |\bar\xi_1^1 - y_{d1}| \le k\varepsilon, \\
|e_2| &= |\bar\xi_1^2 - y_{d2}| \le k\varepsilon, \\
&\ \vdots
\end{aligned} \tag{10.130}
\]
As in the SISO case, stronger conclusions can be stated when the control
objective is stabilization and the approximate system has no zero dynamics.
vary continuously (as a set) with ε, but the number of zeros and their locations
vary in a much more subtle fashion, since the relative degree, that is, the smallest
integer γ(ε) such that
\[
c(\varepsilon) A(\varepsilon)^{\gamma(\varepsilon) - 1} b(\varepsilon) \ne 0,
\]
may change at some values of ε (see Problem 10.9). We now discuss this for the
nonlinear case. We will highlight the case where γ(ε) = r for ε ≠ 0 and
γ(0) = r + d. As a consequence of the definition of relative degree we have that
γ(ε) = r and γ(0) = r + d imply that for all ε ≠ 0,
\[
L_g h(x, \varepsilon) = L_g L_f h(x, \varepsilon) = \cdots = L_g L_f^{r-2} h(x, \varepsilon) = 0 \quad \forall x \text{ near } x_0. \tag{10.132}
\]
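In the linear case this mechanism is completely explicit. The sketch below uses an example of our own choosing: with A = [[0, 1], [−1, −2]], b = (0, 1)ᵀ, and c = (1, ε), the transfer function is (εs + 1)/(s + 1)², so the relative degree is 1 for ε ≠ 0 and 2 at ε = 0, and the single zero sits at −1/ε, escaping to infinity as ε → 0.

```python
def transfer_zero_2x2(A, b, c):
    """Zero of a SISO system with 2x2 A and c.b != 0 (relative degree 1).
    The numerator of c (sI - A)^{-1} b is (c.b) s + (c A b - tr(A) c.b),
    since adj(sI - A) = (s - tr A) I + A for 2x2 matrices."""
    trA = A[0][0] + A[1][1]
    cb = c[0]*b[0] + c[1]*b[1]
    Ab = [A[0][0]*b[0] + A[0][1]*b[1],
          A[1][0]*b[0] + A[1][1]*b[1]]
    cAb = c[0]*Ab[0] + c[1]*Ab[1]
    return -(cAb - trA*cb) / cb      # root of cb*s + (cAb - trA*cb)

A = [[0.0, 1.0], [-1.0, -2.0]]       # denominator (s + 1)^2
b = [0.0, 1.0]
```

Shrinking ε pushes the zero out along the negative real axis, which is exactly the zero locus asked for in Problem 10.9.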
To keep the singularly perturbed zero dynamics from exhibiting multiple time
scale behavior, we assume that for 0 ≤ k ≤ d,
\[
L_g L_f^{r-1+k} h(x, \varepsilon) = \varepsilon^{d-k} a_k(x, \varepsilon). \tag{10.134}
\]
Define the partial set of normal form coordinates
\[
\xi = \begin{pmatrix} h(x, \varepsilon) \\ L_f h(x, \varepsilon) \\ \vdots \\ L_f^{r+d-1} h(x, \varepsilon) \end{pmatrix}, \tag{10.135}
\]
where the first r coordinates correspond to the coordinates of the perturbed system,
and the full set of r + d coordinates, with ε = 0, are the coordinates of the nominal
system. It may be verified that for small ε we have the following normal form (in
the sense of Chapter 9):
\[
\begin{aligned}
\dot\xi_1 &= \xi_2, \\
\dot\xi_2 &= \xi_3, \\
&\ \vdots \\
\dot\xi_r &= \xi_{r+1} + \varepsilon^d a_0(\xi, \eta, \varepsilon)\, u, \\
\dot\xi_{r+1} &= \xi_{r+2} + \varepsilon^{d-1} a_1(\xi, \eta, \varepsilon)\, u, \tag{10.136} \\
\dot\xi_{r+2} &= \xi_{r+3} + \varepsilon^{d-2} a_2(\xi, \eta, \varepsilon)\, u, \\
&\ \vdots \\
\dot\xi_{r+d} &= b(\xi, \eta, \varepsilon) + a_d(\xi, \eta, \varepsilon)\, u, \\
\dot\eta &= q(\xi, \eta, \varepsilon).
\end{aligned}
\]
We have introduced the smooth functions a, b, and q; we invite the reader to work
out the details of how a and b depend on f, g, and h. Using the change of
coordinates z_1 = ξ_{r+1}, z_2 = εξ_{r+2}, …, z_d = ε^{d−1}ξ_{r+d}, the zero dynamics
take the form
\[
\begin{aligned}
\varepsilon \dot z_i &= -\frac{a_i}{a_0}\, z_1 + z_{i+1}, \quad i = 1, \ldots, d-1, \\
\varepsilon \dot z_d &= -\frac{a_d}{a_0}\, z_1 + \varepsilon^d b, \\
\dot\eta &= q(z, \eta, \varepsilon).
\end{aligned}
\]
Note that η ∈ ℝ^{n−r−d} and z ∈ ℝ^d. Also, we have abused notation for q from equation
(10.136). Thus, the zero dynamics appear in singularly perturbed form,
with n − r − d slow states (η) and d fast states (z). This is now consistent with the
zero dynamics for the system at ε = 0, given by
\[
\dot\eta = q(0, \eta, 0). \tag{10.140}
\]
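The two-time-scale structure can be illustrated on a scalar caricature of the (z, η) dynamics (a hypothetical pair of our own choosing, not the general system): εż = −z relaxes on the O(ε) time scale, while η̇ = −η + z relaxes on the O(1) scale.

```python
def halving_times(eps, dt=1e-5, t_end=5.0):
    """Forward-Euler integration of the fast/slow pair
       eps*z' = -z,   eta' = -eta + z,   z(0) = eta(0) = 1.
    Returns the first times at which z and eta fall below 1/2."""
    z, eta, t = 1.0, 1.0, 0.0
    tz = teta = None
    while t < t_end and (tz is None or teta is None):
        dz = -z / eps            # fast subsystem
        deta = -eta + z          # slow subsystem, driven by z
        z += dt * dz
        eta += dt * deta
        t += dt
        if tz is None and z <= 0.5:
            tz = t
        if teta is None and eta <= 0.5:
            teta = t
    return tz, teta
```

For ε = 0.01 the fast state halves in about ε ln 2 ≈ 0.007 while the slow state takes roughly ln 2 ≈ 0.7, two orders of magnitude apart, mirroring the slow η and fast z states above.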
Thus, the presence of small terms in L_g L_f^{r−1+k} h(x, ε) for 0 ≤ k ≤ d causes the
presence of singularly perturbed zero dynamics. The Jacobian matrix evaluated at
z = 0, ε = 0 of the fast zero subsystem is obtained as
tory y_D(t). If the output y(t) is identically equal to y_D(t), it follows that, for the
perturbed system,
\[
\xi_D(t) = \begin{pmatrix} y_D(t) \\ \dot y_D(t) \\ \vdots \\ y_D^{(r-1)}(t) \end{pmatrix}.
\]
Then the driven dynamics of the system are given by (10.136) with the choice of
error coordinates
\[
v_1^D = \xi_{r+1} - y_D^{(r)}(t), \quad
v_2^D = \varepsilon\left(\xi_{r+2} - y_D^{(r+1)}(t)\right), \ \ldots, \
v_d^D = \varepsilon^{d-1}\left(\xi_{r+d} - y_D^{(r+d-1)}(t)\right);
\]
namely,
\[
\begin{aligned}
\varepsilon \dot v_1^D &= a_1 v_1^D + v_2^D, \\
\varepsilon \dot v_2^D &= a_2 v_1^D + v_3^D, \tag{10.143} \\
&\ \vdots
\end{aligned}
\]
We will need to distinguish in what follows between the following two cases:
1. Each perturbed system has vector relative degree at a point x_0, but as a function
of ε the vector relative degree is not constant in a neighborhood of ε = 0.
2. The perturbed system has vector relative degree, but the nominal system with
ε = 0 fails to have vector relative degree.
One could also consider as a third case a scenario in which the perturbed system
and nominal system fail to have vector relative degree, but they need different
orders of dynamic extension for linearization. We will not consider this case here.
Case 1: Both the perturbed system and the nominal system have vector relative
degree
Suppose that the system has vector relative degree (r_1(ε), r_2(ε)), with
r_1(ε) = s for all ε,
r_2(ε) = r for ε ≠ 0,
r_2(0) = r + d.
1. The first row is identically zero for x near x_0 and all ε for i < s − 1 and nonzero
at (x_0, 0) for i = s − 1.
2. The second row is identically zero for x near x_0 when ε ≠ 0 for j < r − 1, and
is identically zero when ε = 0 for j < r + d − 1.
3. The matrix (10.145) is nonsingular at (x_0, ε) with ε ≠ 0 for i = s − 1, j = r − 1,
and is nonsingular at (x_0, 0) with ε = 0 for i = s − 1, j = r + d − 1.
As in the SISO case, we will assume that there are only two time scales in the zero
dynamics by assuming that for 0 ≤ k ≤ d,
\[
\begin{bmatrix}
L_{g_1} L_f^{s-1} h_1 & L_{g_2} L_f^{s-1} h_1 \\
L_{g_1} L_f^{r-1+k} h_2 & L_{g_2} L_f^{r-1+k} h_2
\end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & \varepsilon^{d-k} \end{bmatrix} A_k(x, \varepsilon), \tag{10.146}
\]
where A_k(x, ε) is a matrix of smooth functions, and both A_0(x_0, 0) and A_d(x_0, 0)
are nonsingular. By using elementary column operations to modify g_1, g_2, we can
assume that A_0(x, ε) = I ∈ ℝ^{2×2}. Since the first row of the left-hand side of
(10.146) does not change with k, we can assume A_k(x, ε) to be of the form
\[
A_k(x, \varepsilon) = \begin{bmatrix} 1 & 0 \\ \gamma_k(x, \varepsilon) & a_k(x, \varepsilon) \end{bmatrix}. \tag{10.147}
\]
Then, using variables η ∈ ℝ^{n−s−r−d} to complete the basis, the MIMO normal form
of the system is given by
\[
\begin{aligned}
\dot\xi_1^1 &= \xi_2^1, \\
&\ \vdots \\
\dot\xi_s^1 &= b_1(\xi, \eta, \varepsilon) + u_1, \\
\dot\xi_1^2 &= \xi_2^2, \\
&\ \vdots \\
\dot\xi_r^2 &= \xi_{r+1}^2 + \varepsilon^d\left(\gamma_0(\xi, \eta, \varepsilon) u_1 + a_0(\xi, \eta, \varepsilon) u_2\right), \tag{10.148} \\
\dot\xi_{r+1}^2 &= \xi_{r+2}^2 + \varepsilon^{d-1}\left(\gamma_1(\xi, \eta, \varepsilon) u_1 + a_1(\xi, \eta, \varepsilon) u_2\right), \\
&\ \vdots \\
\dot\xi_{r+d}^2 &= b_2(\xi, \eta, \varepsilon) + \gamma_d(\xi, \eta, \varepsilon) u_1 + a_d(\xi, \eta, \varepsilon) u_2, \\
\dot\eta &= q(\xi, \eta, \varepsilon) + P(\xi, \eta, \varepsilon)\, u.
\end{aligned}
\]
Defining, in analogy to the SISO case,
\[
z_1 = \xi_{r+1}^2, \quad z_2 = \varepsilon\xi_{r+2}^2, \ \ldots, \ z_d = \varepsilon^{d-1}\xi_{r+d}^2,
\]
it may be verified, using the controls
\[
u_1 = -b_1, \qquad u_2 = \frac{\gamma_0}{a_0}\, b_1 - \frac{z_1}{\varepsilon^d a_0},
\]
that the zero dynamics are given by
For the driven dynamics, one considers, as in the SISO case, new "error
coordinates" given by
\[
v_1 = \xi_{r+1}^2 - y_{D2}^{(r)}(t), \quad
v_2 = \varepsilon\left(\xi_{r+2}^2 - y_{D2}^{(r+1)}(t)\right), \ \ldots, \
v_d = \varepsilon^{d-1}\left(\xi_{r+d}^2 - y_{D2}^{(r+d-1)}(t)\right). \tag{10.150}
\]
In these coordinates,
\[
\begin{aligned}
\varepsilon \dot v_1 &= a_1 v_1 + v_2 + \varepsilon^d\left(y_{D1}^{(s)} - b_1\right)\left(\gamma_1 + a_1\gamma_0\right), \\
\varepsilon \dot v_2 &= a_2 v_1 + v_3 + \varepsilon^d\left(y_{D1}^{(s)} - b_1\right)\left(\gamma_2 + a_2\gamma_0\right), \\
&\ \vdots \\
\varepsilon \dot v_d &= a_d v_1 + \varepsilon^d\left(b_2 + \left(\gamma_d + a_d\gamma_0\right)\left(y_{D1}^{(s)} - b_1\right) - y_{D2}^{(r+d)}\right).
\end{aligned}
\]
(10.152)
We denote the first matrix in (10.152) above by A_k(x, u_1, …, u_1^{(k−1)}, ε) for
0 ≤ k ≤ d and assume that A_d(x, u_1, …, u_1^{(d−1)}, ε) is nonsingular. In fact, it may
be verified that β_k(x, u_1, …, u_1^{(k−1)}, ε) = β_0(x, ε) and
Now, define
\[
\begin{aligned}
\xi_s^1 &= L_f^{s-1} h_1(x, \varepsilon), \\
\xi_{s+1}^1 &= L_f^s h_1(x, \varepsilon) + \beta_0(x, \varepsilon)\, u_1, \\
&\ \vdots \\
\xi_{s+d}^1 &= L_f^{s+d-1} h_1(x, \varepsilon) + \phi_1\!\left(x, u_1, \dot u_1, \ldots, u_1^{(d-2)}\right) + \beta_0(x, \varepsilon)\, u_1^{(d-1)}, \\
\xi_1^2 &= h_2(x, \varepsilon), \\
&\ \vdots \\
\xi_r^2 &= L_f^{r-1} h_2(x, \varepsilon), \tag{10.153} \\
\xi_{r+1}^2 &= L_f^r h_2(x, \varepsilon) + \delta_0(x, \varepsilon)\, u_1.
\end{aligned}
\]
In these coordinates the dynamics are
\[
\begin{aligned}
\dot\xi_s^1 &= \xi_{s+1}^1, \\
\dot\xi_{s+1}^1 &= \xi_{s+2}^1 + \varepsilon^{d-1}\gamma_1(\xi, \eta, \varepsilon)\, u_2, \\
&\ \vdots \\
\dot\xi_r^2 &= \xi_{r+1}^2 + \varepsilon^d a_0(\xi, \eta, \varepsilon)\, u_2, \tag{10.154} \\
\dot\xi_{r+1}^2 &= \xi_{r+2}^2 + \varepsilon^{d-1} a_1(\xi, \eta, \varepsilon)\, u_2, \\
&\ \vdots \\
\dot\xi_{r+d}^2 &= b_2(\xi, \eta, \varepsilon) + \delta_0(\xi, \eta, \varepsilon)\, u_1 + a_d(\xi, \eta, \varepsilon)\, u_2, \\
\dot\eta &= q(\xi, \eta, \varepsilon) + P(\xi, \eta, \varepsilon)\, u.
\end{aligned}
\]
With
\[
z_1 = \xi_{r+1}^2, \quad z_2 = \varepsilon\xi_{r+2}^2, \ \ldots, \ z_d = \varepsilon^{d-1}\xi_{r+d}^2.
\]
10.7 Summary
This chapter, along with the exercises that follow, gives the reader a sense of how
to apply the theory of input-output linearization by state feedback to a number of
applications. The main point that we made in the ball and beam example is that
even if a system fails to have relative degree and is singular in a certain sense, it
can be approximately input-output linearized for small signal tracking. For large
signals a switched scheme of the form proposed in Tomlin and Sastry [308] may be
useful. Further results inspired by the ball and beam example for MIMO systems
which fail to have vector relative degree are in [120; 17; 142]. Another interesting
approach to approximate feedback linearization using splines is due to Bortoff
[35]. The application of the methods of input-output linearization to flight control
featured approximate linearization for slightly nonminimum phase systems. While
the chapter dealt with VTOL aircraft, slightly nonminimum phase systems are
encountered in conventional take-off and landing aircraft [305; 129] as well as in
helicopter flight dynamics [165], as illustrated in the exercises (Problems 10.7 and
10.15).
10.8 Exercises
Problem 10.1 Robust relative degree. Show that the robust relative degree of the nonlin-
ear system (10.28) is equal to the relative degree of the Jacobian linearized system
(10.36) and so is well defined.
Problem 10.2 Robust relative degree under feedback. Prove that the robust
relative degree of a nonlinear system (10.28) is invariant under a state dependent
change of control coordinates of the form
\[
u(x, v) = \alpha(x) + \beta(x)v,
\]
where α and β are smooth functions and α(0) = 0, while β(0) ≠ 0.
Problem 10.3. Prove Proposition 10.2.
Problem 10.4. Provide a proof for Theorem 10.6.
Problem 10.5 Byrnes-Isidori regulator applied to the ball and beam system
[298]. Apply the Byrnes-Isidori regulator of Chapter 9 to the ball and beam
system when the signal to be tracked is a constant plus a sinusoid. Thus, the
exo-system may be modeled by
(10.156)
Mimic the procedure of Problem 9.31 and approximate the solution of the PDE
obtained by a polynomial in w. The paper [298] discusses the small domains of
attraction that are obtained by the resulting controller. Do you also find that the
domain of validity of your controller requires that w_1(0), w_2(0), w_3(0), w be very
small? See also [299] for some hints on how to do better.
\[
\begin{bmatrix} \ddot x \\ \ddot y \end{bmatrix}
= R(\theta)\left( R^T(\alpha) \begin{bmatrix} -D \\ L \end{bmatrix} + \cdots \right) \tag{10.158}
\]
\[
R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. \tag{10.160}
\]
Physically, the presence of ε models the fact that the process of generating pitching
moment results in a small downward force. Use your analysis and the methods of
this chapter to explain the fact that aircraft dip down first when commanded to rise
in the vertical, y, direction.
Linearize the system from the inputs u_1, u_2 to the outputs y_1 = x − vt, y_2 = y, both
for ε equal to zero and not equal to zero.² Here v is the horizontal "trim" velocity.
For the case that ε ≠ 0 derive the zero dynamics. Plot the zero dynamics for some
normalized representative numbers (for a DC-8) given by ε = 0.1, v = 0.18, aL =
30, aD = 2, c = 6. Use the methods of this chapter to explain which control law
you would use.
Problem 10.8 Automotive engine control (adapted from [64]). Consider the
two state model of an automotive engine coupled to transmission and wheels:
\[
\begin{aligned}
\dot m &= c_1 T(\alpha) - c_2 \omega m, \\
\dot\omega &= \frac{1}{J}\left[T_i - T_l - T_d - T_r\right]. \tag{10.161}
\end{aligned}
\]
Here, we have
• m is the mass of air in the intake manifold of the car in kg.
• ω is the engine speed in rad/sec. The speed of the car is the output y = 0.1289ω
meters per second.
• T(α) is the throttle characteristic of the car, with α = u being the input throttle
angle in degrees:
\[
T(\alpha) = 1 - \cos(1.14\alpha - 1.06).
\]
²You will need a symbolic algebra package for the case that ε = 0. Use D, L as long as
you can in your calculations.
c_1 = 0.6 kg/sec,
c_2 = 0.0952,
c_3 = 47469 N·m/sec,
c_4 = 0.0026 N·m·sec².
We would like to build a controller to cause the output speed of the car to transition
from y = 20 m/sec to y = 30 m/sec:
1. Step 1 Build a second-order linear reference model such that the output of the
reference model, y_M, smoothly transitions from 20 m/sec to 30 m/sec in about 5
seconds. For example, use a linear model with transfer function
\[
\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}
\]
and design ω_n and ζ so that with a step reference input r(t) going from 20 to
30, the output y_M(t) has the desired characteristic.
2. Step 2 Now build a feedback linearizing controller such that y(t) tracks y_M(t).
Note that the input throttle angle α does not enter linearly in the dynamics
(10.161). One strategy is to make α a state and define the input u to be
\[
\dot\alpha = u. \tag{10.162}
\]
Now the composite system (10.161), (10.162) is linear in the input u. Do you
have any ideas about what you may be sacrificing for this easy fix? Another
strategy is to invert the nonlinearity T(α), at least locally. Compare the two
approaches.
3. Step 3 Simulate the controller and see how sensitive the performance of the
controller is to a 20% error in the values of the parameters c_1, c_2, c_3, c_4.
4. Step 4 Use the techniques of Chapter 9 to build a sliding mode linearizing
controller for the system with 20% error in the parameters.
5. Step 5 Use the techniques of Problem 9.27 to build an adaptive linearizing controller
for the system with a larger initial error in the parameter estimate of, say, 50%.
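Steps 1 and 2 can be prototyped in a few lines of Python. The sketch below reflects our own choices, not a prescribed solution: ζ = 1 and ω_n = 1.2 for the reference model, forward-Euler integration, and the monotone branch 1.14α − 1.06 ∈ [0, π] for the throttle inverse.

```python
import math

def reference_model(t_end, r=30.0, y0=20.0, wn=1.2, zeta=1.0, dt=1e-3):
    """Second-order reference model yM'' = wn^2 (r - yM) - 2 zeta wn yM',
    started at rest from y0; returns yM(t_end) (forward Euler)."""
    y, yd, t = y0, 0.0, 0.0
    while t < t_end:
        ydd = wn*wn*(r - y) - 2.0*zeta*wn*yd
        y, yd, t = y + dt*yd, yd + dt*ydd, t + dt
    return y

def throttle(alpha):
    """Throttle characteristic T(alpha) of (10.161)."""
    return 1.0 - math.cos(1.14*alpha - 1.06)

def throttle_inverse(T_des):
    """Local inverse of T on the branch 1.14*alpha - 1.06 in [0, pi],
    valid for desired values T_des in [0, 2]."""
    return (1.06 + math.acos(1.0 - T_des)) / 1.14
```

With ζ = 1 and ω_n = 1.2 the model output rises smoothly to about 29.8 m/sec at t = 5 s, a roughly five-second transition; the local inverse lets one command T directly, making (10.161) linear in the new input at the price of validity on only one branch of the cosine.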
Problem 10.9 Zeros and poles of linear systems. Consider a family of SISO
minimal linear systems with parameters A(ε) ∈ ℝ^{n×n}, b(ε) ∈ ℝⁿ, c(ε)ᵀ ∈ ℝⁿ.
Let γ(ε) be the relative degree of the linear system and assume that it jumps from
r + d at ε = 0 to r at ε ≠ 0. Give asymptotic formulas for how d zeros of the
linear system at ε ≠ 0 tend to ∞ as ε → 0. In other words, give the zero locus
of the linear system. What implications does this have for the effect of numerical
perturbations on the calculation of the zeros of a linear system, since a linear system
"generically" has relative degree 1?
Problem 10.10 Multiple time scales in zero dynamics. In the case that
L_g L_f^{r−1+k} h does not satisfy the one-time-scale assumption of (10.134), derive the
• PVTOL dynamics:
\[
\begin{aligned}
\dot x_1 &= x_2, \\
\dot x_2 &= -\sin x_5\, u_1 + \varepsilon_2 \cos x_5\, u_2, \\
\dot x_3 &= x_4, \\
\dot x_4 &= \cos x_5\, u_1 + \varepsilon_2 \sin x_5\, u_2 - 1,
\end{aligned}
\]
• CTOL dynamics:
\[
\begin{aligned}
\dot x_1 &= x_2, \\
\dot x_2 &= (-D + u_1)\cos x_5 - (L - \varepsilon_2 u_2)\sin x_5, \\
\dot x_5 &= x_6, \\
\dot x_6 &= u_2,
\end{aligned}
\]
that are small relative to the others at some step in the algorithm. Discuss the zeros
that are being singularly perturbed to ∞ as a result of any approximations that you
make in the decoupling procedure.
Problem 10.13 "Zeros" of nonregular systems [308]. Consider a SISO non-
linear system with L_g L_f^{γ−2} h(x) = O(|x − x_0|) and L_g L_f^{γ−1} h(x) = O(1). Thus,
the nonlinear system has robust relative degree γ, but the exact zero dynamics are
potentially in ℝ^{n−γ+1}. Since f(x_0) = 0, it follows that L_f^γ h(x) = O(|x − x_0|^s).
Classify what you think would happen if one used a switching control law for track-
ing y_D(·) in the three cases, a sort of classification of the "bifurcation" in the zero
dynamics:
1. s < γ (no zero dynamics are defined),
2. s = γ (zero dynamics are well-defined, and high-dimensional invariant sets
are possible, as in the case of the ball and beam),
3. s > γ (zero dynamics are well-defined).
Problem 10.14 Bounded tracking for nonminimum phase nonlinear systems [307]. Combine the discussion of zeros for perturbed nonlinear systems and the discussion of the Devasia-Chen-Paden approach of Chapter 9 to come up with an approach for tracking for nonlinear control systems with singularly perturbed zero dynamics, of the form

In this chapter we simply neglected the high-frequency zeros and, assuming the rest of the system was minimum phase, simply used the tracking control law associated with input-output linearization. Here try the following two-step approach:
1. Step 1: Find the control law u⁰(·) for tracking the nonlinear system with ε = 0 and with relative degree r + d. This is done using the Devasia-Chen-Paden approach if the nominal system (ε = 0) is nonminimum phase.
2. Step 2: Use the nominal control u⁰ to get the nominal normal form coordinates ξ⁰, η⁰. Comparing the perturbed normal form of (10.136) with the standard normal form at ε = 0, derive the formula

Subtracting the equation for the derivatives of ξ⁰, η⁰ from that for ξ, η results in

(10.164)
ε^d u_d = a_d(ξ, η, ε) v_1 + ε^d ( b(ξ, η, ε) - b(ξ⁰, η⁰, 0) - a(ξ⁰, η⁰, 0) u⁰ ),
u_{d+1} = q_1(ξ, η, ε) - q_1(ξ⁰, η⁰, 0),      (10.165)
Now apply the Devasia-Chen-Paden algorithm to the system of (10.165) to find the value of v_1(t) and hence the control u(t, ε) for exact tracking by (10.163). The control law of (10.163) is bounded in t for each ε. Prove by analyzing the system of (10.165) that as ε tends to zero the control u(t, ε) is bounded.
Problem 10.15 Control of a helicopter [165]. The equations of motion of a helicopter can be written with respect to the body coordinate frame, attached to the center of mass of the model helicopter. The x axis is pointed to the body head, and the y axis goes to the right of the body. The z-axis is defined by the right-hand rule, as shown in Figure 10.18. The equations of motion for a rigid body subject to an external wrench F^b = [f^b, τ^b]^T applied at the center of mass and specified

(10.166)

where I is an inertia matrix ∈ R^{3×3}. Define the state vector [p, v^p, R, ω^b]^T ∈ R³ × R³ × SO(3) × R³, where p and v^p are the position and velocity vectors of the center of mass in spatial coordinates, R is the rotation matrix of the body axes relative to the spatial axes, and the body angular velocity vector is represented by ω^b. Using ṗ = v^p = R v^b and ω̂^b = RᵀṘ, we can rewrite the equations of motion of a rigid body:
ṗ = v^p,
v̇^p = (1/m) R f^b,      (10.167)
Ṙ = R ω̂^b,
ω̇^b = I^{-1}(τ^b - ω^b × I ω^b).
For R ∈ SO(3), we parameterize R by ZYX Euler angles with φ, θ, and ψ about the x, y, z-axes respectively, and hence R = e^{ẑψ} e^{ŷθ} e^{x̂φ}. The helicopter force and moment generation system consists of a main rotor, a tail rotor, a horizontal stabilizer, a vertical stabilizer, and fuselage, which are denoted by the subscripts M, T, H, V, F, respectively. The force experienced by the helicopter is the resultant force of the thrust generated by the main and tail rotors, damping forces from the horizontal and vertical stabilizers, aerodynamic force due to the fuselage, and gravity. The torque is composed of the torques generated by the main rotor, tail rotor, and fuselage, and the moment generated by the forces as defined in Figure 10.19. In hover or forward flight with slow velocity, the velocity is so slow that we can ignore the drag contributed from the horizontal and vertical stabilizers, and the fuselage. As shown in [165] the external wrench can be written as
[Figure: block diagram of the dynamics process. The actuator dynamics (inputs u_M, u_T) drive the rotary-wing force and moment generation block, whose outputs f^b, τ^b drive the rigid body dynamics (states p, v^p, R, ω^b).]
The forces and torques generated by the main rotor are controlled by T_M, a_1s, and b_1s, in which a_1s and b_1s are the longitudinal and lateral tilt of the tip path plane of the main rotor with respect to the shaft, respectively. The tail rotor is considered as a source of pure lateral force Y_T and antitorque Q_T, which are controlled by T_T. The forces and torques can be expressed as

with s_{a1s} = sin a_1s, c_{a1s} = cos a_1s, s_{b1s} = sin b_1s, c_{b1s} = cos b_1s. The moments generated by the main and tail rotors can be calculated by using the constants {l_M, y_M, h_M, h_T, l_T} as defined in Figure 10.18. We approximate the rotor torque equations by Q_i ≈ C_i^Q T_i^{1.5} + D_i^Q for i = M, T. Following are representative values of some of the constants above:

The operating region in radians for a_1s, b_1s and Newtons for T_M, T_T is described by: |a_1s| ≤ 0.4363, |b_1s| ≤ 0.3491, -20.86 ≤ T_M ≤ 69.48, -5.26 ≤ T_T ≤ 5.26.
To represent the system in an input-affine form, we assume that the inputs of the nonlinear system are the derivatives of T_M, T_T, a_1s, and b_1s. Then, the system equations become

ṗ = v^p,
v̇^p = (1/m) R f^b,      (10.168)
Ṙ = R ω̂^b,
ω̇^b = I^{-1}(τ^b - ω^b × I ω^b),

(10.169)
where w = [w1, w2, w3, w4] are defined as auxiliary inputs to the system. Here the states x are in R^16 and the inputs w ∈ R^4. We have chosen 6 output variables; however, since there are four control inputs, the maximum number of outputs that can be exactly tracked is four.

v̇^p = (1/m) R [ -T_M sin a_1s, T_M sin b_1s - T_T, -T_M cos a_1s cos b_1s ]^T + [0, 0, g]^T,      (10.170)
Ṙ = R ω̂,      (10.171)
ω̇ = I^{-1}(τ^b - ω^b × I ω^b),      (10.172)

(10.173)

while equations (10.171) and (10.172) remain the same. Verify that this system is input-output linearizable for each of the sets of outputs discussed above.
11
Geometric Nonlinear Control

In this chapter, we give the reader a brief introduction to controllability and observability of nonlinear systems. We discuss topics of constructive controllability for nonlinear systems, that is, algorithms for steering the system from a specified initial state to a specified final state. This topic has been very much at the forefront of research in nonlinear systems recently because of the interest in non-holonomic systems. We give the reader a sense of the power of differential geometric methods for nonlinear control when we discuss input-output expansions for nonlinear systems and disturbance decoupling problems using appropriately defined invariant distributions.
In equation (11.1) above, x = (x1, x2, ..., xn)^T are local coordinates for a smooth manifold M (the state space manifold), and f, g1, ..., gm are smooth vector fields on M represented in local coordinates. For the rest of this section, unless otherwise stated, we will work exclusively in a single coordinate patch and will refer to x ∈ R^n. Also, a number of results in this chapter will need the vector fields f, g_i to be analytic rather than just smooth. Thus, the assumption of analytic vector fields and functions will be in force throughout this chapter. The vector field f is referred to as the drift vector field, and the vector fields g_j are referred to as the input vector fields. The inputs (u1, ..., um)^T are constrained to lie in U ⊂ R^m. Input functions u_j(·) satisfying this constraint are known as admissible input functions or controls. The controllability, or accessibility, problem consists of determining to which points one can steer trajectories from x0 at the initial time. The results in controllability first appeared in the work of Sussmann and Jurdjevic [291], Hermann and Krener [136], and Sussmann [289], [290]. See also Nijmeijer and van der Schaft [232] for a nice exposition. In this chapter our use of the phrases "controllability distribution" and "accessibility distribution" is not standard, but is consistent with Murray, Li, and Sastry [225]. We begin with the following definition:

Definition 11.1 Controllable. The nonlinear system (11.1) is said to be controllable if for any two points x0, x1 there exist a time T and an admissible control defined on [0, T] such that for x(0) = x0 we have x(T) = x1.
The reader is undoubtedly familiar with the result from linear systems theory that the control system

ẋ = Ax + Σ_{i=1}^m b_i u_i      (11.2)

is nonsingular. Then, the conclusion of the theorem follows from the implicit function theorem applied to the map x(T, ξ). To do this we first note that x(t, ξ) satisfies equation (11.1), namely,

ẋ(t, ξ) = f(x(t, ξ)) + Σ_{j=1}^m g_j(x(t, ξ)) u_j(t, ξ).
The preceding proposition covers the situation where the underlying linear system is controllable. There are a large number of systems in which this fails to be the case. In the next subsection we consider the controllability of drift-free control systems, i.e., f(x) ≡ 0. Then, since A = 0, the linearization is not controllable unless m ≥ n. Before we do this, we introduce a discussion of the use of Lie brackets in studying controllability. Consider the following two-input drift-free control system:

(11.4)

It is clear that from a point x0 ∈ R^n we can steer in all directions contained in the subspace of the tangent space T_{x0}R^n ≅ R^n given by

span{g1(x0), g2(x0)}

simply by using constant inputs. It may also be possible to steer along other directions using piecewise constant input functions as follows: Consider the following example, which is also a way of motivating the definition of the Lie bracket, which we have seen in Chapter 8.
u(t) = (1, 0),  t ∈ [0, h),
       (0, 1),  t ∈ [h, 2h),
       (-1, 0), t ∈ [2h, 3h),
       (0, -1), t ∈ [3h, 4h).

x(2h) = x0 + h(g1(x0) + g2(x0))
        + h² ( (1/2)(∂g1/∂x)(x0) g1(x0) + (∂g2/∂x)(x0) g1(x0) + (1/2)(∂g2/∂x)(x0) g2(x0) ) + ··· .
Here we have used the Taylor series expansion g2(x0 + h g1(x0)) = g2(x0) + h (∂g2/∂x)(x0) g1(x0) + ··· . Continuing this one step further yields (the details are interesting to verify) that
x(3h) = x0 + h g2(x0) + h² ( (∂g2/∂x) g1 - (∂g1/∂x) g2 + (1/2)(∂g2/∂x) g2 )(x0) + ··· .

Finally, we have

x(4h) = x0 + h² [g1, g2](x0) + O(h³),

so that to leading order the four-segment maneuver produces net motion along the Lie bracket direction [g1, g2] = (∂g2/∂x) g1 - (∂g1/∂x) g2.
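The net displacement of this switching maneuver can be checked numerically. In the sketch below, the particular fields g1 = (1, 0, -x2), g2 = (0, 1, x1) are an assumption chosen for illustration; for them [g1, g2] = (0, 0, 2), so the maneuver should produce the displacement (0, 0, 2h²):

```python
import numpy as np

# Assumed illustrative vector fields with [g1, g2](x) = (0, 0, 2)
def g1(x): return np.array([1.0, 0.0, -x[1]])
def g2(x): return np.array([0.0, 1.0,  x[0]])

def flow(x, g, t, n=2000):
    # integrate xdot = g(x) for time t with RK4
    dt = t / n
    for _ in range(n):
        k1 = g(x); k2 = g(x + dt/2*k1); k3 = g(x + dt/2*k2); k4 = g(x + dt*k3)
        x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

h = 0.05
x = np.zeros(3)
# the four-segment input: (1,0), (0,1), (-1,0), (0,-1), each for time h
for g in (g1, g2, lambda y: -g1(y), lambda y: -g2(y)):
    x = flow(x, g, h)

print(x)   # net motion ~ h^2 [g1, g2](0) = (0, 0, 2 h^2)
assert np.allclose(x, [0.0, 0.0, 2.0 * h**2], atol=1e-9)
```

For this particular pair of fields the bracket motion happens to be exact, so the agreement is to integration accuracy rather than merely O(h³).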
int R^V(x0, ≤ T) ≠ ∅,
Proof: The proof follows by recursion. For the first step, choose some vector field f1 ∈ Δ. For small δ1 define the manifold

N1 = {φ_{f1}(t1, x) : |t1| < δ1}.

By choosing δ1 sufficiently small we may guarantee that N1 ⊂ V. For the k-th step, assume that N_k ⊂ V is a k-dimensional manifold. If k < n, then there exists some x ∈ N_k and f_{k+1} ∈ Δ such that f_{k+1} ∉ T_x N_k, since if this were not so, then Δ_x ⊂ T_x N_k in some neighborhood of x0. This would contradict the fact that the dimension of Δ is greater than the dimension of N_k. For δ_{k+1} small enough define

N_{k+1} = {φ_{f_{k+1}}(t_{k+1}, x) ∘ ··· ∘ φ_{f1}(t1, x) : |t_i| < δ_i, i = 1, ..., k+1} ⊂ V.

From arguments similar to those in the proof of the Frobenius theorem 8.15, it can be verified that N_{k+1} is a (k+1)-dimensional manifold, since the f_i are linearly independent. The construction is depicted in Figure 11.1. If k = n, then N_n is an n-dimensional manifold, and by construction N_n ⊂ R^V(x0, δ1 + ··· + δ_n). Thus R^V(x0, δ) contains an open set in its interior. The conclusion of the theorem follows since δ can be chosen arbitrarily small. □
[Figure 11.1: the nested manifolds N1, N2, ... constructed in the proof.]
the origin. Choose an input u1 that steers x0 to some x1 ∈ R^V(x0, ≤ T/2) staying inside V. Denote the flow corresponding to this input φ_{u1}. For T/2 ≤ t ≤ T choose an input u'(t) = -u1(T - t). Since the system has no drift, the input u' reverses the flow, and φ_{u'}(T, x) = φ_{u1}(T/2, x)^{-1}. By continuity of the flow, φ_{u'}(T, x) maps a neighborhood of x1 into a neighborhood of x0. By concatenating inputs that steer a neighborhood of x0 to a neighborhood of x1 and the flow of φ_{u'}, we see that we can steer the system from x0 to a neighborhood of x0 in T seconds. The proof is shown in Figure 11.2. □
Thus, we have seen that for drift-free control systems, local controllability is assured by the controllability rank condition given by
Here, we will now develop some concepts from the theory of Lie algebras. The level of difficulty in steering controllable systems is proportional to the order of Lie brackets that one has to compute starting from the initial distribution Δ in order to arrive at Δ̄. Thus, we will study the growth of this distribution under different orders of Lie brackets. Given a distribution Δ = span{g1, ..., gm} associated with a drift-free control system, define a chain of distributions G_i by G_1 = Δ and
for all x ∈ U.

Proof: (Necessity) Suppose that G_p(x) ≅ R^k for some k < n. Since the system is regular, the distribution G_p is regular and is involutive. Let N be an integral manifold of G_p through x0 (note that we are invoking the Frobenius theorem here!). Then if x1 ∈ U does not lie on N, it is not possible to find an admissible control to steer the system to x1, since trajectories of (11.5) are confined to N.
(Sufficiency) Join the points x0, x1 by a straight line. Cover the line by a finite number of open balls of radius ε. Since the system is locally controllable, it is possible to steer between arbitrary points (see Theorem 11.5). Now we can steer the system from x0 to x1 by steering within the open sets and concatenating the resultant trajectories. □
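The rank test underlying this theorem is mechanical to carry out symbolically. The sketch below (an illustration, not from the text) checks it for a hypothetical two-input drift-free system on R³, the kinematic unicycle g1 = (cos φ, sin φ, 0), g2 = (0, 0, 1), which reappears later in this chapter as (11.16):

```python
import sympy as sp

x, y, phi = sp.symbols('x y phi')
q = sp.Matrix([x, y, phi])
g1 = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])   # drive
g2 = sp.Matrix([0, 0, 1])                       # steer

def bracket(f, g):
    # Lie bracket [f, g] = (dg/dq) f - (df/dq) g
    return g.jacobian(q) * f - f.jacobian(q) * g

D = sp.Matrix.hstack(g1, g2, bracket(g1, g2))
det = sp.simplify(D.det())
print(det)      # a nonzero constant: rank 3 at every configuration
assert det == 1
```

Since span{g1, g2, [g1, g2]} already fills R³ everywhere, the filtration stops at the first bracket level for this example.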
r_1 = rank G_1,

For other terms in the filtration one has to be more careful in keeping track of the maximum possible new elements, for two reasons:

[f, g] = -[g, f].      (11.7)

The notation j|i, j < i means all integers less than i which divide i. If r_i = β_i for all i, we say that the filtration has maximum growth.
The Philip Hall basis is a systematic way of generating a basis for a Lie algebra generated by some g1, ..., gm, which takes into account skew symmetry and the Jacobi identity. Thus, let L(g1, ..., gm) be the Lie algebra generated by a set of vector fields g1, ..., gm. One approach to equipping L(g1, ..., gm) with a basis is to list all the generators and all of their Lie products. The problem is that not all Lie products are linearly independent, because of skew symmetry and the Jacobi identity.
Given a set of generators {g1, ..., gm}, we define the length of a Lie product recursively as

l(g_i) = 1,  i = 1, ..., m,
l([A, B]) = l(A) + l(B),

where A and B are themselves Lie products. Alternatively, l(A) is the number of generators in the expansion for A. A Lie algebra is nilpotent if there exists an integer k such that all Lie products of length greater than k are zero. The integer k is called the order of nilpotency.
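The bookkeeping imposed by skew symmetry and the Jacobi identity can be mechanized. The sketch below (an illustration, not an algorithm from the text) enumerates Hall words using the classical admission rule: [A, B] is kept when A < B in the creation order and, if B = [C, D], additionally C ≤ A. For m = 2 generators the counts per length reproduce the dimensions 2, 1, 2, 3, 6 of the free Lie algebra given by the Witt formula:

```python
def hall_basis(m, max_len):
    """Hall words for the free Lie algebra on m generators (a sketch).
    Generators are ints 0..m-1; the bracket [A, B] is the tuple (A, B)."""
    rank = {i: i for i in range(m)}        # total order = creation order
    by_len = {1: list(range(m))}
    for L in range(2, max_len + 1):
        new = []
        for la in range(1, L):
            for a in by_len[la]:
                for b in by_len[L - la]:
                    # Hall condition: a < b, and if b = [c, d] then c <= a
                    if rank[a] < rank[b] and (isinstance(b, int) or rank[b[0]] <= rank[a]):
                        new.append((a, b))
        for w in new:
            rank[w] = len(rank)
        by_len[L] = new
    return by_len

bl = hall_basis(2, 5)
counts = [len(bl[L]) for L in range(1, 6)]
print(counts)          # [2, 1, 2, 3, 6]
assert counts == [2, 1, 2, 3, 6]
```

In a nilpotent Lie algebra of order k, only the Hall words of length at most k survive, so this list is finite in the cases of interest for steering.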
Optimal Control
Perhaps the best formulated method for finding trajectories of a general control system is the use of optimal control. By attaching a cost functional to each trajectory, we can limit our search to trajectories which minimize a cost function. Typical cost functions might be the length of the path (in some appropriate metric), the control cost, or the time required to execute the trajectory. If the system has bounds on the magnitudes of the inputs, it makes sense to solve the motion planning problem in minimum time. It is well known that for many problems, when the set of allowable inputs is convex, the time-optimal paths result from saturating the inputs at all times (this is often referred to as bang-bang control). The inputs may change between one set of values and another at a possibly infinite number of switching times. Choosing an optimal trajectory is then equivalent to choosing the switching times and the values of the inputs between switching times.
Canonical Paths
A third approach to steering nonholonomic control systems is by choosing a family
of paths that can be used to produce desired motions . For example, we might
consider paths for a car that cause a net rotation of any angle or a translation in
the direction that the car is facing. We can then move to any configuration by
reorienting the car, driving forward, and again reorienting the car. The path used
to cause a net rotation might consist of a set of parameterized piecewise constant
inputs or a heuristic trajectory. The set of canonical paths used for a given problem is
usually specific to that problem. In some cases the paths may be derived from some
unifying principle. For example, if we could solve the optimal control problem in
closed form, these optimal paths would form a set of canonical paths. In the case of
time-optimal control, we might consider paths corresponding to saturated inputs as
canonical paths, but since it is not clear how to combine these paths to get a specific
motion, we distinguish these lower level paths from canonical paths. Canonical
paths have been used by Li and Canny to study the motion of a spherical fingertip
on an object [183].
We now give a bit more detail about some of the various methods of nonholonomic motion planning being pursued in the literature. The following subsections are not organized according to the different approaches, but from a pedagogical point of view. In Section 11.4, we study the steering of "model" versions of our nonholonomic system,

(11.8)

By model systems we mean those that are in some sense canonical. We use some results from optimal control to explicitly generate optimal inputs for a class of systems. The class of systems that we consider, called first-order control systems, have sinusoids as optimal steering inputs. Motivated by this observation, we explore the use of sinusoidal input signals for steering second- and higher-order model control systems. This study takes us forward to a very important model class which we refer to as chained form systems.
In Section 11.5 we begin by applying the use of sinusoidal inputs to steering systems that are not in a model form. We do so by using some elementary Fourier analysis in some cases of systems that resemble chained form systems. The question of when a given system can be converted into a chained form system is touched upon as well. Then we move on to the use of approximations of the optimal input class by the Ritz approximation technique. Finally, we discuss the use of piecewise constant inputs to solve the general motion planning problem.
q̇1 = u1,
q̇2 = u2,      (11.9)
q̇3 = q1u2 - q2u1.

For this system, we pose the problem of steering while minimizing ∫₀¹ |u|² dt. Using the fact that q̇_i = u_i for i = 1, 2, an equivalent description of the last equation in (11.9) is as a constraint of the form

q̇3 = q1q̇2 - q2q̇1.

Similarly, the Lagrangian to be minimized can be written as q̇1² + q̇2². Using a Lagrange multiplier λ(t) for the constraint, we augment the Lagrangian to be minimized as follows; the resulting Euler-Lagrange equations are

q̈1 + λq̇2 = 0,
q̈2 - λq̇1 = 0,      (11.10)
λ̇ = 0.
Equation (11.10) establishes that λ(t) is constant and, in fact, that the optimal inputs satisfy the equation

[u̇1; u̇2] = [0, -λ; λ, 0] [u1; u2] =: A [u1; u2].

Note that the matrix A is skew symmetric with λ constant, so that the optimal inputs are sinusoids at frequency λ; thus,

[u1(t); u2(t)] = [cos λt, -sin λt; sin λt, cos λt] [u1(0); u2(0)] := e^{At} u(0).
Having established this functional form of the optimal controls, given values for q0 and qf one can solve for the u(0) and λ required to steer the system optimally. However, from the form of the control system (11.9), it is clear that the states q1 and q2 may be steered directly. Thus, it is of greatest interest to steer from q(0) = (0, 0, 0) to q(1) = (0, 0, a). By directly integrating for q1 and q2 we have that

q3(1) = ∫₀¹ (q1u2 - q2u1) dt = (1/λ)(u1(0)² + u2(0)²) = a.

Further, the total cost is

∫₀¹ |u(t)|² dt = u1(0)² + u2(0)² = λa.
Since λ = 2πn, it follows that the minimum cost is achieved for n = 1 and that |u(0)|² = 2πa. However, apart from its magnitude, the direction of u(0) ∈ R² is arbitrary. Thus, the optimal input steering the system between the points (0, 0, 0) and (0, 0, a) is a sum of sines and cosines at a frequency 2π (more generally (2π)/T if the time period of the steering is T).
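These optimal inputs are easy to validate in simulation. The sketch below (parameter values are arbitrary) integrates (11.9) with λ = 2π and |u(0)|² = 2πa, and checks that q1, q2 return to zero while q3 lands on a:

```python
import numpy as np

a = 0.7                                        # desired q3(1)
lam = 2 * np.pi                                # n = 1 gives minimum cost
u0 = np.sqrt(lam * a) * np.array([1.0, 0.0])   # any direction of u(0) works

def u(t):
    # u(t) = exp(A t) u(0) with A = [[0, -lam], [lam, 0]]
    c, s = np.cos(lam * t), np.sin(lam * t)
    return np.array([c * u0[0] - s * u0[1], s * u0[0] + c * u0[1]])

def f(t, q):
    u1, u2 = u(t)
    return np.array([u1, u2, q[0] * u2 - q[1] * u1])

q, t, n = np.zeros(3), 0.0, 4000
dt = 1.0 / n
for _ in range(n):                              # RK4 over [0, 1]
    k1 = f(t, q); k2 = f(t + dt/2, q + dt/2*k1)
    k3 = f(t + dt/2, q + dt/2*k2); k4 = f(t + dt, q + dt*k3)
    q = q + dt/6*(k1 + 2*k2 + 2*k3 + k4); t += dt

print(q)   # q1, q2 back to 0; q3 = a
assert np.allclose(q, [0.0, 0.0, a], atol=1e-6)
```

The total cost realized is |u(0)|² = 2πa, matching the minimum derived above.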
The generalization of the system (11.9) to an m-input system is the system

q̇_i = u_i,  i = 1, ..., m,
ẏ_ij = q_i u_j - q_j u_i,  i < j = 1, ..., m,      (11.11)

or, in matrix form,

q̇ = u,
Ẏ = q u^T - u q^T.      (11.12)
The Euler-Lagrange equations for this system are an extension of those for the two-input case, namely

q̈ - Λq̇ = 0,
Λ̇ = 0,

where Λ ∈ so(m) is the skew-symmetric matrix of Lagrange multipliers associated with Y. As before, Λ is constant, and the optimal input satisfies

u̇ = Λu,

so that u(t) = e^{Λt}u(0). It follows that e^{Λt} ∈ SO(m). It is of special interest to determine the nature of the input when q(0) = q(1) = 0, Y(0) = 0, and Y(1) is a given matrix in so(m). In this context, an amazing fact that has been shown by Brockett [46] is that when m is even and Y is nonsingular, the input has m/2 sinusoids at frequencies
While the proof of this fact is somewhat involved and would take us afield from what we would like to highlight in this section, we may use the fact to propose the following algorithm for steering systems of the form of equation (11.11):
q̇_i = u_i,  i = 1, ..., m,
q̇_ij = q_i u_j,  1 ≤ i < j ≤ m,      (11.13)
q̇_ijk = q_ij u_k,  1 ≤ i, j, k ≤ m (mod Jacobi identity).

Because the Jacobi identity imposes relationships between Lie brackets of the form

[g_i, [g_j, g_k]] + [g_k, [g_i, g_j]] + [g_j, [g_k, g_i]] = 0

for all i, j, k, it follows that not all state variables of the form q_ijk are controllable. For this reason, we refer to the last of the preceding equations as "mod Jacobi identity." Indeed, a straightforward but somewhat laborious computation shows that

q_231 - q_132 = q_1 q_23 - q_2 q_13.

From Problem 11.1 it may be verified that the maximum number of controllable q_ijk is

(m + 1)m(m - 1)/3.
Constructing the Lagrangian with the same integral cost criterion as before and deriving the Euler-Lagrange equations does not, in general, result in a constant set of Lagrange multipliers. For the case of m = 2, Brockett and Dai [48] have shown that the optimal inputs are elliptic functions (see also the next section). However, we can extend Algorithm 1 to this case as follows:

Algorithm 2 Steering Second-Order Model Systems.
1. Steer the q_i to their desired values. This causes drift in all the other states.
2. Steer the q_ij to their desired values using integrally related sinusoidal inputs. If the i-th input has frequency ω_i, then q̇_ij will have frequency components at ω_i ± ω_j. By choosing inputs such that we get frequency components at zero, we can generate net motion in the desired states.
3. Use sinusoidal inputs a second time to move all the previously steered states in a closed loop and generate net motion only in the q_ijk direction. This requires careful selection of the input frequencies such that ω_i ± ω_j ≠ 0 but the product of the three sinusoids has a component at frequency zero.

The required calculations for Step 2 above are identical to those in Algorithm 1. A general calculation of the motion in Step 3 is quite cumbersome, although for specific systems of practical interest the calculations are quite straightforward. For example, if m = 2, equation (11.13) becomes

q̇1 = u1,
q̇2 = u2,
q̇12 = q1u2,
q̇121 = q12u1,
q̇122 = q12u2.

To steer q1, q2, and q12 to their desired locations, we apply Algorithm 1. To steer q121 independently of the other states, choose u1 = a sin 2πt, u2 = b cos 4πt to obtain

q121(1) = q121(0) - a²b/(16π²).
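The net motion claimed for q121 can be verified by integrating the m = 2 second-order model over one period (the amplitudes below are arbitrary):

```python
import numpy as np

a, b = 1.3, 0.8
def f(t, q):
    # q = (q1, q2, q12, q121, q122)
    u1, u2 = a * np.sin(2*np.pi*t), b * np.cos(4*np.pi*t)
    return np.array([u1, u2, q[0]*u2, q[2]*u1, q[2]*u2])

q, t, n = np.zeros(5), 0.0, 8000
dt = 1.0 / n
for _ in range(n):                      # RK4 over one period [0, 1]
    k1 = f(t, q); k2 = f(t+dt/2, q+dt/2*k1)
    k3 = f(t+dt/2, q+dt/2*k2); k4 = f(t+dt, q+dt*k3)
    q = q + dt/6*(k1+2*k2+2*k3+k4); t += dt

predicted = -a**2 * b / (16 * np.pi**2)
print(q[3], predicted)                  # net motion only in q121
assert abs(q[3] - predicted) < 1e-6
assert np.allclose(q[[0, 1, 2, 4]], 0.0, atol=1e-6)
```

All other states return to their initial values after the period, which is exactly what Step 3 of the algorithm requires.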
q̇1 = u1,
q̇2 = u2,
q̇3 = q2u1,      (11.14)
q̇4 = q3u1,
⋮
q̇n = q_{n-1}u1,

with

g1 = (1, 0, q2, q3, ..., q_{n-1})^T,  g2 = (0, 1, 0, ..., 0)^T.      (11.15)

We define the system (11.14) as a one-chain system. The first item is to check the controllability of these systems. Recall the iterated Lie products ad^k_{g1} g2, defined by ad^k_{g1} g2 = [g1, ad^{k-1}_{g1} g2] with ad^0_{g1} g2 = g2.
Lemma 11.8 Lie Bracket Calculations. For the vector fields in equation (11.15), with k ≥ 1,

ad^k_{g1} g2 = (-1)^k e_{k+2}.

(Here e_{k+2} is the (k + 2)-th standard basis vector; the only nonzero entry is in the (k + 2)-th entry.)
Proof: For k = 1 the formula is verified by direct computation. Now assume that the formula is true for k. Then

ad^{k+1}_{g1} g2 = [g1, ad^k_{g1} g2] = (-1)^{k+1} e_{k+3}.
Proposition 11.9 Controllability of the One-Chain System. The one-chain system (11.14) is maximally nonholonomic (controllable).

Proof: There are n coordinates in equation (11.14), and the n Lie products

{g1, g2, ad^i_{g1} g2},  1 ≤ i ≤ n - 2,

are independent by Lemma 11.8. □
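Both Lemma 11.8 and the proposition can be checked symbolically for a fixed n; a sketch for n = 5 (illustrative, not from the text):

```python
import sympy as sp

n = 5
qs = sp.symbols(f'q1:{n+1}')
q = sp.Matrix(qs)
g1 = sp.Matrix([1, 0] + [qs[i] for i in range(1, n - 1)])  # (1, 0, q2, ..., q_{n-1})
g2 = sp.Matrix([0, 1] + [0] * (n - 2))

def bracket(f, g):
    return g.jacobian(q) * f - f.jacobian(q) * g

vecs = [g1, g2]
ad = g2
for k in range(1, n - 1):
    ad = bracket(g1, ad)                 # ad^k_{g1} g2
    e = sp.zeros(n, 1); e[k + 1] = (-1)**k
    assert sp.simplify(ad - e) == sp.zeros(n, 1)   # Lemma 11.8
    vecs.append(ad)

D = sp.Matrix.hstack(*vecs)
print(D.rank())                          # n, so the system is controllable
assert D.rank() == n
```

The n columns are constant multiples of distinct basis vectors plus g1, so the rank computation is immediate, just as in the proof.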
To steer this system, we use sinusoids at integrally related frequencies. Roughly speaking, if we use u1 = sin 2πt and u2 = cos 2πkt, then q̇3 will have components at frequency 2π(k - 1), q̇4 at frequency 2π(k - 2), etc.; q̇_{k+2} will have a component at frequency zero and when integrated gives motion in q_{k+2} while all previous variables return to their starting values.

Algorithm 3 Steering Chained Form Systems.
1. Steer q1 and q2 to their desired values.
2. For each q_{k+2}, k ≥ 1, steer q_{k+2} to its final value using u1 = a sin 2πt, u2 = b cos 2πkt, where a and b satisfy

q_{k+2}(1) - q_{k+2}(0) = (a/(4π))^k (b/k!).
Proof: The proof is constructive. We first show that using u1 = a sin 2πt, u2 = b cos 2πkt produces motion only in q_{k+2} and not in q_j, j < k + 2, after one period, by direct integration. If q_{k-1} has terms at frequency 2πn_i, then q_k has corresponding terms at 2π(n_i ± 1) (by expanding products of sinusoids as sums of sinusoids). Since the only way to have q_j(1) ≠ q_j(0) is for q̇_j to have a component at frequency zero, it suffices to keep track only of the lowest frequency component in each variable; higher components will integrate to zero. Direct computation starting from the origin yields

q1 = (a/2π)(1 - cos 2πt),
q2 = (b/2πk) sin 2πkt,
q3 = ∫ (ab/2πk) sin 2πkt sin 2πt dt
   = (1/2)(ab/2πk) ( sin 2π(k-1)t / 2π(k-1) - sin 2π(k+1)t / 2π(k+1) ),
q4 = ∫ (1/2) ( a²b / (2πk · 2π(k-1)) ) sin 2π(k-1)t sin 2πt dt + ···
   = (1/2²) ( a²b / (2πk · 2π(k-1) · 2π(k-2)) ) sin 2π(k-2)t + ··· .

It follows that

q_{k+2}(1) = q_{k+2}(0) + (a/(4π))^k (b/k!),

and all earlier q_i's are periodic and hence q_i(1) = q_i(0), i < k + 2. If the system does not start at the origin, the initial conditions generate extra terms of the form q_{i-1}(0)u1 in the i-th derivative, and this integrates to zero, giving no net contribution. □
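The formula in Algorithm 3 is easy to confirm numerically; the sketch below checks the case k = 2 on the four-state chain (amplitudes arbitrary):

```python
import numpy as np
from math import factorial, pi

a, b, k = 1.1, 0.9, 2
def f(t, q):
    # one-chain system: q1' = u1, q2' = u2, q3' = q2 u1, q4' = q3 u1
    u1, u2 = a * np.sin(2*np.pi*t), b * np.cos(2*np.pi*k*t)
    return np.array([u1, u2, q[1]*u1, q[2]*u1])

q, t, n = np.zeros(4), 0.0, 8000
dt = 1.0 / n
for _ in range(n):                       # RK4 over one period [0, 1]
    k1 = f(t, q); k2 = f(t+dt/2, q+dt/2*k1)
    k3 = f(t+dt/2, q+dt/2*k2); k4 = f(t+dt, q+dt*k3)
    q = q + dt/6*(k1+2*k2+2*k3+k4); t += dt

predicted = (a / (4*pi))**k * b / factorial(k)
print(q[3], predicted)
assert abs(q[3] - predicted) < 1e-6
assert np.allclose(q[:3], 0.0, atol=1e-6)        # q1, q2, q3 are periodic
```

Changing k to another integer steers a different coordinate q_{k+2} while leaving the earlier ones untouched, which is the heart of the algorithm.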
For the case of systems with more than two inputs, there is a generalization to
multi-chained form systems; we refer the reader to [226], [53], [302], and [304].
FIGURE 11.3. Steerable unicycle. The unicycle has two independent inputs: the steering input controls the angle of the wheel, φ; the driving input controls the velocity of the cart in the direction of the wheel. The configuration of the cart is its Cartesian location and the wheel angle.

ẋ = cos φ u1,
ẏ = sin φ u1,      (11.16)
φ̇ = u2.
An approximation to this system is obtained by setting cos φ ≈ 1, sin φ ≈ φ, and relabeling x as x1, y as x3, and φ as x2 to get

ẋ1 = u1,
ẋ2 = u2,      (11.17)
ẋ3 = x2u1.
To steer this system, we first use u1, u2 to steer x1, x2 to their desired locations; this may cause x3 to drift. Now use u1 = α sin(ωt), u2 = β cos(ωt) and note that after 2π/ω seconds, x1 and x2 complete a periodic trajectory and the x3 coordinate advances by an amount equal to

παβ/ω².
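This advance of παβ/ω² is straightforward to confirm numerically (α, β, ω below are arbitrary):

```python
import numpy as np

alpha, beta, omega = 0.8, 1.2, 2*np.pi
T = 2*np.pi / omega
def f(t, x):
    # approximate unicycle (11.17): x1' = u1, x2' = u2, x3' = x2 u1
    u1, u2 = alpha*np.sin(omega*t), beta*np.cos(omega*t)
    return np.array([u1, u2, x[1]*u1])

x, t, n = np.zeros(3), 0.0, 4000
dt = T / n
for _ in range(n):                      # RK4 over one period
    k1 = f(t, x); k2 = f(t+dt/2, x+dt/2*k1)
    k3 = f(t+dt/2, x+dt/2*k2); k4 = f(t+dt, x+dt*k3)
    x = x + dt/6*(k1+2*k2+2*k3+k4); t += dt

predicted = np.pi*alpha*beta/omega**2
print(x, predicted)                     # x1, x2 periodic; x3 advances
assert abs(x[2] - predicted) < 1e-6
assert np.allclose(x[:2], 0.0, atol=1e-6)
```

Scaling α and β (or the period) then gives any desired advance in x3.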
Now α, β, ω can be chosen appropriately. In order to apply this strategy to the unapproximated system (11.16) we modify the input to

ẋ1 = v1,
ẋ2 = v2,      (11.18)
ẋ3 = tan x2 · v1.

As before, we steer x1 and x2 using v1, v2. To steer the third variable, we use v1 = α sin(ωt), v2 = β cos(ωt) as before. Then

ẋ3 = tan((β/ω) sin ωt) α sin ωt.      (11.19)

The value of x3 after 2π/ω seconds is determined by the constant part of the right-hand side of (11.19). The constant coefficient is given by

(1/2π) ∫_{-π}^{π} tan((β/ω) sin θ) α sin θ dθ.
ẋ = (cos θ cos φ) u1,
ẏ = (sin θ cos φ) u1,      (11.20)
φ̇ = u2,
θ̇ = (1/l)(sin φ) u1.

The form of the equations shows that when φ = π/2, the cart cannot be driven forward. As in the previous section, an approximation to this system is instructive. Relabeling the variables x, y, φ, θ as x1, x4, x2, x3, setting l = 1, and
FIGURE 11.4. Sample trajectories for the unicycle. The trajectory shown is a two-stage path which moves the unicycle from (0, -0.6, 0) to (3, 0, 3). The first portion of the path, labeled A, drives the x1 and x2 states to their desired values using a constant input. The second portion, labeled B, uses a periodic input to drive x3 while bringing the other two states back to their desired values. The top two figures show the states versus x1; the bottom figures show the states and inputs as functions of time.
ẋ1 = u1,
ẋ2 = u2,      (11.21)
ẋ3 = x2u1,
ẋ4 = x3u1.

Note that span{g1, g2, [g1, g2], [g1, [g1, g2]]} = R^4, so that the system is a regular system with degree of nonholonomy 2. It is also easy to verify that this condition also holds for the original system. Steering the states x1, x2, x3 of (11.21) is immediate from the previous section. To steer x4, note that if

u1 = α cos ωt,  u2 = β cos 2ωt,

then x1, x2, and x3 are all periodic and return to their initial values after 2π/ω seconds. Also

x3 = -(αβ/4ω²) cos ωt - (αβ/12ω²) cos 3ωt,
FIGURE 11.5. Front wheel drive cart. The configuration of the cart is determined by its Cartesian location, the angle the car makes with the horizontal, and the steering wheel angle relative to the car body. The two inputs are the velocity of the front wheels (in the direction the wheels are pointing) and the steering velocity. The rear wheels of the cart are always aligned with the cart body and are constrained to move along the line in which they point or rotate about their center.
so that the net change in x4 over one period 2π/ω is

-πα²β/(4ω³).
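The net change of -πα²β/(4ω³) in x4 can be confirmed by integrating (11.21) over one period (a sketch; amplitudes arbitrary):

```python
import numpy as np

alpha, beta, omega = 0.9, 1.1, 2*np.pi
T = 2*np.pi / omega
def f(t, x):
    # approximate front-wheel-drive cart (11.21)
    u1, u2 = alpha*np.cos(omega*t), beta*np.cos(2*omega*t)
    return np.array([u1, u2, x[1]*u1, x[2]*u1])

x, t, n = np.zeros(4), 0.0, 8000
dt = T / n
for _ in range(n):                      # RK4 over one period
    k1 = f(t, x); k2 = f(t+dt/2, x+dt/2*k1)
    k3 = f(t+dt/2, x+dt/2*k2); k4 = f(t+dt, x+dt*k3)
    x = x + dt/6*(k1+2*k2+2*k3+k4); t += dt

predicted = -np.pi * alpha**2 * beta / (4 * omega**3)
print(x, predicted)                     # x1, x2, x3 periodic; x4 advances
assert abs(x[3] - predicted) < 1e-7
assert np.allclose(x[:3], 0.0, atol=1e-7)
```

Only x4 acquires a zero-frequency component from the product of the two sinusoids, which is why the other states close up after the period.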
ẋ1 = v1,
ẋ2 = v2,
ẋ3 = (tan x2 / cos x3) v1,      (11.22)
ẋ4 = tan x3 · v1.
We refer to such systems as triangular but not strictly triangular, since ẋ3 depends on x3. By approximating cos x3 by 1, the equations become strictly triangular; using v1 = α cos ωt, v2 = β cos 2ωt we can solve for the Fourier series coefficients of x1, x2, x3, and x4. Note that only the Fourier coefficient corresponding to the zero frequency is needed to get the change in x4 after one time period. We have a sense now that it is easy to steer drift-free systems if they can be put into the "triangular" form discussed earlier using sinusoids. A natural question arises as to whether arbitrary systems of higher order of nonholonomy can be put into triangular form. The answer to this question is not easy, and we refer the reader to the work of Bicchi in this regard (see his survey [25]). Another question is whether systems can be converted into the chained form that we can steer as shown above. The
answer to this question is the subject of some discussion in the next section and
also the next chapter.
v = β(q)u,  z = φ(q),

the system is in chained form in the z coordinates with inputs v? One can give necessary and sufficient conditions to solve this problem (see [223]), but the discussion of these conditions would take us into far too much detail for this section. In [226], conditions for the two-input case, for converting the system to the one-chain form, were given. The conditions assume that g1, g2 are linearly independent. The system can be converted into one-chain form if the following distributions are all of constant rank and involutive:
dh2 · Δ2 = 0,

given by

z2 = L_{g1}^{n-2} h2,
⋮
z_{n-1} = L_{g1} h2,
z_n = h2,

yields

ż1 = v1,
ż2 = v2,
⋮
żn = z_{n-1} v1.
This procedure can be illustrated on the kinematic model of a car.

Example 11.11 Conversion of the Kinematic Car into Chained Form. First, we rewrite the kinematic equations by defining a new input u1 ← u1 cos θ cos φ:

ẋ = u1,
ẏ = tan θ · u1,
θ̇ = (1/l) sec θ tan φ · u1,
φ̇ = u2.

Then, with h1 = x and h2 = y, it is easy to verify that this system satisfies the conditions given above, and the change of variables and input are given by
z1 = x,
u1 = v1,
⋮

∫₀¹ |u(t)|² dt.
Our treatment here is, of necessity, somewhat informal; to get all the smoothness
hypotheses worked out would be far too large an excursion to make here. We
will assume that the steering problem has a solution (by Chow's theorem, this is
guaranteed when the controllability Lie algebra generated by g_1, ..., g_m is of rank
n for all q). We give a heuristic derivation from the calculus of variations of the
necessary conditions for optimality. To do so, we incorporate the constraints into
the cost function using a Lagrange multiplier function p(t) ∈ R^n to get
J(q, p, u) = ∫_0^1 { (1/2) u^T(t) u(t) − p^T ( q̇ − Σ_{i=1}^m g_i(q) u_i ) } dt.   (11.23)

Define the Hamiltonian

H(q, p, u) = (1/2) u^T u + p^T Σ_{i=1}^m g_i(q) u_i.   (11.24)

Using this definition and integrating the second term of (11.23) by parts yields
δJ = −p^T(t) δq(t) |_0^1 + ∫_0^1 ( (∂H/∂q) δq + (∂H/∂u) δu + ṗ^T δq ) dt.

If the optimal input has been found, a necessary condition for stationarity is that
the first variation above be zero for all variations δu and δq:¹

ṗ = −∂H/∂q,   ∂H/∂u = 0.   (11.25)

From the second of these equations, it follows that the optimal inputs are given by

u_i = −p^T g_i(q),   i = 1, ..., m.   (11.26)

¹In the calculus of variations, one makes a variation in u, namely δu, and calculates the
changes in the quantities p and H as δp and δH. See, for example, [334].
536 11. Geometric Nonlinear Control
with boundary conditions q(0) = q_0 and q(1) = q_f. Using this result, we may
derive the following proposition about the structure of the optimal controls:

Proposition 11.12 Constant Norm of Optimal Controls. For the least squares
optimal control problem for the control system

q̇ = Σ_{i=1}^m g_i(q) u_i,

which satisfies the controllability rank condition, the norm of the optimal input is
constant, that is,

|u(t)|^2 = |u(0)|^2   ∀ t ∈ [0, 1].
Proof: The formula for the optimal input is given in (11.26). Differentiating it
yields

u̇_i = −ṗ^T g_i(q) − p^T (∂g_i/∂q) q̇.
The solution of the linear time-varying equation (11.29) is of the form

u(t) = V(t) u(0)   (11.30)

for some V(t) ∈ SO(m) (see Problem 11.6). From this fact the statement of the
proposition follows. □
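The norm-preservation argument can be checked numerically: for any skew-symmetric Ω(t), integrating u̇ = Ω(t)u leaves |u| unchanged up to discretization error, because the transition matrix is a rotation. A sketch for m = 2 with an arbitrarily chosen ω(t) (hypothetical, for illustration only):

```python
# u1' = w(t)*u2, u2' = -w(t)*u1 is u' = Omega(t)*u with Omega(t) skew-symmetric,
# so the transition matrix lies in SO(2) and |u(t)| = |u(0)|.
import math

def integrate_skew(u0, omega, T=1.0, steps=100_000):
    u1, u2 = u0
    dt = T / steps
    for k in range(steps):
        w = omega(k * dt)
        # skew-symmetric right-hand side
        du1, du2 = w * u2, -w * u1
        u1, u2 = u1 + du1 * dt, u2 + du2 * dt
    return u1, u2

u1, u2 = integrate_skew((3.0, 4.0), lambda t: 2.0 * math.cos(5.0 * t))
# math.hypot(u1, u2) stays close to the initial norm 5.0
```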
Consider the problem of minimizing the least squares cost ∫_0^1 u^T u dt for the system

q̇_1 = u_1,
q̇_2 = u_2,
q̇_3 = q_1 u_2 − q_2 u_1,
q̇_4 = q_1^2 u_2.
This system has growth vector (2, 3, 4). It may be verified that the optimal inputs
satisfy the differential equation obtained by specializing (11.29), namely

u̇_1 = 2(p_3 + q_1 p_4) u_2,   (11.31)
u̇_2 = −2(p_3 + q_1 p_4) u_1.
The solution of this equation is of the form

u_1(t) = r sin α(t),
u_2(t) = r cos α(t),

where r^2 = u_1^2(0) + u_2^2(0). Further, since the optimal Hamiltonian is given by

H*(q, p) = −(1/2)(p_1 − q_2 p_3)^2 − (1/2)(p_2 + q_1 p_3 + q_1^2 p_4)^2,
α̈ = 2 p_4 r sin α.

To integrate this equation, multiply both sides by 2α̇, integrate, and define δ = 2 p_4 r
to get

(α̇)^2 = b − 2δ cos α

for some constant b. Rewriting in terms of constants c, k, this last equation may be integrated using elliptic integrals
(see, for example, Lawden [175]) for α using

∫_0^{α/2} dα / √(1 − k^2 sin^2 α) = c t / 2 + d.   (11.32)
The left-hand side of this equation is an elliptic integral. Hence the optimal inputs
for Engel's system come from elliptic functions.
In general, it is difficult to find the solution to the least squares optimal control
problem for steering the system from an initial to a final point. However, it may
be possible to find an approximate solution to the problem by using the so-called
Ritz approximation method. To explain what this means, we begin by choosing an
orthonormal basis for L^2[0, 1], the set of square integrable functions on [0, 1]. One
basis is the set of trigonometric functions {ψ_0(t), ψ_1(t), ...} given by ψ_0(t) ≡ 1 and,
for k ≥ 1,

ψ_{2k−1}(t) = √2 cos 2kπt,   ψ_{2k}(t) = √2 sin 2kπt,   t ∈ [0, 1].
For a choice of integer N, a Ritz approximation to the optimal i-th input is assumed
to be of the form

u_i(t) = Σ_{k=0}^N a_{ik} ψ_k(t).
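The orthonormality of this trigonometric family in L^2[0, 1] is easily checked by quadrature (the midpoint rule and grid size below are illustrative choices):

```python
# psi_0 = 1, psi_{2k-1} = sqrt(2)*cos(2k*pi*t), psi_{2k} = sqrt(2)*sin(2k*pi*t);
# approximate the L^2[0,1] inner products by the midpoint rule.
import math

def psi(k, t):
    if k == 0:
        return 1.0
    m = (k + 1) // 2  # frequency index
    arg = 2 * m * math.pi * t
    return math.sqrt(2) * (math.cos(arg) if k % 2 == 1 else math.sin(arg))

def inner(j, k, n=20_000):
    return sum(psi(j, (i + 0.5) / n) * psi(k, (i + 0.5) / n)
               for i in range(n)) / n

# inner(j, k) is approximately 1 when j == k and 0 otherwise
```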
The plan now is to apply this input to steer the system

q̇ = g_1(q)u_1 + ... + g_m(q)u_m   (11.33)

from q(0) = q_0 to q(1) = q_f and to determine the coefficient vectors a_i =
(a_{i0}, a_{i1}, ..., a_{iN}) ∈ R^{N+1} for i = 1, ..., m, so as to minimize a cost function
that augments the input energy with a penalty term γ|q(1) − q_f|^2 weighting the final-state
goal heavily. The heart of the Ritz approximation procedure is the hope that for
large N and large γ, we will find a u that steers to the final point q_f with low cost.
At this point, we have a finite-dimensional optimization problem, which may
be solved by a variety of methods. One method involving a modification of the
Newton iteration for minimizing J has been explored in [98] and [99], where a
software package called NMPack for this purpose is described. We refer the reader
to these papers for details of its application to various examples.
ẋ = g_1 u_1 + ⋯ + g_m u_m

in which the Lie products between control vector fields of order greater than k are 0; i.e.,
[g_{i_1}, [⋯, [g_{i_{p−1}}, g_{i_p}] ⋯]] is zero when p > k. The method proposed in [173] is
conceptually straightforward, but the details of their method are somewhat involved.
The first step is to derive a "canonical system equation" associated with the given
control system. The chief new concept involved is that of formal power series in
vector fields.
Recall that the flow associated with a vector field g_i was denoted by φ_t^{g_i}(q),
referring to the solution of the differential equation q̇ = g_i(q) at time t starting
from q at time 0. This flow is referred to as the formal exponential of g_i and is
denoted e^{t g_i}(q), where polynomials like g_i and g_i^2 need to be carefully justified. We will defer this
question for the moment and think formally of e^{t g_i}(q) as a diffeomorphism from
R^n to R^n. Now consider a nilpotent Lie algebra of order k generated by the vector
fields g_1, ..., g_m. Recall from Section 11.2 that a Philip Hall basis is a basis for
the controllability Lie algebra which has been constructed in such a way as to keep
track of skew symmetry and the Jacobi identity associated with the Lie bracket.
We define the Philip Hall basis of the controllability Lie algebra generated by
g_1, ..., g_m to be

B_1, B_2, ..., B_s.
Thus, basis elements are Lie products of order less than or equal to k. In our
language of formal power series, we will refer, for instance, to

[g_1, g_2] := g_1 g_2 − g_2 g_1,
[g_1, [g_2, g_3]] := g_1 g_2 g_3 − g_1 g_3 g_2 − g_2 g_3 g_1 + g_3 g_2 g_1.
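Since these expansions hold in any associative (noncommutative) algebra, they can be sanity-checked with matrices standing in for the formal symbols; the particular 2 × 2 matrices below are arbitrary:

```python
# Verify [g1, [g2, g3]] = g1 g2 g3 - g1 g3 g2 - g2 g3 g1 + g3 g2 g1,
# with matrix multiplication playing the role of the formal product.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def bracket(A, B):
    return sub(mul(A, B), mul(B, A))

g1, g2, g3 = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [5, 1]]
lhs = bracket(g1, bracket(g2, g3))
# g1 g2 g3 - g1 g3 g2 + g3 g2 g1 - g2 g3 g1
rhs = sub(add(sub(mul(mul(g1, g2), g3), mul(mul(g1, g3), g2)),
              mul(mul(g3, g2), g1)),
          mul(mul(g2, g3), g1))
# lhs == rhs exactly (integer arithmetic)
```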
It is a basic result of nonlinear control, called the Chen–Fliess series formula (see
[103]), that all flows of the nonlinear control system (11.35):

q̇ = Σ_{i=1}^m g_i(q) u_i,   q(0) = q_0,   (11.35)

= Σ_{j=1}^s S(t) e^{−h_1 B_1} ⋯ e^{−h_{j−1} B_{j−1}} ḣ_j B_j e^{h_{j−1} B_{j−1}} ⋯ e^{h_1 B_1}   (11.38)

:= Σ_{j=1}^s S(t) Ad_{e^{−h_1 B_1} ⋯ e^{−h_{j−1} B_{j−1}}} ḣ_j B_j.
Since the controllability Lie algebra is nilpotent of degree k, we can express each
one of the elements on the right-hand side in terms of the basis elements B_1, ..., B_s.
More specifically, it may be verified that

(11.39)

Thus each element on the right-hand side of (11.38) is a linear combination of
B_1, ..., B_s, and we may express it

for some polynomials p_{j,k}(h). Using this in equation (11.37) and equating
coefficients of the basis elements B_k yields

Σ_{j=1}^s p_{j,k}(h) ḣ_j = v_k,   k = 1, ..., s.
Example 11.14 Nilpotent System of Degree Three with Two Inputs. Consider
a two-input system which is nilpotent of degree three on R^4. We will assume
that g_1, g_2, [g_1, g_2], [g_1, [g_1, g_2]] are linearly independent. As the Philip Hall
basis, we have B_1 = g_1, B_2 = g_2, B_3 = [g_1, g_2], B_4 = [g_1, [g_1, g_2]],
B_5 = [g_2, [g_1, g_2]], with adjoint expansions

B_2 − h_1 B_3 + (1/2) h_1^2 B_4,
B_3 − h_2 B_5 − h_1 B_4,
  ⋮

For instance, the coefficient of ḣ_2 is calculated as
ḣ_1 = v_1,
ḣ_2 = v_2,
ḣ_3 = h_1 v_2 + v_3,   (11.41)
ḣ_4 = (1/2) h_1^2 v_2 + h_1 v_3 + v_4,
ḣ_5 = h_2 v_3 + h_1 h_2 v_2.

Note that this system is in R^5, though the state space equations evolve on R^4.
g_1 = (1, 0, q_2, q_3, q_4)^T,   g_2 = (0, 1, 0, 0, 0)^T.
The system is nilpotent of degree k = 4, and the Philip Hall basis vectors are given below.
The vector fields g_1, g_2, g_3 := ad_{g_1} g_2, g_4 := ad_{g_1}^2 g_2, and g_6 := ad_{g_1}^3 g_2 span the
tangent space of R^5.
Thus, for the Chen–Fliess–Sussmann equations, we have v_5 = v_7 = v_8 = 0,
and the coefficient of ḣ_j is given by the corresponding adjoint expansion.
The calculations for the remaining terms are carried out in a similar fashion,
and the results are given below (we invite the reader to do the calculation herself):
B_1,
B_2 − h_1 B_3 + (1/2) h_1^2 B_4 − (1/6) h_1^3 B_6,
B_3 − h_1 B_4 − h_2 B_5 + (1/2) h_1^2 B_6 + h_1 h_2 B_7 + (1/2) h_2^2 B_8,
B_4 − h_1 B_6 − h_2 B_7,
B_5 − h_1 B_7 − h_2 B_8,
B_6,
B_7,
B_8.
Finally, with v_5 = v_7 = v_8 = 0, the differential equation for h ∈ R^8 is found
to be

ḣ_1 = v_1,
ḣ_2 = v_2,
ḣ_3 = h_1 v_2 + v_3,
ḣ_4 = (1/2) h_1^2 v_2 + h_1 v_3 + v_4,
  ⋮
The task of steering the system from q_0 to q_f still remains to be done. This
is accomplished by choosing any trajectory connecting q_0 to q_f, indexed by time
t. This is substituted into (11.37) to get the expression for the "fictitious inputs"
v_1, ..., v_n, corresponding to the first n linearly independent vector fields in the
Philip Hall basis of the control system. The inputs v_i are said to be fictitious, since
they need to be generated by using the "real" inputs u_i, i = 1, ..., m. To do so
involves a generalization of the definition of Lie brackets. For instance, to try to
generate an input v_3 corresponding to [g_1, g_2] in the previous example, one follows
the definition and uses the concatenation of four segments, each for √ε seconds:
u_1 = 1, u_2 = 0; u_1 = 0, u_2 = 1; u_1 = −1, u_2 = 0; u_1 = 0, u_2 = −1. The
Campbell–Baker–Hausdorff formula gives

e^{g_1} e^{g_2} = e^{g_1 + g_2 + (1/2)[g_1, g_2] + (1/12)([g_1, [g_1, g_2]] − [g_2, [g_1, g_2]]) + ⋯},   (11.43)

where the remaining terms may be found by equating terms in the (noncommutative) formal power series on the right- and left-hand sides.
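Formula (11.43) can be checked to third order with matrices: for small A, B, the product e^A e^B agrees with the exponential of the truncated series up to fourth-order terms (the matrices, scale, and tolerance below are illustrative assumptions):

```python
# Compare exp(A)exp(B) with exp(A + B + (1/2)[A,B]
#                              + (1/12)([A,[A,B]] - [B,[A,B]])) for small A, B.
def mat(scale, rows):
    return [[scale * v for v in row] for row in rows]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(*Ms):
    n = len(Ms[0])
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def smul(c, A):
    return [[c * v for v in row] for row in A]

def bracket(A, B):
    return add(mul(A, B), smul(-1.0, mul(B, A)))

def expm(A, terms=25):
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = smul(1.0 / k, mul(term, A))   # term = A^k / k!
        result = add(result, term)
    return result

A = mat(0.1, [[0, 1], [0, 0]])
B = mat(0.1, [[0, 0], [1, 0]])
C = add(A, B, smul(0.5, bracket(A, B)),
        smul(1.0 / 12.0, add(bracket(A, bracket(A, B)),
                             smul(-1.0, bracket(B, bracket(A, B))))))
lhs = mul(expm(A), expm(B))
rhs = expm(C)
# entries of lhs and rhs agree up to the neglected fourth-order terms
```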
Using the Campbell-Baker-Hausdorff formula for the flow in equation (11.42)
gives the Philip Hall coordinates
h = (0, 0, ε, h_4(ε)),

where h_4(ε) is of order higher than one in ε. Thus, the strategy of cycling between
u_1 and u_2 not only produces motion along the direction [g_1, g_2], but also along
[g_1, [g_1, g_2]]. Thus, both v_3 and v_4 are not equal to 0. However, both the h_1 and
h_2 variables are unaffected. To generate motion along the v_4 direction, one concatenates
an eight-segment input consisting of cycling between u_1, u_2 for √ε
seconds. Since the controllability Lie algebra is nilpotent, this motion does not
produce any motion in any of the other bracket directions (in the context of the
Campbell–Baker–Hausdorff formula above, the series on the right-hand side of
(11.43) is finite). This example can be generalized to specify a constructive procedure
for generating piecewise constant inputs for steering systems which have
nilpotent controllability Lie algebras.
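The four-segment construction can be simulated directly on the first three states of the system of Example 11.14, q̇_1 = u_1, q̇_2 = u_2, q̇_3 = q_1 u_2 − q_2 u_1, for which [g_1, g_2] = (0, 0, 2)^T: the concatenated input returns q_1, q_2 to their start and moves q_3 by approximately 2ε (ε and the step count are illustrative choices):

```python
# Apply u = (1,0), (0,1), (-1,0), (0,-1), each for sqrt(eps) seconds,
# to q1' = u1, q2' = u2, q3' = q1*u2 - q2*u1.
import math

def flow(q, u, tau, steps=20_000):
    q1, q2, q3 = q
    dt = tau / steps
    for _ in range(steps):
        dq3 = q1 * u[1] - q2 * u[0]
        q1 += u[0] * dt
        q2 += u[1] * dt
        q3 += dq3 * dt
    return (q1, q2, q3)

eps = 0.01
s = math.sqrt(eps)
q = (0.0, 0.0, 0.0)
for u in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
    q = flow(q, u, s)
# q is approximately (0, 0, 2*eps): net motion along the bracket direction
```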
When the controllability Lie algebra is not nilpotent, the foregoing algorithm
needs to be modified to steer the system only approximately. The two difficulties
in this case are:

1. The Philip Hall basis is not finite.
2. The Campbell–Baker–Hausdorff formula does not terminate after finitely many
terms.

One approach that has been suggested in this case is to try to change the input
vector fields g_i using a transformation of the inputs, that is, by choosing a matrix β ∈ R^{m×m} so as
to make the transformed g̃_j nilpotent. For details on steering nonholonomic systems using piecewise constant inputs, see the paper of Lafferriere
and Sussmann in [184].
Another interesting approach to using piecewise constant controls is motivated
by "multi-rate" sampled data control, where some controls are held constant for
integer multiples of a basic sampling rate (i.e., they are changed at lower frequencies
than the basic frequency). This results in more elegant paths than those obtained
by the method presented above. To explain this method fully requires a detailed
exposition of multi-rate control; we refer the reader to the work of Monaco and
Normand-Cyrot and co-authors [110], [216].
It may be thought of as a special case of the drift-free systems studied in the
previous subsection by defining the new system

ẋ = Σ_{i=0}^m g_i(x) u_i,

with the input u_0 attached to the vector field g_0(x) = f(x) frozen at 1. The
controllability Lie algebra is defined as follows:

Definition 11.17 Controllability Lie Algebra. The controllability Lie algebra
C of a control system of the form (11.1) is defined to be the span, over the ring of
smooth functions, of elements of the form

[X_k, [X_{k−1}, [..., [X_2, X_1] ...]]],

where X_i ∈ {f, g_1, ..., g_m} and k = 0, 1, 2, ....
In the preceding definition, recall that a Lie subalgebra of a module is a submodule
that is closed under the Lie bracket. The controllability Lie algebra may
be used to define the accessibility distribution as the distribution

C(x) = span{X(x) : X ∈ C}.

From the definition of C it follows that C(x) is an involutive distribution. The
controllability Lie algebra is the smallest Lie algebra containing f, g_1, ..., g_m
that is closed under the Lie bracket and invariant under f, g_1, ..., g_m. The
controllability distribution contains valuable information about the set of states
that are accessible from a given starting point x_0. To develop this, we first develop
some machinery:
Definition 11.18 Invariance. A distribution Δ is said to be invariant under f if
for every τ ∈ Δ, [f, τ] ∈ Δ.
f̃(z) = ( f_1(z_1, ..., z_n), ..., f_d(z_1, ..., z_n), f_{d+1}(z_{d+1}, ..., z_n), ..., f_n(z_{d+1}, ..., z_n) )^T.   (11.44)
Proof: Since the distribution Δ is regular and involutive, it follows that it is
integrable and that there exist functions ξ_{d+1}, ..., ξ_n such that dξ_i · Δ = 0.
Complete the basis to get ξ_1, ..., ξ_d and define z = ξ(x). Now if τ(x) ∈ Δ(x) it
follows that in the z coordinates τ̃ has zeros in its last n − d coordinates. This is simply a consequence of the fact
that τ(x) is in the null space of the last n − d rows of ∂ξ/∂x. Now choose as basis set
for Δ in the z coordinates the vector fields ∂/∂z_1, ..., ∂/∂z_d, so that

[f̃, ∂/∂z_i] = −∂f̃/∂z_i.

Since [f̃, ∂/∂z_i] ∈ Δ by the definition of invariance, it follows that the last n − d rows of ∂f̃/∂z_i
are identically 0. Thus we have

∂f_k/∂z_i = 0   for i = 1, ..., d,   k = d+1, ..., n.
Figure 11.6. The triangular decomposition: the subsystem ξ̇_2 = f_2(ξ_2) drives ξ̇_1 = f_1(ξ_1, ξ_2).
Remarks:

1. The preceding representation theorem is the nonlinear extension of a familiar
result in linear algebra.

2. From a control theoretic point of view it represents a triangular decomposition
of a given system

ẋ = f(x)

into the form

ξ̇_1 = f_1(ξ_1, ξ_2),
ξ̇_2 = f_2(ξ_2),

where ξ_1 = (z_1, ..., z_d)^T, ξ_2 = (z_{d+1}, ..., z_n)^T. This decomposition is shown
in Figure 11.6.
3. Another valuable geometric insight is gained by representing the foliation associated
with Δ as the level sets of ξ_2. The preceding theorem states that if
two different solution trajectories start with the same value of ξ_2, that is, if
x^a(0), x^b(0) are such that ξ_2(x^a(0)) = ξ_2(x^b(0)) = ξ_2(0), then it follows
that ξ_2(x^a(t)) = ξ_2(x^b(t)) = ξ_2(t) with ξ_2(t) satisfying

ξ̇_2 = f_2(ξ_2)

with initial condition ξ_2(0). Thus, the solution of the differential equation carries
sheets of the foliation onto other sheets of the foliation. This is illustrated in
Figure 11.7.
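This leaf-preservation property can be illustrated numerically: integrating the triangular form for two initial conditions with the same ξ_2(0) leaves the ξ_2 components identical for all time (the particular f_1, f_2 below are hypothetical, chosen only to have the required triangular structure):

```python
# xi1' = f1(xi1, xi2), xi2' = f2(xi2): the xi2 component of a solution
# depends only on xi2(0), so trajectories started on the same leaf stay
# on the same leaf.
import math

def step(xi1, xi2, dt):
    f1 = math.sin(xi1) + xi2   # f1 depends on both components
    f2 = -xi2                  # f2 depends on xi2 alone
    return xi1 + f1 * dt, xi2 + f2 * dt

def trajectory(xi1_0, xi2_0, T=1.0, steps=10_000):
    xi1, xi2 = xi1_0, xi2_0
    dt = T / steps
    for _ in range(steps):
        xi1, xi2 = step(xi1, xi2, dt)
    return xi1, xi2

a = trajectory(0.0, 0.5)
b = trajectory(2.0, 0.5)   # different xi1(0), same leaf xi2(0) = 0.5
# the xi2 components of a and b agree exactly; the xi1 components differ
```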
This machinery is very useful to study local decompositions of control systems
of the form of (11.1). Let Δ be a distribution which contains f, g_1, g_2, ..., g_m,
is invariant under the vector fields f, g_1, ..., g_m, and is involutive. Then by
an easy extension of the preceding theorem it is possible to find coordinates such
that the control system may be represented in coordinates as

ξ̇_1 = f_1(ξ_1, ξ_2) + Σ_{i=1}^m g_{1i}(ξ_1, ξ_2) u_i,   (11.46)
ξ̇_2 = 0.
Figure 11.8. Decomposition of the control system (11.46): the input u enters only the ξ_1 subsystem, while the ξ_2 dynamics are trivial.
A candidate for the involutive distribution Δ which satisfies the preceding requirements
is the controllability distribution C(x), which was defined to be the involutive
closure of f, g_1, ..., g_m and is hence involutive by definition. The decomposition
of equation (11.46) is visualized in Figure 11.8.
The preceding discussion shows that the ξ_2 states are in some sense to be thought
of as the uncontrollable states, in that the dynamics of the ξ_2 variables are unaffected
by the choice of the input. The number of ξ_2 states is equal to n minus the dimension of
C(x). By choosing C(x) to be the smallest invariant distribution containing the
f, g_i, it follows that we have tried to extract as many ξ_2 variables as possible.
However, since f is the drift vector field, which is not steerable using controls, it is
not completely clear in which directions it is possible to steer the control system
in the distribution C(x). We now define the set of states accessible from a given
state x_0:
Definition 11.20 Reachable Set from x_0. Let R^U(x_0, T) ⊂ R^n be the subset
of all states accessible from state x_0 in time T with the trajectories being confined
to a neighborhood U of x_0. This is called the reachable set from x_0.

In order to characterize the reachable set we define the accessibility Lie algebra:
11.6 Observability Concepts 549
where X_1 ∈ {g_1, ..., g_m}, for i ≥ 2, X_i ∈ {f, g_1, ..., g_m}, and k = 0, 1, 2, ....²
The accessibility distribution A(x) is defined as the evaluation of the Lie algebra
A at x. The distribution is involutive and g_i invariant. In fact, the distributions
A(x) and C(x) are related, and in suitable coordinates the system may be written as
ξ̇_{1,d−1} = f_{d−1}(ξ_1, ξ_2) + Σ_{i=1}^m g_{d−1,i}(ξ_1, ξ_2) u_i,
ξ̇_{1,d} = f_d(ξ_{1,d}, ξ_2),   (11.47)
ξ̇_2 = 0.
Note the close resemblance between equations (11.46) and (11.47). In fact, the
only difference is the presence of the extra equation for the d-th variable in ξ_1.
Indeed, if the last n − d variables ξ_2 start at zero at t = 0 they will stay at zero,
and the d-th variable will satisfy the autonomous differential equation

ξ̇_{1d} = f_d(ξ_{1d}, 0).
Constructive controllability results, that is, constructions for generating control
laws for steering systems with drift from one point in the state space to another,
are not easy to come by. Further reading on this topic is indicated in the summary
section of this chapter.

²Note that X_1 is chosen from the set g_1, g_2, ..., g_m and does not include f.
Also, recall that in coordinates, the Lie derivative of a covector field ω along a
vector field f is a new covector field L_f ω defined by

L_f ω(x) = f^T(x) [ ∂ω^T/∂x ]^T + ω(x) (∂f/∂x).   (11.48)

The following is a list of some useful properties of the Lie derivative of a covector
field. The proofs are left as exercises:

1. If f is a vector field and λ a smooth function, then

L_f dλ(x) = d L_f λ(x).
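Property 1 can be verified by finite differences for any concrete f and λ, computing L_f dλ from (11.48) and d(L_f λ) independently (the chosen f, λ, and base point below are arbitrary):

```python
# Check L_f(d lambda) = d(L_f lambda) numerically via central differences.
import math

def grad(phi, x, h=1e-5):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((phi(xp) - phi(xm)) / (2 * h))
    return g

def jac(F, x, h=1e-5):
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for k in range(n):
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            J[i][k] = (Fp[i] - Fm[i]) / (2 * h)
    return J

f = lambda x: [x[1], -math.sin(x[0])]
lam = lambda x: x[0] ** 2 + x[0] * x[1]
x0 = [0.7, -0.3]

omega = grad(lam, x0)                  # the covector d(lambda) at x0
H = jac(lambda x: grad(lam, x), x0)    # Hessian of lambda = d(omega^T)/dx
Jf = jac(f, x0)                        # df/dx
# left side: L_f d(lambda) via (11.48)
lhs = [sum(f(x0)[i] * H[i][j] for i in range(2))
       + sum(omega[i] * Jf[i][j] for i in range(2)) for j in range(2)]
# right side: d(L_f lambda), with L_f lambda = d(lambda) . f
rhs = grad(lambda x: sum(grad(lam, x)[i] * f(x)[i] for i in range(2)), x0)
# lhs and rhs agree componentwise up to finite-difference error
```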
ωτ = ω[f, τ] ≡ 0.

Further, using the last of the properties of the Lie derivative of a covector field
listed above, it follows that

(L_f ω)τ = L_f(ωτ) − ω[f, τ],

so that we have

(L_f ω)τ = 0.

Since τ ∈ Δ is arbitrary, it follows that L_f ω ∈ Ω. This establishes the f invariance
of Ω; the converse follows similarly. □
11.7 Zero Dynamics Algorithm and Generalized Normal Forms 551
y_1 = h_1(x),   (11.49)
  ⋮
y_m = h_m(x).

Our development parallels that of Kou, Elliott, and Tarn [166].
Proposition 11.23 Representation. Let Δ be a nonsingular involutive distribution
of dimension d invariant under the vector fields f, g_1, ..., g_m. Further,
assume that the codistribution spanned by dh_1, ..., dh_m is contained in Δ^⊥. Then
it is possible to find a local change of coordinates ξ = φ(x) in a neighborhood of
x_0 such that the system has the form

ξ̇_1 = f_1(ξ_1, ξ_2) + Σ_{i=1}^m g_{1i}(ξ_1, ξ_2) u_i,
ξ̇_2 = f_2(ξ_2) + Σ_{i=1}^m g_{2i}(ξ_2) u_i,   (11.50)
y_i = h_i(ξ_2).
[Block diagram: the subsystem ξ̇_2 = f_2(ξ_2) + Σ_k g_{2k}(ξ_2) u_k produces the output y = h(ξ_2) and drives ξ̇_1 = f_1(ξ_1, ξ_2) + Σ_k g_{1k}(ξ_1, ξ_2) u_k.]
relative degree for MIMO systems) in the context of input–output linearization. For
systems for which vector relative degree accrues after dynamic extension, too, the
zero dynamics were well defined, since dynamic extension does not alter the zero
dynamics of the system. The question that arises then is whether the zero
dynamics of nonlinear systems may be intrinsically defined. The answer to this
question is furnished by the so-called zero dynamics algorithm. This algorithm is
a nonlinear counterpart of a corresponding algorithm in the linear case attributed
to Wonham [331]. In the following we follow closely the treatment of [149]. The
setup is as before: we consider "square" nonlinear control systems of the form
ẋ = f(x) + g(x)u,   (11.51)
y = h(x),

with m inputs and outputs and x ∈ R^n. We will also assume that the inputs are all
independent, that is to say,

dim(span{g_1(x_0), ..., g_m(x_0)}) = m.

Further, x_0 ∈ R^n is an equilibrium point of the undriven system, i.e., f(x_0) = 0,
and we will assume that the output is unbiased at this point, namely h(x_0) = 0.
It is clear that if the state of the system (11.51) is x_0 at t = 0 and the input
u(t) is identically 0, then the output y(t) is identically 0. The question to
be answered is: for which other initial conditions besides x_0 does there exist a
choice of a suitable input u(t) so as to hold the output y(t) ≡ 0? The answer to
this question needs some more definitions:
Definition 11.24 Locally Controlled Invariant. Given a smooth connected
manifold M ⊂ R^n with x_0 ∈ M, M is said to be locally controlled invariant
at x_0 if there exists a choice of smooth feedback law u : M → R^m defined in a
neighborhood of x_0 such that the vector field

f(x) + g(x)u(x) ∈ T_x M   ∀ x in a neighborhood of x_0 in M.
Definition 11.25 Output Zeroing Manifold. Given a smooth connected manifold
M ⊂ R^n with x_0 ∈ M, M is said to be locally output zeroing at x_0
if:

(i) h(x) = 0 for x ∈ M.
(ii) M is locally controlled invariant at x_0.
This second definition is particularly useful in solving the problem at hand, since
each output zeroing manifold is a set of initial conditions for which the output
can be held identically zero. The zero dynamics manifold of the system is the
maximal output zeroing manifold (maximal is used locally in this context to mean
that it locally contains all other output zeroing manifolds). The zero dynamics
algorithm described below gives a constructive procedure for constructing this
maximal output zeroing manifold by constructing a (decreasing) nested chain of
manifolds M_0 ⊃ M_1 ⊃ ... ⊃ M_{k*} = M_{k*+1} = ⋯.
M_{k*+1} = V_{k*}.

Further, assume that span{g_1(x), ..., g_m(x)} ∩ T_x V_{k*} has constant dimension for
all x ∈ V_{k*}. Then the manifold M* = V_{k*} is a locally maximal output zeroing
manifold.
Proof: The regularity hypothesis guarantees that the M_k are a chain of submanifolds
of decreasing dimension. As a consequence it follows that there exists an
integer k* such that M_{k*+1} = V_{k*} for some neighborhood U_{k*} of x_0. Set M* = V_{k*}.
It follows from the construction of the zero dynamics algorithm that for each
x ∈ M* there exists u(x) ∈ R^m such that

f(x) + g(x)u ∈ T_x M*.
The smoothness of u(x) has yet to be established. We express this a little
more explicitly by characterizing M* locally as the zero set of a function H(x) :
R^n → R^{n−γ}, where γ is the dimension of M*. Then it follows that the preceding
equation may be written as

dH(x)(f(x) + g(x)u) = 0,

or

dH(x) f(x) ∈ span dH(x) g(x)
at each point x of M* ∩ U* for some neighborhood U* of x_0. The hypotheses on the
rank of span{g_1(x), ..., g_m(x)} and of span{g_1(x), ..., g_m(x)} ∩ T_x M*
guarantee that the matrix dH(x)g(x) has constant rank on M* in a neighborhood
of x_0. Thus in a neighborhood of x_0 we can find a smooth mapping u* : U* (a
neighborhood of x_0) → R^m such that

f(x) + g(x)u*(x) ∈ T_x M*.
This establishes that at the conclusion of the zero dynamics algorithm we have a
controlled invariant manifold.
To show the local maximality of M*, consider another controlled invariant output
zeroing manifold Z. It is immediate that Z ⊂ M_0 = {x : h(x) = 0}. The rest of
S_0 = [I 0].

Define the function

H_0(x) = S_0 h(x),

so that

M_0 ∩ U_0 = {x ∈ U_0 : H_0(x) = 0}.
Step k: At the k-th step, we begin the iteration with H_k : R^n → R^{s_0+⋯+s_k},
and with

L_f H_k + L_g H_k u = 0.

If the matrix L_g H_k(x) has constant rank r_k for all x ∈ V_k, the connected component
of M_k ∩ U_k containing x_0, it is possible to find a matrix R_k(x) of smooth functions
whose (s_0 + ⋯ + s_k − r_k) rows locally span the left null space of L_g H_k(x), namely,

R_k(x) L_g H_k(x) = 0.   (11.54)
Given the recursive definition of H_k(x) and the definition of R_{k−1}(x) as the left
null space of L_g H_{k−1}, we may choose R_k(x) accordingly.
It follows that the equation L_f H_k + L_g H_k u = 0 has a solution for those x for which

R_k(x) L_f H_k(x) = 0.
Since the first equation above was satisfied in the previous step of the recursion, it
follows that the number of rows in question is Σ_{i=0}^{k} s_i. The selection matrix S_{k+1} picks the s_{k+1} linearly independent rows
of Φ_{k+1}.
Summary: Two kinds of regularity assumptions were made above:

• To guarantee that the M_k are manifolds, it was assumed that the rank of dH_k(x)
was constant on M_k ∩ U_k. Further, since the rank of dH_k is Σ_{i=0}^k s_i, it follows
that the dimension of M_k is n − Σ_{i=0}^k s_i.

• To guarantee that L_g H_k has constant rank r_k for x ∈ V_k, so that a smooth left
null space R_k(x) may be found, we assume that

r_{k*} = m.
Now, following the development in [149], once again we develop a "generalized
normal form" for nonlinear systems which satisfy the three assumptions necessary
for the implementation of the zero dynamics algorithm of the preceding section:

1. dH(x) has constant rank s_0 in a neighborhood of x_0, and for the choice of matrices
R_0, R_1, ..., R_{k*−1}, with the R_k chosen to be such that R_k L_g H_{k−1} = 0, the
differentials of the mappings H_k defined by
A nonlinear system of the form (11.51) that satisfies these conditions is said
to have x_0 as a regular point of the zero dynamics algorithm. The nature of the
regularity hypothesis varies somewhat in the literature; for example, [90] requires
that the matrices L_g H_k have rank r_k not just on M_k but in an open set around x_0.
This is referred to as a strong regularity condition for the zero dynamics algorithm.
It is useful to note that, as a consequence of the first of the three assumptions
spelt out above, there is no necessity for the selection matrices (see also Proposition
3.34). An immediate consequence of this is that the differentials of the Σ_{i=0}^{k*} s_i
entries of

H_{k*} = col(h(x), φ_0(x), ..., φ_{k*−1}(x))

are linearly independent. These will thus serve as a partial change of coordinates
for the generalized normal form. The rest of this section is devoted to a discussion
of the appearance of the system dynamics in the coordinates given by this transformation.
A three-input, three-output conceptual example worked through by Isidori
is particularly revealing of the structure of the normal form, and we give it here:
Example 11.28 Three-Input, Three-Output Example Due to Isidori [149].
Step 0: M_0 = {x : h_1 = h_2 = h_3 = 0} is assumed to be an (n − 3)-dimensional
manifold as a consequence of 0 being a regular value of the h_i.
Step 1: Further, we will assume that L_g h has rank 1 near x_0. Thus we may write

L_g h_2(x) = −γ(x) L_g h_1(x) + σ_2(x),

where σ_2(x) = 0 for x ∈ M_0. Note that strong regularity would guarantee that
σ_2 ≡ 0. The preceding helps determine specific values for R_0(x), φ_0(x) as

R_0(x) = [ γ(x)  1  0
            0    0  1 ].
Step 2: M_1 = {x ∈ M_0 : φ_2 = φ_3 = 0} is assumed to be an (n − 5)-dimensional
manifold. Now consider the matrix

L_g H_1 = [ L_g h
            L_g φ_0 ]

and assume it to have rank 2. Since the second row is dependent on the first row
for x ∈ M_0 and the third row is zero, we will assume that the first and the fourth
rows are independent. Then, the fifth row can be expressed in terms of the first and
fourth as
L_g φ_3(x) = −δ_1(x) L_g h_1(x) − δ_2(x) L_g φ_2(x) + σ_3(x),

with σ_3(x) = 0 for x ∈ M_1, so that

R_1(x) = [ R_0(x)        0  0
           δ_1(x)  0  0  δ_2(x)  1 ].
is assumed to have rank 3 at x_0, so that its first, fourth, and sixth rows are linearly
independent. Thus the algorithm terminates at this step, and the zero dynamics
manifold is obtained.

There are 6 scalar equations in the above equation, but the construction of M*
automatically fulfills the second, third, and fifth equations. Thus we may obtain u*
from the remaining equations to be
The matrix inverse above exists because of the hypothesis on the first, fourth,
and sixth rows of L_g H_2. By the regularity hypothesis the differentials of
h_1, h_2, h_3, φ_2, φ_3, ψ_3 are linearly independent at x_0 and thus qualify for a local,
partial change of coordinates. Complementary coordinates η ∈ R^{n−6} may
be chosen with the requirement that η(x_0) = 0. In these coordinates the system
ẏ_1 = L_f h_1 + L_g h_1 u,
ẏ_2 = L_f h_2 + L_g h_2 u = L_f h_2 − γ L_g h_1 u + σ_2 u
    = φ_2 − γ(L_f h_1 + L_g h_1 u) + σ_2 u = φ_2 − γ ẏ_1 + σ_2 u,
ẏ_3 = L_f h_3 = φ_3,
φ̇_2 = L_f φ_2 + L_g φ_2 u,
φ̇_3 = L_f φ_3 + L_g φ_3 u = L_f φ_3 − (δ_1 L_g h_1 + δ_2 L_g φ_2) u + σ_3 u
    = ψ_3 − δ_1 ẏ_1 − δ_2 φ̇_2 + σ_3 u,
ψ̇_3 = L_f ψ_3 + L_g ψ_3 u,
with the remaining variables satisfying

This is the generalized normal form of the three-input, three-output system under the
foregoing assumptions on its zero dynamics algorithm. Several interesting points
may be noted: If strong regularity rather than regularity holds, then σ_2, σ_3 ≡ 0,
and the form of the preceding equations differs from the normal form for a MIMO
system with vector relative degree only in the presence of the terms
γẏ_1 and δ_1ẏ_1, δ_2φ̇_2 in addition to the chain of integrators (one long for y_1, two
long for y_2, and three long for y_3). To derive the zero dynamics we set u = u* and
x ∈ M*. As a consequence we get y_1 = y_2 = y_3 = φ_2 = φ_3 = ψ_3 ≡ 0, and
the residual dynamics are the dynamics of the η coordinates parameterizing M*
locally.
ξ_0(t) = t,   ξ_i(t) = ∫_0^t u_i(τ) dτ,   ∫_0^t dξ_{i_k} ⋯ dξ_{i_0} = ∫_0^t dξ_{i_k}(τ) ∫_0^τ dξ_{i_{k−1}} ⋯ dξ_{i_0}.

The last entry in the formula above is an iterated integral defined for multi-indices
of the form i_k i_{k−1} ⋯ i_0. It is easy to see that if the inputs are bounded, |u_j(·)| ≤ M,
then

| ∫_0^t dξ_{i_k} ⋯ dξ_{i_0} | ≤ (M t)^{k+1} / (k+1)!,

so that, in the light of this bound, series of the form (11.55) are convergent when the coefficients
satisfy the following growth condition:

|c(i_k, ..., i_0)| ≤ M_1 M_2^{k+1}.   (11.56)
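The bound on the iterated integral is attained when every input sits at its bound M: the depth-(k+1) iterated integral then equals exactly (Mt)^{k+1}/(k+1)!, which a simple grid recursion reproduces (the grid size is an illustrative choice):

```python
# With u == M identically, each d(xi_i) = M ds, so the depth-(k+1)
# iterated integral is the recursion I_{j+1}(t) = int_0^t M * I_j(s) ds,
# giving (M t)^(k+1) / (k+1)!.
def iterated_integral(M, t, depth, n=20_000):
    dt = t / n
    I = [s * dt * M for s in range(n + 1)]   # depth 1: int_0^t M ds = M t
    for _ in range(depth - 1):
        J = [0.0]
        for i in range(n):
            # trapezoid rule for the next level of integration
            J.append(J[-1] + 0.5 * M * (I[i] + I[i + 1]) * dt)
        I = J
    return I[-1]

val = iterated_integral(2.0, 1.0, 4)
# val is approximately (2*1)**4 / 4! = 16/24
```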
For a given set of vector fields g_0 = f, g_1, ..., g_m, we will be interested in
functionals with c(i_k, ..., i_0) = L_{g_{i_0}} ⋯ L_{g_{i_k}} λ(x_0) for some smooth function λ :
R^n → R. The following theorem needs an interesting fact about series of the
form (11.55), whose proof we leave to the Exercises (Problem 11.24). This fact
establishes that series of this kind, the Chen–Fliess series, behave like formal power
series!
Proposition 11.29 Formal Power Series Property of Chen–Fliess Series. Let
V_1(t), ..., V_k(t) be series of the form of (11.55) corresponding to functions
λ_1, ..., λ_k, respectively. Then, for an analytic function μ : R^k → R, we have
that

(11.57)
1, ..., m. Then, for T sufficiently small, the j-th output y_j(t), t ∈ [0, T], can be
expressed as

(11.58)

(11.59)

where the function χ_j(·) simply extracts the j-th component of its argument. We
do this by invoking the existence and uniqueness theorem for differential equations,
after checking that it satisfies the correct initial condition (which is obvious) and
that its derivative satisfies the differential equation, which we do now. First notice
that
that
-d
dt
l'
0
d;od;iH '" d;io = l'0
d;i*_, ... d;io'
(11.60)
-d
dt
I'
0
d;id;i*_• . . . d;io = Ui (f) l'
0
d;iH . . . d;io'
= Σ_{i=1}^m ( L_{g_i} χ_j(x_0) ) u_i(t)
Using the fact that L_f χ_j = f_j(x) and the formal power series property of the
Chen–Fliess series, the first set of terms may be rewritten accordingly, where g_{ij}(x) stands for the j-th component of the vector field g_i. Completing
this calculation yields that x satisfies the differential equation in (11.51). Since it
11.8 Input-Output Expansions for Nonlinear Systems 563
satisfies the correct initial condition, it is the solution of the differential equation.
The assertion of the theorem, namely (11.58), now follows from the power series
property of Chen–Fliess series applied to (11.59). □
Remark: One of the major consequences of this theorem is that we can give
necessary and sufficient conditions for an output y_j to be unaffected by an input
u_i, namely,

L_{g_i} h_j(x) = 0,
L_{g_i} L_{γ_1} ⋯ L_{γ_k} h_j(x) = 0,   (11.61)

for all choices of vector fields γ_1, ..., γ_k in the set {f, g_1, ..., g_m}. The condition
(11.61) may be given a more geometric interpretation. To do this, define the
observability codistribution of the output y_j associated with the system (11.51) by

Ω_j(x) = span{ dλ : λ = L_{g_{i_0}} ⋯ L_{g_{i_k}} h_j,  0 ≤ i_l ≤ m,  0 ≤ k < ∞ }.   (11.62)
We write Ω_j(x) to emphasize the dependence of the codistribution on x. Then the
statement (11.61) is equivalent to

g_i ∈ Ω_j^⊥.   (11.63)

It follows that (11.61) implies that the smallest distribution Δ_i(x) containing g_i,
which is invariant under f, g_1, ..., g_m, is in (span{dh_j})^⊥:

L_{g_i} L_τ h_j = 0  ⟹  g_i ∈ (span{d L_τ h_j})^⊥.

Iterating on this argument we will arrive at

g_i ∈ Ω_j^⊥.
\[
\dot{x} = f(x) + g(x) u, \qquad y = h(x),
\]
with $f, g_1, g_2, \ldots, g_m$ all smooth vector fields and $h : \mathbb{R}^n \to \mathbb{R}^p$ a smooth map. We show that controlled invariant distributions can be used to solve problems such as disturbance decoupling and input-output decoupling of nonlinear systems.

Definition 11.32 Controlled Invariant Distributions. A (locally) controlled invariant distribution of (11.51) is a distribution $\Delta \subset T\mathbb{R}^n$ defined on a neighborhood $U$ of the origin such that there exist $\alpha(x) : \mathbb{R}^n \to \mathbb{R}^m$ and invertible $\beta(x) \in \mathbb{R}^{m \times m}$ such that $\Delta$ is invariant under the closed-loop state feedback vector fields $\tilde{f} = f + g\alpha$ and $\tilde{g}_i$, the columns of $g\beta$, $i = 1, \ldots, m$.
We can use this in the first equation, along with $[f, \tau] \in \Delta$, to establish that
\[
[f, \tau] \in \Delta + G.
\]
The proof of the sufficiency is more involved, and we refer the reader to Isidori [149], Chapter 6, for the details. □
Using the decoupling Theorem 11.31 of the preceding section, we see that the disturbance $w$ is decoupled from all the outputs if there exists a distribution $\Delta(x) \supset \operatorname{span}\{p(x)\}$ invariant under $f, g_1, g_2, \ldots, g_m$ such that
\[
\operatorname{span}\{p(x)\} \subset \Delta \subset (\operatorname{span}\{dh_1, \ldots, dh_p\})^{\perp}.
\tag{11.71}
\]
From the preceding Proposition 11.33 it follows that $\Delta$ is a controlled invariant distribution. Thus, the disturbance decoupling problem is solvable if there exists a controlled invariant distribution $\Delta$ in
\[
\ker(dh) = \bigcap_{i=1}^{p} \ker(dh_i) = (\operatorname{span}\{dh_1, \ldots, dh_p\})^{\perp}
\]
containing the disturbance vector field $p$. In fact, there is a conceptual algorithm, referred to as the controlled invariant distribution algorithm (strongly motivated by the controlled invariant subspace algorithm), to obtain the largest, or maximal, controlled invariant distribution $\Delta^*$ in the kernel of $dh$ as defined above. In this case the necessary and sufficient condition for disturbance decoupling is a small modification of (11.71):
\[
\operatorname{span}\{p(x)\} \subset \Delta^*.
\tag{11.72}
\]
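In the linear case, the controlled invariant subspace algorithm mentioned above is the explicit fixed-point iteration $V_0 = \ker C$, $V_{k+1} = \ker C \cap A^{-1}(V_k + \operatorname{Im} B)$. The NumPy sketch below implements it; the subspace helpers and the example matrices $A, B, C$ are our own illustrative choices, not from the text.

```python
import numpy as np

def orth(M, tol=1e-10):
    """Orthonormal basis for the column span of M."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def null(M, tol=1e-10):
    """Orthonormal basis for ker M."""
    _, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))
    return Vt[r:].T

def intersect(V, W):
    """Intersection of two subspaces given by orthonormal bases V, W."""
    n = V.shape[0]
    PVc = np.eye(n) - V @ V.T      # projector onto V-perp
    PWc = np.eye(n) - W @ W.T
    return null(np.vstack([PVc, PWc]))

def preimage(A, S):
    """Basis for A^{-1}(span S) = {x : A x in span S}."""
    P = np.eye(A.shape[0]) - S @ S.T
    return null(P @ A)

def max_controlled_invariant(A, B, C):
    """V_0 = ker C;  V_{k+1} = ker C  intersect  A^{-1}(V_k + Im B)."""
    V = null(C)
    while True:
        Vnew = intersect(null(C), preimage(A, orth(np.hstack([V, B]))))
        if Vnew.shape[1] == V.shape[1]:
            return Vnew
        V = Vnew

# Illustration: the disturbance direction p = e1 lies inside V*,
# so the disturbance can be decoupled from the output y = x3.
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 1.]])
B = np.array([[0.], [1.], [0.]])
C = np.array([[0., 0., 1.]])
Vstar = max_controlled_invariant(A, B, C)
p = np.array([[1.], [0.], [0.]])
in_Vstar = np.linalg.matrix_rank(np.hstack([Vstar, p])) == Vstar.shape[1]
assert Vstar.shape[1] == 2 and in_Vstar
```

Since the subspace sequence is nested and decreasing, the iteration stops as soon as the dimension no longer drops.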
We do not present the proof of the existence of this maximal distribution or show the connection between this distribution $\Delta^*$ and the necessary and sufficient conditions for disturbance decoupling derived in Chapter 9 (9.107) for systems that have vector relative degree, but refer the reader to [149] or [232] for more details on the nonlinear counterpart of the geometric control theory begun by Wonham [331]. This theory is very rich and powerful and may also be used to give necessary and sufficient conditions for decoupling of a MIMO system (generalizing the results of Chapter 9) using the concept of controllability distributions. However, from a computational point of view, the design procedures most amenable to computer-aided design are those discussed in Chapter 9.
11.10 Summary
In this chapter we have given the reader a very brief snapshot of some of the exciting new directions in nonlinear control, with the studies of constructive controllability and nonholonomy. There is a lot more on this topic, in addition to the dual "differential form" point of view, which we study in the next chapter. An excellent recent survey of new methods and directions is in the paper by Kolmanovsky and McClamroch [164]. Another body of interesting work in this area is the control of underactuated systems; see for example [272; 284]. Constructive controllability methods for systems with drift are not easy to come by. Some papers to study here include Bloch, Reyhanoglu, and McClamroch [31] and Kapitanovsky, Goldberg, and Mills [160]. An interesting approach using sampled data techniques for systems whose controllability distributions are nilpotent or nilpotentizable (see [137] and [161]) was proposed by Di Giamberardino, Monaco, and Normand-Cyrot in [111]. A survey of these methods as well as some
new methods, along with applications to some interesting systems including the "diver," is in a paper by Godhavn, Balluchi, Crawford, and Sastry [115]. See also [72]. The control of Hamiltonian systems with symmetries is an interesting direction of work begun by van der Schaft (see [232]) and continuing with [309; 74]. There has been recent activity on the use of geometric methods to study controllability of dynamic models of nonholonomic systems with symmetries; see for example the work of Montgomery on falling cats, Yang-Mills fields, and other exotic applications [217], Bloch and Crouch [27], Sarychev and Nijmeijer [258], and Bloch et al. [29]. See also the forthcoming monograph of Montgomery on nonholonomy in mechanics and optimal control. For application to problems of gait control in locomotion see for example Chirikjian, Ostrowski, and Burdick [63; 237].
We gave the reader a brief flavor of nonlinear differential geometric approaches to the solution of problems that were proposed in a linear context by Wonham [331]. In a historical sense, much of the recent interest in these methods may be traced to a paper by Isidori, Krener, et al. [150], which paralleled the linear development for the nonlinear case. Constructive algorithms for mechanizing these calculations remain an open area, and the zero dynamics algorithm is indispensable in this regard. The reader is invited to study [149], [232] for more details on this topic.
11.11 Exercises
Problem 11.1. Prove the formula for the maximum growth of the distributions of
a regular filtration , namely (11.7).
Problem 11.2 Car with N trailers [174]. The figure below shows a car with N trailers attached. We attach the hitch of each trailer to the center of the rear axle of the previous trailer. The wheels of the individual trailers are aligned with the body of the trailer. The constraints are again based on allowing the wheels only to roll and spin, but not slip. The dimension of the state space is $N + 4$, with two controls. Parameterize the configuration by the states of the automobile plus the angle of each of the trailers with respect to the horizontal. Show that the control equations are
\[
\begin{aligned}
\dot{x} &= \cos\theta_0\, u_1, \\
\dot{y} &= \sin\theta_0\, u_1, \\
\dot{\phi} &= u_2, \\
\dot{\theta}_0 &= \frac{1}{l} \tan\phi\, u_1, \\
\dot{\theta}_i &= \frac{1}{d_i} \left( \prod_{j=1}^{i-1} \cos(\theta_{j-1} - \theta_j) \right) \sin(\theta_{i-1} - \theta_i)\, u_1, \qquad i = 1, \ldots, N.
\end{aligned}
\]
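A direct transcription of these kinematic equations can serve as a simulation right-hand side. In the sketch below the state ordering, the wheelbase `L`, and the hitch lengths `d` are our own conventions, not from the text.

```python
import numpy as np

def n_trailer_rhs(state, u1, u2, L=1.0, d=None):
    """Right-hand side of the car-with-N-trailers kinematics.

    state = [x, y, phi, theta_0, theta_1, ..., theta_N]; u1 is the
    driving input and u2 the steering input.  L and d[i] (the hitch
    lengths) are illustrative defaults.
    """
    x, y, phi, th = state[0], state[1], state[2], state[3:]
    N = len(th) - 1
    if d is None:
        d = np.ones(N + 1)
    ds = np.empty_like(state)
    ds[0] = np.cos(th[0]) * u1
    ds[1] = np.sin(th[0]) * u1
    ds[2] = u2
    ds[3] = np.tan(phi) / L * u1
    for i in range(1, N + 1):
        # Product of cosines over the intermediate hitch angles.
        pre = np.prod([np.cos(th[j - 1] - th[j]) for j in range(1, i)])
        ds[3 + i] = (1.0 / d[i]) * pre * np.sin(th[i - 1] - th[i]) * u1
    return ds

# Sanity check: with all angles aligned, driving straight rotates nothing.
s0 = np.zeros(3 + 3)                 # N = 2 trailers
ds = n_trailer_rhs(s0, u1=1.0, u2=0.0)
assert np.isclose(ds[0], 1.0) and np.allclose(ds[3:], 0.0)
```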
Problem 11.3. Use induction and Jacobi's identity to prove that
\[
[\Delta_i, \Delta_j] \subset [\Delta_1, \Delta_{i+j-1}] \subset \Delta_{i+j},
\]
where $\Delta = \Delta_1 \subset \Delta_2 \subset \cdots$ is a filtration associated with a distribution $\Delta$.
\[
\begin{aligned}
\dot{q}_1 &= u_1, \\
\dot{q}_2 &= u_2, \\
\dot{q}_{12} &= q_1 u_2, \\
\dot{q}_{122} &= q_{12} u_2
\end{aligned}
\]
solves the minimum time steering problem to steer the system from $q(0) = q_0$ to $q(T) = q_f$ subject to the constraint that $|u(t)|^2 \le 1$ for all $t$.
Problem 11.8 Heisenberg Control System [225]. Consider the control system
\[
\dot{q} = u, \qquad \dot{Y} = q u^T - u q^T,
\]
with $q \in \mathbb{R}^m$, $Y \in so(m)$, and the least squares cost
\[
\frac{1}{2} \int_0^1 u^T u\, dt.
\]
2. For the boundary conditions $q(0) = q(1) = 0$, $Y(0) = 0$, and $Y(1) = \bar{Y}$ for some $\bar{Y} \in so(3) \cong \mathbb{R}^3$, solve the Euler-Lagrange equations to obtain the optimal inputs $u$.
3. Find the input $u$ to steer the system from $(0, 0)$ to $(0, \bar{Y}) \in \mathbb{R}^m \times so(m)$.
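A loop input gives a concrete feel for part 3: with $m = 2$ and $u(t) = (\cos 2\pi t, \sin 2\pi t)$, $q$ traverses a closed loop and returns to the origin, while $Y$ picks up a net displacement, the hallmark of nonholonomy. The particular input and the integration scheme below are our own illustration, not the optimal input asked for.

```python
import numpy as np

# Integrate  q' = u,  Y' = q u^T - u q^T  over one period of a loop input.
t = np.linspace(0.0, 1.0, 100001)
u = np.stack([np.cos(2*np.pi*t), np.sin(2*np.pi*t)])   # shape (2, len(t))

def cumint(f, t):
    """Cumulative trapezoidal integral along the last axis."""
    out = np.zeros_like(f)
    out[..., 1:] = np.cumsum(0.5*(f[..., 1:]+f[..., :-1])*np.diff(t), axis=-1)
    return out

q = cumint(u, t)                        # q(0) = 0 and q(1) = 0 again
y12 = cumint(q[0]*u[1] - q[1]*u[0], t)  # the (1,2) entry of the skew Y

assert np.allclose(q[:, -1], 0.0, atol=1e-8)       # closed loop in q
assert abs(y12[-1] - 1.0/(2*np.pi)) < 1e-6         # net drift in Y
```

The net drift $Y_{12}(1) = 1/(2\pi)$ is exactly the (signed) area enclosed by the loop traced out by $q$, which is why such systems are steered with periodic inputs.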
Problem 11.9. Extend the method used to find the optimal inputs for Engel's
system to find optimal inputs for the system of Problem 11.8.
Problem 11.10 Optimal controls for systems with drift [264]. Consider the
least squares optimal input steering problem for a system with drift:
\[
\dot{q} = f(q) + \sum_{i=1}^{m} g_i(q) u_i.
\]
1. Find the expression for the optimal Hamiltonian and prove that the optimal inputs satisfy the differential equation
\[
\dot{u} = Q(q, p)\, u + \begin{bmatrix} p^T [f, g_1] \\ \vdots \\ p^T [f, g_m] \end{bmatrix}.
\tag{11.73}
\]
Problem 11.11 Conversion to chained form [225], [205]. Show that the
following system
\[
\begin{aligned}
\dot{q}_1 &= u_1, \\
\dot{q}_2 &= u_2, \\
\dot{q}_3 &= q_1 u_2, \\
\dot{q}_4 &= \tfrac{1}{2} q_1^2 u_2, \\
\dot{q}_5 &= q_1 q_2 u_2, \\
\dot{q}_6 &= \tfrac{1}{6} q_1^3 u_2, \\
\dot{q}_7 &= \tfrac{1}{2} q_1^2 q_2 u_2, \\
\dot{q}_8 &= \tfrac{1}{2} q_1 q_2^2 u_2,
\end{aligned}
\]
is controllable and nilpotent of degree four. Can you find a nonlinear change of
coordinates to transform this system into a one-chained form?
Problem 11.12. Show that the following system is controllable and nilpotent:
\[
\begin{aligned}
\dot{q}_1 &= u_1, \\
\dot{q}_2 &= u_2, \\
\dot{q}_3 &= q_1 u_2 - q_2 u_1, \\
\dot{q}_4 &= q_1^2 u_2, \\
\dot{q}_5 &= q_2^2 u_1.
\end{aligned}
\]
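Both claims can be checked mechanically with a computer algebra system. A SymPy sketch (the `bracket` helper and the verification strategy are ours): all brackets of length four vanish, and the iterated brackets span $\mathbb{R}^5$ at the origin, which is the Lie algebra rank condition.

```python
import sympy as sp

q1, q2, q3, q4, q5 = sp.symbols('q1 q2 q3 q4 q5')
q = sp.Matrix([q1, q2, q3, q4, q5])

# Input vector fields read off from the equations above.
g1 = sp.Matrix([1, 0, -q2, 0, q2**2])
g2 = sp.Matrix([0, 1,  q1, q1**2, 0])

def bracket(f, g):
    """Lie bracket [f, g] = (Dg) f - (Df) g."""
    return g.jacobian(q) * f - f.jacobian(q) * g

b12  = bracket(g1, b12 := bracket(g1, g2)) if False else bracket(g1, g2)
b112 = bracket(g1, b12)     # length-3 brackets
b212 = bracket(g2, b12)

# Nilpotency: every length-4 bracket vanishes.
for h in (b112, b212):
    assert sp.simplify(bracket(g1, h)) == sp.zeros(5, 1)
    assert sp.simplify(bracket(g2, h)) == sp.zeros(5, 1)

# Controllability: the brackets span R^5 at the origin.
M = sp.Matrix.hstack(g1, g2, b12, b112, b212).subs({q1: 0, q2: 0})
assert M.rank() == 5
```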
Problem 11.13 Chen-Fliess expansion for chained form systems [225]. Consider the one-chain system in equation (11.14). For the Philip Hall basis $g_1$, $g_2$, $\operatorname{ad}_{g_1}^k g_2$, $k = 1, \ldots, n-1$, derive the Chen-Fliess-Sussmann equations.
\[
\begin{aligned}
\dot{q}_1 &= u_1, \\
\dot{q}_2 &= u_2, \\
\dot{q}_3 &= q_1 u_2 - q_2 u_1, \\
\dot{q}_4 &= q_1^2 u_2,
\end{aligned}
\]
compute the polynomial equation for the amplitude parameters in terms of the initial and final states.
Problem 11.15. Prove that if $V \subset \mathbb{R}^n$ is a subspace, then we can regard $V$ as a constant distribution, and $AV \subset V$ is equivalent to $[Ax, V] \subset V$.
Problem 11.16. If $V$ is an $A$-invariant subspace, prove that $A$ can be represented in suitable coordinates as
\[
\begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}.
\]
Problem 11.17. Show that if a distribution $\Delta$ is invariant under vector fields $f_1, f_2$, it is also invariant under $[f_1, f_2]$. Hint: Use the Jacobi identity.
Problem 11.18. Prove the following facts about the properties of the Lie derivative of a covector field:
1. If $f$ is a vector field and $\lambda$ a smooth function, then
\[
L_f d\lambda(x) = d L_f \lambda(x).
\]
2. If $\alpha, \beta$ are smooth functions, $f$ a vector field, and $\omega$ a covector field, then
\[
L_{\alpha f}(\beta \omega)(x) = \alpha(x)\beta(x)\, L_f \omega(x) + \beta(x)\, \omega(x) f(x)\, d\alpha(x) + \alpha(x) \bigl(L_f \beta(x)\bigr) \omega(x).
\]
3. If $f, g$ are vector fields and $\omega$ a covector field, it follows that
\[
L_f \bigl(\omega(x) g(x)\bigr) = \bigl(L_f \omega(x)\bigr) g(x) + \omega(x) [f, g](x).
\]
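These identities are straightforward to confirm symbolically. A SymPy sketch of fact 3, using the coordinate formula $(L_f \omega)_j = \sum_i (\partial \omega_j / \partial x_i) f_i + \sum_i \omega_i\, \partial f_i / \partial x_j$ for the Lie derivative of a covector field; the particular $f$, $g$, $\omega$ are arbitrary polynomial test data of ours.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# Arbitrary test data (our choice).
f = sp.Matrix([x2, x1**2])
g = sp.Matrix([1 + x2, x1])
w = sp.Matrix([x1*x2, x2])       # coefficients of the covector field

def lie_cov(f, w):
    """(L_f w)_j = sum_i dw_j/dx_i f_i + sum_i w_i df_i/dx_j."""
    return w.jacobian(x) * f + f.jacobian(x).T * w

def bracket(f, g):
    """Lie bracket [f, g] = (Dg) f - (Df) g."""
    return g.jacobian(x) * f - f.jacobian(x) * g

# Fact 3:  L_f <w, g>  =  <L_f w, g> + <w, [f, g]>.
lhs = (sp.Matrix([w.dot(g)]).jacobian(x) * f)[0]
rhs = lie_cov(f, w).dot(g) + w.dot(bracket(f, g))
assert sp.simplify(lhs - rhs) == 0
```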
Problem 11.19. For the minimal linear system
\[
\dot{x} = Ax + Bu, \qquad y = Cx,
\]
specialize the definition of a controlled invariant manifold (subspace) and an output zeroing manifold. Assuming that $B \in \mathbb{R}^{n \times m}$ is of full rank, characterize the zero dynamics. Use Problem 11.16 to represent $A, B, C$ in coordinates adapted to the subspace $T M^* = M^*$. Convince yourself that the eigenvalues of the (linear) zero dynamics are indeed what you mean by the transmission zeros of the linear system.
Problem 11.20. Consider a MIMO system with vector relative degree $(\gamma_1, \ldots, \gamma_m)$. Assume that $\gamma_1 \le \gamma_2 \le \cdots \le \gamma_m$. We now have two ways of deriving the zero dynamics of this system: the procedure given in Chapter 9 and the zero dynamics algorithm of this chapter. Show that they give the same answer. You may impose the regularity conditions and other assumptions as needed.
Problem 11.21 Generalized normal form [149]. Derive the generalized normal form following the steps of the zero dynamics algorithm and Example 11.28 for an arbitrary nonlinear control system (provided that the regularity conditions for the zero dynamics algorithm are met).
Problem 11.22 Exact model matching [89; 88]. An extremely important control problem is to match the plant output to the output of a reference model. This problem examines a method proposed by Di Benedetto to cast this problem into the framework of the zero dynamics algorithm. To this end, consider a reference model with $m$ inputs, $m$ outputs, and $n_M$ states described by
\[
\dot{z} = f_M(z) + g_M(z) v, \qquad y_M = h_M(z).
\]
Further, define the extended system $\Sigma^E$, consisting of the plant and the model, given by
Further define
\[
g^E(x^E) = \bigl[\, g(x^E) \quad p(x^E) \,\bigr].
\]
Also, define the dynamical system with state $x^E$, input $u$, and output $y^E$ described by the triple $(f^E, g^E, h^E)$ to be $\tilde{\Sigma}$. Now consider a point $x_0^E = (x_0, z_0)$ that is an equilibrium point of $f^E$ and also produces zero output for the system $\tilde{\Sigma}$, i.e.,
\[
f^E(x_0^E) = 0, \qquad h^E(x_0^E) = 0.
\]
Assume that $x_0^E$ is a regular point for the zero dynamics algorithm applied to $\tilde{\Sigma}$. Let $M_k$ denote the manifold defined at step $k$ of the zero dynamics algorithm and $M^*$ the zero dynamics manifold obtained at the conclusion of the algorithm. Show that there exists a choice of control law $u(x^E, v)$ and initial conditions $x^E$ for which the output of the extended system is identically zero if
\[
\operatorname{span}\{p(x^E)\} \subset T_{x^E} M_k + \operatorname{span}\{g(x^E)\}
\]
in a neighborhood of $x_0^E$ in $M_k$, for all $k$.
Problem 11.23 Adaptive nonlinear control [259; 266]. Consider the SISO nonlinear system

The functions $f_0$, $g_0$, $f_i(x)$ are considered known, but the parameters $\theta^* \in \mathbb{R}^p$ are fixed and unknown. These are referred to as the true parameters. We will explore a control law for making $y$ track a given bounded trajectory $y_d(t)$. Assume that $L_{g_0} h(x)$ is bounded away from zero and that the control law is the following modification of the standard relative degree one asymptotic tracking control law
(11.76)
Here $\theta_i(t)$ is our estimate of $\theta_i^*$ at time $t$. Prove (using the simplest quadratic Lyapunov function of $y - y_d$, $\theta - \theta^*$ that you can think of) that under the parameter update law
\[
\dot{\theta} = - \begin{bmatrix} L_{f_1} h \\ \vdots \\ L_{f_p} h \end{bmatrix} (y - y_d),
\tag{11.77}
\]
for all initial conditions $\theta(0)$ and $x(0)$, $y(t) - y_d(t) \to 0$ as $t \to \infty$. Note that you are not required to prove anything about $\theta(t)$!
Sketch the proof of the fact that if the true system of (11.75) is exponentially minimum phase, then $x(\cdot)$ is bounded as well. Can you modify the control law (11.76) and the parameter update law (11.77) for the case that the right-hand side of the system (11.75) has terms of the form
\[
\sum_{i=1}^{p} \bigl( f_i(x) + g_i(x) u \bigr) \theta_i^*
\]
with $f_i$, $g_i$ known and $\theta^* \in \mathbb{R}^p$ unknown? You can assume here that
\[
L_{g_0} h + \sum_{i=1}^{p} \theta_i(t) L_{g_i} h
\]
is invertible.
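Since the system (11.75) is not reproduced here, the sketch below only illustrates the structure of the certainty-equivalence law and the gradient update on the simplest instance we could pick: a scalar relative-degree-one plant $\dot{y} = \theta^* f(y) + u$ with one unknown parameter. Everything in the snippet, including $f$, $y_d$, and the gain $\lambda$, is our own illustrative choice, not the problem's plant.

```python
import numpy as np

# Plant  y' = th_star * f(y) + u,  tracking  y_d(t) = sin t,  with a
# certainty-equivalence control and a gradient parameter update in the
# spirit of (11.76)-(11.77).  All specific choices here are ours.
f = np.sin
th_star, lam, dt, T = 2.0, 2.0, 1e-3, 30.0
n = int(T / dt)

y, th_hat = 1.0, 0.0
V = []                                    # Lyapunov function samples
for k in range(n):
    t = k * dt
    yd, yd_dot = np.sin(t), np.cos(t)
    e = y - yd
    u = -th_hat * f(y) + yd_dot - lam * e     # tracking law
    th_hat += dt * e * f(y)                   # gradient update
    y += dt * (th_star * f(y) + u)            # Euler step of the plant
    V.append(0.5 * e**2 + 0.5 * (th_hat - th_star)**2)

assert V[-1] < V[0]     # V = e^2/2 + (theta-error)^2/2 decreases
```

With $V = \frac{1}{2}e^2 + \frac{1}{2}\tilde{\theta}^2$ one gets $\dot{V} = -\lambda e^2 \le 0$ along trajectories, which is exactly the Lyapunov argument the problem asks for; note that nothing forces $\tilde{\theta} \to 0$.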
Problem 11.24 Formal power series property of Chen-Fliess series. Prove Proposition 11.29. You may wish to start not with arbitrary analytic functions $\mu(v_1, \ldots, v_k)$, but instead with functions of the form $v_1 v_2$, $v_1 v_3$, etc., to arrive at the proof.
12
Exterior Differential Systems in Control
This chapter has been extensively based on the research work of (and papers written with) Richard Murray, Dawn Tilbury, and Linda Bushnell. The text of the chapter follows notes written with George Pappas, John Lygeros, and Dawn Tilbury [241].
12.1 Introduction
The vast majority of the mathematically oriented literature in the areas of robotics and control has been heavily influenced by a differential geometric "vector field" point of view, which we have seen in the earlier chapters. In recent years, however, a small but influential trend has begun in the literature on the use of other methods, such as exterior differential systems [119; 108], for the analysis of nonlinear control systems and nonlinear implicit systems. In this chapter we present some key results from the theory of exterior differential systems and their application to current and challenging problems in robotics and control. The area of exterior differential systems has a long history. The early theory in this area sprang from the work of Darboux, Lie, Engel, Pfaff, Carnot, and Caratheodory on the structure of systems with non-integrable linear constraints on the velocities of their configuration variables, the so-called nonholonomic control systems (for a good development of this see [49]). This was followed by the work of Goursat and Cartan, which is considered to contain some of the finest achievements of the middle part of this century on exterior differential systems. In parallel there has been an effort to develop connections between exterior differential systems and the calculus of variations (see [119]).
12.2 Introduction to Exterior Differential Systems 575
Our attention was first attracted to exterior differential systems through their applications in path planning for nonholonomic control systems, where initial results were for the problem of steering a car with trailers [302], [223], the so-called "parallel parking a car with N trailers" problem. This involved the transformation of the system of nonholonomic rolling-without-slipping constraints on each pair of wheels into a canonical form, the so-called Goursat normal form. This program continued with another example, the parallel parking of a fire truck [53], which in turn was generalized to a multi-steering N trailer system. In [304] we showed how the multi-steering N trailer system could be converted into a generalized Goursat normal form, which was easy to steer. The full analysis of the system from the exterior differential systems point of view was made in [303]. The vector field point of view of these problems was studied in Chapter 11.
In parallel with this activity in nonholonomic motion planning, there has been
considerable activity in the nonlinear control community on the problem of exactly
linearizing a nonlinear control system using (possibly dynamic) state feedback
and change of coordinates. The first results in this direction were necessary and
sufficient conditions for exact linearization of a nonlinear control system using
static state feedback, which was presented in Chapter 9. It was shown there that
a system that satisfies these conditions can be transformed into a special canon-
ical form, the so called Brunovsky normal form. As pointed out by Gardner and
Shadwick in [108], this normal form is very close to the Goursat normal form for
exterior differential systems. The problem of dynamic state feedback lineariza-
tion, on the other hand, remained largely open, despite some early results by [59;
2761.
This chapter is divided into three parts. Section 12.2 contains the necessary
mathematical background on exterior differential systems. Section 12.3 describes
some of the important normal forms for exterior differential systems: the Engel,
Pfaff, Caratheodory, Goursat, and extended Goursat normal forms. It is shown how
certain important robotic systems can be converted to these normal forms. Section
12.4 discusses some of the connections between the exterior differential systems
formalism, specialized to the case of control systems, and the vector field approach
currently popular in nonlinear control.
Marsden, and Ratiu [2], and Spivak [282] on whose presentations the present
development is based.
\[
\phi_i(e_j) = \begin{cases} 0 & \text{if } i \neq j, \\ 1 & \text{if } i = j, \end{cases}
\]
form a basis of $V^*$ called the dual basis.
Let $V = \mathbb{R}^n$ with the standard basis $e_1, \ldots, e_n$, and let $\phi_1, \ldots, \phi_n$ be the dual basis. If
\[
x \in \mathbb{R}^n, \qquad x = \sum_{j=1}^{n} x_j e_j,
\]
then evaluating each function in the dual basis at $x$ gives $\phi_i(x) = x_i$.
\[
W^{\perp} := \{ \alpha \in V^* \mid \alpha(v) = 0 \ \forall\, v \in W \}
\]
A linear mapping $F : V_1 \to V_2$ between any two vector spaces induces a linear mapping between their dual spaces.

Definition 12.3 Dual Map. Given a linear mapping $F : V_1 \to V_2$, its dual map is the linear mapping $F^* : V_2^* \to V_1^*$ defined by
\[
(F^* \alpha)(v) = \alpha(F(v)), \qquad \alpha \in V_2^*, \ v \in V_1.
\]
Tensors
Let $V_1, \ldots, V_k$ be a collection of real vector spaces. A function $f : V_1 \times \cdots \times V_k \to \mathbb{R}$ is said to be linear in the $i$th variable if the function $T : V_i \to \mathbb{R}$ defined for fixed $v_j$, $j \neq i$, as $T(v_i) = f(v_1, \ldots, v_{i-1}, v_i, v_{i+1}, \ldots, v_k)$ is linear. The function $f$ is called multilinear if it is linear in each variable, with the other variables held fixed. A multilinear function $T : V^k \to \mathbb{R}$ is called a covariant tensor of order $k$, or simply a $k$-tensor. The set of all $k$-tensors on $V$ is denoted by $\mathcal{L}^k(V)$. Note that $\mathcal{L}^1(V) = V^*$, the dual space of $V$. Therefore, we can think of covariant tensors as generalized covectors. The inner product of two vectors is an example of a 2-tensor. Another important example of a multilinear function is the determinant: if $x_1, x_2, \ldots, x_n$ are $n$ column vectors in $\mathbb{R}^n$, then
\[
f(x_1, x_2, \ldots, x_n) = \det [\, x_1 \ x_2 \ \cdots \ x_n \,].
\]
then, the set of all $k$-tensors on $V$, $\mathcal{L}^k(V)$, is a real vector space. Because of their multilinear structure, two tensors are equal if they agree on any set of basis elements. Indeed, let $a_1, \ldots, a_n$ be a basis for $V$. Let $f, g : V^k \to \mathbb{R}$ be $k$-tensors on $V$. If $f(a_{i_1}, \ldots, a_{i_k}) = g(a_{i_1}, \ldots, a_{i_k})$ for every $k$-tuple $I = (i_1, \ldots, i_k) \in \{1, 2, \ldots, n\}^k$, then $f = g$. This allows us to construct a basis for the space $\mathcal{L}^k(V)$ as follows:
\[
\phi^I(a_{j_1}, \ldots, a_{j_k}) = \begin{cases} 0 & \text{if } I \neq J, \\ 1 & \text{if } I = J. \end{cases}
\]
The collection of all such $\phi^I$ forms a basis for $\mathcal{L}^k(V)$.

Proof: Uniqueness follows from the basis construction which we have just described. To construct the functions $\phi^I$, we start with a basis for $V^*$, $\phi^i : V \to \mathbb{R}$, defined by $\phi^i(a_j) = \delta_{ij}$. We then define each $\phi^I$ by
\[
\phi^I(v_1, \ldots, v_k) = \phi^{i_1}(v_1) \cdot \phi^{i_2}(v_2) \cdots \phi^{i_k}(v_k)
\tag{12.1}
\]
and claim that these $\phi^I$ form a basis for $\mathcal{L}^k(V)$. To show this, we select an arbitrary $k$-tensor $f \in \mathcal{L}^k(V)$ and define the scalars $a_J := f(a_{j_1}, \ldots, a_{j_k})$. Next, we define a $k$-tensor
\[
g = \sum_J a_J \phi^J.
\tag{12.2}
\]
Since there are $n^k$ distinct $k$-tuples from the set $\{1, \ldots, n\}$, the space $\mathcal{L}^k(V)$ has dimension $n^k$.
Tensor Products
Definition 12.5 Tensor Products. Let $f \in \mathcal{L}^k(V)$ and $g \in \mathcal{L}^{\ell}(V)$. The tensor product $f \otimes g$ of $f$ and $g$ is a tensor in $\mathcal{L}^{k+\ell}(V)$ defined by
\[
(f \otimes g)(v_1, \ldots, v_{k+\ell}) := f(v_1, \ldots, v_k) \cdot g(v_{k+1}, \ldots, v_{k+\ell}).
\]
The following theorem is elementary:

Theorem 12.6 Properties of Tensor Products. Let $f, g, h$ be tensors on $V$ and $c \in \mathbb{R}$. The following properties hold:
1. Associativity: $f \otimes (g \otimes h) = (f \otimes g) \otimes h$.
2. Homogeneity: $cf \otimes g = c(f \otimes g) = f \otimes cg$.
3. Distributivity: $(f + g) \otimes h = f \otimes h + g \otimes h$.
4. Given a basis $a_1, \ldots, a_n$ for $V$, the corresponding basis tensors satisfy $\phi^I = \phi^{i_1} \otimes \phi^{i_2} \otimes \cdots \otimes \phi^{i_k}$.
Alternating Tensors
Before introducing alternating tensors, we present some facts about permutations.

Definition 12.8 Even, Odd Permutations. Let $\sigma \in S_k$. Consider the set of all pairs of integers $i, j$ from the set $\{1, \ldots, k\}$ for which $i < j$ and $\sigma(i) > \sigma(j)$. Each such pair is called an inversion in $\sigma$. The sign of $\sigma$ is defined to be $-1$ if the number of inversions is odd and $+1$ if it is even; we call $\sigma$ an odd or even permutation, respectively. The sign of $\sigma$ is denoted by $\operatorname{sgn}(\sigma)$.

Here is how we calculate the sign of arbitrary permutations. Let $\sigma, \tau \in S_k$. Then:
1. If $\sigma$ is the composition of $m$ elementary permutations, then $\operatorname{sgn}(\sigma) = (-1)^m$.
2. $\operatorname{sgn}(\sigma \circ \tau) = \operatorname{sgn}(\sigma) \cdot \operatorname{sgn}(\tau)$.
3. $\operatorname{sgn}(\sigma^{-1}) = \operatorname{sgn}(\sigma)$.
4. If $p \neq q$, and if $\tau$ is the permutation that exchanges $p$ and $q$ and leaves all other integers fixed, then $\operatorname{sgn}(\tau) = -1$.
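These facts are easy to exercise in code. The sketch below implements the inversion-count definition of the sign and checks facts 2 and 3 exhaustively over $S_4$:

```python
from itertools import permutations

def sgn(p):
    """Sign of a permutation p of {0,...,k-1} via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(s, t):
    """(s o t)(i) = s(t(i))."""
    return tuple(s[t[i]] for i in range(len(t)))

def inverse(s):
    out = [0] * len(s)
    for i, si in enumerate(s):
        out[si] = i
    return tuple(out)

S4 = list(permutations(range(4)))
assert all(sgn(compose(s, t)) == sgn(s) * sgn(t) for s in S4 for t in S4)
assert all(sgn(inverse(s)) == sgn(s) for s in S4)
assert sgn((0, 3, 2, 1)) == -1   # a single transposition is odd
```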
tensor is vacuously alternating, i.e., $\Lambda^1(V^*) = \mathcal{L}^1(V) = V^*$. For completeness, we define $\Lambda^0(V^*) = \mathbb{R}$.

Elementary tensors are not alternating, but the linear combination
\[
f = \phi^i \otimes \phi^j - \phi^j \otimes \phi^i
\]
is alternating. To see this, let $V = \mathbb{R}^n$ and let $\phi^i$ be the usual dual basis. Then it is easily seen that $f(x, y) = -f(y, x)$. Similarly, the function
\[
g(x, y, z) = \det \begin{bmatrix} x_i & y_i & z_i \\ x_j & y_j & z_j \\ x_k & y_k & z_k \end{bmatrix}
\]
is an alternating 3-tensor.
In order to obtain a basis for the linear space $\Lambda^k(V^*)$, we start with the following lemma:

Lemma 12.10 Properties of Alternating Tensors. Let $f$ be a $k$-tensor on $V$ and $\sigma, \tau \in S_k$ be permutations. Then:
1. The transformation $f \to f^\sigma$ is a linear transformation from $\mathcal{L}^k(V)$ to $\mathcal{L}^k(V)$. It has the property that for all $\sigma, \tau \in S_k$,
\[
(f^\sigma)^\tau = f^{\sigma \circ \tau}.
\]
2. The tensor $f$ is alternating if and only if for all $\sigma \in S_k$,
\[
f^\sigma = \operatorname{sgn}(\sigma) \cdot f.
\]
Now suppose $v_p = v_q$ and $p \neq q$. Let $\tau$ be the permutation that exchanges $p$ and $q$. Since $v_p = v_q$, $f^\tau(v_1, \ldots, v_k) = f(v_1, \ldots, v_k)$. Since $f$ is an alternating tensor and $\operatorname{sgn}(\tau) = -1$, $f^\tau(v_1, \ldots, v_k) = -f(v_1, \ldots, v_k)$. Therefore, $f(v_1, \ldots, v_k) = 0$. □
Lemma 12.10 implies that if $k > n$, the space $\Lambda^k(V^*)$ is trivial, since one of the basis elements must appear in the $k$-tuple more than once. Hence for $k > n$, $\Lambda^k(V^*) = 0$. We have also seen that for $k = 1$ we have $\Lambda^1(V^*) = \mathcal{L}^1(V) = V^*$, and therefore one can use the dual basis as a basis for $\Lambda^1(V^*)$. In order to specify an alternating tensor for $1 < k \le n$ we simply need to define it on an ascending $k$-tuple of basis elements since, from Lemma 12.10, every other combination can be obtained by permuting the $k$-tuple. Thus, we have that if $a_1, a_2, \ldots, a_n$ is a basis for $V$ and $f, g$ are alternating $k$-tensors on $V$ with
\[
\psi^I(a_{j_1}, \ldots, a_{j_k}) = \begin{cases} 0 & \text{if } J \neq I, \\ 1 & \text{if } J = I. \end{cases}
\]
The tensors $\psi^I$ form a basis for $\Lambda^k(V^*)$ and satisfy the formula
\[
\psi^I = \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, (\phi^I)^\sigma,
\]
and
\[
\dim(\Lambda^k(V^*)) = \binom{n}{k} = \frac{n!}{k!\,(n-k)!}.
\]
Theorem 12.12. For any tensor $f \in \mathcal{L}^k(V)$, define $\operatorname{Alt} : \mathcal{L}^k(V) \to \Lambda^k(V^*)$ by
\[
\operatorname{Alt}(f) = \frac{1}{k!} \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, f^\sigma.
\tag{12.3}
\]
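The operator (12.3) can be implemented directly on tensors stored as $k$-dimensional arrays, with $f^\sigma$ realized as an axis permutation. The random test tensor below is our own input:

```python
import numpy as np
from itertools import permutations
from math import factorial

def sgn(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def alt(T):
    """Alt(f) = (1/k!) sum_sigma sgn(sigma) f^sigma  for a k-tensor T."""
    k = T.ndim
    out = np.zeros_like(T, dtype=float)
    for p in permutations(range(k)):
        out += sgn(p) * np.transpose(T, p)   # f^sigma as axis permutation
    return out / factorial(k)

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))              # a random 2-tensor on R^4

A = alt(T)
assert np.allclose(A, 0.5 * (T - T.T))       # Alt on 2-tensors
assert np.allclose(A, -A.T)                  # the result is alternating
assert np.allclose(alt(A), A)                # Alt fixes alternating tensors
```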
An elegant basis for $\Lambda^k(V^*)$ can be formed using that for the dual basis of $V$. Thus, given a basis $a_1, \ldots, a_n$ for a vector space $V$, let $\phi^1, \ldots, \phi^n$ denote its dual basis and $\psi^I$ the corresponding elementary alternating tensors. If $I = (i_1, \ldots, i_k)$ is any ascending $k$-tuple of integers, then
\[
\psi^I = \phi^{i_1} \wedge \phi^{i_2} \wedge \cdots \wedge \phi^{i_k}.
\tag{12.4}
\]
Using the basis in (12.4), any alternating $k$-tensor $f \in \Lambda^k(V^*)$ may be expressed in terms of the dual basis $\phi^1, \ldots, \phi^n$ as
\[
f = \sum_J d_{j_1, \ldots, j_k}\, \phi^{j_1} \wedge \phi^{j_2} \wedge \cdots \wedge \phi^{j_k}
\]
for all ascending $k$-tuples $J = (j_1, \ldots, j_k)$ and some scalars $d_{j_1, \ldots, j_k}$. If we require the coefficients to be skew-symmetric, i.e.,
\[
d_{j_1, \ldots, j_l, j_{l+1}, \ldots, j_k} = -d_{j_1, \ldots, j_{l+1}, j_l, \ldots, j_k}
\]
for all $l \in \{1, \ldots, k-1\}$, we can extend this summation over all $k$-tuples:
\[
f = \frac{1}{k!} \sum_{i_1, \ldots, i_k = 1}^{n} d_{i_1, \ldots, i_k}\, \phi^{i_1} \wedge \phi^{i_2} \wedge \cdots \wedge \phi^{i_k}.
\tag{12.5}
\]
The wedge product provides a convenient way to check whether a set of 1-tensors is linearly independent.

Theorem 12.16 Independence of Alternating 1-tensors. If $\omega^1, \ldots, \omega^k$ are 1-tensors over $V$, then they are linearly independent if and only if
\[
\omega^1 \wedge \omega^2 \wedge \cdots \wedge \omega^k \neq 0.
\]
Each $\alpha^i$ may be expanded as
\[
\alpha^i = \sum_{j=1}^{k} a_{ij} \omega^j.
\]
Therefore, the product of the $\alpha^i$ can be written as
\[
\alpha^1 \wedge \alpha^2 \wedge \cdots \wedge \alpha^k
= \Bigl( \sum_{j=1}^{k} a_{1j} \omega^j \Bigr) \wedge \cdots \wedge \Bigl( \sum_{j=1}^{k} a_{kj} \omega^j \Bigr).
\]
The claim follows by Theorem 12.16 and the skew commutativity of the wedge product. □
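Theorem 12.16 can be exercised numerically for $k = 2$ by representing $\omega^1 \wedge \omega^2$ as the antisymmetrized outer product $\omega^1 \otimes \omega^2 - \omega^2 \otimes \omega^1$; the test covectors below are arbitrary choices:

```python
import numpy as np

def wedge2(a, b):
    """w^1 ^ w^2 as the alternating 2-tensor  a (x) b - b (x) a."""
    return np.outer(a, b) - np.outer(b, a)

w1 = np.array([1.0, 2.0, 0.0])
w2 = np.array([0.0, 1.0, 3.0])

# Independent covectors: the wedge is nonzero.
assert np.any(wedge2(w1, w2) != 0)
# Dependent covectors (w2 = 2 w1): the wedge vanishes.
assert np.allclose(wedge2(w1, 2.0 * w1), 0.0)
```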
Proof: Pick a basis $\phi^1, \phi^2, \ldots, \phi^n$ for $V^*$ such that $\omega = \phi^1$. With respect to this basis, $\xi$ can be written as
(12.6)
for all ascending $k$-tuples $J = (j_1, \ldots, j_k)$ and some scalars $d_{j_1, \ldots, j_k}$. If $\omega$ is a divisor of $\xi$, then it must be contained in each nonzero term of this summation. Therefore $\omega \wedge \xi \equiv 0$. On the other hand, if $\omega \wedge \xi \equiv 0$, then every nonzero term of $\xi$ must contain $\omega$. Otherwise, we would have $\omega \wedge \phi^{j_1} \wedge \cdots \wedge \phi^{j_k} = \phi^1 \wedge \phi^{j_1} \wedge \cdots \wedge \phi^{j_k}$ for $j_1, \ldots, j_k \neq 1$, which is a basis element of $\Lambda^{k+1}(V^*)$ and therefore nonzero. □
If we select a basis $\phi^1, \phi^2, \ldots, \phi^n$ for $V^*$ such that $\operatorname{span}\{\phi^1, \phi^2, \ldots, \phi^l\} = L_\xi$, then $\xi$ can be written as $\xi = \epsilon \wedge \phi^1 \wedge \cdots \wedge \phi^l$, where $\epsilon \in \Lambda^{k-l}(V^*)$ is not decomposable and involves only $\phi^{l+1}, \ldots, \phi^n$.
It may be verified (see for example Abraham et al. [2, page 429]) that if $a, b, c, d \in \mathbb{R}$, $v, w \in V$, $g, h \in \mathcal{L}^k(V)$ are $k$-tensors, and $\tau \in \Lambda^k(V^*)$, $f \in \Lambda^m(V^*)$ are alternating tensors, then we have the following identities:
1. Bilinearity:
Proof: Let $\phi^1, \ldots, \phi^n$ be the dual basis to $a_1, \ldots, a_n$. Then $\omega$ can be written with respect to the dual basis as
\[
\omega = \sum_J d_J\, \phi^{j_1} \wedge \phi^{j_2} \wedge \cdots \wedge \phi^{j_k} = \sum_J d_J \psi^J,
\]
where the sum is taken over all ascending $k$-tuples $J$. If a basis element $\psi^J$ does not contain $\phi^i$, then clearly $a_i \lrcorner \psi^J = 0$. If a basis element contains $\phi^i$, then $a_i \lrcorner (\phi^{j_1} \wedge \phi^{j_2} \wedge \cdots \wedge \phi^{j_k}) \neq 0$, because $a_i$ can always be matched with $\phi^i$ through a permutation that affects only the sign. Consequently, $a_i \lrcorner \omega = 0$ if and only if the coefficients $d_J$ of all the terms containing $\phi^i$ are zero. □
2The algebraic ideal is the ideal of the algebra considered as a ring. Furthermore, since
this ring has an identity, any ideal must be a subspace of the algebra considered as a vector
space.
Definition 12.29 Minimal Ideal. Let $(V, \circ)$ be an algebra. Let the set $A := \{\alpha_i \in V,\ 1 \le i \le K\}$ be any finite collection of linearly independent elements in $V$. Let $S$ be the set of all ideals containing $A$; the minimal ideal containing $A$ is
\[
\mathcal{I}_A = \bigcap_{I \in S} I.
\]
If $(V, \circ)$ has an identity element, then the ideal generated by a finite set of elements can be represented in a simple form.

$x \equiv y \bmod \mathcal{I}$.

If $(V, \circ)$ has an identity element, the above definition implies that $x \equiv y \bmod \mathcal{I}$ if and only if
\[
x - y = \sum_{i=1}^{K} \theta_i \circ \alpha_i.
\]
written as $\alpha|_W = 0$, where $\alpha|_W$ stands for $\alpha(v_1, \ldots, v_k)$ for all $v_1, \ldots, v_k \in W$.
A system of exterior equations generally does not have a unique solution, since any subspace $W_1 \subset W$ will satisfy $\alpha|_{W_1} = 0$ if $\alpha|_W = 0$. A central fact concerning systems of exterior equations is given by the following theorem:

Theorem 12.34 Solution of an Exterior System. Given a system of exterior equations $\alpha^1 = 0, \ldots, \alpha^K = 0$ and the corresponding ideal $\mathcal{I}_\Sigma$ generated by the collection of alternating tensors $\Sigma := \{\alpha^1, \ldots, \alpha^K\}$, a subspace $W$ solves the system of exterior equations if and only if it also satisfies $\pi|_W = 0$ for every $\pi \in \mathcal{I}_\Sigma$.

Proof: Clearly, $\pi|_W = 0$ for every $\pi \in \mathcal{I}_\Sigma$ implies $\alpha^i|_W = 0$, as $\alpha^i \in \mathcal{I}_\Sigma$. Conversely, if $\pi \in \mathcal{I}_\Sigma$, then $\pi = \sum_{i=1}^{K} \theta^i \wedge \alpha^i$ for some $\theta^i \in \Lambda(V^*)$. Therefore, $\alpha^i|_W = 0$ implies that $\pi|_W = 0$. □
This result allows us to treat the system of exterior equations, the set of generators for the ideal, and the algebraic ideal as essentially equivalent objects. We may sometimes abuse notation and confuse a system of equations with its corresponding generator set, and a generator set with its corresponding ideal. When it is important to distinguish them, we will explicitly write out the system of exterior equations, and denote the set of generators by $\Sigma$ and the ideal which they generate by $\mathcal{I}_\Sigma$. Recall that an algebraic ideal was defined in a coordinate-free way as a subspace of the algebra satisfying certain closure properties. Thus the ideal has an intrinsic geometric meaning, and we can think of two sets of generators as representing the same system of exterior equations if they generate the same algebraic ideal.

Definition 12.35 Equivalent Systems. Two sets of generators $\Sigma_1$ and $\Sigma_2$ which generate the same ideal, i.e., $\mathcal{I}_{\Sigma_1} = \mathcal{I}_{\Sigma_2}$, are said to be algebraically equivalent.

We will exploit this notion of equivalence to represent a system of exterior equations in a simplified form. In order to do this, we need a few preliminary definitions.
Definition 12.36 Associated Space of an Ideal. Let $\Sigma$ be a system of exterior equations and $\mathcal{I}_\Sigma$ the ideal which it generates. The associated space of the ideal $\mathcal{I}_\Sigma$ is defined by
\[
A(\mathcal{I}_\Sigma) := \{ v \in V \mid v \lrcorner \alpha \in \mathcal{I}_\Sigma\ \forall\, \alpha \in \mathcal{I}_\Sigma \},
\]
that is, for all $v$ in the associated space and $\alpha$ in the ideal, $v \lrcorner \alpha \equiv 0 \bmod \mathcal{I}_\Sigma$. The dual associated space, or retracting space, of the ideal is defined by $A(\mathcal{I}_\Sigma)^{\perp}$ and denoted by $C(\mathcal{I}_\Sigma) \subset V^*$.

Once we have determined the retracting space $C(\mathcal{I}_\Sigma)$, we can find an algebraically equivalent system $\Sigma'$ that is a subset of $\Lambda(C(\mathcal{I}_\Sigma))$, the exterior algebra over the retracting space.

Theorem 12.37 Characterization of Retracting Space. Let $\Sigma$ be a system of exterior equations and $\mathcal{I}_\Sigma$ its corresponding algebraic ideal. Then there exists an algebraically equivalent system $\Sigma'$ such that $\Sigma' \subset \Lambda(C(\mathcal{I}_\Sigma))$.
Proof: Let $v_1, \ldots, v_n$ be a basis for $V$, and $\eta^1, \ldots, \eta^n$ be the dual basis, selected such that $v_{r+1}, \ldots, v_n$ span $A(\mathcal{I}_\Sigma)$. Consequently, $\eta^1, \ldots, \eta^r$ must span $C(\mathcal{I}_\Sigma)$. The proof is by induction. First, let $\alpha$ be any one-tensor in $\mathcal{I}_\Sigma$. With respect to the chosen basis, $\alpha$ can be written as
\[
\alpha = \sum_{i=1}^{n} a_i \eta^i.
\]
Since $v \lrcorner \alpha \equiv 0 \bmod \mathcal{I}_\Sigma$ for all $v \in A(\mathcal{I}_\Sigma)$ by the definition of the associated space, we must have $a_i = 0$ for $i = r+1, \ldots, n$. Therefore,
\[
\alpha = \sum_{i=1}^{r} a_i \eta^i.
\]
Now suppose that all tensors of degree $\le k$ in $\mathcal{I}_\Sigma$ are contained in $\Lambda(C(\mathcal{I}_\Sigma))$. Let $\alpha$ be any $(k+1)$-tensor in $\mathcal{I}_\Sigma$. Consider the tensor
and $v_{r+1} \lrcorner \alpha \in \mathcal{I}_\Sigma$. We can continue this process for $v_{r+2}, \ldots, v_n$ to produce an $\alpha$ that is a generator of $\mathcal{I}_\Sigma$ and is an element of $\Lambda(C(\mathcal{I}_\Sigma))$. □
Example 12.38. Let $v_1, \ldots, v_6$ be a basis for $\mathbb{R}^6$ and let $\theta^1, \ldots, \theta^6$ be the dual basis. Consider the system of exterior equations
\[
\begin{aligned}
\alpha^1 &= \theta^1 \wedge \theta^3 = 0, \\
\alpha^2 &= \theta^1 \wedge \theta^4 = 0, \\
\alpha^3 &= \theta^1 \wedge \theta^2 - \theta^3 \wedge \theta^4 = 0, \\
\alpha^4 &= \theta^1 \wedge \theta^2 \wedge \theta^5 - \theta^3 \wedge \theta^4 \wedge \theta^6 = 0.
\end{aligned}
\]
Let $\mathcal{I}_\Sigma$ be the ideal generated by $\Sigma = \{\alpha^1, \alpha^2, \alpha^3, \alpha^4\}$ and $A(\mathcal{I}_\Sigma)$ the associated space of $\mathcal{I}_\Sigma$. Because $\mathcal{I}_\Sigma$ contains no 1-tensors, we can infer that for all $v \in A(\mathcal{I}_\Sigma)$, we have $v \lrcorner \alpha^1 = 0$, $v \lrcorner \alpha^2 = 0$, and $v \lrcorner \alpha^3 = 0$. Expanding the first equation, we get
\[
\begin{aligned}
v \lrcorner \alpha^1 &= v \lrcorner (\theta^1 \wedge \theta^3) \\
&= (v \lrcorner \theta^1) \wedge \theta^3 + (-1)^1\, \theta^1 \wedge (v \lrcorner \theta^3) \\
&= \theta^1(v)\, \theta^3 - \theta^3(v)\, \theta^1 = 0.
\end{aligned}
\]
\[
v \lrcorner \alpha^2 = \theta^1(v)\, \theta^4 - \theta^4(v)\, \theta^1 = 0,
\]
\[
\begin{aligned}
&- \bigl( u \lrcorner (\theta^3 \wedge \theta^4) \bigr) \wedge \theta^6 - (-1)^2 (\theta^3 \wedge \theta^4) \wedge (u \lrcorner \theta^6) \\
&= \theta^5(u)\, \theta^1 \wedge \theta^2 - \theta^6(u)\, \theta^3 \wedge \theta^4 \\
&= a\, (\theta^1 \wedge \theta^3) + b\, (\theta^1 \wedge \theta^4) + c\, (\theta^1 \wedge \theta^2 - \theta^3 \wedge \theta^4).
\end{aligned}
\]
\[
\gamma^i = \theta^i, \qquad i = 1, \ldots, 4,
\]
With respect to this new basis, the retracting space $C(\mathcal{I}_\Sigma)$ is given by
\[
C(\mathcal{I}_\Sigma) = \operatorname{span}\{\gamma^1, \ldots, \gamma^5\}.
\]
12.2.2 Forms
Since the tangent space to a differentiable manifold at each point is a vector space,
we can apply to it the multilinear algebra presented in the previous section. Recall
(12.8)
for any smooth function $f \in C^\infty(p)$. The space $T_pM$ is an $n$-dimensional vector space, and if $(U, x)$ is a local chart around $p$, then the tangent vectors
\[
\frac{\partial}{\partial x^1}, \ldots, \frac{\partial}{\partial x^n}
\tag{12.9}
\]
for all $t \in (-\delta, \delta)$. Thus, a local expression for a vector field $F$ in the chart $(U, x)$ is
\[
F(p) = \sum_{i=1}^{n} a_i(p) \frac{\partial}{\partial x^i}.
\tag{12.11}
\]
The vector field $F$ is $C^\infty$ if and only if the scalar functions $a_i : M \to \mathbb{R}$ are $C^\infty$.
Tensor Fields
Since the tangent space to a manifold at a point is a vector space, we can apply all the multilinear algebra that we presented in the previous section to it. The dual space of $T_pM$ at each $p \in M$ is called the cotangent space to the manifold $M$ at $p$ and is denoted by $T_p^*M$. The collection of all cotangent spaces, called the cotangent bundle, is
\[
T^*M := \bigcup_{p \in M} T_p^*M.
\tag{12.12}
\]
(12.15)
be the basis for T_pM. Let the 1-forms φ^i be the dual basis to these basis tangent
vectors, i.e.,
for multi-indices I and scalar functions b_I(p). The k-form α can be written
uniquely as
for ascending multi-indices I and scalar functions c_I. The k-tensor ω and k-form α
are of class C^∞ if and only if the functions b_I and c_I are of class C^∞, respectively.
Given two forms ω ∈ Ω^k(M), θ ∈ Ω^l(M), we have

ω = ∑_I b_I ψ^I,    θ = ∑_J c_J ψ^J,

ω ∧ θ = ∑_I ∑_J b_I c_J ψ^I ∧ ψ^J.
(12.21)
The operator d provides a new way of expressing the elementary 1-forms φ^i(p)
on T_pM. Let x : M → ℝ^n be the coordinate function in a neighborhood of p.
Consider the differentials of the coordinate functions
(12.23)
If we evaluate the differentials dx^i at the basis tangent vectors of T_pM we obtain,
and therefore the dx^i(p) are the dual basis of T_pM. Since the φ^i(p) are also the
dual basis, dx^i(p) = φ^i(p). Thus the differentials dx^i(p) span Λ^1(T_pM), and from
our previous results any k-tensor ω can be uniquely written as
for ascending multi-indices I = {i_1, i_2, …, i_k}. Using this basis we have that for
a 0-form,

df = ∑_{i=1}^n (∂f/∂x^i) dx^i.    (12.27)
More generally, we can define an operator d : Ω^k(M) → Ω^{k+1}(M) that takes
k-forms to (k + 1)-forms.
596 12. Exterior Differential Systems in Control
dω_I = ∑_{j=1}^n (∂ω_I/∂x^j) dx^j.    (12.30)

dω = ∑_I ∑_{j=1}^n (∂ω_I/∂x^j) dx^j ∧ dx^I.    (12.31)
From the definition, this operator is certainly linear. We now prove that this
differential operator is a true generalization of the operator taking 0-forms to
1-forms, satisfies some important properties, and is the unique operator with those
properties.
Theorem 12.41 Properties of Exterior Derivative. Let M be a manifold and
let p ∈ M. Then the exterior derivative is the unique linear operator
(12.32)
for k ≥ 0 that satisfies:
1. If f is a 0-form, then df is the 1-form
df(p)(X_p) = X_p(f).    (12.33)
2. If ω^1 ∈ Ω^k(M), ω^2 ∈ Ω^l(M), then

d(ω^1 ∧ ω^2) = dω^1 ∧ ω^2 + (−1)^k ω^1 ∧ dω^2.    (12.34)

3. For every form ω, d(dω) = 0.
Proof: Property (1) can be easily checked from the definition of the exterior
derivative. For property (2), it suffices to consider the case ω^1 = f dx^I and ω^2 =
g dx^J in some chart (U, x), because of linearity of the exterior derivative.
For property (3), it again suffices to consider the case ω = f dx^I because of
linearity. Since f is a 0-form,
d(df) = d(∑_{j=1}^n (∂f/∂x^j) dx^j) = ∑_{i=1}^n ∑_{j=1}^n (∂²f/∂x^i ∂x^j) dx^i ∧ dx^j

      = ∑_{i<j} (∂²f/∂x^i ∂x^j − ∂²f/∂x^j ∂x^i) dx^i ∧ dx^j = 0.

We therefore have d(df) = 0 by the equality of mixed partial derivatives and the
fact that dx^i ∧ dx^i = 0. If ω = f dx^I is a k-form, then dω = df ∧ dx^I + f ∧ d(dx^I)
by property (2), and since
by property (2), and since
d(dx^I) = d(1 ∧ dx^I) = d(1) ∧ dx^I = 0,    (12.35)
we get
d(dω) = d(df) ∧ dx^I − df ∧ d(dx^I) = 0.    (12.36)
To show that d is the unique such operator, we assume that d′ is another linear
operator with the same properties and then show that d = d′. Consider again a
k-form ω = f dx^I. Since d′ satisfies property (2), we have
(12.37)
From the above formula we see that if we can show that d′(dx^I) = 0, then we will
get
(12.38)
because d′f = df by property (1), and that will complete the proof. We therefore
want to show that

d′(dx^{i_1} ∧ … ∧ dx^{i_k}) = 0.    (12.39)

But since both d and d′ satisfy property (1) we have

dx^I = dx^{i_1} ∧ … ∧ dx^{i_k} = d′x^{i_1} ∧ … ∧ d′x^{i_k} = d′x^I,    (12.40)

since the coordinate functions x^i are 0-forms. Then

d′(dx^{i_1} ∧ … ∧ dx^{i_k}) = d′(d′x^{i_1} ∧ … ∧ d′x^{i_k}) = 0,    (12.41)

since d′ satisfies property (3). □
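To make property (3) concrete, here is a small sketch (not from the text; all helper names are our own) that represents forms on ℝ³ as dictionaries from strictly increasing index tuples to sympy coefficients, computes the exterior derivative term by term as in (12.31), and checks d(df) = 0 for a sample function:

```python
import sympy as sp

x = sp.symbols('x1:4')  # coordinates x1, x2, x3 on R^3

def wedge_sort(idx):
    """Sort an index tuple; return (sign, sorted tuple), sign 0 if an index repeats."""
    idx = list(idx)
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return 0, tuple(idx)
    return sign, tuple(idx)

def d(form):
    """Exterior derivative: d(sum_I w_I dx^I) = sum_I sum_j (dw_I/dx^j) dx^j ^ dx^I."""
    out = {}
    for I, coeff in form.items():
        for j in range(len(x)):
            sign, J = wedge_sort((j,) + I)
            if sign:
                out[J] = sp.simplify(out.get(J, 0) + sign * sp.diff(coeff, x[j]))
    return {I: c for I, c in out.items() if sp.simplify(c) != 0}

# A 0-form f, its differential df, and the check d(df) = 0 (Theorem 12.41, prop. 3):
f = x[0]**2 * sp.sin(x[1]) + sp.exp(x[2]) * x[0]
df = d({(): f})
assert d(df) == {}   # vanishes by equality of mixed partials
```

The cancellation happens exactly as in the proof: each pair of mixed partials lands on the same sorted tuple with opposite signs.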
Now let f : M → N be a smooth map between two manifolds. We have seen that
the push-forward map, f_*, is a linear transformation from T_pM to T_{f(p)}N. Therefore,
given tensors or forms on T_{f(p)}N, we can use the pullback transformation,
f^*, in order to define tensors or forms on T_pM. The next theorem shows that the
exterior derivative and the pullback transformation commute.
The following lemma establishes a relation between the exterior derivative and Lie
brackets .
Lemma 12.46 Cartan's Magic Formula. Let ω ∈ Ω^1(M) and X, Y be smooth
vector fields. Then

dω(X, Y) = X(ω(Y)) − Y(ω(X)) − ω([X, Y]).    (12.47)
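The identity (12.47) can be verified symbolically on sample data; the following sketch (our own choice of ω, X, Y, not from the text) expresses both sides in components on ℝ³:

```python
import sympy as sp

# Hypothetical test data: a 1-form w and two vector fields X, Y on R^3,
# all given by their components.
crd = sp.symbols('x y z')
w = [crd[1] * crd[2], crd[0]**2, sp.sin(crd[0])]   # w = yz dx + x^2 dy + sin(x) dz
X = [crd[2], crd[0] * crd[1], 1]
Y = [sp.cos(crd[1]), crd[2], crd[0]]

def vf(V, f):
    """Directional derivative V(f) = sum_i V_i df/dx_i."""
    return sum(V[i] * sp.diff(f, crd[i]) for i in range(3))

def pairing(w, V):
    """Evaluation w(V) = sum_i w_i V_i."""
    return sum(w[i] * V[i] for i in range(3))

def bracket(X, Y):
    """Lie bracket [X, Y], componentwise X(Y_i) - Y(X_i)."""
    return [vf(X, Y[i]) - vf(Y, X[i]) for i in range(3)]

# Left-hand side dw(X, Y), using dw = sum_{i<j} (dw_j/dx_i - dw_i/dx_j) dx^i ^ dx^j:
lhs = sum((sp.diff(w[j], crd[i]) - sp.diff(w[i], crd[j])) * (X[i] * Y[j] - X[j] * Y[i])
          for i in range(3) for j in range(i + 1, 3))
# Right-hand side of (12.47):
rhs = vf(X, pairing(w, Y)) - vf(Y, pairing(w, X)) - pairing(w, bracket(X, Y))

assert sp.expand(lhs - rhs) == 0   # Cartan's formula holds on this example
```

Expanding both sides shows the ω_k X(Y_k) and ω_k Y(X_k) terms cancel against ω([X, Y]), which is exactly why the formula is tensorial in X and Y.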
Codistributions
Recall from Chapter 8 that a smooth distribution smoothly associates a subspace
of the tangent space at each point p E M . It is represented as the span of d smooth
vector fields, as follows
(12.48)
(12.56)
(12.57)
A finite collection of forms, Σ := {α^1, …, α^K}, generates an algebraic ideal.
Theorem 12.49. Let Σ be a finite collection of forms, and let I_Σ denote the
differential ideal generated by Σ. Define the collection

Σ′ = Σ ∪ dΣ

and denote the algebraic ideal which it generates by Ī_{Σ′}. Then

I_Σ = Ī_{Σ′}.

Proof: By definition, I_Σ is closed with respect to exterior differentiation, so
Σ′ ⊂ I_Σ. Consequently, Ī_{Σ′} ⊂ I_Σ. The ideal Ī_{Σ′} is closed with respect to exterior
differentiation and contains Σ by construction. Therefore, from the definition of
I_Σ we have that I_Σ ⊂ Ī_{Σ′}. □
The associated space and retracting space of an ideal in Ω(M) are defined
pointwise as in Section 12.2.1. The associated space of I_Σ is called the Cauchy
characteristic distribution and is denoted by A(I_Σ).
This theorem allows us to work either with the generators of an ideal or with the
ideal itself. In fact, some authors define exterior differential systems as differential
ideals of Ω(M). Because a set of generators Σ generates both a differential ideal
I_Σ and an algebraic ideal Ī_Σ, we can define two different notions of equivalence
for exterior differential systems. Two exterior differential systems Σ_1 and Σ_2 are
said to be algebraically equivalent if they generate the same algebraic ideal, i.e.,
Ī_{Σ_1} = Ī_{Σ_2}. Two exterior differential systems Σ_1 and Σ_2 are said to be equivalent
if they generate the same differential ideal, i.e., I_{Σ_1} = I_{Σ_2}. Intuitively, we want
to think of two exterior differential systems as equivalent if they have the same
solution set. Therefore, we will usually discuss equivalence in the latter sense.
Pfaffian Systems
Pfaffian systems are of particular interest because they can be used to represent a
set of first-order ordinary differential equations.
Derived Flags
If the algebraic ideal generated by a Pfaffian system does not satisfy the Frobenius
condition, then it is not a differential ideal. However, there may exist a differential
ideal which is a subset of the algebraic ideal. This subideal can be found by taking
the derived flag of the Pfaffian system. Let I^(0) = {ω^1, …, ω^s} be the algebraic
ideal generated by independent 1-forms ω^1, …, ω^s. We define I^(1) as

I^(1) = {λ ∈ I^(0) : dλ ≡ 0 mod I^(0)} ⊂ I^(0).

The ideal I^(1) is called the first derived system. The analogue of the first derived
system from the distribution point of view is given by the following theorem.
Theorem 12.58. If I^(0) = Δ^⊥, then I^(1) = (Δ + [Δ, Δ])^⊥.

Proof: Let I^(0) be spanned by 1-forms ω^1, …, ω^s and let Δ be its annihilating
distribution. By definition we have that

I^(1) = {ω ∈ I^(0) : dω ≡ 0 mod I^(0)}.

dη = ∑_{j=1}^s θ_j ∧ ω^j

for some forms θ_j. Now let X, Y be vector fields in Δ. Since Δ is the annihilating
distribution of I^(0), ω^j(X) = ω^j(Y) = 0. Also, η ∈ I^(1) ⊂ I^(0), and therefore
η(X) = η(Y) = 0. Now, using the expression for dη,

dη(X, Y) = ∑_{j=1}^s (θ_j ∧ ω^j)(X, Y)
         = ∑_{j=1}^s θ_j(X) ω^j(Y) − θ_j(Y) ω^j(X)
         = 0,
and therefore

η([X, Y]) = 0,
One may inductively continue this procedure of obtaining derived systems and
define, in general,

I^(k+1) = {λ ∈ I^(k) : dλ ≡ 0 mod I^(k)} ⊂ I^(k).
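As a concrete check of Theorem 12.58 (a sketch under our own naming, not from the text), take the unicycle codistribution I^(0) = {sin θ dx − cos θ dy}: its annihilating distribution Δ together with a single Lie bracket already spans ℝ³, so the first derived system is trivial.

```python
import sympy as sp

x, y, th = coords = sp.symbols('x y theta')

# Annihilating distribution of I^(0) = {alpha}, alpha = sin(th) dx - cos(th) dy:
g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # rolling direction
g2 = sp.Matrix([0, 0, 1])                     # steering

def bracket(f, g):
    """Lie bracket [f, g] = (dg/dx) f - (df/dx) g."""
    J = lambda v: v.jacobian(sp.Matrix(coords))
    return J(g) * f - J(f) * g

D = sp.Matrix.hstack(g1, g2)                        # Delta
D1 = sp.Matrix.hstack(g1, g2, bracket(g1, g2))      # Delta + [Delta, Delta]

assert D.rank() == 2
# Full rank: Delta + [Delta, Delta] = R^3, hence I^(1) = (Delta + [Delta,Delta])^perp = {0}.
assert sp.simplify(D1.det()) == 1
```

Since I^(1) = {0}, the unicycle constraint contains no integrable subsystem, consistent with its local accessibility shown later in this section.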
The following theorem allows us, under a rank condition, to write α in a normal
form.
Theorem 12.60 Pfaff Theorem. Let α ∈ Ω^1(M) have constant rank r in a
neighborhood of p. Then there exists a coordinate chart (U, z) such that in these
coordinates, α = dz^1 + z^2 dz^3 + … + z^{2r} dz^{2r+1}.
Proof: Let I be the differential ideal generated by α. From Theorem 12.39 the
retracting space of I is of dimension 2r + 1. By Theorem 12.57 there exist local
coordinates y^1, …, y^n such that I has a set of generators in y^1, …, y^{2r+1}. Then,
by dimension count, any function f_1 of those 2r + 1 coordinates results in

(dα)^r ∧ α ∧ df_1 = 0.

Now let I_1 be the ideal generated by {df_1, α, dα}. If r = 0, then the result follows
from the Frobenius theorem, Theorem 12.55. If r > 0, the forms df_1 and α must
be linearly independent, since α is not integrable. Applying Theorem 12.39 to I_1,
let r_1 be the smallest integer such that

(dα)^{r_1+1} ∧ α ∧ df_1 = 0.

dα ∧ α ∧ df_1 ∧ df_2 ∧ … ∧ df_r = 0,
α ∧ df_1 ∧ df_2 ∧ … ∧ df_r ≠ 0.

Finally, let I_r be the ideal {df_1, …, df_r, α, dα}. Its retracting space C(I_r) is of
dimension r + 1. There is a function f_{r+1} such that

α ∧ df_1 ∧ df_2 ∧ … ∧ df_{r+1} = 0,
df_1 ∧ df_2 ∧ … ∧ df_{r+1} ≠ 0.

By modifying α by a factor, we can write

α = df_{r+1} + g_1 df_1 + … + g_r df_r.

Because (dα)^r ∧ α ≠ 0, the functions f_1, …, f_{r+1}, g_1, …, g_r are independent.
The result then follows by setting

z^{2i+1} = f_i

for 1 ≤ i ≤ r. □
Example 12.61 Unicycle in Pfaff Form. Consider the unicycle example described
by the codistribution I = {α}, where α = sin θ dx − cos θ dy. We can
immediately see that

dα = cos θ dθ ∧ dx + sin θ dθ ∧ dy

and that

dα ∧ α = dθ ∧ dy ∧ dx ≠ 0,
(dα)^2 ∧ α = 0.

Therefore α has rank 1, and by Pfaff's Theorem there exist coordinates z^1, z^2, z^3
such that

α = dz^1 + z^2 dz^3.

In this example we trivially obtain

α = dy + (− tan θ) dx.
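The rank computation in Example 12.61 can be replayed symbolically; the sketch below (helper names are our own) stores forms as dictionaries from increasing index tuples over (x, y, θ) to sympy coefficients and wedges them with explicit sign bookkeeping:

```python
import sympy as sp

x, y, th = coords = sp.symbols('x y theta')
n = 3  # indices 0, 1, 2 stand for dx, dy, dtheta

def sort_sign(idx):
    """Bubble-sort an index tuple, tracking the permutation sign (0 if repeated)."""
    idx = list(idx); sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return (0 if len(set(idx)) < len(idx) else sign), tuple(idx)

def wedge(a, b):
    """Wedge product of two forms in coefficient-dictionary representation."""
    out = {}
    for I, ca in a.items():
        for J, cb in b.items():
            sign, K = sort_sign(I + J)
            if sign:
                out[K] = sp.simplify(out.get(K, 0) + sign * ca * cb)
    return {K: c for K, c in out.items() if sp.simplify(c) != 0}

def d(form):
    """Exterior derivative, term by term."""
    out = {}
    for I, c in form.items():
        for j in range(n):
            sign, K = sort_sign((j,) + I)
            if sign:
                out[K] = sp.simplify(out.get(K, 0) + sign * sp.diff(c, coords[j]))
    return {K: c for K, c in out.items() if sp.simplify(c) != 0}

alpha = {(0,): sp.sin(th), (1,): -sp.cos(th)}   # sin(th) dx - cos(th) dy
da = d(alpha)

assert wedge(da, alpha) == {(0, 1, 2): -1}      # d(alpha) ^ alpha != 0
assert wedge(wedge(da, da), alpha) == {}        # (d alpha)^2 ^ alpha = 0, so rank 1
```

The nonzero coefficient −1 on dx ∧ dy ∧ dθ matches dθ ∧ dy ∧ dx in the example after reordering the factors.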
The following theorem is similar to Pfaff's theorem and simply expresses the
result in a more symmetric form.

Theorem 12.62 Symmetric Version of Pfaff Theorem. Given any α ∈
Ω^1(M) with constant rank r in a neighborhood U of p, there exist coordinates
z, y^1, …, y^r, x^1, …, x^r such that

α = dz + (1/2) ∑_{i=1}^r (y^i dx^i − x^i dy^i).

z^{2i} = y^i,  1 ≤ i ≤ r,
              1 ≤ i ≤ r,

reduces the above theorem to the Pfaff theorem. □
The Pfaffian system α = 0 on a manifold M is said to have the local accessibility
property if every point x ∈ M has a neighborhood U such that every point in U can
be joined to x by an integral curve. The following theorem answers the question of
when this Pfaffian system has the local accessibility property. The reader should
reconcile this definition with the definitions of controllability and accessibility in
Section 11.1.

Theorem 12.63 Carathéodory Theorem. The Pfaffian system

α = 0,

where α has constant rank, has the local accessibility property if and only if

α ∧ dα ≠ 0.

Proof: The above condition simply says that the rank of α must be greater than
or equal to 1. If α has zero rank then dα ∧ α = 0, and therefore by the Frobenius
theorem (Theorem 12.55), we can write

α = dh = 0
12.3 Normal Forms 609
for some function h. The integral curves are of the form h = c for an arbitrary
constant c. Since we can only join points p, q ∈ M for which h(p) = h(q), we
do not have the local accessibility property.
Conversely, let α have rank r ≥ 1. From Theorem 12.62, we can find coordinates
z, x^1, …, x^r, y^1, …, y^r, u^1, …, u^s in some neighborhood U with 2r + s + 1 =
dim M such that

and therefore

dz = (1/2) ∑_{i=1}^r (x^i dy^i − y^i dx^i).

z(t) = (1/2) ∫_0^t ∑_{i=1}^r (x^i (dy^i/dt) − y^i (dx^i/dt)) dt = (1/2) ∑_{i=1}^r A_i,

where A_i is the area enclosed by the curve (x^i(t), y^i(t)) and the chord joining
the origin with (x_0^i, y_0^i). To reach the point q, the curve (x^i(t), y^i(t)) must satisfy
z(1) = z_1. Geometrically, it is clear that a curve (x^i(t), y^i(t)) linking the points
p and q while enclosing the area prescribed by z_1 will always exist. Thus, the
integral curve c(t) given by

(z(t), x^1(t), …, x^r(t), y^1(t), …, y^r(t), t u^1(t), …, t u^s(t))

has c(0) = p and c(1) = q and satisfies the equation α = 0, and the system
therefore has the local accessibility property. □
I = span{α^1, α^2}

Proof: Choose a basis for I that is adapted to the derived flag; that is, I^(0) = I =
{α^1, α^2}, I^(1) = {α^1}, and I^(2) = {0}. Choose α^3 and α^4 to complete the basis.
Since I^(2) = {0} we have

while

(dα^1)^2 ∧ α^1 = 0,

since it is a 5-form on a 4-dimensional space. Therefore, α^1 has rank 1. By Pfaff's
Theorem, we know that there exists a coordinate change such that

α^1 = dz^4 − z^3 dz^1.

Now, since α^1 ∈ I^(1), the definition of the first derived system will imply that

dα^1 ∧ α^1 ∧ α^2 = 0,

and thus

dz^1 ∧ dz^3 ∧ α^1 ∧ α^2 = 0.

α^2 ≡ a(x) dz^3 + b(x) dz^1 mod α^1.
(12.62)
dα^1 ≢ 0 mod I.

Then there exists a coordinate system z^1, z^2, …, z^n in which the Pfaffian system
is in Goursat normal form:

I = {dz^3 − z^2 dz^1, dz^4 − z^3 dz^1, …, dz^n − z^{n−1} dz^1}.
Proof: The Goursat congruences can be expressed as

dα^1 ≡ −α^2 ∧ π mod α^1,
dα^2 ≡ −α^3 ∧ π mod α^1, α^2,
⋮

I^(0) = {α^1, α^2, …, α^s},
I^(1) = {α^1, …, α^{s−1}},
which means that α^1 has rank 1. We can therefore apply Pfaff's Theorem and
express α^1 as

α^1 = dz^n − z^{n−1} dz^1

for some choice of z^1, z^{n−1}, z^n. Furthermore, by Corollary 12.65 we can express
α^2 as

α^2 = dz^{n−1} − z^{n−2} dz^1.    (12.63)

In these new coordinates we have

dα^1 ∧ α^1 = −dz^{n−1} ∧ dz^1 ∧ dz^n.

Now we have that

dα^1 ∧ α^1 ∧ π = π ∧ (−dz^{n−1} ∧ dz^1 ∧ dz^n) = π ∧ (−α^2 ∧ π ∧ α^1) = 0,

and therefore π is a linear combination of dz^1, dz^{n−1}, dz^n. Noting that dz^{n−1} ≡
z^{n−2} dz^1 mod α^1, α^2,

and thus

α^s = dz^3 − z^2 dz^1.
Now, from the Goursat congruences we have that

dα^s ≢ 0 mod I,

and therefore

α^1 ∧ α^2 ∧ … ∧ α^s ∧ dα^s ≠ 0.

If we substitute the α^i in the above expression we obtain

dz^1 ∧ dz^2 ∧ … ∧ dz^n ≠ 0,

and therefore the functions z^1, …, z^n can serve as a local coordinate system. □
The following example illustrates the power of Goursat's theorem by applying
it in order to linearize a nonlinear system. A more systematic approach to the
feedback linearization problem can be found in the paper by Gardner and Shadwick
[108]. Note that the integral curves of a system in Goursat normal form are
completely determined by two arbitrary functions of one variable and their derivatives.
For example, once z^1(τ) and z^s(τ) are known, all of the other coordinates
are determined from

z^i = ż^{i+1}(τ) / ż^1(τ),

where the dots indicate the standard derivative with respect to the independent
variable τ. Because of this property, these two coordinates are sometimes referred
to as linearizing outputs for the Pfaffian system.
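For instance, in a hypothetical n = 4 Goursat form with generators dz^3 − z^2 dz^1 and dz^4 − z^3 dz^1 (the curve choices below are our own), picking the two linearizing outputs freely determines the remaining coordinates by differentiation:

```python
import sympy as sp

t = sp.symbols('t')

# Choose the two linearizing outputs z1 and z4 freely:
z1 = t
z4 = t**3

# Recover the remaining coordinates from z^i = (dz^{i+1}/dt) / (dz^1/dt):
z3 = sp.diff(z4, t) / sp.diff(z1, t)
z2 = sp.diff(z3, t) / sp.diff(z1, t)

# The resulting curve annihilates both generators of the Pfaffian system:
assert sp.simplify(sp.diff(z3, t) - z2 * sp.diff(z1, t)) == 0
assert sp.simplify(sp.diff(z4, t) - z3 * sp.diff(z1, t)) == 0
```

This is exactly the differential-flatness viewpoint: the whole trajectory is parameterized by two curves and finitely many of their derivatives.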
ẋ_s = f_s(x_1, …, x_s, u).

ż^1 = v_1,
ż^2 = v_2,
ż^3 = z^2 v_1,
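The chained form above is reached from the unicycle by a coordinate change; a sketch (the choice z^1 = x, z^2 = tan θ, z^3 = y is our assumption, consistent with Example 12.61) checks that dz^3 − z^2 dz^1 is a nonzero multiple of the rolling constraint α:

```python
import sympy as sp

x, y, th = sp.symbols('x y theta')

# alpha = sin(th) dx - cos(th) dy, coefficients in the order (dx, dy, dth):
alpha = sp.Matrix([sp.sin(th), -sp.cos(th), 0])

# Candidate Goursat/chained-form coordinates (an assumption, per Example 12.61):
z1, z2, z3 = x, sp.tan(th), y

# Coefficients of dz3 - z2 dz1 in the (dx, dy, dth) basis:
grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, th)])
form = grad(z3) - z2 * grad(z1)

# dz3 - z2 dz1 = -sec(th) * alpha, so both vanish on the same curves:
assert sp.simplify(form + sp.sec(th) * alpha) == sp.zeros(3, 1)
```

Since the two one-forms differ only by the nonzero factor −sec θ (away from θ = π/2), they generate the same Pfaffian system.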
is not reachable. Taking into account the connection relations (12.64), any one of
the Cartesian positions x^i, y^i together with all the hitch angles θ^0, …, θ^n will
completely represent the configuration of the system. The configuration space is
thus M = ℝ^2 × (S^1)^{n+1} and has dimension n + 3. In any neighborhood, the
configuration space can be parameterized by ℝ^{n+3}. The velocity constraints on the
system arise from constraining the wheels of the robot and trailers to roll without
slipping; the velocity of each body in the direction perpendicular to its wheels
must be zero. Each pair of wheels is modeled as a single wheel at the midpoint of
the axle. Each velocity constraint can be written as a one-form,

α^i = sin θ^i dx^i − cos θ^i dy^i,  i = 0, …, n.    (12.65)

The one-forms α^0, α^1, …, α^n represent the constraints that the wheels of the
zeroth trailer (i.e., the cab), the first trailer, …, the n-th trailer, respectively, roll
without slipping. The Pfaffian system corresponding to this mobile robot system
is generated by the codistribution spanned by all of the rolling-without-slipping
⁴For example, see the Hilare family of mobile robots at the laboratory LAAS in Toulouse
[60; 113].
constraints,
(12.66)
(12.67)
The exact form of the function f_{θ^i} is unimportant; what will be needed is the
relationship between dθ^i and dx^i. The first lemma relates the exterior derivatives
of the x coordinates.
Lemma 12.68. The exterior derivatives of any of the x variables are congruent
modulo the Pfaffian system, that is: dx^i ≡ f_{x^i,j} dx^j mod I.
Proof: For two adjacent axles, the relationship between the x coordinates is
given by the hitching,

x^{i−1} = x^i + L_i cos θ^i,

dx^{i−1} = dx^i − L_i sin θ^i dθ^i
         = (1 − L_i sin θ^i f_{θ^i}) dx^i mod α^{i−1}, α^i,
I^(n) = {α^n},
I^(n+1) = {0}.    (12.72)

the constraint corresponding to the axle directly in front of it, it is simple to check
that the derived flag has the form given in equation (12.72). □
Note that I^(n+1) = {0} implies that there is no integrable subsystem contained
in the constraints which define the n-trailer Pfaffian system.
i = 1, …, n,

dα^0 ≢ 0 mod I.

A one-form which satisfies these congruences is given by

π = dx^n.    (12.73)
Proof: First of all, consider the original basis of constraints. The expression
for α^i can be written in the configuration space coordinates from equation (12.65)
together with the connection relations (12.64) and some bookkeeping as:

α^i = sin θ^i dx^n − cos θ^i dy^n − ∑_{k=i+1}^n L_k cos(θ^i − θ^k) dθ^k.    (12.74)
Before beginning the main part of the proof, it will be helpful to define a new
basis of constraints ᾱ^i, which is also adapted to the derived flag but is somewhat
simpler to work with. Each ᾱ^i will have only two terms. Although the last constraint
already has only two terms, it will be scaled by a factor,

ᾱ^n = sec θ^n α^n = tan θ^n dx^n − dy^n.    (12.75)

Note that a rearrangement of terms will give the congruence

dy^n ≡ tan θ^n dx^n mod ᾱ^n.    (12.76)

Now consider the next-to-last constraint, α^{n−1}, and apply the preceding congruence:

α^{n−1} = sin θ^{n−1} dx^n − cos θ^{n−1} dy^n − L_n cos(θ^n − θ^{n−1}) dθ^n
        ≡ sec θ^n sin(θ^{n−1} − θ^n) dx^n − L_n cos(θ^{n−1} − θ^n) dθ^n mod ᾱ^n.
(12.77)
This procedure of eliminating the terms dy^n, dθ^n, …, dθ^i from α^{i+1} can be
continued.
Lemma 12.71. A new basis of constraints ᾱ^i of the form

ᾱ^n = tan θ^n dx^n − dy^n,
ᾱ^i = sec θ^n sec(θ^{n−1} − θ^n) ⋯ sec(θ^{i+1} − θ^{i+2})
      × tan(θ^i − θ^{i+1}) dx^n − L_{i+1} dθ^{i+1},    i = 0, …, n − 1,    (12.80)

The lemma is proved by induction. It has already been shown that ᾱ^n = f_{ᾱ^n} α^n
and ᾱ^{n−1} ≡ f_{ᾱ^{n−1}} α^{n−1} mod ᾱ^n. Assume that ᾱ^i ≡ f_{ᾱ^i} α^i mod α^{i+1}, …, α^n for
i = j + 1, …, n. Consider ᾱ^j as defined by equation (12.80),

ᾱ^j = sec θ^n sec(θ^{n−1} − θ^n) ⋯ sec(θ^{j+1} − θ^{j+2})
      × tan(θ^j − θ^{j+1}) dx^n − L_{j+1} dθ^{j+1}.    (12.83)

α^j = sin θ^j dx^n − cos θ^j dy^n − ∑_{k=j+1}^n L_k cos(θ^j − θ^k) dθ^k.    (12.84)
df ≡ … sec θ^n sec(θ^{n−1} − θ^n) ⋯ sec(θ^i − θ^{i+1}) tan(θ^{i−1} − θ^i) dx^n mod ᾱ^{i−1},
After some trigonometric simplification, it can be seen that this equation reads as
The basis ᾱ^i will now be scaled to find the basis α̂^i which will satisfy the
congruences (12.62). Once again, the procedure will start with the last congruence,
α̂^n. The exterior derivative of α̂^n is given by
(12.87)
Looking at the expression for α̂^{n−1} given in equation (12.78), it can be seen that
π should be chosen to be some multiple of dx^n or dθ^n. In fact, either π = dx^n
or π = dθ^n will work, although the computations are different for each case. The
calculations here are for choosing π = dx^n. Choosing the new basis element α̂^{n−1}
as
should be chosen as
(12.91)
It has already been shown that the congruences hold for i = n and i = n − 1.
Assume that the congruences
(12.92)
Before calculating all of the terms, recall that the following congruences hold:

dθ^i ∧ dx^n ≡ 0 mod ᾱ^{i−1},
(12.94)
dθ^i ∧ dθ^k ≡ 0 mod ᾱ^{i−1}, ᾱ^{k−1},

and thus the only term in dα̂^j mod α̂^j, …, α̂^n will be a multiple of dθ^j ∧ dx^n.
This completes the proof that the Goursat congruences are satisfied. □
The constraint corresponding to the last axle is once again given by⁵

α^n = sin θ^n dx^n − cos θ^n dy^n,    (12.98)

z_1 = f_1(x) = x^n,
z_{n+3} = f_2(x) = y^n.

The one-form α^n may be written by dividing through by sin θ^n as

giving z_{n+1} = tan θ^n. The remaining coordinates are found by solving the
equations
(12.103)
for i = n − 1, …, 1. In fact, because dz_1 = π as chosen in the proof of Theorem
12.70, the one-forms α̂^i already satisfy these equations, so that the coordinates z_i
are given by the coefficients of dx^n in the expression for the α̂^i.
⁵The basis that satisfies the Goursat congruences was a scaled version of the original
basis, α̂^n = f_{α^n} α^n. However, it can be checked that

dα̂^n ∧ α̂^n = (df_{α^n} ∧ α^n + f_{α^n} dα^n) ∧ f_{α^n} α^n
            = (f_{α^n})^2 dα^n ∧ α^n,    (12.97)

and thus a function f_1 will satisfy dα^n ∧ α^n ∧ df_1 = 0 if and only if dα̂^n ∧ α̂^n ∧ df_1 = 0.
Singularities
There are two types of singularities associated with the transformation into Goursat
form. At θ^n = π/2, for example, the transformation will be singular, but this
singularity can be avoided by choosing another coordinate chart at the singular
point (such as by interchanging x and y, using the SE(2) symmetry of the system).
A singularity also occurs when the angle between two adjacent axles is equal to
π/2; at this point, some of the codistributions in the derived flag will lose rank.
The derived flag is not defined at these points; nor is the transformation. There are
no singularities of the second type for the unicycle (n = 0) or for the front-wheel
drive car (n = 1).
Once the constraints are in the Goursat normal form, paths can be found which
connect any two desired configurations. See Tilbury, Murray, and Sastry [302] and
Problem 12.8 for details.
then there exists a set of coordinates z such that I is in extended Goursat normal
form.

I = {dz_i^j − z_{i+1}^j dz^0 : i = 1, …, s_j; j = 1, …, m}.
Proof: If the Pfaffian system is already in extended Goursat normal form, the
congruences are satisfied with π = dz^0 (which is integrable) and the basis of
constraints α_i^j = dz_i^j − z_{i+1}^j dz^0.
Now assume that a basis of constraints for I has been found which satisfies the
congruences (12.105). It is easily checked that this basis is adapted to the derived
flag, that is,

I^(k) = {α_i^j : i = 1, …, s_j − k; j = 1, …, m}.

The coordinates z which comprise the Goursat normal form can now be
constructed.
Since π is integrable, any first integral of π can be used for the coordinate z^0.
If necessary, the constraints α_i^j can be scaled so that the congruences (12.105) are
satisfied with dz^0:

dα_i^j ≡ −α_{i+1}^j ∧ dz^0 mod I^(s_j − i),    i = 1, …, s_j − 1,

and the constraints can be renumbered so that s_1 ≥ s_2 ≥ ⋯ ≥ s_m.
Consider the last nontrivial derived system, I^(s_1 − 1). The one-forms α_1^1, …, α_1^{r_1}
form a basis for this codistribution, where s_1 = s_2 = ⋯ = s_{r_1}. From the fact that

dα_1^j ≡ −α_2^j ∧ dz^0 mod I^(s_1 − 1),

it follows that the one-forms α_1^1, …, α_1^{r_1} satisfy the Frobenius condition

dα_1^j ∧ α_1^1 ∧ ⋯ ∧ α_1^{r_1} ∧ dz^0 = 0,

and thus, by the Frobenius theorem, coordinates z_1^1, …, z_1^{r_1} can be found such that
The matrix A must be nonsingular, since the α_1^j's are a basis for I^(s_1 − 1) and they
are independent of dz^0. Therefore, a new basis ᾱ_1^j can be defined as
and the coordinates z_2^j := −(A^{−1}B)_j are defined such that the one-forms ᾱ_1^j have
the form
where ᾱ_i^j = dz_i^j − z_{i+1}^j dz^0 for j = 1, …, r_1, as found in the first step, and α_1^j
for j = r_1 + 1, …, r_2 are the one-forms which satisfy the congruences (12.105)
and are adapted to the derived flag. The lengths of these towers are s_{r_1+1} = ⋯ =
s_{r_1+r_2} = s_1 − k + 1. For notational convenience, define z_{(k)}^j := (z_1^j, …, z_k^j) for
j = 1, …, r_1.
By the Goursat congruences, dα_1^j ≡ −α_2^j ∧ dz^0 mod I^(s_1 − k) for j = r_1 +
1, …, r_1 + r_2. Thus the Frobenius condition
Since the congruences are defined only up to mod I^(s_1 − k), the last group of terms
(those multiplied by the matrix C) can be eliminated by adding in the appropriate
multiples of ᾱ_i^j = dz_i^j − z_{i+1}^j dz^0 for j = 1, …, r_1 and i = 1, …, k. This will
change the B matrix, leaving the equation
Again, note that A must be nonsingular, because the α_1^j's are linearly independent
mod I^(s_1 − k) and also independent of dz^0. Define
If the one-form π which satisfies the congruences (12.105) is not integrable,
then the Frobenius Theorem cannot be used to find the coordinates. In the special
case where s_1 > s_2, that is, there is one tower which is strictly longer than the
others, it can be shown that if there exists any π which satisfies the congruences,
then there also exists an integrable π′ which also satisfies the congruences (with
a rescaling of the basis forms); see [52; 222]. However, if s_1 = s_2, that is, there are at
least two towers which are longest, this is no longer true. Thus, the assumption
that π is integrable is necessary for the general case.
If I can be converted to extended Goursat normal form, then the derived flag of
I has the structure

I^(0) = {α_1^1, …, α_{s_1}^1, α_1^2, …},
I^(1) = {α_1^1, …, α_{s_1−1}^1, …},
⋮
I^(s_m − 1) = {α_1^1, …},

where the forms in each level have been arranged to show the different towers. The
superscripts j indicate the tower to which each form belongs, and the subscripts i
index the position of the form within the j-th tower. There are s_j forms in the j-th
tower. An algorithm for converting systems to extended Goursat normal form is
given in [52].
Another version of the extended Goursat normal form theorem is given here,
which is easier to check, since it does not require finding a basis which satisfies the
congruences but only one which is adapted to the derived flag. One special case
of this theorem is proven in [273].
{I^(k), π} = {dz_i^j, dz^0 : i = k + 1, …, s_j; j = 1, …, m},

which is integrable for each k.
Now assume that such a π exists. After the derived flag of the system, I =
I^(0) ⊃ I^(1) ⊃ ⋯ ⊃ I^(s_1) = {0}, has been found, a basis which is adapted to the
derived flag and which satisfies the Goursat congruences (12.105) can be iteratively
constructed.
The lengths of each tower are determined from the dimensions of the derived
flag. Indeed, the longest tower of forms has length s_1. If the dimension of I^(s_1 − 1)
is r_1, then there are r_1 towers each of which has length s_1, and we have s_1 = s_2 =
⋯ = s_{r_1}. Now, if the dimension of I^(s_1 − 2) is 2r_1 + r_2, then there are r_2 towers with
length s_1 − 1, and s_{r_1+1} = ⋯ = s_{r_1+r_2} = s_1 − 1. Each s_j is found similarly.
A π which satisfies the conditions must be in the complement of I, for if π
were in I, then {I, π} integrable would mean that I is integrable, and this contradicts
the assumption that I^(N) = {0} for some N.
Consider the last nontrivial derived system, I^(s_1 − 1). Let {α_1^1, …, α_1^{r_1}} be a basis
for I^(s_1 − 1). The definition of the derived flag, specifically I^(s_1) = {0}, implies that

dα_1^j ≢ 0 mod I^(s_1 − 1),    j = 1, …, r_1.    (12.106)

Also, the assumption that {I^(k), π} is integrable gives the congruence

dα_1^j ≡ 0 mod {I^(s_1 − 1), π},    j = 1, …, r_1.    (12.107)

Combining equations (12.106) and (12.107), the congruence

dα_1^j ≡ π ∧ β^j mod I^(s_1 − 1),    j = 1, …, r_1,    (12.108)

must be satisfied for some β^j ≢ 0 mod I^(s_1 − 1).
Now, from the definition of the derived flag,

dα_1^j ≡ 0 mod I^(s_1 − 2),    j = 1, …, r_1,
dα = ∑_{j=1}^{r_1} b_j dα_1^j + ∑_{j=1}^{r_1} db_j ∧ α_1^j
   ≡ ∑_{j=1}^{r_1} b_j (π ∧ β^j) mod I^(s_1 − 1)
   ≡ π ∧ (∑_{j=1}^{r_1} b_j β^j) mod I^(s_1 − 1)
   ≡ 0 mod I^(s_1 − 1),

which implies that α is in I^(s_1). However, this contradicts the assumption that
I^(s_1) = {0}. Thus the β^j's must be linearly independent mod I^(s_1 − 1).
Define α_2^j := β^j for j = 1, …, r_1. Note that these basis elements satisfy the
first level of Goursat congruences, that is:

dα_1^j ≡ −α_2^j ∧ π mod I^(s_1 − 1),    j = 1, …, r_1.

If the dimension of I^(s_1 − 2) is greater than 2r_1, then one-forms α_1^{r_1+1}, …, α_1^{r_1+r_2} are
chosen such that
is a basis for I^(s_1 − 2).
For the induction step, assume that a basis for I^(i) has been found,

{α_1^1, …, α_{k_1}^1, α_1^2, …, α_{k_2}^2, …, α_1^c, …, α_{k_c}^c}.

Note that c towers of forms have appeared in I^(i). Consider only the last form in
each tower that appears in I^(i), that is, α_{k_j}^j, j = 1, …, c. By the construction of
this basis (or from the Goursat congruences), α_{k_j}^j is in I^(i) but is not in I^(i+1); thus
for some β^j ≢ 0 mod I^(i). From the fact that α_{k_j}^j is in I^(i) and the definition of the
derived flag,

{α_1^1, …, α_{k_1+1}^1, α_1^2, …, α_{k_2+1}^2, …, α_1^c, …, α_{k_c+1}^c, α_1^{c+1}, …, α_1^{c+r}}

will satisfy the same set of congruences up to a rescaling of the constraint basis
by multiples of this factor f. □
ẋ = f(x, u)    (12.109)

A very important special case of system (12.109) is the one where the input enters
affinely in the dynamics:
where g(x) = [g_1(x) … g_m(x)] and the g_i(x) are smooth vector fields. Most of the
results presented here will be concerned with systems belonging to this class, even
though some can be extended to the more general case (12.109).
In Chapter 9 we established conditions under which the dynamics
of (12.109) and (12.110) are adequately described by those of a linear system

ẋ = Ax + Bu,    (12.111)

where x ∈ ℝ^n, u ∈ ℝ^m, A ∈ ℝ^{n×n}, and B ∈ ℝ^{n×m} with m ≤ n.
The problem of linearization can also be approached from the point of view of
exterior differential systems. Note that any control system of the form (12.109)
can also be thought of as a Pfaffian system of codimension m + 1 in ℝ^{n+m+1}. The
corresponding ideal is generated by the codistribution

I = {dx_i − f_i(x, u) dt : i = 1, …, n}.    (12.112)

The n + m + 1 variables for the Pfaffian system correspond to the n states, m inputs,
and time t. For the special case of the affine system (12.110) the codistribution
becomes:
(12.113)
In this light, the extended Goursat normal form looks remarkably similar to the
MIMO normal form (also known as Brunovsky normal form) with (Kronecker)
indices s_j, j = 1, …, m (see Problem 9.14). Indeed, if we identify the coordinates
z^0, z_{s_j+1}^j, j = 1, …, m in the Goursat normal form with t, u_j, j = 1, …, m, the
Pfaffian system becomes equivalent (in vector field notation) to a collection of
m chains of integrators, each one of length s_j and terminating with an input on
the right-hand side. With this in mind, Theorems 12.73 and 12.74, which provide
conditions under which a Pfaffian system can be transformed to extended Goursat
normal form, can be viewed as linearization theorems with the additional restriction
that π = dt. An equivalent formulation of the conditions of Theorem 12.66
involving the annihilating distributions is given by Murray [223]. The result is
restricted to Pfaffian systems of codimension two.
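A minimal sketch of the Brunovsky pair described above (the function name is ours): each Kronecker index s_j contributes one chain of integrators driven by input u_j, and the resulting pair is controllable.

```python
import numpy as np

def brunovsky(indices):
    """Block-diagonal Brunovsky pair: one integrator chain of length s_j per input."""
    n = sum(indices)
    A = np.zeros((n, n))
    B = np.zeros((n, len(indices)))
    row = 0
    for j, s in enumerate(indices):
        for i in range(s - 1):
            A[row + i, row + i + 1] = 1.0   # z_i' = z_{i+1} within the chain
        B[row + s - 1, j] = 1.0             # last state of the chain is driven by u_j
        row += s
    return A, B

A, B = brunovsky([3, 2])   # Kronecker indices s_1 = 3, s_2 = 2
n = A.shape[0]
# Controllability matrix [B, AB, ..., A^{n-1}B] has full rank n = 5:
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
assert np.linalg.matrix_rank(C) == n
```

This is exactly the vector-field picture of the extended Goursat form: m decoupled chains, one per tower, with the towers' lengths given by the derived-flag dimensions.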
Theorem 12.75. Given a 2-dimensional distribution Δ, construct two filtrations:

E_0 = Δ,    F_0 = Δ,

dim E_i = dim F_i = i + 2,    i = 0, …, n − 2.

Then there exists a local basis {α^1, …, α^s} and a one-form π such that the Goursat
congruences are satisfied for the differential system I = Δ^⊥.
Proof: In [223]. □
In [223] this theorem is shown to be equivalent to Theorem 12.66. However,
there is no known analogue of Theorem 12.75 for the extended Goursat case covered
by Theorems 12.73 and 12.74. We will now explicitly work through the connection
between the classical static feedback linearization theorem (Theorem 9.16) and the
extended Goursat normal form theorem (Theorem 12.74).
Proposition 12.76 Connections Between Vector Field and Pfaffian System Approaches.
The control system (12.110) satisfies the conditions of Theorem 9.16
if and only if the corresponding Pfaffian system (12.113) satisfies the conditions
of Theorem 12.74 for π = dt.
Proof: Consider control system (12.110) and the equivalent Pfaffian system (12.113).
For simplicity, we will consider the case m = 2. The Pfaffian system
I^(0) and its annihilating distribution Δ_0 are given by

Δ_0 = (I^(0))^⊥ = span{ ∂/∂t + (f + g_1 u_1 + g_2 u_2)^T ∂/∂x, ∂/∂u_1, ∂/∂u_2 },

in the coordinates (t, x, u_1, u_2) of ℝ^{n+m+1}.
Tberefore ,
0 0 1 0 o
0 0 0 o
ßI = (/ (1 »).1 =
0 1 0 0 o
0 0 f + gt Ut + g2 U2 gl
o
o
[V4, vsl =
o
[gI , g 2]
o
o
o
The computation of [V3, vs] is similar. Therefore, assum ing that the conditions of
Step 1 hold and in particular that [gI, g 2] E spanlg , g2 }:
1 1
o o
o o
adjgl
= {Vi : i = 1, . . . , 7}.
0 0
0 0 0 0
~i-I =
0 0 0 0
0 0 f + glUI + g2U2 ad }gl ad }g2
0
0 0
0 0
f + glul + g2U2 ad ~-2gl
0
0
=
0
ad~-Igl +[gl , ad ~-2 gilu I + [g2, ad ~-2gtlu2
634 12. ExteriorDifferentialSystems in Control
and similarly for ad ~-I g2. By the assumed involutivity of lli _ I the last two terms
are already in ß; -I. Therefore , we can write
o o
o o
o o
ad}-lgl ad}-l g2
The condition of Theorem 12.74 requires that I^(i) be integrable, or equivalently that Δ_i be involutive. As before, the only pairs that can cause trouble are the ones not involving v1 and v2. Hence the condition is equivalent to G_{i-1} = {ad_f^k g_j : 0 ≤ k ≤ i - 1, j = 1, 2} being involutive.

By induction, the condition of Theorem 12.74 holds for the i-th iteration of the derived flag if and only if the distribution G_{i-1} is involutive, i.e., if and only if condition (3) of Theorem 9.16 holds. In addition, note that the dimension of G_i keeps increasing by at least one until, for some value (denoted κ in Theorem 9.16), G_{κ+1} = G_κ. The involutivity assumption on G_κ prevents any further increase in dimension after this stage is reached. Since the dimension of G_i is necessarily bounded above by n, the number of steps until saturation is bounded by the maximum final dimension, κ ≤ n. By construction, the dimension of Δ_i is three greater than the dimension of G_{i-1}. Moreover, Δ_κ = Δ_{κ+1} and therefore I^(κ) = I^(κ+1), i.e., the derived flag stops shrinking after κ steps. The remaining condition of Theorem 12.74, namely that there exists N such that I^(N) = {0}, is equivalent to the existence of κ such that I^(κ) = {0}, or that Δ_κ has dimension n + 3. As noted above, this is equivalent to G_{κ-1} having dimension n. Since κ ≤ n, this can also be stated as G_{n-1} having dimension n, i.e., condition (2) of Theorem 9.16. The remaining condition of Theorem 9.16, namely that the dimension of G_i is constant for all 0 ≤ i ≤ n - 1, is taken care of by the implicit assumption that all co-distributions in the derived flag have constant dimension. □
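The vector field side of this equivalence can be checked mechanically. A symbolic sketch (not from the text; the example system is chosen purely for illustration) that builds the distributions G_i for a single-input system and tests conditions (2) and (3) of Theorem 9.16:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
q = sp.Matrix([x1, x2, x3])

# illustrative system x' = f(x) + g(x) u
f = sp.Matrix([sp.sin(x2), x3, 0])
g = sp.Matrix([0, 0, 1])

def bracket(a, b):
    """Lie bracket [a, b] = Db*a - Da*b."""
    return sp.simplify(b.jacobian(q) * a - a.jacobian(q) * b)

# G_i = span{g, ad_f g, ..., ad_f^i g}
ad1 = bracket(f, g)        # ad_f g
ad2 = bracket(f, ad1)      # ad_f^2 g
G2 = sp.Matrix.hstack(g, ad1, ad2)

# condition (2): G_{n-1} has dimension n (here, away from cos(x2) = 0)
assert sp.simplify(G2.det()) != 0

# condition (3): G_1 = span{g, ad_f g} is involutive, since [g, ad_f g] = 0
assert bracket(g, ad1) == sp.zeros(3, 1)
```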
A choice of π other than dt implies a rescaling of time as a function of the state. Even though this effect is very useful for the case of driftless systems (where the role of time is effectively played by an input), solutions for π ≠ dt are probably not very useful for linearizing control systems with drift. Because of their generality, Theorems 12.73 and 12.74 are capable of dealing with the more general case of control systems of the form (12.112) (or equivalently (12.109)), as well as drift-free systems, which were investigated in Section 12.3. Equivalent conditions for the vector field case have not been thoroughly investigated. These results will be useful in deriving conditions for dynamic full state feedback linearization. Finally, Theorem 12.75 is a very interesting alternative to Theorems 12.73 and 12.74, since it provides a way of determining whether a Pfaffian system can be converted to Goursat normal form just by looking at the annihilating distributions, without having to determine a one-form π or an appropriate basis. Unfortunately a generalization to multi-input systems (or more precisely to the extended Goursat normal form) is not easy to formulate. It should be noted that the conditions on the filtrations are very much like involutivity conditions. It is interesting to try to relate these conditions to the conditions of Theorem 12.74 (the connection to the conditions of Theorem 12.73 is provided in [223]) and see whether a formulation for the extended problem can be constructed in this way.
12.5 Summary
The work on exterior differential systems continues in several different directions. Some important approaches to solving the problem of full state linearization of nonlinear systems using dynamic state feedback were given by Tilbury and Sastry [303]. There has also been a great deal of recent work in making connections between a class of nonlinear systems called differentially flat (used to characterize systems which can be full state linearized using possibly dynamic state feedback) and exterior differential systems. Differentially flat systems are well reviewed in the paper by Fliess et al. [104], and the connections between flatness and exterior differential systems are discussed by van Nieuwstadt et al. [311], Aranda-Bricaire et al. [8], and Pomet [242]. While this literature and its connections to the exterior systems of this chapter are very interesting, we make only some summary comments here: Roughly speaking, a system is said to be differentially flat when the input and state variables can be expressed as meromorphic functions of certain outputs (the flat outputs) and their derivatives. Once these flat outputs have been identified, path planning or tracking can be done in the coordinates given by the flat outputs. While it is easy to see what the flat outputs are for the n-trailer system, for systems in Goursat or extended Goursat normal form, and for several other systems, there are no general methods, at the current time, for constructively obtaining flat outputs. When systems fail to be flat, they can sometimes be shown to be approximately flat; see, for example, Tomlin et al. [305], and Koo and Sastry [165].
The literature on navigation of n-trailer systems in the presence of obstacles is
very large and growing. Some entry points to the literature are in the work of Wen
and co-workers [92]. Control and stabilization of chained form systems is also a
growing literature: see, for example, Samson [254], Walsh and Bushnell [323],
and M'Closkey and Murray [205].
12.6 Exercises
Problem 12.1. Prove Theorem 12.26.
Problem 12.2. Prove Theorem 12.42.
Problem 12.3 Scalar and vector fields. Given a scalar field (function) f on R^3, there is an obvious transformation to a 0-form f ∈ Ω^0(R^3). Similarly, any vector field in R^3 can be expressed as

G(x) = (x; g1(x)e1 + g2(x)e2 + g3(x)e3),

where e_i are the standard coordinate vectors for R^3. A corresponding 1-form can be constructed in a very straightforward manner:

ω = g1(x)dx^1 + g2(x)dx^2 + g3(x)dx^3.

Perhaps less obviously, scalar fields may also be identified with 3-forms and vector fields with 2-forms. Consider the following transformations from scalar and vector fields to forms on R^3 (T0, T3 have scalars as their domain and T1, T2 vectors in their domain):

T0 : f ↦ f,
T1 : G ↦ g1(x)dx^1 + g2(x)dx^2 + g3(x)dx^3,
T2 : G ↦ g1(x)dx^2 ∧ dx^3 + g2(x)dx^3 ∧ dx^1 + g3(x)dx^1 ∧ dx^2,
T3 : f ↦ f(x)dx^1 ∧ dx^2 ∧ dx^3.

Problem 12.4 Gradient, curl, and divergence. Recall that the gradient, curl, and divergence of scalar and vector fields on R^3 are given by

∇f(x) = (x; Σ_{i=1}^3 ∂f/∂x^i (x) e_i),

∇ × G(x) = (x; (∂g3/∂x^2 − ∂g2/∂x^3)e1 + (∂g1/∂x^3 − ∂g3/∂x^1)e2 + (∂g2/∂x^1 − ∂g1/∂x^2)e3),

∇ · G(x) = Σ_{i=1}^3 ∂g_i/∂x^i (x).

The gradient and divergence may be formed analogously in R^n by extending the summation over all n indices; the curl has no real generalization outside of R^3. Prove using the transformations T_i of the previous problem that the divergence, gradient, and curl operators can be expressed in the language of exterior forms, i.e., that the following diagram commutes:

Scalar Fields --T0--> Ω^0(R^3)
      | ∇                 | d
      v                   v
Vector Fields --T1--> Ω^1(R^3)
      | ∇ ×               | d
      v                   v                    (12.114)
Vector Fields --T2--> Ω^2(R^3)
      | ∇ ·               | d
      v                   v
Scalar Fields --T3--> Ω^3(R^3)

Equivalently,

df = T1(∇f),
d(T1 G) = T2(∇ × G),
d(T2 G) = T3(∇ · G).
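The identity d ∘ d = 0 for exterior differentiation then encodes the classical facts ∇ × (∇f) = 0 and ∇ · (∇ × G) = 0. A quick symbolic spot-check (a sketch using sympy's vector module; the particular fields f and G below are chosen only for illustration):

```python
from sympy import sin, exp
from sympy.vector import CoordSys3D, Vector, gradient, curl, divergence

R = CoordSys3D('R')
f = R.x**2 * R.y + sin(R.z)                           # a scalar field on R^3
G = R.x*R.y*R.i + exp(R.z)*R.j + (R.x + R.z**2)*R.k  # a vector field on R^3

# curl(grad f) = 0 mirrors d(df) = 0
assert curl(gradient(f)) == Vector.zero

# div(curl G) = 0 mirrors d(d(T1 G)) = 0
assert divergence(curl(G)) == 0
```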
Problem 12.5 Rolling penny: derived flag. Consider the rolling penny system. In addition to the three configuration variables of the unicycle, we also have an angle φ describing the orientation of Lincoln's head. The model in this case, assuming for simplicity that the penny has unit radius, is given by

ẋ = u1 cos θ,
ẏ = u1 sin θ,
θ̇ = u2,
φ̇ = −u1,

so that the system evolves along the distribution Δ_0 = span{f1, f2} with

f1 = (cos θ, sin θ, 0, −1)^T,  f2 = (0, 0, 1, 0)^T.

Prove that the annihilating codistribution to the distribution Δ_0 = {f1, f2} is

I = Δ_0^⊥ = {α^1, α^2},

where

α^1 = cos θ dx + sin θ dy + 0 dθ + 1 dφ,
α^2 = sin θ dx − cos θ dy + 0 dθ + 0 dφ.

Compute the derived flag of this system by taking the exterior derivatives of the constraints. Show that

I^(1) = {α^1},  I^(2) = {0}.

Thus show that the basis is adapted to the derived flag. Since I^(2) = {0}, an integrable subsystem does not exist, and the system is not constrained to move on some submanifold of R^4.
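Dually, the derived flag can be checked on the annihilated distribution by computing Lie brackets; a symbolic sketch (not in the text) verifying that α^1 survives one bracketing step while α^2 does not, and that one further bracket exhausts R^4:

```python
import sympy as sp

x, y, th, ph = sp.symbols('x y theta phi')
q = sp.Matrix([x, y, th, ph])

# control vector fields of the rolling penny (note phi' = -u1)
f1 = sp.Matrix([sp.cos(th), sp.sin(th), 0, -1])
f2 = sp.Matrix([0, 0, 1, 0])

def bracket(f, g):
    """Lie bracket [f, g] = Dg*f - Df*g."""
    return sp.simplify(g.jacobian(q) * f - f.jacobian(q) * g)

# constraint one-forms as coefficient rows in (dx, dy, dtheta, dphi)
a1 = sp.Matrix([[sp.cos(th), sp.sin(th), 0, 1]])
a2 = sp.Matrix([[sp.sin(th), -sp.cos(th), 0, 0]])

f3 = bracket(f1, f2)
# alpha^1 annihilates Delta_1 = span{f1, f2, [f1, f2]}; alpha^2 does not
assert (a1 * f3)[0] == 0 and sp.simplify((a2 * f3)[0]) != 0

# one more bracket makes the distribution full rank: I^(2) = {0}
D2 = sp.Matrix.hstack(f1, f2, f3, bracket(f2, f3))
assert sp.simplify(D2.det()) != 0
```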
Problem 12.6 Rolling penny in Engel form. In Problem 12.5, we saw that the derived flag for the rolling penny system satisfies the conditions of Engel's theorem. After some calculations we obtain

dα^1 ∧ α^1 = −dx ∧ dy ∧ dθ − sin θ dθ ∧ dx ∧ dφ + cos θ dθ ∧ dy ∧ dφ.

Since (dα^1)^2 ∧ α^1 = 0, the rank of α^1 is 1. From Pfaff's theorem we know that there exists a function f1 such that

dα^1 ∧ α^1 ∧ df1 = 0.

We can easily see that the function f1 = θ is a solution to this equation. Since the rank of α^1 is 1, we must now search for a function f2 such that

α^1 ∧ df1 ∧ df2 = 0.

Let f2 = f2(x, y, θ, φ). Verify that a solution to this system of equations is

f2(x, y, θ, φ) = x cos θ + y sin θ + φ.

Therefore, following once again the proof of Pfaff's theorem, we may now choose z^1 = f1 and z^4 = f2 such that

α^1 = dz^4 − z^3 dz^1.

Prove that

z^3 = −x sin θ + y cos θ.

Writing α^2 in the new coordinates as α^2 = a dz^3 + b dz^1, prove that

a(x, y, θ, φ) = −1,
b(x, y, θ, φ) = −x cos θ − y sin θ,

will satisfy the equation. Complete the transformation of the rolling penny system into Engel normal form by defining

z^2 = −x cos θ − y sin θ = −b/a.
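The coordinate identities above can be verified symbolically by comparing coefficient rows of the one-forms; a sketch (not from the text; coordinate order (x, y, θ, φ) assumed):

```python
import sympy as sp

x, y, th, ph = sp.symbols('x y theta phi')
q = [x, y, th, ph]
d = lambda h: sp.Matrix([[sp.diff(h, v) for v in q]])  # df as a coefficient row

# Pfaff coordinates found in Problem 12.6
z1 = th
z3 = -x*sp.sin(th) + y*sp.cos(th)
z4 = x*sp.cos(th) + y*sp.sin(th) + ph
a, b = -1, -x*sp.cos(th) - y*sp.sin(th)

# constraint one-forms alpha^1, alpha^2 in (dx, dy, dtheta, dphi)
a1 = sp.Matrix([[sp.cos(th), sp.sin(th), 0, 1]])
a2 = sp.Matrix([[sp.sin(th), -sp.cos(th), 0, 0]])

# alpha^1 = dz^4 - z^3 dz^1  and  alpha^2 = a dz^3 + b dz^1
assert sp.simplify(a1 - (d(z4) - z3*d(z1))) == sp.zeros(1, 4)
assert sp.simplify(a2 - (a*d(z3) + b*d(z1))) == sp.zeros(1, 4)
```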
Problem 12.7 Coordinates for the n-trailer system consisting of the origin seen from the last trailer. Prove that another choice for f1 in solving equations (12.96) is to write the coordinates of the origin as seen from the last trailer. This is reminiscent of a transformation used by Samson [253] in a different context, and is given by

z1 := f1(x) = x_n cos θ_n + y_n sin θ_n.   (12.115)

This has the physical interpretation of being the origin of the reference frame when viewed from a coordinate frame attached to the n-th trailer. Derive the formulas for the remaining coordinates z2, ..., z_{n+2} corresponding to this transformation by solving the equations

(12.116)

for i = n − 1, ..., 1.
Problem 12.8 Steering the n-trailer system [302]. Once the n-trailer system has been converted into Goursat normal form, show that the control system whose control distribution annihilates I is given (in the z coordinates) by

ż1 = u1,
ż2 = u2,
ż3 = z2 u1,             (12.117)
  ⋮
ż_{n+2} = z_{n+1} u1.

Use all the steering methods of Chapter 11, namely, sinusoids, piecewise constant inputs, etc., to steer the n-trailer system from one initial configuration to another. You may wish to see how the trajectories look in the original x_i, y_i, θ_i coordinates.
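For the lowest chain z1, z2, z3, the sinusoidal steering recipe of Chapter 11 can be checked numerically: integrating u1 = a sin t, u2 = b cos t over one period returns z1 and z2 to their initial values while producing a net motion of π a b in z3. A minimal numerical sketch (not from the text):

```python
import numpy as np

# Steering z1' = u1, z2' = u2, z3' = z2*u1 with one period of sinusoids.
a, b = 0.5, 0.8
dt = 1e-3
z = np.zeros(3)
for tk in np.arange(0.0, 2*np.pi, dt):
    u1, u2 = a*np.sin(tk), b*np.cos(tk)
    z += dt * np.array([u1, u2, z[1]*u1])   # Euler step

assert abs(z[0]) < 1e-2 and abs(z[1]) < 1e-2   # z1, z2 return to 0
assert abs(z[2] - np.pi*a*b) < 1e-2            # net motion in z3
```

The first two states execute a closed loop, and the area enclosed by that loop appears as drift in the bracket direction z3, which is the mechanism behind sinusoidal steering of chained form systems.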
Problem 12.9 Planar space robot [321]. Consider a simplified model of a planar robot consisting of two arms connected to a central body via revolute joints. If the robot is free-floating, then the law of conservation of angular momentum implies that moving the arms causes the central body to rotate. In the case that the angular momentum is zero, this conservation law can be viewed as a Pfaffian constraint on the system. Let M and I represent the mass and inertia of the central body and let m represent the mass of the arms, which we take to be concentrated at the tips. The revolute joints are located a distance r from the middle of the central body and the links attached to these joints have length l. We let (x1, y1) and (x2, y2) represent the position of the ends of each of the arms (in terms of θ, ψ1, and ψ2). Assuming that the body is free floating in space and that friction is negligible, we can derive the constraints arising from conservation of angular momentum. If the initial angular momentum is zero, then conservation of angular momentum ensures that the angular momentum stays zero, giving the constraint equation
(12.118)

where ω can be calculated as in (12.119).
Analyze this set of constraints to determine whether they can be converted into generalized Goursat form. Compute the derived flag associated with this system as a start. If it cannot be converted into Goursat form, can you think of how you might determine the feasible trajectories of this system? For more details on the dual (vector field) point of view on this system, see Bicchi, Marigo, and Prattichizzo [25].
13
New Vistas: Multi-Agent Hybrid
Systems
Nonlinear control is very much a growing endeavor, with many new results and techniques being introduced. In the summary sections of each of the preceding chapters we have given the reader a sense of the excitement surrounding each of the new directions. In this chapter, we talk about another area of tremendous recent excitement: hybrid systems. The growth of this area is almost directly attributable to advances in computation, communication, and new methods of distributed sensing and actuation, which make it critical to have methods for designing and analyzing systems which involve interaction between software and electro-mechanical systems. In this chapter, we give a sense of the research agenda in two areas where this activity is especially current: embedded control and multi-agent distributed control systems.
The aim is a practical set of software design tools which support the construction, integration, safety and performance analysis, on-line adaptation, and off-line functional evolution of multi-agent hierarchical control systems. Hybrid systems refer to the distinguishing fundamental characteristics of software-based control systems, namely, the tight coupling and interaction of discrete with continuous phenomena. Hybridness is characteristic of all embedded control systems because it arises from several sources. First, the high-level, abstract protocol layers of hierarchical control designs are discrete so as to make it easier to manage system complexity and to accommodate linguistic and qualitative information; the low-level, concrete control laws are naturally continuous. Second, while individual feedback control scenarios are naturally modeled as interconnections of modules characterized by their continuous input/output behavior, multi-modal control naturally suggests a state-based view, with states representing discrete control modes; software-based control systems typically encompass an integrated mixture of both types. Third, every digital hardware/software implementation of a control design is ultimately a discrete approximation that interacts through sensors and actuators with a continuous physical environment. The mathematical treatment of hybrid systems is interesting in that it builds on the preceding framework of nonlinear control, but its mathematics is qualitatively distinct from the mathematics of purely discrete or purely continuous phenomena. Over the past several years, we have begun to build basic formal models (hybrid automata) for hybrid systems and to develop methods for hybrid control law design, simulation, and verification. Hybrid automata, in particular, integrate diverse models such as differential equations and state machines in a single formalism with a uniform mathematical semantics and novel algorithms for multi-modal control synthesis and for safety and real-time performance analysis. The control of hybrid systems has also been developed by a number of groups: for an introduction to some of the intellectual ferment in the field, we refer the reader to some recent collections of papers on hybrid systems [6; 4; 7; 135] and the April 1998 Special Issue on "Hybrid Systems" of the IEEE Transactions on Automatic Control. See also Branicky, Borkar, and Mitter [40], and Lygeros, Tomlin, and Sastry [195].
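As a toy illustration of the formalism, a hybrid automaton pairs a continuous flow with each discrete mode and switches modes when guard conditions are met. The sketch below (a thermostat with two modes; the dynamics and thresholds are invented for the example, not taken from the text) simulates one such automaton:

```python
# A thermostat as a two-mode hybrid automaton: continuous flow x' = f_q(x)
# in each discrete mode q, with guard conditions triggering mode switches.
def flow(q, x):
    return 30.0 - x if q == "on" else -x   # heater on / heater off dynamics

def guard(q, x):
    if q == "on" and x >= 22.0:
        return "off"
    if q == "off" and x <= 18.0:
        return "on"
    return q

q, x, dt, traj = "on", 20.0, 0.01, []
for _ in range(5000):
    x += dt * flow(q, x)   # continuous evolution (Euler step)
    q = guard(q, x)        # discrete transition
    traj.append(x)

# the closed loop chatters between the two thresholds
assert all(17.5 < v < 22.5 for v in traj)
```

Safety verification for hybrid automata amounts to proving that such invariants hold for all executions, rather than for one simulated trajectory as here.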
Important examples of distributed control are the Air Traffic Management System,
the control system of an interconnected power grid, the telephone network, a
chemical process control system, and automated highway transportation systems.
Although a centralized control paradigm no longer applies here, control engineers
have with great success used its theories and its design and analysis tools to build
and operate these distributed control systems. There are two reasons why the
paradigm succeeded in practice, even when it failed in principle. First, in each case
the complexity and scale of the material process grew incrementally and relatively
slowly. Each new increment to the process was controlled using the paradigm, and
adjustments were slowly made after extensive (but by no means exhaustive) testing
to ensure that the new controller worked in relative harmony with the existing
controllers. Second, the processes were operated with a considerable degree of
"slack." That is, the process was operated well within its performance limits to
permit errors in the extrapolation oftest results to untested situations and to tolerate
a small degree of disharmony among the controllers. However, in each system
mentioned above, there were occasions when the material process was stressed to
its limits and the disharmony became intolerable, leading to a spectacular loss of
efficiency. For example, most air travelers have experienced delays as congestion
in one part of the country is transmitted by the control system to other parts. The
distributed control system of the interconnected power grid has sometimes failed
to respond correctly and caused a small fault in one part of a grid to escalate into
a system-wide blackout.
We are now attempting to build control systems for processes that are vastly more
complex or that are to be operated much closer to their performance limits in order
to achieve much greater efficiency of resource use. The attempt to use the central
control paradigm cannot meet this challenge: the material process is already given
and it is not practicable to approach its complexity in an incremental fashion as
before. Moreover, the communication and computation costs in the central control paradigm would be prohibitive, especially if we insist that the control algorithms be
fault-tolerant. What is needed to meet the challenge of control design for a complex,
high performance material process is a new paradigm for distributed control. It
must distribute the control functions in a way that avoids the high communication
and computation costs of central control, at the same time that it limits complexity.
The distributed control must, nevertheless, permit centralized authority over those
aspects of the material process that are necessary to achieve the high performance
goals. Such a challenge can be met by organizing the distributed control functions
in a hierarchical architecture that makes those functions relatively autonomous
(which permits using all the tools of central control), while introducing enough
coordination and supervision to ensure the harmony of the distributed controllers
necessary for high performance. Consistent with this hierarchical organization
are sensing hierarchies with fan-in of information from lower to higher levels of
the hierarchy and a fan-out of control commands from the higher to the lower
levels. Commands and information at the higher levels are usually represented
symbolically, calling for discrete event control , while at the lower levels both
information and commands are continuous, calling for continuous control laws.
Interactions between these levels involve hybrid control. In addition, protocols for coordination between individual agents are frequently symbolic, again making for hybrid control laws. The hybrid control systems approach has been successful in
the control of some extremely important multi-agent systems such as automated
highway systems [313; 194; 145], air traffic control [306], groups of unmanned
aerial vehicles, underwater autonomous vehicles [78], mobile offshore platforms,
to give a few examples. We invite the reader to use the tools that she has developed in this book in these very exciting new directions.
References
[14] K. J. Åström and B. Wittenmark. Adaptive Control. Addison-Wesley, Reading, MA, 1989.
[15] E. W. Bai, L. C. Fu, and S. S. Sastry. Averaging for discrete time and sampled data systems. IEEE Transactions on Circuits and Systems, CAS-35:137-148, 1988.
[16] J. Baillieul. Geometric methods for nonlinear optimal control problems. Journal of Optimization Theory and Applications, 25(4):519-548, 1978.
[17] A. Banaszuk and J. Hauser. Approximate feedback linearization: a homotopy operator approach. SIAM Journal of Control and Optimization, 34:1533-1554, 1996.
[18] J. P. Barbot, N. Pantalos, S. Monaco, and D. Normand-Cyrot. On the control of regular epsilon perturbed nonlinear systems. International Journal of Control, 59:1255-1279, 1994.
[19] G. Becker and A. Packard. Robust performance of linear parametrically varying systems using parametrically dependent linear feedback. Systems and Control Letters, 23(4):205-215, 1994.
[20] N. Bedrossian and M. Spong. Feedback linearization of robot manipulators and Riemannian curvature. Journal of Robotic Systems, 12:541-552, 1995.
[21] S. Behtash and S. S. Sastry. Stabilization of nonlinear systems with uncontrollable linearization. IEEE Transactions on Automatic Control, AC-33:585-590, 1988.
[22] A. R. Bergen and R. L. Franks. Justification of the describing function method. SIAM Journal on Control, 9:568-589, 1973.
[23] A. R. Bergen and D. J. Hill. A structure preserving model for power systems stability analysis. IEEE Transactions on Power Apparatus and Systems, PAS-100:25-35, 1981.
[24] S. Bharadwaj, A. V. Rao, and K. D. Mease. Entry trajectory law via feedback linearization. Journal of Guidance, Control, and Dynamics, 21(5):726-732, 1998.
[25] A. Bicchi, A. Marigo, and D. Prattichizzo. Robotic dexterity via nonholonomy. In B. Siciliano and K. Valavanis, editors, Control Problems in Robotics and Automation, pages 35-49. Springer-Verlag, 1998.
[26] A. Bloch, R. Brockett, and P. Crouch. Double bracket equations and geodesic flows on symmetric spaces. Communications in Mathematical Physics, 187:357-373, 1997.
[27] A. Bloch and P. Crouch. Nonholonomic control systems on Riemannian manifolds. SIAM Journal of Control and Optimization, 33:126-148, 1995.
[28] A. Bloch, P. S. Krishnaprasad, J. Marsden, and G. Sanchez de Alvarez. Stabilization of rigid body dynamics by internal and external torques. Automatica, 28:745-756, 1993.
[29] A. Bloch, P. S. Krishnaprasad, J. Marsden, and R. Murray. Nonholonomic mechanical systems with symmetry. Archive for Rational Mechanics and Analysis, 136(1):21-99, 1996.
[30] A. Bloch, N. Leonard, and J. Marsden. Stabilization of mechanical systems using controlled Lagrangians. In Proceedings of the 36th Conference on Decision and Control, San Diego, December 1997.
[31] A. Bloch, M. Reyhanoglu, and N. H. McClamroch. Control and stabilization of nonholonomic dynamical systems. IEEE Transactions on Automatic Control, AC-38:1746-1753, 1993.
[32] M. Bodson and J. Chiasson. Differential geometric methods for control of electric motors. International Journal of Robust and Nonlinear Control, 8:923-954, 1998.
[33] N. N. Bogoliuboff and Y. A. Mitropolskii. Asymptotic Methods in the Theory of Nonlinear Oscillations. Gordon and Breach, New York, 1961.
[55] C. I. Byrnes, A. Isidori, and J. Willems. Passivity, feedback equivalence, and the global stabilization of minimum phase nonlinear systems. IEEE Transactions on Automatic Control, 36:1228-1240, 1991.
[56] C. I. Byrnes and A. Isidori. A frequency domain philosophy for nonlinear systems, with applications to stabilization and to adaptive control. In Proceedings of the 23rd Conference on Decision and Control, pages 1569-1573, Las Vegas, Nevada, 1984.
[57] C. I. Byrnes, F. Delli Priscoli, and A. Isidori. Output Regulation of Uncertain Nonlinear Systems. Birkhäuser, Boston, 1997.
[58] M. Camarinha, F. Silva-Leite, and P. Crouch. Splines of class C^k in non-Euclidean spaces. IMA Journal of Mathematical Control and Information, 12:399-410, 1995.
[59] B. Charlet, J. Levine, and R. Marino. Sufficient conditions for dynamic state feedback linearization. SIAM Journal of Control and Optimization, 29(1):38-57, 1991.
[60] R. Chatila. Mobile robot navigation: Space modeling and decisional processes. In O. Faugeras and G. Giralt, editors, Robotics Research: The Third International Symposium, pages 373-378. MIT Press, 1986.
[61] V. H. L. Cheng. A direct way to stabilize continuous time and discrete time linear time varying systems. IEEE Transactions on Automatic Control, 24:641-643, 1979.
[62] H.-D. Chiang, F. F. Wu, and P. P. Varaiya. Foundations of the potential energy boundary surface method for power system transient stability analysis. IEEE Transactions on Circuits and Systems, CAS-35:712-728, 1988.
[63] G. Chirikjian and J. Burdick. The kinematics of hyperredundant robot locomotion. IEEE Transactions on Robotics and Automation, 11(6):781-793, 1995.
[64] D. Cho and J. K. Hedrick. Automotive powertrain modeling for control. ASME Transactions on Dynamics, Measurement and Control, 111, 1989.
[65] L. O. Chua and P. M. Lin. Computer Aided Analysis of Electronic Circuits: Algorithms and Computational Techniques. Prentice Hall, Englewood Cliffs, NJ, 1975.
[66] L. O. Chua, T. Matsumoto, and S. Ichiraku. Geometric properties of resistive nonlinear n-ports: transversality, structural stability, reciprocity and antireciprocity. IEEE Transactions on Circuits and Systems, 27:577-603, 1980.
[67] D. Claude, M. Fliess, and A. Isidori. Immersion, directe et par bouclage, d'un système non linéaire. C. R. Académie des Sciences, Paris, 296:237-240, 1983.
[68] E. A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. McGraw-Hill, New York, 1955.
[69] P. A. Cook. Nonlinear Dynamical Systems. Prentice Hall, 1989.
[70] M. Corless and G. Leitmann. Continuous state feedback guarantees uniform ultimate boundedness for uncertain dynamical systems. IEEE Transactions on Automatic Control, AC-26:1139-1144, 1981.
[71] J.-M. Coron. Linearized control systems and applications to smooth stabilization. SIAM Journal of Control and Optimization, 32:358-386, 1994.
[72] L. Crawford and S. Sastry. Biological motor control approaches for a planar diver. In Proceedings of the IEEE Conference on Decision and Control, pages 3881-3886, 1995.
[73] P. Crouch and F. Silva-Leite. The dynamic interpolation problem on Riemannian manifolds, Lie groups, and symmetric spaces. Journal of Dynamical and Control Systems, 1:177-202, 1995.
[74] P. E. Crouch, F. Lamnabhi, and A. van der Schaft. Adjoint and Hamiltonian formulations of input-output differential equations. IEEE Transactions on Automatic Control, AC-40:603-615, 1995.
[117] G. C. Goodwin and K. S. Sin. Adaptive Filtering, Prediction and Control. Prentice Hall, Englewood Cliffs, NJ, 1984.
[118] M. Grayson and R. Grossman. Models for free nilpotent Lie algebras. Journal of Algebra, 135(1):177-191, 1990.
[119] P. Griffiths. Exterior Differential Systems and the Calculus of Variations. Birkhäuser, 1982.
[120] J. Grizzle, M. Di Benedetto, and F. Lamnabhi-Lagarrigue. Necessary conditions for asymptotic tracking in nonlinear systems. IEEE Transactions on Automatic Control, 39:1782-1794, 1994.
[121] J. W. Grizzle. A linear algebraic framework for the analysis of discrete time nonlinear systems. SIAM Journal of Control and Optimization, 31:1026-1044, 1993.
[122] J. Guckenheimer and P. J. Holmes. Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields. Springer-Verlag, Berlin, 1983.
[123] J. Guckenheimer and P. Worfolk. Instant chaos. Nonlinearity, 5:1211-1221, 1992.
[124] W. Hahn. Stability of Motion. Springer-Verlag, Berlin, 1967.
[125] J. K. Hale. Ordinary Differential Equations. Krieger, Huntington, New York, 1980.
[126] J. K. Hale and H. Koçak. Dynamics and Bifurcations. Springer-Verlag, 1991.
[127] M. Hall. The Theory of Groups. Macmillan, 1959.
[128] P. Hartman. Ordinary Differential Equations. Hartman, Baltimore, 1973.
[129] J. Hauser and R. Hindman. Aggressive flight maneuvers. In Proceedings of the IEEE Conference on Decision and Control, pages 4186-4191, 1997.
[130] J. Hauser, S. Sastry, and P. Kokotović. Nonlinear control via approximate input-output linearization: the ball and beam example. IEEE Transactions on Automatic Control, 37:392-398, 1992.
[131] J. Hauser, S. Sastry, and G. Meyer. Nonlinear control design for slightly non-minimum phase systems: Application to V/STOL aircraft. Automatica, 28:665-679, 1992.
[132] U. Helmke and J. Moore. Optimization and Dynamical Systems. Springer-Verlag, New York, 1994.
[133] M. Henson and D. Seborg. Adaptive nonlinear control of a pH process. IEEE Transactions on Control Systems Technology, 2:169-182, 1994.
[134] M. Henson and D. Seborg. Nonlinear Process Control. Prentice Hall, Upper Saddle River, NJ, 1997.
[135] T. Henzinger and S. Sastry (editors). Hybrid Systems: Computation and Control. Springer-Verlag Lecture Notes in Computer Science, Vol. 1386, 1998.
[136] R. Hermann and A. J. Krener. Nonlinear controllability and observability. IEEE Transactions on Automatic Control, AC-22:728-740, 1977.
[137] H. Hermes, A. Lundell, and D. Sullivan. Nilpotent bases for distributions and control systems. Journal of Differential Equations, 55:385-400, 1984.
[138] J. Hespanha and A. S. Morse. Certainty equivalence implies detectability. Systems and Control Letters, 36:1-13, 1999.
[139] D. J. Hill and P. J. Moylan. Connections between finite gain and asymptotic stability. IEEE Transactions on Automatic Control, AC-25:931-936, 1980.
[140] M. Hirsch, C. Pugh, and M. Shub. Invariant Manifolds. Springer Lecture Notes in Mathematics, Vol. 583, New York, 1977.
[141] M. W. Hirsch and S. Smale. Differential Equations, Dynamical Systems and Linear Algebra. Academic Press, New York, 1974.
[142] R. Hirschorn and E. Aranda-Bricaire. Global approximate output tracking for nonlinear systems. IEEE Transactions on Automatic Control, 43:1389-1398, 1998.
[143] J. J. Hopfield. Neurons, dynamics and computation. Physies Today, 47:40-46, 1994.
[144] F. C. Hoppensteadt. Singular perturbations on the infinite time interval. Transactions
ofthe American Mathematieal Society, 123:521-535, 1966.
[145] H. J. C. Huibert and J. H. van Schuppen. Routing control of a motorway network-a
summary. In Proeeedings ofthe IFAC Nonlinear Control Systems Design Symposium
(NOLCOS), pages 781-784, Rome, ltaly, 1995.
[146] T. W. Hungerford.Algebra. Springer-Verlag , 1974.
[147] L. R. Hunt, V.Ramakrishna, and G. Meyer. Stable inversion and parameter variations.
Systems and Control Letters , 34:203-209,1998.
[148] L. R. Hunt, R. Su, and G. Meyer. Global transfonnations of nonlinear systems. IEEE
Transactions on Automatie Control, AC-28:24-3I , 1983.
[149] A. Isidori. Nonlinear Control Systems. Springer-Verlag, Berlin, 1989.
[150] A. Isidori, A. J. Krener, C. Gori-Giorgi, and S. Monaco. Nonlinear decoupling
via feedback : A differential geometric approach. IEEE Transactions on Automatie
Control, AC-26:33 1-345, 1981.
[151] A. Isidori and A. Ruberti. Realization theory of bilinear systems. In D. Q. Mayne
and R. W. Brocken, editors, Geometrie Methods in System Theory, pages 81-130.
D. Reidel, Dordrecht, Holland, 1973.
[152] A. Isidori , S. Sastry, P. Kokotovic, and C. Bymes. Singularly perturbed zero dynamics
of nonlinear systems. IEEE Transactions on Automatie Control, pages 1625-1631,
1992.
[153] N. Jacobsen. Basic Algebra I. Freeman, San Francisco, 1974.
[154] B. Jakubczyk and W. Respondek. On Iinearization of control systems. Bulletin de
L'Aeademie Polonaisedes Sciences, Serie des scienees mathematiques, XXVIII:517-
522,1980.
[155] J.Carr. Applieations of Center Manifold Theory . Springer Verlag, New York, 1981.
[156] M. Johansson and A. Rantzer. Computation of piecewise quadratic Iyapunov func-
tions for hybrid systems. IEEE Transactions on Automatie Control, AC-43 :555-559,
1998.
[157] V. Jurdjev ic, Geometrie Control Theory. Cambridge University Press, Cambridge,
UK,1997.
[158] R. Kadiyala, A. Teel, P. Kokotovic, and S. Sastry. Indirect techniques for adaptive input output linearization of nonlinear systems. International Journal of Control, 53:193-222, 1991.
[159] W. S. Kang and J. E. Marsden. The Lagrangian and Hamiltonian approach to the dynamics of nonholonomic systems. Reports on Mathematical Physics, 40:21-62, 1997.
[160] A. Kapitanovsky, A. Goldenberg, and J. K. Mills. Dynamic control and motion planning for a class of nonlinear systems with drift. Systems and Control Letters, 21:363-369, 1993.
[161] M. Kawski. Nilpotent Lie algebras of vector fields. Journal für die reine und
angewandte Mathematik, 388:1-17, 1988.
[162] H. K. Khalil. Nonlinear Systems. Macmillan, New York, 1992.
[163] P. Kokotovic, H. Khalil, and J. O'Reilly. Singular Perturbation Methods in Control: Analysis and Design. Academic Press, Orlando, Florida, 1986.
[164] I. Kolmanovsky and N. H. McClamroch. Developments in nonholonomic control
problems. IEEE Control Systems Magazine, 15:20-36, 1996.
References 653
[165] T. J. Koo and S. S. Sastry. Output tracking control design of a helicopter model based on approximate linearization. In Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, Florida, December 1998.
[166] S. R. Kou, D. L. Elliott, and T. J. Tarn. Observability of nonlinear systems. Information and Control, 22:89-99, 1973.
[167] J.-P. Kramer and K. D. Mease. Near optimal control of altitude and path angle during aerospace plane ascent. Journal of Guidance, Control, and Dynamics, 21(5):726-732, 1998.
[168] G. Kreisselmeier. Adaptive observers with an exponential rate of convergence. IEEE Transactions on Automatic Control, 22:2-8, 1977.
[169] A. J. Krener. Approximate linearization by state feedback and coordinate change. Systems and Control Letters, 5:181-185, 1984.
[170] A. J. Krener. Feedback linearization. In J. Baillieul and J. Willems, editors, Mathematical Control Theory. Springer-Verlag, New York, 1998.
[171] M. Krstic, I. Kanellakopoulos, and P. Kokotovic. Nonlinear and Adaptive Control Design. John Wiley, New York, 1995.
[172] P. Kudva and K. S. Narendra. Synthesis of an adaptive observer using Lyapunov's direct method. International Journal of Control, 18:1201-1210, 1973.
[173] G. Lafferriere and H. J. Sussmann. A differential geometric approach to motion planning. In Z. Li and J. F. Canny, editors, Nonholonomic Motion Planning, pages 235-270. Kluwer, 1993.
[174] J.-P. Laumond. Controllability of a multibody mobile robot. IEEE Transactions on Robotics and Automation, 9(6):755-763, 1993.
[175] D. F. Lawden. Elliptic Functions and Applications. Springer-Verlag, 1980.
[176] H. G. Lee, A. Arapostathis, and S. Marcus. Linearization of discrete time nonlinear systems. International Journal of Control, 45:1803-1822, 1986.
[177] E. Lelarasmee, A. E. Ruehli, and A. L. Sangiovanni-Vincentelli. The waveform relaxation approach to time domain analysis of large scale integrated circuits. IEEE Transactions on Computer Aided Design, CAD-1(3):131-145, 1982.
[178] N. Leonard and P. S. Krishnaprasad. Motion control of drift-free, left invariant systems on Lie groups. IEEE Transactions on Automatic Control, AC-40:1539-1554, 1995.
[179] J. J. Levin and N. Levinson. Singular perturbations of nonlinear systems of ordinary differential equations and an associated boundary layer equation. Journal of Rational Mechanics and Analysis, 3:267-270, 1959.
[180] P. Y. Li and R. Horowitz. Control of smart exercise machines. I. Problem formulation and nonadaptive control. IEEE/ASME Transactions on Mechatronics, 2:237-247, 1997.
[181] P. Y. Li and R. Horowitz. Control of smart exercise machines. II. Self optimizing control. IEEE/ASME Transactions on Mechatronics, 2:248-258, 1997.
[182] T. Li and J. Yorke. Period three implies chaos. American Mathematical Monthly, pages 985-992, 1975.
[183] Z. Li and J. Canny. Motion of two rigid bodies with rolling constraint. IEEE Transactions on Robotics and Automation, 6(1):62-71, 1990.
[184] Z. Li and J. Canny, editors. Nonholonomic Motion Planning. Kluwer Academic Publishers, 1993.
[185] D.-C. Liaw and E.-H. Abed. Active control of compressor stall inception: A bifurcation theoretic approach. Automatica, 32:109-115, 1996.
[186] L. Ljung and S. T. Glad. On global identifiability for arbitrary model parameterizations. Automatica, 30:265-276, 1994.
[187] E. N. Lorenz. The local structure of a chaotic attractor in four dimensions. Physica D, 13:90-104, 1984.
[188] A. Loria, E. Panteley, and H. Nijmeijer. Control of the chaotic Duffing equation with uncertainty in all parameters. IEEE Transactions on Circuits and Systems, CAS-45:1252-1255, 1998.
[189] R. Lozano, B. Brogliato, and I. Landau. Passivity and global stabilization of cascaded nonlinear systems. IEEE Transactions on Automatic Control, 37:1386-1388, 1992.
[190] A. De Luca. Trajectory control of flexible manipulators. In B. Siciliano and
K. Valavanis, editors, Control Problems in Robotics and Automation, pages 83-104.
Springer-Verlag, 1998.
[191] A. De Luca, R. Mattone, and G. Oriolo. Control of redundant robots under end-effector commands: a case study in underactuated systems. Applied Mathematics and Computer Science, 7:225-251, 1997.
[192] A. De Luca and G. Ulivi. Full linearization of induction motors via nonlinear state feedback. In Proceedings of the 26th Conference on Decision and Control, pages 1765-1770, December 1987.
[193] A. M. Lyapunov. The general problem of the stability of motion. International Journal of Control, 55:531-773, 1992. (This is the English translation of the Russian original published in 1892.)
[194] J. Lygeros, D. Godbole, and S. Sastry. A verified hybrid controller for automated vehicles. IEEE Transactions on Automatic Control, AC-43:522-539, 1998.
[195] J. Lygeros, C. Tomlin, and S. Sastry. Controllers for reachability specifications for hybrid systems. Automatica, 35:349-370, 1999.
[196] Y. Ma, J. Košecká, and S. Sastry. A mathematical theory of camera calibration. Electronics Research Laboratory, Memo M 98/81, University of California, Berkeley, 1998.
[197] Y. Ma, J. Košecká, and S. Sastry. Motion recovery from image sequences: Discrete/differential viewpoint. Electronics Research Laboratory, Memo M 98/11, University of California, Berkeley, 1998.
[198] J. Malmborg and B. Bernhardsson. Control and simulation of hybrid systems. In Proceedings of the Second World Congress of Nonlinear Analysis, July 1996, pages 57-64, 1997.
[199] R. Marino and P. Tomei. Nonlinear Control Design. Prentice Hall International, UK,
1995.
[200] J. E. Marsden. Elementary Classical Analysis. W. H. Freeman and Co., San Francisco, 1974.
[201] J. E. Marsden and M. McCracken. The Hopf Bifurcation and Its Applications. Springer-Verlag, New York, 1976.
[202] C. F. Martin and W. P. Dayawansa. On the existence of a Lyapunov function for a family of switching systems. In Proceedings of the 35th Conference on Decision and Control, Kobe, Japan, pages 1820-1823, December 1996.
[203] R. May. Stability and Complexity in Model Ecosystems. Princeton University Press, Princeton, NJ, 1973.
[204] McDonnell Douglas Corporation. YAV-8B Simulation and Modeling, December
1982.
[250] F. M. Salam and S. Sastry. Dynamics of the forced Josephson junction circuit: Regions of chaos. IEEE Transactions on Circuits and Systems, CAS-32:784-796, 1985.
[251] F. M. A. Salam, J. E. Marsden, and P. P. Varaiya. Chaos and Arnold diffusions in dynamical systems. IEEE Transactions on Circuits and Systems, CAS-30:697-708, 1983.
[252] F. M. A. Salam, J. E. Marsden, and P. P. Varaiya. Arnold diffusion in the swing equation. IEEE Transactions on Circuits and Systems, CAS-31:673-688, 1984.
[253] C. Samson. Velocity and torque feedback control of a nonholonomic cart. In International Workshop in Adaptive and Nonlinear Control: Issues in Robotics, pages 125-151, 1990.
[254] C. Samson. Control of chained form systems: applications to path following and time-varying point-stabilization of mobile robots. IEEE Transactions on Automatic Control, 40(1):64-77, 1995.
[255] I. W. Sandberg. A frequency domain condition for stability of feedback systems containing a single time varying nonlinear element. Bell Systems Technical Journal, 43:1601-1608, 1964.
[256] I. W. Sandberg. On the L2-boundedness of solutions of nonlinear functional equations. Bell Systems Technical Journal, 43:1581-1599, 1964.
[257] A. Sarti, G. Walsh, and S. S. Sastry. Steering left-invariant control systems on matrix Lie groups. In Proceedings of the 1993 IEEE Conference on Decision and Control, December 1993.
[258] A. V. Sarychev and H. Nijmeijer. Extremal controls for chained systems. Journal of Dynamical and Control Systems, 2:503-527, 1996.
[259] S. Sastry and M. Bodson. Adaptive Systems: Stability, Convergence and Robustness. Prentice Hall, Englewood Cliffs, New Jersey, 1989.
[260] S. Sastry and C. A. Desoer. The robustness of controllability and observability of linear time varying systems. IEEE Transactions on Automatic Control, AC-27:933-939, 1982.
[261] S. Sastry, J. Hauser, and P. Kokotovic. Zero dynamics of regularly perturbed systems may be singularly perturbed. Systems and Control Letters, 14:299-314, 1989.
[262] S. S. Sastry. Effects of small noise on implicitly defined nonlinear systems. IEEE Transactions on Circuits and Systems, 30:651-663, 1983.
[263] S. S. Sastry and C. A. Desoer. Jump behavior of circuits and systems. IEEE Transactions on Circuits and Systems, 28:1109-1124, 1981.
[264] S. S. Sastry and R. Montgomery. The structure of optimal controls for a steering problem. In IFAC Workshop on Nonlinear Control, pages 385-390, 1992.
[265] S. S. Sastry and P. P. Varaiya. Hierarchical stability and alert state steering control of interconnected power systems. IEEE Transactions on Circuits and Systems, CAS-27:1102-1112, 1980.
[266] S. S. Sastry and A. Isidori. Adaptive control of linearizable systems. IEEE Transactions on Automatic Control, 34:1123-1131, 1989.
[267] M. Schetzen. The Volterra and Wiener Theories of Nonlinear Systems. John Wiley, New York, 1980.
[268] L. Sciavicco and B. Siciliano. Modeling and Control of Robot Manipulators. McGraw-Hill, New York, 1998.
[269] R. Sepulchre, M. Jankovic, and P. Kokotovic. Constructive Nonlinear Control. Springer-Verlag, London, 1997.
[270] J.-P. Serre. Lie Algebras and Lie Groups. W. A. Benjamin, New York, 1965.
[271] P. R. Sethna. Method of averaging for systems bounded for positive time. Journal of Mathematical Analysis and Applications, 41:621-631, 1973.
[272] D. Seto and J. Baillieul. Control problems in superarticulated mechanical systems. IEEE Transactions on Automatic Control, 39:2442-2453, 1994.
[273] W. F. Shadwick and W. M. Sluis. Dynamic feedback for classical geometries.
Technical Report FI93-CT23, The Fields Institute, Ontario, Canada, 1993.
[274] M. A. Shubov, C. F. Martin, J. P. Dauer, and B. P. Belinsky. Exact controllability of the damped wave equation. SIAM Journal on Control and Optimization, 35:1773-1789, 1997.
[275] L. E. Sigler. Algebra. Springer-Verlag, New York, 1976.
[276] W. M. Sluis. Absolute Equivalence and its Applications to Control Theory. PhD thesis, University of Waterloo, 1992.
[277] S. Smale. Differentiable dynamical systems. Bulletin of the American Mathematical Society, 73:747-817, 1967.
[278] D. R. Smart. Fixed Point Theorems. Cambridge University Press, 1973.
[279] E. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems. Springer-Verlag, New York, 1990.
[280] O. J. Sørdalen. Conversion of the kinematics of a car with N trailers into a chained form. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 382-387, Atlanta, Georgia, 1993.
[281] M. Spivak. Calculus on Manifolds. Addison-Wesley, 1965.
[282] M. Spivak. A Comprehensive Introduction to Differential Geometry, volume One. Publish or Perish, Inc., Houston, second edition, 1979.
[283] M. Spong. The swingup control problem for the Acrobot. IEEE Control Systems
Society Magazine, 15:49-55, 1995.
[284] M. Spong. Underactuated mechanical systems. In B. Siciliano and K. Valavanis, editors, Control Problems in Robotics and Automation, pages 135-149. Springer-Verlag, 1998.
[285] M. Spong and M. Vidyasagar. Robot Dynamics and Control. John Wiley and Sons,
New York, 1989.
[286] J. J. Stoker. Nonlinear Vibrations. Wiley Interscience, 1950.
[287] G. Strang. Linear Algebra and Its Applications. Harcourt Brace Jovanovich, San Diego, 1988.
[288] R. S. Strichartz. The Campbell-Baker-Hausdorff-Dynkin formula and solutions of differential equations. Journal of Functional Analysis, 72(2):320-345, 1987.
[289] H. J. Sussmann. Lie brackets, real analyticity and geometric control. In Roger W. Brockett, Richard S. Millman, and Hector J. Sussmann, editors, Differential Geometric Control Theory, pages 1-116. Birkhäuser, Boston, 1983.
[290] H. J. Sussmann. On a general theorem on local controllability. SIAM Journal on Control and Optimization, 25:158-194, 1987.
[291] H. J. Sussmann and V. Jurdjevic. Controllability of nonlinear systems. Journal of Differential Equations, 12:95-116, 1972.
[292] D. Swaroop, J. Gerdes, P. Yip, and K. Hedrick. Dynamic surface control of nonlinear systems. In Proceedings of the American Control Conference, Albuquerque, NM, pages 3028-3034, June 1997.
[293] F. Takens. Constrained equations, a study of implicit differential equations and their discontinuous solutions. In A. Manning, editor, Dynamical Systems, Warwick 1974, pages 143-233. Springer Lecture Notes in Mathematics, Volume 468, 1974.
[313] P. P. Varaiya. Smart cars on smart roads: problems of control. IEEE Transactions on Automatic Control, AC-38:195-207, 1993.
[314] P. P. Varaiya, F. F. Wu, and R. L. Chen. Direct methods for transient stability analysis of power systems: Recent results. Proceedings of the IEEE, 73:1703-1715, 1985.
[315] H. L. Varian. Microeconomic Analysis. Norton, 1978.
[316] M. Vidyasagar. Input-Output Analysis of Large Scale Interconnected Systems. Springer-Verlag, New York, 1981.
[317] M. Vidyasagar. Nonlinear Systems Analysis. Second edition, Prentice-Hall Electrical Engineering Series. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1993.
[318] M. Vidyasagar and A. Vannelli. New relationships between input-output and Lyapunov stability. IEEE Transactions on Automatic Control, AC-27:481-483, 1982.
[319] V. Volterra. Theory of Functionals and of Integral and Integro-Differential Equations. Dover, New York, 1958. (This is the English translation of a Spanish original published in 1927.)
[320] G. Walsh, A. Sarti, and S. S. Sastry. Algorithms for steering on the group of rotations. In Proceedings of the 1993 IEEE Automatic Control Conference, March 1993.
[321] G. Walsh and S. S. Sastry. On reorienting rigid linked bodies using internal motions. IEEE Transactions on Robotics and Automation, 11:139-146, 1995.
[322] G. Walsh, D. Tilbury, S. Sastry, R. Murray, and J.-P. Laumond. Stabilization of trajectories for systems with nonholonomic constraints. IEEE Transactions on Automatic Control, 39(1):216-222, 1994.
[323] G. C. Walsh and L. G. Bushnell. Stabilization of multiple chained form control systems. Systems and Control Letters, 25:227-234, 1995.
[324] H. O. Wang and E.-H. Abed. Bifurcation control of a chaotic system. Automatica, 31:1213-1226, 1995.
[325] F. Warner. Foundations of Differentiable Manifolds and Lie Groups. Scott, Foresman, 1979.
[326] J. Wen and K. Kreutz-Delgado. The attitude control problem. IEEE Transactions on Automatic Control, 36:1148-1162, 1991.
[327] N. Wiener. Generalized harmonic analysis. Acta Mathematica, 55:117-258, 1930.
[328] S. Wiggins. Global Bifurcations and Chaos. Springer-Verlag, Berlin, 1988.
[329] S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer-Verlag, Berlin, 1990.
[330] J. C. Willems. The Analysis of Feedback Systems. MIT Press, Cambridge, MA, 1972.
[331] W. M. Wonham. Linear Multivariable Control: A Geometric Approach. Springer-Verlag, 1979.
[332] C. W. Wu and L. O. Chua. On the generality of the unfolded Chua circuit. International Journal of Bifurcation Theory and Chaos, 6:801-832, 1996.
[333] F. F. Wu and C. A. Desoer. Global inverse function theorem. IEEE Transactions on Circuit Theory, CT-72:199-201, 1972.
[334] L. C. Young. Lectures on the Calculus of Variations and Optimal Control Theory. Chelsea, New York, second edition, 1980.
[335] G. Zames. On the input output stability of time varying nonlinear feedback systems, part I. IEEE Transactions on Automatic Control, 11:228-238, 1966.
[336] G. Zames. On the input output stability of time varying nonlinear feedback systems, part II. IEEE Transactions on Automatic Control, 11:465-476, 1966.
[337] E. C. Zeeman. Catastrophe Theory. Addison-Wesley, Reading, MA, 1977.
Index
exponential, 187
  converse theorem, 195
global asymptotic, 187
global uniform asymptotic, 187
in the sense of Lyapunov, 185
indirect method, 214
linear time invariant systems, 211
Lyapunov's theorem, 189
uniform, 185
uniform asymptotic, 186
stabilizability
  smooth, 233
stabilization, 235
  backstepping, 275
    adaptive, 285
  linear quadratic regulator, 278
  necessary conditions for, 281
  observer based, 279
  SISO, 404
stable-unstable manifold theorem, 39, 290
  for maps, 293
steering, 518
Stiefel manifolds, 382
structural stability, 302
subharmonic oscillation, 299
sufficient richness, 283
swing equations, 193, 201, 255
  domains of attraction, 218
  structure preserving model, 256
symplectic group, 363
Takens excitation, 339
tangent bundle, 352
tangent space, 110, 111, 350
Taussky lemma, 212
Tayl, 13
tensor fields, 593
tensor products, 578
tensors, 577
  alternating, 579
  decomposable, 584
Thom, 330
time varying systems
  stability, 208
torsion, 376
torus, 44, 109
tracking
  asymptotic
    MIMO, 409
    SISO, 404
transcritical bifurcation, 320
  of maps, 326
tunnel diode, 14
twist matrix, 364
Unfolding, 330
unit ball, 79
unit disk, 117
unstable equilibrium, 206
Van der Pol, 16, 141
  degenerate, 251
van der Pol equation
  degenerate, 60
vector field, 352
vector space, 77
  basis, 78
Volterra series, 167, 560
  frequency domain representation, 173
  from differential equations, 170
  homogeneous systems, 168
  Peano-Baker formulas, 170
  polynomial systems, 168
Volterra-Lotka, 22, 29, 71
Waveform relaxation, 90
wedge product, 582
Wei-Norman formula, 378
Zames, 127, 163
Zeno behavior, 337
zero dynamics
  driven, 426
  MIMO, 410
    intrinsic, 554
  SISO, 398
    exponentially minimum phase, 400
    minimum phase, 400
zero trace matrix, 364
Zubov's theorem, 227