NONLINEAR CONTROL SYSTEMS

HORACIO J. MARQUEZ
Department of Electrical and Computer Engineering, University of Alberta, Canada

WILEY-INTERSCIENCE
A JOHN WILEY & SONS, INC., PUBLICATION
Copyright © 2003 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, email: permreq@wiley.com.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representation or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the U.S. at (877) 762-2974, outside the U.S. at (317) 572-3993, or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data is available.

ISBN 0-471-42799-3

Printed in the United States of America.

10 9 8 7 6 5 4 3
To my wife, Goody (Christina); son, Francisco; and daughter, Madison
Contents

1 Introduction 1
  1.1 Linear Time-Invariant Systems
  1.2 Nonlinear Systems
  1.3 Equilibrium Points
  1.4 First-Order Autonomous Nonlinear Systems
  1.5 Second-Order Systems: Phase-Plane Analysis
  1.6 Phase-Plane Analysis of Linear Time-Invariant Systems
  1.7 Phase-Plane Analysis of Nonlinear Systems
      1.7.1 Limit Cycles
  1.8 Higher-Order Systems
      1.8.1 Chaos
  1.9 Examples of Nonlinear Systems
      1.9.1 Magnetic Suspension System
      1.9.2 Inverted Pendulum on a Cart
      1.9.3 The Ball-and-Beam System
  1.10 Exercises

2 Mathematical Preliminaries 31
  2.1 Sets
  2.2 Metric Spaces
  2.3 Vector Spaces
      2.3.1 Linear Independence and Basis
      2.3.2 Subspaces
      2.3.3 Normed Vector Spaces
  2.4 Matrices
      2.4.1 Eigenvalues, Eigenvectors, and Diagonal Forms
      2.4.2 Quadratic Forms
  2.5 Basic Topology
      2.5.1 Basic Topology in R^n
  2.6 Sequences
  2.7 Functions
      2.7.1 Bounded Linear Operators and Matrix Norms
  2.8 Differentiability
      2.8.1 Some Useful Theorems
  2.9 Lipschitz Continuity
  2.10 Contraction Mapping
  2.11 Solution of Differential Equations
  2.12 Exercises

3 Lyapunov Stability I: Autonomous Systems 65
  3.1 Definitions
  3.2 Positive Definite Functions
  3.3 Stability Theorems
  3.4 Examples
  3.5 Asymptotic Stability in the Large
  3.6 Positive Definite Functions Revisited
      3.6.1 Exponential Stability
  3.7 Construction of Lyapunov Functions
  3.8 The Invariance Principle
  3.9 Region of Attraction
  3.10 Analysis of Linear Time-Invariant Systems
      3.10.1 Linearization of Nonlinear Systems
  3.11 Instability
  3.12 Exercises

4 Lyapunov Stability II: Nonautonomous Systems 107
  4.1 Definitions
  4.2 Positive Definite Functions
      4.2.1 Examples
  4.3 Stability Theorems
  4.4 Proof of the Stability Theorems
  4.5 Analysis of Linear Time-Varying Systems
      4.5.1 The Linearization Principle
  4.6 Perturbation Analysis
  4.7 Converse Theorems
  4.8 Discrete-Time Systems
  4.9 Discretization
  4.10 Stability of Discrete-Time Systems
      4.10.1 Definitions
      4.10.2 Discrete-Time Positive Definite Functions
      4.10.3 Stability Theorems
  4.11 Exercises

5 Feedback Systems 145
  5.1 Basic Feedback Stabilization
  5.2 Integrator Backstepping
  5.3 Backstepping: More General Cases
      5.3.1 Chain of Integrators
      5.3.2 Strict Feedback Systems
  5.4 Example
  5.5 Exercises

6 Input-Output Stability 153
  6.1 Function Spaces
      6.1.1 Extended Spaces
  6.2 Input-Output Stability
  6.3 Linear Time-Invariant Systems
  6.4 Lp Gains for LTI Systems
      6.4.1 L-infinity Gain
      6.4.2 L2 Gain
  6.5 Closed-Loop Input-Output Stability
  6.6 The Small Gain Theorem
  6.7 Loop Transformations
  6.8 The Circle Criterion
  6.9 Exercises

7 Input-to-State Stability 183
  7.1 Motivation
  7.2 Definitions
  7.3 Input-to-State Stability (ISS) Theorems
      7.3.1 Examples
  7.4 Input-to-State Stability Revisited
  7.5 Cascade-Connected Systems
  7.6 Exercises

8 Passivity 201
  8.1 Power and Energy: Passive Systems
  8.2 Definitions
  8.3 Interconnections of Passive Systems
      8.3.1 Passivity and Small Gain
  8.4 Stability of Feedback Interconnections
  8.5 Passivity of Linear Time-Invariant Systems
  8.6 Strictly Positive Real Rational Functions
  8.7 Exercises

9 Dissipativity 231
  9.1 Dissipative Systems
  9.2 Differentiable Storage Functions
      9.2.1 Back to Input-to-State Stability
  9.3 QSR Dissipativity
  9.4 Examples
      9.4.1 Mass-Spring System with Friction
      9.4.2 Mass-Spring System without Friction
  9.5 Available Storage
  9.6 Algebraic Condition for Dissipativity
      9.6.1 Special Cases
  9.7 Stability of Dissipative Systems
  9.8 Feedback Interconnections
  9.9 Nonlinear L2 Gain
      9.9.1 Linear Time-Invariant Systems
      9.9.2 Strictly Output Passive Systems
  9.10 Some Remarks about Control Design
  9.11 Nonlinear L2-Gain Control
  9.12 Exercises

10 Feedback Linearization 255
  10.1 Mathematical Tools
      10.1.1 Lie Derivative
      10.1.2 Lie Bracket
      10.1.3 Diffeomorphism
      10.1.4 Coordinate Transformations
      10.1.5 Distributions
  10.2 Input-State Linearization
      10.2.1 Systems of the Form ẋ = Ax + Bw(x)[u − φ(x)]
      10.2.2 Systems of the Form ẋ = f(x) + g(x)u
  10.3 Examples
  10.4 Conditions for Input-State Linearization
  10.5 Input-Output Linearization
  10.6 The Zero Dynamics
  10.7 Conditions for Input-Output Linearization
  10.8 Exercises

11 Nonlinear Observers 291
  11.1 Observers for Linear Time-Invariant Systems
      11.1.1 Observability
      11.1.2 Observer Form
      11.1.3 Observers for Linear Time-Invariant Systems
      11.1.4 Separation Principle
  11.2 Nonlinear Observability
  11.3 Observers with Linear Error Dynamics
  11.4 Lipschitz Systems
  11.5 Nonlinear Separation Principle

A Proofs 307
  A.1 Chapter 3
  A.2 Chapter 4
  A.3 Chapter 6
  A.4 Chapter 7
  A.5 Chapter 8
  A.6 Chapter 9
  A.7 Chapter 10

Bibliography 337

List of Figures 345

Index 349
Preface

I began writing this textbook several years ago. At that time my intention was to write a research monograph with focus on the input-output theory of systems and its connection with robust control. In the middle of that venture I began teaching a first-year graduate-level course in nonlinear control, and my interests quickly shifted into writing something more useful to my students. The result of this effort is the present book, which doesn't even resemble the original plan.

I have tried to write the kind of textbook that I would have enjoyed myself as a student. My goal was to write something that is thorough, yet readable, and focused on those parts of the theory that seem more fundamental. I have restrained myself from falling into the temptation of writing an encyclopedia of everything ever written on nonlinear control. As such, the emphasis of the book is on analysis, and it covers the fundamentals of the theory of nonlinear control. Although some aspects of control design are covered in Chapters 5, 9, and 10, I would argue that most of the material in this book is essential enough that it should be taught to every graduate student majoring in control systems.

The first chapter discusses linear and nonlinear systems and introduces phase-plane analysis. Chapter 2 introduces the notation used throughout the book and briefly summarizes the basic mathematical notions needed to understand the rest of the book; this material is intended as a reference source and not a full coverage of these topics. Chapters 3-6 present two complementary views of the notion of stability: the Lyapunov theory and the input-output theory. Chapters 3 and 4 contain the essentials of the Lyapunov stability theory, where the focus is on the stability of equilibrium points of unforced systems (i.e., systems without external excitations). Autonomous systems are discussed in Chapter 3 and nonautonomous systems in Chapter 4. I have chosen this separation because I am convinced that the subject is better understood by developing the main ideas and theorems for the simpler case of autonomous systems, leaving the more subtle technicalities for later. Chapter 5 briefly discusses feedback stabilization based on backstepping; I find that introducing this technique right after the main stability concepts greatly increases students' interest in the subject. Chapter 6 considers input-output systems, where systems are assumed to be relaxed (i.e., to have zero initial conditions) and subject to an external input; here input-output systems are considered without assuming the existence of an internal (i.e., state space) description. The approach in this chapter is classical. The chapter begins with the basic notions of extended spaces, causality, and system gains and introduces the concept of input-output stability. The same chapter also discusses the stability of feedback interconnections via the celebrated small gain theorem. Chapter 7 focuses on the important concept of input-to-state stability and thus starts to bridge across the two alternative views of stability. In Chapters 8 and 9 we pursue a rather complete discussion of dissipative systems, including a thorough discussion of passivity and dissipativity. Passive systems are studied first in Chapter 8. Chapter 9 generalizes these ideas and introduces the notion of dissipative system, along with some of the most important results that derive from this concept, including its importance in the so-called nonlinear L2 gain control problem, an active area of research. I have chosen this presentation for historical reasons and also because it makes the presentation easier and enhances the student's understanding of the subject. Finally, Chapters 10 and 11 provide a brief introduction to feedback linearization and nonlinear observers, respectively.

There are many examples scattered throughout the book. Most of them are not meant to be real-life applications but have been designed to be pedagogical. My philosophy is that real physical examples tend to be complex, require elaboration, and often distract the reader's attention from the main point of the book, which is the explanation of a particular technique or a discussion of its limitations.

I have tried my best to clean up all the typographical errors as well as the more embarrassing mistakes that I found in my early writing. Like many before me, I am sure that I have failed! I would very much appreciate to hear of any error found by the readers. Please email your comments to

marquez@ee.ualberta.ca

I will keep an up-to-date errata list on my website:

http://www.ee.ualberta.ca/marquez

Like most authors, I owe much to many people who directly or indirectly had an influence in the writing of this textbook. I will not provide a list because I do not want to forget anyone, but I would like to acknowledge four people to whom I feel specially indebted: Panajotis Agathoklis (University of Victoria), Chris Damaren (University of Toronto), Chris Diduch, and Rajamani Doraiswami (both of the University of New Brunswick). Each one of them had a profound impact in my career. I would also like to thank the many researchers in the field, most of whom I never had the pleasure to meet in person, for the beautiful things that they have published. It was through their writings that I became interested in the subject, and without their example this book would have never been written. I have not attempted to list every article by every author who has made a contribution to nonlinear control, simply because this would be impossible. I have tried to acknowledge those references that have drawn my attention during the preparation of my lectures and later during the several stages of the writing of this book. I sincerely apologize to every author who may feel that his or her work has not been properly acknowledged here and encourage them to write to me.

I am deeply grateful to the University of Alberta for providing me with an excellent working environment, and to the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting my research. I am also thankful to John Wiley and Sons' representatives John Telecki, Kristin Cooke Fasano, Kirsten Rohstedt, and Brendan Cody for their professionalism and assistance.

Finally, I would like to thank my wife Goody for her encouragement during the writing of this book, as well as my son Francisco and my daughter Madison. To all three of them I owe many hours of quality time. Guess what guys? It's over (until the next project). Tonight I'll be home early.

Horacio J. Marquez
Edmonton, Alberta
Chapter 1

Introduction

This first chapter serves as an introduction to the rest of the book. We present several simple examples of dynamical systems, showing the evolution from linear time-invariant to nonlinear. Phase-plane analysis is used to show some of the elements that characterize nonlinear behavior. We also define the several classes of systems to be considered throughout the rest of the book.

1.1 Linear Time-Invariant Systems

In this book we are interested in nonlinear dynamical systems. The reader is assumed to be familiar with the basic concepts of state space analysis for linear time-invariant (LTI) systems. We recall that a state space realization of a finite-dimensional LTI system has the following form:

    ẋ = Ax + Bu    (1.1)
    y = Cx + Du    (1.2)

where A, B, C, and D are (real) constant matrices of appropriate dimensions. Equation (1.1) determines the dynamics of the response. Equation (1.2) is often called the read-out equation and gives the desired output as a linear combination of the states.

Example 1.1 Consider the mass-spring system shown in Figure 1.1. Using Newton's second law, we obtain the equilibrium equation

    mÿ = Σ forces = f(t) − fβ − fk

where y is the displacement from the reference position, fβ is the viscous friction force, and fk represents the restoring force of the spring. Assuming linear properties, we have that fβ = βẏ and fk = ky. Thus,

    mÿ + βẏ + ky = mg.

Figure 1.1: mass-spring system.

Defining states x1 = y and x2 = ẏ, we obtain the following state space realization:

    ẋ1 = x2
    ẋ2 = −(k/m)x1 − (β/m)x2 + g

or

    ẋ = [ 0      1    ] x + [ 0 ] g.
        [ −k/m  −β/m  ]     [ 1 ]

If our interest is in the displacement y, then

    y = x1 = [1  0] x.

Thus, a state space realization for the mass-spring system is given by ẋ = Ax + Bu, y = Cx + Du with

    A = [ 0      1    ],   B = [ 0 ],   C = [1  0],   D = [0].
        [ −k/m  −β/m  ]        [ 1 ]
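The linear model of Example 1.1 is easy to check numerically. The sketch below is not part of the original text: it integrates the two state equations with a plain forward-Euler scheme, using hypothetical parameter values (m = 1, k = 2, β = 0.5), and verifies that the state settles near the static deflection y = mg/k with zero velocity.

```python
# Forward-Euler simulation of the mass-spring model of Example 1.1:
#   x1' = x2
#   x2' = -(k/m)*x1 - (b/m)*x2 + g
# Parameter values are hypothetical, chosen only for illustration.

def simulate(m=1.0, k=2.0, b=0.5, g=9.81, x1=0.0, x2=0.0, dt=1e-3, steps=20000):
    for _ in range(steps):
        dx1 = x2
        dx2 = -(k / m) * x1 - (b / m) * x2 + g
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

x1, x2 = simulate()
# For k > 0 and b > 0 the state settles near the static equilibrium
# y = mg/k, with velocity near zero.
print(x1, x2)
```

After 20 seconds of simulated time the transient (decay rate β/2m = 0.25) has essentially died out, so x1 is close to mg/k = 4.905 and x2 is close to 0.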
1.2 Nonlinear Systems

Most of the book focuses on nonlinear systems that can be modeled by a finite number of first-order ordinary differential equations:

    ẋ1 = f1(x1, ..., xn, t, u1, ..., up)
      ⋮
    ẋn = fn(x1, ..., xn, t, u1, ..., up)    (1.3)

Defining the vectors

    x = [x1, ..., xn]ᵀ,   u = [u1, ..., up]ᵀ,   f(x, t, u) = [f1(x, t, u), ..., fn(x, t, u)]ᵀ

we can rewrite equation (1.3) as follows:

    ẋ = f(x, t, u).    (1.4)

Equation (1.4) is a generalization of equation (1.1) to nonlinear systems. The vector x is called the state vector of the system, and the function u is the input. Similarly, the system output is obtained via the so-called read-out equation

    y = h(x, t, u).    (1.5)

Equations (1.4) and (1.5) are referred to as the state space realization of the nonlinear system.

Special Cases:

An important special case of equation (1.4) occurs when the input u is identically zero. In this case the equation takes the form

    ẋ = f(x, t, 0) = f(x, t).    (1.6)

This equation is referred to as the unforced state equation. Notice that, in general, there is no difference between the unforced system with u = 0 and a system driven by any given function u = γ(x, t) (i.e., u is not an arbitrary variable): substituting u = γ(x, t) in equation (1.4) eliminates u and yields the unforced state equation.

The second special case occurs when f is not an explicit function of time. In this case we can write

    ẋ = f(x)    (1.7)

and the system is said to be autonomous. Autonomous systems are invariant to shifts in the time origin, in the sense that changing the time variable from t to τ = t − a does not change the right-hand side of the state equation. See Chapter 4 for further details. Throughout the rest of this chapter we will restrict our attention to autonomous systems.

Example 1.2 Consider again the mass-spring system of Figure 1.1. In Example 1.1 we assumed linear properties for the spring. We now consider the more realistic case of a hardening spring, in which the force strengthens as y increases. We can approximate this model by taking

    fk = ky(1 + a²y²).

With this fk, the differential equation results in the following:

    mÿ + βẏ + ky + ka²y³ = f(t).

Defining state variables x1 = y and x2 = ẏ results in the following state space realization:

    ẋ1 = x2
    ẋ2 = −(k/m)x1 − (ka²/m)x1³ − (β/m)x2 + f(t)/m

which is of the form ẋ = f(x, t, u). In particular, if u = 0, then

    ẋ1 = x2
    ẋ2 = −(k/m)x1 − (ka²/m)x1³ − (β/m)x2

or ẋ = f(x).

1.3 Equilibrium Points

An important concept when dealing with the state equation is that of equilibrium point.

Definition 1.1 A point x = xe in the state space is said to be an equilibrium point of the autonomous system

    ẋ = f(x)

if it has the property that whenever the state of the system starts at xe, it remains at xe for all future time.

According to this definition, the equilibrium points of (1.7) are the real roots of the equation f(xe) = 0. Indeed, if

    ẋ = dx/dt = f(xe) = 0

it follows that xe is constant and, by definition, it is an equilibrium point. Equilibrium points for unforced nonautonomous systems of the form (1.6) can be defined similarly, although the time dependence brings some subtleties into this concept.

Example 1.3 Consider the following first-order system

    ẋ = r + x²

where r is a parameter. To find the equilibrium points of this system, we solve the equation r + x² = 0 and immediately obtain the following:

(i) If r < 0, the system has two equilibrium points, namely x = ±√(−r).

(ii) If r = 0, both of the equilibrium points in (i) collapse into one and the same, and the unique equilibrium point is x = 0.

(iii) Finally, if r > 0, the system has no equilibrium points.
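Definition 1.1 reduces the search for equilibrium points to root finding, f(xe) = 0. A small sketch (illustrative only, not from the text) reproduces the three cases of Example 1.3 for ẋ = r + x²:

```python
import math

def equilibria(r):
    """Real roots of r + x**2 = 0, i.e. equilibrium points of xdot = r + x**2."""
    if r < 0:
        root = math.sqrt(-r)
        return [-root, root]   # two equilibrium points at +/- sqrt(-r)
    if r == 0:
        return [0.0]           # the two points collapse into one
    return []                  # r > 0: no equilibrium points

print(equilibria(-4))  # → [-2.0, 2.0]
print(equilibria(0))   # → [0.0]
print(equilibria(1))   # → []
```

The three branches correspond exactly to cases (i)-(iii) of Example 1.3.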
'See Chapter 3 for a more precise definition of the several notions of stability. In this case. Short of a solution. we look for a qualitative understanding of the behavior of the trajectories.9). Attractive equilibrium points are called stables. the solution of this equation with an arbitrary initial condition xo # 0 is given by
x(t) = eatxo
(1. f (x) dictates the "velocity vector" x which determines how "fast" x is changing. A very special case of (1.9)
It is immediately evident from (1.
. and (1.8) or (1. x(t) exponentially converges to the origin. 2This point is discussed in some detail in Chapter 2. representing x = f (x) in a twodimensional plane with axis x and x.8) is that of a firstorder linear system. at each x. the sign of ± indicates the direction of the motion of the trajectory x(t). One way to do this is to acknowledge the fact that the differential equation
x = f(x)
represents a vector field on the line.10)
A solution of the differential equation (1. while repellers are called unstable.9) starting at xo is called a trajectory. In other words.
Thus. According to (1.8).10). INTRODUCTION
case. a < 0: Starting at x0.8) takes the form
x = ax. Our analysis in the linear case was guided by the luxury of knowing the solution of the differential equation (1. that is. a > 0: Starting at x0. We also assume that f () is a continuous
function of x. the trajectories of a first order linear system behave in one of two
possible ways:
Case (1).
Consider now the nonlinear system (1. Indeed.
(1.8) x = f (x)
where x(t) is a realvalued function of time. The simplicity associated with the linear case originates in the simple form of the differential equation (1.9). the equilibrium point of a firstorder linear system can be either attractive or repelling. x(t) diverges to infinity as t tends to infinity.
Case (2). most nonlinear equations cannot be solved analytically2. Thus. f (x) = ax.6
CHAPTER 1.9) that the only equilibrium point of the firstorder linear system is the origin x = 0. which is that of first order (linear and nonlinear) autonomous systems. Unfortunately.
The system (1. the trajectories move to the right hand side.
Example 1.4 Consider the system

    ẋ = cos x.    (1.11)

To analyze the trajectories of this system, we plot ẋ versus x, as shown in Figure 1.2. From this analysis we conclude the following:

1. The points where ẋ = 0, that is, where cos x intersects the real axis, are the equilibrium points of (1.11). Thus all points of the form x = (1 + 2k)π/2, k = 0, ±1, ±2, ..., are equilibrium points of (1.11): the system has an infinite number of equilibrium points.

2. Whenever ẋ > 0, the trajectories move to the right-hand side, and vice versa. Since ẋ cannot change sign without passing through an equilibrium point, trajectories are forced to either converge to or diverge from an equilibrium point monotonically; in particular, oscillations around an equilibrium point can never exist in first-order systems. It follows that exactly half of these equilibrium points are attractive, or stable, and the other half are unstable, or repellers.

Figure 1.2: The system ẋ = cos x. The arrows on the horizontal axis indicate the direction of the motion.

The behavior described in Example 1.4 is typical of first-order autonomous nonlinear systems: the dynamics of these systems is dominated by the equilibrium points, in the sense that the only events that can occur to a trajectory is that either (1) it approaches an equilibrium point or (2) it diverges to infinity. Recall from (1.10) that a similar behavior was found in the case of linear first-order systems.
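This qualitative analysis is easy to reproduce numerically. The short Python sketch below (not part of the original text; the search interval, grid size, and tolerances are arbitrary choices) locates the equilibria of ẋ = cos x by looking for sign changes of f, and classifies each one from the sign of f on either side:

```python
import math

def f(x):
    return math.cos(x)

def equilibria(a, b, n=20000):
    """Locate the zeros of f on [a, b] via sign changes, then classify each:
    attractive (stable) if the flow points toward it from both sides."""
    found = []
    h = (b - a) / n
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        if f(x0) == 0.0 or f(x0) * f(x1) < 0:
            lo, hi = x0, x1
            for _ in range(60):            # refine the zero by bisection
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            xe = 0.5 * (lo + hi)
            # f > 0 just to the left and f < 0 just to the right means the
            # trajectories approach xe monotonically from both sides
            kind = "stable" if f(xe - 1e-3) > 0 and f(xe + 1e-3) < 0 else "unstable"
            found.append((xe, kind))
    return found

eqs = equilibria(-7.0, 7.0)
```

On (-7, 7) this reports the four equilibria ±π/2 and ±3π/2 of (1.11), with stability alternating exactly as in Figure 1.2.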
Example 1.5 Consider again the system of Example 1.3, that is,

    ẋ = r + x²

and assume now that r < 0. Plotting ẋ versus x under the assumption that r < 0, we obtain the diagram shown in Figure 1.3. The two equilibrium points, namely xe = -√(-r) and xe1 = √(-r), are shown in the figure. From the analysis of the sign of ẋ, we see that xe is attractive, while xe1 is a repeller. Trajectories initiating in the interval x0 ∈ (-∞, √(-r)) monotonically converge to xe. Any trajectory starting in the interval x0 ∈ (√(-r), ∞), on the other hand, diverges toward infinity.

Figure 1.3: The system ẋ = r + x², r < 0.

1.5 Second-Order Systems: Phase-Plane Analysis

In this section we consider second-order systems. This class of systems is useful in the study of nonlinear systems because they are easy to understand and, unlike first-order systems, they can be used to explain interesting features encountered in the nonlinear world. Consider a second-order autonomous system of the form

    ẋ1 = f1(x1, x2)    (1.12)
    ẋ2 = f2(x1, x2)    (1.13)

or

    ẋ = f(x)    (1.14)

with initial condition

    x(0) = x0 = [x10, x20]ᵀ.

Throughout this section we assume that the differential equation (1.14) has a unique solution of the form x(t) = [x1(t), x2(t)]ᵀ.
This solution is called a trajectory from x0, and it can be represented graphically in the x1-x2 plane. Very often it is useful to visualize the trajectories corresponding to various initial conditions in the x1-x2 plane, which is usually referred to as the phase plane. As in the case of first-order systems, the resulting technique is known as phase-plane analysis.

Notice that if

    x(t) = [x1(t), x2(t)]ᵀ

is a solution of the differential equation ẋ = f(x) starting at a certain initial state x0, then ẋ = f(x) represents the tangent vector to the curve; in particular, the slope of the trajectory at a point x satisfies

    dx2/dx1 = ẋ2/ẋ1 = f2(x)/f1(x).

The function f(x) is called a vector field on the state plane. This means that to each point x* in the plane we can assign a vector with the amplitude and direction of f(x*). For easy visualization we can represent f(x) as a vector based at x; that is, we assign to x the directed line segment from x to x + f(x). Repeating this operation at every point in the plane, we obtain a vector field diagram. Given any initial condition x0 on the plane, from this diagram it is easy to sketch the trajectory from x0; thus it is possible to construct the trajectory starting at an arbitrary point x0 from the vector field diagram.

Example 1.6 Consider the second-order system

    ẋ1 = x2
    ẋ2 = x1.

Figure 1.4 shows a phase-plane diagram of trajectories of this system, along with the vector field diagram. This plot, as well as many similar ones presented throughout this book, was obtained using MAPLE 7. In this book we do not emphasize the manual construction of these diagrams; several computer packages can be used for this purpose.

Figure 1.4: Vector field diagram for the system of Example 1.6.
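The tangency of trajectories to the vector field can be verified numerically. The Python sketch below is an illustration only: the planar system (a damped oscillator), step size, and initial condition are arbitrary choices, not the data of Example 1.6. It integrates a trajectory with a fourth-order Runge-Kutta step and compares a finite-difference tangent of the computed curve against f(x):

```python
def f(x):
    # an illustrative planar vector field (damped oscillator), chosen
    # arbitrarily for this demonstration
    x1, x2 = x
    return (x2, -x1 - x2)

def rk4_step(x, h):
    def step(p, k, c):
        return (p[0] + c * k[0], p[1] + c * k[1])
    k1 = f(x)
    k2 = f(step(x, k1, h / 2))
    k3 = f(step(x, k2, h / 2))
    k4 = f(step(x, k3, h))
    return (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

h, x = 1e-3, (1.0, 0.0)
traj = [x]
for _ in range(5000):
    x = rk4_step(x, h)
    traj.append(x)

# central-difference tangent of the computed curve vs. the vector field there
i = 2500
tangent = ((traj[i + 1][0] - traj[i - 1][0]) / (2 * h),
           (traj[i + 1][1] - traj[i - 1][1]) / (2 * h))
field = f(traj[i])
err = max(abs(tangent[0] - field[0]), abs(tangent[1] - field[1]))
```

The finite-difference tangent agrees with f(x) to within discretization error, confirming that the trajectory is everywhere tangent to the vector field.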
1.6 Phase-Plane Analysis of Linear Time-Invariant Systems

Now consider a linear time-invariant system of the form

    ẋ = Ax,   A ∈ ℝ²ˣ²    (1.15)

where the symbol ℝ²ˣ² indicates the set of 2 × 2 matrices with real entries. Throughout this section we denote by λ1, λ2 the eigenvalues of the matrix A, and by v1, v2 the corresponding eigenvectors.
These systems are well understood, and the solution of the differential equation starting at t = 0 with an initial condition x0 has the following well-established form:

    x(t) = e^{At} x0.

We are interested in a qualitative understanding of the form of the trajectories. To this end we consider several cases, depending on the properties of the matrix A.

CASE 1: Diagonalizable Systems

Consider the system (1.15). Assume that the eigenvalues of the matrix A are real, and define the following coordinate transformation:

    x = Ty,   T ∈ ℝ²ˣ², T nonsingular.    (1.16)

Given that T is nonsingular, its inverse, denoted T⁻¹, exists, and we can write

    ẏ = T⁻¹ATy = Dy.    (1.17)

Transformations of the form D = T⁻¹AT are very well known in linear algebra and are called similarity transformations. For this reason the matrices A and D are said to be similar, and they share several interesting properties:

Property 1: The matrices A and D share the same eigenvalues λ1 and λ2.

Property 2: Assume that the eigenvectors v1, v2 associated with the real eigenvalues λ1, λ2 are linearly independent. In this case the matrix T defined in (1.16) can be formed by placing the eigenvectors v1 and v2 as its columns, and we have

    D = T⁻¹AT = [ λ1   0 ]
                [  0  λ2 ]

that is, A is diagonalizable: the matrix A is similar to the diagonal matrix D.

The importance of this transformation is that in the new coordinates y = [y1 y2]ᵀ the system is uncoupled:

    ẏ1 = λ1 y1    (1.18)
    ẏ2 = λ2 y2.    (1.19)

Both equations can be solved independently, which means that the trajectories along each of the coordinate axes y1 and y2 are independent of one another. Several interesting cases can be distinguished, depending on the sign of the eigenvalues λ1 and λ2. The equilibrium point of a system where both eigenvalues have the same sign is called a node. The following examples clarify this point.
Example 1.7 Consider a system ẋ = Ax whose eigenvalues are λ1 = -1 and λ2 = -2. Here A is diagonalizable, with D = T⁻¹AT = diag(-1, -2). Applying the linear coordinate transformation x = Ty, the modified system is

    ẏ1 = -y1
    ẏ2 = -2y2

or ẏ = Dy, which is uncoupled. Figure 1.5 shows the trajectories of both the original system [part (a)] and the uncoupled system after the coordinate transformation [part (b)]. It is clear from part (b) that the origin is attractive in both directions, as expected given that both eigenvalues are negative. Part (a) retains this property, only with a distortion of the coordinate axes; it is in fact worth noting that Figure 1.5(a) can be obtained from Figure 1.5(b) by applying the linear transformation of coordinates x = Ty. The equilibrium point is thus said to be a stable node.

Figure 1.5: System trajectories of Example 1.7: (a) original system; (b) uncoupled system.

Example 1.8 Consider now a system ẋ = Ax whose eigenvalues are λ1 = 1 and λ2 = 2. Applying the linear coordinate transformation x = Ty, we obtain

    ẏ1 = y1
    ẏ2 = 2y2.

Figure 1.6 shows the trajectories of both the original and the uncoupled systems after the coordinate transformation. It is clear from the figures that the origin is repelling in both directions, as expected given that both eigenvalues are positive. The equilibrium point in this case is said to be an unstable node.

Figure 1.6: System trajectories of Example 1.8: (a) uncoupled system; (b) original system.

Example 1.9 Finally, consider a system ẋ = Ax whose eigenvalues are λ1 = -1 and λ2 = 2. Applying the linear coordinate transformation x = Ty, we obtain

    ẏ1 = -y1
    ẏ2 = 2y2.

Figure 1.7 shows the trajectories of both the original and the uncoupled systems after the coordinate transformation. Given the different signs of the eigenvalues, the equilibrium point is attractive in one direction but repelling in the other. The equilibrium point in this case is said to be a saddle.

Figure 1.7: System trajectories of Example 1.9: (a) uncoupled system; (b) original system.
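Property 2 and the uncoupling of the transformed system can be checked with a small numerical sketch. In the Python below, the matrices D and T are arbitrary illustrative choices (not those of the examples above): A is constructed as T D T⁻¹, and the similarity transformation T⁻¹AT is verified to recover D.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# an arbitrary diagonal D and nonsingular T, chosen for illustration
D = [[-1.0, 0.0], [0.0, -2.0]]
T = [[1.0, 1.0], [0.0, 1.0]]

A = matmul(matmul(T, D), inv2(T))       # A = T D T^-1, so A is similar to D
D_back = matmul(matmul(inv2(T), A), T)  # the similarity transformation T^-1 A T
```

T⁻¹AT recovers D exactly, and tr A = λ1 + λ2 = -3, det A = λ1 λ2 = 2, as Property 1 requires.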
CASE 2: Nondiagonalizable Systems

Assume now that the eigenvalues of the matrix A are real and identical (i.e., λ1 = λ2 = λ). In this case it may or may not be possible to associate two linearly independent eigenvectors v1 and v2 with the sole eigenvalue λ. If this is possible, the matrix A is diagonalizable and the trajectories can be analyzed by the previous method; the equilibrium point is called a stable node if λ < 0 and an unstable node if λ > 0. If, on the other hand, only one linearly independent eigenvector v can be associated with λ, then the matrix A is not diagonalizable. In this case, there always exists a similarity transformation P such that

    P⁻¹AP = J = [ λ  1 ]
                [ 0  λ ].

The matrix J is in the so-called Jordan canonical form. In this case the transformed system is ẏ = P⁻¹APy, or

    ẏ1 = λy1 + y2
    ẏ2 = λy2

and the solution of this system of equations with initial condition y0 = [y10, y20]ᵀ is as follows:

    y1 = y10 e^{λt} + y20 t e^{λt}
    y2 = y20 e^{λt}.

The shape of the solution is a somewhat distorted form of those encountered for diagonalizable systems.

Example 1.10 Consider a system ẋ = Ax with repeated eigenvalues λ1 = λ2 = λ = -2, where A is not diagonalizable. In this example the eigenvalue λ < 0, and thus the equilibrium point [0, 0] is a stable node. Figure 1.8 shows the trajectories of the system.

Figure 1.8: System trajectories for the system of Example 1.10.
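The closed-form solution of the Jordan-form system can be validated against direct numerical integration. In the Python sketch below, λ, the initial condition, and the step size are arbitrary illustrative choices; the equations ẏ1 = λy1 + y2, ẏ2 = λy2 are integrated with small Euler steps and compared with the formula above:

```python
import math

lam = -2.0                   # arbitrary illustrative eigenvalue
y1, y2 = 1.0, 0.5            # arbitrary initial condition (y10, y20)
h, steps = 1e-5, 100000      # Euler integration up to t = 1

for _ in range(steps):
    # advance the coupled Jordan-block dynamics one Euler step
    y1, y2 = y1 + h * (lam * y1 + y2), y2 + h * (lam * y2)

t = h * steps
y1_formula = 1.0 * math.exp(lam * t) + 0.5 * t * math.exp(lam * t)
y2_formula = 0.5 * math.exp(lam * t)
```

The numerically integrated state matches the closed-form expressions to within the Euler discretization error.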
CASE 3: Systems with Complex Conjugate Eigenvalues
The most interesting case occurs when the eigenvalues of the matrix A are complex conjugate, λ1,2 = α ± jβ. It can be shown that in this case a similarity transformation M can be found that renders the following similar matrix:

    M⁻¹AM = Q = [ α  -β ]
                [ β   α ].

Thus the transformed system has the form

    ẏ1 = αy1 - βy2    (1.20)
    ẏ2 = βy1 + αy2.    (1.21)

The solution of this system of differential equations can be greatly simplified by introducing polar coordinates:

    ρ = √(y1² + y2²),   θ = tan⁻¹(y2/y1).

Converting (1.20) and (1.21) to polar coordinates, we obtain ρ̇ = αρ and θ̇ = β, which has the following solution:

    ρ = ρ0 e^{αt}    (1.22)
    θ = θ0 + βt.    (1.23)
Figure 1.9: Trajectories for the system of Example 1.11.

From here we conclude that:
In the polar coordinate system, ρ either increases exponentially, decreases exponentially, or stays constant, depending on whether the real part α of the eigenvalues λ1,2 is positive, negative, or zero. The phase angle, in turn, increases linearly with a "velocity" that depends on the imaginary part β of the eigenvalues λ1,2.

In the y1-y2 coordinate system, (1.22)-(1.23) represent an exponential spiral. If α > 0, the trajectories diverge from the origin as t increases. If α < 0, on the other hand, the trajectories converge toward the origin. The equilibrium [0, 0] in this case is said to be a stable focus (if α < 0) or an unstable focus (if α > 0). If α = 0, the trajectories are closed ellipses; in this case the equilibrium [0, 0] is said to be a center.
Example 1.11 Consider a system

    ẋ = Ax    (1.24)

whose eigenvalues λ1,2 are purely imaginary (α = 0). Figure 1.9 shows that the trajectories in this case are closed ellipses, so the origin is a center. This means that the dynamical system (1.24) is oscillatory. The amplitude of the oscillations is determined by the initial conditions.
Example 1.12 Consider now a system ẋ = Ax whose eigenvalues are λ1,2 = 0.5 ± j; thus the origin is an unstable focus. Figure 1.10 shows the spiral behavior of the trajectories. The system in this case is also oscillatory, but the amplitude of the oscillations grows exponentially with time, because of the presence of the nonzero α term.

Figure 1.10: Trajectories for the system of Example 1.12.
The following table summarizes the different cases:

    Eigenvalues                                 Equilibrium point
    λ1, λ2 real and negative                    stable node
    λ1, λ2 real and positive                    unstable node
    λ1, λ2 real, opposite signs                 saddle
    λ1, λ2 complex with negative real part      stable focus
    λ1, λ2 complex with positive real part      unstable focus
    λ1, λ2 imaginary                            center
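The table translates directly into a small routine. The Python sketch below is illustrative (it assumes nonzero eigenvalues, matching the cases covered by the table) and classifies the origin of ẋ = Ax from the trace and determinant of the 2 × 2 matrix A:

```python
import cmath

def classify(A):
    """Type of the origin of xdot = A x for a 2x2 matrix A with nonzero
    eigenvalues, following the summary table."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1 = (tr + disc) / 2
    l2 = (tr - disc) / 2
    if abs(l1.imag) > 1e-12:                 # complex conjugate pair
        if abs(l1.real) < 1e-12:
            return "center"
        return "stable focus" if l1.real < 0 else "unstable focus"
    a, b = l1.real, l2.real                  # both eigenvalues real
    if a * b < 0:
        return "saddle"
    return "stable node" if a < 0 else "unstable node"

# sample matrices, chosen for illustration
examples = {
    "stable node": [[-1.0, 0.0], [0.0, -2.0]],
    "saddle": [[1.0, 0.0], [0.0, -2.0]],
    "center": [[0.0, 1.0], [-1.0, 0.0]],
    "unstable focus": [[0.5, 1.0], [-1.0, 0.5]],
}
```

Each sample matrix is classified in agreement with the table.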
As a final remark, we notice that the study of the trajectories of linear systems about the origin is important because, as we will see, in a neighborhood of an equilibrium point the behavior of a nonlinear system can often be determined by linearizing the nonlinear equations and studying the trajectories of the resulting linear system.
1.7
PhasePlane Analysis of Nonlinear Systems
We mentioned earlier that nonlinear systems are more complex than their linear counterparts, and that their differences accentuate as the order of the state space realization increases. The question then is: What features characterize second-order nonlinear equations that are not already seen in the linear case? The answer is: oscillations! We say that a system oscillates when it has a nontrivial periodic solution, that is, a nonstationary trajectory for which there exists T > 0 such that

    x(t + T) = x(t)   for all t ≥ 0.
Oscillations are indeed a very important phenomenon in dynamical systems.
In the previous section we saw that if the eigenvalues of a second-order linear time-invariant (LTI) system are imaginary, the equilibrium point is a center and the response is oscillatory. In practice, however, LTI systems do not constitute oscillators of any practical use. The reason is twofold: (1) as noticed in Example 1.11, the amplitude of the oscillations is determined by the initial conditions, and (2) the very existence and maintenance of the oscillations depend on the existence of purely imaginary eigenvalues of the A matrix in the state space realization of the dynamical equations. If the real part of the eigenvalues is not identically zero, then the trajectories are not periodic: the oscillations will either be damped out and eventually disappear, or the solutions will grow unbounded. This means that oscillations in linear systems are not structurally stable. Small friction forces or neglected viscous forces often introduce damping that, however small, adds a negative component to the real part of the eigenvalues and consequently damps the oscillations. Nonlinear systems, on the other hand, can have self-excited oscillations, known as limit cycles.
1.7.1
Limit Cycles
Consider the following system, commonly known as the Van der Pol oscillator:

    ÿ - µ(1 - y²)ẏ + y = 0,   µ > 0.    (1.25)

Defining state variables x1 = y and x2 = ẏ, we obtain

    ẋ1 = x2
    ẋ2 = -x1 + µ(1 - x1²)x2.    (1.26)

Notice that if µ = 0 in equation (1.26), then the resulting system is

    ẋ = [  0  1 ] x
        [ -1  0 ]
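The convergence of trajectories to the Van der Pol limit cycle can be observed numerically. In the Python sketch below (µ = 1; the step size, horizons, and the two initial conditions are arbitrary choices), the system is integrated from one initial condition inside the orbit and one far outside it, and the steady-state amplitude of x1 is measured in both cases:

```python
def vdp(x, mu=1.0):
    x1, x2 = x
    return (x2, -x1 + mu * (1.0 - x1 * x1) * x2)

def rk4(fn, x, h):
    k1 = fn(x)
    k2 = fn((x[0] + h / 2 * k1[0], x[1] + h / 2 * k1[1]))
    k3 = fn((x[0] + h / 2 * k2[0], x[1] + h / 2 * k2[1]))
    k4 = fn((x[0] + h * k3[0], x[1] + h * k3[1]))
    return (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

h = 0.01
amplitudes = []
for x in [(0.1, 0.0), (4.0, 0.0)]:       # one start inside, one far outside
    for _ in range(20000):               # discard the transient (t = 0..200)
        x = rk4(vdp, x, h)
    peak = 0.0
    for _ in range(1000):                # one window longer than the period
        x = rk4(vdp, x, h)
        peak = max(peak, abs(x[0]))
    amplitudes.append(peak)
```

Both runs settle onto the same closed orbit, with amplitude close to 2: the hallmark of a stable limit cycle, in contrast with the initial-condition-dependent amplitude of the center in Example 1.11.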
Figure 1.11: Stable limit cycle: (a) vector field diagram; (b) the closed orbit.
which is linear time-invariant. Moreover, the eigenvalues of the A matrix are λ1,2 = ±j, which implies that the equilibrium point [0, 0] is a center. The term µ(1 - x1²)x2 in equation (1.26) provides additional dynamics that, as we will see, contribute to maintaining the oscillations. Figure 1.11(a) shows the vector field diagram for the system (1.25)-(1.26) assuming µ = 1. Notice the difference between the Van der Pol oscillator of this example and the center of Example 1.11. In Example 1.11 there is a continuum of closed orbits: a trajectory initiating at an initial condition x0 at t = 0 is confined to the orbit passing through x0 for all future time. In the Van der Pol oscillator there is only one isolated orbit, and all trajectories converge to this orbit as t → ∞. An isolated orbit such as this is called a limit cycle. Figure 1.11(b) shows a clearer picture of the limit cycle.
We point out that the Van der Pol oscillator discussed here is not a theoretical example. These equations derive from simple electric circuits encountered in the first radios. Figure 1.12 shows a schematic of such a circuit, where R represents a nonlinear resistance. See Reference [84] for a detailed analysis of the circuit.
As mentioned, the Van der Pol oscillator of this example has the property that all trajectories converge toward the limit cycle. An orbit with this property is said to be a stable limit cycle. There are three types of limit cycles, depending on the behavior of the trajectories in the vicinity of the orbit: (1) stable, (2) unstable, and (3) semistable. A limit cycle is said to be unstable if all trajectories in the vicinity of the orbit diverge from it as t → ∞. It is said to be semistable if the trajectories on one side of the orbit (inside or outside) converge to it while those on the other side diverge from it. An example of an unstable limit cycle can be obtained by modifying the previous example as follows:

    ẋ1 = x2
    ẋ2 = -x1 - µ(1 - x1²)x2.
Figure 1.12: Nonlinear RLC circuit.
Figure 1.13: Unstable limit cycle.
Figure 1.13 shows the vector field diagram of this system with µ = 1. As can be seen in the figure, all trajectories diverge from the orbit and the limit cycle is unstable.
1.8
HigherOrder Systems
When the order of the state space realization is greater than or equal to 3, nothing significantly different happens with linear time-invariant systems: the solution of the state equation with initial condition x0 is still x(t) = e^{At} x0, and the eigenvalues of the A matrix still control the behavior of the trajectories. Nonlinear equations, however, have much more room in which to maneuver. When the dimension of the state space realization increases from 2 to 3, a new phenomenon is encountered: namely, chaos.
1.8.1
Chaos
Consider the following system of nonlinear equations:

    ẋ = σ(y - x)
    ẏ = rx - y - xz
    ż = xy - bz

where σ, r, b > 0. This system was introduced by Ed Lorenz in 1963 as a model of convection rolls in the atmosphere. Since Lorenz's publication, similar equations have been found to appear in lasers and other systems. We now consider the following set of values: σ = 10, b = 8/3, and r = 28, which are the original parameters considered by Lorenz. It is easy to prove that the system has three equilibrium points. A more detailed analysis reveals that, with these values of the parameters, none of these three equilibrium points is actually stable, but that nonetheless all trajectories are contained within a certain ellipsoidal region in ℝ³.
Figure 1.14(a) shows a three-dimensional view of a trajectory starting at a randomly selected initial condition, while Figure 1.14(b) shows a projection of the same trajectory onto the x-z plane. It is apparent from both figures that the trajectory follows a recurrent, although not periodic, motion, switching between two surfaces. It can be seen in Figure 1.14(a) that each of these two surfaces constitutes a very thin set of points, almost defining a two-dimensional plane. This set is called a "strange attractor," and the two surfaces, which together resemble a pair of butterfly wings, are much more complex than they appear in our figure. Each surface is in reality formed by an infinite number of complex surfaces, forming what today is called a fractal.
It is difficult to define what constitutes a chaotic system; in fact, until the present time
no universally accepted definition has been proposed. Nevertheless, the essential elements constituting chaotic behavior are the following:
A chaotic system is one where trajectories present aperiodic behavior and are critically sensitive with respect to initial conditions. Here aperiodic behavior implies that the trajectories never settle down to fixed points or to periodic orbits. Sensitive dependence with respect to initial conditions means that very small differences in initial conditions can lead to trajectories that deviate exponentially rapidly from each other.
Both of these features are indeed present in Lorenz's system, as is apparent in Figures 1.14(a) and 1.14(b). It is of great theoretical importance that chaotic behavior cannot exist in autonomous systems of dimension less than 3. The justification of this statement comes from the well-known Poincaré-Bendixson theorem, which we state below without proof.
Theorem 1.1 [76] Consider the two-dimensional system

    ẋ = f(x)

where f : ℝ² → ℝ² is continuously differentiable in D ⊂ ℝ², and assume that

(1) R ⊂ D is a closed and bounded set that contains no equilibrium points of ẋ = f(x);

(2) there exists a trajectory x(t) that is confined to R, that is, one that starts in R and remains in R for all future time.

Then either R is a closed orbit, or x(t) converges toward a closed orbit as t → ∞.

According to this theorem, in two dimensions the Poincaré-Bendixson theorem predicts that a trajectory that is enclosed by a closed bounded region containing no equilibrium points must eventually approach a limit cycle. In higher-order systems the new dimension adds an extra degree of freedom that allows trajectories to never settle down to an equilibrium point or closed orbit, as seen in the Lorenz system.

Figure 1.14: (a) Three-dimensional view of the trajectories of Lorenz's chaotic system; (b) two-dimensional projection of the trajectory of Lorenz's system.

1.9 Examples of Nonlinear Systems

We conclude this chapter with a few examples of "real" dynamical systems and their nonlinear models. Our intention at this point is simply to show that nonlinear equations arise frequently in dynamical systems commonly encountered in real life. The examples in this section are in fact popular laboratory experiments used in many universities around the world.
1.9.1 Magnetic Suspension System

Magnetic suspension systems are a familiar setup that is receiving increasing attention in applications where it is essential to reduce the friction force due to mechanical contact. Magnetic suspension systems are commonly encountered in high-speed trains and magnetic bearings, as well as in gyroscopes and accelerometers. The basic configuration is shown in Figure 1.15.

Figure 1.15: Magnetic suspension system.

According to Newton's second law of forces, the equation of the motion of the ball is

    mÿ = fk + mg + F    (1.27)

where m is the mass of the ball, fk is the friction force, g the acceleration due to gravity, and F is the electromagnetic force due to the current i. To complete the model, we need to find a proper model for the magnetic force F. To this end we notice that the energy stored in the electromagnet is given by

    E = ½ L i²    (1.28)

where L is the inductance of the electromagnet. This parameter is not constant, since it depends on the position of the ball. We can approximate L as follows:

    L = L(y) = λ / (1 + µy).

This model takes account of the fact that as the ball approaches the magnetic core of the coil, the flux in the magnetic circuit is affected, resulting in an increase of the value of the inductance. The energy in the magnetic circuit is thus E = E(i, y) = ½ L(y) i², and the force F = F(i, y) is given by

    F(i, y) = ∂E/∂y = (i²/2) ∂L(y)/∂y = -½ λµi² / (1 + µy)².    (1.29)

Assuming that the friction force has the form

    fk = -kẏ    (1.30)

where k > 0 is the viscous friction coefficient, and substituting (1.29) and (1.30) into (1.27), we obtain the following equation of motion of the ball:

    mÿ = -kẏ + mg - ½ λµi² / (1 + µy)².    (1.31)

To complete the model, we recognize that the external circuit obeys Kirchhoff's voltage law, and thus we can write

    v = Ri + d/dt (Li)    (1.32)

where

    d/dt (Li) = d/dt [ λi / (1 + µy) ] = -[λµi / (1 + µy)²] dy/dt + [λ / (1 + µy)] di/dt.    (1.33)

Substituting (1.33) into (1.32), we obtain

    v = Ri - [λµi / (1 + µy)²] ẏ + [λ / (1 + µy)] di/dt.    (1.34)

Defining state variables x1 = y, x2 = ẏ, x3 = i, and substituting into (1.31) and (1.34), we obtain the following state space model:

    ẋ1 = x2
    ẋ2 = g - (k/m) x2 - λµx3² / [2m(1 + µx1)²]
    ẋ3 = [(1 + µx1)/λ] [ -Rx3 + λµx2x3 / (1 + µx1)² + v ].
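As a quick consistency check on the model, one can compute the coil current that holds the ball in equilibrium at a desired position. The Python sketch below uses invented parameter values (illustrative only, not taken from the text) and verifies that the computed current zeroes the vertical acceleration in the state equation:

```python
import math

# invented parameter values, for illustration only
m, g, k = 0.1, 9.81, 0.01        # ball mass [kg], gravity, friction coeff.
lam, mu = 0.46, 2.0              # inductance model L(y) = lam / (1 + mu*y)
y0 = 0.05                        # desired ball position

def x2dot(x1, x2, x3):
    # the xdot2 equation of the state space model above
    return g - (k / m) * x2 - lam * mu * x3 ** 2 / (2 * m * (1 + mu * x1) ** 2)

# current that balances gravity at y0: set x2dot = 0 with x2 = 0 and solve
i_eq = math.sqrt(2 * m * g * (1 + mu * y0) ** 2 / (lam * mu))

residual = x2dot(y0, 0.0, i_eq)
```

The residual acceleration at the computed equilibrium is zero up to floating-point rounding, confirming that the magnetic force term exactly cancels gravity there.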
1.9.2 Inverted Pendulum on a Cart

Consider the pendulum on a cart shown in Figure 1.16, where M represents the mass of the cart, G represents the center of gravity of the pendulum, and L = 2l, m, and J denote the length, mass, and moment of inertia about the center of gravity of the pendulum, respectively. We denote by θ the angle of the pendulum with respect to the vertical and by x the horizontal position of the cart, so that the horizontal and vertical coordinates of G are given by

    xG = x + (L/2) sin θ = x + l sin θ    (1.35)
    yG = (L/2) cos θ = l cos θ.    (1.36)

Figure 1.16: Pendulum-on-a-cart experiment.

The free-body diagrams of the cart and pendulum are shown in Figure 1.17, where Fx and Fy represent the reaction forces at the pivot point. Summing forces, we obtain the following equations:

    Fx = mẍ + mlθ̈ cos θ - mlθ̇² sin θ    (1.37)
    Fy - mg = -mlθ̈ sin θ - mlθ̇² cos θ    (1.38)
    Fy l sin θ - Fx l cos θ = Jθ̈.    (1.39)

Figure 1.17: Free-body diagrams of the pendulum-on-a-cart system.

Considering the horizontal forces acting on the cart, we have that

    Mẍ = fx - Fx.    (1.40)

Substituting (1.37) and (1.38) into (1.39), taking account of (1.40), substituting J = (1/3)ml², and defining state variables x1 = θ and x2 = θ̇, we obtain

    ẋ1 = x2
    ẋ2 = [ g sin x1 - (a m l x2² sin 2x1)/2 - a cos x1 · fx ] / ( 4l/3 - a m l cos² x1 )

where

    a = 1/(m + M).
1.9.3 The Ball-and-Beam System

The ball-and-beam system is another interesting and very familiar experiment commonly encountered in control systems laboratories in many universities. Figure 1.18 shows a schematic of this system. The beam can rotate by applying a torque at the center of rotation, and the ball can move freely along the beam. Here J represents the moment of inertia of the beam; R, m, and Jb are the radius, mass, and moment of inertia of the ball, respectively; the acceleration of gravity is represented by g; and r and θ are shown in Figure 1.18.

Figure 1.18: Ball-and-beam experiment.

Assuming that the ball is always in contact with the beam and that rolling occurs without slipping, the Lagrange equations of motion are (see [28] for further details)

    0 = (Jb/R² + m) r̈ + mg sin θ - mrθ̇²
    τ = (mr² + J + Jb) θ̈ + 2mrṙθ̇ + mgr cos θ

where τ is the applied torque. Defining now state variables x1 = r, x2 = ṙ, x3 = θ, and x4 = θ̇, we obtain the following state space realization:

    ẋ1 = x2
    ẋ2 = ( m x1 x4² - mg sin x3 ) / ( m + Jb/R² )
    ẋ3 = x4
    ẋ4 = ( τ - mg x1 cos x3 - 2m x1 x2 x4 ) / ( m x1² + J + Jb )

valid for -π/2 < x3 < π/2.

1.10 Exercises

(1.1) For the following dynamical systems, plot ẋ = f(x) versus x. From each graph, find the equilibrium points and analyze their stability:

(a) ẋ = x² - 2
(b) ẋ = x³ - 2x² - x + 2
(c) ẋ = tan x

(1.2) Given the following linear systems, you are asked to

(i) find the eigenvalues of the A matrix and classify the stability of the origin;
(ii) draw the phase portrait and verify your conclusions in part (i).
(1.3) Repeat Problem (1.2) for the following three-dimensional systems:
(a)
zl
X2
=
6 5 4 5
0 6
xl
X2
±3
(b)
x3
xl i2
23
(c)
2 2 1
I
=
0 0
4 1
0
1
1
xl
x2 x3
6
xl
x2
2 6
0 0
xl
X2
X3
4 0
23
6
.3) Repeat problem (1.28
CHAPTER 1. INTRODUCTION
(a)
[i2J[0
(b)
4J[x2]
[i2J[
(c)
0
4][x2
1
[x2J[ 01 41 [xz
(d)
I
1
iz1 [0
41[x2J
(e)
[2] =[2 01][X2]
(f)
[x2J[2
(g)
2J[x2
[ xl
±2
1
[
2 0 0
1
1 lIxl
2
1
X21
(1.
. Define state variables xl = 01. EXERCISES
29
m2
Figure 1. (ii) find the phase portrait.5). x2 = 91. and (iii) classify each equilibrium point as stable or unstable.10.4) For each of the following systems you are asked to (i) find the equilibrium points.19: Doublependulum of Exercise (1.19.5) Find a state space realization for the doublependulum system shown in Figure 1. 24 = 92. based on the analysis of the trajectories:
(a)
xi
21 X 1 +X2
{ 22
(b)
22
x2 + 2x1(21 + x2)
xl + 2x2(21 + x2)
{
(c)
= i2 =
±1
21 = cos 22
i2
sin x1
(1.1. x3 = 02.
(1.
Firstorder systems are harder to find in the literature. Section 1. [2]. Our simple model of the ballandbeam experiment was taken from reference [43]. [64]. Excellent sources on chaotic dynamical systems are References [26]. prepared by Drs. Section 1. references [15].
. for example. INTRODUCTION
Notes and References
We will often refer to linear timeinvariant systems throughout the rest of the book.1 is a slightly modified version of a laboratory experiment used at the University of Alberta. See Reference [28] for a more complete version of this model. See Strogatz [76] for an inspiring. The literature on chaotic systems is very extensive.8 is based mainly on this reference. Zhao. A. The magnetic suspension system of Section 1. or [40]. remarkably readable introduction to the subject. Lynch and Q. There are many good references on state space theory of LTI systems. See also Arnold [3] for an excellent indepth treatment of phaseplane analysis of nonlinear systems.30
CHAPTER 1. The pendulumonacart example follows References [9] and [13]. Our modification follows the model used in reference [81]. This model is not very accurate since it neglects the moment of inertia of the ball.4 follows Strogatz [76].9. [22] and [58]. Good sources on phaseplane analysis of secondorder systems are References [32] and [59]. See.
3)
. few proofs are offered. is thus contained in every set. and that A is a subset of B.2)
Assume now that A and B are non empty sets.
31
(2. As the material is standard and is available in many textbooks.1) (2. The emphasis has been placed in explaining the concepts. and we can write 0 C A. not essential for the understanding of the rest of the
book.1
Sets
We assume that the reader has some acquaintance with the notion of set.
(2. If A and B are sets and if every element of A is also an element of B. and pointing out their importance in later applications. A set is a collection of objects. the empty set.Chapter 2
Mathematical Preliminaries
This chapter collects some background material needed throughout the book. andbEB}. Then the Cartesian product A x B of A and B is the set of all ordered pairs of the form (a. b) with a E A and b E B:
AxB={(a. sometimes called elements or points. As it has no elements. denoted by 0. More detailed expositions can be found in the references listed at the end of the chapter.
2. This is. however. The union and intersection of A and B are defined by
AUB = {x:xEAorxEB} AnB = {x:xEAandxEB}. we write x E A. we say
that B includes A.b):aeA. and we write A C B or B D A. If A is a set and x is an element of A.
Throughout the rest of the book. p E F and x. Alternative names for vector spaces are linear spaces and linear vector spaces. d) of a non empty set X and a metric or
distance function d : X x X 4 ]R such that. x). z c X the following conditions hold:
(i) d(x.3
Vector Spaces
So far we have been dealing with metric spaces where the emphasis was placed on the notion of distance. y) = d(y. Z represents the set of integers.
CHAPTER 2. MATHEMATICAL PRELIMINARIES

Throughout the book, R and C denote the fields of real and complex numbers, respectively; Z denotes the set of integers; R+ and Z+ represent the subsets of nonnegative elements of R and Z, respectively. Finally, R^(m×n) denotes the set of real matrices with m rows and n columns.

2.2 Metric Spaces

In real and complex analysis many results depend solely on the idea of distance between numbers x and y. Metric spaces form a natural generalization of this concept.

Definition 2.1 A metric space is a pair (X, d), consisting of a set X and a function d : X × X → R, called a metric, such that for all x, y, z ∈ X:

(i) d(x, y) = 0 if and only if x = y.
(ii) d(x, y) = d(y, x).
(iii) d(x, z) ≤ d(x, y) + d(y, z).

Defining property (iii) is called the triangle inequality. Notice that, letting z = x in (iii) and taking account of (i) and (ii), we have

    0 = d(x, x) ≤ d(x, y) + d(y, x) = 2 d(x, y),

from which it follows that d(x, y) ≥ 0 for all x, y ∈ X.

2.3 Vector Spaces

The next step consists of providing the space with a proper algebraic structure. If we define addition of elements of the space, and also multiplication of elements of the space by real or complex numbers, we arrive at the notion of vector space. In the following definition F denotes a field of scalars that can be either the real or the complex number system.

Definition 2.2 A vector space over F is a nonempty set X with a function "+" : X × X → X and a function "·" : F × X → X such that, for all λ, µ ∈ F and all x, y, z ∈ X, we have:

(1) x + y = y + x (addition is commutative).
(2) x + (y + z) = (x + y) + z (addition is associative).
(3) ∃ 0 ∈ X : x + 0 = x ("0" is the neutral element in the operation of addition).
(4) ∃ −x ∈ X : x + (−x) = 0 (every x ∈ X has a negative −x ∈ X such that their sum is the neutral element defined in (3)).
(5) ∃ 1 ∈ F : 1 · x = x ("1" is the neutral element in the operation of scalar multiplication).
(6) λ(x + y) = λx + λy (first distributive property).
(7) (λ + µ)x = λx + µx (second distributive property).
(8) λ(µx) = (λµ)x (scalar multiplication is associative).

A vector space is called real or complex according to whether the field F is the real or the complex number system. We will restrict our attention to real vector spaces, so from now on we assume that F = R. According to this definition, a linear space is a structure formed by a set X furnished with two operations: vector addition and scalar multiplication. The essential feature of the definition is that the set X is closed under these two operations; that is, when two vectors x, y ∈ X are added, the resulting vector z = x + y is also an element of X, and when a vector x ∈ X is multiplied by a scalar a ∈ R, the resulting scaled vector ax is also in X.
A simple and very useful example of a linear space is the n-dimensional "Euclidean" space R^n, consisting of n-tuples of the form

    x = (x1, x2, ..., xn).

More precisely, if X = R^n and addition and scalar multiplication are defined as the usual coordinatewise operations,

    x + y = (x1 + y1, x2 + y2, ..., xn + yn),    λx = (λx1, λx2, ..., λxn),

then it is straightforward to show that R^n satisfies properties (1)-(8) of Definition 2.2. In the sequel we regard the elements of R^n as column vectors and denote by x^T the transpose of the vector x; that is, if x is the column vector with components x1, ..., xn, then x^T is the "row vector" x^T = [x1, x2, ..., xn]. The inner product of two vectors x, y ∈ R^n is

    x^T y = Σ_{i=1}^{n} xi yi.

Throughout the rest of the book we also encounter function spaces, namely, spaces where the vectors in X are functions of time. Our next example is perhaps the simplest space of this kind.

Example 2.1 Let X be the space of continuous real functions x = x(t) over the closed interval 0 ≤ t ≤ 1. It is easy to see that this X is a (real) linear space. Notice that it is closed with respect to addition, since the sum of two continuous functions is once again continuous.
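The coordinatewise operations and the inner product just defined can be spot-checked numerically. The following pure-Python fragment is an illustrative sketch added here (the function names are ours, not the book's):

```python
# Coordinatewise vector operations in R^n and the inner product x^T y,
# mirroring the definitions above (illustrative sketch only).

def vec_add(x, y):
    """x + y, coordinatewise."""
    return [xi + yi for xi, yi in zip(x, y)]

def scal_mul(lam, x):
    """lambda * x, coordinatewise."""
    return [lam * xi for xi in x]

def inner(x, y):
    """Inner product x^T y = sum_i x_i * y_i."""
    return sum(xi * yi for xi, yi in zip(x, y))

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(vec_add(x, y))      # [5.0, 7.0, 9.0]
print(scal_mul(2.0, x))   # [2.0, 4.0, 6.0]
print(inner(x, y))        # 1*4 + 2*5 + 3*6 = 32.0
```

Closure under both operations is visible here: the sum and the scaled vector are again lists of n real numbers, i.e., elements of R^n.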
2.3.1 Linear Independence and Basis

We now look at the concept of vector space in more detail. The following definition introduces the fundamental notion of linear independence.

Definition 2.3 A finite set {xi} of vectors is said to be linearly dependent if there exists a corresponding set {ai} of scalars, not all zero, such that

    Σ_i ai xi = 0.

On the other hand, if Σ_i ai xi = 0 implies that ai = 0 for each i, the set {xi} is said to be linearly independent.

Example 2.2 Every set containing a linearly dependent subset is itself linearly dependent.

Example 2.3 Consider the space R^n and let

    ei = (0, ..., 0, 1, 0, ..., 0),

where the 1 element is in the ith row and there are zeros in the other n − 1 rows. Then {e1, e2, ..., en} is called the set of unit vectors in R^n. This set is linearly independent, since

    λ1 e1 + λ2 e2 + ... + λn en = (λ1, λ2, ..., λn) = 0

is equivalent to

    λ1 = λ2 = ... = λn = 0.

Definition 2.4 A basis in a vector space X is a set of linearly independent vectors B such that every vector in X is a linear combination of elements in B.

Example 2.4 Consider the space R^n. The set of unit vectors ei, i = 1, ..., n, forms a basis for this space, since they are linearly independent and, moreover, any vector x ∈ R^n can be obtained as a linear combination of the ei's:

    x = λ1 e1 + λ2 e2 + ... + λn en = (λ1, λ2, ..., λn).

It is an important property of any finite-dimensional vector space with basis {b1, b2, ..., bn} that the linear combination of the basis vectors that produces a given vector x is unique. To prove that this is the case, assume that we have two different linear combinations producing the same x; that is,

    x = Σ_{i=1}^{n} λi bi = Σ_{i=1}^{n} ηi bi.

But then, by subtraction, we have that

    Σ_{i=1}^{n} (λi − ηi) bi = 0,

and since the bi, being a basis, are linearly independent, we must have λi = ηi for i = 1, ..., n, which means that the λi's are the same as the ηi's.
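Linear independence of a concrete set of vectors can be tested by row reduction: the vectors are independent exactly when the rank of the matrix they form equals the number of vectors. The sketch below (pure Python, our own helper names, not from the book) illustrates this for the unit vectors of Example 2.3:

```python
# Rank by Gaussian elimination: a finite set of vectors is linearly
# independent iff rank == number of vectors. Illustrative sketch only.

def rank(rows, tol=1e-12):
    """Row-reduce a list of row vectors and count the nonzero rows."""
    rows = [r[:] for r in rows]                  # work on a copy
    m = len(rows)
    n = len(rows[0]) if m else 0
    r = 0                                        # current pivot row
    for c in range(n):
        piv = next((i for i in range(r, m) if abs(rows[i][c]) > tol), None)
        if piv is None:
            continue                             # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, m):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(independent([e1, e2, e3]))          # True: the unit vectors
print(independent([e1, e2, [1, 1, 0]]))   # False: e1 + e2 - (e1 + e2) = 0
```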
For completeness, we now state the following theorem, which is a corollary of previous results. The reader is encouraged to complete the details of the proof.

Theorem 2.1 Every set of n + 1 vectors in an n-dimensional vector space X is linearly dependent. A set of n vectors in X is a basis if and only if it is linearly independent.

Definition 2.5 The dimension of a vector space X is the number of elements in any of its bases. In general, a finite-dimensional vector space can have an infinite number of bases.

2.3.2 Subspaces

Definition 2.6 A nonempty subset M of a vector space X is a subspace if for any pair of scalars λ and µ, λx + µy ∈ M whenever x and y ∈ M.

According to this definition, a subspace M in a vector space X is itself a vector space.

Example 2.5 In any vector space X, X is a subspace of itself.

Example 2.6 In the three-dimensional space R^3, any two-dimensional plane passing through the origin is a subspace. Similarly, any line passing through the origin is a one-dimensional subspace of R^3. The necessity of passing through the origin comes from the fact that, along with any vector x, a subspace also contains 0 = 1·x + (−1)·x.

Definition 2.7 Given a set of vectors S in a vector space X, the intersection of all subspaces containing S is called the subspace generated or spanned by S, or simply the span of S.

The next theorem gives a useful characterization of the span of a set of vectors.

Theorem 2.2 Let S be a set of vectors in a vector space X. The subspace M spanned by S is the set of all linear combinations of the members of S.

Proof: First we need to show that the set of linear combinations of elements of S is a subspace of X. This is straightforward, since linear combinations of linear combinations of elements of S are again linear combinations of the elements of S. Denote this subspace by N. It is immediate that N is a subspace containing every element of S, and since M is the intersection of all such subspaces, M ⊂ N. For the converse, notice that M is also a subspace which contains S, and therefore contains all linear combinations of the elements of S. Thus N ⊂ M, and the theorem is proved.
2.3.3 Normed Vector Spaces

As defined so far, vector spaces introduce a very useful algebraic structure by incorporating the operations of vector addition and scalar multiplication. The limitation of the concept of vector space is that the notion of distance associated with metric spaces has been lost. To recover this notion, we now introduce the concept of normed vector space.

Definition 2.8 A normed vector space (or simply a normed space) is a pair (X, ||·||) consisting of a vector space X and a norm ||·|| : X → R such that

(i) ||x|| = 0 if and only if x = 0.
(ii) ||λx|| = |λ| ||x||, ∀λ ∈ R, ∀x ∈ X.
(iii) ||x + y|| ≤ ||x|| + ||y||, ∀x, y ∈ X (the triangle inequality).

Notice that, letting y = −x ≠ 0 in (iii), and taking account of (i) and (ii), we have

    0 = ||x + (−x)|| ≤ ||x|| + ||−x|| = ||x|| + |−1| ||x|| = 2 ||x||;

thus the norm of a vector x is nonnegative. Also, by defining property (iii),

    ||x − y|| = ||x − z + z − y|| ≤ ||x − z|| + ||z − y||,    ∀x, y, z ∈ X,

which shows that every normed linear space may be regarded as a metric space with distance defined by

    d(x, y) = ||x − y||.    (2.4)

The following example introduces the most commonly used norms in the Euclidean space R^n.
Example 2.7 Consider again the vector space R^n. For each p, 1 ≤ p < ∞, the function ||·||_p, known as the p-norm in R^n, where

    ||x||_p := (|x1|^p + ... + |xn|^p)^(1/p),    (2.5)

makes this space a normed vector space. In particular,

    ||x||_1 := |x1| + ... + |xn|,    (2.6)
    ||x||_2 := (|x1|^2 + ... + |xn|^2)^(1/2).    (2.7)

Also, the ∞-norm is defined as follows:

    ||x||_∞ := max_i |xi|.    (2.8)

By far the most commonly used of the p-norms in R^n is the 2-norm, the so-called Euclidean norm. The distinction is somewhat superfluous, in that all p-norms in R^n are equivalent in the sense that, given any two norms ||·||_a and ||·||_b on R^n, there exist constants k1 and k2 such that (see Exercise 2.6)

    k1 ||x||_a ≤ ||x||_b ≤ k2 ||x||_a,    ∀x ∈ R^n.

Many of the theorems encountered throughout the book, as well as some of the properties of functions and sequences (such as continuity and convergence), depend only on the three defining properties of a norm, and not on the specific norm adopted. In these cases, to simplify notation, it is customary to drop the subscript p to indicate that the norm can be any p-norm.

Two frequently used inequalities involving p-norms in R^n are the following:

Hölder's inequality: Let p ∈ R, p > 1, and let q ∈ R be such that

    1/p + 1/q = 1.

Then

    |x^T y| ≤ ||x||_p ||y||_q,    ∀x, y ∈ R^n.    (2.9)

Minkowski's inequality: Let p ∈ R, p ≥ 1. Then

    ||x + y||_p ≤ ||x||_p + ||y||_p,    ∀x, y ∈ R^n.    (2.10)
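The p-norms and the two inequalities above are easy to spot-check numerically. The sketch below (pure Python, names ours; a failed assertion would signal a bug, not a counterexample to the theorems):

```python
import math

# The p-norms of Example 2.7, plus a numerical spot check of Holder's
# and Minkowski's inequalities on one sample pair of vectors.

def pnorm(x, p):
    """||x||_p for 1 <= p <= infinity."""
    if p == math.inf:
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

x = [1.0, -2.0, 3.0]
y = [4.0, 0.0, -1.0]

print(pnorm(x, 1), pnorm(x, 2), pnorm(x, math.inf))  # 6.0, ~3.742, 3.0

# Holder with p = q = 2 (Cauchy-Schwarz): |x^T y| <= ||x||_2 ||y||_2
lhs = abs(sum(a * b for a, b in zip(x, y)))
assert lhs <= pnorm(x, 2) * pnorm(y, 2)

# Minkowski: ||x + y||_p <= ||x||_p + ||y||_p
s = [a + b for a, b in zip(x, y)]
for p in (1, 2, math.inf):
    assert pnorm(s, p) <= pnorm(x, p) + pnorm(y, p)
print("inequalities hold for this sample")
```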
2.4 Matrices

We assume that the reader has some acquaintance with the elementary theory of matrices and matrix operations. We now introduce some notation and terminology, as well as some useful properties.

Transpose: If A is an m × n matrix, its transpose, denoted A^T, is the n × m matrix obtained by interchanging the rows and columns of A. The following properties are straightforward to prove:

    (A^T)^T = A
    (A + B)^T = A^T + B^T (transpose of the sum of two matrices)
    (AB)^T = B^T A^T (transpose of the product of two matrices)

Symmetric matrix: A is symmetric if A = A^T.

Skew-symmetric matrix: A is skew symmetric if A = −A^T.

Orthogonal matrix: A matrix Q is orthogonal if Q^T Q = Q Q^T = I, or equivalently, if Q^T = Q^(−1).

Inverse matrix: A matrix A^(−1) ∈ R^(n×n) is said to be the inverse of the square matrix A ∈ R^(n×n) if

    A A^(−1) = A^(−1) A = I.

It can be verified that (A^(−1))^(−1) = A and (AB)^(−1) = B^(−1) A^(−1), provided that A and B are square of the same size, and invertible.

Rank of a matrix: The rank of a matrix A, denoted rank(A), is the maximum number of linearly independent columns in A.

Every matrix A ∈ R^(m×n) can be considered as a linear function A : R^n → R^m; that is, the mapping y = Ax maps the vector x ∈ R^n into the vector y ∈ R^m.
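The transpose-of-a-product identity (AB)^T = B^T A^T can be verified on a small example. A pure-Python sketch (helper names ours, not the book's):

```python
# Check (AB)^T = B^T A^T on a small integer example, where the identity
# holds exactly. Illustrative sketch only.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2
B = [[7, 8, 9], [0, 1, 2]]     # 2 x 3

left = transpose(matmul(A, B))              # (AB)^T, a 3 x 3 matrix
right = matmul(transpose(B), transpose(A))  # B^T A^T, also 3 x 3
print(left == right)   # True
```

Note how the dimensions work out: AB is 3 × 3, while B^T is 3 × 2 and A^T is 2 × 3, so B^T A^T is again 3 × 3.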
Definition 2.9 The null space of a linear function A : X → Y is the set N(A) defined by

    N(A) = {x ∈ X : Ax = 0}.

It is straightforward to show that N(A) is a vector space. The dimension of this vector space, denoted dim N(A), is important because of the following theorem, which we state without proof.

Theorem 2.3 Let A ∈ R^(m×n). Then A has the following property:

    rank(A) + dim N(A) = n.

2.4.1 Eigenvalues, Eigenvectors, and Diagonal Forms

Definition 2.10 Consider a matrix A ∈ R^(n×n). A scalar λ ∈ F is said to be an eigenvalue, and a nonzero vector x an eigenvector of A associated with this eigenvalue, if

    Ax = λx,  or equivalently  (A − λI)x = 0;

thus x is an eigenvector associated with λ if and only if x is in the null space of (A − λI).

Eigenvalues and eigenvectors are fundamental in matrix theory and have numerous applications. We first analyze their use in the most elementary form of diagonalization.

Theorem 2.4 If A ∈ R^(n×n), with eigenvalues λ1, λ2, ..., λn, has n linearly independent eigenvectors v1, v2, ..., vn, then it can be expressed in the form

    A = S D S^(−1),

where D = diag{λ1, λ2, ..., λn} and

    S = [v1 v2 ... vn].
Proof: By definition, the columns of the matrix S are the eigenvectors of A; that is, we have

    A S = A [v1 ... vn] = [λ1 v1 ... λn vn],

and we can rewrite the last matrix in the following form:

    [λ1 v1 ... λn vn] = [v1 ... vn] diag{λ1, ..., λn} = S D.

Thus, AS = SD. The columns of the matrix S are, by assumption, linearly independent. It follows that S is invertible and we can write

    A = S D S^(−1),  or also  D = S^(−1) A S.

This completes the proof of the theorem.

Special Case: Symmetric Matrices

Symmetric matrices have several important properties. Here we mention two of them, without proof:

(i) The eigenvalues of a symmetric matrix A ∈ R^(n×n) are all real.

(ii) Every symmetric matrix A is diagonalizable. Moreover, if A is symmetric, then the diagonalizing matrix S can be chosen to be an orthogonal matrix P; that is, if A = A^T, then there exists a matrix P satisfying P^T = P^(−1) such that

    P^(−1) A P = P^T A P = D.
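Theorem 2.4 can be checked by hand on a 2 × 2 example. Take A = [[4, 1], [2, 3]]; its characteristic polynomial λ² − 7λ + 10 gives eigenvalues 5 and 2, with eigenvectors (1, 1) and (1, −2). The sketch below (our own worked example, not from the book) verifies Av = λv and reassembles A = S D S^(−1):

```python
# Verify A v = lambda v and A = S D S^{-1} for a hand-worked 2x2 example.
# Eigenpairs of A = [[4, 1], [2, 3]]: (5, [1, 1]) and (2, [1, -2]).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[4.0, 1.0], [2.0, 3.0]]
eigs = [(5.0, [1.0, 1.0]), (2.0, [1.0, -2.0])]

for lam, v in eigs:
    Av = [sum(a * vi for a, vi in zip(row, v)) for row in A]
    assert Av == [lam * vi for vi in v]          # A v = lambda v, exactly

S = [[1.0, 1.0], [1.0, -2.0]]            # columns are the eigenvectors
D = [[5.0, 0.0], [0.0, 2.0]]
Sinv = [[2/3, 1/3], [1/3, -1/3]]         # inverse of S, computed by hand

print(matmul(matmul(S, D), Sinv))        # recovers A (up to rounding)
```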
2.4.2 Quadratic Forms

Given a matrix A ∈ R^(n×n), a function q : R^n → R of the form

    q(x) = x^T A x,    x ∈ R^n,

is called a quadratic form. The matrix A in this definition can be any real matrix. There is, however, no loss of generality in restricting this matrix to be symmetric. To see this, notice that any matrix A ∈ R^(n×n) can be rewritten as the sum of a symmetric and a skew-symmetric matrix, as shown below:

    A = (1/2)(A + A^T) + (1/2)(A − A^T).

Clearly,

    B = (1/2)(A + A^T) = B^T,  and  C = (1/2)(A − A^T) = −C^T;

thus B is symmetric, whereas C is skew symmetric. For the skew-symmetric part, given that x^T C x is a real number, we have that

    x^T C x = (x^T C x)^T = x^T C^T x = −x^T C x.

Hence, the real number x^T C x must be identically zero. This means that the quadratic form associated with a skew-symmetric matrix is identically zero, and therefore x^T A x = x^T B x.

Definition 2.11 Let A ∈ R^(n×n) be a symmetric matrix and let x ∈ R^n. Then A is said to be:

(i) Positive definite if x^T A x > 0, ∀x ≠ 0.
(ii) Positive semidefinite if x^T A x ≥ 0, ∀x ≠ 0.
(iii) Negative definite if x^T A x < 0, ∀x ≠ 0.
(iv) Negative semidefinite if x^T A x ≤ 0, ∀x ≠ 0.
(v) Indefinite if x^T A x can take both positive and negative values.

It is immediate that the positive/negative character of a symmetric matrix is determined completely by its eigenvalues. Indeed, given A = A^T, there exists P such that P^(−1) A P = P^T A P = D. Thus, defining x = P y, we have that

    x^T A x = y^T P^T A P y = y^T P^(−1) A P y = y^T D y = λ1 y1^2 + λ2 y2^2 + ... + λn yn^2,

where λi, i = 1, 2, ..., n, are the eigenvalues of A. From this construction, it follows that

(i) A is positive definite if and only if all of its (real) eigenvalues are positive, λi > 0.
(ii) A is positive semidefinite if and only if λi ≥ 0, ∀i = 1, ..., n.
(iii) A is negative definite if and only if all of its eigenvalues are negative, λi < 0.
(iv) A is negative semidefinite if and only if λi ≤ 0, ∀i = 1, ..., n.
(v) A is indefinite if and only if it has both positive and negative eigenvalues.

The following theorem will be useful in later sections.

Theorem 2.5 (Rayleigh Inequality) Consider a nonsingular symmetric matrix Q ∈ R^(n×n), and let λmin and λmax be, respectively, the minimum and maximum eigenvalues of Q. Then, for any x ∈ R^n,

    λmin(Q) ||x||^2 ≤ x^T Q x ≤ λmax(Q) ||x||^2.    (2.11)

Proof: The matrix Q, which is symmetric, is diagonalizable and must have a full set of linearly independent eigenvectors u1, u2, ..., un associated with the eigenvalues λ1, λ2, ..., λn. Moreover, we can always assume that the eigenvectors u1, ..., un are orthonormal. Under these conditions, the set of eigenvectors {u1, u2, ..., un} forms a basis in R^n, and every vector x can be written as a linear combination of the elements of this set. Consider an arbitrary vector x. We can assume that ||x|| = 1 (if this is not the case, divide by ||x||). We can write

    x = x1 u1 + x2 u2 + ... + xn un

for some scalars x1, x2, ..., xn. Thus

    x^T Q x = x^T Q [x1 u1 + ... + xn un] = x^T [λ1 x1 u1 + ... + λn xn un] = λ1 |x1|^2 + ... + λn |xn|^2,

since the ui are orthonormal. Given that

    Σ_{i=1}^{n} |xi|^2 = ||x||^2 = 1,

this implies that

    λmin(Q) ≤ x^T Q x ≤ λmax(Q).    (2.12)

Finally, it is worth noting that (2.12) is a special case of (2.11) when the norm of x is 1.
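The Rayleigh inequality (2.11) can be sampled numerically. For Q = [[2, 1], [1, 2]] the eigenvalues are 1 and 3 (by hand: det(Q − λI) = (2 − λ)² − 1). The sketch below (our own example, not the book's) checks the bounds on random vectors:

```python
import random

# Spot check of lambda_min ||x||^2 <= x^T Q x <= lambda_max ||x||^2
# for Q = [[2, 1], [1, 2]] with eigenvalues 1 and 3. Illustrative only.

Q = [[2.0, 1.0], [1.0, 2.0]]
lam_min, lam_max = 1.0, 3.0

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    nx2 = x[0] ** 2 + x[1] ** 2                          # ||x||^2
    qx = sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))
    assert lam_min * nx2 - 1e-9 <= qx <= lam_max * nx2 + 1e-9
print("Rayleigh bounds hold for 1000 random samples")
```

The bounds are tight along the eigenvectors: for x = (1, −1), x^T Q x = 2 = λmin ||x||².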
2.5 Basic Topology

A few elements of basic topology will be needed throughout the book. Let X be a metric space. We say that:

(a) A neighborhood of a point p ∈ X is a set Nr(p) ⊂ X consisting of all points q ∈ X such that d(p, q) < r.

(b) Let A ⊂ X and consider a point p ∈ X. Then p is said to be a limit point of A if every neighborhood of p contains a point q ≠ p such that q ∈ A. It is important to notice that p itself need not be in the set A.

(c) A point p is an interior point of a set A ⊂ X if there exists a neighborhood N of p such that N ⊂ A.

(d) A set A ⊂ X is said to be open if every point of A is an interior point.

(e) The complement of A ⊂ X is the set A^c = {p ∈ X : p ∉ A}.

(f) A set A ⊂ X is said to be closed if it contains all of its limit points. Equivalently, A is closed if and only if A^c is open.

(g) A set A ⊂ X is bounded if there exist a real number b and a point q ∈ A such that d(p, q) < b, ∀p ∈ A.

2.5.1 Basic Topology in R^n

All the previous concepts can be specialized to the Euclidean space R^n. We emphasize those concepts that we will use more frequently.

Neighborhood: A neighborhood of a point p ∈ A ⊂ R^n is the set Br(p) defined as follows:

    Br(p) = {x ∈ R^n : ||x − p|| < r}.

Neighborhoods of this form will be used very frequently and will sometimes be referred to as an open ball with center p and radius r.

Open set: A set A ⊂ R^n is said to be open if for every p ∈ A one can find a neighborhood Br(p) ⊂ A.

Bounded set: A set A ⊂ R^n is said to be bounded if there exists a real number M > 0 such that ||x|| < M, ∀x ∈ A.

Compact set: A set A ⊂ R^n is said to be compact if it is closed and bounded.

Convex set: A set A ⊂ R^n is said to be convex if, whenever x, y ∈ A, the point

    θx + (1 − θ)y,    0 < θ < 1,

also belongs to A.

2.6 Sequences

Definition 2.12 A sequence of vectors x1, x2, ... in a metric space (X, d), denoted {xn}, is said to converge if there is a point x0 ∈ X with the property that for every real number ε > 0 there is an integer N such that n ≥ N implies that d(xn, x0) < ε. We then write x0 = lim xn, or xn → x0, and call x0 the limit of the sequence {xn}.

It is important to notice that in Definition 2.12 convergence must be taking place in the metric space (X, d). In other words, a sequence might be "trying" to converge to a point that does not belong to the space, and thus not converging, as shown in the following examples.

Example 2.8 Let X1 = R, let d(x, y) = |x − y|, and consider the sequence

    {xn} = {1, 1.4, 1.41, 1.414, ...}

(each term of the sequence is found by adding the corresponding digit of √2). We have that xn is trying to converge to √2; that is, given ε > 0 there is an integer N such that |√2 − xn| < ε for n ≥ N, and since √2 ∈ R we conclude that xn is convergent in (X1, d).

Example 2.9 Let X2 = Q, the set of rational numbers (x ∈ Q ⇒ x = a/b, with a, b ∈ Z, b ≠ 0), let d(x, y) = |x − y|, and consider the sequence of the previous example. Once again we have that xn is trying to converge to √2. However, in this case √2 ∉ Q; that is, lim xn = x* with x* ∉ X2, and thus we conclude that xn is not convergent in (X2, d).

Definition 2.13 A sequence {xn} in a metric space (X, d) is said to be a Cauchy sequence if for every real ε > 0 there is an integer N such that d(xn, xm) < ε for all n, m ≥ N.

It is easy to show that every convergent sequence is a Cauchy sequence. The converse is, however, not true; that is, a Cauchy sequence is not necessarily convergent, as shown in the following example.

Example 2.10 Let X = (0, 1) (i.e., X = {x ∈ R : 0 < x < 1}), let d(x, y) = |x − y|, and consider the sequence {xn} = 1/n. It follows that {xn} is a Cauchy sequence, since

    d(xn, xm) = |1/n − 1/m| ≤ 1/n + 1/m ≤ 2/N,  where N = min(n, m),

and thus d(xn, xm) < ε provided that n, m ≥ 2/ε. It is not, however, convergent in X, since lim_{n→∞} xn = 0 ∉ X.

An important class of metric spaces are the so-called complete metric spaces.

Definition 2.14 A metric space (X, d) is called complete if and only if every Cauchy sequence converges (to a point of X). In other words, (X, d) is a complete metric space if for every sequence {xn} satisfying d(xn, xm) < ε for n, m ≥ N there exists x ∈ X such that d(xn, x) → 0 as n → ∞.

The simplest example of a complete metric space is the real-number system with the metric d = |x − y|. We will encounter several other important examples in the sequel. If a space is incomplete, then it has "holes"; in incomplete spaces one needs to "guess" the limit of a sequence to prove convergence. If a space is known to be complete, on the other hand, then to check the convergence of a sequence to some point of the space, it is sufficient to check whether the sequence is Cauchy.

2.7 Functions

Definition 2.15 Let A and B be abstract sets. A function from A to B is a set f of ordered pairs in the Cartesian product A × B with the property that if (a, b) and (a, c) are elements of f, then b = c. In other words, a function is a subset of the Cartesian product between A and B where each argument can have one and only one image. The set of elements of A that can occur as first members of elements in f is called the domain of f. The set of elements of B that can occur as second members of elements of f is called the range of f. Alternative names for functions used in this book are map, mapping, operator, and transformation.

A function f is called injective if f(x1) = f(x2) implies that x1 = x2 for every x1, x2 ∈ A. A function f is called surjective if the range of f is the whole of B. It is called bijective if it is both injective and surjective.

If f is a function with domain D and D1 is a subset of D, then it is often useful to define a new function f1 with domain D1 as follows:

    f1 = {(a, b) ∈ f : a ∈ D1}.

The function f1 is called the restriction of f to the set D1.

Definition 2.16 Let D1, D2 ⊂ R^n, and consider functions f1 and f2 of the form f1 : D1 → R^n and f2 : D2 → R^n. If f1(D1) ⊂ D2, then the composition of f2 and f1 is the function f2 ∘ f1, defined by

    (f2 ∘ f1)(x) = f2[f1(x)],    x ∈ D1.

Definition 2.17 Let (X, d1) and (Y, d2) be metric spaces and consider a function f : X → Y. We say that f is continuous at x0 if for every real ε > 0 there exists a real δ = δ(ε, x0) such that d1(x, x0) < δ implies that d2(f(x), f(x0)) < ε. If f is continuous at every point of X, then f is said to be continuous on X.

Equivalently, f : X → Y is continuous if for every sequence {xn} that converges to x, the corresponding sequence {f(xn)} converges to y = f(x). In the special case of functions of the form f : R^n → R^m, that is, functions mapping Euclidean spaces, we say that f is continuous at x ∈ R^n if given ε > 0 there exists δ > 0 such that

    ||x − y|| < δ  ⇒  ||f(x) − f(y)|| < ε.

This definition is clearly local.

Definition 2.18 Let (X, d1) and (Y, d2) be metric spaces and consider a function f : X → Y. Then f is called uniformly continuous on X if for every ε > 0 there exists δ = δ(ε) > 0 such that d1(x, y) < δ implies that d2(f(x), f(y)) < ε, for all x, y ∈ X.

Remarks: The difference between ordinary continuity and uniform continuity is that in the former δ(ε, x0) depends on both ε and the particular x0 ∈ X, while in the latter δ(ε) is only a function of ε. Clearly, uniform continuity is stronger than continuity; if a function is uniformly continuous on a set A, then it is continuous on A. The converse is in general not true; that is, not every continuous function is uniformly continuous on the same set. Consider, for example, the function f(x) = x^2, x > 0, which is continuous over (0, ∞) but not uniformly continuous. The exception to this occurs when working with compact sets. Indeed, consider a function f mapping a compact set X into a metric space Y. Then f is uniformly continuous if and only if it is continuous.
2.7.1 Bounded Linear Operators and Matrix Norms

Now consider a function L mapping a vector space X into a vector space Y.

Definition 2.19 A function L : X → Y is said to be a linear operator (or a linear map, or a linear transformation) if and only if, given any x1, x2 ∈ X and any λ, µ ∈ R,

    L(λx1 + µx2) = λL(x1) + µL(x2).    (2.13)

The function L is said to be a bounded linear operator if there exists a constant M such that

    ||L(x)|| ≤ M ||x||,    ∀x ∈ X.    (2.14)

The smallest constant M satisfying (2.14) is called the operator norm. A special case of interest is that when the vector spaces X and Y are R^n and R^m, respectively. In this case all linear functions A : R^n → R^m are of the form

    y = Ax,    x ∈ R^n, y ∈ R^m,

where A is an m × n matrix of real elements. The operator norm applied to this case originates a matrix norm. Indeed, given A ∈ R^(m×n), we define

    ||A||_p := max_{x ≠ 0} ||Ax||_p / ||x||_p,    (2.15)

where all the norms on the right-hand side of (2.15) are vector norms. This norm is sometimes called the induced norm, because it is "induced" by the p vector norm. Important special cases are p = 1, 2, and ∞. It is not difficult to show that

    ||A||_1 = max_{||x||_1 = 1} ||Ax||_1 = max_j Σ_{i=1}^{m} |aij|,    (2.16)

    ||A||_2 = max_{||x||_2 = 1} ||Ax||_2 = [λmax(A^T A)]^(1/2),    (2.17)

    ||A||_∞ = max_{||x||_∞ = 1} ||Ax||_∞ = max_i Σ_{j=1}^{n} |aij|,    (2.18)

where λmax(A^T A) represents the maximum eigenvalue of A^T A.
2.8 Differentiability

Definition 2.20 A function f : R → R is said to be differentiable at x if f is defined in an open interval (a, b) ⊂ R containing x and the limit

    f′(x) = lim_{h→0} [f(x + h) − f(x)] / h

exists. The limit f′(x) is called the derivative of f at x. The function f is said to be differentiable if it is differentiable at each x in its domain.

Now consider the case of a function f : R^n → R^m.

Definition 2.21 A function f : R^n → R^m is said to be differentiable at a point x if f is defined in an open set D ⊂ R^n containing x and there exists a linear transformation f′(x) : R^n → R^m such that

    lim_{h→0} ||f(x + h) − f(x) − f′(x)h|| / ||h|| = 0.

Notice, of course, that in Definition 2.21 h ∈ R^n and f(x + h) ∈ R^m; since D is open, x + h ∈ D provided that ||h|| is small enough. If the derivative exists, then

    f(x + h) − f(x) = f′(x)h + r(h),

where the "remainder" r(h) is small in the sense that

    lim_{h→0} ||r(h)|| / ||h|| = 0.

The derivative f′(x) defined in Definition 2.21 is called the differential, or the total derivative of f at x, to distinguish it from the partial derivatives that we discuss next. In the following discussion we denote by fi, 1 ≤ i ≤ m, the components of the function f, so that

    f(x) = [f1(x), f2(x), ..., fm(x)]^T,

and by {e1, e2, ..., en} the standard basis in R^n.
Definition 2.22 Consider a function f : R^n → R^m and let D be an open set in R^n. For x ∈ D ⊂ R^n, 1 ≤ i ≤ m, and 1 ≤ j ≤ n, we define

    Dj fi := ∂fi/∂xj := lim_{Δ→0} [fi(x + Δ ej) − fi(x)] / Δ,    Δ ∈ R,

provided that the limit exists. The functions Dj fi (or ∂fi/∂xj) are called the partial derivatives of f.

Differentiability of a function, as defined in Definition 2.21, is not implied by the existence of the partial derivatives of the function. Even for continuous functions, the existence of all partial derivatives does not imply differentiability in the sense of Definition 2.21. For example, the function f : R^2 → R defined by f(x1, x2) = x1 x2 / (x1^2 + x2^2) for (x1, x2) ≠ (0, 0), with f(0, 0) = 0, is not continuous at (0, 0), and so it is not differentiable there; nevertheless, both ∂f/∂x1 and ∂f/∂x2 exist at every point in R^2.

On the other hand, if f : R^n → R^m is differentiable at x, then the partial derivatives exist at x, and they determine f′(x); that is, f′(x) is given by the Jacobian matrix, or Jacobian transformation, [f′(x)]:

    [f′(x)] = [∂fi/∂xj],    i = 1, ..., m (rows), j = 1, ..., n (columns).

Moreover, if f : R^n → R^m is differentiable on an open set D ⊂ R^n, then it is continuous on D. The derivative of the function, f′, may or may not itself be continuous. The following definition introduces the concept of a continuously differentiable function.

Definition 2.23 A differentiable mapping f of an open set D ⊂ R^n into R^m is said to be continuously differentiable in D if, for every ε > 0 and every x ∈ D, there exists δ > 0 such that

    ||f′(y) − f′(x)|| < ε

provided that y ∈ D and ||x − y|| < δ.
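The limit in Definition 2.22 suggests a numerical approximation: replace the limit by a small finite Δ. The sketch below builds the Jacobian by forward differences and compares it with hand-computed partial derivatives (our own example function, not from the book):

```python
import math

# Approximate the Jacobian matrix by forward differences,
# (f_i(x + h e_j) - f_i(x)) / h, and compare with the exact partials.

def f(x):
    x1, x2 = x
    return [x1 ** 2 * x2, 5.0 * x1 + math.sin(x2)]

def jacobian_fd(f, x, h=1e-6):
    fx = f(x)
    m, n = len(fx), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xh = list(x)
        xh[j] += h                       # x + h e_j
        fxh = f(xh)
        for i in range(m):
            J[i][j] = (fxh[i] - fx[i]) / h
    return J

x = [1.0, 2.0]
J_exact = [[2 * x[0] * x[1], x[0] ** 2],     # partials of x1^2 * x2
           [5.0, math.cos(x[1])]]            # partials of 5 x1 + sin(x2)
J_num = jacobian_fd(f, x)
print(all(abs(J_num[i][j] - J_exact[i][j]) < 1e-4
          for i in range(2) for j in range(2)))   # True
```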
The following theorem, stated without proof, implies that the continuously differentiable property can be checked directly by studying the partial derivatives of the function.

Theorem 2.6 Suppose that f maps an open set D ⊂ R^n into R^m. Then f is continuously differentiable in D if and only if the partial derivatives ∂fi/∂xj, 1 ≤ i ≤ m, 1 ≤ j ≤ n, exist and are continuous on D.

Because it is relatively easier to work with the partial derivatives of a function, Definition 2.23 is often restated by saying that a function f : R^n → R^m is continuously differentiable at a point x0 if the partial derivatives ∂fi/∂xj, 1 ≤ i ≤ m, 1 ≤ j ≤ n, exist and are continuous at x0. A function f : R^n → R^m is said to be continuously differentiable on a set D ⊂ R^n if it is continuously differentiable at every point of D.

It is easy to show that the set of functions f : R^n → R^m with continuous partial derivatives, together with the operations of addition and scalar multiplication, forms a vector space. This vector space is denoted by C^1. In general, if a function f : R^n → R^m has continuous partial derivatives up to order k, the function is said to be in C^k, and we write f ∈ C^k. If a function f has continuous partial derivatives of any order, then it is said to be smooth, and we write f ∈ C^∞. Abusing this terminology and notation slightly, f is said to be sufficiently smooth when f has continuous partial derivatives of any required order.

Summary: Given a function f : R^n → R^m with continuous partial derivatives, abusing the notation slightly, we shall use f′(x) or Df(x) to represent both the Jacobian matrix and the total derivative of f at x. If f : R^n → R, then the Jacobian matrix is the row vector

    [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn].

We will frequently denote this vector by either ∂f/∂x or ∇f(x):

    ∇f(x) = ∂f/∂x = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn].

This vector is called the gradient of f, because it identifies the direction of steepest ascent of f.

2.8.1 Some Useful Theorems

We collect a number of well-known results that are often useful.

Property 2.1 (Chain Rule) Let f1 : D1 ⊂ R^n → R^n and f2 : D2 ⊂ R^n → R^n. If f2 is differentiable at a ∈ D2 and f1 is differentiable at f2(a), then f1 ∘ f2 is differentiable at a, and

    D(f1 ∘ f2)(a) = Df1(f2(a)) Df2(a).

Theorem 2.7 (Mean-Value Theorem) Let f : [a, b] → R be continuous on the closed interval [a, b] and differentiable in the open interval (a, b). Then there exists a point c in (a, b) such that

    f(b) − f(a) = f′(c)(b − a).

A useful extension of this result to functions f : R^n → R^m is given below. In the following theorem Ω represents an open subset of R^n.

Theorem 2.8 Consider the function f : Ω ⊂ R^n → R^m, and suppose that the open set Ω contains the points a and b as well as the line segment S joining these points, and assume that f is differentiable at every point of S. Then there exists a point c on S such that

    ||f(b) − f(a)|| ≤ ||f′(c)(b − a)||.

Theorem 2.9 (Inverse Function Theorem) Let f : R^n → R^n be continuously differentiable in an open set D containing the point x0 ∈ R^n, and let the Jacobian f′(x0) be nonsingular. Then there exist an open set U0 containing x0 and an open set W0 containing f(x0) such that f : U0 → W0 has a continuous inverse f^(−1) : W0 → U0 that is differentiable and for all y = f(x) ∈ W0 satisfies

    Df^(−1)(y) = [Df(x)]^(−1).

2.9 Lipschitz Continuity

We defined continuous functions earlier in this chapter. We now introduce a stronger form of continuity, known as Lipschitz continuity. As will be seen later, this property plays a major role in the study of the solution of differential equations.
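The point c of Theorem 2.7 can be located numerically once f′ is known. For f(x) = x³ on [0, 2], the mean slope is (f(2) − f(0))/2 = 4, and solving f′(c) = 3c² = 4 gives c = √(4/3). The sketch below (our own worked example) finds c by bisection, exploiting the fact that f′ is increasing on this interval:

```python
# Locate the mean-value point c of Theorem 2.7 for f(x) = x^3 on [0, 2]
# by bisection on f'(c) - (f(b) - f(a))/(b - a). Here c = sqrt(4/3).

def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)          # = 4.0

lo, hi = a, b                            # f' is increasing on [0, 2]
for _ in range(60):                      # bisection on f'(c) = slope
    c = 0.5 * (lo + hi)
    if fprime(c) < slope:
        lo = c
    else:
        hi = c

print(round(c, 6))                       # ~1.154701 = sqrt(4/3)
```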
Definition 2.
10 provides a mechanism to calculate the Lipschitz constant (equation (2. we have that
I0(1) . Now define the function 0 : [0.x211= LIIxi . It is in fact very easy to find counterexamples that show that this in not the case. there exist 0l E (0.x211. where
B. we conclude that the line segment y(O) = Oxl + (1 ..8.f (X2)11
ax
11x1 . the converse is not true.(xo)={xED:lxxol <r}
and consider two arbitrary points xl. L can be estimated as follows:
11af11
< L. calculating 0'(O) using the chain rule. LIPSCHITZ CONTINUITY
53
Notice that if f : 1R' 4 1R' is Lipschitz on D C 1R". that is.10 If a function f
:
IR" x R' is continuously differentiable on an open set
D C IIE". then. 1] r R' as follows:
0(0)=fo'r=f(7(O))
By the meanvalue theorem 2. However. f = f (x.2. we can define 8 = e/L. that is. then the previous definition can be extended as follows
. Noticing that the closed ball Br is a convex set. substituting 0(1) = f (xi).x2ED
which implies that f is uniformly continuous. t). then it is locally Lipschitz on D.9. 0(0) = f (X2)
If(XI) .0(0)11= 110'(Bl)11
Thus.
VX E Br(xo). and we have that
I41 x211<8 =>
1f(xi)f(x2)11<Lllxlx211<LLe. is contained in Br(xo). 1) such that
II0(1) .O)x2i 0 < 0 < 1. given e > 0.20)
Theorem 2.
Theorem 2.
(2.
Proof: Consider xo E D and let r > 0 be small enough to ensure that Br(xo) C D.20). x2 E Br.
xi. Indeed.0(0)II =
ax(xl x2)
and.
ax
If the function f is also a function of t. The next theorem gives an important
sufficient condition for Lipschitz continuity. not every uniformly continuous function is Lipschitz.
xn1)
(2. T].yll
. t) < L
(2. Proof: A point xo E X satisfying f (xo) = xo is called a fixed point.25 A function f (x. and denote x1 = f (xo).
2. y c S. f (xn1)
d(xn. the contraction mapping principle is sometimes called the fixedpoint theorem. T.t)_f(y.
Theorem 2.
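Theorem 2.10 suggests a practical recipe: bound the derivative over the region of interest and use that bound as L. The sketch below is our own illustration, not an example from the text; it takes f(x) = sin x (so sup |f'| = 1 on any interval containing 0), estimates L by sampling |f'|, and spot-checks the Lipschitz inequality (2.20) on sample pairs.

```python
import math

def lipschitz_estimate(f, df, lo, hi, samples=1001):
    """Estimate a Lipschitz constant for f on [lo, hi] by bounding |f'|,
    as suggested by Theorem 2.10 (L >= sup |df/dx| over the set)."""
    return max(abs(df(lo + (hi - lo) * i / (samples - 1)))
               for i in range(samples))

f, df = math.sin, math.cos
L = lipschitz_estimate(f, df, -2.0, 2.0)   # sup |cos x| on [-2, 2] is 1

# Check ||f(x1) - f(x2)|| <= L ||x1 - x2|| on sample pairs in the interval.
pts = [-2.0 + 4.0 * i / 50 for i in range(51)]
assert all(abs(f(a) - f(b)) <= L * abs(a - b) + 1e-12
           for a in pts for b in pts)
```

For a vector-valued f the same idea applies with an induced matrix norm of the Jacobian in place of |f'|.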
2.10 Contraction Mapping

In this section we discuss the contraction mapping principle, which we use later to analyze the existence and uniqueness of solutions of a class of nonlinear differential equations.

Definition: Let (X, d) be a metric space, and let S ⊂ X. A mapping f : S → S is said to be a contraction on S if there exists a number ρ < 1 such that

    d(f(x), f(y)) ≤ ρ d(x, y)                                         (2.22)

for any two points x, y ∈ S.

It is a straightforward exercise to show that every contraction is continuous (in fact, uniformly continuous) on S.

Theorem 2.12 (Contraction Mapping Principle): Let S be a closed subset of the complete metric space (X, d). Every contraction mapping f : S → S has one and only one x ∈ S such that f(x) = x.

A point x0 ∈ X satisfying f(x0) = x0 is called a fixed point; thus, the contraction mapping principle is sometimes called the fixed-point theorem.

Proof: Let x0 be an arbitrary point in S, and denote

    x1 = f(x0),  x2 = f(x1),  ...,  x_{n+1} = f(x_n).

This construction defines a sequence {x_n} = {f(x_{n-1})}. Since f maps S into itself, x_k ∈ S for all k. We have

    d(x_{n+1}, x_n) = d(f(x_n), f(x_{n-1})) ≤ ρ d(x_n, x_{n-1})

and thus, by induction,

    d(x_{n+1}, x_n) ≤ ρ^n d(x1, x0).                                  (2.24)

Now suppose m > n ≥ N. By successive applications of the triangle inequality we obtain

    d(x_n, x_m) ≤ Σ_{i=n+1}^{m} d(x_i, x_{i-1})
                ≤ (ρ^n + ρ^{n+1} + ... + ρ^{m-1}) d(x1, x0)
                = ρ^n (1 + ρ + ... + ρ^{m-n-1}) d(x1, x0).            (2.25)

The series (1 + ρ + ρ^2 + ...) converges to 1/(1 - ρ) for all ρ < 1. Thus, noticing that all the summands in equation (2.25) are positive, we have

    1 + ρ + ... + ρ^{m-n-1} < 1/(1 - ρ)

and therefore

    d(x_n, x_m) < [ρ^n / (1 - ρ)] d(x1, x0).                          (2.26)

Given ε > 0, we can choose N so that ρ^N d(x1, x0)/(1 - ρ) < ε; then d(x_n, x_m) < ε for all m > n ≥ N. Therefore {x_n} is a Cauchy sequence. Since the metric space (X, d) is complete, {x_n} has a limit x ∈ X, lim_{n→∞} x_n = x. Moreover, we have seen that x_n ∈ S ⊂ X, and since S is closed it follows that x ∈ S. Finally, since f is a contraction, f is continuous, and it follows that

    f(x) = lim_{n→∞} f(x_n) = lim_{n→∞} x_{n+1} = x.

Thus, the existence of a fixed point is proved. To prove uniqueness, suppose x and y are two different fixed points:

    f(x) = x,    f(y) = y.

Then

    d(f(x), f(y)) = d(x, y)                                           (2.27)

while, by (2.22),

    d(f(x), f(y)) ≤ ρ d(x, y)    because f is a contraction.          (2.28)

Since ρ < 1, (2.27) and (2.28) can be satisfied simultaneously if and only if d(x, y) = 0. Therefore x = y. This completes the proof.            □
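The proof is constructive: iterating f from any starting point produces the fixed point, with the geometric error bound (2.26). The sketch below (our own example, not one from the text) uses the map f(x) = (1/2) cos x, a contraction on R with ρ = 1/2 since |f'(x)| = |sin x|/2 ≤ 1/2, and checks the bound along the way.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=200):
    """Successive approximations x_{n+1} = f(x_n), as in Theorem 2.12."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

f = lambda x: 0.5 * math.cos(x)      # contraction with rho = 1/2
x_star = fixed_point(f, x0=0.0)
assert abs(f(x_star) - x_star) < 1e-10   # x* is indeed a fixed point

# Geometric error bound (2.26): d(x_n, x*) <= rho^n / (1 - rho) * d(x1, x0)
rho, x0 = 0.5, 0.0
x1 = f(x0)
x = x0
for n in range(1, 20):
    x = f(x)
    assert abs(x - x_star) <= rho ** n / (1 - rho) * abs(x1 - x0) + 1e-15
```

The same iteration, applied to an integral operator instead of a scalar map, is what drives the existence proof in the next section.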
2.11 Solution of Differential Equations

When dealing with ordinary linear differential equations, it is usually possible to derive a closed-form expression for the solution. For example, given the state space realization of a linear time-invariant system

    ẋ = Ax + Bu,    x(0) = x0,

we can find the following closed-form solution for a given input u:

    x(t) = e^{At} x0 + ∫_0^t e^{A(t-τ)} B u(τ) dτ.

In general, this is not the case for nonlinear differential equations. Indeed, when dealing with nonlinear differential equations two issues are of importance: (1) existence and (2) uniqueness of the solution. In this section we derive sufficient conditions for the existence and uniqueness of the solution of a differential equation of the form

    ẋ = f(x, t),    x(t0) = x0.                                       (2.30)

Theorem 2.13 (Local Existence and Uniqueness): Consider the nonlinear differential equation (2.30), and assume that f(x, t) is piecewise continuous in t and satisfies

    ||f(x1, t) - f(x2, t)|| ≤ L ||x1 - x2||

for all x1, x2 ∈ B = {x ∈ R^n : ||x - x0|| ≤ r} and all t ∈ [t0, t1]. Then there exists some δ > 0 such that (2.30) has a unique solution in [t0, t0 + δ].

Proof: Notice in the first place that if x(t) is a solution of (2.30), then x(t) satisfies

    x(t) = x0 + ∫_{t0}^{t} f(x(τ), τ) dτ                              (2.31)

which is of the form

    x(t) = (Fx)(t)                                                    (2.32)

where Fx is a continuous function of t. Equation (2.32) implies that a solution of (2.30) is a fixed point of the mapping F that maps x into Fx. The existence of a fixed point of this map, in turn, can be verified by the contraction mapping theorem.
We will proceed in three steps. In step (1) we show that F : S → S. In step (2) we show that F is a contraction from S into S; this, in turn, implies that there exists one and only one fixed point x = Fx ∈ S, and thus a unique solution in S. Because we are interested in solutions of (2.30) in X (not only in S), the final step, step (3), consists of showing that any possible solution of (2.30) in X must be in S.

To start with, we define the sets X and S ⊂ X as follows:

    X = C[t0, t0 + δ];

thus X is the set of all continuous functions defined on the interval [t0, t0 + δ]. Given x ∈ X, we denote

    ||x||_C = max_{t ∈ [t0, t0+δ]} ||x(t)||                           (2.33)

and finally

    S = {x ∈ X : ||x - x0||_C ≤ r}.

It can be shown that X with the norm (2.33) is a complete metric space, and that S is a closed subset of X.

Step (1): From (2.31) we obtain

    (Fx)(t) - x0 = ∫_{t0}^{t} f(x(τ), τ) dτ.

The function f(x0, t) is bounded on [t0, t1] (since it is piecewise continuous). It follows that we can find μ such that

    max_{t ∈ [t0, t0+δ]} ||f(x0, t)|| = μ.

Thus

    ||Fx - x0|| ≤ ∫_{t0}^{t} [ ||f(x(τ), τ) - f(x0, τ)|| + ||f(x0, τ)|| ] dτ
               ≤ ∫_{t0}^{t} [ L ||x(τ) - x0|| + μ ] dτ

and since for each x ∈ S, ||x - x0|| ≤ r, we have

    ||Fx - x0|| ≤ ∫_{t0}^{t} [ Lr + μ ] dτ ≤ (t - t0)(Lr + μ)

and then

    ||Fx - x0|| ≤ δ (Lr + μ).

Thus, choosing δ ≤ r/(Lr + μ) means that F maps S into S.

Step (2): To show that F is a contraction on S, we consider x1, x2 ∈ S and proceed as follows:

    ||(Fx1)(t) - (Fx2)(t)|| = || ∫_{t0}^{t} [ f(x1(τ), τ) - f(x2(τ), τ) ] dτ ||
                            ≤ ∫_{t0}^{t} || f(x1(τ), τ) - f(x2(τ), τ) || dτ
                            ≤ ∫_{t0}^{t} L ||x1(τ) - x2(τ)|| dτ
                            ≤ L ||x1 - x2||_C ∫_{t0}^{t} dτ.

It follows that

    max_{t ∈ [t0, t0+δ]} ||(Fx1)(t) - (Fx2)(t)|| ≤ Lδ ||x1 - x2||_C

and thus

    ||Fx1 - Fx2||_C ≤ Lδ ||x1 - x2||_C ≤ ρ ||x1 - x2||_C    for δ ≤ ρ/L.

Choosing ρ < 1 and δ ≤ ρ/L, we conclude that F is a contraction. This implies that there is a unique solution of the nonlinear equation (2.30) in S.

Step (3): Given that S ⊂ X, to complete the proof we must show that any solution of (2.30) in X must lie in S. Starting at x0 at t = t0, a solution can leave S if and only if at some t = t1 the trajectory x(t) crosses the border of the ball B for the first time. For this to be the case we must have

    ||x(t1) - x0|| = r.

However, for all t ≤ t1 we have that

    ||x(t) - x0|| ≤ ∫_{t0}^{t} [ ||f(x(τ), τ) - f(x0, τ)|| + ||f(x0, τ)|| ] dτ
                 ≤ ∫_{t0}^{t} [ L ||x(τ) - x0|| + μ ] dτ
                 ≤ ∫_{t0}^{t} [ Lr + μ ] dτ.

It follows that

    r = ||x(t1) - x0|| ≤ (Lr + μ)(t1 - t0).

Denoting t1 = t0 + μ1, we have that μ1 ≥ r/(Lr + μ). Thus, if δ is chosen such that

    δ ≤ r/(Lr + μ),

then the solution x(t) is confined to B for all t ∈ [t0, t0 + δ], and therefore any solution of (2.30) in X must lie in S. This completes the proof.   □
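The fixed-point map F in (2.32) is the classical Picard iteration, and it can be carried out numerically. The sketch below is our own illustration, not part of the text: it applies F repeatedly to the scalar equation ẋ = -x, x(0) = 1 (exact solution e^{-t}), with the integral in (2.31) approximated by the trapezoidal rule on a grid over [0, 0.5].

```python
import math

def picard_step(f, t, x, x0):
    """One application of (Fx)(t) = x0 + integral_{t0}^{t} f(x(tau), tau) dtau,
    with the integral approximated by the trapezoidal rule on the grid t."""
    out = [x0]
    acc = 0.0
    for k in range(1, len(t)):
        h = t[k] - t[k - 1]
        acc += 0.5 * h * (f(x[k - 1], t[k - 1]) + f(x[k], t[k]))
        out.append(x0 + acc)
    return out

f = lambda x, t: -x                        # xdot = -x, x(0) = 1, L = 1
N = 200
t = [0.5 * k / N for k in range(N + 1)]    # delta = 0.5 < rho/L keeps F a contraction
x = [1.0] * len(t)                         # initial guess: the constant function x0
for _ in range(30):                        # successive approximations x <- Fx
    x = picard_step(f, t, x, 1.0)

exact = [math.exp(-tk) for tk in t]
err = max(abs(a - b) for a, b in zip(x, exact))
assert err < 1e-4                          # limited only by the quadrature error
```

Shrinking the grid spacing reduces the residual error, since the iteration itself converges geometrically by Theorem 2.12.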
It is important to notice that Theorem 2.13 provides a sufficient but not necessary condition for the existence and uniqueness of the solution of the differential equation (2.30). The local Lipschitz condition of Theorem 2.13 is not very restrictive and is satisfied by any function f(x, t) satisfying somewhat mild smoothness conditions. According to the theorem, however, the solution is guaranteed to exist only locally, that is, in the interval [t0, t0 + δ]. For completeness, below we state (but do not prove) a slightly modified version of this theorem that provides a condition for the global existence and uniqueness of the solution of the same differential equation. The price paid for this generalization is a stronger and much more conservative condition imposed on the differential equation.

Theorem 2.14 (Global Existence and Uniqueness): Consider again the nonlinear differential equation (2.30), and assume that f(x, t) is piecewise continuous in t and satisfies

    ||f(x1, t) - f(x2, t)|| ≤ L ||x1 - x2||    for all x1, x2 ∈ R^n
    ||f(x0, t)|| ≤ μ                           for all t ∈ [t0, t1].

Then (2.30) has a unique solution in [t0, t1].

Notice that the conditions of the theorem imply the existence of a global Lipschitz constant, which is very restrictive. Indeed, Theorem 2.14 is usually too conservative to be of any practical use.

2.12 Exercises

(2.1) Consider the set of 2 x 2 real matrices. With addition and scalar multiplication defined in the usual way, is this set a vector space over the field of real numbers? If so, find a basis.

(2.2) Let x, y, and z be linearly independent vectors in the vector space X. Is it correct to infer that x + y, y + z, and z + x are also linearly independent?

(2.3) Under what conditions on the scalars α and β in C^2 are the vectors [1, α]^T and [1, β]^T linearly dependent?
(2.4) Prove Theorem 2.

(2.5) For each of the norms ||x||_1, ||x||_2, and ||x||_∞ in R^2, sketch the "open unit ball" centered at 0 = [0, 0]^T, that is, the sets ||x||_1 < 1, ||x||_2 < 1, and ||x||_∞ < 1.

(2.6) Show that the vector norms ||·||_1, ||·||_2, and ||·||_∞ satisfy the following:

    ||x||_2 ≤ ||x||_1 ≤ √n ||x||_2
    ||x||_∞ ≤ ||x||_2 ≤ √n ||x||_∞
    ||x||_∞ ≤ ||x||_1 ≤ n ||x||_∞

(2.7) Consider a matrix A ∈ R^{n x n}, and let λ1, ..., λn be its eigenvalues and x1, ..., xn its (not necessarily linearly independent) eigenvectors.

    (i) Assuming that A is nonsingular, what can be said about its eigenvalues?
    (ii) Under these assumptions, is it possible to express the eigenvalues and eigenvectors of A^{-1} in terms of those of A? If the answer is negative, explain why. If the answer is positive, find the eigenvalues and eigenvectors of A^{-1}.

(2.8) Consider a matrix A ∈ R^{m x n}.

    (i) Show that the functions ||A||_1, ||A||_2, and ||A||_∞ defined in equations (2.16), (2.17), and (2.18) are norms, i.e., that they satisfy properties (i)-(iii) in Definition 2.8.
    (ii) Show that ||I||_p = 1 for any induced norm ||·||_p.
    (iii) Show that the function

        ||A|| = √(trace(A^T A)) = ( Σ_{i,j} |a_ij|^2 )^{1/2}

    is a matrix norm, i.e., that it satisfies properties (i)-(iii) in Definition 2.8. Show also that this norm, called the Frobenius norm, is not an operator norm, since it is not induced by any vector norm.

(2.9) Let (X, d) be a metric space. Show that

    (i) X and the empty set ∅ are open.
    (ii) The intersection of a finite number of open sets is open.
    (iii) The union of any collection of open sets is open.
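The vector-norm inequalities of Exercise (2.6) are easy to spot-check numerically before attempting a proof. A quick sketch (standard library only; the random vectors and dimension are arbitrary choices):

```python
import math
import random

def norms(x):
    """Return (||x||_1, ||x||_2, ||x||_inf) of a vector given as a list."""
    n1 = sum(abs(v) for v in x)
    n2 = math.sqrt(sum(v * v for v in x))
    ninf = max(abs(v) for v in x)
    return n1, n2, ninf

random.seed(0)
n, eps = 5, 1e-9
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    n1, n2, ninf = norms(x)
    assert n2 <= n1 + eps and n1 <= math.sqrt(n) * n2 + eps    # ||x||2 <= ||x||1 <= sqrt(n)||x||2
    assert ninf <= n2 + eps and n2 <= math.sqrt(n) * ninf + eps  # ||x||inf <= ||x||2 <= sqrt(n)||x||inf
    assert ninf <= n1 + eps and n1 <= n * ninf + eps           # ||x||inf <= ||x||1 <= n||x||inf
```

A numerical check of course proves nothing; the exercise asks for the inequalities to be established for all x ∈ R^n.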
(2.10) Show that a set A in a metric space (X, d) is closed if and only if its complement is open.

(2.11) Let (X, d) be a metric space. Show that

    (i) X and the empty set ∅ are closed.
    (ii) The intersection of any number of closed sets is closed.
    (iii) The union of a finite collection of closed sets is closed.

(2.12) Let (X, d1) and (Y, d2) be metric spaces and consider a function f : X → Y. Show that f is continuous if and only if the inverse image of every open set in Y is open in X.

(2.13) Determine the values of the following limits, whenever they exist, and determine whether each function is continuous at (0, 0):

    (i)   lim_{x,y→0}  (x^2 - y^2) / (1 + x^2 + y^2)
    (ii)  lim_{x,y→0}  x / (x^2 + y^2)
    (iii) lim_{x,y→0}  (1 + y^2) (sin x) / x

(2.14) Consider the function

    f(x, y) = x^2 y / (x^4 + y^2)    for (x, y) ≠ (0, 0)
    f(x, y) = 0                      for x = y = 0.

Show that f(x, y) is not continuous (and thus not differentiable) at the origin. Proceed as follows:

    (i) Show that if x → 0 along the line y = x, then lim_{x→0, y=x} f(x, y) = 0.
    (ii) Show that if x → 0 and y → 0 along the parabola y = x^2, then lim f(x, y) = 1/2,

thus the result.

(2.15) Given the function f(x, y) of exercise (2.14), show that the partial derivatives ∂f/∂x and ∂f/∂y both exist at (0, 0). This shows that existence of the partial derivatives does not imply continuity of the function.
(2.16) Determine whether the function

    f(x, y) = x^2 y^2 / (x^2 + y^2)    for (x, y) ≠ (0, 0)
    f(x, y) = 0                        for x = y = 0

is continuous at (0, 0). (Suggestion: Notice that x^2 y^2 / (x^2 + y^2) ≤ x^2.)

(2.17) Given the following functions f : R → R, determine in each case whether f is (a) continuous at x = 0, (b) continuously differentiable at x = 0, and (c) locally Lipschitz at x = 0:

    (i) f(x) = e^{x^2};  (ii) f(x) = cos x;  (iii) f(x) = sat(x);  (iv) f(x) = sin(1/x).

(2.18) Use the chain rule to obtain the indicated partial derivatives:

    (i) z = x^3 + y^3, x = 2r + s, y = 3r - 2s; find ∂z/∂r and ∂z/∂s.
    (ii) z = x^2 + y^2, x = r cos s, y = r sin s; find ∂z/∂r and ∂z/∂s.

(2.19) Show that the function f = 1/x is not uniformly continuous on E = (0, 1].

(2.20) Show that f = 1/x does not satisfy a Lipschitz condition on E = (0, 1).

(2.21) Given the following functions f : R^2 → R, find the partial derivatives ∂f/∂x and ∂f/∂y:

    (i) f(x, y) = e^{xy} cos x sin y;  (ii) f(x, y) = x^2 + y^2.

(2.22) For each of the following functions f : R^2 → R^2, determine whether f is (a) continuous at x = 0, (b) continuously differentiable at x = 0, (c) locally Lipschitz at x = 0, and (d) Lipschitz on some D ⊂ R^2:

    (i)   ẋ1 = x2 - x1(x1^2 + x2^2)
          ẋ2 = -x1 - x2(x1^2 + x2^2)

    (ii)  ẋ1 = x2 + x1(β^2 - x1^2 - x2^2)
          ẋ2 = -x1 + x2(β^2 - x1^2 - x2^2)

    (iii) ẋ1 = x2
          ẋ2 = -(k/m) x1

Notes and References

There are many good references for the material in this chapter. We have followed References [32], [88], and [41]. For a complete, remarkably well-written account of vector spaces, see Halmos [33]. For general background in mathematical analysis, we refer to Bartle [7], Maddox [51], and Rudin [62]. The material on existence and uniqueness of the solution of differential equations can be found in most textbooks on ordinary differential equations; see also References [59] and [55].
Chapter 3

Lyapunov Stability I: Autonomous Systems

In this chapter we look at the important notion of stability in the sense of Lyapunov. Indeed, there are many definitions of stability of systems. In all cases the idea is the same: given a set of dynamical equations that represent a physical system, try to determine whether such a system is well behaved in some conceivable sense. Exactly what constitutes a meaningful notion of good behavior is certainly a very debatable topic; the problem lies in how to convert the intuitive notion of good behavior into a precise mathematical definition that can be applied to a given dynamical system. In this chapter we explore the notion of stability in the sense of Lyapunov, which applies to equilibrium points. Other notions of stability will be explored in Chapters 6 and 7. Throughout this chapter we restrict our attention to autonomous systems; the more general case of nonautonomous systems is treated in the next chapter.

3.1 Definitions

Consider the autonomous system(1)

    ẋ = f(x),    f : D → R^n                                          (3.1)

where D is an open and connected subset of R^n and f is a locally Lipschitz map from D into R^n. In the sequel we will assume that x = xe is an equilibrium point of (3.1); in other words, xe is such that f(xe) = 0.

(1) Notice that (3.1) represents an unforced system.
This definition captures the following concept: we want the solution of (3.1) to be near the equilibrium point xe for all t ≥ t0. To this end, we start by measuring proximity in terms of the norm ||·||, and we say that we want the solutions of (3.1) to remain inside the open region delimited by ||x(t) - xe|| < ε. If this objective is accomplished by starting from an initial state x(0) that is close to the equilibrium xe, that is, ||x(0) - xe|| < δ, then the equilibrium point is said to be stable (see Figure 3.1).

Definition 3.1: The equilibrium point x = xe of the system (3.1) is said to be stable if for each ε > 0 there exists δ = δ(ε) > 0 such that

    ||x(0) - xe|| < δ   =>   ||x(t) - xe|| < ε    for all t ≥ t0.

Otherwise, the equilibrium point is said to be unstable.

Figure 3.1: Stable equilibrium point.

This definition represents the weakest form of stability introduced in this chapter. Recall that what we are trying to capture is the concept of good behavior in a dynamical system. The main limitation of this concept is that solutions are not required to converge to the equilibrium xe. Very often, staying close to xe is simply not enough. We now introduce the following definition.

Definition 3.2: The equilibrium point x = xe of the system (3.1) is said to be convergent if there exists δ1 > 0 such that

    ||x(0) - xe|| < δ1   =>   lim_{t→∞} x(t) = xe.

Equivalently, xe is convergent if for any given ε1 > 0 there exist δ1 > 0 and T such that

    ||x(0) - xe|| < δ1   =>   ||x(t) - xe|| < ε1    for all t > t0 + T.

A convergent equilibrium point xe is one where every solution starting sufficiently close to xe will eventually approach xe as t → ∞. It is important to realize that stability and convergence, as defined in Definitions 3.1 and 3.2, are two different concepts, and neither one of them implies the other. Indeed, it is not difficult to construct examples where an equilibrium point is convergent, yet does not satisfy the conditions of Definition 3.1 and is therefore not stable in the sense of Lyapunov.

Definition 3.3: The equilibrium point x = xe of the system (3.1) is said to be asymptotically stable if it is both stable and convergent.

Figure 3.2: Asymptotically stable equilibrium point.

Asymptotic stability (Figure 3.2) is the desirable property in most applications: the solution not only stays within ε but also converges to xe in the limit. The principal weakness of this concept is that it says nothing about how fast the trajectories approach the equilibrium point. There is a stronger form of asymptotic stability, referred to as exponential stability, which makes this idea precise.

Definition 3.4: The equilibrium point x = xe of the system (3.1) is said to be (locally) exponentially stable if there exist two real constants α, λ > 0 such that

    ||x(t) - xe|| ≤ α ||x(0) - xe|| e^{-λt}    for all t > 0          (3.2)

whenever ||x(0) - xe|| < δ. It is said to be globally exponentially stable if (3.2) holds for any x ∈ R^n.
Clearly, exponential stability is the strongest form of stability seen so far. It is also immediate that exponential stability implies asymptotic stability. The converse is, however, not true.

The several notions of stability introduced so far refer to stability of equilibrium points. In general, the same dynamical system can have more than one isolated equilibrium point. In the definitions, and especially in the proofs of the stability theorems, it is assumed that the equilibrium point under study is the origin, xe = 0. There is no loss of generality in doing so. Indeed, if this is not the case, we can perform a change of variables and define a new system with an equilibrium point at x = 0. To see this, consider the equilibrium point xe of the system (3.1) and define

    y = x - xe.

Then

    ẏ = ẋ = f(x) = f(y + xe) =: g(y).

The equilibrium point ye of the new system ẏ = g(y) is ye = 0, since

    g(0) = f(0 + xe) = f(xe) = 0.

Thus, studying the stability of the equilibrium point xe for the system ẋ = f(x) is equivalent to studying the stability of the origin for the system ẏ = g(y).

Example 3.1: Consider the mass-spring system shown in Figure 3.3. Applying Newton's second law of motion, we have

    m ÿ + β ẏ + k y = m g.

Defining the states x1 = y, x2 = ẏ, we obtain the following state space realization:

    ẋ1 = x2
    ẋ2 = -(k/m) x1 - (β/m) x2 + g

which has a unique equilibrium point xe = (mg/k, 0). Now define the transformation z = x - xe. According to this,

    z1 = x1 - mg/k
    z2 = x2

and thus

    ż1 = z2
    ż2 = -(k/m)(z1 + mg/k) - (β/m) z2 + g = -(k/m) z1 - (β/m) z2.

In the new coordinates the system has its (unique) equilibrium point at the origin, z = (0, 0).

Figure 3.3: Mass-spring system.
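The change of variables in Example 3.1 is easy to sanity-check numerically. The sketch below uses illustrative parameter values (m, k, β, and g are our own picks, not values from the text) and verifies that the original field vanishes at xe while the shifted field g(z) = f(z + xe) vanishes at the origin.

```python
m, k, beta, grav = 1.0, 2.0, 0.5, 9.8    # illustrative parameter values

def f(x):
    """Mass-spring field: x1dot = x2, x2dot = -(k/m)x1 - (beta/m)x2 + g."""
    x1, x2 = x
    return (x2, -(k / m) * x1 - (beta / m) * x2 + grav)

xe = (m * grav / k, 0.0)                 # unique equilibrium point

def g_shift(z):
    """Shifted field g(z) = f(z + xe); its equilibrium is the origin."""
    return f((z[0] + xe[0], z[1] + xe[1]))

assert max(abs(v) for v in f(xe)) < 1e-12          # f vanishes at xe
assert max(abs(v) for v in g_shift((0.0, 0.0))) < 1e-12  # g vanishes at 0
```

The same two-line shift works for any isolated equilibrium of any f, which is why the stability theorems below are stated for the origin without loss of generality.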
3.2 Positive Definite Functions

Now that the concept of stability has been defined, the next step is to study how to analyze the stability properties of an equilibrium point. This is the center of the Lyapunov stability theory. The core of this theory is the analysis and construction of a class of functions to be defined, and of their derivatives along the trajectories of the system under study. We start by introducing the notion of positive definite functions. In the following definition, D represents an open and connected subset of R^n.

Definition 3.5: A function V : D → R is said to be positive semidefinite in D if it satisfies the following conditions:

    (i) 0 ∈ D and V(0) = 0;
    (ii) V(x) ≥ 0 for all x ∈ D.

V : D → R is said to be positive definite in D if condition (ii) is replaced by (ii'):

    (ii') V(x) > 0 in D - {0}.

Finally, V : D → R is said to be negative definite (respectively, negative semidefinite) in D if -V is positive definite (respectively, positive semidefinite).

We will often abuse the notation slightly and write V > 0, V ≥ 0, and V < 0 in D to indicate that V is positive definite, positive semidefinite, and negative definite in D, respectively.

Positive definite functions (PDFs) constitute the basic building block of the Lyapunov theory. As we will see, PDFs can be seen as an abstraction of the total "energy" stored in a system.

Example 3.2: The simplest and perhaps most important class of positive definite functions is defined as follows:

    V(x) : R^n → R,    V(x) = x^T Q x,    Q ∈ R^{n x n},  Q = Q^T,

which defines a quadratic form. Since by assumption Q is symmetric (i.e., Q = Q^T), we know that its eigenvalues λi, i = 1, ..., n, are all real. Thus we have that

    V positive definite      <=>  x^T Q x > 0, ∀x ≠ 0  <=>  λi > 0, ∀i = 1, ..., n
    V positive semidefinite  <=>  x^T Q x ≥ 0, ∀x      <=>  λi ≥ 0, ∀i = 1, ..., n
    V negative definite      <=>  x^T Q x < 0, ∀x ≠ 0  <=>  λi < 0, ∀i = 1, ..., n
    V negative semidefinite  <=>  x^T Q x ≤ 0, ∀x      <=>  λi ≤ 0, ∀i = 1, ..., n.

For example,

    V1(x) : R^2 → R = a x1^2 + b x2^2 = [x1, x2] [ a 0 ; 0 b ] [x1 ; x2]

is positive definite for all a, b > 0, whereas

    V2(x) : R^2 → R = a x1^2 = [x1, x2] [ a 0 ; 0 0 ] [x1 ; x2],    a > 0,

is positive semidefinite but not positive definite, since for any x* of the form x* = [0, x2]^T ≠ 0 we have V2(x*) = 0.
All of the Lyapunov stability theorems focus on the study of the time derivative of a positive definite function along the trajectories of (3.1). That is, given an autonomous system of the form (3.1), we will first construct a positive definite function V(x) and study V̇(x), given by

    V̇(x) = dV/dt = (∂V/∂x) ẋ = (∂V/∂x) f(x).

The following definition introduces a useful and very common way of representing this derivative.

Definition 3.6: Let V : D → R and f : D → R^n. The Lie derivative of V along f, denoted by LfV, is defined by

    LfV(x) = (∂V/∂x) f(x).

Thus, according to this definition, we have that

    V̇(x) = [∂V/∂x1, ..., ∂V/∂xn] [f1(x) ; ... ; fn(x)] = ∇V · f(x) = LfV(x).

Example 3.3: Let

    f(x) = [ a x1 ; b x2 + cos x1 ]

and define V = x1^2 + x2^2. Then we have

    V̇(x) = LfV(x) = [2x1, 2x2] [ a x1 ; b x2 + cos x1 ]
          = 2a x1^2 + 2b x2^2 + 2 x2 cos x1.

It is clear from this example that V̇(x) depends on the system's equation f(x), and thus it will be different for different systems.
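The computation in Example 3.3 can be reproduced numerically: approximate ∇V by central differences and form LfV = ∇V · f. A sketch (a = b = 1 is an arbitrary choice for the constants of the example):

```python
import math

a, b = 1.0, 1.0
f = lambda x: (a * x[0], b * x[1] + math.cos(x[0]))   # field of Example 3.3
V = lambda x: x[0] ** 2 + x[1] ** 2

def lie_derivative(V, f, x, h=1e-6):
    """LfV(x) = (dV/dx) f(x), with the gradient taken by central differences."""
    grad = ((V((x[0] + h, x[1])) - V((x[0] - h, x[1]))) / (2 * h),
            (V((x[0], x[1] + h)) - V((x[0], x[1] - h))) / (2 * h))
    fx = f(x)
    return grad[0] * fx[0] + grad[1] * fx[1]

x = (0.7, -1.3)
numeric = lie_derivative(V, f, x)
analytic = 2 * a * x[0] ** 2 + 2 * b * x[1] ** 2 + 2 * x[1] * math.cos(x[0])
assert abs(numeric - analytic) < 1e-6
```

This kind of check is handy when a hand-computed V̇ is suspected of a sign or algebra error.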
3.3 Stability Theorems

Theorem 3.1 (Lyapunov Stability Theorem): Let x = 0 be an equilibrium point of ẋ = f(x), f : D → R^n, and let V : D → R be a continuously differentiable function such that

    (i) V(0) = 0,
    (ii) V(x) > 0 in D - {0},
    (iii) V̇(x) ≤ 0 in D - {0}.

Then x = 0 is stable.

Theorem 3.2 (Asymptotic Stability Theorem): Under the conditions of Theorem 3.1, if V is such that

    (i) V(0) = 0,
    (ii) V(x) > 0 in D - {0},
    (iii) V̇(x) < 0 in D - {0},

then x = 0 is asymptotically stable.

In other words, Theorem 3.1 implies that a sufficient condition for the stability of the equilibrium point x = 0 is that there exist a continuously differentiable positive definite function V(x) such that V̇(x) is negative semidefinite in a neighborhood of x = 0. Theorem 3.2 says that asymptotic stability is achieved if the conditions of Theorem 3.1 are strengthened by requiring V̇(x) to be negative definite, rather than semidefinite.

As mentioned earlier, positive definite functions can be seen as generalized energy functions. The condition V(x) = c, for constant c, defines what is called a Lyapunov surface. A Lyapunov surface defines a region of the state space that contains all Lyapunov surfaces of lesser value; that is, given a Lyapunov function and defining

    Ω1 = {x ∈ Br : V(x) ≤ c1},    Ω2 = {x ∈ Br : V(x) ≤ c2}

where Br = {x ∈ R^n : ||x|| < r} and c1 > c2 are chosen such that Ω1 ⊂ Br, we have that Ω2 ⊂ Ω1. The condition V̇ ≤ 0 implies that when a trajectory crosses a Lyapunov surface V(x) = c, it can never come out again; thus a trajectory satisfying this condition is actually confined to the closed region Ω = {x : V(x) ≤ c}. This implies that the equilibrium point is stable, and makes Theorem 3.1 intuitively very simple.

Now suppose that V̇(x) is negative definite. In this case, a trajectory can only move from a Lyapunov surface V(x) = c into an inner Lyapunov surface with smaller c. This clearly represents a stronger stability condition: V(x) actually decreases along the trajectories of f(x).

The discussion above is important since it elaborates on the ideas and motivation behind all the Lyapunov stability theorems. We now provide a proof of Theorems 3.1 and 3.2. These proofs will clarify certain technicalities used later on to distinguish between local and global stability, and also in the discussion of the region of attraction.

Proof of Theorem 3.1: Choose r > 0 such that the closed ball

    Br = {x ∈ R^n : ||x|| ≤ r}

is contained in D. So f is well defined in the compact set Br. Let

    α = min_{||x||=r} V(x)

(thus α > 0, by the fact that V(x) > 0 in D - {0}). Now choose β ∈ (0, α) and denote

    Ωβ = {x ∈ Br : V(x) ≤ β}.

Thus, by construction, Ωβ ⊂ Br. Now suppose that x(0) ∈ Ωβ. By assumption (iii) of the theorem we have that

    V̇(x) ≤ 0   =>   V(x(t)) ≤ V(x(0)) ≤ β    for all t ≥ 0.

It then follows that any trajectory starting in Ωβ at t = 0 stays inside Ωβ for all t ≥ 0. Moreover, by the continuity of V(x), it follows that there exists δ > 0 such that

    ||x|| < δ   =>   V(x) < β    (so that Bδ ⊂ Ωβ ⊂ Br).

It then follows that

    ||x(0)|| < δ   =>   x(t) ∈ Ωβ ⊂ Br    for all t ≥ 0

and then

    ||x(0)|| < δ   =>   ||x(t)|| < r ≤ ε    for all t ≥ 0,

which means that the equilibrium x = 0 is stable.                      □

Proof of Theorem 3.2: Under the assumptions of the theorem, V̇(x) < 0 in D - {0}, so V(x) actually decreases along the trajectories of f(x). Using the same argument used in the proof of Theorem 3.1, for every real number a > 0 we can find b > 0 such that Ωb ⊂ Ba, and the solution remains inside Ωb whenever the initial condition is inside Ωb. To prove asymptotic stability, all we need to show is that Ωb reduces to 0 in the limit; in other words, that Ωb shrinks to a single point as t → ∞. Since by assumption V̇(x) < 0 in D, V(x) tends steadily to zero along the solutions of f(x), and the result follows. This completes the proof.   □
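The mechanics of the proofs, trajectories trapped in the sublevel set Ωβ because V never increases, can be observed numerically. The sketch below is our own illustration (not an example from the text): for the linear system ẋ = Ax with A = [[-1, 1], [-1, -1]] and V(x) = x1^2 + x2^2, one computes V̇ = -2(x1^2 + x2^2) < 0, so V must decrease monotonically along any simulated trajectory and tend to zero.

```python
def step(x, dt):
    """One RK4 step of xdot = A x with A = [[-1, 1], [-1, -1]]."""
    Ax = lambda v: (-v[0] + v[1], -v[0] - v[1])
    k1 = Ax(x)
    k2 = Ax((x[0] + dt / 2 * k1[0], x[1] + dt / 2 * k1[1]))
    k3 = Ax((x[0] + dt / 2 * k2[0], x[1] + dt / 2 * k2[1]))
    k4 = Ax((x[0] + dt * k3[0], x[1] + dt * k3[1]))
    return (x[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

V = lambda x: x[0] ** 2 + x[1] ** 2

x = (1.0, -0.5)
vals = [V(x)]
for _ in range(2000):                 # simulate on [0, 20] with dt = 0.01
    x = step(x, 0.01)
    vals.append(V(x))

# V never increases: the sublevel set {x : V(x) <= V(x(0))} is never left
# (Theorem 3.1), and since Vdot < 0 away from 0, V -> 0 (Theorem 3.2).
assert all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))
assert vals[-1] < 1e-8
```

Here V(t) = V(0) e^{-2t} exactly, so the simulated values collapse to zero quickly; for nonlinear systems the decay need not be exponential.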
Using Newton's second law of motion we have. of course. when a function is proposed as possible candidate to prove any form of stability. such a is said to be a Lyapunov function candidate. This completes the proof.4
Examples
Example 3.
Thus.4: Pendulum without friction. while V depends on this dynamics in an essential manner. what is rather tricky is to select a whose derivative along the trajectories near the equilibrium point is either negative definite. AUTONOMOUS SYSTEMS
Figure 3. For this reason. LYAPUNOV STABILITY I. 52b shrinks to a single point as t > oo. The reason. V (x) tends steadily to zero along the solutions of f (x).
3. then V is said to be a Lyapunov function for that particular equilibrium point. this is straightforward. If in addition happens to be negative definite.
Remarks: The first step when studying the stability properties of an equilibrium point
consists of choosing a positive definite function
Finding a positive definite function is fairly easy.74
CHAPTER 3.
ma = mg sin 9
a = la = l9
. or semi definite.
we compute the total energy of the pendulum (which is a positive function). This situation. k = 1. in this case we proceed inspired by our understanding of the physical system. however.j1 c
22 = gsinxl
=
x2
which is of the desired form i = f (x). EXAMPLES
where l is the length of the pendulum. In general. thus. we see that because of the periodicity of cos(xj).
. With respect to (ii).
Thus
E = 2ml2x2 + mgl(1 . We have
E=K+P
= 2m(wl)2 + mgh
where
(kinetic plus potential energy)
w = 0 = x2
h = l(1 . Thus. The origin is an equilibrium point (since f (0) = 0). .2. i. 2. e. with D = ((27r.cosO) = l(1 .3.. we have that V(x) = 0 whenever x = (xl. 0) T. and use this quantity as our Lyapunov function candidate. can be easily remedied by restricting the domain of xl to the interval (27r. we need to propose a Lyapunov function
candidate V (x) and show that satisfies the properties of one of the stability theorems seen so far.4.
is not positive definite.
We now define V(x) = E and investigate whether satisfy the and its derivative conditions of Theorem 3. 1R)T. choosing this function is rather difficult. 27r). Clearly. Namely. 21r).1 and/or 3.cosxl).
To study the stability of the equilibrium at the origin. however.cosxl). and a is the angular acceleration. we take V : D 4 R. V(0) = 0. defining property (i) is satisfied in both theorems. Thus
75
mlO+mgsin0 = 0
or B+ l sing = 0
choosing state variables
xl = 0 { X2 = 0
we have
. x2)T = (2krr.
{0}
V(x) _ VV . The added friction leads to a loss of energy that results in a decrease in the amplitude of the oscillations. The energy is the same as in Example 3. . this version of the pendulum constitutes an asymptotically stable equilibrium of the origin. m12x2] [x2.nk. In the limit.klO
defining the same state variables as in example 3.4 is consistent with our physical observations.cosxl) >0
in D . There remains to evaluate the derivative of along the trajectories of f(t).
The result of Example 3. f2(x)]T
k x21T
[mgl sin xl. all the initial energy supplied to the pendulum will be dissipated by the friction force and the pendulum will remain at rest. 9 sin xl
m
_ k12x2.
axl ax2
[fl (x). We have
V(x) _ VV f(x) = IV aV
axl ax2
[fl(x). Example 3. ml2x2] [x2.5 (Pendulum with Friction) We now modify the previous example by adding
the friction force klO
ma = mg sin 0 .4 we have
X2
X2
S1Ilxl . V : D > R is indeed positive definite.mglx2 sin xl = 0. This means that the sum of the kinetic and potential energy remains constant.1. f(x) = IV IV
. LYAPUNOV STABILITY I: AUTONOMOUS SYSTEMS
With this restriction.76
CHAPTER 3.f2(x)1T
_ [mgl sin xl. a simple pendulum without friction is a conservative system. Thus. Thus
V(x) = 2m12x2 + mgl(1 . The pendulum will continue to balance without changing the amplitude of the oscillations and thus constitutes a stable system. Indeed.9 sinxl]T
mglx2 sin x1 .x2
Again x = 0 is an equilibrium point. In our next example we add friction to the dynamics of the pendulum.4.
Thus V (x) = 0 and the origin is stable by Theorem 3.
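The energy analysis of Example 3.5 can be confirmed numerically: along simulated trajectories of the damped pendulum, V = E never increases, and it is eventually dissipated. A sketch with illustrative parameter values (m = l = 1, g = 9.8, k = 0.5 are our own picks; the small tolerance absorbs integration round-off near the turning points, where V̇ = 0):

```python
import math

m, l, g, k = 1.0, 1.0, 9.8, 0.5     # illustrative parameter values

def f(x):
    """Damped pendulum: x1dot = x2, x2dot = -(g/l) sin x1 - (k/m) x2."""
    return (x[1], -(g / l) * math.sin(x[0]) - (k / m) * x[1])

def V(x):
    """Total energy: (1/2) m l^2 x2^2 + m g l (1 - cos x1)."""
    return 0.5 * m * l ** 2 * x[1] ** 2 + m * g * l * (1 - math.cos(x[0]))

def rk4(x, dt):
    k1 = f(x)
    k2 = f((x[0] + dt / 2 * k1[0], x[1] + dt / 2 * k1[1]))
    k3 = f((x[0] + dt / 2 * k2[0], x[1] + dt / 2 * k2[1]))
    k4 = f((x[0] + dt * k3[0], x[1] + dt * k3[1]))
    return (x[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x = (1.0, 0.0)                      # released from 1 rad, at rest
energies = [V(x)]
for _ in range(5000):               # 50 s of simulation, dt = 0.01
    x = rk4(x, 0.01)
    energies.append(V(x))

assert all(b <= a + 1e-7 for a, b in zip(energies, energies[1:]))  # Vdot <= 0
assert energies[-1] < 1e-3 * energies[0]    # energy dissipated by friction
```

The simulation shows the energy actually converging to zero, which is exactly what the semidefinite V̇ of Example 3.5 fails to prove; this gap motivates the invariance arguments discussed later.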
We cannot, however, conclude asymptotic stability as suggested by our intuitive analysis, since we were not able to establish the conditions of Theorem 3.2. Indeed, V̇(x) is negative semidefinite, but it is not negative definite, since V̇(x) = 0 for x2 = 0 regardless of the value of x1 (thus V̇(x) = 0 along the x1 axis). The result is indeed disappointing, since we know that a pendulum with friction has an asymptotically stable equilibrium point at the origin. This example emphasizes the fact that all of the theorems seen so far provide sufficient, but by no means necessary, conditions for stability.

Example 3.6: Consider the following system:

    ẋ1 = x1(x1^2 + x2^2 - β^2) + x2
    ẋ2 = -x1 + x2(x1^2 + x2^2 - β^2).

To study the equilibrium point at the origin, we define V(x) = (1/2)(x1^2 + x2^2). We have

    V̇(x) = ∇V · f(x)
         = [x1, x2] [x1(x1^2 + x2^2 - β^2) + x2,  -x1 + x2(x1^2 + x2^2 - β^2)]^T
         = x1^2 (x1^2 + x2^2 - β^2) + x1 x2 - x1 x2 + x2^2 (x1^2 + x2^2 - β^2)
         = (x1^2 + x2^2)(x1^2 + x2^2 - β^2).

Thus, V(x) > 0 and V̇(x) < 0, provided that (x1^2 + x2^2) < β^2, and it follows that the origin is an asymptotically stable equilibrium point.

3.5 Asymptotic Stability in the Large

A quick look at the definitions of stability seen so far will reveal that all of these concepts are local in character. Consider, for example, the definition of stability: the equilibrium xe is said to be stable if

    ||x(t) - xe|| < ε    provided that    ||x(0) - xe|| < δ

or, in words, starting "near" xe, the solution will remain "near" xe. More important is the case of asymptotic stability: when the equilibrium is asymptotically stable, the solution not only stays within ε but also converges to xe in the limit. Very often it is important to know under what conditions an initial state will converge to the equilibrium point. In the best possible case, any initial state will converge to the equilibrium point. An equilibrium point that has this property is said to be globally asymptotically stable, or asymptotically stable in the large.
Figure 3. This property.. it is however not sufficient.3 (Global Asymptotic Stability) Under the conditions of Theorem 3.
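The pendulum models above are easy to probe numerically. The sketch below is an illustration only: it assumes the parameter values m = l = 1, g = 9.8 and k = 0.5 (these values are not taken from the text), integrates the friction model with a classical fourth-order Runge-Kutta step, and checks that the energy function V never increases along the computed trajectory:

```python
import math

# Illustrative parameters (assumed, not from the text): m = l = 1, g = 9.8, k = 0.5.
M, L, G, K = 1.0, 1.0, 9.8, 0.5

def f(x1, x2):
    # Pendulum with friction: x1' = x2, x2' = -(g/l) sin x1 - (k/m) x2.
    return x2, -(G / L) * math.sin(x1) - (K / M) * x2

def V(x1, x2):
    # Total energy: V = (1/2) m l^2 x2^2 + m g l (1 - cos x1).
    return 0.5 * M * L**2 * x2**2 + M * G * L * (1.0 - math.cos(x1))

def rk4_step(x1, x2, h):
    k1 = f(x1, x2)
    k2 = f(x1 + 0.5*h*k1[0], x2 + 0.5*h*k1[1])
    k3 = f(x1 + 0.5*h*k2[0], x2 + 0.5*h*k2[1])
    k4 = f(x1 + h*k3[0], x2 + h*k3[1])
    return (x1 + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            x2 + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

def simulate(x1, x2, h=1e-3, steps=30000):
    # Integrate and record the energy V along the trajectory.
    energies = [V(x1, x2)]
    for _ in range(steps):
        x1, x2 = rk4_step(x1, x2, h)
        energies.append(V(x1, x2))
    return (x1, x2), energies

(x1f, x2f), energies = simulate(1.0, 0.0)
```

With friction the computed energy decays toward zero and the state approaches the origin, consistent with the asymptotic stability that the Lyapunov argument above is not yet able to establish.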
3.5 Asymptotic Stability in the Large

A quick look at the definitions of stability seen so far will reveal that all of these concepts are local in character. Consider, for example, the definition of stability. The equilibrium x_e is said to be stable if

    ||x(0) - x_e|| < \delta \;\Rightarrow\; ||x(t) - x_e|| < \epsilon

or, in words, starting "near" x_e the solution will remain "near" x_e. More important is the case of asymptotic stability: in this case the solution not only stays within \epsilon but also converges to x_e in the limit. When the equilibrium is asymptotically stable, it is often important to know under what conditions an initial state will converge to the equilibrium point. In the best possible case, any initial state will converge to the equilibrium point. An equilibrium point that has this property is said to be globally asymptotically stable, or asymptotically stable in the large.

Definition 3.7 The equilibrium state x_e is said to be asymptotically stable in the large, or globally asymptotically stable, if it is stable and every motion converges to the equilibrium as t -> infinity.

At this point it is tempting to infer that if the conditions of Theorem 3.2 hold in the whole space R^n, then the asymptotic stability of the equilibrium is global. While this condition is clearly necessary, it is however not sufficient. The reason is that the proof of Theorem 3.1 (and so also that of Theorem 3.2) relies on the fact that the positive definiteness of V(x), coupled with the negative definiteness of \dot{V}(x), ensures that V(x) <= V(x_0) in a compact region of the space. In Theorem 3.1 we started by choosing a ball B_r = {x in R^n : ||x|| <= r}, defined the closed and bounded region \Omega_\beta = {x in B_r : V(x) <= \beta}, and then showed that \Omega_\beta is contained in B_r. Both \Omega_\beta and B_r are closed sets (and so compact, since they are also bounded). If now B_r is allowed to be the entire space R^n, the situation changes, since the condition V(x) <= \beta does not, in general, define a closed and bounded region, and so it is possible for state trajectories to drift away from the equilibrium point even while V decreases. The following example shows precisely this.

Example 3.7 Consider the following positive definite function:

    V(x) = \frac{x_1^2}{1 + x_1^2} + x_2^2.

The region V(x) <= \beta is closed for values of \beta < 1; however, if \beta > 1, the surface is open. Figure 3.5 shows that an initial state can diverge from the equilibrium state at the origin while moving towards lower energy curves.

Figure 3.5: The curves V(x) = \beta.

The solution to this problem is to include an extra condition that ensures that V(x) = \beta is a closed curve for every \beta. This can be achieved by considering only functions V(.) that grow unboundedly as ||x|| -> infinity. These functions are called radially unbounded.

Definition 3.8 Let V : D -> R be a continuously differentiable function. Then V(x) is said to be radially unbounded if

    V(x) \to \infty \quad as \quad ||x|| \to \infty.

Theorem 3.3 (Global Asymptotic Stability) Let V : R^n -> R be continuously differentiable and such that
(i) V(0) = 0,
(ii) V(x) > 0 for all x != 0,
(iii) V(x) is radially unbounded,
(iv) \dot{V}(x) < 0 for all x != 0.
Then x = 0 is globally asymptotically stable.

Proof: The proof is similar to that of Theorem 3.1. We only need to show that, given an arbitrary \beta > 0, the condition

    \Omega_\beta = \{ x \in R^n : V(x) \le \beta \}

defines a set that is contained in the ball B_r for some r > 0, which implies that \Omega_\beta is bounded. To see this, notice that the radial unboundedness of V implies that for any \beta > 0 there exists r > 0 such that V(x) > \beta whenever ||x|| > r. Thus \Omega_\beta \subset B_r.

Example 3.8 Consider the following system:

    \dot{x}_1 = x_2 - x_1 (x_1^2 + x_2^2)
    \dot{x}_2 = -x_1 - x_2 (x_1^2 + x_2^2).

To study the equilibrium point at the origin, we define V(x) = x_1^2 + x_2^2. We have

    \dot{V}(x) = \nabla V \cdot f(x) = 2[x_1, x_2] \, [x_2 - x_1(x_1^2 + x_2^2), \; -x_1 - x_2(x_1^2 + x_2^2)]^T = -2 (x_1^2 + x_2^2)^2.

Thus V(x) > 0 and \dot{V}(x) < 0 for all x != 0 in R^2. Moreover, since V is radially unbounded, it follows that the origin is globally asymptotically stable.
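The closed-form expression for \dot{V} in Example 3.8 is easy to verify numerically. The following sketch evaluates \nabla V \cdot f at sample points and compares it with -2(x_1^2 + x_2^2)^2:

```python
def f_ex38(x1, x2):
    # Dynamics of Example 3.8.
    r2 = x1 * x1 + x2 * x2
    return x2 - x1 * r2, -x1 - x2 * r2

def vdot_ex38(x1, x2):
    # V = x1^2 + x2^2, so Vdot = 2 x1 f1 + 2 x2 f2 along trajectories.
    f1, f2 = f_ex38(x1, x2)
    return 2 * x1 * f1 + 2 * x2 * f2
```

The cross terms cancel identically, leaving a quantity that is strictly negative away from the origin, which is what the global asymptotic stability argument uses.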
3.6 Positive Definite Functions Revisited

We have seen that positive definite functions play an important role in the Lyapunov theory. We now introduce a new class of functions, known as class K, and show that positive definite functions can be characterized in terms of this class of functions. This new characterization is useful on many occasions. In the sequel, B_r represents the ball

    B_r = \{ x \in R^n : ||x|| \le r \}.

Definition 3.9 A continuous function \alpha : [0, a) -> R^+ is said to be in the class K if
(i) \alpha(0) = 0,
(ii) it is strictly increasing.
\alpha is said to be in the class K_\infty if, in addition, \alpha : R^+ -> R^+ and \alpha(r) -> infinity as r -> infinity.

Lemma 3.1 V : D -> R is positive definite if and only if there exist class K functions \alpha_1 and \alpha_2 such that

    \alpha_1(||x||) \le V(x) \le \alpha_2(||x||) \qquad \forall x \in B_r \subset D.

Moreover, if D = R^n and V(.) is radially unbounded, then \alpha_1 and \alpha_2 can be chosen in the class K_\infty.

Proof: See the Appendix.

A stronger class of functions is needed in the definition of asymptotic stability.

Definition 3.10 A continuous function \beta : [0, a) x R^+ -> R^+ is said to be in the class KL if
(i) for fixed s, \beta(r, s) is in the class K with respect to r,
(ii) for fixed r, \beta(r, s) is decreasing with respect to s,
(iii) \beta(r, s) -> 0 as s -> infinity.

For completeness, we now show that it is possible to restate the stability definitions in terms of class K and class KL functions.

Lemma 3.2 The equilibrium x_e of the system (3.1) is stable if and only if there exist a class K function \alpha(.) and a constant \delta such that

    ||x(0) - x_e|| < \delta \;\Rightarrow\; ||x(t) - x_e|| \le \alpha(||x(0) - x_e||) \qquad \forall t \ge 0.    (3.4)

Proof: See the Appendix.

Lemma 3.3 The equilibrium x_e of the system (3.1) is asymptotically stable if and only if there exist a class KL function \beta(.,.) and a constant \delta such that

    ||x(0) - x_e|| < \delta \;\Rightarrow\; ||x(t) - x_e|| \le \beta(||x(0) - x_e||, t) \qquad \forall t \ge 0.    (3.5)

Proof: See the Appendix.

Example 3.9 Let V(x) = x^T P x, where P is a symmetric matrix. This function is positive definite if and only if the eigenvalues of the symmetric matrix P are strictly positive. Denote by \lambda_{min}(P) and \lambda_{max}(P) the minimum and maximum eigenvalues of P, respectively. It then follows that

    \lambda_{min}(P) ||x||^2 \le x^T P x \le \lambda_{max}(P) ||x||^2.

Thus, V(x) satisfies Lemma 3.1 with \alpha_1 and \alpha_2 defined by

    \alpha_1(||x||) = \lambda_{min}(P) ||x||^2, \qquad \alpha_2(||x||) = \lambda_{max}(P) ||x||^2.
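The eigenvalue bounds of Example 3.9 can be checked with a small numerical experiment. The particular matrix P below is an arbitrary illustrative choice (not taken from the text); for a symmetric 2x2 matrix the eigenvalues have a simple closed form:

```python
import math, random

# Illustrative symmetric positive definite P (an assumed example matrix).
P = [[2.0, 0.5],
     [0.5, 1.0]]

def eig_sym2(p):
    # Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]: m -/+ d.
    a, b, c = p[0][0], p[0][1], p[1][1]
    d = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    m = (a + c) / 2.0
    return m - d, m + d

def quad(p, x):
    # The quadratic form x^T P x.
    return p[0][0]*x[0]*x[0] + 2*p[0][1]*x[0]*x[1] + p[1][1]*x[1]*x[1]

lmin, lmax = eig_sym2(P)
```

Sampling random points confirms lmin ||x||^2 <= x^T P x <= lmax ||x||^2, i.e., the class K_\infty bounds of Lemma 3.1 with quadratic comparison functions.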
3.6.1 Exponential Stability

As mentioned earlier, exponential stability is the strongest form of stability seen so far. The advantage of this notion is that it makes precise the rate at which trajectories converge to the equilibrium point. Our next theorem gives a sufficient condition for exponential stability.

Theorem 3.4 Suppose that all the conditions of Theorem 3.2 are satisfied, and in addition assume that there exist positive constants K_1, K_2, K_3 and p such that

    K_1 ||x||^p \le V(x) \le K_2 ||x||^p
    \dot{V}(x) \le -K_3 ||x||^p.

Then the origin is exponentially stable. Moreover, if the conditions hold globally, then x = 0 is globally exponentially stable.

Proof: By assumption,

    \dot{V}(x) \le -K_3 ||x||^p \le -\frac{K_3}{K_2} V(x)

and thus

    V(x(t)) \le V(x_0) \, e^{-(K_3/K_2) t}.

Therefore

    ||x(t)|| \le \left[ \frac{V(x(t))}{K_1} \right]^{1/p} \le \left[ \frac{V(x_0) e^{-(K_3/K_2) t}}{K_1} \right]^{1/p} \le \left[ \frac{K_2}{K_1} \right]^{1/p} ||x_0|| \, e^{-(K_3/(p K_2)) t}.
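As a minimal illustration of Theorem 3.4 (a scalar example of our own, not from the text), take the system x' = -x with V = x^2. Then K_1 = K_2 = 1, K_3 = 2 and p = 2, and the theorem's bound reads |x(t)| <= |x_0| e^{-t}; the exact solution meets it with equality:

```python
import math

# Scalar illustration (assumed example): x' = -x, V = x^2,
# giving K1 = K2 = 1, K3 = 2, p = 2 in Theorem 3.4.
K1, K2, K3, p = 1.0, 1.0, 2.0, 2

def bound(x0, t):
    # (K2/K1)^(1/p) * |x0| * exp(-K3 t / (p K2))
    return (K2 / K1) ** (1.0 / p) * abs(x0) * math.exp(-K3 * t / (p * K2))

def x_exact(x0, t):
    # Exact solution of x' = -x.
    return x0 * math.exp(-t)
```

For less tidy systems the bound is conservative rather than tight, but it always pins down an explicit exponential decay rate.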
3.7 Construction of Lyapunov Functions

The main shortcoming of the Lyapunov theory is the difficulty associated with the construction of suitable Lyapunov functions. In this section we study one approach to this problem, known as the "variable gradient" method. This method is applicable to autonomous systems and often, but not always, leads to a desired Lyapunov function for a given system.

The Variable Gradient: The essence of this method is to assume that the gradient of the (unknown) Lyapunov function V(.) is known up to some adjustable parameters, and to find V itself by integrating the assumed gradient. In other words, instead of proposing V directly, we start by assuming that

    \nabla V(x) = g(x)    (3.6)

so that \dot{V}(x) = \nabla V(x) \cdot f(x) = g(x) \cdot f(x), and we propose a possible function g(x) that contains some adjustable parameters. For a dynamical system with two states x_1 and x_2, an example of such a function could be

    g(x) = [g_1, g_2] = [h_1^1 x_1 + h_2^1 x_2, \; h_1^2 x_1 + h_2^2 x_2].

The free parameters in the function g(x) are constrained to satisfy certain symmetry conditions, satisfied by all gradients of a scalar function. The following theorem details these conditions.

Theorem 3.5 A function g(x) is the gradient of a scalar function V(x) if and only if the matrix [\partial g_i / \partial x_j] is symmetric, that is,

    \frac{\partial g_i}{\partial x_j} = \frac{\partial g_j}{\partial x_i} \qquad \left( equivalently, \; \frac{\partial^2 V}{\partial x_i \partial x_j} = \frac{\partial^2 V}{\partial x_j \partial x_i} \right).

Proof: See the Appendix.

The power of this method relies on the following fact. Given that \nabla V(x) = g(x), it follows that

    g(x) \, dx = \nabla V(x) \, dx = dV(x)

thus, the difference V(x_b) - V(x_a) depends on the initial and final states x_a and x_b and not on the particular path followed when going from x_a to x_b:

    V(x_b) - V(x_a) = \int_{x_a}^{x_b} \nabla V(x) \, dx = \int_{x_a}^{x_b} g(x) \, dx.

This property is often used to obtain V by integrating \nabla V(x) along the coordinate axes:

    V(x) = \int_0^{x_1} g_1(s_1, 0, \dots, 0) \, ds_1 + \int_0^{x_2} g_2(x_1, s_2, 0, \dots, 0) \, ds_2 + \dots + \int_0^{x_n} g_n(x_1, x_2, \dots, s_n) \, ds_n.    (3.7)
We now put these ideas to work using the following example.

Example 3.10 Consider the following system:

    \dot{x}_1 = a x_1
    \dot{x}_2 = b x_2 + x_1 x_2^2.

Clearly, the origin is an equilibrium point. To study its stability, we proceed to find a Lyapunov function as follows.

Step 1: Assume that \nabla V(x) = g(x) has the form

    g(x) = [h_1^1 x_1 + h_2^1 x_2, \; h_1^2 x_1 + h_2^2 x_2].

To simplify the solution, we attempt to solve the problem assuming that the h_i^j's are constant.

Step 2: Impose the symmetry conditions. In our case we have

    \frac{\partial g_1}{\partial x_2} = h_2^1 + x_1 \frac{\partial h_1^1}{\partial x_2} + x_2 \frac{\partial h_2^1}{\partial x_2}, \qquad \frac{\partial g_2}{\partial x_1} = h_1^2 + x_1 \frac{\partial h_1^2}{\partial x_1} + x_2 \frac{\partial h_2^2}{\partial x_1}.

If the h_i^j's are constant, all the partial-derivative terms vanish and the condition \partial g_1 / \partial x_2 = \partial g_2 / \partial x_1 reduces to h_2^1 = h_1^2 =: k. Writing h_1 := h_1^1 and h_2 := h_2^2, we have

    g(x) = [h_1 x_1 + k x_2, \; k x_1 + h_2 x_2].    (3.8)

Choosing k = 0, g(x) = [h_1 x_1, \; h_2 x_2].

Step 3: Find \dot{V}:

    \dot{V}(x) = \nabla V \cdot f(x) = g(x) \cdot f(x) = [h_1 x_1, \; h_2 x_2] f(x) = a h_1 x_1^2 + h_2 (b + x_1 x_2) x_2^2.    (3.9)

Step 4: Find V from \nabla V by integration. Integrating along the coordinate axes, we have

    V(x) = \int_0^{x_1} g_1(s_1, 0) \, ds_1 + \int_0^{x_2} g_2(x_1, s_2) \, ds_2 = \int_0^{x_1} h_1 s_1 \, ds_1 + \int_0^{x_2} h_2 s_2 \, ds_2 = \tfrac{1}{2} h_1 x_1^2 + \tfrac{1}{2} h_2 x_2^2.

Step 5: Verify that V > 0 and \dot{V} < 0. Clearly, V(x) > 0 if and only if h_1, h_2 > 0. Assume then that h_1 = h_2 = 1. In this case

    V(x) = \tfrac{1}{2} (x_1^2 + x_2^2), \qquad \dot{V}(x) = a x_1^2 + (b + x_1 x_2) x_2^2.

From (3.9), assuming now that a < 0 and b < 0, we have that \dot{V}(x) < 0 in the neighborhood of the origin where |x_1 x_2| < |b|, and we conclude that, under these conditions, the origin is (locally) asymptotically stable.
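The construction in Example 3.10 can be sanity-checked numerically. The sketch below uses the illustrative values a = b = -1 and h_1 = h_2 = 1 (assumed values; the conclusion only requires a < 0 and b < 0), verifies the symmetry condition by finite differences, and evaluates \dot{V} = g(x) . f(x):

```python
# Numerical check of Example 3.10 with the assumed values a = b = -1, h1 = h2 = 1.
A, B = -1.0, -1.0

def f(x1, x2):
    # x1' = a x1, x2' = b x2 + x1 x2^2
    return A * x1, B * x2 + x1 * x2 * x2

def g(x1, x2):
    # Assumed gradient with k = 0: g = [h1 x1, h2 x2], h1 = h2 = 1.
    return x1, x2

def V(x1, x2):
    # V obtained by integrating g along the coordinate axes.
    return 0.5 * x1 * x1 + 0.5 * x2 * x2

def Vdot(x1, x2):
    # Vdot = g(x) . f(x); should equal a x1^2 + (b + x1 x2) x2^2.
    f1, f2 = f(x1, x2)
    g1, g2 = g(x1, x2)
    return g1 * f1 + g2 * f2
```

The finite-difference test below confirms that the chosen g is indeed a gradient (Theorem 3.5), and that \dot{V} is negative at sample points near the origin.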
3.8 The Invariance Principle

Asymptotic stability is always more desirable than stability. However, it is often the case that a Lyapunov function candidate fails to identify an asymptotically stable equilibrium point by having \dot{V}(x) only negative semidefinite. An example of this is that of the pendulum with friction (Example 3.5). This shortcoming was due to the fact that, when studying the properties of the function \dot{V}, we assumed that the variables x_1 and x_2 are independent. Those variables, however, are related by the pendulum equations and so they are not independent of one another.

An extension of Lyapunov's theorem due to LaSalle studies this problem in great detail. The central idea is a generalization of the concept of equilibrium point called invariant set.

Definition 3.11 A set M is said to be an invariant set with respect to the dynamical system \dot{x} = f(x) if

    x(0) \in M \;\Rightarrow\; x(t) \in M \quad \forall t \in R^+.

In other words, M is the set of points with the property that if a solution of \dot{x} = f(x) belongs to M at some instant, then it belongs to M for all future time. Remark: in the dynamical systems literature, one often views a differential equation as being defined for all t rather than just all nonnegative t, and a set satisfying the definition above is then called positively invariant.

The following are some examples of invariant sets of the dynamical system \dot{x} = f(x).

Example 3.11 Any equilibrium point is an invariant set, since if at t = 0 we have x(0) = x_e, then x(t) = x_e for all t >= 0.

Example 3.12 For autonomous systems, any trajectory is an invariant set.

Example 3.13 A limit cycle is an invariant set (this is a special case of Example 3.12).

Example 3.14 If V(x) is continuously differentiable (not necessarily positive definite) and satisfies \dot{V}(x) <= 0 along the solutions of \dot{x} = f(x), then the set

    \Omega_l = \{ x \in R^n : V(x) \le l \}

is an invariant set. Notice that the condition \dot{V} <= 0 implies that if a trajectory crosses a Lyapunov surface V(x) = c, it can never come out again.

Example 3.15 The whole space R^n is an invariant set.

Definition 3.12 Let x(t) be a trajectory of the dynamical system \dot{x} = f(x). The set N is called the limit set (or positive limit set) of x(t) if for any p in N there exists a sequence of times {t_n} in [0, infinity), t_n -> infinity, such that

    x(t_n) \to p \quad as \quad t_n \to \infty

or, equivalently, lim_{n -> \infty} ||x(t_n) - p|| = 0.

Roughly speaking, the limit set N of x(t) is whatever x(t) tends to in the limit.

Example 3.16 An asymptotically stable equilibrium point is the limit set of any solution starting sufficiently near the equilibrium point.

Example 3.17 A stable limit cycle is the positive limit set of any solution starting sufficiently near it.

Lemma 3.4 If the solution x(t, x_0, t_0) of the system (3.1) is bounded for t >= t_0, then its (positive) limit set N is (i) bounded, (ii) closed, and (iii) nonempty. Moreover, the solution approaches N as t -> infinity.

Proof: See the Appendix.

Lemma 3.5 The positive limit set N of a solution x(t, x_0, t_0) of the autonomous system (3.1) is invariant with respect to (3.1).

Proof: See the Appendix.

Invariant sets play a fundamental role in an extension of Lyapunov's work produced by LaSalle. The problem is the following: recall the example of the pendulum with friction. Following energy considerations, we constructed a Lyapunov function that turned out to be useful to prove that x = 0 is a stable equilibrium point. However, our analysis, based on this Lyapunov function, failed to recognize that x = 0 is actually asymptotically stable, something that we know thanks to our understanding of this rather simple system. LaSalle's invariance principle removes this problem and actually allows us to prove that x = 0 is indeed asymptotically stable. We start with the simplest and most useful result in LaSalle's theory, a theorem that applies with \dot{V} short of being negative definite; Theorem 3.6 can in fact be considered a corollary of LaSalle's theorem, as will be shown later.
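As an aside, Example 3.14 can be illustrated with the system of Example 3.8, for which \dot{V} <= 0 with V = x_1^2 + x_2^2: a trajectory that starts in a sublevel set \Omega_l never leaves it. A rough numerical check (the step size and initial points below are arbitrary choices):

```python
def f(x1, x2):
    # Dynamics of Example 3.8; with V = x1^2 + x2^2, Vdot = -2 V^2 <= 0.
    r2 = x1 * x1 + x2 * x2
    return x2 - x1 * r2, -x1 - x2 * r2

def max_V_along_trajectory(x1, x2, h=1e-3, steps=5000):
    # Track the largest value of V seen along an RK4-integrated trajectory.
    vmax = x1 * x1 + x2 * x2
    for _ in range(steps):
        k1 = f(x1, x2)
        k2 = f(x1 + 0.5*h*k1[0], x2 + 0.5*h*k1[1])
        k3 = f(x1 + 0.5*h*k2[0], x2 + 0.5*h*k2[1])
        k4 = f(x1 + h*k3[0], x2 + h*k3[1])
        x1 += h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        x2 += h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
        vmax = max(vmax, x1 * x1 + x2 * x2)
    return vmax
```

Starting on or inside the boundary of \Omega_l, the recorded maximum of V never exceeds its initial value, which is the invariance property of sublevel sets.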
The difference between Theorem 3.6 and Theorem 3.2 is that in Theorem 3.6 the function \dot{V} is allowed to be only negative semidefinite, something that removes part of the conservatism associated with certain Lyapunov functions.

Theorem 3.6 The equilibrium point x = 0 of the autonomous system (3.1) is asymptotically stable if there exists a function V(x) satisfying
(i) V(x) positive definite for all x in D, where we assume that 0 in D;
(ii) \dot{V}(x) negative semidefinite in a bounded region R, a subset of D;
(iii) \dot{V}(x) does not vanish identically along any trajectory in R, other than the null solution x = 0.

Example 3.18 Consider again the pendulum with friction of Example 3.5:

    \dot{x}_1 = x_2    (3.11)
    \dot{x}_2 = -\frac{g}{l} \sin x_1 - \frac{k}{m} x_2.    (3.12)

Again, with V(x) = \tfrac{1}{2} m l^2 x_2^2 + mgl(1 - \cos x_1), we have

    \dot{V}(x) = -k l^2 x_2^2    (3.13)

which is negative semidefinite, since \dot{V}(x) = 0 for all x = [x_1, 0]^T. Thus, the Lyapunov theory seen so far fails to predict the asymptotic stability of the origin expected from the physical understanding of the problem. We now look to see whether application of Theorem 3.6 leads to a better result. Conditions (i) and (ii) of Theorem 3.6 are satisfied in the region

    R = \{ x \in R^2 : -\pi < x_1 < \pi, \; -a < x_2 < a \}

for any a in R^+; indeed, V(x) > 0 for all nonzero x in (-pi, pi) x R, and \dot{V}(x) <= 0 there. We now check condition (iii) of the same theorem; that is, we check whether \dot{V} can vanish identically along the trajectories trapped in R, other than the null solution. The key of this step is the analysis of the condition \dot{V} = 0 using the system equations (3.11)-(3.12). By (3.13) we have

    \dot{V}(x) = 0 \;\Leftrightarrow\; -k l^2 x_2^2 = 0 \;\Leftrightarrow\; x_2 = 0.

Assume then that \dot{V}(x) is identically zero over a nonzero time interval. Then x_2 = 0 for all t, which implies \dot{x}_2 = 0, and substituting into (3.12) we obtain

    0 = -\frac{g}{l} \sin x_1

and thus x_2 = 0 implies sin x_1 = 0. Restricting x_1 to the interval (-pi, pi), this last condition can be satisfied if and only if x_1 = 0. It follows that \dot{V}(x) does not vanish identically along any solution other than x = 0, and the origin is (locally) asymptotically stable by Theorem 3.6.

Theorem 3.7 The null solution x = 0 of the autonomous system (3.1) is asymptotically stable in the large if the assumptions of Theorem 3.6 hold in the entire state space (i.e., R = R^n) and V(.) is radially unbounded.

Proof: The proof follows the same argument used in the proof of Theorem 3.3 and is omitted.

Proof of Theorem 3.6: By the Lyapunov stability theorem (Theorem 3.1), we know that for each \epsilon > 0 there exists \delta > 0 such that

    ||x_0|| < \delta \;\Rightarrow\; ||x(t)|| < \epsilon

that is, any solution starting inside the closed ball B_\delta will remain within the closed ball B_\epsilon. Hence any solution x(t, x_0, t_0) of (3.1) that starts in B_\delta is bounded and tends to its limit set N, which is contained in B_\epsilon (by Lemma 3.4). Also, V(x) is continuous on the compact set B_\epsilon and thus is bounded from below there. It is also nonincreasing by assumption and thus tends to a nonnegative limit L as t -> infinity. By continuity, V(x) = L for all x in the limit set N. By Lemma 3.5, N is an invariant set with respect to (3.1), which means that any solution that starts in N will remain there for all future time. But along any such solution \dot{V}(x) = 0, since V(x) is constant (= L) in N. By assumption, \dot{V} does not vanish identically along any solution other than x = 0. Thus N is the origin of the state space, and we conclude that any solution starting in R, a subset of B_\delta, converges to x = 0 as t -> infinity.

Example 3.19 Consider the following system:

    \dot{x}_1 = x_2
    \dot{x}_2 = -x_2 - a x_1 - (x_1 + x_2)^2 x_2, \qquad a > 0.

To study the equilibrium point at the origin, we define V(x) = a x_1^2 + x_2^2. We have

    \dot{V}(x) = \nabla V \cdot f(x) = 2[a x_1, x_2] \, [x_2, \; -x_2 - a x_1 - (x_1 + x_2)^2 x_2]^T = -2 x_2^2 [1 + (x_1 + x_2)^2].

Thus V(x) > 0 and \dot{V}(x) <= 0, since \dot{V}(x) = 0 for x = (x_1, 0). Proceeding as in the previous example, we assume that \dot{V} is identically zero and conclude that

    \dot{V} = 0 \;\Rightarrow\; x_2 = 0 \;\forall t \;\Rightarrow\; \dot{x}_2 = 0 \;\Rightarrow\; -a x_1 - (x_1 + x_2)^2 x_2 = 0

and, considering the fact that x_2 = 0, the last equation implies that x_1 = 0. It follows that \dot{V}(x) does not vanish identically along any solution other than x = [0, 0]^T. Moreover, V(.) is radially unbounded, and by Theorem 3.7 we conclude that the origin is globally asymptotically stable.
Theorem 3.8 (LaSalle's theorem) Let V : D -> R be a continuously differentiable function and assume that
(i) M, a subset of D, is a compact set, invariant with respect to the solutions of (3.1);
(ii) \dot{V} <= 0 in M;
(iii) E = {x : x in M, and \dot{V} = 0}; that is, E is the set of all points of M such that \dot{V} = 0;
(iv) N is the largest invariant set in E.
Then every solution starting in M approaches N as t -> infinity.

Proof: Consider a solution x(t) of (3.1) starting in M. Since \dot{V} <= 0 in M, V(x(t)) is a decreasing function of t. Also, since V(.) is a continuous function, it is bounded from below in the compact set M. It follows that V(x(t)) has a limit as t -> infinity. Let \omega be the limit set of this trajectory. It follows that \omega is contained in M, since M is an invariant closed set. For any p in \omega there exists a sequence t_n with t_n -> infinity and x(t_n) -> p. By continuity of V(x), we have that

    V(p) = \lim_{n \to \infty} V(x(t_n)) = a \quad (a \; constant).

Hence V(x) = a on \omega. By Lemma 3.5, \omega is an invariant set, and moreover \dot{V}(x) = 0 on \omega (since V(x) is constant on \omega). Since x(t) is bounded, Lemma 3.4 implies that x(t) approaches \omega (its positive limit set) as t -> infinity. It follows that

    \omega \subset N \subset E \subset M.

Hence x(t) approaches N as t -> infinity.

Remarks: LaSalle's theorem goes beyond the Lyapunov stability theorems in two important respects. In the first place, V(.) is required to be continuously differentiable (and so bounded on the compact set M), but it is not required to be positive definite. Perhaps more important, LaSalle's result applies not only to equilibrium points, as in all the Lyapunov theorems, but also to more general dynamic behaviors, such as limit cycles. Example 3.20, at the end of this section, emphasizes this point.

Before looking at some examples, we notice that some useful corollaries can be found if V(.) is assumed to be positive definite.

Corollary 3.1 Let V : D -> R be a continuously differentiable positive definite function in a domain D containing the origin x = 0, and assume that \dot{V}(x) <= 0 for all x in D. Let S = {x in D : \dot{V}(x) = 0} and suppose that no solution can stay identically in S other than the trivial one. Then the origin is asymptotically stable.

Proof: Straightforward (see Exercise 3.11).

Corollary 3.2 If, in Corollary 3.1, D = R^n and V(.) is radially unbounded, then the origin is globally asymptotically stable.

Proof: See Exercise 3.12.

Example 3.20 [68] Consider the system defined by

    \dot{x}_1 = x_2 + x_1 (\beta^2 - x_1^2 - x_2^2)
    \dot{x}_2 = -x_1 + x_2 (\beta^2 - x_1^2 - x_2^2).

It is immediate that the origin x = (0, 0) is an equilibrium point. Also, the set of points of the form {x in R^2 : x_1^2 + x_2^2 = \beta^2} constitutes an invariant set. To see this, we compute the time derivative of x_1^2 + x_2^2 along the solutions of \dot{x} = f(x):

    \frac{d}{dt}(x_1^2 + x_2^2) = 2 x_1 \dot{x}_1 + 2 x_2 \dot{x}_2 = 2 (x_1^2 + x_2^2)(\beta^2 - x_1^2 - x_2^2)

which is zero for all points on the set. It follows that any trajectory initiating on the circle stays on the circle for all future time. The trajectories on this invariant set are described by the solutions of

    \dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1.

Thus, the circle is actually a limit cycle along which the trajectories move in the clockwise direction.
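A quick simulation suggests that this circle attracts nearby motions. The sketch below uses the illustrative choice beta = 1 (an assumed value), integrates from one initial point inside the circle and one outside, and records the final radius:

```python
import math

# Example 3.20 dynamics with the assumed value beta = 1.
BETA = 1.0

def f(x1, x2):
    s = BETA**2 - x1 * x1 - x2 * x2
    return x2 + x1 * s, -x1 + x2 * s

def radius(x1, x2):
    return math.hypot(x1, x2)

def rk4(x1, x2, h, steps):
    # Classical fourth-order Runge-Kutta integration.
    for _ in range(steps):
        k1 = f(x1, x2)
        k2 = f(x1 + 0.5*h*k1[0], x2 + 0.5*h*k1[1])
        k3 = f(x1 + 0.5*h*k2[0], x2 + 0.5*h*k2[1])
        k4 = f(x1 + h*k3[0], x2 + h*k3[1])
        x1 += h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        x2 += h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
    return x1, x2
```

Both trajectories settle onto radius beta, which is exactly the conclusion that the LaSalle analysis of this example makes rigorous.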
We now investigate the stability of this limit cycle using LaSalle's theorem. To this end, consider the following function:

    V(x) = \tfrac{1}{2} (x_1^2 + x_2^2 - \beta^2)^2

which is positive semidefinite in R^2. Along the solutions of \dot{x} = f(x),

    \dot{V}(x) = [2 x_1 (x_1^2 + x_2^2 - \beta^2), \; 2 x_2 (x_1^2 + x_2^2 - \beta^2)] \, f(x) = -2 (x_1^2 + x_2^2)(x_1^2 + x_2^2 - \beta^2)^2 \le 0.

We now apply LaSalle's theorem.

Step #1: Given any real number c > 0, define the set M as follows:

    M = \{ x \in R^2 : V(x) \le c \}.

By construction, M is closed and bounded (i.e., compact). Also, \dot{V}(x) <= 0 for all x in M, and thus any trajectory starting from an arbitrary point x_0 in M will remain inside M; M is therefore an invariant set.

Step #2: Find E = {x in M : \dot{V}(x) = 0}. Clearly, \dot{V}(x) = 0 if and only if one of the following conditions is satisfied: (a) x_1^2 + x_2^2 = 0, or (b) x_1^2 + x_2^2 - \beta^2 = 0. Thus

    E = (0, 0) \cup \{ x \in R^2 : x_1^2 + x_2^2 = \beta^2 \}

that is, E is the union of the origin and the limit cycle.

Step #3: Find N, the largest invariant set in E. Since E is the union of the origin (an invariant set, since (0, 0) is an equilibrium point) and the invariant set x_1^2 + x_2^2 = \beta^2, we conclude that N = E, and by LaSalle's theorem every motion starting in M converges to either the origin or the limit cycle.

We can refine our argument by noticing that the function V was designed to measure the distance from a point to the limit cycle:

    V(x) = 0 \quad whenever \; x_1^2 + x_2^2 = \beta^2, \qquad V(x) = \tfrac{1}{2} \beta^4 \quad when \; x = (0, 0).

Thus, choosing c with 0 < \epsilon <= c < \beta^4 / 2 and \epsilon arbitrarily small, the set M = {x in R^2 : V(x) <= c} includes the limit cycle but not the origin, and application of LaSalle's theorem shows that any motion starting in M converges to the limit cycle; the limit cycle is said to be convergent, or attractive. The same argument also shows that the origin is unstable, since any motion starting arbitrarily near (0, 0) converges to the limit cycle, thus diverging from the origin.

3.9 Region of Attraction

As discussed at the beginning of this chapter, asymptotic stability is often the most desirable form of stability and the focus of the stability analysis. Throughout this section we restrict our attention to this form of stability and discuss the possible application of Theorem 3.2 to estimate the region of asymptotic stability. To fix ideas, we consider the following example.

Example 3.21 Consider the system defined by

    \dot{x}_1 = 3 x_2
    \dot{x}_2 = -5 x_1 + x_1^3 - 2 x_2.

This system has three equilibrium points: x_e = (0, 0), x_e = (\sqrt{5}, 0), and x_e = (-\sqrt{5}, 0). We are interested in the stability of the equilibrium point at the origin. To study this equilibrium point, we propose the following Lyapunov function candidate:

    V(x) = a x_1^2 - b x_1^4 + c x_1 x_2 + d x_2^2    (3.14)

with a, b, c, d constants to be determined. Differentiating, we have that

    \dot{V} = (3c - 4d) x_2^2 + (2d - 12b) x_1^3 x_2 + (6a - 10d - 2c) x_1 x_2 + c x_1^4 - 5c x_1^2.

We now choose

    2d - 12b = 0, \qquad 6a - 10d - 2c = 0

which can be satisfied choosing a = 12, b = 1, c = 6, and d = 6. Using these values, we obtain

    V(x) = 12 x_1^2 - x_1^4 + 6 x_1 x_2 + 6 x_2^2 = 3 (x_1 + x_2)^2 + 9 x_1^2 + 3 x_2^2 - x_1^4    (3.15)
    \dot{V}(x) = -6 x_2^2 - 30 x_1^2 + 6 x_1^4.    (3.16)

So far, so good. Defining D by

    D = \{ x \in R^2 : -1.6 < x_1 < 1.6 \}    (3.17)

we have that V(x) > 0 and \dot{V} < 0 for all x in D - {0}, thus suggesting that any trajectory initiating within D will converge to the origin. It is therefore "tempting" to conclude that the origin is locally asymptotically stable and that any trajectory starting in D will move from a Lyapunov surface V(x_0) = c_1 to an inner Lyapunov surface V(x_1) = c_2 with c_1 > c_2, thus moving to Lyapunov surfaces of lesser values.

To check these conclusions, we plot the trajectories of the system as shown in Figure 3.6. A quick inspection of this figure shows, however, that our conclusions are incorrect. For example, the trajectory initiating at the point x_1 = 0, x_2 = 4 quickly diverges from the origin, even though the point (0, 4) is in D. The point neglected in our analysis is as follows: even though trajectories starting in D satisfy the conditions V(x) > 0 and \dot{V} < 0, D is not an invariant set, and there are no guarantees that these trajectories will stay within D. Once a trajectory crosses the border |x_1| = 1.6, there are no guarantees that \dot{V}(x) will be negative. The problem is that in our example we tried to infer too much from Theorem 3.2. Strictly speaking, this theorem says that the origin is locally asymptotically stable: it simply guarantees the existence of a possibly small neighborhood of the equilibrium point where such attraction takes place, but the region of the plane for which trajectories converge to the origin cannot be determined from this theorem alone. In general, this region can be a very small neighborhood of the equilibrium point.

In summary: estimating the so-called "region of attraction" of an asymptotically stable equilibrium point is a difficult problem. We now study how to estimate this region. We begin with the following definition.

Definition 3.13 Let \psi(x, t) be the trajectory of the system (3.1) with initial condition x at t = 0. The region of attraction of the equilibrium point x_e, denoted R_A, is defined by

    R_A = \{ x \in D : \psi(x, t) \to x_e \; as \; t \to \infty \}.
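The algebra in Example 3.21 can be double-checked numerically: with a, b, c, d = 12, 1, 6, 6, the derivative of V along the vector field should reduce to -6 x_2^2 - 30 x_1^2 + 6 x_1^4 with no cross terms remaining.

```python
# Consistency check for Example 3.21 with a, b, c, d = 12, 1, 6, 6.
def f(x1, x2):
    # x1' = 3 x2, x2' = -5 x1 + x1^3 - 2 x2
    return 3 * x2, -5 * x1 + x1**3 - 2 * x2

def V(x1, x2):
    return 12 * x1**2 - x1**4 + 6 * x1 * x2 + 6 * x2**2

def Vdot(x1, x2):
    # grad V = (24 x1 - 4 x1^3 + 6 x2, 6 x1 + 12 x2), contracted with f.
    f1, f2 = f(x1, x2)
    return (24 * x1 - 4 * x1**3 + 6 * x2) * f1 + (6 * x1 + 12 * x2) * f2
```

Evaluating the contraction of the gradient with the vector field on a grid of points and comparing with the closed form confirms that the cross terms were indeed eliminated by the chosen coefficients.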
In general, the exact determination of this region can be a very difficult task. In this section we discuss one way to "estimate" R_A, based entirely on LaSalle's invariance principle (Theorem 3.8). The following theorem outlines the details.

Figure 3.6: System trajectories in Example 3.21.

Theorem 3.9 Let x_e be an equilibrium point for the system (3.1). Let V : D -> R be a continuously differentiable function and assume that
(i) M, a subset of D, is a compact set containing x_e, invariant with respect to the solutions of (3.1);
(ii) \dot{V} is such that

    \dot{V} < 0 \quad \forall x \ne x_e, \; x \in M, \qquad \dot{V} = 0 \quad if \; x = x_e.

Under these conditions we have that M is contained in R_A.

Proof: Under the assumptions, we have that E = {x : x in M, and \dot{V} = 0} = {x_e}. It then follows that N, the largest invariant set in E, is also {x_e}, and the result follows from LaSalle's Theorem 3.8.

In other words, Theorem 3.9 states that if M is an invariant set and V is such that \dot{V} < 0 inside M, then M itself provides an "estimate" of R_A.

Example 3.22 Consider again the system of Example 3.21:

    \dot{x}_1 = 3 x_2
    \dot{x}_2 = -5 x_1 + x_1^3 - 2 x_2

with

    V(x) = 12 x_1^2 - x_1^4 + 6 x_1 x_2 + 6 x_2^2, \qquad \dot{V}(x) = -6 x_2^2 - 30 x_1^2 + 6 x_1^4.

We know that V > 0 and \dot{V} < 0 for all nonzero x in {x in R^2 : -1.6 < x_1 < 1.6}. To estimate the region of attraction R_A, we now find the minimum of V(x) at the very edge of this condition (i.e., x_1 = +/- 1.6). We have

    V|_{x_1 = 1.6} = 24.16 + 9.6 x_2 + 6 x_2^2 = z_1, \qquad \frac{dz_1}{dx_2} = 9.6 + 12 x_2 = 0 \;\Rightarrow\; x_2 = -0.8.

Similarly,

    V|_{x_1 = -1.6} = 24.16 - 9.6 x_2 + 6 x_2^2 = z_2, \qquad \frac{dz_2}{dx_2} = -9.6 + 12 x_2 = 0 \;\Rightarrow\; x_2 = 0.8.

Thus, the function V(+/-1.6, x_2) has a minimum when x_2 = -/+ 0.8, and it is immediate that V(1.6, -0.8) = V(-1.6, 0.8) ~ 20.32. From here we can conclude that, given any \epsilon > 0, the region defined by

    M = \{ x \in R^2 : V(x) \le 20.32 - \epsilon \}

is an invariant set contained in the strip |x_1| < 1.6 and satisfies the conditions of Theorem 3.9. This means that M is contained in R_A.
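The boundary minimum computed in Example 3.22 can be confirmed with a brute-force search over the edge x_1 = 1.6 (the exact minimum is 20.3264, reported above rounded to 20.32):

```python
def V(x1, x2):
    # Lyapunov function of Examples 3.21-3.22.
    return 12 * x1**2 - x1**4 + 6 * x1 * x2 + 6 * x2**2

def Vdot(x1, x2):
    return -6 * x2**2 - 30 * x1**2 + 6 * x1**4

# Grid search for the minimum of V on the edge x1 = 1.6, x2 in [-2, 2].
edge_min = min(V(1.6, -2.0 + 4.0 * i / 100000) for i in range(100001))
```

The search recovers the analytical minimizer x_2 = -0.8 to grid accuracy, and \dot{V} remains negative on that edge, as required for the sublevel set to sit inside the strip where the estimate is valid.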
CHAPTER 3. To estimate the region of attraction RA we now find the minimum of V(x) at the very edge of this condition (i.6.
and let V(. the solution of the differential equation (3. With these assumptions.19)
V (X) = xT Px
(3.10. Thus
XTATPi+XTPAx XT(ATP+PA)x
or
V=
Here the matrix Q is symmetric. Q). analyzing the asymptotic stability of the origin for the LTI system (3.3.
QT = (PA +ATP)T = (ATP+AP) = Q
If Q is positive definite. In the first place. This is done in two steps:
.i) < 0.
Consider the autonomous linear timeinvariant system given by
x=Ax. however. then is negative definite and the origin is (globally) asymptotically stable.22)
PA+ATP = _Q.) is positive definite. Finally. We also have that
V=xTPx+xTP±
by (3. where LTI is a special
case.20)
where p E Rnxn is (i) symmetric and (ii) positive definite.) be defined as follows
AERnxn
(3.19) reduces to analyzing the positive definiteness of the pair of matrices (P. since
_XTQX
(3. we will study the stability of nonlinear systems via the linearization of the state equation and try to get some insight into the limitations associated with this process. Moreover.21) (3. we will introduce a very useful class of Lyapunov functions that appears very frequently in the literature. the Lyapunov analysis permits studying linear and nonlinear systems under the same formalism. it seems unnecessary to investigate the stability of LTI systems via Lyapunov methods. exactly what we will do in this section! There is more than one good reason for doing so. V(. This is. Second.19).18) can be expressed in a rather simple closed form
x(t) = eAtxo
Given these facts. Thus. ANALYSIS OF LINEAR TIMEINVARIANT SYSTEMS
97
and only if all the eigenvalues of the matrix A satisfy Re(. 2T = XT AT .
To see
. AUTONOMOUS SYSTEMS
(i) Choose an arbitrary symmetric. The following theorem guarantees the
existence of such a solution.22) appears very frequently in the literature and is called Lyapunov equation.
Remarks: There are two important points to notice here. LYAPUNOV STABILITY I. Thus V = xTPx > 0
and V = xT Qx < 0 and asymptotic stability follows from Theorem 3. given the assumptions on the eigenvalues of A. We claim that P so defined is positive definite. Q) depends on the existence of a unique solution of the Lyapunov equation for a given matrix A. we have that
Q = PA + AT P = [
0 4
and the resulting Q is not positive definite.22)
T
Proof: Assume first that given Q > 0. the approach just described may seem unnecessarily complicated. The matrix P is also symmetric. the system with the following A matrix.10 The eigenvalues at of a matrix A E R
satisfy te(A1) < 0 if and only if for any given symmetric positive definite matrix Q there exists a unique positive definite symmetric matrix P satisfying the Lyapunov equation (3.22) and verify that it is positive definite.98
CHAPTER 3.
Theorem 3. Indeed.
(ii) Find P that satisfies equation (3.22).
Equation (3. In the first place. thus eliminating the need for solving the Lyapunov equation. it seems to be easier to first select a positive definite P and use this matrix to find Q. Consider. define P as follows:
00
P =
10
eATtQeAt dt
this P is well defined. This approach may however lead to inconclusive results. For the converse assume that te(A1) < 0 and given Q.2. positive definite matrix Q.
The second point to notice is that clearly the procedure described above for the stability analysis based on the pair (P. for example. Therefore no conclusion can be drawn from this regarding the stability of the origin of this system. 2P > 0 satisfying (3. since (eATt)T = eAt.
`4
=
0
4
8 12
4
24
]
taking P = I.
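The two-step procedure above can be carried out numerically. The following sketch (not from the text; the solver below is an illustration, and a library routine such as SciPy's Lyapunov solver could be used instead) rewrites PA + AᵀP = −Q as a linear system in the entries of P via the Kronecker-product identities vec(PA) = (Aᵀ ⊗ I)vec(P) and vec(AᵀP) = (I ⊗ Aᵀ)vec(P), and applies it to the A matrix from the remark above:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve P A + A^T P = -Q for P (unique when no two eigenvalues of A sum to zero)."""
    n = A.shape[0]
    I = np.eye(n)
    # Linear operator acting on vec(P) (column-major / Fortran order)
    L = np.kron(A.T, I) + np.kron(I, A.T)
    vecP = np.linalg.solve(L, -Q.flatten(order="F"))
    return vecP.reshape((n, n), order="F")

A = np.array([[0.0, 4.0], [-8.0, -12.0]])   # matrix from the remark above
Q = np.eye(2)                               # step (i): arbitrary symmetric Q > 0
P = solve_lyapunov(A, Q)                    # step (ii): solve the Lyapunov equation

# P is symmetric positive definite, confirming Re(lambda_i) < 0 (Theorem 3.10),
# even though the naive choice P = I was inconclusive.
print(np.allclose(P, P.T), np.all(np.linalg.eigvalsh(P) > 0))
```

For this A and Q = I the exact solution is P = [[5/16, 1/16], [1/16, 1/16]], which is symmetric with positive eigenvalues, so the origin is asymptotically stable.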
Proof: Assume first that, given Q > 0, ∃P > 0 satisfying (3.22). Then V = xᵀPx > 0 and V̇ = −xᵀQx < 0, and asymptotic stability of the origin follows from Theorem 3.2; thus Re(λᵢ) < 0.

For the converse, assume that Re(λᵢ) < 0 and, given Q, define P as follows:

    P = ∫₀^∞ e^{Aᵀt} Q e^{At} dt.

This P is well defined, that is, the integral converges, given the assumptions on the eigenvalues of A. The matrix P is also symmetric, since (e^{Aᵀt})ᵀ = e^{At}. We claim that P so defined is positive definite. To see that this is the case, we reason by contradiction and assume that the opposite is true, that is, ∃x ≠ 0 such that xᵀPx = 0. But then

    xᵀPx = 0  ⟹  ∫₀^∞ xᵀe^{Aᵀt} Q e^{At}x dt = 0

and, with y = e^{At}x,

    ∫₀^∞ yᵀQy dt = 0  ⟹  y = e^{At}x ≡ 0  ∀t > 0  ⟹  x = 0

since e^{At} is nonsingular ∀t. This contradicts the assumption, and thus we have that P is indeed positive definite. We now show that P satisfies the Lyapunov equation:

    PA + AᵀP = ∫₀^∞ e^{Aᵀt} Q e^{At} A dt + ∫₀^∞ Aᵀ e^{Aᵀt} Q e^{At} dt

             = ∫₀^∞ (d/dt)(e^{Aᵀt} Q e^{At}) dt

             = [e^{Aᵀt} Q e^{At}]₀^∞ = −Q

which shows that P is indeed a solution of the Lyapunov equation. To complete the proof, there remains to show that this P is unique. To see this, suppose that there is another solution P̄ ≠ P. Then

    (P − P̄)A + Aᵀ(P − P̄) = 0

    ⟹  e^{Aᵀt}[(P − P̄)A + Aᵀ(P − P̄)]e^{At} = 0

    ⟹  (d/dt)[e^{Aᵀt}(P − P̄)e^{At}] = 0

which implies that e^{Aᵀt}(P − P̄)e^{At} is constant ∀t. Evaluating this constant at t = 0 and letting t → ∞, this can be the case if and only if P − P̄ = 0, that is, P = P̄. This completes the proof. □
3.10.1 Linearization of Nonlinear Systems

Now consider the nonlinear system

    ẋ = f(x),    f : D → ℝⁿ                                (3.23)

and assume that x = xₑ ∈ D is an equilibrium point and that f is continuously differentiable in D. The Taylor series expansion of f about the equilibrium point xₑ has the form

    f(x) = f(xₑ) + (∂f/∂x)(xₑ)(x − xₑ) + higher-order terms.

Neglecting the higher-order terms (HOTs) and recalling that, by assumption, f(xₑ) = 0, we have that

    ẋ ≈ A(x − xₑ),    A = (∂f/∂x)(xₑ).                    (3.24)

Now defining

    x̃ = x − xₑ                                            (3.25)

we have that x̃̇ = ẋ, and the linear approximation of (3.23) about xₑ is

    x̃̇ = Ax̃.                                              (3.26)

We now ask whether it is possible to investigate the local stability of the nonlinear system (3.23) about the equilibrium point xₑ by analyzing the properties of the linear time-invariant system (3.26). The following theorem, known as Lyapunov's indirect method, shows that if the linearized system (3.26) is exponentially stable, then it is indeed the case that for the original system (3.23) the equilibrium xₑ is locally exponentially stable. To simplify our notation, we assume that the equilibrium point is the origin.

Theorem 3.11 Let x = 0 be an equilibrium point for the system (3.23). Assume that f is continuously differentiable in D, and let A be defined as in (3.24). Then if the eigenvalues λᵢ of the matrix A satisfy Re(λᵢ) < 0, the origin is an exponentially stable equilibrium point for the system (3.23).

The proof is omitted since it is a special case of Theorem 4.7 in the next chapter (see Section 4.5 for the proof of the time-varying equivalent of this result).
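As an illustration of Lyapunov's indirect method (a sketch, not an example from the text; the damped-pendulum dynamics below are an assumption chosen for the demo), the Jacobian A = ∂f/∂x at an equilibrium can be estimated by central finite differences and its eigenvalues checked:

```python
import numpy as np

def f(x):
    # Assumed example dynamics: damped pendulum x1' = x2, x2' = -sin(x1) - x2
    return np.array([x[1], -np.sin(x[0]) - x[1]])

def jacobian(f, xe, h=1e-6):
    """Central-difference estimate of df/dx at xe."""
    n = len(xe)
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        A[:, j] = (f(xe + e) - f(xe - e)) / (2.0 * h)   # column j = df/dx_j
    return A

A = jacobian(f, np.zeros(2))      # exact value here: [[0, 1], [-1, -1]]
eigs = np.linalg.eigvals(A)
# Both eigenvalues have Re(lambda_i) = -1/2 < 0, so by Theorem 3.11 the
# origin is locally exponentially stable for the nonlinear system as well.
print(np.all(eigs.real < 0))
```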
3.11 Instability

So far we have investigated the problem of stability. All the results seen so far are, however, sufficient conditions for stability. Thus the usefulness of these results is limited by our ability to find a function V(·) that satisfies the conditions of one of the stability theorems seen so far. If our attempt to find this function fails, then no conclusions can be drawn with respect to the stability properties of the particular equilibrium point under study. In these circumstances it is useful to study the opposite problem, namely, whether it is possible to show that the origin is actually unstable. The literature on instability is almost as extensive as that on stability. Perhaps the most famous and useful result is a theorem due to Chetaev, given next.

Theorem 3.12 (Chetaev) Consider the autonomous dynamical system (3.1) and assume that x = 0 is an equilibrium point. Let V : D → ℝ have the following properties:

(i) V(0) = 0.

(ii) ∃x₀ ∈ ℝⁿ, arbitrarily close to x = 0, such that V(x₀) > 0.

(iii) V̇ > 0 ∀x ∈ U, where the set U is defined as follows:

    U = {x ∈ D : ‖x‖ ≤ ε, and V(x) > 0}.

Under these conditions, x = 0 is unstable.

Remarks: Before proving the theorem, we briefly discuss conditions (ii) and (iii). According to assumption (ii), V(·) is such that V(x₀) > 0 for some points inside the ball B_δ = {x ∈ D : ‖x‖ ≤ δ}, where δ can be chosen arbitrarily small. No claim, however, was made about V(·) being positive definite in a neighborhood of x = 0. Assumption (iii) says that V̇ is positive in the set U. This set consists of all those points inside the ball B_ε (i.e., the set of points satisfying ‖x‖ ≤ ε) that, in addition, satisfy V(x) > 0. U is clearly bounded and, moreover, its boundary consists of the points on the sphere ‖x‖ = ε and the surface defined by V(x) = 0. Notice also that condition (ii) guarantees that the set U is not empty.

Proof: The proof consists of showing that a trajectory initiating at a point x₀ arbitrarily close to the origin in the set U will eventually cross the sphere defined by ‖x‖ = ε. Given that ε is arbitrary, this implies that we cannot find δ > 0 such that

    ‖x₀‖ < δ  ⟹  ‖x(t)‖ < ε    ∀t ≥ 0.

Consider an interior point x₀ ∈ U. By assumption (ii), V(x₀) > 0. Since V̇ > 0 in U, the trajectory x(t) starting at x₀ satisfies V(x(t)) ≥ V(x₀) > 0 as long as x(t) remains inside the set U. Define now the set of points Ω as follows:

    Ω = {x ∈ U : ‖x‖ ≤ ε and V(x) ≥ V(x₀)}.

This set is compact (it is clearly bounded, and it is also closed since it contains its boundary points, on ‖x‖ = ε and on V(x) = V(x₀)). It then follows that V̇(x), which is a continuous function on the compact set Ω, has a minimum value and a maximum value on Ω. Taking account of assumption (iii), we can define

    γ = min{V̇(x) : x ∈ Ω} > 0.

Thus, for the trajectory starting at x₀, as long as x(t) remains in Ω we can write

    V(x(t)) = V(x₀) + ∫₀ᵗ V̇(x(τ)) dτ ≥ V(x₀) + γt.

It then follows that x(t) cannot stay forever inside the set U, since V(x), being continuous on the compact closure of U, is bounded in U. Thus a trajectory x(t) initiating arbitrarily close to the origin must intersect the boundary of U. The boundary consists of the sphere ‖x‖ = ε and the surface V(x) = 0. However, along the trajectory V(x(t)) ≥ V(x₀) > 0, so x(t) cannot leave U through the surface V(x) = 0. We thus conclude that x(t) leaves the set U through the sphere ‖x‖ = ε. Since x₀ can be chosen arbitrarily close to the origin, x = 0 is unstable: given ε > 0, we cannot find δ > 0 such that

    ‖x₀‖ < δ  ⟹  ‖x(t)‖ < ε.

This completes the proof.
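As a quick numerical companion (a sketch, not part of the text): for the planar system ẋ₁ = x₂ + x₁(β² − x₁² − x₂²), ẋ₂ = −x₁ + x₂(β² − x₁² − x₂²) with β = 1, the function V = ½(x₁² + x₂²) satisfies V̇ = (x₁² + x₂²)(β² − x₁² − x₂²) > 0 inside ‖x‖ < β, exactly the Chetaev scenario, and a trajectory started arbitrarily close to the origin is driven out toward the circle ‖x‖ = β:

```python
import numpy as np

def f(x, beta=1.0):
    r2 = x[0]**2 + x[1]**2
    return np.array([ x[1] + x[0]*(beta**2 - r2),
                     -x[0] + x[1]*(beta**2 - r2)])

# RK4 integration from a tiny initial condition near the origin
x = np.array([1e-3, 0.0])
dt = 1e-3
for _ in range(20000):                      # t in [0, 20]
    k1 = f(x)
    k2 = f(x + 0.5*dt*k1)
    k3 = f(x + 0.5*dt*k2)
    k4 = f(x + dt*k3)
    x = x + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# ||x|| has grown from 1e-3 to roughly beta = 1: the origin repels trajectories
print(np.linalg.norm(x))
```

In polar coordinates the radius obeys ṙ = r(β² − r²), a logistic growth law, which is why the trajectory saturates near ‖x‖ = β.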
Example 3.23 Consider again the system studied in Section 3.8:

    ẋ₁ = x₂ + x₁(β² − x₁² − x₂²)
    ẋ₂ = −x₁ + x₂(β² − x₁² − x₂²).

We showed in Section 3.8 that the origin of this system is an unstable equilibrium point. We now verify this result using Chetaev's result. Let V(x) = ½(x₁² + x₂²). Thus we have that V(0) = 0 and, moreover, V(x) > 0 ∀x ∈ ℝ², x ≠ 0; that is, V(·) is positive definite. Also

    V̇ = (x₁, x₂)f(x) = (x₁² + x₂²)(β² − x₁² − x₂²).

Defining the set U by

    U = {x ∈ ℝ² : ‖x‖ ≤ ε, 0 < ε < β}

we have that V̇(x) > 0 ∀x ∈ U, x ≠ 0. Thus, by Chetaev's result, the origin is unstable. □

3.12 Exercises

(3.1) Consider the following dynamical system:

    ẋ₁ = x₂
    ẋ₂ = x₁ + x₁³ − x₂
(a) Find all of its equilibrium points. (b) Find the linear approximation about each equilibrium point, find the eigenvalues of the resulting A matrix, and classify the stability of each equilibrium point.

(3.2) Given the systems (i) and (ii) below, proceed as follows:

(a) Find all of their equilibrium points.

(b) For each equilibrium point xₑ different from zero, perform a change of variables y = x − xₑ and show that the resulting system ẏ = g(y) has an equilibrium point at the origin.

(c) Using a computer package, find the eigenvalues of the resulting A matrix and classify the stability of each equilibrium point.

(d) Using the same computer package used in part (c), construct the phase portrait of each linear approximation found in (c) and compare it with the results in part (c); then construct the phase portrait of each nonlinear system and discuss the qualitative behavior of the system. What can you conclude about the "accuracy" of the linear approximations as the trajectories deviate from the equilibrium points? Make sure that your analysis contains information about all the equilibrium points of these systems.

    (i)  ẋ₁ = …,    ẋ₂ = …
    (ii) ẋ₁ = …,    ẋ₂ = …

(3.3) Consider the magnetic suspension system of Section 1.9:

    ẋ₁ = x₂
    ẋ₂ = g − (k/m)x₂ − (λμ/2m)·x₃²/(1 + μx₁)²
    ẋ₃ = [(1 + μx₁)/λ]·[−Rx₃ + (λμ/(1 + μx₁)²)x₂x₃ + v]

(a) Find the input voltage v = v₀ necessary to keep the ball at an arbitrary position y = y₀ (and so x₁ = y₀). Find the equilibrium point xₑ = [xₑ₁, xₑ₂, xₑ₃] corresponding to this input. (b) Find the linear approximation about this equilibrium point and analyze its stability.

(3.4) For each of the following systems, study the stability of the equilibrium point at the origin:

    (i)  ẋ₁ = …,    ẋ₂ = …
    (ii) ẋ₁ = …,    ẋ₂ = … − 2 tan⁻¹(x₁ + x₂)

(3.5) For each of the following systems, study the stability of the equilibrium point at the origin:

    (i)  ẋ₁ = x₂ − 2x₁(x₁² + x₂²),    ẋ₂ = −x₁ − 2x₂(x₁² + x₂²)
    (ii) ẋ₁ = x₁ + x₁x₂,              ẋ₂ = −x₂

(3.6) Consider the following system:

    ẋ₁ = x₂ + ax₁(x₁² + x₂²)
    ẋ₂ = −x₁ + ax₂(x₁² + x₂²)

(a) Verify that the origin is an equilibrium point. (b) Find the linear approximation about the origin, find the eigenvalues of the resulting A matrix, and classify the stability of the equilibrium point at the origin. (c) Assuming that the parameter a > 0, use a computer package to study the trajectories for several initial conditions. What can you conclude about your answers from the linear analysis in part (b)? (d) Repeat part (c) assuming a < 0.

(3.7) It is known that a given dynamical system has an equilibrium point at the origin. For this system a function V(·) has been proposed, and its derivative V̇(·) has been computed. Assuming that V(·) and V̇(·) are as given below, you are asked to classify the origin, in each case, as (a) stable, (b) locally asymptotically stable, (c) globally asymptotically stable, (d) unstable, and/or (e) providing inconclusive information. Explain your answer in each case.

    (a) V(x) = x₁² + x₂²,          V̇(x) = −(x₁² + x₂²)
    (b) V(x) = x₁² + x₂² − 1,      V̇(x) = −(x₁² + x₂²)
    (c) V(x) = (x₁² + x₂² − 1)²,   V̇(x) = −(x₁² + x₂²)
    (d) V(x) = (x₁² + x₂² − 1)²,   V̇(x) = x₁²
    (e) V(x) = x₁² + x₂²,          V̇(x) = −(x₁² − x₂²)
    (f) V(x) = x₁² − x₂²,          V̇(x) = −(x₁² + x₂²)
    (g) V(x) = x₁² − x₂²,          V̇(x) = x₁² + x₂²
    (h) V(x) = (x₁ + x₂)²,         V̇(x) = −(x₁² + x₂²)

(3.8) Prove the following properties of class K and class K∞ functions:

(i) If α : [0, a) → ℝ ∈ K, then α⁻¹ : [0, α(a)) → ℝ ∈ K.
(ii) If α₁, α₂ ∈ K, then α₁ ∘ α₂ ∈ K.
(iii) If α ∈ K∞, then α⁻¹ ∈ K∞.
(iv) If α₁, α₂ ∈ K∞, then α₁ ∘ α₂ ∈ K∞.

(3.9) Consider the system defined by the following equations:

    ẋ₁ = x₂ + βx₁³
    ẋ₂ = −x₁ + βx₂³

Study the stability of the equilibrium point xₑ = (0, 0) in the following cases: (i) β > 0; (ii) β = 0; (iii) β < 0.

(3.10) Provide a detailed proof of Theorem 3.7.

(3.11) Provide a detailed proof of Corollary 3.1.

(3.12) Provide a detailed proof of Corollary 3.2.

(3.13) Consider the system defined by the following equations:

    ẋ₁ = x₂
    ẋ₂ = …

(i) Show that the sets defined by (a) x = (0, 0) and (b) 1 − (3x₁ + 2x₂) = 0 are invariant sets. (ii) Study the stability of the origin x = (0, 0). (iii) Study the stability of the invariant set 1 − (3x₁ + 2x₂) = 0.

(3.14) Consider the system defined by the following equations:

    ẋ₁ = 3x₂
    ẋ₂ = −x₁ − (2x₂ + 3x₁)²x₂

Study the stability of the equilibrium point xₑ = (0, 0).

(3.15) Given the following system, discuss the stability of the equilibrium point at the origin:

    ẋ₁ = …
    ẋ₂ = (x₁x₂² + 2x₁x₂ + x₁²)/(x₁³ + x₂)

(3.16) (Lagrange stability) Consider the following notion of stability:

Definition 3.14 The equilibrium point x = 0 of the system (3.1) is said to be bounded, or Lagrange stable, if there exists a bound A such that

    ‖x(t)‖ < A    ∀t ≥ 0.

Prove the following theorem:

Theorem 3.13 [49] (Lagrange stability theorem) Let Ω be a bounded neighborhood of the origin and let Ωᶜ be its complement. Assume that V(x) : ℝⁿ → ℝ is continuously differentiable in Ωᶜ and satisfies:

(i) V(x) > 0 ∀x ∈ Ωᶜ;
(ii) V̇(x) ≤ 0 ∀x ∈ Ωᶜ;
(iii) V is radially unbounded.

Then the equilibrium point at the origin is Lagrange stable.

Notes and References

Good sources for the material of this chapter are References [48], [27], [41], [88], [68], and [95], among others. Section 3.1 is based on Reference [32]. The proof of Theorem 3.7, as well as Lemmas 3.4 and 3.5, follows closely the presentation in Reference [95]. Section 3.8 is based on LaSalle, [49], and Khalil, [41]. The beautiful Example 3.20 was taken from Reference [68].
Chapter 4

Lyapunov Stability II: Nonautonomous Systems

In this chapter we extend the results of Chapter 3 to nonautonomous systems. We start by reviewing the several notions of stability, to be extended to the nonautonomous case. For simplicity, in this chapter we state all our definitions and theorems assuming that the equilibrium point of interest is the origin.¹

4.1 Definitions

We now extend the several notions of stability from Chapter 3 to nonautonomous systems. Consider the nonautonomous system

    ẋ = f(x, t),    f : D × [0, ∞) → ℝⁿ                   (4.1)

where f is locally Lipschitz in x and piecewise continuous in t on D × [0, ∞). In this case, (4.1) represents an unforced system. We will say that the origin x = 0 ∈ D is an equilibrium point of (4.1) at t = t₀ if

    f(0, t) = 0    ∀t ≥ t₀.

For autonomous systems, equilibrium points are the real roots of the equation f(xₑ) = 0. Visualizing equilibrium points for nonautonomous systems is not as simple. In general, the initial time instant t₀ warrants special attention. This issue will originate several technicalities as well as the notion of uniform stability.

¹Notice that, as in Chapter 3, xₑ = 0 can be a translation of a nonzero trajectory.
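The dependence on t₀ is not merely a technicality. A classical scalar example (a sketch with assumed dynamics, not from the text) is ẋ = (6t sin t − 2t)x, whose solution is x(t) = x(t₀)exp(φ(t) − φ(t₀)) with φ(t) = 6 sin t − 6t cos t − t². For each fixed t₀ the amplification factor exp(φ(t) − φ(t₀)) is bounded over t ≥ t₀, so the origin is stable at every t₀, but the bound grows without limit as t₀ increases, so the stability is not uniform:

```python
import numpy as np

def phi(t):
    # Antiderivative of 6 t sin t - 2 t (up to a constant)
    return 6.0*np.sin(t) - 6.0*t*np.cos(t) - t**2

def amplification(t0, horizon=50.0, n=20000):
    """Largest factor by which |x| can grow when starting at time t0."""
    t = np.linspace(t0, t0 + horizon, n)
    return np.max(np.exp(phi(t) - phi(t0)))

a0 = amplification(0.0)
a1 = amplification(2.0 * np.pi)   # starting one period later
# a1 exceeds a0 by many orders of magnitude: no single bound works for all t0
print(a0, a1)
```

Here the peak excursion starting at t₀ = 2π (about e^{44.9}) dwarfs the one starting at t₀ = 0 (about e^{9}), which is exactly the behavior the notion of uniform stability is designed to rule out.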
Indeed, consider the nonautonomous system

    ẋ = f(x, t)                                            (4.2)

and assume that x̄(t) is a trajectory, that is, a solution of the differential equation (4.2) for t ≥ 0. Consider the change of variable y = x − x̄(t). We have

    ẏ = f(x, t) − f(x̄(t), t) = f(y + x̄(t), t) − f(x̄(t), t)  ≝  g(y, t).

Thus

    g(0, t) = f(x̄(t), t) − f(x̄(t), t) = 0;

that is, the origin y = 0 is an equilibrium point of the new system ẏ = g(y, t) at t = 0.

Definition 4.1 The equilibrium point x = 0 of the system (4.1) is said to be

• Stable at t₀ if given ε > 0, ∃δ = δ(ε, t₀) > 0 such that

    ‖x(0)‖ < δ  ⟹  ‖x(t)‖ < ε    ∀t ≥ t₀ ≥ 0.            (4.3)

• Convergent at t₀ if there exists δ₁ = δ₁(t₀) > 0 such that

    ‖x(0)‖ < δ₁  ⟹  lim_{t→∞} x(t) = 0.                   (4.4)

Equivalently (and more precisely), x = 0 is convergent at t₀ if for any given ε₁ > 0, ∃T = T(ε₁, t₀) such that

    ‖x(0)‖ < δ₁  ⟹  ‖x(t)‖ < ε₁    ∀t ≥ t₀ + T.          (4.5)

• Asymptotically stable at t₀ if it is both stable and convergent.

• Unstable if it is not stable.

All of these definitions are similar to their counterparts in Chapter 3. The difference is in the inclusion of the initial time t₀. This dependence on the initial time is not desirable and motivates the introduction of the several notions of uniform stability.

Definition 4.2 The equilibrium point x = 0 of the system (4.1) is said to be

• Uniformly stable if for any given ε > 0, ∃δ = δ(ε) > 0, independent of t₀, such that

    ‖x(0)‖ < δ  ⟹  ‖x(t)‖ < ε    ∀t ≥ t₀ ≥ 0.            (4.6)

• Uniformly convergent if there is δ₁ > 0, independent of t₀, such that

    ‖x₀‖ < δ₁  ⟹  x(t) → 0  as  t → ∞.

Equivalently, x = 0 is uniformly convergent if for any given ε₁ > 0, ∃T = T(ε₁), independent of t₀, such that

    ‖x(0)‖ < δ₁  ⟹  ‖x(t)‖ < ε₁    ∀t ≥ t₀ + T.

• Uniformly asymptotically stable if it is uniformly stable and uniformly convergent.
• Globally uniformly asymptotically stable if it is uniformly asymptotically stable and every motion converges to the origin.

As in the case of autonomous systems, it is often useful to restate the notions of uniform stability and uniform asymptotic stability using class K and class KL functions. The following lemmas outline the details. The proofs of both of these lemmas are almost identical to their counterparts for autonomous systems and are omitted.

Lemma 4.1 The equilibrium point x = 0 of the system (4.1) is uniformly stable if and only if there exist a class K function α(·) and a constant c > 0, independent of t₀, such that

    ‖x(0)‖ < c  ⟹  ‖x(t)‖ ≤ α(‖x(0)‖)    ∀t ≥ t₀.        (4.7)

Lemma 4.2 The equilibrium point x = 0 of the system (4.1) is uniformly asymptotically stable if and only if there exist a class KL function β(·,·) and a constant c > 0, independent of t₀, such that

    ‖x(0)‖ < c  ⟹  ‖x(t)‖ ≤ β(‖x(0)‖, t − t₀)    ∀t ≥ t₀. (4.8)

Definition 4.3 The equilibrium point x = 0 of the system (4.1) is (locally) exponentially stable if there exist positive constants α and λ such that

    ‖x(t)‖ ≤ α‖x₀‖e^{−λ(t−t₀)}                             (4.9)

whenever ‖x(0)‖ < δ. It is said to be globally exponentially stable if (4.9) is satisfied for any x₀ ∈ ℝⁿ.

4.2 Positive Definite Functions

As seen in Chapter 3, positive definite functions play a crucial role in the Lyapunov theory. In this section we introduce time-dependent positive definite functions.

In the following definitions we consider a scalar function W(x, t) of two variables: the vector x ∈ D and the time variable t, W : D × ℝ⁺ → ℝ. Furthermore, we assume that

(i) 0 ∈ D;

(ii) W(x, t) is continuous and has continuous partial derivatives with respect to all of its arguments.

Definition 4.4 W(·,·) is said to be positive semidefinite in D if

(i) W(0, t) = 0    ∀t ∈ ℝ⁺;

(ii) W(x, t) ≥ 0    ∀x ∈ D.

Definition 4.5 W(·,·) is said to be positive definite in D if

(i) W(0, t) = 0    ∀t ∈ ℝ⁺;

(ii) ∃ a time-invariant positive definite function V₁(x) such that

    V₁(x) ≤ W(x, t)    ∀x ∈ D, ∀t.

Definition 4.6 W(·,·) is said to be decrescent in D if there exists a positive definite function V₂(x) such that |W(x, t)| ≤ V₂(x) ∀x ∈ D. Equivalently, W(x, t) is decrescent in D if it tends to zero uniformly with respect to t as ‖x‖ → 0.

The essence of Definition 4.6 is to render the decay of W toward zero a function of x only, and not of t. It is immediate that every time-invariant positive definite function is decrescent.

Definition 4.7 W(·,·) is radially unbounded if

    W(x, t) → ∞    as    ‖x‖ → ∞

uniformly on t. Equivalently, W(·,·) is radially unbounded if given M, ∃N > 0 such that W(x, t) > M for all t, provided that ‖x‖ > N.

Remarks: According to Definition 4.5, W(·,·) is positive definite in D if and only if ∃V₁(x) such that

    V₁(x) ≤ W(x, t)    ∀x ∈ D                              (4.10)

and by Lemma 3.1 this implies the existence of α₁ ∈ K such that

    α₁(‖x‖) ≤ V₁(x) ≤ W(x, t)    ∀x ∈ B_r ⊂ D.            (4.11)

Similarly, if W(·,·) is decrescent, then by Definition 4.6 there exists V₂ such that

    W(x, t) ≤ V₂(x)    ∀x ∈ D                              (4.12)

and by Lemma 3.1 this implies the existence of α₂ ∈ K such that

    W(x, t) ≤ V₂(x) ≤ α₂(‖x‖)    ∀x ∈ B_r ⊂ D.            (4.13)

It follows that W(·,·) is positive definite and decrescent if and only if there exist (time-invariant) positive definite functions V₁(·) and V₂(·) such that

    V₁(x) ≤ W(x, t) ≤ V₂(x)    ∀x ∈ D                      (4.14)

and class K functions α₁ and α₂ such that

    α₁(‖x‖) ≤ W(x, t) ≤ α₂(‖x‖)    ∀x ∈ B_r ⊂ D.          (4.15)

Finally, W(·,·) is positive definite, decrescent, and radially unbounded if and only if α₁(·) and α₂(·) can be chosen in the class K∞.

4.2.1 Examples

In the following examples we assume that x = [x₁, x₂]ᵀ and study several functions W(x, t).

Example 4.1 Let W₁(x, t) = (x₁² + x₂²)e^{−at}, a > 0. This function satisfies

(i) W₁(0, t) = 0    ∀t ∈ ℝ;

(ii) W₁(x, t) > 0    ∀x ≠ 0, ∀t ∈ ℝ.

However, W₁(x, t) → 0 as t → ∞ for every fixed x, and thus it is not possible to find a time-invariant positive definite function V₁(x) satisfying V₁(x) ≤ W₁(x, t) for all t. Thus, W₁(·,·) is positive semidefinite, but not positive definite. It is decrescent, since |W₁(x, t)| ≤ x₁² + x₂².

Example 4.2 Let

    W₂(x, t) = [(x₁² + x₂²)/(x₁² + 1)](t² + 1) = V₂(x)(t² + 1),    V₂(x) ≝ (x₁² + x₂²)/(x₁² + 1).

Thus, V₂(x) > 0 ∀x ≠ 0 and moreover W₂(x, t) ≥ V₂(x) ∀x ∈ ℝ², which implies that W₂(·,·) is positive definite. However,

    lim_{t→∞} W₂(x, t) = ∞    ∀x ∈ ℝ², x ≠ 0.

Thus it is not possible to find a positive definite function V(·) such that |W₂(x, t)| ≤ V(x) ∀x, ∀t, and so W₂(x, t) is not decrescent. It is also not radially unbounded, since it does not tend to infinity along the x₁ axis.

Example 4.3 Let

    W₃(x, t) = (x₁² + x₂²)(t² + 1) = V₃(x)(t² + 1),    V₃(x) ≝ x₁² + x₂².

Following a procedure identical to that in Example 4.2, we have that W₃ is positive definite and radially unbounded, but not decrescent.

Example 4.4 Let

    W₄(x, t) = (x₁² + x₂²)/(x₁² + 1).

This function is not time-dependent; W₄(·) > 0 ∀x ≠ 0 and is positive definite, and so it is decrescent. It is not radially unbounded, since it does not tend to infinity along the x₁ axis.

Example 4.5 Let

    W₅(x, t) = (x₁² + x₂²)(t² + 1)/(t² + 2) = V₅(x)(t² + 1)/(t² + 2),    V₅(x) ≝ x₁² + x₂².

Since (t² + 1)/(t² + 2) ∈ [1/2, 1), we have that W₅(x, t) ≥ k₁V₅(x) with k₁ = 1/2, which implies that W₅(·,·) is positive definite. It is also radially unbounded, since W₅(x, t) ≥ ½‖x‖² → ∞ as ‖x‖ → ∞, uniformly on t. It is decrescent, since |W₅(x, t)| ≤ k₂V₅(x) with k₂ = 1, ∀x ∈ ℝ².

4.3 Stability Theorems

We now look for a generalization of the stability results of Chapter 3 for nonautonomous systems.

Theorem 4.1 (Lyapunov Stability Theorem) If in a neighborhood D of the equilibrium state x = 0 there exists a differentiable function W(·,·) : D × [0, ∞) → ℝ such that

(i) W(x, t) is positive definite, and

(ii) the derivative of W(·,·) along any solution of (4.1) is negative semidefinite in D,

then the equilibrium state is stable. Moreover, if W(x, t) is also decrescent, then the origin is uniformly stable.

Theorem 4.2 (Lyapunov Uniform Asymptotic Stability) If in a neighborhood D of the equilibrium state x = 0 there exists a differentiable function W(·,·) : D × [0, ∞) → ℝ such that

(i) W(x, t) is (a) positive definite and (b) decrescent, and

(ii) the derivative of W(x, t) along the solutions of (4.1) is negative definite in D,

then the equilibrium state is uniformly asymptotically stable.

Remarks: There is an interesting difference between Theorems 4.1 and 4.2. Indeed, in Theorem 4.1 the assumption of W(·,·) being decrescent is optional: if W(·,·) is decrescent we have uniform stability, whereas if this is not the case, then we settle for stability. The situation with respect to Theorem 4.2 is different: if the decrescent assumption is removed, the remaining conditions are not sufficient to prove asymptotic stability. This point was clarified in 1949 by Massera, who found a counterexample.

Notice also that, given a positive definite and decrescent function W(x, t), there exist positive definite functions V₁(x) and V₂(x), and class K functions α₁ and α₂, such that

    V₁(x) ≤ W(x, t) ≤ V₂(x)    ∀x ∈ D                      (4.16)

    α₁(‖x‖) ≤ W(x, t) ≤ α₂(‖x‖)    ∀x ∈ B_r                (4.17)

according to inequalities (4.14) and (4.15). With this in mind, Theorem 4.2 can be restated as follows:

Theorem 4.2 (Uniform Asymptotic Stability Theorem, restated) If in a neighborhood D of the equilibrium state x = 0 there exists a differentiable function W(·,·) : D × [0, ∞) → ℝ such that

(i) V₁(x) ≤ W(x, t) ≤ V₂(x)    ∀x ∈ D, ∀t, and

(ii) ∂W/∂t + ∇W·f(x, t) ≤ −V₃(x)    ∀x ∈ D, ∀t

where Vᵢ, i = 1, 2, 3, are positive definite functions in D, then the equilibrium state is uniformly asymptotically stable.

Theorem 4.3 (Global Uniform Asymptotic Stability) If there exists a differentiable function W(·,·) : ℝⁿ × [0, ∞) → ℝ such that

(i) V₁(x) ≤ W(x, t) ≤ V₂(x)    ∀x ∈ ℝⁿ, ∀t, with V₁ and V₂ positive definite and radially unbounded, and

(ii) the derivative of W(x, t) is negative definite ∀x ∈ ℝⁿ,

then the equilibrium state at x = 0 is globally uniformly asymptotically stable.
For completeness, the following theorem extends Theorem 3.4 on exponential stability to the case of nonautonomous systems. The proof is almost identical to that of Theorem 3.4 and is omitted.

Theorem 4.4 Suppose that all the conditions of Theorem 4.2 are satisfied, and in addition assume that there exist positive constants K₁, K₂, K₃, and p such that

    K₁‖x‖ᵖ ≤ W(x, t) ≤ K₂‖x‖ᵖ

    Ẇ(x, t) ≤ −K₃‖x‖ᵖ.

Then the origin is exponentially stable. Moreover, if the conditions hold globally, then x = 0 is globally exponentially stable.

Example 4.6 Consider the following system:

    ẋ₁ = −x₁ − e^{−2t}x₂
    ẋ₂ = x₁ − x₂.

To study the stability of the origin for this system, we consider the following Lyapunov function candidate:

    W(x, t) = x₁² + (1 + e^{−2t})x₂².

Clearly

    V₁(x) = x₁² + x₂² ≤ W(x, t) ≤ x₁² + 2x₂² = V₂(x);

thus, W(x, t) is positive definite, since V₁(x) ≤ W(x, t), with V₁ positive definite in ℝ², and W(x, t) is decrescent, since W(x, t) ≤ V₂(x), with V₂ also positive definite in ℝ². Then

    Ẇ(x, t) = (∂W/∂x)f(x, t) + ∂W/∂t
            = 2x₁ẋ₁ + 2(1 + e^{−2t})x₂ẋ₂ − 2e^{−2t}x₂²
            = −2x₁² + 2x₁x₂ − 2x₂² − 4e^{−2t}x₂²
            ≤ −2[x₁² − x₁x₂ + x₂²].

It follows that Ẇ(x, t) is negative definite, and the origin is globally uniformly asymptotically stable. □
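The bound derived in Example 4.6 can be spot-checked numerically (a sketch, not part of the text): at random states and times, the computed Ẇ never exceeds −2(x₁² − x₁x₂ + x₂²).

```python
import numpy as np

def Wdot(x1, x2, t):
    """Derivative of W = x1^2 + (1 + exp(-2t)) x2^2 along the Example 4.6 system."""
    e = np.exp(-2.0 * t)
    f1 = -x1 - e * x2          # x1' = -x1 - exp(-2t) x2
    f2 = x1 - x2               # x2' = x1 - x2
    # (dW/dx) f + dW/dt, with dW/dt = -2 exp(-2t) x2^2
    return 2*x1*f1 + 2*(1 + e)*x2*f2 - 2*e*x2**2

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    x1, x2 = rng.normal(size=2)
    t = rng.uniform(0.0, 10.0)
    bound = -2.0 * (x1**2 - x1*x2 + x2**2)
    ok &= (Wdot(x1, x2, t) <= bound + 1e-12)
print(ok)
```

Since x₁² − x₁x₂ + x₂² ≥ ½(x₁² + x₂²), the bound also confirms the negative definiteness used to conclude uniform asymptotic stability.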
4.4 Proof of the Stability Theorems

We now elaborate the proofs of Theorems 4.1-4.3.

Proof of Theorem 4.1: Choose R > 0 such that the closed ball

    B_R = {x ∈ ℝⁿ : ‖x‖ ≤ R}

is contained in D. By the assumptions of the theorem, W(·,·) is positive definite, and thus there exist a time-invariant positive definite function V₁ and a class K function α₁ satisfying (4.11):

    α₁(‖x‖) ≤ V₁(x) ≤ W(x, t)    ∀x ∈ B_R.               (4.18)

Also, by assumption, Ẇ(x, t) ≤ 0, which means that W(x, t) cannot increase along any motion. Thus

    W(x(t), t) ≤ W(x₀, t₀)    ∀t ≥ t₀.

Moreover, W is continuous with respect to x and satisfies W(0, t₀) = 0. Thus, given t₀, we can find δ > 0 such that ‖x₀‖ < δ ⟹ W(x₀, t₀) < α₁(R), which means that if ‖x₀‖ < δ, then

    α₁(‖x(t)‖) ≤ W(x(t), t) ≤ W(x₀, t₀) < α₁(R)

which implies that

    ‖x(t)‖ < R    ∀t ≥ t₀.

This proves stability. If, in addition, W(x, t) is decrescent, then there exists a positive definite function V₂(x) such that

    |W(x, t)| ≤ V₂(x)

and then ∃α₂ in the class K such that

    α₁(‖x‖) ≤ W(x, t) ≤ V₂(x) ≤ α₂(‖x‖)    ∀x ∈ B_R.

By the properties of functions in the class K, for any R > 0, ∃δ = δ(R) such that

    α₂(δ) < α₁(R)    ⟹    α₁(R) > α₂(δ) ≥ W(x₀, t₀)

whenever ‖x₀‖ < δ. Thus this δ is a function of R alone, and not of t₀ as before, and we conclude that the stability of the equilibrium is uniform. This completes the proof. □

Proof of Theorem 4.2: Choose R > 0 such that the closed ball

    B_R = {x ∈ ℝⁿ : ‖x‖ ≤ R}

is contained in D. By the assumptions of the theorem there exist class K functions α₁, α₂, and α₃ satisfying

    α₁(‖x‖) ≤ W(x, t) ≤ α₂(‖x‖)    ∀x ∈ B_R, ∀t           (4.19)

    Ẇ(x, t) ≤ −α₃(‖x‖)    ∀x ∈ B_R, ∀t.                   (4.20)

Given that the αᵢ's are strictly increasing and satisfy αᵢ(0) = 0, given ε > 0 we can find δ₁, δ₂ > 0 such that

    α₂(δ₁) < α₁(R)                                         (4.21)

    α₂(δ₂) ≤ min[α₁(ε), α₂(δ₁)]                            (4.22)

where we notice that δ₁ and δ₂ are functions of ε but are independent of t₀. Notice also that inequality (4.22) implies that δ₁ ≥ δ₂. Now define

    T = α₂(δ₁)/α₃(δ₂).                                     (4.23)

Conjecture: We claim that

    ‖x₀‖ < δ₁    ⟹    ‖x(t*)‖ < δ₂

for some t = t* in the interval t₀ ≤ t* ≤ t₀ + T.

To see this, we reason by contradiction and assume that ‖x₀‖ < δ₁ but ‖x(t)‖ ≥ δ₂ for all t in the interval t₀ ≤ t ≤ t₀ + T. We have

    W(x(t₀ + T), t₀ + T) = W(x(t₀), t₀) + ∫_{t₀}^{t₀+T} Ẇ(x(t), t) dt.

Then, by (4.19), (4.20), and the assumption ‖x(t)‖ ≥ δ₂, and since ‖x₀‖ < δ₁,

    0 < α₁(δ₂) ≤ W(x(t₀ + T), t₀ + T) ≤ W(x(t₀), t₀) − Tα₃(δ₂)
                                       ≤ α₂(δ₁) − Tα₃(δ₂) = 0    (4.24)

which is a contradiction. It then follows that our conjecture is indeed correct.

Now assume that t ≥ t*. Since W is a decreasing function of t along any motion (because Ẇ ≤ −α₃(‖x‖) ≤ 0), we have that

    α₁(‖x(t)‖) ≤ W(x(t), t) ≤ W(x(t*), t*) ≤ α₂(‖x(t*)‖) ≤ α₂(δ₂) ≤ α₁(ε)

because α₁ is class K. This implies that ‖x(t)‖ < ε ∀t ≥ t₀ + T, provided that ‖x₀‖ < δ₁. This proves that all motions converge uniformly to the origin, and the theorem follows. □

Proof of Theorem 4.3: Let α₁, α₂, and α₃ be as in the proof of Theorem 4.2. Given that in this case W(·,·) is radially unbounded, we must have that

    α₁(‖x‖) → ∞    as    ‖x‖ → ∞.

Thus, for any a > 0 there exists b > 0 such that α₁(b) > α₂(a). If ‖x₀‖ < a, then

    α₁(b) > α₂(a) ≥ W(x₀, t₀) ≥ W(x(t), t) ≥ α₁(‖x(t)‖)

and thus ‖x(t)‖ < b; that is, all motions are uniformly bounded. Now consider ε₁, δ, and T defined as follows:

    α₂(δ) < α₁(ε₁),    T = α₁(b)/α₃(δ)

and, using an argument similar to the one used in the proof of Theorem 4.2, we can show that for t ≥ t₀ + T

    ‖x(t)‖ < ε₁

provided that ‖x₀‖ < a. It follows that any motion with initial state ‖x₀‖ < a converges uniformly to the origin. Since a is arbitrary, the origin is asymptotically stable in the large. This completes the proof. □
4.5 Analysis of Linear Time-Varying Systems

In this section we review the stability of linear time-varying systems using Lyapunov tools. Consider the system

    ẋ = A(t)x                                              (4.25)

where A(·) is an n × n matrix whose entries are real-valued continuous functions of t ∈ ℝ. It is a well-established result that the solution of the state equation is completely characterized by the state transition matrix Φ(·,·); indeed, the solution of (4.25) with initial condition x₀ is given by

    x(t) = Φ(t, t₀)x₀.                                     (4.26)

The following theorem, given without proof, details the necessary and sufficient conditions for exponential stability of the origin of the linear time-varying system (4.25).

Theorem 4.5 The equilibrium x = 0 of the system (4.25) is exponentially stable if and only if there exist positive numbers k₁ and k₂ such that

    ‖Φ(t, t₀)‖ ≤ k₁e^{−k₂(t−t₀)}    ∀t ≥ t₀, for any t₀ ≥ 0    (4.27)

where

    ‖Φ(t, t₀)‖ = max_{‖x‖=1} ‖Φ(t, t₀)x‖.

Proof: See Chen [15], page 404. □

We now endeavor to construct a Lyapunov function that guarantees exponential stability of the origin for the system (4.25). Consider the function

    W(x, t) = xᵀP(t)x                                      (4.28)

where P(t) satisfies the assumptions that it is (i) continuously differentiable, (ii) symmetric, (iii) bounded, and (iv) positive definite. Under these assumptions, there exist constants k₁, k₂ ∈ ℝ⁺ satisfying

    k₁xᵀx ≤ xᵀP(t)x ≤ k₂xᵀx    ∀t ≥ 0, ∀x ∈ ℝⁿ

or

    k₁‖x‖² ≤ W(x, t) ≤ k₂‖x‖²    ∀t ≥ 0, ∀x ∈ ℝⁿ.

This implies that W(x, t) is positive definite, decrescent, and radially unbounded. Differentiating along the trajectories of (4.25), we obtain

    Ẇ(x, t) = ẋᵀP(t)x + xᵀP(t)ẋ + xᵀṖ(t)x
             = xᵀAᵀ(t)P(t)x + xᵀP(t)A(t)x + xᵀṖ(t)x
             = xᵀ[P(t)A(t) + Aᵀ(t)P(t) + Ṗ(t)]x.

Thus we can write

    Ẇ(x, t) = −xᵀQ(t)x,

where we notice that the matrix

    Q(t) = −[P(t)A(t) + Aᵀ(t)P(t) + Ṗ(t)]                  (4.29)

is symmetric by construction. If Q(t) is positive definite, then the origin is uniformly asymptotically stable. This is the case if there exist k₃, k₄ ∈ ℝ⁺ such that

    k₃xᵀx ≤ xᵀQ(t)x ≤ k₄xᵀx    ∀t ≥ 0, ∀x ∈ ℝⁿ

or

    k₃‖x‖² ≤ xᵀQ(t)x ≤ k₄‖x‖²    ∀t ≥ 0, ∀x ∈ ℝⁿ

and in this case the origin is, moreover, exponentially stable by Theorem 4.4. The following theorem shows that the converse also holds.

Theorem 4.6 Consider the system (4.25). The equilibrium state x = 0 is exponentially stable if and only if for any given symmetric, continuous, bounded, positive definite matrix Q(t) there exists a continuously differentiable, symmetric, bounded, positive definite matrix P(t) such that

    Q(t) = −[P(t)A(t) + Aᵀ(t)P(t) + Ṗ(t)].

Proof: See the Appendix. □

4.5.1 The Linearization Principle

Linear time-varying systems often arise as a consequence of linearizing the equations of a nonlinear system. Indeed, given a nonlinear system of the form

    ẋ = f(x, t),    f : D × [0, ∞) → ℝⁿ                    (4.30)

with f(x, t) having continuous partial derivatives of all orders with respect to x and with f(0, t) = 0 (that is, x = 0 is an equilibrium point), it is possible to expand f(x, t) using Taylor's series about the equilibrium point x = 0 to obtain

    ẋ = f(x, t) = f(0, t) + (∂f/∂x)|_{x=0} x + HOT

where HOT = higher-order terms. Given that x = 0 is an equilibrium point, f(0, t) = 0, and thus we can write

    ẋ = A(t)x + HOT,    A(t) = (∂f/∂x)|_{x=0}.             (4.31)

We will denote

    g(x, t)  ≝  f(x, t) − A(t)x.

By construction,

    lim_{‖x‖→0} ‖g(x, t)‖/‖x‖ = 0                          (4.32)

uniformly with respect to t. Thus

    ẋ = f(x, t) = A(t)x + g(x, t)

and the behavior of the nonlinear system (4.30) "near" x = 0 seems to be similar to that of the so-called linear approximation (4.31), since the higher-order terms tend to be negligible near the equilibrium point. More explicitly, the following theorem shows to what extent the local stability of (4.30) can be inferred from the linear approximation (4.31).

Theorem 4.7 The equilibrium point x = 0 of the nonlinear system

    ẋ = A(t)x + g(x, t)

is uniformly asymptotically stable if

(i) the linear system ẋ = A(t)x is exponentially stable, and

(ii) g(x, t) satisfies (4.32) uniformly with respect to t.

Proof: Let Φ(t, τ) be the state transition matrix of ẋ = A(t)x, and define P as in the proof of Theorem 4.6 (with Q = I):

    P(t) = ∫_t^∞ Φᵀ(τ, t)Φ(τ, t) dτ.

P so defined is positive definite and bounded.
(4. t) d7
c
according to Theorem 4. we study under what conditions stability of the nonlinear system (4. to) be the transition matrix of x = A(t)x.31).
(ii) The function g(x. We now investigate this idea in more detail. t) satisfies the following condition. the behavior of the nonlinear system (4. such that
Ilxll <6
This means that
Ilg(x.A(t)x. t).5. t) = xT Px
. t) 11
ilxll
<E.
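The exponential bound of Theorem 4.5 is easy to check numerically on simple examples. The sketch below is illustrative and not from the text: for the scalar time-varying system x' = a(t)x with a(t) = -1 + (1/2) sin t, the transition matrix is Φ(t, 0) = exp(∫ a), which can be written in closed form, and the bound of Theorem 4.5 holds with k1 = e, k2 = 1.

```python
import math

def phi(t):
    # Transition "matrix" (scalar case) of xdot = a(t) x with a(t) = -1 + 0.5*sin(t):
    # Phi(t, 0) = exp( integral_0^t a(tau) dtau ) = exp(-t + 0.5*(1 - cos(t)))
    return math.exp(-t + 0.5 * (1.0 - math.cos(t)))

# Candidate constants of Theorem 4.5: since 0.5*(1 - cos t) <= 1,
# |Phi(t,0)| <= e * exp(-t), i.e. k1 = e, k2 = 1.
k1, k2 = math.e, 1.0
for i in range(1, 201):
    t = 0.1 * i
    assert phi(t) <= k1 * math.exp(-k2 * t)
print("exponential bound of Theorem 4.5 verified on [0, 20]")
```

The same computation fails for systems whose "frozen-time" eigenvalues are stable but whose transition matrix grows, which is why Theorem 4.5 is stated in terms of Φ(t, t0) rather than the pointwise eigenvalues of A(t).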
4.6 Perturbation Analysis

Inspired by the linearization principle of the previous section, we now take a look at an important issue related to mathematical models of physical systems. In practice, it is not realistic to assume that a mathematical model is a true representation of a physical device. At best, a model can "approximate" a true system, and the difference between mathematical model and system is loosely referred to as uncertainty. How to deal with model uncertainty and related design issues is a difficult problem that has spurred a lot of research in recent years. In our first look at this problem we consider a dynamical system of the form

x' = f(x, t) + g(x, t)    (4.33)

where g(x, t) is a perturbation term used to represent the uncertainty between the true system and its nominal model x' = f(x, t). In our first pass we will seek an answer to the following question: Suppose that x' = f(x, t) has an asymptotically stable equilibrium point; what can be said about the perturbed system x' = f(x, t) + g(x, t)?

Theorem 4.8 [41] Let x = 0 be an equilibrium point of the system (4.33), and assume that there exists a differentiable function W(·,·) : D × [0, ∞) → R such that

(i) k1 ‖x‖^2 ≤ W(x, t) ≤ k2 ‖x‖^2

(ii) ∂W/∂t + ∇W · f(x, t) ≤ -k3 ‖x‖^2

(iii) ‖∇W‖ ≤ k4 ‖x‖.

Then, if the perturbation g(x, t) satisfies the bound

(iv) ‖g(x, t)‖ ≤ k5 ‖x‖, with k5 < k3/k4

the origin is an exponentially stable equilibrium point of (4.33). Moreover, if all the assumptions hold globally and W(·,·) is radially unbounded, then the exponential stability is global.

Proof: Notice first that assumption (i) implies that W(x, t) is positive definite and decrescent, with α1(‖x‖) = k1 ‖x‖^2 and α2(‖x‖) = k2 ‖x‖^2. Thus assumptions (i) and (ii) together imply that the origin is a uniformly asymptotically stable equilibrium point for the nominal system x' = f(x, t). We now find W' along the trajectories of the perturbed system (4.33), without ignoring the perturbation term g(x, t). We have

W' = ∂W/∂t + ∇W · f(x, t) + ∇W · g(x, t)
   ≤ -k3 ‖x‖^2 + ‖∇W‖ ‖g(x, t)‖
   ≤ -k3 ‖x‖^2 + k4 k5 ‖x‖^2

that is,

W' ≤ -(k3 - k4 k5) ‖x‖^2 < 0

since (k3 - k4 k5) > 0 by assumption. The result then follows by Theorem 4.3 (along with Theorem 4.4, with p = 2). If all the assumptions hold globally and W is radially unbounded, the result is global. This completes the proof.

The importance of Theorem 4.8 is that it shows that exponential stability is robust with respect to a class of perturbations. Notice also that g(x, t) need not be known; only the bound (iv) is necessary.
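The robustness asserted by Theorem 4.8 is easy to observe numerically. The sketch below is illustrative and not taken from the text: the nominal scalar system x' = -x with W(x) = x^2 satisfies (i)-(iii) with k1 = k2 = 1, k3 = 2 and k4 = 2, so any perturbation bounded by k5 ‖x‖ with k5 < k3/k4 = 1 preserves exponential stability; here the (assumed) perturbation is g(x, t) = 0.5 x sin t.

```python
import math

# Nominal system f(x,t) = -x with W(x) = x^2:
#   dW/dt along f is -2x^2  (so k3 = 2),  |grad W| = 2|x|  (so k4 = 2).
# Perturbation g(x,t) = 0.5*sin(t)*x satisfies |g| <= 0.5|x|, i.e. k5 = 0.5 < k3/k4.
def simulate(x0, t_end=10.0, dt=1e-3):
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-x + 0.5 * math.sin(t) * x)  # Euler step of the perturbed system
        t += dt
    return x

x_final = simulate(1.0)
# The proof gives W' <= -(k3 - k4*k5) x^2 = -x^2 = -W, hence W(t) <= W(0) e^{-t}
# and |x(t)| <= e^{-t/2} |x(0)|.
assert abs(x_final) <= math.exp(-10.0 / 2) + 1e-3
print("perturbed trajectory decays exponentially:", x_final)
```

Note that the simulation never uses the exact form of g, only its bound enters the argument, which is precisely the point of the theorem.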
Special Case: Consider the system of the form

x' = A x + g(x, t)    (4.34)

where A ∈ R^{n×n}. Denote by λi, i = 1, 2, ..., n, the eigenvalues of A and assume that Re(λi) < 0 for all i. Theorem 3.10 guarantees that for any Q = Q^T > 0 there exists a unique matrix P = P^T > 0 that is the solution of the Lyapunov equation

P A + A^T P = -Q.

The matrix P has the property that, defining V(x) = x^T P x and denoting by λmin(P) and λmax(P) respectively the minimum and maximum eigenvalues of the matrix P, we have

λmin(P) ‖x‖^2 ≤ V(x) ≤ λmax(P) ‖x‖^2,  for all x ∈ R^n.

Also

∂V/∂x = x^T P + x^T P^T = 2 x^T P,  so ‖2 x^T P‖ ≤ 2 λmax(P) ‖x‖

and

(∂V/∂x) A x = -x^T Q x ≤ -λmin(Q) ‖x‖^2.

Thus V(x) is positive definite, and it is a Lyapunov function for the linear system x' = A x. Assume now that the perturbation term g(x, t) satisfies the bound

‖g(x, t)‖ ≤ γ ‖x‖,  for all t ≥ 0, for all x ∈ R^n.

It follows that, along the trajectories of (4.34),

V'(x) = (∂V/∂x) A x + (∂V/∂x) g(x, t)
      ≤ -λmin(Q) ‖x‖^2 + [2 λmax(P) ‖x‖] [γ ‖x‖]
      ≤ [-λmin(Q) + 2 γ λmax(P)] ‖x‖^2.

It follows that the origin is asymptotically stable if

-λmin(Q) + 2 γ λmax(P) < 0

that is

λmin(Q) > 2 γ λmax(P)

or, equivalently,

γ < λmin(Q) / (2 λmax(P)).    (4.35)

From equation (4.35) we see that the origin is in fact exponentially stable provided that the perturbation bound γ satisfies (4.35).

4.7 Converse Theorems

All the stability theorems seen so far provide sufficient conditions for stability; all of these theorems read more or less as follows: if there exists a function W(x, t) that satisfies certain conditions, then the equilibrium point xe satisfies one of the stability definitions. None of these theorems, however, provides a systematic way of finding the Lyapunov function. An important question clearly arises here, namely: suppose that an equilibrium point satisfies one of the forms of stability; does this imply the existence of a Lyapunov function that satisfies the conditions of the corresponding stability theorem? If so, then the search for a suitable Lyapunov function is not in vain. In all cases, the question above can be answered affirmatively, and the theorems related to this question are known as converse theorems. These theorems are well documented in the literature, and we now state them for completeness.

Theorem 4.9 Consider the dynamical system x' = f(x, t). Assume that f satisfies a Lipschitz continuity condition in D ⊂ R^n, and that 0 ∈ D is an equilibrium state. Then we have:

If the equilibrium is uniformly stable, then there exists a function W(·,·) : D × [0, ∞) → R that satisfies the conditions of Theorem 4.1.

If the equilibrium is uniformly asymptotically stable, then there exists a function W(·,·) : D × [0, ∞) → R that satisfies the conditions of Theorem 4.2.

If the equilibrium is globally uniformly asymptotically stable, then there exists a function W(·,·) : D × [0, ∞) → R that satisfies the conditions of Theorem 4.3.

Proof: The proof is omitted. See References [88], [41], or [95] for details.

The main shortcoming of these theorems is that their proof invariably relies on the construction of a Lyapunov function that is based on knowledge of the state trajectory, and thus on the solution of the nonlinear differential equation. Indeed, few nonlinear equations can be solved analytically, and the whole point of the Lyapunov theory is to provide an answer to the stability analysis without solving the differential equation. This fact makes the converse theorems not very useful in applications since, as discussed earlier, unless one can "guess" a suitable Lyapunov function, one can never conclude anything about the stability properties of the equilibrium point. Nevertheless, the theorems have at least conceptual value.

4.8
Discrete-Time Systems

So far our attention has been limited to the stability of continuous-time systems, that is, systems defined by a vector differential equation of the form

x' = f(x, t)    (4.36)

where t ∈ R (i.e., t is a continuous variable). In this section we consider discrete-time systems of the form

x(k + 1) = f(x(k), k)    (4.37)

where k ∈ Z+, x(k) ∈ R^n, and f : R^n × Z+ → R^n, and study the stability of these systems using tools analogous to those encountered for continuous-time systems. Before doing so, we digress momentarily to briefly discuss discretization of continuous-time nonlinear plants.

Figure 4.1: (a) Continuous-time system Σ; (b) discrete-time system Σd.
4.9 Discretization

Oftentimes discrete-time systems originate by "sampling" of a continuous-time system. The systems Σ and Σd are related in the following way: given a continuous-time system Σ : u → x (a dynamical system that maps the input u into the state x, as shown in Figure 4.1(a)), we seek a new system Σd : u(k) → x(k) (a dynamical system that maps the discrete-time signal u(k) into the discrete-time state x(k), as shown in Figure 4.1(b)). If u(k) is constructed by taking "samples" every T seconds of the continuous-time signal u, then the output x(k) predicted by the model Σd corresponds to the samples of the continuous-time state x(t) at the same sampling instants, that is, x(k) = x(kT). For easy visualization, we have used continuous and dotted lines in Figure 4.2 to represent continuous- and discrete-time signals, respectively.

To develop such a model, we use the scheme shown in Figure 4.2, which consists of the cascade combination of the blocks H, Σ, and S, where each block represents the following:

H represents a hold device that converts the discrete-time signal, or sequence, u(k) into the continuous-time signal u(t). Clearly this block is implemented using a digital-to-analog converter. We assume that an ideal conversion process takes place, in which H "holds" the value of the input sequence between samples (Figure 4.3); that is,

u(t) = u(k)  for kT ≤ t < (k + 1)T.

Σ represents the plant, seen as a mapping from the input u to the state x; that is, given u, the mapping Σ : u → x determines the trajectory x(t) by solving the differential equation x' = f(x, u).

S represents a sampler, a device that reads the continuous variable x every T seconds and produces the discrete-time output x(k); this block is an idealization of the operation of an analog-to-digital converter.

The system Σd, which consists of the cascade combination of the blocks H, Σ, and S, is of the form (4.37). Finding Σd is fairly straightforward when the system (4.36) is linear time-invariant (LTI), but not in the general nonlinear case.

Case (1): LTI Plants. If the plant is LTI, then we have

x' = f(x, u) = A x + B u.    (4.38)

Finding the discrete-time model of the plant reduces to solving the differential equation with initial condition x(kT) at time t0 = kT, where the input is the constant signal u(kT).

Figure 4.2: Discrete-time system Σd.

Figure 4.3: Action of the hold device H.
The differential equation (4.38) has a well-known solution of the form

x(t) = e^{A(t - t0)} x(t0) + ∫ from t0 to t of e^{A(t - τ)} B u(τ) dτ.

In our case we can make direct use of this solution with

t0 = kT,  x(t0) = x(k),  t = (k + 1)T,  x(t) = x[(k + 1)T]

u(τ) = u(k) = constant for kT ≤ τ < (k + 1)T.

Therefore

x(k + 1) = x[(k + 1)T] = e^{AT} x(kT) + ∫ from kT to (k+1)T of e^{A[(k+1)T - τ]} dτ B u(kT)

         = e^{AT} x(k) + ( ∫ from 0 to T of e^{Aτ} dτ ) B u(k).

Case (2): Nonlinear Plants. In the more general case of a nonlinear plant Σ given by (4.36), the exact model is usually impossible to find. The reason, of course, is that finding the exact solution requires solving the nonlinear differential equation (4.36), which is very difficult if not impossible. Given this fact, one is usually forced to use an approximate model. There are several methods to construct approximate models, with different degrees of accuracy and complexity. The simplest and most popular is the so-called Euler approximation, which consists of acknowledging the fact that, if T is small, then

x' = dx/dt = limit as ΔT → 0 of [x(t + ΔT) - x(t)] / ΔT ≈ [x(t + T) - x(t)] / T.

Thus

x' = f(x, u)

can be approximated by

x(k + 1) ≈ x(k) + T f[x(k), u(k)].    (4.39)
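The two discretizations above can be compared numerically. The following sketch uses illustrative values of A, B, and T (not from the text): it builds the exact zero-order-hold model Ad = e^{AT}, Bd = (∫ e^{Aτ} dτ) B, using the identity ∫₀ᵀ e^{Aτ} dτ = A⁻¹(e^{AT} − I) valid for invertible A, and contrasts it with the Euler model (4.39), whose matrices agree with the exact ones up to O(T²) terms.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative LTI plant xdot = A x + B u and sampling period T (assumed values).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.1

# Exact discretization: Ad = e^{AT}, Bd = (int_0^T e^{A tau} dtau) B.
# Since A is invertible here, the integral equals A^{-1}(e^{AT} - I).
Ad = expm(A * T)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B

# Euler approximation (4.39): x(k+1) = x(k) + T*(A x(k) + B u(k)).
Ad_euler = np.eye(2) + T * A
Bd_euler = T * B

# The two models differ only in O(T^2) terms.
assert np.allclose(Ad, Ad_euler, atol=T**2 * np.linalg.norm(A @ A))
print("Ad =\n", Ad, "\nBd =\n", Bd)
```

For stiff plants or large T the Euler model can even destabilize a stable plant, which is why the exact zero-order-hold model is preferred whenever the plant is LTI.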
4.10 Stability of Discrete-Time Systems

In this section we consider discrete-time systems of the form (4.37); that is, we assume that

x(k + 1) = f(x(k), k)

where k ∈ Z+, x(k) ∈ R^n, and f : R^n × Z+ → R^n. As in the continuous-time case, we consider the stability of an equilibrium point xe, which is defined exactly as in the continuous-time case. It is refreshing to notice that, unlike the continuous case, this equation always has exactly one solution corresponding to an initial condition x0.

4.10.1 Definitions

In Section 4.2 we introduced several stability definitions for continuous-time systems. For completeness, we now restate these definitions for discrete-time systems.

Definition 4.8 The equilibrium point x = 0 of the system (4.37) is said to be:

Stable at k0 if given ε > 0, there exists δ = δ(ε, k0) > 0 such that

‖x0‖ < δ  implies  ‖x(k)‖ < ε,  for all k ≥ k0 ≥ 0.

Uniformly stable if given ε > 0, there exists δ = δ(ε) > 0, independent of k0, such that

‖x0‖ < δ  implies  ‖x(k)‖ < ε,  for all k ≥ k0 ≥ 0.

Convergent at k0 if there exists δ1 = δ1(k0) > 0 such that

‖x0‖ < δ1  implies  x(k) → 0 as k → ∞.

Uniformly convergent if for any given ε1 > 0 there exist δ1 > 0 and M = M(ε1) such that

‖x0‖ < δ1  implies  ‖x(k)‖ < ε1,  for all k ≥ k0 + M.

Asymptotically stable at k0 if it is both stable and convergent.

Uniformly asymptotically stable if it is both uniformly stable and uniformly convergent.

Unstable if it is not stable.
4.10.2 Discrete-Time Positive Definite Functions

Time-independent positive definite functions are uninfluenced by whether the system is continuous- or discrete-time. Time-dependent positive definite functions can be defined as follows.

Definition 4.9 A function W : R^n × Z+ → R is said to be:

Positive semidefinite in D ⊂ R^n if (i) W(0, k) = 0 for all k ≥ 0, and (ii) W(x, k) ≥ 0 for all x ∈ D, for all k.

Positive definite in D ⊂ R^n if (i) W(0, k) = 0 for all k ≥ 0, and (ii) there exists a time-invariant positive definite function V1(x) such that

V1(x) ≤ W(x, k),  for all x ∈ D, for all k.

It then follows by Lemma 3.1 that W(x, k) is positive definite in Br ⊂ D if and only if there exists a class K function α1(·) such that

α1(‖x‖) ≤ W(x, k),  for all x ∈ Br ⊂ D, for all k.

Decrescent in D ⊂ R^n if there exists a time-invariant positive definite function V2(x) such that

W(x, k) ≤ V2(x),  for all x ∈ D.

It then follows by Lemma 3.1 that W(x, k) is decrescent in Br ⊂ D if and only if there exists a class K function α2(·) such that

W(x, k) ≤ α2(‖x‖),  for all x ∈ Br ⊂ D.

Radially unbounded if W(x, k) → ∞ as ‖x‖ → ∞, uniformly on k. This means that given M > 0, there exists N > 0 such that W(x, k) > M provided that ‖x‖ > N.
4.10.3 Stability Theorems

With these definitions, we can now state and prove several stability theorems for discrete-time systems. Roughly speaking, all the theorems studied in Chapters 3 and 4 can be restated for discrete-time systems, with the rate of change ΔW(x, k) replacing W'(x, t). The proofs are nearly identical to their continuous-time counterparts and are omitted.

Definition 4.10 The rate of change ΔW(x, k) of the function W(x, k) along the trajectories of the system (4.37) is defined by

ΔW(x, k) = W(x(k + 1), k + 1) - W(x(k), k)

where x(k + 1) is given by (4.37).

Theorem 4.10 (Lyapunov Stability Theorem for Discrete-Time Systems) If in a neighborhood D of the equilibrium state x = 0 of the system (4.37) there exists a function W(·,·) : D × Z+ → R such that

(i) W(x, k) is positive definite, and

(ii) the rate of change ΔW(x, k) along any solution of (4.37) is negative semidefinite in D,

then the equilibrium state is stable. Moreover, if W(x, k) is also decrescent, then the origin is uniformly stable.

Theorem 4.11 (Lyapunov Uniform Asymptotic Stability for Discrete-Time Systems) If in a neighborhood D of the equilibrium state x = 0 there exists a function W(·,·) : D × Z+ → R such that

(i) W(x, k) is (a) positive definite, and (b) decrescent, and

(ii) the rate of change ΔW(x, k) is negative definite in D,

then the equilibrium state is uniformly asymptotically stable.
Example 4.7 Consider the following discrete-time system:

x1(k + 1) = x1(k) + x2(k)    (4.42)

x2(k + 1) = a x1^3(k) + (1/2) x2(k).    (4.43)

To study the stability of the origin, we consider the (time-independent) Lyapunov function candidate

V(x) = (1/2) x1^2(k) + 2 x1(k) x2(k) + 4 x2^2(k)

which can be easily seen to be positive definite. We need to find ΔV(x) = V(x(k + 1)) - V(x(k)). In this case, we have

V(x(k + 1)) = (1/2) x1^2(k + 1) + 2 x1(k + 1) x2(k + 1) + 4 x2^2(k + 1)
            = (1/2) [x1(k) + x2(k)]^2 + 2 [x1(k) + x2(k)] [a x1^3(k) + (1/2) x2(k)]
              + 4 [a x1^3(k) + (1/2) x2(k)]^2

V(x(k)) = (1/2) x1^2 + 2 x1 x2 + 4 x2^2.

From here, after some trivial manipulations, we conclude that

ΔV(x) = V(x(k + 1)) - V(x(k)) = -(3/2) x2^2 + 2a x1^4 + 6a x1^3 x2 + 4a^2 x1^6.

Therefore we have the following cases of interest:

a = 0: In this case ΔV(x) = V(x(k + 1)) - V(x(k)) = -(3/2) x2^2 ≤ 0, and thus the origin is stable.

a < 0: In this case ΔV(x) is negative definite in a neighborhood of the origin, and the origin is locally asymptotically stable (uniformly, since the system is autonomous).

4.11 Exercises

(4.1) Prove Lemma 4.1.

(4.2) Prove Lemma 4.2.

(4.3) Prove Theorem 4.1.
(4.4) Characterize each of the following functions W : R^2 × R → R as: (a) positive definite or not, (b) decrescent or not, and (c) radially unbounded or not. Explain your answer in each case.

(i) W1(x, t) = (x1^2 + x2^2) e^{-t}
(ii) W2(x, t) = (x1^2 + x2^2) cos^2 ωt
(iii) W3(x, t) = (x1^2 + x2^2)(1 + cos^2 ωt)
(iv) W4(x, t) = (x1^2 + x2^2)(1 + e^{-t})
(v) W5(x, t) = (x1^2 + x2^2)(1 + sin^2 t)
(vi) W6(x, t) = (x1^2 + x2^2) e^{t}
(vii) W7(x, t) = x1^2
(viii) W8(x, t) = x1^2 e^{-t}
(ix) W9(x, t) = (x1^2 + x2^2)
(x) W10(x, t) = (x1 + x2)^2
(xi) W11(x, t) = (x1^2 + x2^2) / (x1^2 + x2^2 + 1)

(4.5) It is known that a given dynamical system has an equilibrium point at the origin. A function W(x, t) and its derivative along the trajectories of the system have been proposed and computed. You are asked to classify the origin, in each case, as (a) stable, (b) locally uniformly asymptotically stable, and/or (c) globally uniformly asymptotically stable. Explain your answer in each case.

(4.6) Given the following system, study the stability of the equilibrium point at the origin:

x1' = -x1 - x1 x2 cos^2 t
x2' = -x2 - x1 x2 sin^2 t

(4.7) Prove Theorem 4.10.

(4.8) Prove Theorem 4.11.

(4.9) Given the following system, study the stability of the equilibrium point at the origin:

x1' = x2 + x1^2 + x2^2
x2' = -2 x1 - 3 x2

Hint: Notice that the given dynamical equations can be expressed in the form

x' = A x + g(x).

Notes and References

Good sources for the material of this chapter include References [27], [41], [68], [88], and [95]. Section 4.5 is based on Vidyasagar [88] and Willems [95]. Classical references on converse theorems are Hahn [27] and Krasovskii [44]. Perturbation analysis has been a subject of much research in nonlinear control; Section 4.6 is based on Khalil [41].
.
Chapter 5

Feedback Systems

So far, our attention has been restricted to open-loop systems. In a typical control problem, however, our interest is usually in the analysis and design of feedback control systems. Feedback systems can be analyzed using the same tools elaborated so far after incorporating the effect of the input u on the system dynamics. In this chapter we look at several examples of feedback systems and introduce a simple design technique for stabilization known as backstepping.

To start, consider the system

x' = f(x, u)    (5.1)

and assume that the origin x = 0 is an equilibrium point of the unforced system x' = f(x, 0). Now suppose that u is obtained using a state feedback law of the form

u = φ(x).    (5.2)

To study the stability of this system, we substitute (5.2) into (5.1) to obtain

x' = f(x, φ(x)).    (5.3)

According to the stability results in Chapter 3, if the origin of the closed-loop system (5.3) is asymptotically stable, then we can find a positive definite function V whose time derivative along the trajectories of (5.3) is negative definite in a neighborhood of the origin. It seems clear from this discussion that the stability of feedback systems can be studied using the same tools discussed in Chapters 3 and 4.
5.1 Basic Feedback Stabilization

In this section we look at several examples of stabilization via state feedback. These examples will provide valuable insight into the backstepping design of the next section.

Example 5.1 Consider the first-order system given by

x' = a x^2 + u.

We look for a state feedback law of the form

u = φ(x)

that makes the equilibrium point at the origin asymptotically stable. One rather obvious way to approach the problem is to choose a control law u that "cancels" the nonlinear term a x^2. Indeed, setting

u = -a x^2 - x    (5.4)

and substituting (5.4) into the state equation, we obtain

x' = -x

which is linear and globally asymptotically stable, as desired.

We mention in passing that this is a simple example of a technique known as feedback linearization. Let's now examine in more detail how this result was accomplished. We notice two things:

(i) It is based on the exact cancellation of the nonlinear term a x^2. This is undesirable since in practice system parameters such as "a" in our example are never known exactly. In a more realistic scenario, what we would obtain at the end of the design process with our control u is a system of the form

x' = (a - â) x^2 - x

where a represents the true system parameter and â the actual value used in the feedback law. In this case the true system is also asymptotically stable, but only locally, because of the presence of the term (a - â) x^2.

(ii) Even assuming perfect modeling, it may not be a good idea to follow this approach and cancel "all" nonlinear terms that appear in the dynamical system. The reason is that nonlinearities in the dynamical equation are not necessarily bad. To see this, consider the following example.

Example 5.2 Consider the system given by

x' = a x^2 - x^3 + u.

Following the approach in Example 5.1, we can set

u = u1 = -a x^2 + x^3 - x

which leads to

x' = -x.

Once again, u1 is the feedback linearization law that renders a globally asymptotically stable linear system. While the idea works quite well in our example, it does come at a certain price. Notice that our control law u1 was chosen to cancel both nonlinear terms a x^2 and x^3. These two terms are, however, quite different.
The presence of terms of the form x^i, with i even, in a dynamical equation is never desirable: even powers of x do not discriminate the sign of the variable x and thus have a destabilizing effect that should be avoided whenever possible. Terms of the form x^j, with j odd, on the other hand, greatly contribute to the feedback law by providing additional damping for large values of x and are usually beneficial. At the same time, notice that the cancellation of the term -x^3 was achieved by incorporating the term x^3 in the feedback law. The presence of this term in the control input u can lead to very large values of the input; in practice, the physical characteristics of the actuators may place limits on the amplitude of the control signal, and thus the presence of the term x^3 in the input u is not desirable.

To find an alternate solution, we proceed as follows. Given the system

x' = f(x, u),  x ∈ R, u ∈ R,  f(0, 0) = 0

we proceed to find a feedback law of the form

u = φ(x)    (5.5)

such that the feedback system

x' = f(x, φ(x))    (5.6)

has an asymptotically stable equilibrium point at the origin. To show that this is the case, we will construct a function V1 = V1(x) : D → R satisfying:

(i) V1(0) = 0, and V1(x) is positive definite in D - {0}.

(ii) V1'(x) is negative definite along the solutions of (5.6); more precisely, there exists a positive definite function V2(x) : D → R+ such that

V1'(x) = (∂V1/∂x) f(x, φ(x)) ≤ -V2(x),  for all x ∈ D.

Moreover, if D = R^n and V1 is radially unbounded, then the origin is globally asymptotically stable by Theorems 3.2 and 3.3.

Example 5.3 Consider again the system of Example 5.2:

x' = a x^2 - x^3 + u.

Defining V1(x) = (1/2) x^2 and computing V1', we obtain

V1' = x f(x, u) = a x^3 - x^4 + x u ≤ -V2(x).

We require

a x^3 - x^4 + x u ≤ -(x^4 + x^2),  that is,  x u ≤ -x^2 - a x^3 = -x(x + a x^2)

which can be accomplished by choosing

u = -x - a x^2.

With this input function u, we obtain the feedback system

x' = -x - x^3

which is asymptotically stable. Indeed, with this control law

V1' = a x^3 - x^4 + x(-x - a x^2) = -x^2 - x^4 ≤ -x^2.

It then follows that this control law satisfies requirement (ii) above with V2(x) = x^2. The result is global since V1 is radially unbounded and D = R.

In Example 5.2 we chose u = u1 = -a x^2 + x^3 - x, which cancels all nonlinearities. Not happy with that solution, in Example 5.3 we modified the design so that the useful damping term -x^3 is retained, while at the same time the term x^3 is kept out of the control law.
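The two designs of Examples 5.2 and 5.3 can be compared in simulation. The sketch below is illustrative and assumes the parameter value a = 1 and the initial conditions shown: u_keep is the law of Example 5.3 (which keeps the damping term -x^3), while u_fl is the feedback-linearizing law of Example 5.2; both drive the state to the origin.

```python
import math

# Simulation of Examples 5.2 and 5.3 with the assumed parameter value a = 1.
a = 1.0
u_keep = lambda x: -x - a * x**2           # Example 5.3: closed loop xdot = -x - x^3
u_fl   = lambda x: -a * x**2 + x**3 - x    # Example 5.2: closed loop xdot = -x

def simulate(u, x0, t_end=6.0, dt=1e-4):
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (a * x**2 - x**3 + u(x))  # Euler step of xdot = a x^2 - x^3 + u
    return x

for x0 in (-2.0, 0.5, 3.0):
    xk, xf = simulate(u_keep, x0), simulate(u_fl, x0)
    assert abs(xk) < 1e-2 and abs(xf) < 1e-2  # both laws drive x to the origin
print("both control laws stabilize the origin")
```

Evaluating the two laws at, say, x = 10 also shows the practical difference: u_fl contains the term x^3 = 1000, while u_keep grows only quadratically, which is the actuator-amplitude issue discussed above.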
5.2 Integrator Backstepping

Guided by the examples of the previous section, we now explore a recursive design technique known as backstepping. To start with, we consider a system of the form

x' = f(x) + g(x) ξ    (5.7)
ξ' = u.    (5.8)

Here x ∈ R^n, ξ ∈ R, and [x, ξ]^T ∈ R^{n+1} is the state of the system (5.7)-(5.8); u ∈ R is the control input, and the functions f, g : D → R^n are assumed to be smooth. More general classes of systems are considered in the next section. As will be seen shortly, the importance of this structure is that it can be considered as a cascade connection of the subsystems (5.7) and (5.8); that is, the system (5.7)-(5.8) consists of the subsystem (5.7) augmented with a pure integrator. We will make the following assumptions (see Figure 5.1(a)):

(i) The function f : R^n → R^n satisfies f(0) = 0. Thus, the origin is an equilibrium point of the subsystem x' = f(x).

(ii) Consider the subsystem (5.7). Viewing the state variable ξ as an independent "input" for this subsystem, we assume that there exists a state feedback control law of the form

ξ = φ(x),  φ(0) = 0    (5.9)

and a Lyapunov function V1 : D → R+ such that

V1'(x) = (∂V1/∂x) [f(x) + g(x) φ(x)] ≤ -Va(x) ≤ 0,  for all x ∈ D    (5.10)

where Va : D → R+ is a positive semidefinite function in D.

According to these assumptions, the subsystem (5.7), for which a known stabilizing law ξ = φ(x) already exists, can be stabilized in isolation. We now endeavor to find a state feedback law to asymptotically stabilize the overall system (5.7)-(5.8).
Figure 5.1: (a) The system (5.7)-(5.8); (b) the modified system after introducing φ(x); (c) "backstepping" of -φ(x) through the integrator; (d) the final system after the change of variables.
To this end we proceed as follows. We start by adding and subtracting g(x) φ(x) to the subsystem (5.7). We obtain the equivalent system

x' = f(x) + g(x) φ(x) + g(x) [ξ - φ(x)]    (5.11)
ξ' = u

as shown in Figure 5.1(b). Define

z = ξ - φ(x)    (5.12)

z' = ξ' - φ' = u - φ',  where φ' = (∂φ/∂x) x' = (∂φ/∂x) [f(x) + g(x) ξ].    (5.13)

This change of variables can be seen as "backstepping" -φ(x) through the integrator, as shown in Figure 5.1(c). Defining

v = z'    (5.14)

the resulting system is

x' = f(x) + g(x) φ(x) + g(x) z    (5.15)
z' = v    (5.16)

which is shown in Figure 5.1(d). The system (5.15)-(5.16) is equivalent to the system (5.7)-(5.8). However, the system (5.15)-(5.16) is, once again, the cascade connection of two subsystems. These two steps are important for the following reasons:

(i) By construction, the subsystem (5.15) incorporates the stabilizing state feedback law ξ = φ(x) and is thus asymptotically stable when its input z is zero. This feature will now be exploited in the design of a stabilizing control law for the overall system (5.15)-(5.16).

(ii) To stabilize the system (5.15)-(5.16), consider a Lyapunov function candidate of the form

V = V(x, ξ) = V1(x) + (1/2) z^2.    (5.17)

We have that

V' = (∂V1/∂x) [f(x) + g(x) φ(x) + g(x) z] + z z'
   = (∂V1/∂x) [f(x) + g(x) φ(x)] + (∂V1/∂x) g(x) z + z v.

We can choose

v = -(∂V1/∂x) g(x) - k z,  k > 0.    (5.18)
Thus

V' = (∂V1/∂x) [f(x) + g(x) φ(x)] - k z^2 ≤ -Va(x) - k z^2.    (5.19)

It then follows by (5.19) that the origin x = 0, z = 0 is asymptotically stable. Finally, notice that, according to (5.13), the stabilizing state feedback law is given by

u = φ' + v.    (5.20)

Substituting (5.18) into (5.20) and using (5.12), we obtain

u = (∂φ/∂x) [f(x) + g(x) ξ] - (∂V1/∂x) g(x) - k [ξ - φ(x)],  k > 0.    (5.21)

Moreover, since z = ξ - φ(x) and φ(0) = 0 by assumption, the result also implies that the origin of the original system x = 0, ξ = 0 is asymptotically stable. If all the conditions hold globally and V1 is radially unbounded, then the origin is globally asymptotically stable.    (5.22)

Example 5.4 Consider the following system, which is a modified version of the one in Example 5.3:

x1' = a x1^2 - x1^3 + x2
x2' = u.    (5.23)

Clearly this system is of the form (5.7)-(5.8) with

x = x1,  ξ = x2,  f(x) = f(x1) = a x1^2 - x1^3,  g(x) = 1.

Step 1: Viewing the "state" x2 as an independent input for the x1 subsystem, we define

V1(x1) = (1/2) x1^2

V1'(x1) = a x1^3 - x1^4 + x1 x2.

Proceeding as in Example 5.3, we choose

x2 = φ(x1) = -x1 - a x1^2

leading to

x1' = -x1 - x1^3

and

V1' = a x1^3 - x1^4 + x1 (-x1 - a x1^2) = -x1^2 - x1^4 ≤ -Va(x1) = -x1^2.

Step 2: To stabilize the original system (5.23), we make use of the control law (5.21). We have

u = (∂φ/∂x1) [f(x1) + g(x1) x2] - (∂V1/∂x1) g(x1) - k [x2 - φ(x1)]
  = -(1 + 2 a x1) [a x1^2 - x1^3 + x2] - x1 - k [x2 + x1 + a x1^2],  k > 0.

The composite Lyapunov function is

V = V1 + (1/2) z^2 = (1/2) x1^2 + (1/2) [x2 - φ(x1)]^2 = (1/2) x1^2 + (1/2) [x2 + x1 + a x1^2]^2.

With this control law the origin is globally asymptotically stable (notice that V1 is radially unbounded).
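The backstepping law of Example 5.4 can be simulated directly. The sketch below is illustrative and assumes the values a = 1 and gain k = 1: it integrates the closed loop with a simple Euler scheme and checks that the composite Lyapunov function decreases along the trajectory and that the state converges to the origin.

```python
# Simulation of the backstepping controller of Example 5.4
# with assumed values a = 1 and k = 1.
a, k = 1.0, 1.0

def u(x1, x2):
    # Control law (5.21) specialized to Example 5.4.
    return (-(1 + 2 * a * x1) * (a * x1**2 - x1**3 + x2)
            - x1 - k * (x2 + x1 + a * x1**2))

def V(x1, x2):
    # Composite Lyapunov function V = 0.5*x1^2 + 0.5*(x2 + x1 + a*x1^2)^2.
    return 0.5 * x1**2 + 0.5 * (x2 + x1 + a * x1**2) ** 2

x1, x2, dt = 1.0, -2.0, 1e-4
v_prev = V(x1, x2)
for step in range(int(15.0 / dt)):
    x1, x2 = x1 + dt * (a * x1**2 - x1**3 + x2), x2 + dt * u(x1, x2)
    if step % 1000 == 0:
        assert V(x1, x2) <= v_prev + 1e-6   # V decreases along the trajectory
        v_prev = V(x1, x2)
assert abs(x1) < 1e-3 and abs(x2) < 1e-3    # the state reaches the origin
print("backstepping controller drives the state to the origin")
```

In the transformed coordinates (x1, z) the closed loop is x1' = -x1 - x1^3 + z, z' = -x1 - k z, which explains the monotone decrease of V observed in the simulation.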
5.3. x E 1R".
With this control law the origin is globally asymptotically stable (notice that Vl is radially unbounded).k[C .1
5.3 Backstepping: More General Cases

In the previous section we discussed integrator backstepping for systems with a state of the form [x, ξ]ᵀ, x ∈ ℝⁿ, ξ ∈ ℝ. We now look at more general classes of systems.

5.3.1 Chain of Integrators

A simple but useful extension of this case is that of a "chain" of integrators, specifically a system of the form

ẋ = f(x) + g(x)ξ₁
ξ̇₁ = ξ₂
ξ̇₂ = ξ₃
  ⋮
ξ̇ₖ = u.
Backstepping design for this class of systems can be approached using successive iterations of the procedure used in the previous section. We first consider the first two "subsystems"

ẋ = f(x) + g(x)ξ₁   (5.24)
ξ̇₁ = ξ₂   (5.25)

which are of the form (5.7)–(5.8) with ξ₂ considered as an independent input. As before, we assume that ξ₁ = φ(x) is a stabilizing control law for the system ẋ = f(x) + g(x)φ(x), and that V₁ is the corresponding Lyapunov function for this subsystem. We can then asymptotically stabilize the pair (5.24)–(5.25) using the control law and associated Lyapunov function

ξ₂ = φ₁(x, ξ₁) = (∂φ/∂x)[f(x) + g(x)ξ₁] − (∂V₁/∂x) g(x) − k[ξ₁ − φ(x)],   k > 0
V₂ = V₁ + ½ [ξ₁ − φ(x)]².

We now iterate this process and view the third-order system given by the first three equations as a more general version of (5.24)–(5.25). To simplify our notation, and without loss of generality, we consider the third-order system

ẋ = f(x) + g(x)ξ₁   (5.26)
ξ̇₁ = ξ₂
ξ̇₂ = u

which can be seen as having the form (5.7)–(5.8) with

x̄ = [x, ξ₁]ᵀ,   f̄ = [f(x) + g(x)ξ₁, 0]ᵀ,   ḡ = [0, 1]ᵀ.

Applying the backstepping algorithm once more, we obtain the stabilizing control law

u = (∂φ₁/∂x)[f(x) + g(x)ξ₁] + (∂φ₁/∂ξ₁) ξ₂ − ∂V₂/∂ξ₁ − k[ξ₂ − φ₁(x, ξ₁)],   k > 0

with associated Lyapunov function

V₃ = V₂ + ½ [ξ₂ − φ₁(x, ξ₁)]² = V₁ + ½ [ξ₁ − φ(x)]² + ½ [ξ₂ − φ₁(x, ξ₁)]².
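The iteration above is easy to exercise numerically. The sketch below applies the two-step construction to the third-order chain ẋ₁ = a x₁² − x₁³ + x₂, ẋ₂ = x₃, ẋ₃ = u (the system treated in Example 5.5), computing the partial derivatives of φ₁ by central finite differences; a = 1, k = 1 and the initial condition are assumed illustrative values.

```python
# Two-step backstepping for the chain  x1' = a*x1^2 - x1^3 + x2,  x2' = x3,  x3' = u.
# phi stabilizes the x1 subsystem, phi1 stabilizes (x1, x2), and u stabilizes all
# three states.  Assumed values: a = 1, k = 1; dphi1/dx1, dphi1/dx2 taken numerically.
a, k, h, dt = 1.0, 1.0, 1e-6, 1e-3

f = lambda x1: a*x1**2 - x1**3
phi = lambda x1: -x1 - a*x1**2

def phi1(x1, x2):
    return -(1 + 2*a*x1)*(f(x1) + x2) - x1 - k*(x2 - phi(x1))

def u(x1, x2, x3):
    d1 = (phi1(x1 + h, x2) - phi1(x1 - h, x2)) / (2*h)   # dphi1/dx1
    d2 = (phi1(x1, x2 + h) - phi1(x1, x2 - h)) / (2*h)   # dphi1/dx2
    dV2_dx2 = x2 - phi(x1)                               # dV2/dx2
    return d1*(f(x1) + x2) + d2*x3 - dV2_dx2 - k*(x3 - phi1(x1, x2))

x1, x2, x3 = 0.4, 0.0, 0.0
for _ in range(30000):                                   # 30 s, forward Euler
    dx1, dx2, dx3 = f(x1) + x2, x3, u(x1, x2, x3)
    x1, x2, x3 = x1 + dt*dx1, x2 + dt*dx2, x3 + dt*dx3

print(abs(x1) + abs(x2) + abs(x3))   # small: the origin is attractive
```

The same loop structure extends to longer chains: each new integrator adds one more "virtual control" φᵢ and one more quadratic term to the Lyapunov function.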
We finally point out that while, for simplicity, we have focused attention on third-order systems, the procedure for nth-order systems is entirely analogous.

Example 5.5 Consider the following system, which is a modified version of the one in Example 5.4:

ẋ₁ = a x₁² − x₁³ + x₂
ẋ₂ = x₃
ẋ₃ = u.

We proceed to stabilize this system using the backstepping approach. To start, we consider the first equation, treating x₂ as an independent "input," and proceed to find a state feedback law φ(x₁) that stabilizes this subsystem. Using the result in the previous section, it is immediate that φ(x₁) = −x₁ − a x₁² is one such law, with Lyapunov function V₁ = ½ x₁².

We can now proceed to the first step of backstepping and consider the first two subsystems, assuming at this point that x₃ is an independent input. From the results in the previous section, we propose the stabilizing law

x₃ = φ₁(x₁, x₂) = (∂φ/∂x₁)[f(x₁) + g(x₁)x₂] − (∂V₁/∂x₁) g(x₁) − k[x₂ − φ(x₁)]
   = −(1 + 2a x₁)[a x₁² − x₁³ + x₂] − x₁ − k[x₂ + x₁ + a x₁²],   k > 0

with associated Lyapunov function

V₂ = V₁ + ½ [x₂ − φ(x₁)]² = V₁ + ½ [x₂ + x₁ + a x₁²]².

We now move on to the final step, in which we consider the third-order system as a special case of (5.7)–(5.8) with

x̄ = [x₁, x₂]ᵀ,   f̄ = [f(x₁) + g(x₁)x₂, 0]ᵀ,   ḡ = [0, 1]ᵀ.

From the results in the previous section we have that

u = (∂φ₁/∂x₁)[f(x₁) + g(x₁)x₂] + (∂φ₁/∂x₂) x₃ − ∂V₂/∂x₂ − k[x₃ − φ₁(x₁, x₂)],   k > 0

is a stabilizing control law with associated Lyapunov function

V₃ = V₂ + ½ [x₃ − φ₁(x₁, x₂)]².

In our case

∂φ₁/∂x₁ = −2a[a x₁² − x₁³ + x₂] − (1 + 2a x₁)(2a x₁ − 3x₁²) − 1 − k(1 + 2a x₁)
∂φ₁/∂x₂ = −(1 + 2a x₁) − k
∂V₂/∂x₂ = x₂ + x₁ + a x₁²

where, for simplicity, we have chosen k = 1.
5.3.2 Strict Feedback Systems
We now consider systems of the form

ẋ = f(x) + g(x)ξ₁
ξ̇₁ = f₁(x, ξ₁) + g₁(x, ξ₁)ξ₂
ξ̇₂ = f₂(x, ξ₁, ξ₂) + g₂(x, ξ₁, ξ₂)ξ₃
  ⋮
ξ̇ₖ₋₁ = fₖ₋₁(x, ξ₁, …, ξₖ₋₁) + gₖ₋₁(x, ξ₁, …, ξₖ₋₁)ξₖ
ξ̇ₖ = fₖ(x, ξ₁, …, ξₖ) + gₖ(x, ξ₁, …, ξₖ)u

where x ∈ ℝⁿ, ξᵢ ∈ ℝ, and the fᵢ, gᵢ are smooth. Systems of this form are called strict feedback systems because the nonlinearities fᵢ and gᵢ depend only on the variables x, ξ₁, …, ξᵢ, that is, on the state variables that are "fed back." Strict feedback systems are also called triangular systems. This system reduces to the integrator backstepping of Section 5.2 in the special case where fᵢ = 0 and gᵢ = 1 for all i = 1, …, k.

We begin our discussion considering the special case where the system is of order n + 1 (equivalently, k = 1 in the system defined above):

ẋ = f(x) + g(x)ξ   (5.29)
ξ̇ = fₐ(x, ξ) + gₐ(x, ξ)u.   (5.30)

If gₐ(x, ξ) = 0 at some points of the domain of interest, then at those points u has no effect on ξ̇; to avoid trivialities we assume that this is not the case, i.e., that gₐ(x, ξ) ≠ 0 over the domain of interest, and define

u = (1/gₐ(x, ξ)) [u₁ − fₐ(x, ξ)].   (5.31)

Substituting (5.31) into (5.30), we obtain the modified system

ẋ = f(x) + g(x)ξ   (5.32)
ξ̇ = u₁   (5.33)

which is of the form (5.7)–(5.8). Assuming that the x subsystem (5.29) satisfies assumptions (i) and (ii) of the backstepping procedure of Section 5.2, it then follows that the stabilizing control law and associated Lyapunov function are

u = (1/gₐ(x, ξ)) { (∂φ/∂x)[f(x) + g(x)ξ] − (∂V₁/∂x) g(x) − k₁[ξ − φ(x)] − fₐ(x, ξ) },   k₁ > 0   (5.34)
V₂ = V₂(x, ξ) = V₁(x) + ½ [ξ − φ(x)]².   (5.35)

We now generalize these ideas by moving one step further and considering the system

ẋ = f(x) + g(x)ξ₁
ξ̇₁ = f₁(x, ξ₁) + g₁(x, ξ₁)ξ₂
ξ̇₂ = f₂(x, ξ₁, ξ₂) + g₂(x, ξ₁, ξ₂)u

which can be seen as a special case of (5.29)–(5.30) with

x̄ = [x, ξ₁]ᵀ,   ξ = ξ₂,   f̄ = [f(x) + g(x)ξ₁, f₁(x, ξ₁)]ᵀ,   ḡ = [0, g₁(x, ξ₁)]ᵀ,   fₐ = f₂,   gₐ = g₂.

With these definitions, and using the control law and associated Lyapunov function (5.34)–(5.35), we have that a stabilizing control law and associated Lyapunov function for this system are as follows:

φ₂(x, ξ₁, ξ₂) = (1/g₂) { (∂φ₁/∂x̄)[f̄(x̄) + ḡ(x̄)ξ₂] − (∂V₂/∂x̄) ḡ(x̄) − k₂[ξ₂ − φ₁(x, ξ₁)] − f₂ },   k₂ > 0   (5.36)
V₃(x, ξ₁, ξ₂) = V₂(x, ξ₁) + ½ [ξ₂ − φ₁(x, ξ₁)]².   (5.37)

The general case can be solved by iterating this process.
Example 5.6 Consider the following system:

ẋ₁ = a x₁² − x₁ + x₁² x₂
ẋ₂ = x₁ + x₂ + (1 + x₂²)u.

We begin by stabilizing the x subsystem. Using the Lyapunov function candidate V₁ = ½ x₁², we have that

V̇₁ = x₁ [a x₁² − x₁ + x₁² x₂] = a x₁³ − x₁² + x₁³ x₂.

Thus, the control law x₂ = φ(x₁) = −(x₁ + a) results in

V̇₁ = −(x₁² + x₁⁴)

which shows that the x subsystem is asymptotically stable. It then follows by (5.34)–(5.35) that a stabilizing control law for the second-order system and the corresponding Lyapunov function are given by

φ₁(x₁, x₂) = (1/(1 + x₂²)) { −[a x₁² − x₁ + x₁² x₂] − x₁³ − k₁[x₂ + x₁ + a] − (x₁ + x₂) },   k₁ > 0
V₂ = ½ x₁² + ½ [x₁ + x₂ + a]².
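A numerical check of Example 5.6, with the illustrative (assumed) choices a = 1, k₁ = 1: note that this design drives x₁ → 0 while x₂ → φ(0) = −a, since here φ(0) ≠ 0.

```python
# Example 5.6 with assumed values a = 1, k1 = 1:
#   x1' = a*x1^2 - x1 + x1^2*x2
#   x2' = x1 + x2 + (1 + x2^2)*u
# u implements (5.34) with phi(x1) = -(x1 + a), f_a = x1 + x2, g_a = 1 + x2^2.
a, k1, dt = 1.0, 1.0, 1e-3

def u(x1, x2):
    z = x2 + x1 + a                      # z = x2 - phi(x1)
    return (-(a*x1**2 - x1 + x1**2*x2) - x1**3 - k1*z - (x1 + x2)) / (1 + x2**2)

x1, x2 = 0.5, 0.0
for _ in range(20000):                   # 20 s, forward Euler
    dx1 = a*x1**2 - x1 + x1**2*x2
    dx2 = x1 + x2 + (1 + x2**2)*u(x1, x2)
    x1, x2 = x1 + dt*dx1, x2 + dt*dx2

print(abs(x1), abs(x2 + a))              # x1 -> 0 and x2 -> -a = phi(0)
```

Along the trajectory, V₂ = ½x₁² + ½(x₁ + x₂ + a)² decreases, consistent with V̇₂ = −x₁² − x₁⁴ − k₁z².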
5.4 Example

Consider the magnetic suspension system of Section 1.9, shown in Figure 5.2. In this case, however, we assume that, to simplify matters, the electromagnet is driven by a current source. Notice that I represents the direct current flowing through the electromagnet. The equation of motion of the ball remains the same as (1.31):

m ÿ = −k ẏ + m g − λμI² / (2(1 + μy)²).

Figure 5.2: Current-driven magnetic suspension system.

Defining states x₁ = y, x₂ = ẏ, we obtain the following state space realization:

ẋ₁ = x₂   (5.38)
ẋ₂ = g − (k/m)x₂ − λμI² / (2m(1 + μx₁)²).   (5.39)

We are interested in a control law that maintains the ball at an arbitrary position y = y₀. We can easily obtain the current necessary to achieve this objective: setting ẋ₁ = ẋ₂ = 0 in equations (5.38)–(5.39), we obtain

I₀² = (2mg/λμ)(1 + μy₀)².

It is straightforward to show that this equilibrium point is unstable, and so we look for a state feedback control law to stabilize the closed loop around the equilibrium point. A quick look at this model reveals that it is "almost" in strict feedback form: it is not in the proper form of Section 5.3.2 because of the square term in I². Since the actual input is the current I, negative values of I² cannot occur; we can, however, ignore this matter and proceed with the design without change, treating I² as the input.

We start by applying a coordinate transformation to translate the equilibrium point (y₀, 0)ᵀ to the origin. To this end we define new coordinates:

x̄₁ = x₁ − y₀
x̄₂ = x₂
u = I² − (2mg/λμ)(1 + μy₀)².

In the new coordinates the model takes the form

ẋ̄₁ = x̄₂   (5.40)
ẋ̄₂ = g − (k/m)x̄₂ − g(1 + μy₀)²/[1 + μ(x̄₁ + y₀)]² − λμ u / (2m[1 + μ(x̄₁ + y₀)]²)   (5.41)

which has an equilibrium point at the origin with u = 0. The new model is in the form (5.29)–(5.30) with

x = x̄₁,   ξ = x̄₂,   f(x) = 0,   g(x) = 1,
fₐ(x, ξ) = g − (k/m)x̄₂ − g(1 + μy₀)²/[1 + μ(x̄₁ + y₀)]²,
gₐ(x, ξ) = −λμ / (2m[1 + μ(x̄₁ + y₀)]²).

Step 1: We begin our design by stabilizing the x̄₁ subsystem; that is, we view x̄₂ as an independent input and look for a law x̄₂ = φ(x̄₁) that stabilizes the first equation. Using the Lyapunov function candidate V₁ = ½ x̄₁², we have V̇₁ = x̄₁ x̄₂, and setting x̄₂ = −x̄₁ we obtain φ(x̄₁) = −x̄₁, which stabilizes the first equation.

Step 2: We now proceed to find a stabilizing control law for the two-state system using backstepping. To this end we use the control law (5.34):

u = φ₁(x̄₁, x̄₂) = (1/gₐ) { (∂φ/∂x̄₁)[f + g x̄₂] − (∂V₁/∂x̄₁) g − k₁[x̄₂ − φ(x̄₁)] − fₐ }.

Here (∂φ/∂x̄₁)[f + g x̄₂] − (∂V₁/∂x̄₁) g − k₁[x̄₂ − φ(x̄₁)] = −x̄₂ − x̄₁ − k₁[x̄₂ + x̄₁] = −(1 + k₁)(x̄₁ + x̄₂). Substituting values, we obtain

u = φ₁(x̄₁, x̄₂) = −(2m[1 + μ(x̄₁ + y₀)]² / λμ) { −(1 + k₁)(x̄₁ + x̄₂) + (k/m)x̄₂ − g + g(1 + μy₀)²/[1 + μ(x̄₁ + y₀)]² }.   (5.42)

Straightforward manipulations show that with this control law the closed-loop system reduces to the following:

ẋ̄₁ = x̄₂
ẋ̄₂ = −(1 + k₁)(x̄₁ + x̄₂)

which is asymptotically stable for any k₁ > 0. The corresponding Lyapunov function is

V₂ = V₂(x̄₁, x̄₂) = V₁ + ½ [x̄₂ − φ(x̄₁)]² = ½ x̄₁² + ½ [x̄₁ + x̄₂]².
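The design can be checked in simulation. In the sketch below, all physical parameters (m, k, g, λ, μ, y₀) are assumed values chosen only for illustration, and, as in the text, the physical constraint I² ≥ 0 is ignored.

```python
# Backstepping controller (5.42) for the current-driven magnetic suspension,
# simulated in the shifted coordinates (x1b, x2b) = (y - y0, ydot).
# Assumed illustrative parameters: m, k, g, lam (= lambda), mu, y0.
m, k, g, lam, mu, y0 = 0.1, 0.01, 9.81, 1e-3, 10.0, 0.05
k1, dt = 1.0, 1e-3

def f_a(x1b, x2b):
    q = 1 + mu*(x1b + y0)
    return g - (k/m)*x2b - g*(1 + mu*y0)**2 / q**2

def g_a(x1b):
    q = 1 + mu*(x1b + y0)
    return -lam*mu / (2*m*q**2)

def u(x1b, x2b):                          # u = I^2 - I0^2, per (5.42)
    return (-(1 + k1)*(x1b + x2b) - f_a(x1b, x2b)) / g_a(x1b)

x1b, x2b = 0.02, 0.0                      # ball released 2 cm above y0
for _ in range(20000):                    # 20 s, forward Euler
    dx1, dx2 = x2b, f_a(x1b, x2b) + g_a(x1b)*u(x1b, x2b)
    x1b, x2b = x1b + dt*dx1, x2b + dt*dx2

print(abs(x1b), abs(x2b))                 # both -> 0: the ball settles at y0
```

Because (5.42) cancels fₐ and gₐ exactly, the simulated closed loop is the linear system ẋ̄₂ = −(1 + k₁)(x̄₁ + x̄₂) regardless of the parameter values.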
5.5 Exercises

(5.1) Consider the following system:

ẋ₁ = x₁ + cos x₁ − 1 + x₂
ẋ₂ = u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.2) Consider the following system:

ẋ₁ = x₂
ẋ₂ = x₁ + x₁³ + x₃
ẋ₃ = u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.3) Consider the following system, consisting of a chain of integrators:

ẋ₁ = x₁ + e^{x₁} − 1 + x₂
ẋ₂ = x₃
ẋ₃ = u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.4) Consider the following system:

ẋ₁ = x₁² + x₁ + x₁x₂
ẋ₂ = x₁ + (1 + x₂²)u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.5) Consider the following system:

ẋ₁ = x₁ + x₂
ẋ₂ = x₂ − x₁x₂ − x₁³ + u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

Notes and References

This chapter is based heavily on Reference [47]. The reader interested in backstepping should consult Reference [47], which contains a lot of additional material on backstepping, including interesting applications as well as the extension of the backstepping approach to adaptive control of nonlinear plants.
Chapter 6

Input-Output Stability

So far we have explored the notion of stability in the sense of Lyapunov, which, as discussed in Chapters 3 and 4, corresponds, roughly speaking, to stability of equilibrium points for the free or unforced system. This notion is characterized by the lack of external excitations and is certainly not the only way of defining stability.

In this chapter we explore the notion of input-output stability as an alternative to stability in the sense of Lyapunov; it departs from a conceptually very different approach. The input-output theory of systems was initiated in the 1960s by G. Zames and I. Sandberg. It considers systems as mappings from inputs to outputs and defines stability in terms of whether the system output is bounded whenever the input is bounded. In this theory a system is viewed as a black box and can be represented graphically as shown in Figure 6.1.

Figure 6.1: The system H, mapping the input u to the output y.

To define this notion of stability for mathematical models of physical systems, we first need to choose a suitable space of functions, which we will denote by X. The space X must be sufficiently rich to contain all input functions of interest as well as the corresponding outputs. Mathematically this is a challenging problem, since we would like to be able to consider systems that are not well behaved, in the sense that the output corresponding to an input in the space X may not belong to X. The classical solution to this dilemma consists of making use of the so-called extended spaces, introduced later in this chapter.

6.1 Function Spaces

In Chapter 2 we introduced the notion of vector space, and so far our interest has been limited to the n-dimensional space ℝⁿ. In this chapter we need to consider "function spaces," that is, spaces whose "vectors," or "elements," are functions of time. By far, the most important spaces of this kind in control applications are the so-called Lp spaces, which we now introduce. In the following definition, we consider a function u : ℝ⁺ → ℝ^q; that is, u is of the form

u(t) = [u₁(t), u₂(t), …, u_q(t)]ᵀ.

Definition 6.1 (The Space L₂) The space L₂ consists of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

||u||_L₂ = ( ∫₀^∞ [ |u₁|² + |u₂|² + ⋯ + |u_q|² ] dt )^{1/2} < ∞.   (6.1)

The norm ||u||_L₂ defined in this equation is the so-called L₂ norm of the function u.

Definition 6.2 (The Space L∞) The space L∞ consists of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

||u||_L∞ = sup_{t ∈ ℝ⁺} ||u(t)||∞ < ∞.   (6.2)

In other words,

||u||_L∞ = sup_{t ∈ ℝ⁺} ( max_{1 ≤ i ≤ q} |u_i(t)| ) < ∞.

The reader should not confuse the two different norms used in equation (6.2): the norm ||u||_L∞ is the so-called L∞ norm of the function u, whereas ||u(t)||∞ represents the infinity norm of the vector u(t) in ℝ^q.
Both L₂ and L∞ are special cases of the so-called Lp spaces. Given p with 1 ≤ p < ∞, the space Lp consists of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

||u||_Lp = ( ∫₀^∞ [ |u₁|^p + |u₂|^p + ⋯ + |u_q|^p ] dt )^{1/p} < ∞.   (6.3)

Another useful space is the so-called L₁: the space of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

||u||_L₁ = ∫₀^∞ [ |u₁| + |u₂| + ⋯ + |u_q| ] dt < ∞.   (6.4)

Property (Hölder's inequality in Lp spaces): If p and q are such that 1/p + 1/q = 1 with 1 ≤ p, q ≤ ∞, and if f ∈ Lp and g ∈ Lq, then fg ∈ L₁ and

||fg||_L₁ ≤ ||f||_Lp ||g||_Lq.   (6.5)

For the most part, we will focus our attention on the space L₂, with occasional reference to the space L∞. To add generality to our presentation, however, we will state all of our definitions and most of the main theorems referring to a generic space of functions, denoted by X. Indeed, most of the stability theorems that we will encounter in the sequel, as well as all the stability definitions, are valid in a much more general setting.

Definition 6.3 Let u ∈ X. We define the truncation operator P_T : X → X by

(P_T u)(t) = u_T(t) = { u(t), 0 ≤ t ≤ T;  0, t > T },   T ∈ ℝ⁺.   (6.6)

Example 6.1 Consider the function u : [0, ∞) → [0, ∞) defined by u(t) = t². The truncation of u(t) is the function

u_T(t) = { t², 0 ≤ t ≤ T;  0, t > T }.

6.1.1 Extended Spaces

We are now in a position to introduce the notion of extended spaces.
We will assume that the space X satisfies the following properties:

(i) X is a normed linear space of piecewise continuous functions of the form u : ℝ⁺ → ℝ^q, with norm denoted ||·||_X.

(ii) X is closed under the family of projections {P_T}; that is, if u ∈ X and T ∈ ℝ⁺, then u_T ∈ X, and moreover ||u_T||_X ≤ ||u||_X.

(iii) X is such that u = lim_{T→∞} u_T for every u ∈ X.

(iv) If u ∈ Xe (defined below), then ||u_T||_X is a nondecreasing function of T ∈ ℝ⁺.

Notice that, according to Definition 6.3, the truncation operator is a linear operator; P_T satisfies

(a) [P_T(u + v)](t) = u_T(t) + v_T(t) for all u, v ∈ Xe;
(b) [P_T(αu)](t) = α u_T(t) for all u ∈ Xe, α ∈ ℝ.

It can be easily seen that all the Lp spaces satisfy these properties.

Definition 6.4 The extension of the space X, denoted Xe, is defined as follows:

Xe = { u : ℝ⁺ → ℝ^q such that u_T ∈ X for all T ∈ ℝ⁺ }.   (6.7)

In other words, the extension of X is the space consisting of all functions whose truncation belongs to X, regardless of whether u itself belongs to X. Xe is a linear (but not normed) space: although X is a normed space, the norm of a function u ∈ Xe is in general not defined. In the sequel, the space X is referred to as the "parent" space of Xe. Moreover, using property (iv) above, given u ∈ Xe it is possible to check whether u ∈ X by studying the limit lim_{T→∞} ||u_T||_X: u ∈ X if and only if this limit is finite.

Example 6.2 Let the space of functions X be defined by

X = { x : ℝ⁺ → ℝ : x(t) piecewise continuous and ∫₀^∞ |x(t)| dt < ∞ }.

In other words, X is the space of real-valued functions in L₁. Consider the function x(t) = t. We have

x_T(t) = { t, 0 ≤ t ≤ T;  0, t > T }

||x_T||_X = ∫₀^∞ |x_T(t)| dt = ∫₀^T t dt = T²/2 < ∞.

Thus x_T ∈ X for every T ∈ ℝ⁺, and so x ∈ Xe. However, x ∉ X, since lim_{T→∞} ||x_T||_X = ∞.

Remarks: In our study of feedback systems we will encounter unstable systems, that is, systems whose output grows without bound as time increases. Those systems cannot be described with any of the Lp spaces introduced before, or indeed with any other normed space of functions. Thus, the extended spaces are the right setting for our problem. As mentioned earlier, our primary interest is in the spaces L₂ and L∞. The extension of the space Lp, 1 ≤ p ≤ ∞, will be denoted Lpe; it consists of all the functions u(t) whose truncation belongs to Lp.
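Example 6.2 can be checked numerically: every truncation of x(t) = t has finite L₁ norm (equal to T²/2), but the norm grows without bound with T, so x ∈ L₁e while x ∉ L₁. A minimal sketch, with assumed grid sizes:

```python
import numpy as np

# ||x_T||_L1 for x(t) = t grows like T^2/2: each truncation is in L1,
# but the limit as T -> infinity is unbounded, so x is in L1e but not in L1.
def trunc_l1_norm(T, n=100001):
    t = np.linspace(0.0, T, n)
    return np.trapz(np.abs(t), t)        # integral of |x_T| over [0, T]

norms = [trunc_l1_norm(T) for T in (1.0, 10.0, 100.0)]
print(norms)   # approximately [0.5, 50.0, 5000.0]
```

The monotone growth of the truncated norms illustrates property (iv) above: ||u_T||_X is nondecreasing in T, and membership in the parent space is decided by its limit.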
6.2 Input-Output Stability

We start with a precise definition of the notion of system.

Definition 6.5 A system, or more precisely, the mathematical representation of a physical system, is defined to be a mapping H : Xe → Xe that satisfies the so-called causality condition:

[Hu(·)]_T = [Hu_T(·)]_T   for all u ∈ Xe and all T ∈ ℝ⁺.   (6.8)

Condition (6.8) is important in that it formalizes the notion, satisfied by all physical systems, that the past and present outputs do not depend on future inputs; that is, the system output in the interval 0 ≤ t ≤ T does not depend on the values of the input outside this interval (i.e., on u(t) for t > T). To see this, imagine that we perform the following experiments (Figures 6.2 and 6.3):

(1) First we apply an arbitrary input u(t), find the output y(t) = Hu(t), and from here obtain the truncated output y_T(t) = [Hu(t)]_T. Clearly y_T = [Hu(t)]_T represents the left-hand side of equation (6.8). See Figure 6.3(a)–(c).

(2) In the second experiment we start by computing the truncation ū(t) = u_T(t) of the input u(t) used above, and repeat the procedure used in the first experiment. Namely, we compute the output ȳ(t) = Hū(t) = Hu_T(t) to the input ū(t) = u_T(t), and finally we take the truncation ȳ_T = [Hu_T(t)]_T of the function ȳ. Notice that this corresponds to the right-hand side of equation (6.8). See Figure 6.3(d)–(f).

The difference between these two experiments is the truncated input used in part (2); the system is causal if the outputs [Hu(t)]_T and [Hu_T(t)]_T are identical. All physical systems share this property, but care must be exercised with mathematical models, since not all mappings behave like this.

Figure 6.2: Experiment 1: input u(t) applied to system H; Experiment 2: input ū(t) = u_T(t) applied to system H.

Figure 6.3: Causal systems: (a) input u(t); (b) the response y(t) = Hu(t); (c) truncation of the response y(t) — this corresponds to the left-hand side of equation (6.8); (d) truncation of the function u(t); (e) response of the system when the input is the truncated input u_T(t); (f) truncation of the system response in part (e) — this corresponds to the right-hand side of equation (6.8).

We may now state the definition of input-output stability.

Definition 6.6 A system H : Xe → Xe is said to be input-output X-stable if whenever the input belongs to the parent space X, the output is once again in X. In other words, H is X-stable if Hu ∈ X whenever u ∈ X.

It is important to notice that the notion of input-output stability, and in fact the input-output theory of systems itself, does not depend in any way on the notion of state. The essence of the input-output theory is that only the relationship between inputs and outputs is relevant; the internal description given by the state is unnecessary in this framework, and, strictly speaking, there is no room in the input-output theory for the existence of nonzero (variable) initial conditions. In this sense, the input-output theory of systems in general, and the notion of input-output stability in particular, are complementary to the Lyapunov theory: the Lyapunov theory deals with equilibrium points of systems with zero inputs and nonzero initial conditions, while the input-output theory considers relaxed systems with nonzero inputs. In later chapters we review these concepts and consider input-output systems with an internal description given by a state space realization.

We end this discussion by pointing out that the causality condition (6.8) is frequently expressed using the projection operator P_T as follows:

P_T H = P_T H P_T   (i.e., P_T(Hx) = P_T[H(P_T x)] for all x ∈ Xe and all T ∈ ℝ⁺).
We conclude this section by making the following observation. For simplicity, we will usually say that H is input-output stable, instead of input-output X-stable, whenever confusion is unlikely. It is clear, however, that input-output stability is a notion that depends both on the system and on the space of functions.

One of the most useful concepts associated with systems is the notion of gain.

Definition 6.7 A system H : Xe → Xe is said to have a finite gain if there exist a constant γ(H) < ∞, called the gain of H, and a constant β ∈ ℝ⁺ such that

||(Hu)_T||_X ≤ γ(H) ||u_T||_X + β   for all u ∈ Xe and all T ∈ ℝ⁺ for which u_T ≠ 0.   (6.9)

Systems with finite gain are said to be finite-gain-stable. The constant β in (6.9) is called the bias term and is included in this definition to allow the case where Hu ≠ 0 when u = 0. If the system H satisfies the condition Hu = 0 whenever u = 0, then the gain γ(H) can be calculated as follows:

γ(H) = sup ||(Hu)_T||_X / ||u_T||_X   (6.10)

where the supremum is taken over all u ∈ Xe and all T ∈ ℝ⁺ for which u_T ≠ 0. A different, and perhaps more important, interpretation of this constant will be discussed in connection with input-output properties of state space realizations.

It is immediately obvious that if a system has finite gain, then it is input-output stable. The converse is, however, not true: for instance, any static nonlinearity without a bounded slope is input-output stable but does not have a finite gain. As an example, the memoryless systems Hᵢ : L∞e → L∞e, i = 1, 2, shown in Figure 6.5, are input-output stable but do not have finite gain.

Example 6.3 Let X = L∞, and consider the nonlinear operator N(·) defined by the graph in the plane shown in Figure 6.4, and notice that N(0) = 0. Systems such as N have no "dynamics" and are called static, or memoryless, systems, given that the response is an instantaneous function of the input. The gain γ(N) is easily determined from the slope of the graph of N:

γ(N) = sup ||(Nu)_T||_L∞ / ||u_T||_L∞.

Figure 6.4: Static nonlinearity N(·).

Figure 6.5: The systems H₁u = u² and H₂u = e^{|u|}.
6.3 Linear Time-Invariant Systems

So far, whenever dealing with LTI systems we have focused our attention on state space realizations. In this section we include a brief discussion of LTI systems in the context of the input-output theory of systems. For simplicity, we limit our discussion to single-input-single-output systems.

Definition 6.8 We denote by A the set of distributions (or generalized functions) of the form

f(t) = f₀ δ(t) + f_a(t),   t ≥ 0

where f₀ ∈ ℝ, δ denotes the unit impulse, and f_a(·) is such that

∫₀^∞ |f_a(τ)| dτ < ∞,

namely, f_a ∈ L₁. The norm of f ∈ A is defined by

||f||_A = |f₀| + ∫₀^∞ |f_a(t)| dt.   (6.11)

Notice that, according to this, if f ∈ L₁ (i.e., if f₀ = 0), then ||f||_A = ||f||_L₁. We will also denote by Â the set consisting of all functions that are Laplace transforms of elements of A. The extension of the algebra A, denoted Ae, is defined to be the set of all functions whose truncation belongs to A.

We now introduce the following notation:

R[s]: the set of polynomials in the variable s with real coefficients.
R(s): the field of fractions associated with R[s]; that is, R(s) consists of all rational functions in s whose numerator and denominator are real polynomials.

A rational function M ∈ R(s) will be said to be proper if it satisfies

lim_{s→∞} M(s) < ∞.

It is said to be strictly proper if

lim_{s→∞} M(s) = 0.

Notice that if M(s) is strictly proper, then it is also proper; the converse is, however, not true.
Theorem 6.1 Consider a function F(s) ∈ R(s). Then F(s) ∈ Â if and only if (i) F(s) is proper, and (ii) all poles of F(s) lie in the left half of the complex plane.

Proof: See the Appendix.

Theorem 6.1 implies that a (finite-dimensional) LTI system is stable if and only if its transfer function is proper and all the roots of its denominator polynomial lie in the left half of the complex plane. It is interesting, however, to consider a more general class of LTI systems, such as systems with a time delay. Definition 6.10 below includes this possible case of infinite-dimensional systems.

Definition 6.9 The convolution of f and g in A, denoted by f ∗ g, is defined by

(f ∗ g)(t) = ∫₀^∞ f(τ) g(t − τ) dτ = ∫₀^∞ f(t − τ) g(τ) dτ.   (6.12)

It is not difficult to show that if f, g ∈ A, then f ∗ g ∈ A and g ∗ f ∈ A.

We can now define what will be understood by a linear time-invariant system.

Definition 6.10 A linear time-invariant system H is defined to be a convolution operator of the form

(Hu)(t) = h(t) ∗ u(t) = ∫₀^∞ h(τ) u(t − τ) dτ = ∫₀^∞ h(t − τ) u(τ) dτ   (6.13)

where h(·) is called the "kernel" of the operator H.

In the special case of finite-dimensional LTI systems, h(·) = h₀ δ(t) + h_a(t) ∈ A. Moreover, given the causality assumption h(τ) = 0 for τ < 0, and taking u = 0 for t < 0, we have that, denoting y(t) = Hu(t),

y(t) = h₀ u(t) + ∫₀^t h_a(τ) u(t − τ) dτ.   (6.14)

We conclude this section with a theorem that gives necessary and sufficient conditions for the Lp stability of a (possibly infinite-dimensional) linear time-invariant system.

Theorem 6.2 Consider a linear time-invariant system H, and let h(·) represent its impulse response. Then H is Lp-stable if and only if h(·) = h₀ δ(t) + h_a(t) ∈ A; moreover, if H is Lp-stable, then

||Hu||_Lp ≤ ||h||_A ||u||_Lp.   (6.15)

Proof: See the Appendix.
6.4 Lp Gains for LTI Systems

Having settled the question of input-output stability in Lp spaces, we now focus our attention on the study of the gain. It is clear that the notion of gain depends on the space of input functions in an essential manner. Once again, for simplicity, we restrict our attention to single-input-single-output (SISO) systems.

6.4.1 L∞ Gain

The space L∞ consists of all the functions of t whose absolute value is bounded, and constitutes perhaps the most natural choice for the space of functions X. We will show that in this case

γ∞(H) = ||h(t)||_A.   (6.16)

By definition,

γ∞(H) = sup_{u ≠ 0} ||Hu||_L∞ / ||u||_L∞.   (6.17)

Consider an input u(t) applied to the system H, whose impulse response is h(t) = h₀ δ(t) + h_a(t) ∈ A. We have

(h ∗ u)(t) = h₀ u(t) + ∫₀^t h_a(τ) u(t − τ) dτ

and therefore

|(h ∗ u)(t)| ≤ |h₀| |u(t)| + ∫₀^t |h_a(τ)| |u(t − τ)| dτ ≤ sup_t |u(t)| { |h₀| + ∫₀^∞ |h_a(τ)| dτ }.

Thus

||y||_L∞ ≤ ||u||_L∞ ||h||_A,   or   ||h||_A ≥ ||Hu||_L∞ / ||u||_L∞.

This shows that ||h||_A ≥ γ∞(H). To show that ||h||_A = γ∞(H), we must show that equality can actually occur. We do this by constructing a suitable input. For each fixed t, let

u(t − τ) = sgn[h(τ)]   for all τ

where

sgn[h(τ)] = { 1 if h(τ) ≥ 0;  −1 if h(τ) < 0. }

It follows that ||u||_L∞ = 1, and

y(t) = (h ∗ u)(t) = h₀ u(t) + ∫₀^t h_a(τ) u(t − τ) dτ = |h₀| + ∫₀^t |h_a(τ)| dτ → |h₀| + ∫₀^∞ |h_a(τ)| dτ = ||h||_A

as t → ∞, and the result follows.
6.4.2 L₂ Gain

This space consists of all the functions of t that are square integrable or, to state this in different words, functions that have finite energy:

||x||_L₂ = ( ∫₀^∞ |x(t)|² dt )^{1/2}.   (6.18)

Although from the input-output point of view this class of functions is not as important as the previous case (e.g., sinusoids and step functions are not in this class), the space L₂ is the most widely used in control theory because of its connection with the frequency domain, which we study next. We consider a linear time-invariant system H, and let h ∈ A be the kernel of H, i.e., (Hu)(t) = h(t) ∗ u(t). By definition,

γ₂(H) = sup_{u ∈ L₂, u ≠ 0} ||Hu||_L₂ / ||u||_L₂.   (6.19)

We will show that in this case the gain of the system H is given by

γ₂(H) = sup_ω |H(jω)| = ||H||∞   (6.20)

where H(jω) = F[h(t)], the Fourier transform of h(t). The norm (6.20) is the so-called H-infinity norm of the system H. To see this, consider the output y of the system to an input u ∈ L₂:

||y||²_L₂ = ||Hu||²_L₂ = ∫₀^∞ |h(t) ∗ u(t)|² dt = (1/2π) ∫_{−∞}^{∞} |H(jω)|² |U(jω)|² dω

where the last identity follows from Parseval's equality. From here we conclude that

||y||²_L₂ ≤ { sup_ω |H(jω)| }² (1/2π) ∫_{−∞}^{∞} |U(jω)|² dω

and therefore

||y||_L₂ ≤ ||H||∞ ||u||_L₂   (6.21)
γ₂(H) ≤ ||H||∞.   (6.22)

Equation (6.22) proves that the H-infinity norm of H is an upper bound for the gain γ₂(H). However, to show that it is the least upper bound, we proceed to construct a suitable input. Let u(t) be such that its Fourier transform, F[u(t)] = U(jω), has the following properties:

|U(jω)| = { A if |ω − ω₀| < Δω or |ω + ω₀| < Δω;  0 otherwise. }

In this case

||y||²_L₂ = (1/2π) ∫_{−∞}^{∞} |H(jω)|² |U(jω)|² dω ≥ (1/2π) { ∫_{−ω₀−Δω}^{−ω₀+Δω} A² |H(jω)|² dω + ∫_{ω₀−Δω}^{ω₀+Δω} A² |H(jω)|² dω }.

Thus, defining A = {π/(2Δω)}^{1/2}, we have ||u||_L₂ = 1 and, as Δω → 0, ||y||_L₂ → |H(jω₀)|. Since ω₀ is arbitrary, |H(jω₀)| can be made arbitrarily close to sup_ω |H(jω)|, which completes the proof.

It is useful to visualize the L₂ gain (or H-infinity norm) using Bode plots: ||H||∞ is the peak value of the magnitude Bode plot of H(jω), as indicated in Figure 6.6.
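Reading the H-infinity norm off the Bode magnitude plot can be mimicked with a dense frequency sweep. For the lightly damped example H(s) = 1/(s² + 2ζs + 1) with ζ = 0.1 (an assumed illustrative system), the peak is available in closed form, which gives a check on the sweep:

```python
import numpy as np

# ||H||_inf = sup_w |H(jw)| approximated by a dense frequency sweep for
# H(s) = 1/(s^2 + 2*zeta*s + 1), zeta = 0.1.  Closed form for this system:
# peak = 1/(2*zeta*sqrt(1 - zeta^2)), attained near w = sqrt(1 - 2*zeta^2).
zeta = 0.1
w = np.linspace(0.0, 10.0, 100001)
H = 1.0 / ((1j*w)**2 + 2*zeta*(1j*w) + 1.0)

hinf_sweep = np.abs(H).max()
hinf_exact = 1.0 / (2*zeta*np.sqrt(1 - zeta**2))

print(hinf_sweep, hinf_exact)    # both approximately 5.025
```

The sweep is exactly the "narrowband input" argument above in discrete form: the gain γ₂(H) is approached by inputs whose energy is concentrated at the frequency where |H(jω)| peaks.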
the proof. Let u(t) be such that its Fourier transform. As with the G. we have that IIyII2 + IIHI12 as Aw + oo.5
ClosedLoop InputOutput Stability
Until now we have concentrated on openloop systems. = 1 I 2ir
Therefore.6. we first define what is understood by closed loop inputoutput stability. .22)
Equation (6. as shown in Figure 6. and then prove the socalled small gain theorem.22) proves that the Hinfinity norm of H is an upper bound for the gain 7(H).
It is immediate that equations (6. usually referred to as error signals and yl = H1e1. For example. the subsystems H1 and H2 can have several inputs and several outputs. disturbances.23) and (6.
(ii) The following equations are satisfied for all u1.5. CLOSEDLOOP INPUTOUTPUT STABILITY
169
H(34
IIHII. e2i yl. We also notice that we do not make explicit the system dimension in our notation. el and e2 are outputs. Y2 = H2e2 are respectively the outputs of the subsystems H1 and H2. norm of H..
Definition 6.7. and sensor noise.23) and (6.24) can be solved for all inputs U1. it is implicitly assumed that the number of inputs of H1 equals the number of outputs of 112. and y2 E Xe for all pairs of inputs ul.
. Assumptions (i) and (ii) ensure that equations (6.) regardless of the number of inputs and outputs of the
system.23) (6.C. and the number of outputs of H1 equals the number of inputs of H2.24)
e2 = u2 + H1e1.
Here ul and u2 are input functions and may represent different signals of interest such as commands.. U2 E Xe If this assumptions are not satisfied.11 We will denote by feedback system to the interconnection of the subsystems H1 and H2 : Xe * Xe that satisfies the following assumptions:
(i) e1. the operators H1 and H2 do not adequately describe the physical systems they model and should be modified. if X = . u2 E Xe .6.
.6: Bode plot of H(jw)I.H2e2
(6.
In general.. u2 E Xe :
el
= ul . the space of functions with bounded absolute value. For compatibility.24) can be represented graphically as shown in Figure 6. indicating the IIHII..
w
Figure 6. we write H : Xe > Xe (or H : £ooe * L.
Definition 6.23) and (6.26) (6. u E dom(P). Notice that questions related to the existence and uniqueness of the solution of equations (6.25)
Definition 6.27)
In words.
Figure 6.7: The Feedback System S.

Equations (6.23) and (6.24) can be represented graphically as shown in Figure 6.7. For this system we introduce the following input-output relations. In the following definition we make use of the vectors u, e, and y, defined as follows:

    u(t) = [u1(t), u2(t)]ᵀ,  e(t) = [e1(t), e2(t)]ᵀ,  y(t) = [y1(t), y2(t)]ᵀ.    (6.25)

Definition 6.12 Consider the feedback interconnection of the subsystems H1 and H2, with u, e, and y given by (6.25). We define

    E = {(u, e) ∈ Xe × Xe : e satisfies (6.23) and (6.24)}                        (6.26)
    F = {(u, y) ∈ Xe × Xe : y satisfies (6.23) and (6.24)}.                       (6.27)

In other words, E and F are the relations that relate the inputs ui with the errors ei and the outputs yi, i = 1, 2, respectively. Notice that questions related to the existence and uniqueness of the solution of equations (6.23) and (6.24) are taken for granted; that is the main reason why we have chosen to work with relations rather than functions. See also remark (b) following Theorem 6.3.

Definition 6.13 A relation P on Xe is said to be bounded if the image under P of every bounded subset of X ∩ dom(P) is a bounded subset of Xe; that is, P is bounded if Pu ∈ X for every u ∈ X ∩ dom(P).

Notice that the definition of boundedness depends strongly on the selection of the space X. To emphasize this dependence, we will sometimes say that a system is X-stable when it is bounded in the space X.

Definition 6.14 The feedback system of equations (6.23) and (6.24) is said to be bounded, or input-output-stable, if the closed-loop relations E and F are bounded for all possible u1, u2 in the domain of E and F.

In other words, a feedback system is input-output-stable if whenever the inputs u1 and u2 are in the parent space X, the errors e1 and e2 and the outputs y1 and y2 are also in X. In the sequel, input-output-stable systems will be referred to simply as "stable" systems.
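When the loop gains are small (in the sense of the small gain theorem of the next section), equations (6.23) and (6.24) can be solved numerically by successive substitution. The following sketch uses two hypothetical static maps standing in for H1 and H2 (these particular maps are not from the text); the iteration converges because the product of their gains is 0.4 < 1.

```python
import math

# Hypothetical static subsystems (illustrative only, not from the text):
# H1 has gain at most 0.5, H2 has gain 0.8, so gamma(H1)*gamma(H2) = 0.4 < 1.
def H1(e):
    return 0.5 * math.tanh(e)

def H2(e):
    return 0.8 * e

def solve_loop(u1, u2, iters=200):
    """Solve e1 = u1 - H2(e2), e2 = u2 + H1(e1) by fixed-point iteration."""
    e1, e2 = 0.0, 0.0
    for _ in range(iters):
        e1 = u1 - H2(e2)
        e2 = u2 + H1(e1)
    return e1, e2

e1, e2 = solve_loop(1.0, 0.5)
# At a solution of (6.23)-(6.24) both residuals vanish.
r1 = abs(e1 - (1.0 - H2(e2)))
r2 = abs(e2 - (0.5 + H1(e1)))
```

For gain products closer to 1 the same iteration converges more slowly, which mirrors how the closed-loop bounds of the next section blow up as γ(H1)γ(H2) → 1.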
6.6 The Small Gain Theorem

In this section we study the so-called small gain theorem, one of the most important results in the theory of input-output systems. Theorem 6.3, given next, is the most popular and, in a sense to be made precise below, the most important version of the small gain theorem. It says that if the product of the gains of two systems H1 and H2 is less than 1, then their feedback interconnection is stable. The main goal of the theorem is to provide open-loop conditions for closed-loop stability.

Theorem 6.3 Consider the feedback interconnection of the systems H1 and H2 : Xe → Xe. Then, if γ(H1)γ(H2) < 1, the feedback system is input-output-stable.

Proof: To simplify our proof, we assume that the bias term β in Definition 6.7 is identically zero (see Exercise 6.5). According to Definition 6.14, we must show that u1, u2 ∈ X imply that the errors e1, e2 and the outputs y1, y2 are also in X. Consider a pair of elements (u1, e1), (u2, e2) that belong to the relation E. Then e1, e2 must satisfy equations (6.23) and (6.24) and, after truncating these equations, we have

    e1T = u1T − (H2e2)T                                   (6.28)
    e2T = u2T + (H1e1)T.                                  (6.29)

Thus

    ‖e1T‖ ≤ ‖u1T‖ + ‖(H2e2)T‖ ≤ ‖u1T‖ + γ(H2)‖e2T‖        (6.30)
    ‖e2T‖ ≤ ‖u2T‖ + ‖(H1e1)T‖ ≤ ‖u2T‖ + γ(H1)‖e1T‖.       (6.31)

Substituting (6.31) in (6.30), we obtain

    ‖e1T‖ ≤ ‖u1T‖ + γ(H2){‖u2T‖ + γ(H1)‖e1T‖}
          ≤ ‖u1T‖ + γ(H2)‖u2T‖ + γ(H1)γ(H2)‖e1T‖

that is,

    [1 − γ(H1)γ(H2)]‖e1T‖ ≤ ‖u1T‖ + γ(H2)‖u2T‖.           (6.32)

Since, by assumption, γ(H1)γ(H2) < 1, we have that 1 − γ(H1)γ(H2) ≠ 0, and then

    ‖e1T‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u1T‖ + γ(H2)‖u2T‖}.       (6.33)

Similarly,

    ‖e2T‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u2T‖ + γ(H1)‖u1T‖}.       (6.34)

Thus the norms of e1T and e2T are bounded by the right-hand sides of (6.33) and (6.34). Since, by assumption, u1 and u2 are in X (i.e., ‖ui‖ < ∞ for i = 1, 2), (6.33) and (6.34) must also be satisfied as T → ∞, and we have

    ‖e1‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u1‖ + γ(H2)‖u2‖}           (6.35)
    ‖e2‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u2‖ + γ(H1)‖u1‖}.          (6.36)

It follows that e1 and e2 are also in X (see the assumptions about the space X) and the closed-loop relation E is bounded. That F is also bounded follows from (6.35), (6.36), and the inequalities

    ‖(Hiei)T‖ ≤ γ(Hi)‖eiT‖,  i = 1, 2.                    (6.37)

□

Remarks:

(a) Theorem 6.3 provides sufficient but not necessary conditions for input-output stability. It is possible, and indeed usual, to find a system that does not satisfy the small gain condition γ(H1)γ(H2) < 1 and is nevertheless input-output-stable.

(b) Theorem 6.3 guarantees that, if a solution of equations (6.23) and (6.24) exists, then it is bounded. It does not, however, imply that the solution of equations (6.23) and (6.24) is unique, and it does not follow from Theorem 6.3 that for every pair of functions u1 and u2 ∈ X the outputs e1, e2, y1, y2 exist, because the existence of a solution of (6.23) and (6.24) was not proved for every pair of inputs u1 and u2. Notice that we were able to ignore the question of existence of a solution by making use of relations; thus, the question of existence of a solution can be studied separately from the question of stability. An alternative approach was used by Desoer and Vidyasagar [21], who assume that e1 and e2 belong to Xe and define u1 and u2 to satisfy equations (6.23) and (6.24). In other words, Theorem 6.3 says exactly that if the product of the gains of the open-loop systems H1 and H2 is less than 1, then each bounded input in the domain of the relations E and F produces a bounded output.
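For concreteness, the closed-loop bounds (6.33) and (6.34) can be evaluated directly; the gains and input norms below are made-up numbers, and the bias terms are assumed to be zero.

```python
def small_gain_error_bounds(g1, g2, n_u1, n_u2):
    """Return the bounds (6.33)-(6.34) on ||e1|| and ||e2||, assuming zero bias."""
    if g1 * g2 >= 1.0:
        raise ValueError("small gain condition gamma(H1)*gamma(H2) < 1 not met")
    c = 1.0 / (1.0 - g1 * g2)
    return c * (n_u1 + g2 * n_u2), c * (n_u2 + g1 * n_u1)

# gamma(H1) = 0.5, gamma(H2) = 0.8, ||u1|| = 1, ||u2|| = 2 (illustrative values)
b1, b2 = small_gain_error_bounds(0.5, 0.8, 1.0, 2.0)
```

Note the common factor [1 − γ(H1)γ(H2)]⁻¹: as the gain product approaches 1 the guaranteed bounds deteriorate, which is the quantitative face of remark (a).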
Figure 6.8: The nonlinearity N(x).

Examples 6.4 and 6.5 contain very simple applications of the small gain theorem.

Example 6.4 Let H1 be a linear time-invariant system with transfer function

    G(s) = 1 / (s² + 2s + 4)

and let H2 be the nonlinearity N(·) defined by a graph in the plane, as shown in Figure 6.8. We assume that X = L2. We apply the small gain theorem to find the maximum slope of the nonlinearity N(·) that guarantees input-output stability of the feedback loop.

First we find the gains of H1 and H2. For a linear time-invariant system, we have

    γ(H1) = sup_ω |G(jω)|.

In this case,

    |G(jω)| = 1 / |(4 − ω²) + 2jω| = 1 / √[(4 − ω²)² + 4ω²].

Since |G(jω)| → 0 as ω → ∞, and since |G(jω)| is a continuous function of ω, the supremum must be located at some finite frequency; that is, the maximum of |G(jω)| exists and satisfies

    γ(H1) = max_ω |G(jω)| = |G(jω*)|.

Differentiating |G(jω)| twice with respect to ω, we obtain ω* = √2, with

    d|G(jω)|/dω = 0  and  d²|G(jω)|/dω² < 0  at ω = ω*,

and |G(jω*)| = 1/√12. It follows that γ(H1) = 1/√12.

The calculation of γ(H2) is straightforward: if the absolute value of the slope of N(·) is bounded by |K|, then γ(H2) = |K|. Applying the small gain condition

    γ(H1)γ(H2) < 1

we obtain

    |K| < √12.

We conclude that, if the absolute value of the slope of the nonlinearity N(·) is less than √12, the system is closed-loop stable.

Example 6.5 Let H1 be as in Example 6.4 and let H2 be a constant gain (i.e., H2 is linear time-invariant and H2 = k). Since the gain of H2 is γ(H2) = |k|, application of the small gain theorem produces the same result obtained in Example 6.4, namely, the system is closed-loop stable if |k| < √12. However, in this simple example we can find the closed-loop transfer function, denoted H(s), and check stability by obtaining the poles of H(s). We have

    H(s) = G(s) / (1 + kG(s)) = 1 / (s² + 2s + (4 + k)).

The system is closed-loop-stable if and only if the roots of the polynomial denominator of H(s) lie in the open left half of the complex plane. For a second-order polynomial, this is satisfied if and only if all its coefficients have the same sign. It follows that the system is closed-loop-stable if and only if (4 + k) > 0, or equivalently, k > −4. Comparing the results obtained using these two methods, we have

    Small gain theorem: −√12 < k < √12
    Pole analysis: −4 < k < ∞.

Thus, in this case, the small gain theorem provides a poor (conservative) estimate of the stability region.
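The gain computed in Example 6.4 is easy to confirm numerically; a simple grid search over frequency (the step size and range below are arbitrary choices) reproduces γ(H1) = 1/√12 attained at ω* = √2.

```python
import math

def G(jw):
    """Transfer function of Example 6.4 evaluated at s = jw."""
    return 1.0 / (jw * jw + 2 * jw + 4)

# Grid search for the peak of |G(jw)|; Example 6.4 predicts
# w* = sqrt(2) and |G(jw*)| = 1/sqrt(12).
ws = [i * 1e-3 for i in range(20000)]
mags = [abs(G(1j * w)) for w in ws]
k = max(range(len(ws)), key=lambda i: mags[i])
gamma_H1, w_star = mags[k], ws[k]
expected = 1.0 / math.sqrt(12.0)
```

The magnitude curve is flat near its peak, so even a coarse grid locates γ(H1) accurately; this is the numerical counterpart of the derivative test used in the example.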
6.7 Loop Transformations

As we have seen, the small gain theorem provides sufficient conditions for the stability of a feedback loop, and very often results in conservative estimates of the system stability. The same occurs with any other sufficient but not necessary stability condition, such as the passivity theorem (to be discussed in Chapter 8). One way to obtain improved (i.e., less conservative) stability conditions is to apply the theorems to a modified feedback loop that satisfies the following two properties: (1) it guarantees stability of the original feedback loop, and (2) it lessens the overall requirements on H1 and H2. In other words, it is possible that a modified system satisfies the stability conditions imposed by the theorem in use whereas the original system does not.

There are two basic transformations of feedback loops that will be used throughout the book, referred to as transformations of Type I and Type II.

Definition 6.15 (Type I Loop Transformation) Consider the feedback system S of Figure 6.9, and assume that K is linear. A loop transformation of Type I is defined to be the modified system, denoted SK, formed by the feedback interconnection of the subsystems

    H1' = H1(I + KH1)⁻¹  and  H2' = H2 − K

with inputs u1' = u1 − Ku2 and u2' = u2, as shown in Figure 6.10. The closed-loop relations of SK will be denoted EK and FK.

Figure 6.9: The Feedback System S.

Figure 6.10: The Feedback System SK.

Notice that the transformation consists essentially of adding and subtracting the term Ky1 at the same point in the loop (at the summing junction in front of H1). The following theorem shows that the system S is stable if and only if the system SK is stable. In other words, for stability analysis, the system S can always be replaced by the system SK.

Theorem 6.4 Consider the system S of Figure 6.9, and let SK be the modified system obtained after a Type I loop transformation. Let H1, H2, K, and (I + KH1)⁻¹ be causal maps from Xe into Xe, and assume that K and (I + KH1)⁻¹ : X → X. Then the system S is stable if and only if the system SK is stable.

Proof: The proof is straightforward and is omitted (see Exercise 6.6). □

Definition 6.16 (Type II Loop Transformation) Consider the feedback system S of Figure 6.9. Let H1 and H2 be causal maps of Xe into Xe, and let M be a causal linear operator satisfying:

(i) M : X → X;

(ii) there exists M⁻¹ : X → X, with M⁻¹ causal and MM⁻¹ = I;

(iii) both M and M⁻¹ have finite gain.

A Type II loop transformation is defined to be the modified system SM, formed by the feedback interconnection of the subsystems H1' = H1M and H2' = M⁻¹H2, with inputs u1' = M⁻¹u1 and u2' = u2, as shown in Figure 6.11. The closed-loop relations of this modified system will be denoted EM and FM.

Figure 6.11: The Feedback System SM.

Theorem 6.5 Consider the system S of Figure 6.9, and let SM be the modified system obtained after a Type II loop transformation. Then the system S is stable if and only if the system SM is stable.

Proof: The proof is straightforward, although laborious, and is omitted (see Exercise 6.7). □
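For static (memoryless) scalar gains, the equivalence claimed by Theorem 6.4 can be checked by direct computation; the numerical values below are arbitrary. The output y1 of the original loop and of the Type I transformed loop coincide.

```python
# Scalar static gains (hypothetical values) standing in for H1, H2, K.
h1, h2, k = 2.0, 0.3, 0.5
u1, u2 = 1.0, 0.7

# Original loop: e1 = u1 - h2*e2, e2 = u2 + h1*e1, solved in closed form.
e1 = (u1 - h2 * u2) / (1.0 + h1 * h2)
y1 = h1 * e1

# Type I transformed loop: H1' = h1/(1 + k*h1), H2' = h2 - k,
# u1' = u1 - k*u2, u2' = u2.
h1t = h1 / (1.0 + k * h1)
h2t = h2 - k
u1t = u1 - k * u2
e1t = (u1t - h2t * u2) / (1.0 + h1t * h2t)
y1t = h1t * e1t
```

The two values of y1 agree because the transformation merely adds and subtracts the same term Ky1 inside the loop, leaving the external input-output behavior unchanged.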
6.8 The Circle Criterion

Historically, one of the first applications of the small gain theorem was in the derivation of the celebrated circle criterion for the L2 stability of a class of nonlinear systems. The circle criterion analyzes the L2 stability of a feedback system formed by the interconnection of a linear time-invariant system in the forward path and a nonlinearity in the sector [α, β] in the feedback path. This is sometimes referred to as the absolute stability problem, because it encompasses not a particular system but an entire class of systems. We first define the nonlinearities to be considered.

Definition 6.17 A function φ : ℝ+ × ℝ → ℝ is said to belong to the sector [α, β], where α ≤ β, if

    αx² ≤ xφ(t, x) ≤ βx²,  ∀t ≥ 0, ∀x ∈ ℝ.               (6.38)

According to this definition, φ is, in general, time-varying, and if φ satisfies a sector condition, then for each fixed t = t* the function φ(t*, x) is confined to a graph in the plane, as shown in Figure 6.12.

Figure 6.12: The nonlinearity φ(t*, x) in the sector [α, β].

We assume that the reader is familiar with the Nyquist stability criterion, which provides necessary and sufficient conditions for closed-loop stability of lumped linear time-invariant systems. Given a proper transfer function

    G(s) = p(s) / q(s)

where p(s) and q(s) are polynomials in s with no common zeros, by expanding G(s) in partial fractions it is possible to express this transfer function in the following form:

    G(s) = g(s) + n(s)/d(s)

where

(i) g(s) has all of its poles in the open left half plane (i.e., g(s) is the transfer function of an exponentially stable system);

(ii) n(s) and d(s) are polynomials, and n(s)/d(s) is a proper transfer function;

(iii) all zeros of d(s) are in the closed right half plane.

Under these assumptions, n(s)/d(s) contains the unstable part of G(s). The number of open right half plane zeros of d(s) will be denoted by ν. With this notation, the Nyquist criterion can be stated as in the following lemma.

Lemma 6.1 (Nyquist) Consider the feedback interconnection of the systems H1 and H2. Let H1 be linear time-invariant with a proper transfer function G(s) satisfying conditions (i)-(iii) above, and let H2 be a constant gain K. Then the feedback system is closed-loop-stable in Lp, 1 ≤ p ≤ ∞, if and only if the Nyquist plot of G(jω) [i.e., the polar plot of G(jω), with the standard indentations at each jω-axis pole of G(s), if required] is bounded away from the critical point (−1/K + j0) for all ω ∈ ℝ and encircles it exactly ν times in the counterclockwise direction as ω increases from −∞ to ∞.

In the following theorem, whenever we refer to the gain γ(H) of a system H, it will be understood in the L2 sense.

Theorem 6.6 (Circle Criterion) Consider the feedback interconnection of the subsystems H1 and H2 : L2e → L2e. Assume that H2 is a nonlinearity φ in the sector [α, β], and let H1 be a linear time-invariant system with a proper transfer function G(s) that satisfies assumptions (i)-(iii) above. Under these conditions, the feedback system is L2-stable if one of the following conditions is satisfied:

(a) If 0 < α ≤ β: the Nyquist plot of G(s) is bounded away from the critical circle C*, centered on the real line and passing through the points (−α⁻¹ + j0) and (−β⁻¹ + j0), and encircles it ν times in the counterclockwise direction.

(b) If 0 = α < β: G(s) has no poles in the open right half plane, and the Nyquist plot of G(s) remains to the right of the vertical line with abscissa −β⁻¹ for all ω ∈ ℝ.

(c) If α < 0 < β: G(s) has no poles in the closed right half of the complex plane, and the Nyquist plot of G(s) is entirely contained within the interior of the circle C*.

Proof: See the Appendix. □
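The geometric objects in this section are easy to compute. The sketch below (illustrative helper functions, not part of the text) builds the critical circle C* for a sector [α, β] with 0 < α ≤ β, and checks the sector condition (6.38) pointwise for a time-invariant φ.

```python
def critical_circle(alpha, beta):
    """Center and radius of the circle through (-1/alpha, 0) and (-1/beta, 0)."""
    p1, p2 = -1.0 / alpha, -1.0 / beta
    return (p1 + p2) / 2.0, abs(p1 - p2) / 2.0

def in_sector(phi, alpha, beta, xs):
    """Check alpha*x^2 <= x*phi(x) <= beta*x^2 on the sample points xs."""
    return all(alpha * x * x - 1e-12 <= x * phi(x) <= beta * x * x + 1e-12
               for x in xs)

c, r = critical_circle(0.25, 1.0)  # sector [1/4, 1]
ok = in_sector(lambda x: 0.5 * x, 0.25, 1.0,
               [i * 0.1 - 1.0 for i in range(21)])
```

For the sector [1/4, 1] the critical circle is centered at −2.5 with radius 1.5, and the linear gain φ(x) = 0.5x lies inside the sector, as expected from (6.38).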
6.9 Exercises

(6.1) Very often, physical systems are combined to form a new system, for example, by adding their outputs or by cascading two systems. Because physical systems are represented using causal operators, it is important to determine whether the sum of two causal operators [with addition defined by (A + B)x = Ax + Bx] and the composition product [defined by (AB)x = A(Bx)] are again causal. With this introduction, you are asked the following questions:

(a) Let A : Xe → Xe and B : Xe → Xe be causal operators. Show that the sum operator C : Xe → Xe defined by C(x) = (A + B)(x) is also causal.

(b) Show that the cascade operator D : Xe → Xe defined by D(x) = (AB)(x) is also causal.

(6.2) Consider the following alternative definition of causality:

Definition 6.18 An operator H : Xe → Xe is said to be causal if

    PTu1 = PTu2  ⇒  PTHu1 = PTHu2,  ∀u1, u2 ∈ Xe and ∀T ∈ ℝ+.    (6.39)

According to this definition, if the truncations of u1 and u2 are identical (i.e., if u1 = u2 ∀t ≤ T), then the truncated outputs are also identical. You are asked to prove the following theorem, which states that the two definitions of causality are equivalent.

Theorem 6.7 Consider an operator H : Xe → Xe. Then H is causal according to Definition 6.5 if and only if it is causal according to Definition 6.18.

(6.3) Prove the following theorem, which states that, for bounded causal operators, all of the truncations in the definition of gain may be dropped if the input space is restricted to the space X.

Theorem 6.8 Consider a causal operator H : Xe → Xe satisfying H0 = 0, and assume that

    γ*(H) = sup{‖Hx‖ / ‖x‖ : x ∈ X, x ≠ 0} < ∞.

Then H has a finite gain, and γ*(H) = γ(H).

(6.4) Prove the following theorem.

Theorem 6.9 Let H1, H2 : X → X be causal bounded operators satisfying Hi0 = 0, i = 1, 2. Then

    γ(H1H2) ≤ γ(H1)γ(H2).

(6.5) Prove the small gain theorem (Theorem 6.3) in the more general case when β ≠ 0.

(6.6) Prove Theorem 6.4.

(6.7) Prove Theorem 6.5.

(6.8) In Section 6.2 we introduced system gains and the notion of finite-gain-stable systems. We now introduce a stronger form of input-output stability:

Definition 6.19 A system H : Xe → Xe is said to be Lipschitz continuous, or simply continuous, if there exists a constant Γ(H) < ∞, called its incremental gain, satisfying

    Γ(H) = sup ‖(Hu1)T − (Hu2)T‖ / ‖(u1 − u2)T‖    (6.40)

where the supremum is taken over all u1, u2 in Xe and all T in ℝ+ for which u1T ≠ u2T. Show that if H : Xe → Xe is Lipschitz continuous, then it is finite-gain-stable.

(6.9) Find the incremental gain of the system in Example 6.4.

(6.10) For each of the following transfer functions, find the sector [α, β] for which the closed-loop system is absolutely stable:

(i) H(s) = 1 / ((s + 1)(s + 3));

(ii) H(s) = 1 / ((s + 2)(s …)).
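Exercise (6.1) is easy to explore in discrete time, where the truncation operator PT simply zeroes a signal after time T. The sketch below (hypothetical operators, not from the text) checks the causality property of Definition 6.18 for a unit delay.

```python
def truncate(u, T):
    """Discrete-time analogue of the truncation (P_T u)(t)."""
    return [u[t] if t <= T else 0.0 for t in range(len(u))]

def delay(u):
    """A causal operator: unit delay, H(u)(t) = u(t - 1)."""
    return [0.0] + u[:-1]

u1 = [1.0, 2.0, 3.0, 9.0]
u2 = [1.0, 2.0, 3.0, -5.0]   # agrees with u1 up to t = 2
T = 2
assert truncate(u1, T) == truncate(u2, T)
# Causality (Definition 6.18): equal truncated inputs give equal truncated outputs.
same = truncate(delay(u1), T) == truncate(delay(u2), T)
```

A unit *advance* H(u)(t) = u(t + 1) would fail this check, since its truncated output at T = 2 already depends on the inputs' values at t = 3, where u1 and u2 differ.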
Notes and References

The classical input/output theory was initiated by Sandberg [88] and [92] (see also the more complete list of Sandberg papers in [21]), and Zames [97] and [98]. Excellent general references for the material of this chapter are [21], [65], and [66]. Our presentation follows [21] as well as [98].
Chapter 7

Input-to-State Stability

So far we have seen two different notions of stability: (1) stability in the sense of Lyapunov, and (2) input-output stability. These two concepts are at opposite ends of the spectrum. On one hand, Lyapunov stability applies to the equilibrium points of unforced state space realizations. On the other hand, input-output stability deals with systems as mappings between inputs and outputs, which may or may not be given by a state space realization, and ignores the internal system description. In this chapter we begin to close the gap between these two notions and introduce the concept of input-to-state stability. We assume that systems are described by a state space realization that includes a variable input function, and discuss stability of these systems in a way to be defined.

7.1 Motivation

Throughout this chapter we consider the nonlinear system

    ẋ = f(x, u)                                           (7.1)

where f : D × Du → ℝⁿ is locally Lipschitz in x and u. The sets D and Du are defined by

    D = {x ∈ ℝⁿ : ‖x‖ < r},  Du = {u ∈ ℝᵐ : sup_{t≥0} ‖u(t)‖ = ‖u‖_L∞ < ru}.

These assumptions guarantee the local existence and uniqueness of the solutions of the differential equation (7.1). We also assume that the unforced system

    ẋ = f(x, 0)

has a uniformly asymptotically stable equilibrium point at the origin x = 0.
Under these conditions, the problem to be studied in this chapter is as follows: given that in the absence of external inputs (i.e., with u = 0) the equilibrium point x = 0 is asymptotically stable, does it follow that in the presence of a nonzero external input either

(a) lim_{t→∞} u(t) = 0  ⇒  lim_{t→∞} x(t) = 0?

(b) or perhaps that bounded inputs result in bounded states, that is,

    ‖uT(t)‖_L∞ ≤ δ, 0 ≤ T ≤ t  ⇒  sup_t ‖x(t)‖ ≤ ε?

Drawing inspiration from the linear time-invariant (LTI) case, where all notions of stability coincide, the answer to both questions above seems to be affirmative. Indeed, for LTI systems the solution of the state equation is well known. Given an LTI system of the form

    ẋ = Ax + Bu

the trajectories with initial condition x0 and nontrivial input u(t) are given by

    x(t) = e^{At} x0 + ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ.

If the origin is asymptotically stable, then all of the eigenvalues of A have negative real parts, and ‖e^{At}‖ is bounded for all t and satisfies a bound of the form

    ‖e^{At}‖ ≤ k e^{λt},  λ < 0.                          (7.2)

It then follows that

    ‖x(t)‖ ≤ k e^{λt} ‖x0‖ + ∫₀ᵗ k e^{λ(t−τ)} ‖B‖ ‖u(τ)‖ dτ
           ≤ k e^{λt} ‖x0‖ + (k‖B‖/|λ|) sup_{0≤τ≤t} ‖u(τ)‖
           = k e^{λt} ‖x0‖ + (k‖B‖/|λ|) ‖uT(t)‖_L∞.

Thus, for LTI systems, it follows trivially that bounded inputs give rise to bounded states, and that if lim_{t→∞} u(t) = 0, then lim_{t→∞} x(t) = 0.

The nonlinear case is, however, a lot more subtle. Indeed, it is easy to find counterexamples showing that, in general, these implications fail. To see this, consider the following simple example.
Under these conditions. where all notions of stability coincide. 0 x(t) = 0. IIuT(t)Ile_ < J.1 Consider the following firstorder nonlinear system:
x=x+(x+x3)u. specifically. and that if limt__.
t) +y(b).
Interpretation: For bounded inputs u(t) satisfying Ilullk < b. It is said to be inputtostate stable. However. we now introduce the concept of inputtostate stability (ISS). the term 3(I1 xo11.3)
for all xo E D and u E Du satisfying : Ilxoll < ki and sups>o IUT(t)II = IluTllc_ < k2i 0 < T < t. u) is ISS and consider the unforced system x = f (x. however small.
Definition 7. t) + y(b).7.1) is said to be locally inputtostatestable (ISS) if there
exist a ICE function 0.2
Definitions
In an attempt to rescue the notion of "bounded inputbounded state". which clearly has an asymptotically stable equilibrium point.1) with initial state xo satisfies. which results in an unbounded trajectory for any initial condition. and the trajectories approach the ball of radius y(5). Du = Rm and (7. which we now discuss. i. t)
Vt > 0. k2 E ][i.). when the bounded input u(t) = 1 is applied. i.2. or globally ISS if D = R". Ilxoll < ki. the forced system becomes ± = x3.1.e. DEFINITIONS
185
Setting u = 0.
Vt > 0..3) is satisfied for any initial state and any bounded input u.+ such that
IIx(t)II <_ Q(IIxoll.1 has several implications..
lx(t) I1 <_ 0040 11. we obtain the autonomous LTI system ± = x.
which implies that the origin is uniformly asymptotically stable.
7.
.
As t increases.
is called the ultimate bound of the system 7.
tlim
Ilx(t)II <_ y(b)
For this reason.e.
0 <T < t
(7. as can be easily verified using the graphic technique introduced in Chapter 1.1 The system (7.)Ilc. a class K function y and constants k1.
Unforced systems: Assume that i = f (x. trajectories remain
bounded by the ball of radius 3(Ilxoll. we see that the response of (7.t) +y(IIuT(.
Definition 7.t) * 0 as t > oo. Given that y(0) = 0 (by virtue of the assumption that y is a class IC
function).
Ilx(t)II < Q(Ilxoll. 0).
1) whenever the trajectories are
outside of the ball defined by IIx*II = X(IIuPI). a21 a3 E 1Coo . It seems clear that the concept of inputtostate stability is quite different from that of stability in the sense of Lyapunov. we will show in the next section that ISS can be investigated using Lyapunovlike methods. y} < Q + y < {2$.3). To this end.
Vt > 0. and al.6)
aa(x) f (x.4) and (7.1) if it has the following properties:
(a) It is positive definite in D.
(b) It is negative definite in along the trajectories of (7.3
InputtoState Stability (ISS) Theorems
Theorem 7.
Definition 7.186
CHAPTER 7.
7.5) (7. u) < a3(IIxII)
V is said to be an ISS Lyapunov function if D = R".2 A continuously differentiable function V : D > Ift is said to be an ISS Lyapunov function on D for the system (7.3) follows from the fact that given 0 > 0 and y > 0.
Remarks: According to Definition 7. 2y}. a3.1) and let V : D a JR be an ISS Lyapunov function for this system.1.
(7.
given a positive definite function V. max{Q.
and X such that the following two conditions are satisfied:
al(IIXII) <_ V(x(t)) < a2(IIxII)
Vx E D.
(7. Du = R.)}.1 (Local ISS Theorem) Consider the system (7.'Y(IIuT(')IIG.3) with the following equation:
Iix(t)II <_ maX{)3(IIxoII.
0 < T < t. INPUTTOSTATE STABILITY
Alternative Definition: A variation of Definition 7.1) is inputtostatestable according to
. On occasions. u E Du : IIxII >_ X(IIuII). a2. there exist class IC functions al and a2 satisfying equation (7.1) if there exist class AC functions al.4) might be preferable to (7. we now introduce the concept of inputtostate Lyapunov function (ISS Lyapunov function). t). t > 0
Vx E D. Then (7. especially in the proof of some results. Nevertheless.2.4)
The equivalence between (7.1 is to replace equation (7. Notice that according to the property of Lemma 3. (7. V is an ISS Lyapunov function for the system (7.5).
t>O
0<T<t. is bounded and closed (i. this also implies that IIxHI > X(IIu(t)II) at each point x in the boundary of III.
(7. so defined.2 (Global ISS Theorem) If the preceeding conditions are satisfied with D = 1R' and Du = 1Wt.7.8)
(7. denoted O(Qc). whenever x0 E 1C..3. and let
Proof of Theorem 7.1: Notice first that if u = 0, then the defining conditions (7.5) and (7.6) of the ISS Lyapunov function guarantee that the origin is asymptotically stable. Now consider a nonzero input u, and let

    ru = sup_{0≤T≤t} ‖uT‖ = ‖u‖_L∞.

Also define

    c = α2(χ(ru)),  Ωc = {x ∈ D : V(x) ≤ c}

and notice that Ωc ⊂ D. Notice also that condition (7.6) implies that V̇(x(t)) < 0 for all t ≥ 0 whenever x(t) ∉ Ωc. It then follows from the right-hand side of (7.5) that the open set of points

    {x ∈ ℝⁿ : ‖x‖ < χ(ru)} ⊂ Ωc.

Moreover, Ωc, so defined, is bounded and closed (i.e., Ωc is a compact set), and thus it includes its boundary, denoted ∂(Ωc); this also implies that ‖x‖ ≥ χ(‖u(t)‖) at each point x in the boundary of Ωc. We now consider two cases of interest: (1) x0 is inside Ωc, and (2) x0 is outside Ωc.

Case (1) (x0 ∈ Ωc): The previous argument shows that the closed set Ωc is surrounded by ∂(Ωc), along which V̇(x(t)) is negative definite. Therefore, whenever x0 ∈ Ωc, x(t) is locked inside Ωc for all t ≥ 0, and trajectories x(t) are such that

    α1(‖x‖) ≤ V(x(t)) ≤ c
    ⇒ ‖x(t)‖ ≤ α1⁻¹(V(x(t))) ≤ α1⁻¹(c) = α1⁻¹(α2(χ(ru))).

Defining

    γ = α1⁻¹ ∘ α2 ∘ χ(‖uT‖_L∞),  0 ≤ T ≤ t,

we conclude that whenever x0 ∈ Ωc,

    ‖x(t)‖ ≤ γ(‖uT‖_L∞),  ∀t ≥ 0.                                  (7.10)

Case (2) (x0 ∉ Ωc): Assume now that x0 ∉ Ωc; then we must have V(x0) > c. As long as x(t) ∉ Ωc, we have ‖x‖ ≥ χ(‖u‖), and thus V̇(x(t)) < 0. It then follows that for some t1 > 0 we must have

    V̇(x(t)) < 0  for 0 ≤ t < t1,  and  V(x(t1)) = c.

But this implies that when t = t1, x(t) ∈ ∂(Ωc), and we are back in case (1). This argument also shows that x(t) is bounded, and that there exists a class KL function β such that

    ‖x(t)‖ ≤ β(‖x0‖, t)  for 0 ≤ t ≤ t1.                           (7.11)

Combining (7.10) and (7.11), we obtain

    ‖x(t)‖ ≤ β(‖x0‖, t)       for 0 ≤ t ≤ t1
    ‖x(t)‖ ≤ γ(‖uT‖_L∞)       for t ≥ t1

and thus

    ‖x(t)‖ ≤ max{β(‖x0‖, t), γ(‖uT‖_L∞)},  ∀t ≥ 0, 0 ≤ T ≤ t.

To complete the proof, it remains to show that k1 and k2 are given by (7.8) and (7.9). To see (7.8), notice that the trajectories must remain inside D = {x ∈ ℝⁿ : ‖x‖ < r}. Since

    α1(‖x‖) ≤ V(x) ≤ α2(‖x‖)

any initial condition satisfying α2(‖x0‖) ≤ α1(r) gives V(x0) ≤ α1(r), and hence α1(‖x(t)‖) ≤ α1(r), that is, ‖x(t)‖ ≤ r. Thus

    k1 = α2⁻¹(α1(r)).

To see (7.9), notice that we also need Ωc ⊂ D, which requires

    χ(‖uT‖_L∞) ≤ min{k1, χ(ru)},  0 ≤ T ≤ t.

Thus

    k2 = χ⁻¹(min{k1, χ(ru)}).

This completes the proof. □
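The quantities (7.7)-(7.9) are straightforward to compute once the comparison functions are fixed. The sketch below uses simple, assumed class-K∞ choices (α1(s) = s², α2(s) = 2s², χ(s) = s), which are not taken from the text.

```python
import math

# Assumed comparison functions: a1(s) = s^2, a2(s) = 2*s^2, chi(s) = s,
# on D = {|x| < r} with inputs bounded by ru.
r, ru = 2.0, 1.0

a1 = lambda s: s * s
a1_inv = lambda s: math.sqrt(s)
a2 = lambda s: 2.0 * s * s
a2_inv = lambda s: math.sqrt(s / 2.0)
chi = lambda s: s
chi_inv = lambda s: s

gamma = lambda s: a1_inv(a2(chi(s)))     # (7.7): a1^{-1} o a2 o chi
k1 = a2_inv(a1(r))                       # (7.8)
k2 = chi_inv(min(k1, chi(ru)))           # (7.9)
```

With these choices, γ(s) = √2·s, k1 = r/√2, and k2 = min{r/√2, ru}; the gap between k1 and r reflects the conservatism introduced when α1 and α2 differ.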
7.3.1 Examples

We now present several examples in which we investigate the ISS property of a system. It should be mentioned that application of Theorems 7.1 and 7.2 is not an easy task, something that should not come as a surprise, since even proving asymptotic stability can be quite a challenge. Thus, our examples are deliberately simple and consist of various alterations of the same system.

Example 7.2 Consider the following system:

    ẋ = −ax³ + u,  a > 0.

To check for input-to-state stability, we propose the ISS Lyapunov function candidate V(x) = ½x². This function is positive definite and satisfies (7.5) with α1(‖x‖) = α2(‖x‖) = ½x². We have

    V̇ = x(−ax³ + u) = −ax⁴ + xu.

We need to find α3(·) ∈ K such that V̇(x) ≤ −α3(‖x‖) whenever ‖x‖ ≥ χ(‖u‖). To this end, we proceed by adding and subtracting aθx⁴ to the right-hand side of V̇, where the parameter θ is such that 0 < θ < 1:

    V̇ = −ax⁴ + aθx⁴ − aθx⁴ + xu
       = −a(1 − θ)x⁴ − x(aθx³ − u)
       ≤ −a(1 − θ)x⁴ = −α3(‖x‖),

provided that x(aθx³ − u) ≥ 0. This will be the case, provided that

    aθ|x|³ ≥ |u|

or, equivalently,

    |x| ≥ (|u| / (aθ))^(1/3).

It follows that the system is globally input-to-state stable with

    γ(u) = (|u| / (aθ))^(1/3).   □

Example 7.3 Now consider the following system, which is a slightly modified version of the one in Example 7.2:

    ẋ = −ax³ + x²u,  a > 0.

Using the same ISS Lyapunov function candidate used in Example 7.2, we have

    V̇ = −ax⁴ + x³u = −a(1 − θ)x⁴ − x³(aθx − u)
       ≤ −a(1 − θ)x⁴,  0 < θ < 1,

provided that x³(aθx − u) ≥ 0, or, equivalently,

    |x| ≥ |u| / (aθ).

Thus, the system is globally input-to-state stable with γ(u) = |u| / (aθ). □

Example 7.4 Now consider the following system, which is yet another modified version of the one in Examples 7.2 and 7.3:

    ẋ = −ax³ + x(1 + x²)u,  a > 0.

Using the same ISS Lyapunov function candidate as in Example 7.2, we have

    V̇ = −ax⁴ + x(1 + x²)u
       = −a(1 − θ)x⁴ − x[aθx³ − (1 + x²)u]
       ≤ −a(1 − θ)x⁴,                                              (7.12)

provided that

    x[aθx³ − (1 + x²)u] ≥ 0.                                       (7.13)

Now assume that the sets D and Du are the following: D = {x ∈ ℝ : |x| < r}, Du = ℝ. Equation (7.13) is satisfied if

    |x| ≥ ((1 + r²)|u| / (aθ))^(1/3) = χ(|u|).

Therefore, the system is (locally) input-to-state stable with k1 = r,

    γ(u) = ((1 + r²)|u| / (aθ))^(1/3)

and

    k2 = χ⁻¹[min{k1, χ(ru)}] = χ⁻¹(k1) = χ⁻¹(r) = aθr³ / (1 + r²).   □
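The ultimate bound predicted for Example 7.2 can be confirmed by simulation. With a = 1 and θ = 1/2 the estimate is γ(δ) = (δ/θ)^(1/3); below, a forward-Euler integration of ẋ = −x³ + δ (all numerical choices are arbitrary) verifies that the state settles inside that ball.

```python
# Example 7.2 with a = 1 and the constant input u = delta: xdot = -x^3 + delta.
# With theta = 0.5 the ISS estimate gives gamma(delta) = (delta/theta)**(1/3).
theta, delta, dt = 0.5, 0.2, 1e-3
gamma_ub = (delta / theta) ** (1.0 / 3.0)
x = 3.0                                  # initial state well outside the bound
for _ in range(40000):                   # simulate 40 time units
    x += dt * (-x ** 3 + delta)
inside = abs(x) <= gamma_ub + 1e-6
```

The trajectory actually converges to the equilibrium δ^(1/3) ≈ 0.585, which is strictly smaller than γ(δ) ≈ 0.737; as expected, the ISS gain is an over-bound whose conservatism is controlled by θ.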
.5 are important in that they provide conditions for local and global inputtostate stability. respectively.
Theorems 7.4 and 7.5 below are important in that they provide conditions for local and global input-to-state stability, respectively.

Theorem 7.4 Consider the system (7.1), and assume that the function f(x, u) is continuously differentiable and that the origin is an asymptotically stable equilibrium point of the autonomous system ẋ = f(x, 0). Under these conditions, the system (7.1) is locally input-to-state stable.

Theorem 7.5 Consider the system (7.1), and assume that the function f(x, u) is continuously differentiable and globally Lipschitz in (x, u), and that the origin is an exponentially stable equilibrium point of the autonomous system ẋ = f(x, 0). Under these conditions, the system (7.1) is input-to-state stable.

Proof of Theorems 7.4 and 7.5: See the Appendix.

We now state and prove the main result of this section. Theorem 7.6 gives an alternative characterization of ISS Lyapunov functions that will be useful in later sections.

Theorem 7.6 A continuous function V : D → ℝ is an ISS Lyapunov function on D for the system (7.1) if and only if there exist class K functions α₁, α₂, α₃, and σ such that the following two conditions are satisfied:

    α₁(‖x‖) ≤ V(x(t)) ≤ α₂(‖x‖)    ∀x ∈ D, t ≥ 0    (7.14)

    (∂V/∂x) f(x, u) ≤ -α₃(‖x‖) + σ(‖u‖)    ∀x ∈ D, ∀u ∈ D_u.    (7.15)

Moreover, V is an ISS Lyapunov function if D = ℝⁿ, D_u = ℝᵐ, and α₁, α₂, α₃ ∈ K∞.

The only difference between Theorem 7.6 and Definition 7.2 is that condition (7.6) in Definition 7.2 has been replaced by condition (7.15). Inequality (7.15) is called the dissipation inequality. For reasons that will become clear in Chapter 9, (7.15) permits a more transparent view of the concept and implications of input-to-state stability in terms of ISS Lyapunov functions.

Proof: Assume first that (7.6) is satisfied. To see that (7.15) is satisfied, we consider two different scenarios:

(a) ‖x‖ ≥ χ(‖u‖): This case is trivial. Indeed, by (7.6),

    (∂V/∂x) f(x, u) ≤ -α₃(‖x‖) ≤ -α₃(‖x‖) + σ(‖u‖)

for any nonnegative σ, so (7.15) holds.

(b) ‖x‖ < χ(‖u‖): Define

    θ(r) = max { (∂V/∂x) f(x, u) + α₃(‖x‖) : ‖x‖ ≤ χ(r), ‖u‖ ≤ r }

so that, whenever ‖u‖ = r and ‖x‖ < χ(r),

    (∂V/∂x) f(x, u) ≤ -α₃(‖x‖) + θ(r).

Defining

    σ̄(r) = max{0, θ(r)}

we have that σ̄, so defined, satisfies the following: (i) σ̄(0) = 0, (ii) it is nonnegative, and (iii) it is continuous. It may not be in the class K, since it may not be strictly increasing. We can always, however, find σ ∈ K such that σ(r) ≥ σ̄(r), and with this σ, (7.15) is satisfied.

For the converse, assume that (7.15) is satisfied, and let θ be any constant with 0 < θ < 1. We have

    (∂V/∂x) f(x, u) ≤ -α₃(‖x‖) + σ(‖u‖)
                    = -(1 - θ)α₃(‖x‖) - θα₃(‖x‖) + σ(‖u‖)
                    ≤ -(1 - θ)α₃(‖x‖)    ∀‖x‖ ≥ α₃⁻¹(σ(‖u‖)/θ)

which shows that (7.6) holds with χ(r) = α₃⁻¹(σ(r)/θ). This completes the proof.

Remarks: According to Theorem 7.6, given r_u > 0, there exist points x ∈ ℝⁿ such that α₃(‖x‖) = σ(r_u). This implies that there exists d ∈ ℝ⁺ such that

    α₃(d) = σ(r_u),    or    d = α₃⁻¹(σ(r_u)).

Denoting B_d = {x ∈ ℝⁿ : ‖x‖ ≤ d}, we have that for any ‖x‖ ≥ d and any u : ‖u‖_L∞ ≤ r_u,

    (∂V/∂x) f(x, u) ≤ -α₃(‖x‖) + σ(‖u‖) ≤ -α₃(d) + σ(‖u‖_L∞) ≤ 0.

This means that the trajectory x(t) resulting from an input u(t) with ‖u‖_L∞ ≤ r_u will eventually (i.e., at some t = t') enter the region

    Ω_d = { x : V(x) ≤ max_{‖x‖≤d} V(x) }.
Once inside this region, the trajectory is trapped inside Ω_d, because of the condition on V̇. Notice that the region Ω_d seems to depend on the composition of α₃⁻¹(·) and σ(·). It would then appear that it is the composition of these two functions that determines the correspondence between a bound on the input function u and a bound on the state x. Our next theorems show that this is in fact the case.

In the following theorems we consider the system (7.1) and assume that it is globally input-to-state stable; that is, there exists a function V satisfying

    α(‖x‖) ≤ V(x) ≤ ᾱ(‖x‖)    ∀x ∈ ℝⁿ    (7.16)

    ∇V(x) f(x, u) ≤ -α(‖x‖) + σ(‖u‖)    ∀x ∈ ℝⁿ, u ∈ ℝᵐ    (7.17)

for some α ∈ K∞ and σ ∈ K. The functions α(·) and σ(·) constitute a pair [α(·), σ(·)], referred to as an ISS pair (or supply pair) for the system (7.1). For a given system, the ISS pair is perhaps nonunique. Theorems 7.7 and 7.8 show how to construct new ISS pairs using only the bounds α and σ.

We will need the following notation. Given functions x(·) and y(·), we say that

    x(s) = O(y(s)) as s → ∞

if

    lim_{s→∞} |x(s)| / |y(s)| < ∞.

Similarly,

    x(s) = O(y(s)) as s → 0⁺

if

    lim_{s→0⁺} |x(s)| / |y(s)| < ∞.

Theorem 7.7 Let (α, σ) be a supply pair for the system (7.1). Assume that σ̂ is a K∞ function satisfying σ(r) = O(σ̂(r)) as r → ∞. Then there exists α̂ ∈ K∞ such that (α̂, σ̂) is a supply pair.

Theorem 7.8 Let (α, σ) be a supply pair for the system (7.1). Assume that α̂ is a K∞ function satisfying α̂(r) = O(α(r)) as r → 0⁺. Then there exists σ̂ ∈ K∞ such that (α̂, σ̂) is a supply pair.

Proof of Theorems 7.7 and 7.8: See the Appendix.

Remarks: Theorems 7.7 and 7.8 have theoretical importance and will be used in the next section in connection with the stability of cascade connections of ISS systems. As it turns out, the new ISS pairs will be useful in the proof of further results.
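To make the dissipation inequality and the resulting trapping region concrete, here is a small numeric sketch; the scalar system, the Lyapunov function, and all constants below are illustrative choices, not taken from the text. For ẋ = -x³ + u with V = x²/2, V̇ = -x⁴ + xu ≤ -(1-θ)x⁴ whenever |x| ≥ (|u|/θ)^(1/3), so any input bounded by r_u should ultimately trap the state in |x| ≤ d = (r_u/θ)^(1/3):

```python
import numpy as np

# Scalar system x' = -x^3 + u, V = x^2/2 (illustrative example, not from the text).
# V' = -x^4 + x*u <= -(1-theta)*x^4 whenever |x| >= (|u|/theta)^(1/3), so trajectories
# driven by |u| <= r_u ultimately enter |x| <= d = (r_u/theta)^(1/3).

theta, r_u = 0.5, 1.0
d = (r_u / theta) ** (1.0 / 3.0)       # ultimate bound predicted by the argument

dt, T = 1e-3, 20.0
t = np.arange(0.0, T, dt)
x = np.empty_like(t)
x[0] = 3.0                             # start well outside the bound
for k in range(len(t) - 1):
    u = r_u * np.sin(t[k])             # bounded input, |u| <= r_u
    x[k + 1] = x[k] + dt * (-x[k] ** 3 + u)

tail = np.abs(x[t > 15.0])             # samples after the transient has died out
assert tail.max() <= d + 1e-6          # state is trapped inside |x| <= d
```

The simulation starts far outside the predicted region and, after a transient, remains inside it, as the trapping argument above suggests.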
7.5 Cascade-Connected Systems

Throughout this section we consider the composite system shown in Figure 7.1, where Σ₁ and Σ₂ are given by

    Σ₁ :  ẋ = f(x, z)    (7.18)
    Σ₂ :  ż = g(z, u)    (7.19)

where Σ₂ is the system with input u and state z. The state of Σ₂ serves as input to the system Σ₁.

Figure 7.1: Cascade connection of ISS systems.

In the following lemma we assume that both systems Σ₁ and Σ₂ are input-to-state stable with ISS pairs [α₁, σ₁] and [α₂, σ₂], respectively. This means that there exist positive definite functions V₁ and V₂ such that

    ∇V₁ f(x, z) ≤ -α₁(‖x‖) + σ₁(‖z‖)    (7.20)
    ∇V₂ g(z, u) ≤ -α₂(‖z‖) + σ₂(‖u‖).    (7.21)

Lemma 7.1 Given the systems Σ₁ and Σ₂, we have the following:

(i) Defining

    α̂₂(s) = { α₂(s)   for s "small"
             { 2σ₁(s)  for s "large"

then there exists σ̂₂ such that (α̂₂, σ̂₂) is an ISS pair for the system Σ₂.

(ii) Defining σ̂₁(s) = ½ α̂₂(s), then there exists α̂₁ such that [α̂₁, ½ α̂₂] is an ISS pair for the system Σ₁.

The lemma follows our discussion at the end of the previous section and guarantees the existence of alternative ISS pairs [α̂₁, σ̂₁] and [α̂₂, σ̂₂] for the two systems.
Proof: A direct application of Theorems 7.7 and 7.8.

We now state and prove the main result of this section.

Theorem 7.9 Consider the cascade interconnection of the systems Σ₁ and Σ₂. If both systems are input-to-state stable, then the composite system

    Σ : u → [x z]ᵀ

is input-to-state stable.

Proof: By (7.20)-(7.21) and Lemma 7.1, the functions V₁ and V₂ satisfy

    ∇V₁ f(x, z) ≤ -α̂₁(‖x‖) + ½ α̂₂(‖z‖)
    ∇V₂ g(z, u) ≤ -α̂₂(‖z‖) + σ̂₂(‖u‖).

Define the ISS Lyapunov function candidate

    V = V₁ + V₂

for the composite system. We have

    V̇((x, z), u) = ∇V₁ f(x, z) + ∇V₂ g(z, u) ≤ -α̂₁(‖x‖) - ½ α̂₂(‖z‖) + σ̂₂(‖u‖).

It then follows that V is an ISS Lyapunov function for the composite system, and the theorem is proved.

As the reader might have guessed, a local version of this result can also be proved. For completeness, we now state this result without proof.

Theorem 7.10 Consider the cascade interconnection of the systems Σ₁ and Σ₂. If both systems are locally input-to-state stable, then the composite system

    Σ : u → [x z]ᵀ

is locally input-to-state stable.

This theorem is somewhat obvious and can be proved in several ways (see Exercise 7.3).

Here we consider the following special case of the interconnection of Figure 7.1, shown in Figure 7.2:

    Σ₁ :  ẋ = f(x, z)          (7.22)
    Σ₂ :  ż = g(z) = g(z, 0)    (7.23)

and study the Lyapunov stability of the origin of the interconnected system with state [x z]ᵀ.

Figure 7.2: Cascade connection of ISS systems with input u = 0.

The following two corollaries, which are direct consequences of Theorems 7.9 and 7.10, are also important and perhaps less obvious.

Corollary 7.1 If the system Σ₁ with input z is locally input-to-state stable and the origin z = 0 of the system Σ₂ is asymptotically stable, then the origin of the interconnected system (7.22)-(7.23) is locally asymptotically stable.

Corollary 7.2 Under the conditions of Corollary 7.1, if Σ₁ is input-to-state stable and the origin z = 0 of the system Σ₂ is globally asymptotically stable, then the origin of the interconnected system (7.22)-(7.23) is globally asymptotically stable.

Proof of Corollaries 7.1 and 7.2: The proofs of both corollaries follow from the fact that, under the assumptions, the system Σ₂ is trivially (locally or globally) input-to-state stable, and then so is the interconnection, by application of Theorem 7.9 or 7.10.
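A short simulation illustrates Theorem 7.9 and Corollary 7.2; the particular subsystems below are hypothetical examples chosen for the sketch, not systems from the text. Σ₂ : ż = -z + u and Σ₁ : ẋ = -x + z² are each ISS, so the cascade u → (x, z) should be ISS: bounded inputs give bounded states, and with u = 0 the origin is globally asymptotically stable:

```python
import numpy as np

# Cascade sketch (illustrative): Sigma2: z' = -z + u feeds Sigma1: x' = -x + z^2.
# Each subsystem is ISS, so Theorem 7.9 predicts the cascade is ISS, and with u = 0
# the interconnection should decay to the origin (Corollary 7.2).

def simulate(u_fn, x0, z0, T=30.0, dt=1e-3):
    n = int(T / dt)
    x, z = x0, z0
    traj = np.empty((n, 2))
    for k in range(n):
        u = u_fn(k * dt)
        x, z = x + dt * (-x + z ** 2), z + dt * (-z + u)
        traj[k] = (x, z)
    return traj

bounded = simulate(lambda t: np.sin(t), x0=2.0, z0=-2.0)   # bounded input
vanishing = simulate(lambda t: 0.0, x0=2.0, z0=-2.0)       # u = 0 case

assert np.all(np.isfinite(bounded)) and np.abs(bounded[-1]).max() < 10.0
assert np.abs(vanishing[-1]).max() < 1e-3   # unforced cascade decays to the origin
```

Note that Σ₁ on its own is only ISS with respect to its input z; it is the cascade structure, via the composite Lyapunov function V₁ + V₂ of the proof above, that yields the overall bound.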
7.6 Exercises

(7.1) Consider the following two-input system ([70]):

    ẋ = -x³ + x²u₁ - xu₂ + u₁u₂.

Is it input-to-state stable?

(7.2) Sketch the proof of Theorem 7.1.

(7.3) Provide an alternative proof of Theorem 7.10, using only the definition of input-to-state stability.

(7.4) Sketch a proof of Theorem 7.4.

(7.5) Consider the following system:

    ẋ₁ = -x₁
    ẋ₂ = -x₁x₂ + u.

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.6) Consider the following system:

    ẋ₁ = -x₁ + x₂|x₂|
    ẋ₂ = -x₂ + x₁x₂ + u.

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.7) Consider the following cascade connection of systems:

    ẋ = -x³ + x²u
    ż = -z³ + z(1 + z²)x.

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.8) Consider the following cascade connection of systems:

    ẋ = -x³ + x²u₁ - xu₂ + u₁u₂
    ż = -z³ + z²x.

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.9) Consider the following cascade connection of systems:

    ẋ₁ = -x₁ + 2x₂ + u₁
    ẋ₂ = -x₂ + u₂
    ż = -z³ + z²x₁ - z²x₂ + x₁x₂.

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

Notes and References

The concept of input-to-state stability, as presented here, was introduced by Sontag [69]. Definition 7.1, as well as Theorems 7.3 and 7.6, were taken from Reference [73]. See also References [70], [71], [72], and [74] for a thorough introduction to the subject containing the fundamental results. ISS pairs were introduced in Reference [75]; Theorems 7.7 and 7.8, as well as Section 7.5 on cascade connections of ISS systems, are based on this reference. The literature on input-to-state stability is now very extensive. See also Chapter 10 of Reference [37] for a good survey of results in this area.
Chapter 8

Passivity

The objective of this chapter is to introduce the concept of passivity and to present some of the stability results that can be obtained using this framework. As with the small gain theorem, we look for open-loop conditions for closed-loop stability of feedback interconnections. Throughout this chapter we focus on the classical input-output definition. State space realizations are considered in Chapter 9 in the context of the theory of dissipative systems.

8.1 Power and Energy: Passive Systems

Before we introduce the notion of passivity for abstract systems, it is convenient to motivate this concept with some examples from circuit theory. We begin by recalling from basic physics that power is the time rate at which energy is absorbed or spent:

    power = energy / time.

Then

    p(t) = dw(t)/dt    (8.1)

where p(·) and w(·) denote power and energy, respectively, and

    w(t) = ∫_{t₀}^{t} p(τ) dτ.    (8.2)

Now consider a basic circuit element, represented in Figure 8.1 using a black box. In Figure 8.1 the voltage across the terminals of the box is denoted by v, and the current in the circuit element is denoted by i. The assignment of the reference polarity for voltage and reference direction for current is completely arbitrary. We have

    p(t) = v(t)i(t)    (8.3)

thus, the energy absorbed by the circuit at time "t" is

    w(t) = ∫_{-∞}^{t} v(τ)i(τ) dτ = ∫_{-∞}^{0} v(τ)i(τ) dτ + ∫_{0}^{t} v(τ)i(τ) dτ.    (8.4)

Figure 8.1: Passive network.

The first term on the right-hand side of equation (8.4) represents the effect of initial conditions different from zero in the circuit elements. With the indicated sign convention, we have, in general:

(i) If w(t) > 0, the box absorbs energy (this is the case, for example, for a resistor).

(ii) If w(t) < 0, the box delivers energy (this is the case, for example, for a battery with negative voltage with respect to the polarity indicated in Figure 8.1).

In circuit theory, elements that do not generate their own energy are called passive; i.e., a circuit element is passive if

    ∫_{-∞}^{t} v(τ)i(τ) dτ ≥ 0.    (8.5)

Resistors, capacitors, and inductors indeed satisfy this condition, and are therefore called passive elements.

Passive networks, or systems in a more general sense, are well behaved, in an admittedly ambiguous sense. Stability, in its many forms, is a concept that has been used to describe a desirable property of a physical system, and it is intended to capture precisely the notion of a system that is well behaved, in a certain precise sense. It is not straightforward to capture the notion of good behavior within the context of a theory of networks. If the notion of passivity in networks is to be of any productive use, then we should be able to infer some general statements about the behavior of a passive network.

To study this proposition, we consider the circuit shown in Figure 8.2, where we assume that the black box contains a passive (linear or not) circuit element.

Figure 8.2: Passive network.

Assuming that the network is initially relaxed, and using Kirchhoff's voltage law, we have

    e(t) = i(t)R + v(t).

Assume now that the electromotive force (emf) source e is such that

    ∫₀^T e²(t) dt < ∞    ∀T.

We have

    ∫₀^T e²(t) dt = ∫₀^T (i(t)R + v(t))² dt
                  = R² ∫₀^T i²(t) dt + 2R ∫₀^T i(t)v(t) dt + ∫₀^T v²(t) dt

and since the black box is passive, ∫₀^T i(t)v(t) dt ≥ 0. It follows that

    ∫₀^T e²(t) dt ≥ R² ∫₀^T i²(t) dt + ∫₀^T v²(t) dt.

Moreover, since the applied voltage is such that ∫₀^T e²(t) dt < ∞, we can take limits as T → ∞ on both sides of the inequality, and we have

    R² ∫₀^∞ i²(t) dt + ∫₀^∞ v²(t) dt ≤ ∫₀^∞ e²(t) dt < ∞

which implies that both i and v have finite energy. This, in turn, implies that the energy in these two quantities can be controlled from the input source e(·), and in this sense we can say that the network is well behaved. In the next section, we formalize these ideas in the context of the theory of input-output systems and generalize these concepts to more general classes of systems.
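The energy inequality just derived is easy to check numerically. In the sketch below the "black box" is taken to be a passive RC branch (a 1 Ω resistor in series with a 1 F capacitor, initially relaxed); the box contents, the source e(t), and R = 2 Ω are illustrative assumptions, not values from the text:

```python
import numpy as np

# Numeric check of the energy inequality derived above. The "black box" is a passive
# RC branch (r_b = 1 ohm in series with C = 1 F, initially relaxed); box, source e(t),
# and R = 2 ohms are illustrative assumptions.

R, r_b, C = 2.0, 1.0, 1.0
dt, T = 1e-4, 10.0
t = np.arange(0.0, T, dt)
e = np.exp(-t) * np.sin(3.0 * t)        # square-integrable source voltage
i = np.zeros_like(t)
v = np.zeros_like(t)                    # voltage across the black box
vc = 0.0                                # capacitor state, vc(0) = 0 (relaxed)
for k in range(len(t)):
    i[k] = (e[k] - vc) / (R + r_b)      # Kirchhoff: e = i*R + v, with v = i*r_b + vc
    v[k] = i[k] * r_b + vc
    vc += dt * i[k] / C                 # capacitor law i = C dvc/dt (Euler step)

W_box = np.sum(i * v) * dt              # energy absorbed by the black box
lhs = np.sum(e ** 2) * dt
rhs = R ** 2 * np.sum(i ** 2) * dt + np.sum(v ** 2) * dt

assert W_box > 0.0                      # passivity: the box absorbs net energy
assert lhs >= rhs                       # int e^2 >= R^2 int i^2 + int v^2
```

Because e = iR + v holds sample by sample, the discrete sums satisfy the same identity as the integrals: the left side exceeds the right exactly by 2R times the (nonnegative) energy absorbed by the passive box.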
8.2 Definitions

Before we can define the concept of passivity and study some of its properties, we need to introduce our notation and lay down the mathematical machinery. In particular, we want to draw conclusions about the feedback interconnection of systems based on the properties of the individual components. The essential tool needed in the passivity definition is that of an inner product space.

Definition 8.1 A real vector space X is said to be a real inner product space if for every two vectors x, y ∈ X there exists a real number (x, y) that satisfies the following properties:

(i) (x, y) = (y, x)
(ii) (x + y, z) = (x, z) + (y, z)    ∀x, y, z ∈ X
(iii) (αx, y) = α(x, y)    ∀x, y ∈ X, ∀α ∈ ℝ
(iv) (x, x) ≥ 0    ∀x ∈ X
(v) (x, x) = 0 if and only if x = 0.

The function (·,·) : X × X → ℝ is called the inner product of the space X. Using these properties we can define a norm for each element of the space X as follows:

    ‖x‖_X = √(x, x).    (8.6)

If the space X is complete, then the inner product space is said to be a Hilbert space. An important property of inner product spaces is the so-called Schwarz inequality:

    |(x, y)| ≤ ‖x‖_X ‖y‖_X    ∀x, y ∈ X.

Example 8.1 Let X = ℝⁿ. Then the usual "dot product" in ℝⁿ, defined by

    x · y = xᵀy = x₁y₁ + x₂y₂ + ··· + xₙyₙ    (8.7)

defines an inner product in ℝⁿ. It is straightforward to verify that the defining properties (i)-(v) are satisfied.

Example 8.2 Let X = L₂, the space of finite-energy functions:

    X = { x : ℝ⁺ → ℝ : ‖x‖_{L₂} = (x, x)^{1/2} = ( ∫₀^∞ x²(t) dt )^{1/2} < ∞ }.

Virtually all of the literature dealing with this type of system makes use of the following inner product:

    (x, y) = ∫₀^∞ x(t) y(t) dt.    (8.8)

This inner product is usually referred to as the natural inner product in L₂. We have chosen, however, to state our definitions in more general terms, given that our discussion will not be restricted to continuous-time systems. Notice also that there is no a priori reason to assume that this is the only interesting inner product that one can find, even for continuous-time systems.

In the sequel, we will need the extension X_e of the space X (defined as usual as the space of all functions whose truncation belongs to X), and we assume that the inner product satisfies the following:

    (x_T, y) = (x, y_T) = (x_T, y_T) ≝ (x, y)_T    ∀T ∈ ℝ⁺.    (8.9)

With this inner product, we have that X_e = L₂e is the space of all functions whose truncation x_T belongs to L₂, regardless of whether x(t) itself belongs to L₂. For instance, the function x(t) = eᵗ belongs to L₂e even though it is not in L₂. For the most part, our attention will be centered on continuous-time systems.

Definition 8.2 (Passivity) A system H : X_e → X_e is said to be passive if there exists β ∈ ℝ such that

    (u, Hu)_T ≥ β    ∀u ∈ X_e, ∀T ∈ ℝ⁺.    (8.10)
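As a quick sanity check on this machinery, the Schwarz inequality can be verified numerically under the natural L₂ inner product (8.8), approximating the integrals on a fine grid; the two signals below are arbitrary illustrative choices:

```python
import numpy as np

# Numeric sanity check (illustrative) of the Schwarz inequality |(x, y)| <= ||x|| ||y||
# under the natural L2 inner product (8.8), approximated by Riemann sums on a grid.

dt = 1e-4
t = np.arange(0.0, 10.0, dt)
x = np.exp(-t) * np.cos(5.0 * t)           # finite-energy signal
y = np.exp(-0.5 * t) * np.sin(2.0 * t)     # finite-energy signal

inner = np.sum(x * y) * dt                 # (x, y) = int x(t) y(t) dt
norm_x = np.sqrt(np.sum(x * x) * dt)       # ||x|| = sqrt((x, x))
norm_y = np.sqrt(np.sum(y * y) * dt)

assert abs(inner) <= norm_x * norm_y + 1e-12
```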
To emphasize these ideas, we go back to our network example.

Example 8.3 Consider again the network of Figure 8.1. From equation (8.4) we know that the total energy absorbed by the network at time t is

    ∫_{-∞}^{t} v(τ)i(τ) dτ = ∫_{-∞}^{0} v(τ)i(τ) dτ + ∫₀^{t} v(τ)i(τ) dτ = (v, i)_t + ∫_{-∞}^{0} v(τ)i(τ) dτ.

To analyze this network as an abstract system with input u and output y = Hu, we define

    u = v(t)
    y = Hu = i(t).

According to Definition 8.2, the network is passive if and only if

    (u, Hu)_T = (v(t), i(t))_T ≥ β.

Choosing the inner product to be the natural inner product in L₂, the last inequality is equivalent to the following:

    ∫₀^T u(t)y(t) dt = ∫₀^T v(t)i(t) dt ≥ β    ∀v ∈ X_e, ∀T ∈ ℝ⁺.

Therefore, Definition 8.2 states that only a finite amount of energy, initially stored in the system at t = 0, can be extracted from a passive system.

Definition 8.3 (Strict Passivity) A system H : X_e → X_e is said to be strictly passive if there exists δ > 0 such that

    (u, Hu)_T ≥ δ‖u_T‖²_X + β    ∀u ∈ X_e, ∀T ∈ ℝ⁺.    (8.11)

The constant β in Definitions 8.2 and 8.3 is a bias term included to account for the possible effect of energy initially stored in the system at t = 0.

Closely related to the notions of passivity and strict passivity are the concepts of positivity and strict positivity, introduced next. The only difference between the notions of passivity and positivity (strict passivity and strict positivity) is the lack of truncations in the positivity definitions. As a consequence, the notions of positivity and strict positivity apply to input-output stable systems exclusively.

Definition 8.4 A system H : X → X is said to be strictly positive if there exists δ > 0 such that

    (u, Hu) ≥ δ‖u‖²_X + β    ∀u ∈ X.    (8.12)

H is said to be positive if it satisfies (8.12) with δ = 0.

Notice that, if the system H is not input-output stable, then the left-hand side of (8.12) is unbounded. The following theorem shows that if a system is (i) causal and (ii) stable, then the notions of positivity and passivity are entirely equivalent.

Theorem 8.1 Consider a system H : X → X, and let H be causal. We have:

(i) H positive ⟺ H passive.
(ii) H strictly positive ⟺ H strictly passive.

Proof: Part (i) is immediately obvious assuming that δ = 0 in the argument below, so we prove part (ii). First assume that H satisfies (8.12), and consider an arbitrary input u ∈ X_e. It follows that u_T ∈ X, and by (8.12) we have that

    (u_T, H u_T) ≥ δ‖u_T‖²_X + β

but, since H is causal, using (8.9),

    (u_T, H u_T) = (u_T, (H u_T)_T) = (u_T, (Hu)_T) = (u_T, Hu)_T = (u, Hu)_T.

Thus

    (u, Hu)_T ≥ δ‖u_T‖²_X + β

and, since u ∈ X_e is arbitrary, we conclude that (8.12) implies (8.11). For the converse, assume that H satisfies (8.11), and consider an arbitrary input u ∈ X. Then, since (8.11) is valid for all T ∈ ℝ⁺, we can take limits as T → ∞ to obtain

    (u, Hu) ≥ δ‖u‖²_X + β.

Thus we conclude that (8.11) implies (8.12), and the second assertion of the theorem is proved. This completes the proof.

8.3 Interconnections of Passivity Systems

In many occasions it is important to study the properties of combinations of passive systems. The following theorem considers two important cases, shown in Figures 8.3 and 8.4.

Figure 8.3: Parallel combination of the systems H₁, H₂, ..., Hₙ.

Figure 8.4: The feedback system S₁.

Theorem 8.2 Consider a finite number of systems Hᵢ : X_e → X_e, i = 1, ..., n. We have:

(i) If all of the systems Hᵢ, i = 1, ..., n are passive, then the system H : X_e → X_e defined by

    Hx = H₁x + ··· + Hₙx    (8.13)

is passive.

(ii) If all the systems Hᵢ, i = 1, ..., n are passive, and at least one of them is strictly passive, then the system H defined by equation (8.13) is strictly passive.

(iii) If the systems Hᵢ, i = 1, 2 are passive and the feedback interconnection defined by the equations (Figure 8.4)

    e = u - H₂y    (8.14)
    y = H₁e    (8.15)

is well defined (i.e., e(t) ∈ X_e and is uniquely determined for each u(t) ∈ X_e), then the mapping from u into y defined by equations (8.14)-(8.15) is passive.
Proof of (i): We have

    (x, Hx)_T = (x, (H₁ + ··· + Hₙ)x)_T = (x, H₁x + ··· + Hₙx)_T
              = (x, H₁x)_T + ··· + (x, Hₙx)_T
              ≥ β₁ + ··· + βₙ ≝ β.

Thus, H = (H₁ + ··· + Hₙ) is passive.

Proof of (ii): Assume that k out of the n systems Hᵢ are strictly passive, 1 ≤ k ≤ n. By relabeling the systems, if necessary, we can assume that these are the systems H₁, ..., H_k. It follows that

    (x, Hx)_T = (x, H₁x)_T + ··· + (x, H_k x)_T + ··· + (x, Hₙx)_T
              ≥ δ₁(x, x)_T + ··· + δ_k(x, x)_T + β₁ + ··· + βₙ

and the result follows.

Proof of (iii): Consider the inner product (u, y)_T. We have

    (u, y)_T = (e + H₂y, y)_T = (e, y)_T + (H₂y, y)_T = (e, H₁e)_T + (y, H₂y)_T ≥ β₁ + β₂.

This completes the proof.

Remarks: In general, the number of systems in parts (i) and (ii) of Theorem 8.2 cannot be assumed to be infinite. The validity of these results in the case of an infinite sequence of systems depends on the properties of the inner product. It can be shown, however, that if the inner product is the standard inner product in L₂, then this extension is indeed valid. The proof is omitted since it requires some relatively advanced results on Lebesgue integration.

8.3.1 Passivity and Small Gain

The purpose of this section is to show that, in an inner product space, the concept of passivity is closely related to the norm of a certain operator to be defined. In the following theorem X_e is an inner product space, and the gain of a system H : X_e → X_e is the gain induced by the norm ‖x‖² = (x, x).

Theorem 8.3 Let H : X_e → X_e, and assume that (I + H) is invertible in X_e, that is, assume that (I + H)⁻¹ : X_e → X_e. Define the function S : X_e → X_e:

    S = (H - I)(I + H)⁻¹.    (8.16)

We have:

(a) H is passive if and only if the gain of S is at most 1, that is,

    ‖(Sx)_T‖_X ≤ ‖x_T‖_X    ∀x ∈ X_e, ∀T ∈ ℝ⁺.    (8.17)

(b) H is strictly passive and has finite gain if and only if the gain of S is less than 1.

Proof: See the Appendix.

8.4 Stability of Feedback Interconnections

In this section we exploit the concept of passivity in the stability analysis of feedback interconnections. Our first result consists of the simplest form of the passivity theorem. The simplicity of the theorem stems from considering a feedback system with one single input u₁, as shown in Figure 8.5 (u₂ = 0 in the feedback system used in Chapter 6). To simplify our proofs, we assume without loss of generality that the systems are initially relaxed, and so the constant β in Definitions 8.2 and 8.3 is identically zero.

Figure 8.5: The feedback system S₁.

Theorem 8.4 Let H₁, H₂ : X_e → X_e and consider the feedback interconnection defined by the following equations:

    e₁ = u₁ - H₂y₁    (8.18)
    y₁ = H₁e₁.    (8.19)

Under these conditions, if H₁ is passive and H₂ is strictly passive, then y₁ ∈ X for every u₁ ∈ X.

Proof: We have

    (u₁, y₁)_T = (e₁ + H₂y₁, y₁)_T = (e₁, H₁e₁)_T + (H₂y₁, y₁)_T ≥ δ‖y₁T‖²_X

since H₁ and H₂ are passive and strictly passive, respectively. By the Schwarz inequality,

    |(u₁, y₁)_T| ≤ ‖u₁T‖_X ‖y₁T‖_X.

Hence

    δ‖y₁T‖²_X ≤ ‖u₁T‖_X ‖y₁T‖_X

and

    ‖y₁T‖_X ≤ δ⁻¹‖u₁T‖_X.    (8.20)

Therefore, we can take limits as T tends to infinity on both sides of inequality (8.20) to obtain

    ‖y₁‖_X ≤ δ⁻¹‖u₁‖_X

which shows that if u₁ is in X, then y₁ is also in X.

Remarks: Theorem 8.4 says exactly the following: if H₁ is passive and H₂ is strictly passive, then the output y₁ is bounded whenever the input u₁ is bounded. According to our definition of input-output stability, this implies that the closed-loop system seen as a mapping from u₁ to y₁ is input-output stable. The theorem, however, does not guarantee that the error e₁ and the output y₂ are bounded. For these two signals to be bounded, we need a stronger assumption, namely, the strictly passive system must also have finite gain. We consider this case in our next theorem.

Theorem 8.5 Let H₁, H₂ : X_e → X_e and consider again the feedback system of equations (8.18)-(8.19). Under these conditions, if both systems are passive and one of them is (i) strictly passive and (ii) has finite gain, then e₁, y₁, and y₂ are in X whenever u₁ ∈ X.

Proof: We prove the theorem assuming that H₁ is passive and H₂ is strictly passive with finite gain. The opposite case (i.e., H₂ is passive and H₁ is strictly passive with finite gain) is entirely similar. Proceeding as in Theorem 8.4 (equation (8.20)), we obtain

    ‖y₁T‖_X ≤ δ⁻¹‖u₁T‖_X    (8.21)

so that y₁ ∈ X whenever u₁ ∈ X. Also y₂ = H₂y₁, and since H₂ has finite gain, we obtain

    ‖y₂T‖_X ≤ γ(H₂)‖y₁T‖_X ≤ γ(H₂)δ⁻¹‖u₁T‖_X.

Thus, y₂ ∈ X whenever u₁ ∈ X. Finally, from equation (8.18) we have

    ‖e₁T‖_X ≤ ‖u₁T‖_X + ‖(H₂y₁)_T‖_X ≤ ‖u₁T‖_X + γ(H₂)‖y₁T‖_X

which, taking account of (8.21), implies that e₁ ∈ X whenever u₁ ∈ X.

Remarks: Theorem 8.5 is general enough for most purposes. For completeness, we include the following theorem, which shows that the result of Theorem 8.5 is still valid if the feedback system is excited by two external inputs (Figure 8.6).

Figure 8.6: The feedback system S.

Theorem 8.6 Let H₁, H₂ : X_e → X_e and consider the feedback system of equations

    e₁ = u₁ - H₂e₂    (8.22)
    e₂ = u₂ + H₁e₁.    (8.23)

Under these conditions, if both systems are passive and one of them is (i) strictly passive and (ii) has finite gain, then e₁, e₂, y₁, and y₂ are in X whenever u₁, u₂ ∈ X.

Proof: Omitted.
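A closed-loop simulation illustrates Theorem 8.4; the subsystems below are illustrative choices, not examples from the text. Take H₁ to be an integrator (passive, since (e, H₁e)_T = ½(∫₀^T e)² ≥ 0) and H₂ a static gain k (strictly passive with δ = k, finite gain). The theorem then predicts ‖y₁‖ ≤ (1/δ)‖u₁‖ in the L₂ norm:

```python
import numpy as np

# Feedback loop of Theorem 8.4 (illustrative subsystems): forward path H1 = integrator
# (passive), feedback path H2 = static gain k (strictly passive, delta = k).
# Predicted closed-loop bound: ||y1||_L2 <= (1/k) ||u1||_L2.

k = 2.0                                   # delta = k for the static feedback gain
dt, T = 1e-4, 20.0
t = np.arange(0.0, T, dt)
u1 = np.exp(-t)                           # finite-energy input
y1 = np.zeros_like(t)
for n in range(len(t) - 1):
    e1 = u1[n] - k * y1[n]                # e1 = u1 - H2 y1
    y1[n + 1] = y1[n] + dt * e1           # y1 = H1 e1 (integrator)

norm_y = np.sqrt(np.sum(y1 ** 2) * dt)
norm_u = np.sqrt(np.sum(u1 ** 2) * dt)
assert norm_y <= norm_u / k + 1e-6        # the bound of Theorem 8.4 holds
```

For this loop the bound can also be checked by hand: with u₁ = e^(-t) and k = 2 the output is y₁ = e^(-t) - e^(-2t), whose L₂ norm is comfortably below ‖u₁‖/k.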
8.5 Passivity of Linear Time-Invariant Systems

In this section, we examine in some detail the implications of passivity and strict passivity in the context of linear time-invariant systems. Throughout this section, we will restrict attention to the space L₂ with the natural inner product.

Theorem 8.7 Consider a linear time-invariant system H : L₂e → L₂e defined by Hx = h * x, where h ∈ A, and H(jω) is the Fourier transform of h(t). We have:

(i) H is passive if and only if ℜe[H(jω)] ≥ 0 ∀ω ∈ ℝ.

(ii) H is strictly passive if and only if there exists δ > 0 such that ℜe[H(jω)] ≥ δ ∀ω ∈ ℝ.

Proof: The elements of A have a Laplace transform that is free of poles in the closed right half plane, so points of the form s = jω belong to the region of convergence of H(s). Thus h ∈ A implies that H is causal and stable, and, according to Theorem 8.1, H is passive (strictly passive) if and only if it is positive (strictly positive), and we can drop all truncations in the passivity definition. We also recall two properties of the Fourier transform:

(a) If f is real-valued, then the real and imaginary parts of F(jω), denoted ℜe[F(jω)] and ℑm[F(jω)], are even and odd functions of ω, respectively. In other words,

    ℜe[F(-jω)] = ℜe[F(jω)],    ℑm[F(-jω)] = -ℑm[F(jω)].

(b) (Parseval's relation)

    ∫_{-∞}^{∞} f(t)g(t) dt = (1/2π) ∫_{-∞}^{∞} F(jω) G(jω)* dω

where G(jω)* represents the complex conjugate of G(jω).

With this in mind, for x ∈ L₂ we have

    (x, Hx) = (x, h * x) = ∫_{-∞}^{∞} x(t)[h(t) * x(t)] dt
            = (1/2π) ∫_{-∞}^{∞} X(jω)* [H(jω) X(jω)] dω
            = (1/2π) ∫_{-∞}^{∞} ℜe[H(jω)] |X(jω)|² dω + (j/2π) ∫_{-∞}^{∞} ℑm[H(jω)] |X(jω)|² dω

and since ℑm[H(jω)] is an odd function of ω, the second integral is zero. We have

    (x, Hx) = (1/2π) ∫_{-∞}^{∞} ℜe[H(jω)] |X(jω)|² dω

and noticing that

    (x, x) = (1/2π) ∫_{-∞}^{∞} |X(jω)|² dω

we have that

    (x, Hx) ≥ inf_ω ℜe[H(jω)] · (x, x)

from where the sufficiency of conditions (i) and (ii) follows immediately. To prove necessity, assume that ℜe[H(jω)] < 0 at some frequency ω = ω*. By the continuity of the Fourier transform as a function of ω, it must be true that ℜe[H(jω)] < 0 for all ω in the set {ω : |ω - ω*| < ε}, for some ε > 0. We can now construct x(·) with Fourier transform X(jω) such that

    |X(jω)| ≥ M    ∀ω : |ω - ω*| < ε
    |X(jω)| ≤ m    elsewhere.

It follows that X(jω) has its energy concentrated in the frequency interval where ℜe[H(jω)] is negative, and thus (x, Hx) < 0 for an appropriate choice of M and m. This completes the proof.

Theorem 8.7 was stated and proved for single-input-single-output systems. For completeness, we state the extension of this result to multi-input-multi-output systems. The proof follows the same lines and is omitted.

Theorem 8.8 Consider a multi-input-multi-output linear time-invariant system H : L₂e → L₂e defined by Hx = h * x, where h ∈ A. We have:

(i) H is passive if and only if λ_min[H(jω) + H(jω)*] ≥ 0 ∀ω ∈ ℝ.

(ii) H is strictly passive if and only if there exists δ > 0 such that λ_min[H(jω) + H(jω)*] ≥ δ ∀ω ∈ ℝ.
It is important to notice that Theorem 8.7 was proved for systems whose impulse response is in the algebra A. In particular, for finite-dimensional systems, this algebra consists of systems with all of their poles in the open left half of the complex plane. Thus, Theorem 8.7 says nothing about whether a system with transfer function

    H(s) = as / (s² + ω₀²),    a > 0, ω₀ > 0    (8.24)

is passive. Systems with a transfer function of the form (8.24) are oscillatory: if excited, the output oscillates without damping with a frequency ω = ω₀, regardless of the particular values of a and ω₀. Our next theorem shows that the class of systems with a transfer function of the form (8.24) is indeed passive.

Theorem 8.9 Consider the system H : L₂e → L₂e defined by its transfer function

    H(s) = as / (s² + ω₀²),    a > 0, ω₀ > 0.

Under these conditions, H is passive.

Proof: By Theorem 8.3, H is passive if and only if the gain of S = (H - 1)(1 + H)⁻¹ is at most 1, but for the given H(s)

    S(s) = -(s² - as + ω₀²) / (s² + as + ω₀²)

and thus

    S(jω) = -[(ω₀² - ω²) - jaω] / [(ω₀² - ω²) + jaω]

which has the form (ā - jb̄)/(ā + jb̄), up to sign. Thus |S(jω)| = 1 ∀ω, which implies that ‖S‖ = 1, and the theorem is proved.
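The all-pass identity used in the proof of Theorem 8.9 can be verified numerically on a frequency grid (the values of a and ω₀ below are arbitrary illustrative choices):

```python
import numpy as np

# Check of the proof of Theorem 8.9: for H(s) = a*s/(s^2 + w0^2), the operator
# S = (H - I)(I + H)^(-1) has transfer function -(s^2 - a*s + w0^2)/(s^2 + a*s + w0^2),
# which is all-pass: |S(jw)| = 1 at every frequency.  (a, w0 chosen arbitrarily.)

a, w0 = 0.7, 3.0
w = np.linspace(0.0, 100.0, 100001)
s = 1j * w
S = -(s ** 2 - a * s + w0 ** 2) / (s ** 2 + a * s + w0 ** 2)

assert np.allclose(np.abs(S), 1.0, atol=1e-9)   # numerator and denominator are conjugate
```

The magnitudes agree to machine precision because the numerator and denominator of S(jω) are complex conjugates of each other (up to sign) at every real frequency.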
form is the building block of an interesting class of systems known as flexible structures.6
Strictly Positive Real Rational Functions
According to the results in the previous section. lies somewhere between passivity and strict passivity. Consider.
Remarks: Definition 8.
8.e) is PR. where p(. working in the space £2i a system with transfer function as in (8. then only controllers with relative degree zero can qualify as possible candidates. rarely satisfied by physical systems. a (causal and stable) LTI system H is strictly passive if and only if H(3w) > b > 0 Vw E R. frequencydomain conditions for SPRness are usually preferred. It will be shown later that the feedback combination
of a passive system with an SPR one. which. It follows from Theorem 8. Indeed.5 is rather difficult to use since it requires checking the real part of H(s) for all possible values of s in the closed right half plane. and
E Pm.
.) E P". Then H(s) is said to be positive real (PR). severely limits the applicability of the passivity theorem. Fortunately. no strictly proper system can satisfy this
condition. for example.6. We now state these conditions as an alternative definition for SPR rational functions.
Definition 8.6 Consider a rational function H(s) = p(s)/q(s), where p(·) ∈ P^m and q(·) ∈ P^n. Then H(s) is said to be in the class Q if

(i) q(s) is a Hurwitz polynomial (i.e., the roots of q(s) are in the open left half plane), and

(ii) Re[H(jω)] > 0,  ∀ω ∈ [0, ∞).

Definition 8.7 H(s) is said to be weak SPR if it is in the class Q and the degrees of the numerator and denominator polynomials differ by at most 1. H(s) is said to be SPR if it is weak SPR and, in addition, one of the following conditions is satisfied:

(i) n = m, or

(ii) n = m + 1, that is, H(s) is strictly proper, and

    lim_{ω→∞} ω² Re[H(jω)] > 0.

Remarks: It is important to notice the difference among the several concepts introduced above. Clearly, if H(s) is SPR (or even weak SPR), then H(s) ∈ Q. The converse is, however, not true. For example, the function H₁(s) = (s + 1)⁻¹ + s³ is in the class Q, but it is not SPR according to Definition 8.7; in fact, it is not even positive real. The necessity of including condition (ii) in Definition 8.7 was pointed out by Taylor in Reference [79]. The importance of this condition will become clearer soon.
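The distinction drawn in the remark above is easy to check numerically. The following sketch (plain NumPy; the variable names are ours, not the book's) samples Re[H₁(jω)] for H₁(s) = (s + 1)⁻¹ + s³ = (s⁴ + s³ + 1)/(s + 1) to confirm the two class-Q conditions of Definition 8.6, and then compares polynomial degrees to show that H₁ fails the weak SPR degree condition of Definition 8.7:

```python
import numpy as np

# H1(s) = 1/(s+1) + s^3 = (s^4 + s^3 + 1)/(s + 1)
num = np.array([1.0, 1.0, 0.0, 0.0, 1.0])   # s^4 + s^3 + 1
den = np.array([1.0, 1.0])                   # s + 1

# Condition (i) of Definition 8.6: the denominator must be Hurwitz.
hurwitz = np.all(np.roots(den).real < 0)

# Condition (ii): Re[H1(jw)] > 0 for all w >= 0 (sampled on a grid).
w = np.linspace(0.0, 100.0, 2000)
H = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
cond_ii = np.all(H.real > 0)

# Weak SPR additionally requires |deg(num) - deg(den)| <= 1.
deg_gap = abs((len(num) - 1) - (len(den) - 1))

print(hurwitz, cond_ii, deg_gap)   # True True 3: in class Q, not weak SPR
```

Here Re[H₁(jω)] = 1/(1 + ω²) > 0 because (jω)³ is purely imaginary, so both class-Q conditions hold, while the degree gap of 3 rules out weak SPRness.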
We now state an important result, known as the Kalman-Yakubovich lemma.

Lemma 8.1 Consider a system of the form

    ẋ = Ax + Bu,    x ∈ Rⁿ, u ∈ Rᵐ
    y = Cx + Du

and assume that (i) the eigenvalues of A lie in the open left half of the complex plane, (ii) (A, B) is controllable, and (iii) (C, A) is observable. Then H(s) = C(sI − A)⁻¹B + D is SPR if and only if there exist a symmetric positive definite matrix P ∈ Rⁿˣⁿ, matrices Q and W, and ε > 0 sufficiently small such that

    PA + AᵀP = −QQᵀ − εP        (8.26)
    PB + QW = Cᵀ                 (8.27)
    WᵀW = D + Dᵀ                 (8.28)

Proof: The proof is available in many references and is omitted. See, for example, Narendra and Annaswamy [56], Vidyasagar [88], or Anderson [1].

Remarks: In the special case of single-input-single-output systems of the form

    ẋ = Ax + Bu
    y = Cx

the conditions of Lemma 8.1 can be simplified as follows: H(s) = C(sI − A)⁻¹B is SPR if and only if there exist symmetric positive definite matrices P and L, a real matrix Q, and μ > 0 sufficiently small such that

    PA + AᵀP = −QQᵀ − μL        (8.29)
    PB = Cᵀ                      (8.30)

Using these results, the conditions of the passivity theorem can be relaxed somewhat.

Theorem 8.10 Consider the feedback interconnection of Figure 8.2, and assume that

(i) H₁ is linear time-invariant and SPR;
(ii) H₂ is passive (and possibly nonlinear).

Under these assumptions, the feedback interconnection is input-output stable.

Proof: The proof consists of employing a type I loop transformation with K = εI and showing that if ε > 0 is small enough, then, after the loop transformation, the two resulting subsystems satisfy the following conditions:

    H₁'(s) = H₁(s)/[1 − εH₁(s)] is passive,
    H₂' = H₂ + εI is strictly passive.

Thus, stability follows by the passivity theorem. The details are in the Appendix.

Remarks: Theorem 8.10 is very useful. According to this result, when controlling a passive plant, a linear time-invariant SPR controller can be used instead of a strictly passive one. The significance of the result stems from the fact that SPR functions can be strictly proper, while strictly passive transfer functions cannot. In Chapter 9 we will state and prove a result that can be considered as the nonlinear counterpart of this theorem; see Theorem 9.5.

The following example shows that the loop transformation approach used in the proof of Theorem 8.10 (see the Appendix) will fail if the linear system is weak SPR but not SPR, thus emphasizing the importance of condition (ii) in Definition 8.7.
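The SISO conditions (8.29)-(8.30) can be verified for a concrete transfer function. The sketch below uses H(s) = (2s + 1)/(s² + s + 1) in controllable canonical form (our example, not the book's): condition (8.30) fixes one column of the symmetric matrix P, the remaining free entry is chosen to make P positive definite, and PA + AᵀP is then checked to be negative definite:

```python
import numpy as np

# Controllable canonical realization of H(s) = (2s + 1)/(s^2 + s + 1)
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 2.0]])

# PB = C^T forces the second column of P to be [1, 2]^T; picking the
# free entry p11 = 3 keeps the symmetric P positive definite.
P = np.array([[3.0, 1.0], [1.0, 2.0]])

M = P @ A + A.T @ P                        # left-hand side of (8.29)
ok_830 = np.allclose(P @ B, C.T)           # condition (8.30)
eigP = np.linalg.eigvalsh(P)
eigM = np.linalg.eigvalsh(M)
print(ok_830, eigP.min() > 0, eigM.max() < 0)
```

With this choice PA + AᵀP = −2I, so (8.29) holds with QQᵀ + μL = 2I, consistent with H(s) being SPR (Re[H(jω)] = (1 + ω²)/|q(jω)|² > 0 and ω² Re[H(jω)] → 1).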
Example 8.4 Consider the linear time-invariant system

    H(s) = (s + c) / [(s + a)(s + b)],    a, b, c > 0.

We first investigate the SPR condition on the system H(s). We have

    H(jω) = (c + jω) / [(ab − ω²) + jω(a + b)]

    Re[H(jω)] = [c(ab − ω²) + ω²(a + b)] / [(ab − ω²)² + ω²(a + b)²]

    lim_{ω→∞} ω² Re[H(jω)] = a + b − c.

From here we conclude that

(i) H(s) is SPR if and only if a + b > c.
(ii) H(s) is weak SPR if a + b = c.
(iii) H(s) is not SPR if a + b < c.

We now consider the system H'(s) obtained after the loop transformation, and let H'(s) = H(s)/[1 − εH(s)]. We need to see whether H'(s) is passive (i.e., Re[H'(jω)] ≥ 0). To analyze this condition, we proceed as follows:

    Re[H'(jω)] = [(abc − εc²) + ω²(a + b − c − ε)] / [(ab − εc − ω²)² + ω²(a + b − ε)²]

Thus Re[H'(jω)] ≥ 0 for all ω if and only if

    abc − εc² ≥ 0            (8.31)
    a + b − c − ε ≥ 0        (8.32)

If a + b > c, we can always find an ε > 0 that satisfies (8.31)-(8.32). However, if H(s) is weak SPR, then a + b = c, and no such ε > 0 exists.

8.7 Exercises

(8.1) Prove Theorem 8.5 in the more general case when β ≠ 0 in Definitions 8.2 and 8.3.

(8.2) Prove Theorem 8.4 in the more general case when β ≠ 0 in Definitions 8.2 and 8.3.
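The limit a + b − c that separates the three cases of Example 8.4 can be confirmed numerically. The sketch below (our code, using the closed-form expression for Re[H(jω)] derived above) evaluates ω² Re[H(jω)] at large ω for an SPR choice of parameters and for a weak SPR choice:

```python
import numpy as np

def re_H(w, a, b, c):
    # Re[H(jw)] for H(s) = (s + c)/((s + a)(s + b))
    num = c * (a * b - w**2) + w**2 * (a + b)
    den = (a * b - w**2) ** 2 + w**2 * (a + b) ** 2
    return num / den

w = np.logspace(-2, 4, 500)

# SPR case: a + b > c, so w^2 * Re[H(jw)] -> a + b - c = 1 > 0
spr_limit = (w**2 * re_H(w, 1.0, 2.0, 2.0))[-1]

# Weak SPR case: a + b = c, so the limit drops to zero and no e > 0
# can satisfy (8.32): a + b - c - e >= 0.
weak_limit = (w**2 * re_H(w, 1.0, 1.0, 2.0))[-1]

print(round(spr_limit, 2), round(weak_limit, 5))
```

In the weak SPR case the numerator of Re[H(jω)] collapses to the constant 2ab, so ω² Re[H(jω)] decays like 1/ω², visible in the second printed value.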
Notes and References

This chapter introduced the concept of passivity in its purest (input-output) form. Our presentation closely follows Reference [21]. Early results on stability of passive systems can be found in Zames [98], which is an excellent source on the input-output theory of systems in general, and passivity results in particular. Strictly positive real transfer functions and the Kalman-Yakubovich lemma play a very important role in several areas of system theory, including stability of feedback systems, adaptive control, and even the synthesis of passive networks. The proof of the Kalman-Yakubovich lemma can be found in several works. See References [35] and [78] for a detailed treatment of frequency-domain properties of SPR functions. Reference [1], in particular, contains a very thorough coverage of the Kalman-Yakubovich lemma and related topics. Example 8.4 is based on unpublished work by the author in collaboration with Dr. C. Damaren (University of Toronto).
Chapter 9

Dissipativity

In Chapter 8 we introduced the concept of a passive system (Figure 9.1). This concept was motivated by circuit theory. Specifically, given an inner product space X, a system H : Xe → Xe is said to be passive if ⟨u, Hu⟩_T ≥ 0, where u and y = Hu are, respectively, the voltage v(t) and current i(t) across a network, or vice versa. In that case (assuming X = L2),

    ⟨u, Hu⟩_T = ⟨v, i⟩_T = ∫₀ᵀ v(t) i(t) dt

which represents the energy supplied to the network at time T or, equivalently, the energy absorbed by the network during the same time interval. For more general classes of dynamical systems, "passivity" is a somewhat restrictive property. Many systems fail to be passive simply because ⟨u, y⟩ may not constitute a suitable candidate for an energy function.

In this chapter we pursue these ideas a bit further: we postulate the existence of an input energy function and introduce the concept of dissipative dynamical system in terms of a nonnegativity condition on this function. We will also depart from the classical input-output theory of systems and consider state space realizations.

Figure 9.1: A passive system.
The use of an internal description will bring more freedom in dealing with initial conditions on the differential equations, and will also allow us to study connections between input-output stability and stability in the sense of Lyapunov.

9.1 Dissipative Systems

Throughout most of this chapter we will assume that the dynamical systems to be studied are given by a state space realization of the form

    ψ: ẋ = f(x, u),    x ∈ X, u ∈ U
       y = h(x, u),    y ∈ Y        (9.1)

where

- X: the set X ⊂ Rⁿ represents the state space.
- U: the input space, which consists of functions u ∈ U : Ω ⊂ R → Rᵐ; that is, the functions in U map a subset of the real numbers into Rᵐ.
- Y: the output space, which consists of functions y ∈ Y : Ω ⊂ R → Rᵖ.

Associated with this system we have defined a function w(t) = w(u(t), y(t)) : U × Y → R, called the supply rate, which is a locally integrable function of the input and output u and y of the system ψ; that is,

    ∫_{t0}^{t1} |w(t)| dt < ∞    for all t0 ≤ t1.

Definition 9.1 A dynamical system ψ is said to be dissipative with respect to the supply rate w(t) if there exists a function φ : X → R⁺, called the storage function, such that for all x0 ∈ X and for all inputs u ∈ U we have

    φ(x1) ≤ φ(x0) + ∫_{t0}^{t1} w(t) dt.        (9.2)
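Definition 9.1 can be exercised on a simple example. The sketch below (our code; the scalar system and the Euler discretization are illustrative assumptions) simulates ẋ = −x + u with output y = x, storage function φ(x) = x²/2, and the passivity supply rate w = uy, and checks the dissipation inequality (9.2) along the simulated trajectory:

```python
import numpy as np

# Scalar system x' = -x + u, y = x, storage phi(x) = x^2/2, supply w = u*y.
dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)
u = np.sin(3.0 * t)                # arbitrary test input

x = 1.0                            # initial state x0
phi0 = 0.5 * x**2
supplied = 0.0
for uk in u:
    y = x
    supplied += uk * y * dt        # running integral of the supply rate
    x += dt * (-x + uk)            # forward-Euler step
phi1 = 0.5 * x**2

# Dissipation inequality (9.2): phi(x1) <= phi(x0) + integral of w
print(phi1 <= phi0 + supplied)
```

The inequality holds with margin here because φ̇ = −x² + xu ≤ uy: the −x² term is the internally dissipated energy.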
Inequality (9.2) is called the dissipation inequality; here x0 and x1 denote the states at times t0 and t1, respectively. The several terms in (9.2) represent the following:

- φ(x(t*)) represents the "energy" stored by the system ψ at time t*.
- ∫_{t0}^{t1} w(t) dt represents the energy externally supplied to the system ψ during the interval [t0, t1].

Thus, according to (9.2), the stored energy φ(x1) at time t1 > t0 is, at most, equal to the sum of the energy φ(x0) initially stored at time t0, plus the total energy externally supplied during the interval [t0, t1]. In this way there is no internal "creation" of energy.

It is important to notice that if a motion is such that it takes the system ψ from a particular state to the same terminal state along a certain trajectory in the state space, then we have (since x1 = x0)

    φ(x0) ≤ φ(x0) + ∮ w(t) dt    ⟹    ∮ w(t) dt ≥ 0        (9.3)

where ∮ indicates a closed trajectory with identical initial and final states. Inequality (9.3) states that, in order to complete a closed trajectory, a dissipative system requires external energy.

9.2 Differentiable Storage Functions

In general, the storage function φ of a dissipative system, defined in Definition 9.1, need not be differentiable. Throughout the rest of this chapter, however, we will see that many important results can be obtained by strengthening the conditions imposed on φ. First we notice that if φ is continuously differentiable, then dividing (9.2) by (t1 − t0), we can write

    [φ(x1) − φ(x0)] / (t1 − t0) ≤ [1/(t1 − t0)] ∫_{t0}^{t1} w(t) dt

but

    lim_{t1→t0} [φ(x1) − φ(x0)] / (t1 − t0) = dφ(x)/dt = (∂φ/∂x) f(x, u)

and thus (9.2) is satisfied if and only if

    (∂φ/∂x) f(x, u) ≤ w(t) = w(u, y)    ∀x ∈ Rⁿ.        (9.5)

Inequality (9.5) is called the differential dissipation inequality, and constitutes perhaps the most widely used form of the dissipation inequality. Assuming that φ is differentiable, we can restate Definition 9.1 as follows.

Definition 9.2 (Dissipativity restated) A dynamical system ψ is said to be dissipative with respect to the supply rate w(t) = w(u, y) if there exists a continuously differentiable storage function φ : X → R⁺ such that

    (∂φ/∂x) f(x, u) ≤ w(u, y)    ∀x ∈ Rⁿ, u ∈ Rᵐ, and y = h(x, u).

9.2.1 Back to Input-to-State Stability

In Chapter 7 we studied the important notion of input-to-state stability. We can now review this concept as a special case of a dissipative system. In the following lemma we assume that the storage function corresponding to the supply rate w is differentiable.

Lemma 9.1 A system ψ is input-to-state stable if and only if it is dissipative with respect to the supply rate w(t) = −α3(‖x‖) + σ(‖u‖), where α3 and σ are class K∞ functions; that is, there exists a continuously differentiable function φ : X → R⁺ that satisfies the following properties:

(i) There exist class K∞ functions α1 and α2 such that

    α1(‖x‖) ≤ φ(x) ≤ α2(‖x‖)    ∀x ∈ Rⁿ.

(ii) (∂φ/∂x) f(x, u) ≤ −α3(‖x‖) + σ(‖u‖).

In other words, property (i) simply states that φ(·) is positive definite and radially unbounded, while property (ii) is the differential dissipation inequality for this supply rate.

Proof: The proof is an immediate consequence of Definition 9.2 and the characterization of input-to-state stability given in Chapter 7.
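Lemma 9.1 can be illustrated with the same scalar system used earlier (our example, not the book's). For ẋ = −x + u with φ(x) = x²/2, Young's inequality xu ≤ x²/2 + u²/2 gives φ̇ = −x² + xu ≤ −x²/2 + u²/2, which is exactly the ISS supply rate with α3(s) = σ(s) = s²/2. The sketch checks the pointwise inequality of property (ii) over a grid:

```python
import numpy as np

# For x' = -x + u with phi(x) = x^2/2:
#   phi_dot = -x^2 + x*u <= -x^2/2 + u^2/2
# i.e. the ISS supply rate of Lemma 9.1 with alpha3(s) = sigma(s) = s^2/2.
x = np.linspace(-10.0, 10.0, 401)
u = np.linspace(-10.0, 10.0, 401)
X, U = np.meshgrid(x, u)

phi_dot = -X**2 + X * U
bound = -0.5 * X**2 + 0.5 * U**2

ok = np.all(phi_dot <= bound + 1e-12)
print(ok)   # True: the gap is (X - U)^2 / 2 >= 0
```

The inequality is exact, not just sampled: the difference bound − φ̇ equals (x − u)²/2, which is nonnegative everywhere.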
9.3 QSR Dissipativity

So far we have paid little attention to the supply rate w(t). There are, however, several interesting candidates for this function. In this section we study a particularly important supply rate and some of its implications. We will see that concepts such as passivity and small gain are special cases of this supply rate.

Definition 9.3 Given constant matrices Q ∈ Rᵖˣᵖ, S ∈ Rᵖˣᵐ, and R ∈ Rᵐˣᵐ, with Q and R symmetric, we define the supply rate w(t) = w(u, y) as follows:

    w(t) = yᵀQy + 2yᵀSu + uᵀRu        (9.6)

that is, w is the quadratic form associated with the block matrix [Q, S; Sᵀ, R] evaluated at [y; u]. It is immediately obvious that

    ∫₀ᵀ w(t) dt = ⟨y, Qy⟩_T + 2⟨y, Su⟩_T + ⟨u, Ru⟩_T        (9.7)

moreover, using time invariance, w(t) defined in (9.6) is such that

    ∫_{t0}^{t0+T} w(t) dt = ∫₀ᵀ w(t) dt        (9.8)

along trajectories determined by the initial condition x0. Notice that, with this supply rate, the state space realization of the system ψ is no longer essential or necessary: QSR dissipativity can be interpreted as an input-output property. Instead of pursuing this idea, however, we will continue to assume that the input-output relationship is obtained from the state space realization as defined in (9.1). As we will see, doing so will allow us to study connections between certain input-output properties and stability in the sense of Lyapunov. We can now state the following definition.

Definition 9.4 The system ψ is said to be QSR-dissipative if there exists a storage function φ : X → R⁺ such that, for the initial state x0 = x(0) ∈ X and for all u ∈ U and T ≥ 0,

    ∫₀ᵀ w(t) dt = ⟨y, Qy⟩_T + 2⟨y, Su⟩_T + ⟨u, Ru⟩_T ≥ φ(x1) − φ(x0).        (9.10)

Definition 9.4 is clearly a special case of Definition 9.1.
We now single out several special cases of interest.

1. Passive systems: The system ψ is passive if and only if it is dissipative with respect to Q = 0, R = 0, and S = ½I; equivalently, ψ is passive if and only if it is (0, ½I, 0)-dissipative. The equivalence is immediate, since in this case (9.10) implies that

    ⟨y, u⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)

(since φ(x) ≥ 0 ∀x, by assumption), which is identical to the passivity definition of Chapter 8 with β = −φ(x0). This formulation is also important in that it gives β a precise interpretation: −β is the energy stored at time t = 0, determined by the initial condition x0.

2. Strictly passive systems: The system ψ is strictly passive if and only if it is dissipative with respect to Q = 0, R = −δI, and S = ½I. To see this, we substitute these values in (9.10) and obtain

    ⟨y, u⟩_T − δ⟨u, u⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)

or, defining β = −φ(x0),

    ∫₀ᵀ uᵀy dt = ⟨u, y⟩_T ≥ δ‖u_T‖² + β.

3. Finite-gain-stable systems: The system ψ is finite-gain-stable if and only if it is dissipative with respect to Q = −½I, R = ½γ²I, and S = 0. To see this, we substitute these values in (9.10) and obtain

    ½γ²⟨u, u⟩_T − ½⟨y, y⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)

or

    ‖y_T‖²_{L2} ≤ γ²‖u_T‖²_{L2} + 2φ(x0)

and since for a, b ≥ 0 we have √(a² + b²) ≤ a + b, then, defining β = √(2φ(x0)), we conclude that

    ‖y_T‖_{L2} ≤ γ‖u_T‖_{L2} + β.

We have already encountered passive, strictly passive, and finite-gain-stable systems in previous chapters. Other cases of interest, which appear frequently in the literature, are described in paragraphs 4 and 5.

4. Strictly output-passive systems: The system ψ is said to be strictly output-passive if it is dissipative with respect to Q = −εI, R = 0, and S = ½I. Substituting these values in (9.10), we obtain

    ⟨y, u⟩_T − ε⟨y, y⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)

or

    ∫₀ᵀ uᵀy dt = ⟨u, y⟩_T ≥ ε⟨y, y⟩_T + β.
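The specializations above reduce to checking that the general quadratic form (9.6) collapses to the familiar inner-product expressions. The sketch below (our helper function, not the book's notation) evaluates the QSR supply rate for random u and y and confirms the passive and strictly passive cases pointwise:

```python
import numpy as np

def supply(Q, S, R, y, u):
    # QSR supply rate w = y'Qy + 2y'Su + u'Ru of Definition 9.3
    return y @ Q @ y + 2.0 * y @ S @ u + u @ R @ u

rng = np.random.default_rng(0)
m = 3
u = rng.standard_normal(m)
y = rng.standard_normal(m)
I = np.eye(m)

# Passivity:        Q = 0, S = I/2, R = 0         ->  w = u'y
w_pass = supply(0 * I, 0.5 * I, 0 * I, y, u)
# Strict passivity: Q = 0, S = I/2, R = -delta*I  ->  w = u'y - delta*u'u
delta = 0.3
w_sp = supply(0 * I, 0.5 * I, -delta * I, y, u)

print(np.isclose(w_pass, u @ y),
      np.isclose(w_sp, u @ y - delta * (u @ u)))
```

Integrating these pointwise identities over [0, T] recovers exactly the inner-product inequalities derived in cases 1 and 2 above.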
5. Very strictly passive systems: The system ψ is said to be very strictly passive if it is dissipative with respect to Q = −εI, R = −δI, and S = ½I. In this case, substituting these values in (9.10), we obtain

    ⟨y, u⟩_T − ε⟨y, y⟩_T − δ⟨u, u⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)

or

    ∫₀ᵀ uᵀy dt = ⟨u, y⟩_T ≥ δ⟨u, u⟩_T + ε⟨y, y⟩_T + β.

The following lemma states a useful result, which is a direct consequence of these definitions.

Lemma 9.2 If ψ is strictly output-passive, then it has a finite L2 gain.

Proof: The proof is left as an exercise (Exercise 9.2).

9.4 Examples

9.4.1 Mass-Spring System with Friction

Consider the mass-spring system moving on a horizontal surface, shown in Figure 9.2. Assuming for simplicity that the friction between the mass and the surface is negligible, we obtain the following equation of motion:

    mẍ + βẋ + kx = f

where m represents the mass, k is the spring constant, β parametrizes the viscous friction force associated with the spring, and f is an external force.

Figure 9.2: Mass-spring system.

Defining state variables x1 = x and x2 = ẋ (so that ẋ1 = x2), and assuming that the desired output variable is the velocity y = ẋ, we obtain the following state space realization:

    ẋ1 = x2
    ẋ2 = −(k/m)x1 − (β/m)x2 + (1/m)f
    y = x2

To study the dissipative properties of this system, we proceed to find the total energy stored in the system at any given time. We have

    E = ½kx1² + ½mx2²

where ½mx2² represents the kinetic energy of the mass and ½kx1² is the energy stored by the spring. Since the energy is a positive quantity, we propose E as a "possible" storage function; thus, we define

    φ = E = ½kx1² + ½mx2².

Since φ is continuously differentiable with respect to x1 and x2, we can compute the time derivative of φ along the trajectories of ψ:

    φ̇ = (∂φ/∂x) ẋ = [kx1, mx2] [x2; −(k/m)x1 − (β/m)x2 + (1/m)f]
       = −βx2² + x2 f
       = −βy² + yf.

Since E(t) ≥ 0 for all t, it follows that the mass-spring system with output y = ẋ = x2 is dissipative with respect to the supply rate

    w(t) = yf − βy².

This supply rate corresponds to Q = −β, S = ½, and R = 0, from where we conclude that the mass-spring system ψ is strictly output-passive.
9.4.2 Mass-Spring System without Friction

Consider again the mass-spring system of the previous example, but assume that β = 0. In this case, the state space realization reduces to

    ẋ1 = x2
    ẋ2 = −(k/m)x1 + (1/m)f
    y = x2

Proceeding as in the previous example, we define

    φ = E = ½kx1² + ½mx2².

Differentiating φ along the trajectories of ψ, we obtain

    φ̇ = x2 f = yf.

Since once again E(t) ≥ 0 for all t, we conclude that the mass-spring system with output y = ẋ = x2 is dissipative with respect to the supply rate

    w(t) = yf

which corresponds to Q = 0, S = ½, and R = 0. This implies that the mass-spring system is passive.

9.5 Available Storage

Having defined dissipative systems, along with supply rates and storage functions, we now turn our attention to a perhaps more abstract question. Given a dissipative dynamical system, we ask: What is the maximum amount of energy that can be extracted from it, at any given time? Willems [91] termed this quantity the "available storage." This quantity plays an important conceptual role in the theory of dissipative systems and appears in the proofs of certain theorems. For completeness, we now introduce this concept. This section is not essential and can be skipped in a first reading of this chapter.

Definition 9.5 The available storage φa of a dynamical system ψ with supply rate w is defined by

    φa(x) = sup_{x(0)=x, u(·), T≥0} { −∫₀ᵀ w(u(t), y(t)) dt }        (9.12)

where the supremum is taken over all T ≥ 0 and all admissible inputs u(·), starting from the initial state x at t = 0.
As defined, φa(x) denotes the maximum "energy" that can be extracted from ψ starting from the initial state x at t = 0. Since the supremum in (9.12) is taken over a set that contains the zero element (obtained by setting T = 0), we have that φa ≥ 0.

The following theorem is important in that it provides a (theoretical) way of checking whether or not a system is dissipative.

Theorem 9.1 A dynamical system ψ is dissipative if and only if for all x ∈ X the available storage φa(x) is finite. Moreover, in this case, for any storage function φ we have that 0 ≤ φa ≤ φ, and thus φa is itself a possible storage function.

Proof:

Sufficiency: Assume first that φa is finite. Since the set over which the supremum in (9.12) is taken contains the zero element (obtained setting T = 0), we have that φa ≥ 0. To show that ψ is dissipative, we show that φa satisfies the dissipation inequality

    φa(x1) ≤ φa(x0) + ∫₀ᵀ w(t) dt.

To see this, we compare φa(x0) and φa(x1), the "energies" that can be extracted from ψ starting at the states x0 and x1, respectively. When extracting energy from the system ψ with initial state x0, we can follow an "optimal" trajectory that maximizes the energy extracted, or we can first force the system to go from x0 to x1 following an arbitrary trajectory and then extract whatever energy is left in ψ with initial state x1. This second process is clearly nonoptimal. Thus, consider an arbitrary input u* : [0, T] → Rᵐ that takes the dynamical system ψ from the initial state x0 at t = 0 to the final state x1 at t = T. We have

    φa(x0) = sup_{u(·), T≥0} { −∫₀ᵀ w(u(t), y(t)) dt } ≥ −∫₀ᵀ w(u*, y) dt + φa(x1)

thus leading to the result.

Necessity: Assume now that ψ is dissipative. This means that there exists φ ≥ 0 such that for all u(·) we have

    φ(x0) + ∫₀ᵀ w(u, y) dt ≥ φ(x(T)) ≥ 0.

Thus

    φ(x0) ≥ sup_{u(·), T≥0} { −∫₀ᵀ w(u(t), y(t)) dt } = φa(x0)

and hence φa is finite (and bounded above by any storage function φ).
9.6 Algebraic Condition for Dissipativity

We now turn our attention to the issue of checking the dissipativity condition for a given system. The notion of available storage gave us a theoretical answer to this riddle in Theorem 9.1. That result, of course, is not practical in applications. Our next theorem provides a result that is in the same spirit as the Kalman-Yakubovich lemma studied in Chapter 8. Specifically, it shows that, under certain assumptions, dissipativeness can be characterized in terms of the coefficients of the state space realization of the system ψ. Moreover, it will be shown that in the special case of linear passive systems this characterization of dissipativity leads, in fact, to the Kalman-Yakubovich lemma. The benefit is a much more explicit characterization of the dissipative condition.

Throughout the rest of this section we will make the following assumptions:

1. We assume that ψ is of the form

    ẋ = f(x) + g(x)u
    y = h(x) + j(x)u        (9.13)

where x ∈ Rⁿ, u ∈ Rᵐ, and y ∈ Rᵖ; that is, the state and output equations in the state space realization are affine functions of the input u.

2. The state space of the system (9.13) is reachable from the origin. This means that, given any x1 ∈ Rⁿ and t = t1 ∈ R⁺, there exist a t0 ≤ t1 and an input u ∈ U such that the state can be driven from the origin at t = t0 to x = x1 at t = t1.

3. Whenever the system ψ is dissipative with respect to a supply rate of the form (9.6), the available storage φa(x) is a differentiable function of x; this is true in most practical cases.

These assumptions, particularly 1 and 3, bring, of course, some restrictions on the class of systems considered.

Theorem 9.2 The nonlinear system ψ given by (9.13) is QSR-dissipative (i.e., dissipative with supply rate given by (9.6)) if and only if there exist a differentiable function φ : Rⁿ → R and functions L : Rⁿ → R^q and W : Rⁿ → R^{q×m} satisfying
    φ(x) ≥ 0 ∀x ≠ 0,  φ(0) = 0        (9.14)

    (∂φ/∂x) f(x) = hᵀ(x)Q h(x) − Lᵀ(x)L(x)        (9.15)

    ½ gᵀ(x) (∂φ/∂x)ᵀ = Ŝᵀ(x) h(x) − Wᵀ(x)L(x)        (9.16)

    R̂(x) = Wᵀ(x)W(x)        (9.17)

for all x, where

    R̂ = R + jᵀ(x)S + Sᵀj(x) + jᵀ(x)Q j(x)        (9.18)
    Ŝ(x) = Q j(x) + S        (9.19)

Proof: To simplify the proof, we assume that j(x) = 0 in the state space realization (9.13); in the case of linear systems this assumption is equivalent to assuming that D = 0. With this assumption we have that R̂ = R and Ŝ = S.

We prove sufficiency. Assuming that φ, L, and W satisfy the conditions of the theorem, we have that

    w(u, y) = yᵀQy + uᵀRu + 2yᵀSu
            = hᵀQh + uᵀRu + 2hᵀSu

substituting (9.15),

            = (∂φ/∂x) f(x) + LᵀL + uᵀRu + 2hᵀSu

substituting (9.16) and (9.17),

            = (∂φ/∂x) f(x) + LᵀL + uᵀWᵀWu + 2uᵀ[½ gᵀ(x)(∂φ/∂x)ᵀ + WᵀL]
            = (∂φ/∂x)[f(x) + g(x)u] + (L + Wu)ᵀ(L + Wu)
            = φ̇ + (L + Wu)ᵀ(L + Wu)        (9.20)

The necessity part of the proof can be found in the Appendix.
Proof: A direct consequence of Theorem 9.e. g(x) = B.y)
(9. Setting u = 0. Notice that (9.
Passive systems: Now consider the passivity supply rate w(u.20) in the sufficiency part of the proof. Q = R = 0.6.O(xo)
f(L t
+ Wu)T (L + Wu) dt
and setting xo = 0 implies that
t w(t) dt > O(x(t)) > 0.90
8xf (x)
<0
(9.21) is identical to (9. such that
d = (L+Wu)T(L+Wu)+w(u. y) = uTy (i.
9.6).2.2) that j(x) = 0.6.
Corollary 9. In this case. and S = 1 in (9.1
Special Cases
We now consider several cases of special interest. ALGEBRAIC CONDITION FOR DISSIPATIVITY
235
J
w(t) dt = O(x(t)) .22)
ax.23)
Now assume that. Theorem 9.1 If the system is dissipative with respect to the supply rate (9. we define the storage function O(x) = xT Px. and assume as in the proof of Theorem (9. 0(0) = 0. equivalently
. then there exist a real function 0 satisfying ¢(x) > 0 Vx # 0.2 states that the nonlinear system 0 is passive if and only if
8x
f (x) = LT (x)L(x)
9T(aO)T
= h(x)
or.9 = hT(x). p = PT > 0. Guided by our knowledge of Lyapunov stability of linear systems.9. and h(x) = Cx. the system V is linear then f (x) = Ax. in addition.
(9.O(xo) +
2 O(x(t)) .6)). we have that
T TP 5xf(x)=x[A+PA]x
.
which implies that (9.22)-(9.23) are satisfied if and only if

    AᵀP + PA ≤ 0,    PB = Cᵀ        (9.25)

(up to a constant scaling of P). Therefore, for passive systems, Theorem 9.2 can be considered as a nonlinear version of the Kalman-Yakubovich lemma discussed in Chapter 8.

Strictly output-passive systems: Now consider the strict output passivity supply rate w(u, y) = uᵀy − εyᵀy (i.e., Q = −εI, R = 0, and S = ½I in (9.6)), and assume once again that j(x) = 0. In this case Theorem 9.2 states that the nonlinear system ψ is strictly output-passive if and only if

    (∂φ/∂x) f(x) = −ε hᵀ(x)h(x) − Lᵀ(x)L(x)        (9.24)
    gᵀ(x)(∂φ/∂x)ᵀ = h(x)

or, equivalently,

    (∂φ/∂x) f(x) ≤ −ε hᵀ(x)h(x).

Strictly passive systems: Finally, consider the strict passivity supply rate w(u, y) = uᵀy − δuᵀu (i.e., Q = 0, R = −δI, and S = ½I in (9.6)), and assume once again that j(x) = 0. In this case Theorem 9.2 states that the nonlinear system ψ is strictly passive if and only if

    (∂φ/∂x) f(x) = −Lᵀ(x)L(x)
    gᵀ(x)(∂φ/∂x)ᵀ = h(x) − 2WᵀL

with

    R̂ = R = WᵀW = −δI

which can never be satisfied, since WᵀW ≥ 0 and δ > 0. It then follows that no system of the form (9.13) with j = 0 can be strictly passive.
9.7 Stability of Dissipative Systems

Throughout this section we analyze the implications of dissipativity for stability in the sense of Lyapunov. We assume that the storage function φ : X → R⁺ is differentiable and satisfies the differential dissipation inequality (9.5).

In the following theorem we consider a dissipative system ψ with storage function φ, and assume that xe is an equilibrium point for the unforced system, that is, f(xe) = f(xe, 0) = 0.

Theorem 9.3 Let ψ be a dissipative dynamical system with respect to the (continuously differentiable) storage function φ : X → R⁺, which satisfies (9.5), and assume that the following conditions are satisfied:

(i) xe is a strictly local minimum for φ:

    φ(xe) < φ(x)    ∀x ≠ xe in a neighborhood of xe.

(ii) The supply rate w = w(u, y) is such that

    w(0, y) ≤ 0    ∀y.

Under these conditions, xe is a stable equilibrium point for the unforced system ẋ = f(x, 0).

Proof: Define the function V(x) = φ(x) − φ(xe). This function is continuously differentiable, and by condition (i) it is positive definite ∀x in a neighborhood of xe. Also, the time derivative of V along the trajectories of ψ is given by

    V̇(x) = (∂φ/∂x) f(x, u)

thus, setting u = 0, by (9.5) and condition (ii) we have that V̇(x) ≤ w(0, y) ≤ 0, and stability follows by the Lyapunov stability theorem.

Theorem 9.3 is important not only in that it implies the stability of dissipative systems (with an equilibrium point satisfying the conditions of the theorem), but also in that it suggests the use of the storage function φ as a means of constructing Lyapunov functions. Notice also that the general class of dissipative systems discussed in Theorem 9.3 includes QSR-dissipative systems as a special case.

Corollary 9.2 Under the conditions of Theorem 9.3, if, in addition, no solution of ẋ = f(x, 0) other than x(t) = xe satisfies w(0, y) = w(0, h(x, 0)) = 0, then xe is an asymptotically stable equilibrium point.

Proof: Under the present conditions, V̇(x) = 0 along a trajectory if and only if x(t) = xe, and asymptotic stability follows from LaSalle's theorem.
Much more explicit stability theorems can be derived for QSR-dissipative systems. In this sense, dissipativity provides a very important link between the input-output theory of systems and stability in the sense of Lyapunov. We now present one such theorem, due to Hill and Moylan [30].

Definition 9.6 ([30]) A state space realization of the form ψ is said to be zero-state detectable if, for any trajectory such that u ≡ 0 and y ≡ 0, we have that x(t) → 0.

Theorem 9.4 Let the system

    ẋ = f(x) + g(x)u
    y = h(x)

be dissipative with respect to the supply rate w(t) = yᵀQy + 2yᵀSu + uᵀRu, and zero-state detectable. Then the free system ẋ = f(x) is

(i) Lyapunov-stable if Q ≤ 0.
(ii) Asymptotically stable if Q < 0.

Proof: From Theorem 9.2 and Corollary 9.1 we have that if ψ is QSR-dissipative, then there exists φ ≥ 0 such that

    dφ/dt = −(L + Wu)ᵀ(L + Wu) + w(u, y)

and setting u = 0, we obtain

    dφ/dt = −LᵀL + hᵀ(x)Qh(x)

along the trajectories of ẋ = f(x). Thus, if Q ≤ 0, stability follows from the Lyapunov stability theorem. Assume now that Q < 0. In this case we have that

    dφ/dt = 0    ⟹    hᵀ(x)Qh(x) = 0    ⟹    y = h(x) = 0

and, by the zero-state detectability condition, u ≡ 0 and y ≡ 0 imply that x(t) → 0. Thus, no solution other than x(t) = 0 can remain in the set where dφ/dt = 0, and asymptotic stability follows from LaSalle's theorem.

The following corollary is then an immediate consequence of Theorem 9.4 and the special cases discussed in Section 9.3.

Corollary 9.3 Given a zero-state detectable state space realization

    ẋ = f(x) + g(x)u
    y = h(x) + j(x)u

we have:

(i) If ψ is passive, then the unforced system ẋ = f(x) is Lyapunov-stable.
(ii) If ψ is strictly passive, then ẋ = f(x) is asymptotically stable.
(iii) If ψ is finite-gain-stable, then ẋ = f(x) is asymptotically stable.
(iv) If ψ is strictly output-passive, then ẋ = f(x) is asymptotically stable.
(v) If ψ is very strictly passive, then ẋ = f(x) is asymptotically stable.

9.8 Feedback Interconnections

In this section we consider feedback interconnections and study the implications of different forms of QSR dissipativity on closed-loop stability in the sense of Lyapunov. We will see that several important results can be derived rather easily, thanks to the machinery developed in previous sections.

Throughout this section we assume that the state space realizations ψ1 and ψ2 each have the form

    ẋi = fi(xi) + gi(xi)ui,    xi ∈ R^{ni}
    yi = hi(xi) + ji(xi)ui,    ui, yi ∈ Rᵐ,  for i = 1, 2.

Notice that, for compatibility reasons, we have assumed that both systems are "square"; that is, they have the same number of inputs and outputs. We will also make the following assumptions about each state space realization:

- ψ1 and ψ2 are zero-state detectable.
- ψ1 and ψ2 are completely reachable; that is, for any given x1 and t1 there exist a t0 ≤ t1 and an input function u ∈ U such that the state can be driven from x(t0) = 0 to x(t1) = x1.
Having laid out the ground rules, we can now state the main theorem of this section.

Theorem 9.5 Consider the feedback interconnection (Figure 9.3) of the systems ψ1 and ψ2, with u1 = −y2 and u2 = y1, and assume that both systems are dissipative with respect to the supply rates

    wi(ui, yi) = yiᵀQi yi + 2yiᵀSi ui + uiᵀRi ui,    i = 1, 2.        (9.26)

Then the feedback interconnection of ψ1 and ψ2 is Lyapunov-stable (asymptotically stable) if the matrix Q̂ defined by

    Q̂ = [ Q1 + aR2     −S1 + aS2ᵀ ]
        [ −S1ᵀ + aS2    R1 + aQ2  ]        (9.27)

is negative semidefinite (negative definite) for some a > 0.

Figure 9.3: Feedback interconnection.

Proof of Theorem 9.5: Consider the Lyapunov function candidate

    φ(x1, x2) = φ1(x1) + a φ2(x2),    a > 0

where φ1(x1) and φ2(x2) are the storage functions of the systems ψ1 and ψ2, respectively. Thus, φ(x1, x2) is positive semidefinite by construction. The derivative of φ along the trajectories of the composite state [x1, x2]ᵀ is given by

    φ̇ = w1(u1, y1) + a w2(u2, y2)
      = (y1ᵀQ1y1 + 2y1ᵀS1u1 + u1ᵀR1u1) + a(y2ᵀQ2y2 + 2y2ᵀS2u2 + u2ᵀR2u2)        (9.28)
5. R2 = 621. we have that
(a) If both 01 and 02 are passive.
.
This theorem is very powerful and includes several cases of special interest. and the result follows. for i = 1.
Thus
Q
621
0
0
e2I
which is negative definite. With
these values. R1 = 0. FEEDBACK INTERCONNECTIONS
241
Substituting ul = Y2 and u2 = yl into (9. we obtain
= L yi
and the result follows. we have
01 passive: Q1 = 0.8.5. Ri = 0.
02 very strictly passive: Q2 = e21.andSi=2I.28).27). which we now spell out in Corollaries 9. then Qi = 0.
r
T
y2
T1
J
f
Qi + aR2
S1 + aS2 1
J
L S + aS2 Rl + aQ2
r yl YT
IL
1
J
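The matrix condition of Theorem 9.5 is easy to check numerically. The sketch below, our own helper code rather than anything from the text, assembles Q̂ for scalar (1×1) QSR parameters and tests its definiteness for the passivity cases treated in the corollary that follows:

```python
import numpy as np

def interconnection_matrix(Q1, S1, R1, Q2, S2, R2, a):
    """Q_hat of Theorem 9.5 for scalar QSR supply-rate parameters."""
    return np.array([[Q1 + a * R2, -S1 + a * S2],
                     [-S1 + a * S2, R1 + a * Q2]])

def neg_semidefinite(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(M) <= tol))

# Both subsystems passive (Q = 0, R = 0, S = 1/2): with a = 1, Q_hat = 0,
# negative semidefinite, so the loop is Lyapunov stable.
Qhat_passive = interconnection_matrix(0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 1.0)

# psi1 passive, psi2 very strictly passive (Q2 = -eps2, R2 = -delta2):
# Q_hat = diag(-delta2, -eps2) < 0, giving asymptotic stability.
eps2, delta2 = 0.1, 0.2
Qhat_vsp = interconnection_matrix(0.0, 0.5, 0.0, -eps2, 0.5, -delta2, 1.0)
```

Note that for these supply-rate values the off-diagonal blocks −S1 + aS2 cancel exactly, which is what makes the corollary cases so transparent.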
Corollary 9.4 Under the conditions of Theorem 9.5, we have:

(a) If both ψ1 and ψ2 are passive, then the feedback system is Lyapunov stable.
(b) Asymptotic stability follows if, in addition, one of the following conditions is satisfied:
    (1) One of ψ1 and ψ2 is very strictly passive.
    (2) Both ψ1 and ψ2 are strictly passive.
    (3) Both ψ1 and ψ2 are strictly output passive.

Proof: Setting a = 1 in (9.27), we obtain the following:

(a) If both ψ1 and ψ2 are passive, then Qi = 0, Ri = 0, and Si = ½I, for i = 1, 2. Thus Q̂ = 0, which is negative semidefinite, and the result follows by Theorem 9.5.

(b1) Assuming that ψ1 is passive and that ψ2 is very strictly passive, we have

    ψ1 passive: Q1 = 0, R1 = 0, and S1 = ½I.
    ψ2 very strictly passive: Q2 = −ε2I, R2 = −δ2I, and S2 = ½I.

With these values,

    Q̂ = [ −δ2I   0   ]
        [  0    −ε2I ]

which is negative definite, and the result follows by Theorem 9.5. The case ψ2 passive and ψ1 very strictly passive is entirely analogous.

(b2) If both ψ1 and ψ2 are strictly passive, we have Qi = 0, Ri = −δiI, and Si = ½I. Thus

    Q̂ = [ −δ2I   0   ]
        [  0    −δ1I ]

which is once again negative definite.

(b3) If both ψ1 and ψ2 are strictly output passive, we have Qi = −εiI, Ri = 0, and Si = ½I. Thus

    Q̂ = [ −ε1I   0   ]
        [  0    −ε2I ]

which is negative definite, and the result follows.  □

Corollary 9.4 is significant in that it identifies closed-loop stability (asymptotic stability in the sense of Lyapunov) for several combinations of passivity conditions. The following corollary is a Lyapunov version of the small gain theorem.

Corollary 9.5 Under the conditions of Theorem 9.5, if both ψ1 and ψ2 are finite-gain stable with gains γ1 and γ2, respectively, then the feedback system is Lyapunov stable (asymptotically stable) provided that γ1γ2 ≤ 1 (γ1γ2 < 1).

Proof: Under the assumptions of the corollary we have that Qi = −½I, Ri = ½γi²I, and Si = 0. Thus

    Q̂ = ½ [ (aγ2² − 1)I     0      ]
          [     0       (γ1² − a)I ]

Thus, Q̂ is negative semidefinite provided that aγ2² ≤ 1 and γ1² ≤ a, which is possible for some a > 0 if and only if γ1γ2 ≤ 1. The case of asymptotic stability is, of course, identical, with strict inequalities.  □
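Corollary 9.5 can be checked the same way. A minimal sketch (scalar case, helper names ours), using the finite-gain supply-rate parameters Qi = −½, Si = 0, Ri = ½γi² from the proof:

```python
import numpy as np

def qhat_small_gain(g1, g2, a):
    """Q_hat from the proof of Corollary 9.5 (scalar case)."""
    return 0.5 * np.diag([a * g2**2 - 1.0, g1**2 - a])

def some_a_exists(g1, g2):
    # Q_hat <= 0 needs a*g2^2 <= 1 and g1^2 <= a simultaneously,
    # which is possible for some a > 0 exactly when g1*g2 <= 1.
    return g1 * g2 <= 1.0

# g1 = 0.5, g2 = 1.0: any a in [0.25, 1] works; try a = 1.
M = qhat_small_gain(0.5, 1.0, 1.0)
```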
9.9 Nonlinear L2 Gain

Consider a system ψ of the form

    ẋ = f(x) + g(x)u
    y = h(x).   (9.31)

As discussed in Section 9.2, this system is finite-gain stable with gain γ if it is dissipative with supply rate w = ½(γ²||u||² − ||y||²). Assuming that the storage function φ corresponding to this supply rate is differentiable, the differential dissipation inequality (9.5) implies that

    φ̇(t) = (∂φ/∂x) ẋ ≤ w(t) = ½γ²||u||² − ½||y||².   (9.32)

Substituting (9.31) into (9.32), we have that

    (∂φ/∂x) f(x) + (∂φ/∂x) g(x)u ≤ ½γ²||u||² − ½||y||².   (9.33)

Now define

    v = u − (1/γ²) gᵀ(x) (∂φ/∂x)ᵀ.   (9.34)

Adding and subtracting ½γ²vᵀv on the right-hand side of (9.33) and expanding ½γ²vᵀv, we obtain

    ½γ²||u||² − ½||y||² = ½γ²||v||² + (∂φ/∂x) g(x)u − (1/(2γ²)) (∂φ/∂x) g gᵀ (∂φ/∂x)ᵀ − ½||y||²

so that (9.33) is equivalent to

    (∂φ/∂x) f(x) + (1/(2γ²)) (∂φ/∂x) g gᵀ (∂φ/∂x)ᵀ + ½||y||² ≤ ½γ²||v||².   (9.35)
Therefore, if φ satisfies the inequality

    H ≝ (∂φ/∂x) f(x) + (1/(2γ²)) (∂φ/∂x) g gᵀ (∂φ/∂x)ᵀ + ½||y||² ≤ 0   (9.36)

then

    φ̇ ≤ ½γ²||u||² − ½||y||² − ½γ²||v||² ≤ ½γ²||u||² − ½||y||²

and the system ψ is finite-gain stable with gain γ. Conversely, if the system ψ given by (9.31) is finite-gain stable with gain γ and has a differentiable storage function φ, then φ must satisfy the so-called Hamilton-Jacobi inequality (9.36).

This result is important. Finding a function φ that satisfies inequality (9.36) while minimizing γ is, at best, very difficult. Often, we will be content with "estimating" an upper bound for the gain. This can be done by "guessing" a function φ and then finding a value of γ for which (9.36) is satisfied, a process that resembles that of finding a suitable Lyapunov function. The true L2 gain of ψ, denoted γ*, is then bounded above by γ:

    0 ≤ γ* ≤ γ.

Example 9.1 Consider the following system:

    ẋ1 = −x1³ − x1x2² + βx1u
    ẋ2 = −x1²x2 − x2³ + βx2u,  β > 0
    y = x1² + x2²

that is, a system of the form (9.31) with

    f(x) = [ −x1³ − x1x2² ]    g(x) = [ βx1 ]    h(x) = x1² + x2².
           [ −x1²x2 − x2³ ]           [ βx2 ]

To estimate the L2 gain, we consider the storage function "candidate" φ(x) = ½(x1² + x2²). With this function, the three terms in the Hamilton-Jacobi inequality (9.36) can be obtained as follows:

(i) (∂φ/∂x) f(x) = [x1, x2] [ −x1³ − x1x2² ; −x1²x2 − x2³ ] = −x1⁴ − 2x1²x2² − x2⁴ = −(x1² + x2²)² = −||x||⁴.

(ii) (∂φ/∂x) g(x) = [x1, x2] [ βx1 ; βx2 ] = β(x1² + x2²) = β||x||², and thus

    (1/(2γ²)) (∂φ/∂x) g gᵀ (∂φ/∂x)ᵀ = (β²/(2γ²)) ||x||⁴.

(iii) ½||y||² = ½(x1² + x2²)² = ½||x||⁴.

Combining these results, we have that

    H = −||x||⁴ + (β²/(2γ²))||x||⁴ + ½||x||⁴ ≤ 0

provided that β²/(2γ²) ≤ ½, that is, γ ≥ β. It follows that the L2 gain of ψ is bounded above by β.  □
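The three terms computed in Example 9.1 can be verified symbolically. A sketch with sympy (our own helper code, not from the text):

```python
import sympy as sp

x1, x2, beta, gamma = sp.symbols('x1 x2 beta gamma', positive=True)
f = sp.Matrix([-x1**3 - x1*x2**2, -x1**2*x2 - x2**3])
g = sp.Matrix([beta*x1, beta*x2])
h = x1**2 + x2**2
phi = (x1**2 + x2**2) / 2
dphi = sp.Matrix([[sp.diff(phi, x1), sp.diff(phi, x2)]])  # row gradient

r4 = (x1**2 + x2**2)**2                      # ||x||^4
term1 = sp.simplify((dphi * f)[0] + r4)      # dphi*f should equal -||x||^4
term2 = sp.simplify((dphi * g * g.T * dphi.T)[0] / (2*gamma**2)
                    - beta**2 * r4 / (2*gamma**2))
H = sp.simplify((dphi * f)[0]
                + (dphi * g * g.T * dphi.T)[0] / (2*gamma**2)
                + h**2 / 2)
H_at_beta = sp.simplify(H.subs(gamma, beta))  # H vanishes at gamma = beta
```

At γ = β the Hamilton-Jacobi expression vanishes identically, confirming that β is the smallest gain certified by this particular storage function.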
9.9.1 Linear Time-Invariant Systems

It is enlightening to consider the case of linear time-invariant systems as a special case. We have

    ẋ = Ax + Bu
    y = Cx

that is, f(x) = Ax, g(x) = B, and h(x) = Cx. Taking φ(x) = ½xᵀPx (P > 0, P = Pᵀ) and substituting into (9.36), we obtain

    H = xᵀ [ PA + (1/(2γ²)) PBBᵀP + ½CᵀC ] x ≤ 0.   (9.37)

Taking the transpose of (9.37), we obtain

    Hᵀ = xᵀ [ AᵀP + (1/(2γ²)) PBBᵀP + ½CᵀC ] x ≤ 0   (9.38)

and, adding (9.37) and (9.38), we have that

    H + Hᵀ = xᵀ [ PA + AᵀP + (1/γ²) PBBᵀP + CᵀC ] x ≤ 0.   (9.39)

Thus, the system ψ has a finite L2 gain less than or equal to γ provided that for some γ > 0

    PA + AᵀP + (1/γ²) PBBᵀP + CᵀC ≤ 0.   (9.40)

The left-hand side of inequality (9.40) is well known and has played, and will continue to play, a very significant role in linear control theory. Indeed, the equality

    PA + AᵀP + (1/γ²) PBBᵀP + CᵀC = 0   (9.41)

is known as the Riccati equation. In the linear case, further analysis leads to a stronger result: it can be shown that the linear time-invariant system ψ has a finite gain less than or equal to γ if and only if the Riccati equation (9.41) has a positive definite solution. See Reference [23] or [100].
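For linear time-invariant systems the gain characterized by (9.41) can be computed numerically. One standard approach, sketched below under our own naming, uses the fact that γ exceeds the gain of C(sI − A)⁻¹B exactly when the Hamiltonian matrix associated with the Riccati equation has no eigenvalues on the imaginary axis; bisection on γ then recovers the gain:

```python
import numpy as np

def l2_gain_upper(A, B, C, tol=1e-6):
    """Bisect for the L2 gain (H-infinity norm) of C(sI - A)^{-1} B."""
    def gamma_ok(g):
        # gamma > gain iff this Hamiltonian has no imaginary-axis eigenvalues,
        # equivalently the Riccati equation (9.41) is solvable.
        H = np.block([[A, (1.0 / g**2) * B @ B.T],
                      [-C.T @ C, -A.T]])
        return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-8)

    lo, hi = tol, 1.0
    while not gamma_ok(hi):
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if gamma_ok(mid) else (mid, hi)
    return hi

# First-order example: xdot = -2x + u, y = x has L2 gain 1/2.
A = np.array([[-2.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
gain = l2_gain_upper(A, B, C)
```

For the scalar example the transfer function is 1/(s + 2), whose H-infinity norm is attained at zero frequency and equals 1/2, which the bisection recovers.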
9.9.2 Strictly Output Passive Systems

It was pointed out in Section 9.3 (Lemma 9.2) that strictly output passive systems have a finite L2 gain. We now discuss how to compute an explicit bound on this gain. Consider a system of the form

    ẋ = f(x) + g(x)u
    y = h(x)

and assume that there exists a differentiable function φ1 ≥ 0 satisfying

    (∂φ1/∂x) f(x) ≤ −ε hᵀ(x)h(x)   (9.42)
    (∂φ1/∂x) g(x) = hᵀ(x)   (9.43)

which implies that ψ is strictly output passive. To estimate the L2 gain of ψ, we consider the Hamilton-Jacobi inequality (9.36):

    H ≝ (∂φ/∂x) f(x) + (1/(2γ²)) (∂φ/∂x) g gᵀ (∂φ/∂x)ᵀ + ½||y||² ≤ 0.   (9.44)

Letting φ = kφ1 for some k > 0 and substituting (9.42)-(9.43) into (9.44), we have that ψ has an L2 gain less than or equal to γ if

    −kε hᵀ(x)h(x) + (1/(2γ²)) k² hᵀ(x)h(x) + ½ hᵀ(x)h(x) ≤ 0

or, extracting the common factor hᵀ(x)h(x),

    −kε + (1/(2γ²)) k² + ½ ≤ 0

which holds provided that kε > ½ and

    γ ≥ k / √(2kε − 1).

Choosing k = 1/ε, which minimizes this bound, we conclude that ψ has a finite L2 gain less than or equal to 1/ε.
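The choice k = 1/ε can be confirmed symbolically; the short sketch below (our own helper code) minimizes the bound γ² ≥ k²/(2kε − 1) over k:

```python
import sympy as sp

k, eps = sp.symbols('k epsilon', positive=True)
# Bound obtained from -k*eps + k^2/(2*gamma^2) + 1/2 <= 0 (valid for k*eps > 1/2):
gamma_sq = k**2 / (2*k*eps - 1)
# Stationary points of the bound over k:
stationary = sp.solve(sp.diff(gamma_sq, k), k)
# Value of the bound at k = 1/eps:
best = sp.simplify(gamma_sq.subs(k, 1/eps))
```

The stationary point is k = 1/ε, and the resulting bound is γ² ≥ 1/ε², i.e. an L2 gain of at most 1/ε.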
9.10 Some Remarks about Control Design

Control systems design is a difficult subject in which designers are exposed to a multiplicity of specifications and design constraints. One of the main reasons for the use of feedback, however, is its ability to reduce the effect of undesirable exogenous signals over the system's output, in some prescribed sense. Therefore, over the years a lot of research has been focused on developing design techniques that employ this principle as the main design criterion. More explicitly, control design can be viewed as solving the following problem: find a control law that

1. Stabilizes the closed-loop system, in some specific sense, and
2. Reduces the effect of the exogenous input signals over the desired output, also in some specific sense.

To discuss control design in some generality, it is customary to replace the actual plant in the feedback configuration with a more general system known as the generalized plant. The standard setup is shown in Figure 9.4, where

    C = controller to be designed
    G = generalized plant
    y = measured output
    u = control signal
    d = "exogenous" signals (such as disturbances and sensor noise)
    z = output to be regulated (such as tracking errors and actuator signals).

Figure 9.4: Standard setup.

Many problems can be cast in this form. To fix ideas and see that this standard setup includes the more "classical" feedback configuration, consider the following example.

Example 9.2 Consider the feedback system shown in Figure 9.5. Suppose that our interest is to design a controller C to reduce the effect of the exogenous disturbance d over the signals y2 and e2 (representing the input and output of the plant), i.e.,

    z = [ e2 ]
        [ y2 ].

In our case, z is a vector whose components are the input and output of the plant. This problem can indeed be studied using the standard setup of Figure 9.4. To see this, we must identify the inputs and outputs in the generalized plant G. The several variables in the standard setup of Figure 9.4 correspond to the following variables in Figure 9.5:

    y = e1 (the controller input)
    u = y1 (the controller output)
    d = disturbance (same signal in both Figure 9.4 and Figure 9.5).

Figure 9.5: Feedback interconnection used in Example 9.2.

A bit of work shows that the feedback system of Figure 9.5 can be redrawn as shown in Figure 9.6, which has the form of the standard setup of Figure 9.4. The control design problem can now be restated as follows: find C such that it

(i) stabilizes the feedback system of Figure 9.6, in some specific sense, and
(ii) reduces the effect of the exogenous signal d on the desired output z.  □

Figure 9.6: Standard setup.

A lot of research has been conducted since the early 1980s on the synthesis of controllers that provide an "optimal" solution to problems such as the one just described. The properties of the resulting controllers depend in an essential manner on the spaces of functions used as inputs and outputs. Two very important cases are the following:

L2 signals: In this case the problem is to find a stabilizing controller that minimizes the L2 gain of the (closed-loop) system mapping d → z. For linear time-invariant systems, we saw that the L2 gain is the H-infinity norm of the transfer function mapping d → z, and the theory behind the synthesis of these controllers is known as H-infinity optimization. The H-infinity optimization theory was initiated in 1981 by G. Zames [99], and the synthesis of H-infinity controllers was solved during the 1980s, with several important contributions by several authors. See References [25] and [100] for a comprehensive treatment of the H-infinity theory for linear time-invariant systems.
L-infinity signals: Similarly, in this case the problem is to find a stabilizing controller that minimizes the L-infinity gain of the (closed-loop) system mapping d → z. For linear time-invariant systems, we saw that the L-infinity gain is the L1 norm of the transfer function mapping d → z, and the theory behind the synthesis of these controllers is known as L1 optimization. The L1 optimization theory was proposed in 1986 by M. Vidyasagar [89]. The first solution of the L1-synthesis problem was obtained in a series of papers by M. A. Dahleh and B. Pearson [16]-[19]. See also the survey [20] for a comprehensive treatment of the L1 theory for linear time-invariant systems.

All of these references deal exclusively with linear time-invariant systems. In the remainder of this chapter we present an introductory treatment of the L2 control problem for nonlinear systems. This problem can be seen as an extension of the H-infinity optimization theory to the case of nonlinear plants, and therefore it is often referred to as the nonlinear H-infinity optimization problem.

9.11 Nonlinear L2-Gain Control

We will consider the standard configuration of Figure 9.4 and consider a nonlinear system ψ of the form

    ẋ = f(x, u, d)
    y = g(x, u, d)
    z = h(x, u, d)   (9.45)

where u and d represent the control and exogenous inputs, respectively, and y and z represent the measured and regulated outputs, respectively. We look for a controller C that stabilizes the closed-loop system and minimizes the L2 gain of the mapping from d to z.

Solving the nonlinear L2-gain design problem as just described is very difficult. Instead, we shall content ourselves with solving the following suboptimal control problem: given a "desirable" exogenous signal attenuation level, denoted by γ1, find a control C1 such that the mapping from d to z has an L2 gain less than or equal to γ1. If such a controller C1 exists, then we can choose γ2 < γ1 and find a new controller C2 such that the mapping from d to z has an L2 gain less than or equal to γ2. Iterating this procedure can lead to a controller C that approaches the "optimal" (i.e., minimum) γ.

We will consider the state feedback suboptimal L2-gain optimization problem, in which the full state is assumed to be available for feedback. This problem is sometimes referred to as the full information case. We will consider a state space realization of the form

    ẋ = a(x) + b(x)u + g(x)d,  u ∈ Rᵐ, d ∈ Rʳ
    z = [ h(x) ]
        [  u   ]   (9.46)

where a, b, g, and h are assumed to be Cᵏ, with k ≥ 2.

Theorem 9.6 The closed-loop system of equation (9.46) has a finite L2 gain ≤ γ if and only if the Hamilton-Jacobi inequality H given by

    H ≝ (∂φ/∂x) a(x) + ½ (∂φ/∂x) [ (1/γ²) g(x)gᵀ(x) − b(x)bᵀ(x) ] (∂φ/∂x)ᵀ + ½ hᵀ(x)h(x) ≤ 0   (9.47)

has a solution φ ≥ 0. The control law is given by

    u = −bᵀ(x) (∂φ/∂x)ᵀ.   (9.48)

Proof: We only prove sufficiency; see Reference [85] for the necessity part of the proof. Assume that φ ≥ 0 is a solution of (9.47). Substituting u into the system equations (9.46), we obtain the closed-loop system

    ẋ = a(x) − b(x)bᵀ(x) (∂φ/∂x)ᵀ + g(x)d   (9.49)
    z = [ h(x) ]
        [ −bᵀ(x)(∂φ/∂x)ᵀ ].   (9.50)

Substituting (9.49) and (9.50) into the Hamilton-Jacobi inequality (9.36), with f(x) = a(x) − b(x)bᵀ(x)(∂φ/∂x)ᵀ and with h(x) replaced by [h(x); −bᵀ(x)(∂φ/∂x)ᵀ], results in

    H = (∂φ/∂x)[a(x) − b(x)bᵀ(x)(∂φ/∂x)ᵀ] + (1/(2γ²)) (∂φ/∂x) g(x)gᵀ(x) (∂φ/∂x)ᵀ
        + ½ hᵀ(x)h(x) + ½ (∂φ/∂x) b(x)bᵀ(x) (∂φ/∂x)ᵀ
      = (∂φ/∂x) a(x) + ½ (∂φ/∂x) [ (1/γ²) g(x)gᵀ(x) − b(x)bᵀ(x) ] (∂φ/∂x)ᵀ + ½ hᵀ(x)h(x)

which coincides with the left-hand side of (9.47), and so the closed-loop system has a finite L2 gain ≤ γ.  □
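The algebra in the proof of Theorem 9.6 (substituting u = −bᵀ(x)(∂φ/∂x)ᵀ and collecting terms) can be checked symbolically in the scalar case. All of the data a(x), b(x), g(x), h(x), and φ below are hypothetical, chosen only to exercise the identity:

```python
import sympy as sp

x, gamma = sp.symbols('x gamma', positive=True)
a = -x - x**3            # hypothetical drift a(x)
b = sp.Integer(1)        # hypothetical input channel b(x)
gd = x                   # hypothetical disturbance channel g(x)
h = x                    # hypothetical regulated output h(x)
phi = x**2 / 2           # hypothetical storage function
phix = sp.diff(phi, x)

# Closed loop under u = -b*phix: drift a - b^2*phix, output z = (h, -b*phix).
H_closed = (phix * (a - b**2 * phix)
            + phix**2 * gd**2 / (2*gamma**2)
            + (h**2 + (b*phix)**2) / 2)
# Left-hand side of (9.47):
H_947 = (phix * a
         + sp.Rational(1, 2) * phix**2 * (gd**2 / gamma**2 - b**2)
         + h**2 / 2)
residual = sp.simplify(H_closed - H_947)
```

The residual is identically zero, i.e. the completed-square form in (9.47) is exactly the Hamilton-Jacobi expression of the closed loop.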
9.12 Exercises

(9.1) Prove Lemma 9.1.

(9.2) Prove Lemma 9.2.

(9.3) Consider the pendulum-on-a-cart system discussed in Chapter 1. Assuming for simplicity that the moment of inertia of the pendulum is negligible, and assuming that the output equation is

    y = ẋ = x2

(a) Find the kinetic energy K1 of the cart.
(b) Find the kinetic energy K2 of the pendulum.
(c) Find the potential energy P in the cart-pendulum system.
(d) Using the previous results, find the total energy E = K1 + K2 + P = K + P stored in the cart-pendulum system.
(e) Defining variables

    q = [ x ]    M(q) = [ M + m      ml cos θ ]    u = [ f ]
        [ θ ],          [ ml cos θ   ml²      ],       [ 0 ]

show that the energy stored in the pendulum can be expressed as

    E = ½ q̇ᵀ M(q) q̇ + mgl(cos θ − 1).

(f) Computing the derivative of E along the trajectories of the system, show that the pendulum-on-a-cart system with input u = f and output ẋ is a passive system.

(9.4) Consider the system

    ẋ1 = −αx1x2 + u,  α > 0
    ẋ2 = x1

(a) Determine whether ψ is (i) passive; (ii) strictly passive; (iii) strictly output passive; (iv) has a finite L2 gain.
(b) If the answer to part (a)(iv) is affirmative, find an upper bound on the L2 gain.
(9.5) Consider the system

    ẋ1 = −x1 + x2 + u
    ẋ2 = −x1x2 − x2 + βx1,  β > 0

(a) Determine whether ψ is (i) passive; (ii) strictly passive; (iii) strictly output passive; (iv) has a finite L2 gain.
(b) If the answer to part (a)(iv) is affirmative, find an upper bound on the L2 gain.

Notes and References

The theory of dissipativity of systems was initiated by Willems in his seminal paper [91]. Reference [91] considers a very general class of nonlinear systems and defines dissipativity as an extension of the concept of passivity, as well as supply rates and storage functions as generalizations of input power and stored energy. The beauty of the Willems formulation is its generality. Unlike the classical notion of passivity, which was introduced as an input-output concept, state space realizations are central to the notion of dissipativity in Reference [91]; employing the QSR supply rate, dissipativity can nevertheless be interpreted as an input-output property. The connections between dissipativity and stability in the sense of Lyapunov were extensively explored by Willems [91], [93], [94]. Theorem 9.1 is from Willems [91]; Theorem 9.3 follows Theorem 6 in [91].

The definition of QSR dissipativity is due to Hill and Moylan [30], on which Section 9.1 is based. Theorem 9.6 follows Reference [30]. Stability of feedback interconnections was studied in detail by Hill and Moylan; Section 9.8 follows Reference [31] very closely. See also [29] and [87]. Sections 9.9 and 9.11 follow van der Schaft [86]; see also van der Schaft [85]. Nonlinear L2 gain control (or nonlinear H-infinity control) is currently a very active area of research. See also References [6] and [38] as well as the excellent monograph [85] for a comprehensive treatment of this beautiful subject.
Chapter 10

Feedback Linearization

In this chapter we look at a class of control design problems broadly described as feedback linearization. The main problem to be studied is: given a nonlinear dynamical system, find a transformation that renders a new dynamical system that is linear time-invariant. Here, by transformation we mean a control law plus possibly a change of variables. Once a linear system is obtained, the design is carried out using the new linear model and any of the well-established linear control design techniques.

Feedback linearization was a topic of much research during the 1970s and 1980s. Although many successful applications have been reported over the years, feedback linearization has a number of limitations that hinder its use, as we shall see. Even with these shortcomings, feedback linearization is a concept of paramount importance in nonlinear control theory. The intention of this chapter is to provide a brief introduction to the subject. For simplicity, we will limit our attention to single-input-single-output systems and consider only local results. See the references listed at the end of this chapter for a more complete coverage.

10.1 Mathematical Tools

Before proceeding, we need to review a few basic concepts from differential geometry. Throughout this chapter, whenever we write D ⊂ Rⁿ, we assume that D is an open and connected subset of Rⁿ.

We have already encountered the notion of vector field in Chapter 1. A vector field is a function f : D ⊂ Rⁿ → Rⁿ, that is, a mapping that assigns an n-dimensional vector to every point in the n-dimensional space. Defined in this way, a vector field is an n-dimensional "column." It is customary to call the transpose of a vector field a covector field.

10.1.1 Lie Derivative

When dealing with stability in the sense of Lyapunov we made frequent use of the notion of "time derivative of a scalar function V along the trajectories of a system ẋ = f(x)." As we well know, given V : D → R and f(x), we have

    V̇ = (∂V/∂x) f(x) = ∇V f(x).

A slightly more abstract definition leads to the concept of Lie derivative.

Definition 10.1 Consider a scalar function h : D ⊂ Rⁿ → R and a vector field f : D ⊂ Rⁿ → Rⁿ. The Lie derivative of h with respect to f, denoted Lfh, is given by

    Lfh(x) = (∂h/∂x) f(x).   (10.1)

Thus, going back to Lyapunov functions, V̇ is merely the Lie derivative of V with respect to f(x). Defined in this way, given two vector fields f, g : D ⊂ Rⁿ → Rⁿ, we have that

    Lfh(x) = (∂h/∂x) f(x),  Lgh(x) = (∂h/∂x) g(x)

and

    LgLfh(x) = Lg[Lfh(x)] = (∂(Lfh)/∂x) g(x)

and, in the special case f = g,

    LfLfh(x) = Lf²h(x) = (∂(Lfh)/∂x) f(x).

The Lie derivative notation is usually preferred whenever higher-order derivatives need to be calculated. Throughout this chapter we will assume that all functions encountered are sufficiently smooth; that is, they have continuous partial derivatives of any required order.

Example 10.1 Let

    h(x) = ½(x1² + x2²),  f(x) = [ x2 ; −x1 + μ(1 − x1²)x2 ],  g(x) = [ x1 ; x2 ].

Then:

    Lfh(x) = (∂h/∂x) f(x) = [x1, x2] [ x2 ; −x1 + μ(1 − x1²)x2 ] = μ(1 − x1²)x2².

    Lgh(x) = (∂h/∂x) g(x) = [x1, x2] [ x1 ; x2 ] = x1² + x2².

    LfLgh(x) = (∂(Lgh)/∂x) f(x) = 2[x1, x2] [ x2 ; −x1 + μ(1 − x1²)x2 ] = 2μ(1 − x1²)x2².

    LgLfh(x) = (∂(Lfh)/∂x) g(x) = [−2μx1x2², 2μ(1 − x1²)x2] [ x1 ; x2 ] = −2μx1²x2² + 2μ(1 − x1²)x2².  □
10.1.2 Lie Bracket

Definition 10.2 Consider the vector fields f, g : D ⊂ Rⁿ → Rⁿ. The Lie bracket of f and g, denoted by [f, g], is the vector field defined by

    [f, g](x) = (∂g/∂x) f(x) − (∂f/∂x) g(x).   (10.2)

Example 10.2 Letting

    f(x) = [ x2 ; −x1 + μ(1 − x1²)x2 ],  g(x) = [ x1 ; x2 ]

we have

    [f, g](x) = (∂g/∂x) f(x) − (∂f/∂x) g(x)
              = [ 1 0 ] [ x2                  ]   [ 0            1         ] [ x1 ]
                [ 0 1 ] [ −x1 + μ(1 − x1²)x2 ] − [ −1 − 2μx1x2  μ(1 − x1²) ] [ x2 ]
              = [ 0 ; 2μx1²x2 ].  □

The following notation, frequently used in the literature, is useful when computing repeated bracketing:

    adf g = [f, g]
    adf² g = [f, [f, g]]
    adf³ g = [f, [f, [f, g]]]

and, in general, adfᵏ g = [f, adfᵏ⁻¹ g].

The following lemma outlines several useful properties of Lie brackets.

Lemma 10.1 Given vector fields f, f1, f2, g, g1, g2 : D ⊂ Rⁿ → Rⁿ, real numbers α1, α2, and a real-valued function h, we have:

(i) Bilinearity:
    (a) [α1f1 + α2f2, g] = α1[f1, g] + α2[f2, g]
    (b) [f, α1g1 + α2g2] = α1[f, g1] + α2[f, g2]

(ii) Skew commutativity: [f, g] = −[g, f]

(iii) Jacobi identity: L[f,g]h = LfLgh − LgLfh, where L[f,g]h represents the Lie derivative of h with respect to the vector field [f, g].
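The bracket in Example 10.2, and the repeated brackets adfᵏ g, can be computed mechanically; a short sketch (helper names ours):

```python
import sympy as sp

x1, x2, mu = sp.symbols('x1 x2 mu')
X = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1 + mu*(1 - x1**2)*x2])
g = sp.Matrix([x1, x2])

def bracket(f, g):
    """[f, g] = (dg/dx) f - (df/dx) g."""
    return sp.simplify(g.jacobian(X) * f - f.jacobian(X) * g)

adfg = bracket(f, g)        # [0, 2*mu*x1^2*x2]
ad2fg = bracket(f, adfg)    # ad_f^2 g = [f, [f, g]]
```

Iterating `bracket` in this way is exactly the adfᵏ g recursion defined above.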
10.1.3 Diffeomorphism

Definition 10.3 (Diffeomorphism) A function f : D ⊂ Rⁿ → Rⁿ is said to be a diffeomorphism on D if

(i) it is continuously differentiable on D, and
(ii) its inverse f⁻¹, defined by f⁻¹(f(x)) = x ∀x ∈ D, exists and is continuously differentiable.

The function f is said to be a global diffeomorphism if, in addition,

(i) D = Rⁿ, and
(ii) lim ||f(x)|| = ∞ as ||x|| → ∞.

The following lemma is very useful when checking whether a function f(x) is a local diffeomorphism.

Lemma 10.2 Let f(x) : D ⊂ Rⁿ → Rⁿ be continuously differentiable on D. If the Jacobian matrix Df = ∇f is nonsingular at a point x0 ∈ D, then f(x) is a diffeomorphism, or a local diffeomorphism, in a subset ω ⊂ D.

Proof: An immediate consequence of the inverse function theorem.  □

10.1.4 Coordinate Transformations

For a variety of reasons it is often important to perform a coordinate change to transform a state space realization into another. For example, for a linear time-invariant system of the form

    ẋ = Ax + Bu

we can define a coordinate change z = Tx, where T is a nonsingular constant matrix. Thus, in the new variable z the state space realization takes the form

    ż = TAT⁻¹z + TBu = Âz + B̂u.

This transformation was made possible by the nonsingularity of T. Also, because of the existence of T⁻¹, we can always recover the original state space realization: knowing z, x = T⁻¹z.

Given a nonlinear state space realization of an affine system of the form

    ẋ = f(x) + g(x)u

a diffeomorphism T(x) can be used to perform a coordinate transformation. Indeed, assuming that T(x) is a diffeomorphism and defining z = T(x), we have that

    ż = (∂T/∂x) ẋ = (∂T/∂x)[f(x) + g(x)u].

Given that T is a diffeomorphism, T⁻¹ exists, from which we can recover the original state space realization: x = T⁻¹(z).

Example 10.3 Consider a state space realization of the form ẋ = f(x) + g(x)u, with x ∈ R³, together with the coordinate transformation

    z = T(x) = [ x1 ; x1² + x2 ; x2² + x3 ].

The Jacobian matrix

    ∂T/∂x = [ 1    0    0 ]
            [ 2x1  1    0 ]
            [ 0    2x2  1 ]

is nonsingular for all x (its determinant is 1). Therefore we have

    ż = (∂T/∂x)[f(x) + g(x)u]

and substituting x1 = z1, x2 = z2 − z1² (and x3 = z3 − (z2 − z1²)²) in the result expresses the transformed realization entirely in the new coordinates z.  □
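The mechanics of such a change of coordinates are easy to automate. The sketch below uses a hypothetical two-state system (not the one in the example above), chosen so that the diffeomorphism z = T(x) = (x1, x1² + x2) brings it exactly to linear form:

```python
import sympy as sp

x1, x2, u, z1, z2 = sp.symbols('x1 x2 u z1 z2')
X = sp.Matrix([x1, x2])
# Hypothetical affine system xdot = f(x) + g(x) u:
f = sp.Matrix([x1**2 + x2, -2*x1**3 - 2*x1*x2])
g = sp.Matrix([0, 1])
T = sp.Matrix([x1, x1**2 + x2])   # diffeomorphism z = T(x)
J = T.jacobian(X)                 # [[1, 0], [2*x1, 1]], det = 1 everywhere
zdot = sp.expand(J * (f + g*u))   # zdot expressed in the x coordinates
# Rewrite in the z coordinates using x1 = z1, x2 = z2 - z1^2:
zdot_z = sp.simplify(zdot.subs({x1: z1, x2: z2 - z1**2}))
```

In the z coordinates the dynamics reduce to ż1 = z2, ż2 = u, a linear time-invariant chain of integrators.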
10.1.5 Distributions

Throughout the book we have made constant use of the concept of vector space. The backbone of linear algebra is the notion of linear independence in a vector space. We recall, from Chapter 2, that a finite set of vectors S = {x1, x2, ..., xp} in Rⁿ is said to be linearly dependent if there exists a corresponding set of real numbers {λi}, not all zero, such that

    Σi λi xi = λ1x1 + λ2x2 + ... + λp xp = 0.

On the other hand, if Σi λi xi = 0 implies that λi = 0 for each i, then the set {xi} is said to be linearly independent.

Given a set of vectors S = {x1, x2, ..., xp} in Rⁿ, a linear combination of those vectors defines a new vector x ∈ Rⁿ; that is, given real numbers λ1, λ2, ..., λp,

    x = λ1x1 + λ2x2 + ... + λp xp.

The set of all linear combinations of vectors in S generates a subspace M of Rⁿ known as the span of S and denoted by span{S} = span{x1, x2, ..., xp}, i.e.,

    span{x1, x2, ..., xp} = {x ∈ Rⁿ : x = λ1x1 + λ2x2 + ... + λp xp, λi ∈ R}.

The concept of distribution is somewhat related to this concept. Now consider a differentiable function f : D ⊂ Rⁿ → Rⁿ. As we well know, this function can be interpreted as a vector field that assigns the n-dimensional vector f(x) to each point x ∈ D. Now consider "p" vector fields f1, f2, ..., fp on D ⊂ Rⁿ. At any fixed point x ∈ D the functions fi generate vectors f1(x), f2(x), ..., fp(x) ∈ Rⁿ, and thus

    Δ(x) = span{f1(x), ..., fp(x)}

is a subspace of Rⁿ. We can now state the following definition.

Definition 10.4 (Distribution) Given an open set D ⊂ Rⁿ and smooth functions f1, f2, ..., fp : D → Rⁿ, we will refer to as a smooth distribution Δ the assignment, to each point x ∈ D, of the subspace Δ(x) = span{f1(x), f2(x), ..., fp(x)}.

We will denote by Δ(x) the "value" of Δ at the point x. The dimension of the distribution at a point x ∈ D is the dimension of the subspace Δ(x). It then follows that

    dim(Δ(x)) = rank([f1(x), f2(x), ..., fp(x)])

i.e., the dimension of Δ(x) is the rank of the matrix [f1(x), f2(x), ..., fp(x)].

Definition 10.5 A distribution Δ defined on D ⊂ Rⁿ is said to be nonsingular if there exists an integer d such that

    dim(Δ(x)) = d  ∀x ∈ D.

If this condition is not satisfied, then Δ is said to be of variable dimension.

Definition 10.6 A point x0 of D is said to be a regular point of the distribution Δ if there exists a neighborhood D0 of x0 with the property that Δ is nonsingular on D0. Each point of D that is not a regular point is said to be a singularity point.

Example 10.4 Let D = {x ∈ R² : x1 + x2 ≠ 0} and consider the distribution Δ = span{f1, f2}, where

    f1 = [ 1 ; 0 ],  f2 = [ 1 ; x1 + x2 ].

We have

    dim(Δ(x)) = rank( [ 1      1       ] )
                     ( [ 0   x1 + x2   ] ).

Then Δ has dimension 2 everywhere in R², except along the line x1 + x2 = 0. It follows that Δ is nonsingular on D and that every point of D is a regular point.  □

Example 10.5 Consider the same distribution used in the previous example, but this time with D = R². From our analysis in the previous example, we have that Δ is of variable dimension, since dim(Δ(x)) is not constant over D. Every point on the line x1 + x2 = 0 is a singular point.  □

Definition 10.7 (Involutive Distribution) A distribution Δ is said to be involutive if

    g1 ∈ Δ and g2 ∈ Δ  ⟹  [g1, g2] ∈ Δ.

It then follows that Δ = span{f1, f2, ..., fp} is involutive if and only if

    rank([f1(x), ..., fp(x)]) = rank([f1(x), ..., fp(x), [fi, fj](x)])  ∀x and all i, j.
Example 10.6 Let D = R³ and Δ = span{f1, f2}, where

    f1 = [ 1 ; 0 ; 0 ],  f2 = [ 0 ; x1 ; 1 ].

Then it can be verified that dim(Δ(x)) = 2 ∀x ∈ D. We also have that

    [f1, f2] = (∂f2/∂x) f1 − (∂f1/∂x) f2 = [ 0 ; 1 ; 0 ].

Therefore Δ is involutive if and only if

    rank([f1(x), f2(x)]) = rank([f1(x), f2(x), [f1, f2](x)])  ∀x.

This, however, is not the case, since

    rank( [ 1   0  ] )            ( [ 1   0   0 ] )
         ( [ 0   x1 ] ) = 2  and  ( [ 0   x1  1 ] ) = 3
         ( [ 0   1  ] )           ( [ 0   1   0 ] )
and hence Δ is not involutive.  □

Definition 10.8 (Complete Integrability) A linearly independent set of vector fields f1, ..., fp on D ⊂ Rⁿ is said to be completely integrable if for each x0 ∈ D there exist a neighborhood N of x0 and n − p real-valued smooth functions h1(x), h2(x), ..., h(n−p)(x) satisfying the partial differential equations

    (∂hj/∂x) fi(x) = 0,  1 ≤ i ≤ p, 1 ≤ j ≤ n − p

and such that the gradients ∇hj are linearly independent.

The following result, known as the Frobenius theorem, will be very important in later sections.

Theorem 10.1 (Frobenius Theorem) Let f1, f2, ..., fp be a set of linearly independent vector fields. The set is completely integrable if and only if it is involutive.

Proof: The proof is omitted. See Reference [36].  □
Example 10.7 [36] Consider the set of partial differential equations

    2x3 (∂h/∂x1) − (∂h/∂x2) = 0
    −x1 (∂h/∂x1) − 2x2 (∂h/∂x2) + x3 (∂h/∂x3) = 0

which can be written as

    ∇h [f1 f2] = 0

with

    f1 = [ 2x3 ; −1 ; 0 ],  f2 = [ −x1 ; −2x2 ; x3 ].

To determine whether the set of partial differential equations is solvable or, equivalently, whether {f1, f2} is completely integrable, we consider the distribution Δ defined as follows:

    Δ = span{ [ 2x3 ; −1 ; 0 ], [ −x1 ; −2x2 ; x3 ] }.

It can be checked that Δ has dimension 2 everywhere on the set D defined by D = {x ∈ R³ : x1² + x3² ≠ 0}. Computing the Lie bracket [f1, f2], we obtain

    [f1, f2] = [ −4x3 ; 2 ; 0 ]

and thus

    [ f1  f2  [f1, f2] ] = [ 2x3   −x1    −4x3 ]
                           [ −1    −2x2    2   ]
                           [ 0      x3     0   ]
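The involutivity test in Example 10.7 reduces to two rank computations, which can be scripted (helper names ours):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])
f1 = sp.Matrix([2*x3, -1, 0])
f2 = sp.Matrix([-x1, -2*x2, x3])

def bracket(f, g):
    """[f, g] = (dg/dx) f - (df/dx) g."""
    return sp.simplify(g.jacobian(X) * f - f.jacobian(X) * g)

b = bracket(f1, f2)                # [-4*x3, 2, 0] = -2 * f1
M2 = sp.Matrix.hstack(f1, f2)
M3 = sp.Matrix.hstack(f1, f2, b)
r2 = M2.rank()                     # generic rank of [f1 f2]
r3 = M3.rank()                     # adding the bracket does not raise the rank
```

Since the bracket is exactly −2 f1, the third column is linearly dependent on the first, the two ranks agree, and the distribution is involutive, hence completely integrable by the Frobenius theorem.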
10.2 Input-State Linearization

Throughout this section we consider dynamical systems of the form

    ẋ = f(x) + g(x)u

and investigate the possibility of using a state feedback control law plus a coordinate transformation to transform this system into one that is linear time-invariant. We will see that not every system can be transformed by this technique. To grasp the idea, we start our presentation with a very special class of systems for which an input-state linearization law is straightforward to find. For simplicity, we will restrict our presentation to single-input systems.

10.2.1 Systems of the Form ẋ = Ax + Bw(x)[u − φ(x)]

First consider a nonlinear system of the form

    ẋ = Ax + Bw(x)[u − φ(x)]   (10.3)

where A ∈ Rⁿˣⁿ, B ∈ Rⁿˣ¹, φ : D ⊂ Rⁿ → R, and w : D ⊂ Rⁿ → R. We also assume that w ≠ 0 ∀x ∈ D, and that the pair (A, B) is controllable. Under these conditions, it is straightforward to see that the control law

    u = φ(x) + w⁻¹(x)v   (10.4)

renders the system

    ẋ = Ax + Bv

which is linear time-invariant and controllable.

The beauty of this approach is that it splits the feedback effort into two components that have very different purposes:

1. The feedback law (10.4) was obtained with the sole purpose of linearizing the original state equation. The resulting linear time-invariant system may or may not have "desirable" properties. Indeed, the resulting system may or may not be stable, and may or may not behave as required by the particular design.

2. Once a linear system is obtained, a secondary control law can be applied to stabilize the resulting system, or to impose any desirable performance. This secondary law, however, is designed using the resulting linear time-invariant system, thus taking advantage of the very powerful and much simpler techniques available for control design of linear systems.
Example 10.8 First consider the nonlinear mass-spring system of Example 1.2:

    ẋ1 = x2
    ẋ2 = −(k/m)x1 − (k/m)a²x1³ − (b/m)x2 + (1/m)u.

Clearly, this system is of the form (10.3) with w = 1 and φ(x) = ka²x1³. It then follows that the linearizing control law is

    u = ka²x1³ + v.  □

Example 10.9 Now consider the system

    ẋ1 = x2
    ẋ2 = ax1 + bx2 + cos x1 (u − x2).

Once again, this system is of the form (10.3), with w = cos x1 and φ(x) = x2. Substituting into (10.4), we obtain the linearizing control law

    u = x2 + (cos x1)⁻¹ v

which is well defined for −π/2 < x1 < π/2.  □
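The linearizing law in Example 10.9 can be sanity-checked numerically; the parameter values a and b below are hypothetical:

```python
import numpy as np

def fl_control(x, v):
    """u = phi(x) + w(x)^{-1} v with phi(x) = x2, w(x) = cos(x1), |x1| < pi/2."""
    x1, x2 = x
    return x2 + v / np.cos(x1)

def xdot(x, u, a=1.0, b=1.0):
    x1, x2 = x
    return np.array([x2, a*x1 + b*x2 + np.cos(x1)*(u - x2)])

x = np.array([0.3, -0.5])
v = 0.7
closed = xdot(x, fl_control(x, v))
# With the linearizing law, the nonlinearity cancels and
# x2' = a*x1 + b*x2 + v: a linear time-invariant system.
expected = np.array([x[1], 1.0*x[0] + 1.0*x[1] + v])
```

A secondary linear law v = −Kz can then be designed on ż = Az + Bv by any standard pole-placement technique.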
10.2. INPUTSTATE LINEARIZATION
267
10.2.2
Systems of the Form 2 = f (x) + g(x)u
Now consider the more general case of affine systems of the form
x = f(x) + g(x)u.
(10.5)
Because the system (10.5) does not have the simple form (10.4), there is no obvious way to construct the inputstate linearization law. Moreover, it is not clear in this case whether such a linearizing control law actually exists. We will pursue the inputstate linearization of these systems as an extension of the previous case. Before proceeding to do so, we formally introduce the notion of inputstate linearization.
Definition 10.9 A nonlinear system of the form (10.5) is said to be input-state linearizable if there exist a diffeomorphism T : D ⊂ Rⁿ → Rⁿ defining the coordinate transformation

    z = T(x)    (10.6)

and a control law of the form

    u = φ(x) + w⁻¹(x) v    (10.7)

that transform (10.5) into a state space realization of the form

    ż = Az + Bv.

We now look at this idea in more detail. Assume that, after the coordinate transformation (10.6), the system (10.5) takes the form

    ż = Az + B w(z)[u − φ(z)]    (10.8)

where w(z) = w(T⁻¹(z)) and φ(z) = φ(T⁻¹(z)). We have

    ż = (∂T/∂x) ẋ = (∂T/∂x)[f(x) + g(x)u].    (10.9)

Substituting (10.6) and (10.9) into (10.8), we have that

    (∂T/∂x)[f(x) + g(x)u] = A T(x) + B w(x)[u − φ(x)]    (10.10)

must hold ∀x and u of interest. Equation (10.10) is satisfied if and only if

    (∂T/∂x) f(x) = A T(x) − B w(x) φ(x)    (10.11)
    (∂T/∂x) g(x) = B w(x).    (10.12)
268
CHAPTER 10. FEEDBACK LINEARIZATION
From here we conclude that any coordinate transformation T(x) that satisfies the system of partial differential equations (10.11)–(10.12) for some φ, w, A, and B transforms, via the coordinate transformation z = T(x), the system

    ẋ = f(x) + g(x)u    (10.13)

into one of the form

    ż = Az + B w(z)[u − φ(z)].    (10.14)

Moreover, any coordinate transformation z = T(x) that transforms (10.13) into (10.14) must satisfy the system of equations (10.11)–(10.12).
Remarks: The procedure just described allows for a considerable amount of freedom when selecting the coordinate transformation. Consider the case of single-input systems, and recall that our objective is to obtain a system of the form

    ż = Az + B w(x)[u − φ(x)].

The A and B matrices in this state space realization are, however, nonunique, and therefore so is the diffeomorphism T. Assuming that the matrices (A, B) form a controllable pair, we can assume, without loss of generality, that (A, B) are in the following so-called controllable form:

    A_c = [ 0 1 0 ... 0
            0 0 1 ... 0
            .............
            0 0 0 ... 1
            0 0 0 ... 0 ]  (n×n),    B_c = [0 0 ... 0 1]ᵀ  (n×1).

Letting

    T(x) = [T1(x) T2(x) ... Tn(x)]ᵀ  (n×1)

with A = A_c, B = B_c, and z = T(x), the right-hand side of equations (10.11)–(10.12) becomes

    A_c T(x) − B_c w(x)φ(x) = [T2(x) T3(x) ... Tn(x) −w(x)φ(x)]ᵀ    (10.15)
and

    B_c w(x) = [0 0 ... 0 w(x)]ᵀ.    (10.16)
Substituting (10.15) and (10.16) into (10.11) and (10.12), respectively, we have that

    (∂T1/∂x) f(x) = T2(x)
    (∂T2/∂x) f(x) = T3(x)
    ...
    (∂T_{n−1}/∂x) f(x) = Tn(x)
    (∂Tn/∂x) f(x) = −φ(x) w(x)    (10.17)

and

    (∂T1/∂x) g(x) = 0
    (∂T2/∂x) g(x) = 0
    ...
    (∂T_{n−1}/∂x) g(x) = 0
    (∂Tn/∂x) g(x) = w(x) ≠ 0.    (10.18)

Thus the components T1, T2, ..., Tn of the coordinate transformation T must be such that

(i) (∂Ti/∂x) g(x) = 0 for i = 1, 2, ..., n − 1, and (∂Tn/∂x) g(x) ≠ 0.

(ii) (∂Ti/∂x) f(x) = T_{i+1}, i = 1, 2, ..., n − 1.

(iii) The functions φ and w are given by

    w(x) = (∂Tn/∂x) g(x),    φ(x) = − (∂Tn/∂x) f(x) / (∂Tn/∂x) g(x).
10.3 Examples

Example 10.10 Consider the system

    ẋ = [ e^{x2} − 1 ]  +  [ 0 ] u  =  f(x) + g(x)u.
        [    a x1    ]     [ 1 ]

To find a feedback linearizing law, we seek a transformation T = [T1 T2]ᵀ such that

    (∂T1/∂x) g = 0    (10.19)
    (∂T2/∂x) g ≠ 0    (10.20)

with

    (∂T1/∂x) f(x) = T2.    (10.21)

In our case, (10.19) implies that

    (∂T1/∂x) g = [∂T1/∂x1  ∂T1/∂x2] [0 1]ᵀ = ∂T1/∂x2 = 0

so that T1 = T1(x1) (independent of x2). Taking account of (10.21), we have that

    (∂T1/∂x) f(x) = T2  ⇒  T2 = (∂T1/∂x1)(e^{x2} − 1).

To check that (10.20) is satisfied, we notice that

    (∂T2/∂x) g(x) = ∂T2/∂x2 = (∂T1/∂x1) e^{x2} ≠ 0

provided that ∂T1/∂x1 ≠ 0. Thus we can choose

    T1(x) = x1

which results in

    T = [ x1, e^{x2} − 1 ]ᵀ.

Notice that this coordinate transformation maps the equilibrium point at the origin in the x coordinates into the origin in the z coordinates.
The functions φ and w can be obtained as follows:

    w = (∂T2/∂x) g(x) = e^{x2}
    φ = − (∂T2/∂x) f(x) / (∂T2/∂x) g(x) = −a x1.

It is easy to verify that, in the z coordinates,

    z1 = x1, z2 = e^{x2} − 1  ⇒  ż1 = z2, ż2 = a z1 z2 + a z1 + (z2 + 1) u

which is of the form

    ż = Az + B w(z)[u − φ(z)]

with

    A = A_c = [ 0 1; 0 0 ],  B = B_c = [0 1]ᵀ,  w(z) = z2 + 1,  φ(z) = −a z1.  □
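The linearizing conditions (10.19)–(10.21) for Example 10.10 can be verified numerically with finite differences. The sketch below is illustrative only: the constant a and the test point are arbitrary choices, not values from the text.

```python
import math

# System and transformation of Example 10.10 (a is an illustrative constant).
a = 2.0
f = lambda x: (math.exp(x[1]) - 1.0, a * x[0])
g = lambda x: (0.0, 1.0)
T = lambda x: (x[0], math.exp(x[1]) - 1.0)

def grad(F, x, h=1e-6):
    """Central-difference gradient of the scalar function F at x."""
    out = []
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        out.append((F(xp) - F(xm)) / (2 * h))
    return out

x0 = [0.3, -0.4]                                   # arbitrary test point
dT1 = grad(lambda x: T(x)[0], x0)
dT2 = grad(lambda x: T(x)[1], x0)
LgT1 = sum(d * gi for d, gi in zip(dT1, g(x0)))    # (10.19): should vanish
LgT2 = sum(d * gi for d, gi in zip(dT2, g(x0)))    # (10.20): should be e^{x2} != 0
LfT1 = sum(d * fi for d, fi in zip(dT1, f(x0)))    # (10.21): should equal T2(x0)
```

The same pattern can be reused to check a candidate transformation for any single-input system before attempting the linearizing law.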
Example 10.11 (Magnetic Suspension System)

Consider the magnetic suspension system of Chapter 1. The dynamic equations are

    ẋ = [ x2
          g − (k/m) x2 − a μ x3² / (2m(1 + μx1)²)
          ((1 + μx1)/(aμ)) ( −R x3 + a μ x2 x3 / (1 + μx1)² ) ]
      +  [ 0
           0
           (1 + μx1)/(aμ) ] u

which is of the form ẋ = f(x) + g(x)u. To find a feedback linearizing law, we seek a transformation T = [T1 T2 T3]ᵀ such that

    (∂T1/∂x) g(x) = 0    (10.22)
    (∂T2/∂x) g(x) = 0    (10.23)
    (∂T3/∂x) g(x) ≠ 0    (10.24)

with

    (∂T1/∂x) f(x) = T2    (10.25)
    (∂T2/∂x) f(x) = T3    (10.26)
    (∂T3/∂x) f(x) = −φ(x) w(x).    (10.27)

Equation (10.22) implies that

    (∂T1/∂x3) (1 + μx1)/(aμ) = 0

so T1 is not a function of x3. To proceed we arbitrarily choose T1 = x1. We will need to verify that this choice satisfies the remaining linearizing conditions (10.23)–(10.27). Equation (10.25) implies that

    (∂T1/∂x) f(x) = [1 0 0] f(x) = T2

and thus T2 = x2. We now turn to equation (10.26). We have that

    (∂T2/∂x) f(x) = T3

and, substituting values,

    T3 = g − (k/m) x2 − a μ x3² / (2m(1 + μx1)²).

To verify that (10.23) is satisfied, we proceed as follows:

    (∂T2/∂x) g(x) = [0 1 0] g(x) = 0

so that (10.23) is satisfied. Similarly,

    (∂T3/∂x) g(x) = − x3 / (m(1 + μx1)) ≠ 0

provided that x3 ≠ 0. Therefore all conditions are satisfied in D = {x ∈ R³ : x3 ≠ 0}. The coordinate transformation is

    T = [ x1, x2, g − (k/m)x2 − a μ x3² / (2m(1 + μx1)²) ]ᵀ.

The functions φ and w are given by

    w(x) = (∂T3/∂x) g(x),    φ(x) = − (∂T3/∂x) f(x) / (∂T3/∂x) g(x).

10.4 Conditions for Input-State Linearization

In this section we consider a system of the form

    ẋ = f(x) + g(x)u    (10.28)

where f, g : D → Rⁿ, and discuss under what conditions on f and g this system is input-state linearizable.

Theorem 10.2 The system (10.28) is input-state linearizable on D0 ⊂ D if and only if the following conditions are satisfied:

(i) The vector fields {g(x), ad_f g(x), ..., ad_f^{n−1} g(x)} are linearly independent in D0. Equivalently, the matrix

    C = [g(x), ad_f g(x), ..., ad_f^{n−1} g(x)]  (n×n)

has rank n for all x ∈ D0.

(ii) The distribution Δ = span{g, ad_f g, ..., ad_f^{n−2} g} is involutive in D0.

Proof: See the Appendix.
Example 10.12 Consider again the system of Example 10.10:

    ẋ = [ e^{x2} − 1 ]  +  [ 0 ] u  =  f(x) + g(x)u.
        [    a x1    ]     [ 1 ]

We have

    ad_f g = [f, g] = (∂g/∂x) f(x) − (∂f/∂x) g(x) = − [ 0  e^{x2}; a  0 ] [0 1]ᵀ = [ −e^{x2}, 0 ]ᵀ.

Thus

    {g, ad_f g} = { [0 1]ᵀ, [−e^{x2} 0]ᵀ }

and

    rank(C) = rank [ 0  −e^{x2}; 1  0 ] = 2,  ∀x ∈ R².

Also, the distribution Δ is given by

    Δ = span{g} = span{ [0 1]ᵀ }

which is clearly involutive in R². Thus conditions (i) and (ii) of Theorem 10.2 are satisfied ∀x ∈ R². □

Example 10.13 Consider the linear time-invariant system

    ẋ = Ax + Bu

i.e., f(x) = Ax and g(x) = B. Straightforward computations show that

    ad_f g = [f, g] = −(∂f/∂x) g(x) = −AB
    ad_f² g = [f, [f, g]] = A²B
    ...
    ad_f^{n−1} g = (−1)^{n−1} A^{n−1} B.

Thus, we have that

    {g(x), ad_f g(x), ..., ad_f^{n−1} g(x)} = {B, −AB, A²B, ..., (−1)^{n−1} A^{n−1} B}

and therefore the vector fields {g(x), ad_f g(x), ..., ad_f^{n−1} g(x)} are linearly independent if and only if the matrix

    C = [B, AB, A²B, ..., A^{n−1}B]  (n×n)

has rank n. Thus, for linear systems, condition (i) of Theorem 10.2 is equivalent to the controllability of the pair (A, B). Notice also that condition (ii) is trivially satisfied for any linear time-invariant system, since the vector fields are constant and so Δ is always involutive. □
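The bracket computation of Example 10.12 can be checked numerically: for a constant g, ad_f g = −(∂f/∂x)g, and condition (i) reduces to a determinant test. The following sketch uses a finite-difference Jacobian; the constant a and the test point are illustrative assumptions.

```python
import math

# Example 10.12 data: f = (e^{x2} - 1, a x1), g = (0, 1); a is illustrative.
a = 2.0

def jac_f(x, h=1e-6):
    """Central-difference Jacobian of f at x."""
    f = lambda y: (math.exp(y[1]) - 1.0, a * y[0])
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def ad_f_g(x):
    # For constant g, [f, g] = (dg/dx) f - (df/dx) g = -(df/dx) g.
    J = jac_f(x)
    g = (0.0, 1.0)
    return (-(J[0][0] * g[0] + J[0][1] * g[1]),
            -(J[1][0] * g[0] + J[1][1] * g[1]))

x0 = [0.7, -0.2]
v1 = (0.0, 1.0)                          # g
v2 = ad_f_g(x0)                          # expected (-e^{x2}, 0)
det_C = v1[0] * v2[1] - v1[1] * v2[0]    # nonzero <=> rank(C) = 2
```

Since det C = e^{x2} never vanishes, condition (i) holds everywhere, matching the hand computation.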
10.5 Input-Output Linearization

So far we have considered the input-state linearization problem, where the interest is in linearizing the mapping from input to state. Often, however, our interest is in a certain output variable rather than the state, such as in a tracking control problem. In this section we consider the problem of finding a control law that renders a linear differential equation relating the input u to the output y. Consider the system

    ẋ = f(x) + g(x)u
    y = h(x)    (10.29)

    f, g : D ⊂ Rⁿ → Rⁿ,  h : D ⊂ Rⁿ → R.

Linearizing the state equation, as in the input-state linearization problem, does not necessarily imply that the resulting map from input u to output y is linear. The reason, of course, is that when deriving the coordinate transformation used to linearize the state equation we did not take into account the nonlinearity in the output equation. To get a better grasp of this principle, we consider the following simple example.

Example 10.14 Consider the system of the form (10.29):

    ẋ1 = x2
    ẋ2 = −x1 − a x1² x2 + (x2 + 1) u
    y = x1.

We are interested in the input-output relationship, so we start by considering the output equation y = x1. Differentiating, we obtain

    ẏ = ẋ1 = x2

which does not contain u. Differentiating once again, we obtain

    ÿ = ẋ2 = −x1 − a x1² x2 + (x2 + 1) u.

Thus, letting

    u = (1/(x2 + 1)) [v + a x1² x2]    (x2 ≠ −1)

we obtain

    ÿ = −y + v

or

    ÿ + y = v

which is a linear differential equation relating y and the new input v. □
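The cancelation in Example 10.14 can be exercised numerically: integrating the nonlinear plant under the linearizing law and, in parallel, the linear model ÿ + y = v from matched initial data should give the same output. A minimal Euler-integration sketch, with illustrative values of a, v(t), and the initial state:

```python
import math

# Example 10.14 with an illustrative constant a and external input v(t).
a = 1.0
v = lambda t: 0.5 * math.cos(2.0 * t)

def simulate(dt=1e-3, T=3.0):
    x1, x2 = 0.2, 0.0          # nonlinear plant state (y = x1)
    y, yd = 0.2, 0.0           # linear model y'' + y = v, same initial data
    t = 0.0
    while t < T:
        u = (v(t) + a * x1**2 * x2) / (x2 + 1.0)   # linearizing law, x2 != -1
        dx1 = x2
        dx2 = -x1 - a * x1**2 * x2 + (x2 + 1.0) * u
        dy, dyd = yd, v(t) - y
        x1 += dt * dx1
        x2 += dt * dx2
        y += dt * dy
        yd += dt * dyd
        t += dt
    return x1, y

x1_final, y_final = simulate()
```

Both trajectories agree to within floating-point error, since the law reduces ẋ2 to −x1 + v exactly.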
This idea can be easily generalized. Given a system of the form (10.29), where f, g : D ⊂ Rⁿ → Rⁿ and h : D ⊂ Rⁿ → R are sufficiently smooth, the approach used to obtain a linear input-output relationship can be summarized as follows. Differentiate the output equation to obtain

    ẏ = (∂h/∂x) ẋ = (∂h/∂x) f(x) + (∂h/∂x) g(x) u = L_f h(x) + L_g h(x) u.

There are two cases of interest:

CASE (1): L_g h(x) ≠ 0 in D. In this case we can define the control law

    u = (1/L_g h(x)) [−L_f h + v]

which renders the linear differential equation

    ẏ = v.

Once this linear system is obtained, linear control techniques can be employed to complete the design.

CASE (2): L_g h(x) = 0 in D. In this case we continue to differentiate y until u appears explicitly:

    y⁽²⁾ = d/dt [(∂h/∂x) f(x)] = L_f² h(x) + L_g L_f h(x) u.

If L_g L_f h(x) = 0, we continue to differentiate until, for some integer r ≤ n,

    y⁽ʳ⁾ = L_f^r h(x) + L_g L_f^{r−1} h(x) u,  with L_g L_f^{r−1} h(x) ≠ 0, ∀x ∈ D0.

Letting

    u = (1 / L_g L_f^{r−1} h(x)) [−L_f^r h + v]

we obtain the linear differential equation

    y⁽ʳ⁾ = v.    (10.30)

The number of differentiations of y required to obtain (10.30) is important and is called the relative degree of the system. We now define this concept more precisely.

Definition 10.10 A system of the form

    ẋ = f(x) + g(x)u
    y = h(x),    f, g : D ⊂ Rⁿ → Rⁿ, h : D ⊂ Rⁿ → R

is said to have relative degree r in a region D0 ⊂ D if

    L_g L_f^i h(x) = 0,  ∀x ∈ D0, 0 ≤ i < r − 1
    L_g L_f^{r−1} h(x) ≠ 0,  ∀x ∈ D0.

Remarks:

(a) Notice that if r = n, that is, if the relative degree is equal to the number of states, then, denoting h(x) = T1(x), we can define

    ẏ = (∂T1/∂x) f(x) = T2
    y⁽²⁾ = (∂T2/∂x)[f(x) + g(x)u] = (∂T2/∂x) f(x) = T3

and, iterating this process,

    y⁽ⁿ⁾ = (∂Tn/∂x) f(x) + (∂Tn/∂x) g(x) u.

The assumption r = n implies that

    (∂T1/∂x) g(x) = 0, (∂T2/∂x) g(x) = 0, ..., (∂T_{n−1}/∂x) g(x) = 0, (∂Tn/∂x) g(x) ≠ 0

which are the input-state linearization conditions (10.17)–(10.18). Thus, if the relative degree r equals the number of states n, then input-output linearization leads to input-state linearization.

(b) When applied to single-input–single-output systems, this definition of relative degree coincides with the usual definition of relative degree for linear time-invariant systems, as we shall see.

Example 10.15 Consider again the system of Example 10.14:

    ẋ1 = x2
    ẋ2 = −x1 − a x1² x2 + (x2 + 1) u
    y = x1.

We saw that

    ẏ = x2
    ÿ = −x1 − a x1² x2 + (x2 + 1) u;

hence the system has relative degree 2 in D0 = {x ∈ R² : x2 ≠ −1}. □

Example 10.16 Consider the linear time-invariant system defined by

    ẋ = Ax + Bu
    y = Cx
where

    A = [ 0 1 0 ... 0
          0 0 1 ... 0
          .............
          0 0 0 ... 1
          −q0 −q1 −q2 ... −q_{n−1} ]  (n×n),
    B = [0 0 ... 0 1]ᵀ  (n×1),
    C = [p0 p1 ... pm 0 ... 0]  (1×n),  pm ≠ 0, m < n.

The transfer function associated with this state space realization can easily be seen to be

    H(s) = C(sI − A)⁻¹B = (pm sᵐ + p_{m−1} s^{m−1} + ... + p0) / (sⁿ + q_{n−1} s^{n−1} + ... + q0).

The relative degree of this system is the excess of poles over zeros, that is, r = n − m. We now calculate the relative degree using Definition 10.10. We have

    ẏ = Cẋ = CAx + CBu

    CB = [p0 p1 ... pm 0 ... 0][0 0 ... 0 1]ᵀ = { pm if m = n − 1;  0 otherwise }.

Thus, if CB = pm ≠ 0, then m = n − 1 and we conclude that r = n − m = 1. Assuming that this is not the case, we have that ẏ = CAx, and we continue to differentiate y:

    y⁽²⁾ = CAẋ = CA²x + CABu

    CAB = { pm if m = n − 2;  0 otherwise }.

If CAB = pm, we conclude that r = n − m = 2. Assuming that this is not the case, we continue to differentiate. With every differentiation, the "1" entry in the column matrix AⁱB moves up one row. Thus, given the form of the C matrix, we have that

    CA^{i−1}B = { 0 for i < n − m;  pm for i = n − m }

and we conclude that r = n − m. □
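The pattern CA^{i−1}B = 0 for i < n − m, with the first nonzero Markov parameter appearing at i = n − m, can be verified directly. The sketch below uses the fifth-order companion-form data that appears in Example 10.17 (numerator 6s² + 2s + 7, denominator s⁵ + 4s⁴ + s³ + 5s² + 3s + 2, so n − m = 3); the code itself is an illustration, not part of the text.

```python
# Relative degree of an LTI system from its Markov parameters CA^{i-1}B.
n = 5
A = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [-2, -3, -5, -1, -4]]   # last row: -q0 ... -q4
B = [0, 0, 0, 0, 1]
C = [7, 2, 6, 0, 0]          # p0, p1, p2 = 7, 2, 6

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

markov = []
w = B[:]
for i in range(1, n + 1):    # markov[i-1] = C A^{i-1} B
    markov.append(dot(C, w))
    w = mat_vec(A, w)

# Relative degree = index (1-based) of the first nonzero Markov parameter.
r = 1 + markov.index(next(m for m in markov if m != 0))
```

Here markov begins (0, 0, 6, ...), so r = 3 = n − m, as the derivation predicts.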
Example 10.17 Consider the linear time-invariant system defined by

    ẋ = Ax + Bu
    y = Cx

where

    A = [ 0 1 0 0 0
          0 0 1 0 0
          0 0 0 1 0
          0 0 0 0 1
          −2 −3 −5 −1 −4 ]  (5×5),    B = [0 0 0 0 1]ᵀ  (5×1),    C = [7 2 6 0 0]  (1×5).

It is easy to verify that the transfer function associated with this state space realization is

    H(s) = C(sI − A)⁻¹B = (6s² + 2s + 7) / (s⁵ + 4s⁴ + s³ + 5s² + 3s + 2)

which shows that the relative degree equals the excess number of poles over zeros, that is, r = 3. To compute the relative degree using Definition 10.10, we compute successive derivatives of the output equation:

    ẏ = CAx + CBu,        CB = 0
    y⁽²⁾ = CA²x + CABu,   CAB = 0
    y⁽³⁾ = CA³x + CA²Bu,  CA²B = 6 ≠ 0.

It then follows that y⁽ʳ⁾ = CAʳx + CA^{r−1}Bu with CA^{r−1}B ≠ 0; thus we have that r = 3. □
10.6 The Zero Dynamics

On the basis of our exposition so far, it would appear that input-output linearization is rather simple to obtain, and that all SISO systems can be linearized in this form. In this section we discuss in more detail the internal dynamics of systems controlled via input-output linearization, at least for single-input–single-output (SISO) systems. To understand the main idea, consider first the SISO linear time-invariant system

    ẋ = Ax + Bu
    y = Cx.    (10.31)

To simplify matters, we assume, without loss of generality, that the system is of third order and has relative degree r = 2. In companion form, equation (10.31) can be expressed as follows:

    ẋ1 = x2
    ẋ2 = x3
    ẋ3 = −q0 x1 − q1 x2 − q2 x3 + u
    y = p0 x1 + p1 x2.

The transfer function associated with this system is

    H(s) = (p0 + p1 s) / (q0 + q1 s + q2 s² + s³).

Suppose that our problem is to design u so that y tracks a desired output yd. Ignoring the fact that the system is linear time-invariant, we proceed with our design using the input-output linearization technique. We have

    y = p0 x1 + p1 x2
    ẏ = p0 ẋ1 + p1 ẋ2 = p0 x2 + p1 x3
    ÿ = p0 ẋ2 + p1 ẋ3 = p0 x3 + p1(−q0 x1 − q1 x2 − q2 x3) + p1 u.

Thus, the control law

    u = q0 x1 + q1 x2 + q2 x3 − (p0/p1) x3 + (1/p1) v

produces the simple double integrator

    ÿ = v.

Since we are interested in a tracking control problem, we define the tracking error e = y − yd and choose v = −k1 e − k2 ė + ÿd. With this input v we have that

    u = [q0 x1 + q1 x2 + q2 x3 − (p0/p1) x3] + (1/p1)[−k1 e − k2 ė + ÿd]

which renders the exponentially stable tracking error closed-loop system

    ë + k2 ė + k1 e = 0.    (10.32)

A glance at the result shows that the order of the closed-loop tracking error (10.32) is the same as the relative degree of the system, r = 2, whereas the original state space realization has order n = 3. Thus, part of the dynamics of the original system is now unobservable after the input-output linearization. A look at this control law shows that u consists of a state feedback law, and thus the design stage can be seen as a reallocation of the eigenvalues of the A matrix via state feedback; in the process, observability of the three-dimensional state space realization (10.31) was lost. Using elementary concepts of linear systems, we know that this is possible only if, during the design, one of the eigenvalues of the closed-loop state space realization comes to coincide with one of the transmission zeros of the system. This is indeed the case in our example.

The unobservable part of the dynamics is called the internal dynamics, and it plays a very important role in the context of the input-output linearization technique. To complete the three-dimensional state, we can consider the output equation

    y = p0 x1 + p1 x2 = p0 x1 + p1 ẋ1

and thus

    ẋ1 = −(p0/p1) x1 + (1/p1) y = A_id x1 + B_id y.    (10.33)

Equation (10.33) can be used to "complete" the three-dimensional state: using x1 along with e and ė as the "new" state variables, we obtain full information about the original state variables x1–x3 through a one-to-one transformation. Notice that the output y in equation (10.33) is bounded, since e was stabilized by the input-output linearization law and yd is the desired trajectory. Therefore, provided that the first-order system (10.33) has an exponentially stable equilibrium point at the origin, x1 will be bounded. In our case, the internal dynamics is thus exponentially stable if the eigenvalue of the matrix A_id in the state space realization (10.33) lies in the left half plane. Stability of the internal dynamics is, of course, as important as the stability of the external tracking error (10.32), and therefore the effectiveness of the input-output linearization technique depends upon the stability of the internal dynamics.

Notice also that the transfer function associated with this internal dynamics is

    H_id = 1 / (p0 + p1 s).

Comparing H_id with the transfer function H of the original third-order system, we conclude that the internal dynamics contains a pole whose location in the s plane coincides with that of the zero of H. Thus, for linear time-invariant systems, the stability of the internal dynamics is determined by the location of the system zeros: the internal dynamics is exponentially stable provided that the zeros of the transfer function H lie in the left half plane. Systems that have all of their transfer function zeros in the left half plane are called minimum phase.
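The third-order tracking design above can be exercised numerically: with the zero at s = −p0/p1 in the left half plane, the tracking error decays while the internal state x1 stays bounded. In the Euler-integration sketch below, all numerical values (p's, q's, gains, reference) are illustrative choices, not data from the text.

```python
import math

# Third-order plant in companion form; zero at s = -p0/p1 = -2 (minimum phase).
p0, p1 = 2.0, 1.0
q0, q1, q2 = 1.0, 2.0, 3.0
k1, k2 = 4.0, 4.0                      # e'' + k2 e' + k1 e = 0, poles at -2, -2

yd   = lambda t: math.sin(t)           # desired trajectory and its derivatives
ydp  = lambda t: math.cos(t)
ydpp = lambda t: -math.sin(t)

def simulate(dt=1e-3, T=20.0):
    x1, x2, x3 = 0.5, 0.0, 0.0
    t, max_x1 = 0.0, 0.0
    while t < T:
        y  = p0 * x1 + p1 * x2
        yp = p0 * x2 + p1 * x3
        e, ep = y - yd(t), yp - ydp(t)
        # Input-output linearizing tracking law (10.32)-style:
        u = (q0 * x1 + q1 * x2 + q2 * x3 - (p0 / p1) * x3
             + (1.0 / p1) * (-k1 * e - k2 * ep + ydpp(t)))
        dx1, dx2 = x2, x3
        dx3 = -q0 * x1 - q1 * x2 - q2 * x3 + u
        x1 += dt * dx1
        x2 += dt * dx2
        x3 += dt * dx3
        max_x1 = max(max_x1, abs(x1))
        t += dt
    return e, max_x1

e_final, x1_peak = simulate()
```

Repeating the run with p0 < 0 (a right-half-plane zero) makes x1 diverge even though e still converges, which is exactly the internal-dynamics failure discussed above.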
Rather than pursuing the proof of this result, which pertains to linear time-invariant systems only, we study a nonlinear version of this problem. The extension to the nonlinear case is nontrivial, since poles and zeros go hand in hand with the concept of transfer function. Consider an nth-order system of the form

    ẋ = f(x) + g(x)u
    y = h(x)    (10.34)

and assume that (10.34) has relative degree r. We proceed as follows. Define the following nonlinear transformation:

    z = T(x) = [μ1(x), ..., μ_{n−r}(x), ψ1, ..., ψr]ᵀ ≝ [η, ξ]ᵀ,  η ∈ R^{n−r}, ξ ∈ Rʳ    (10.35)

where

    ψ1 = h(x)
    ψ_{i+1} = (∂ψi/∂x) f(x) = L_f^i h(x),  i = 1, ..., r − 1

and μ1, ..., μ_{n−r} are chosen so that T(x) is a diffeomorphism on a domain D0 ⊂ D and

    (∂μi/∂x) g(x) = 0,  for 1 ≤ i ≤ n − r,  ∀x ∈ D0.    (10.36)

It will be shown later that this change of coordinates transforms the original system (10.34) into the following so-called normal form:

    η̇ = f0(η, ξ)    (10.37)
    ξ̇ = A_c ξ + B_c w [u − φ]    (10.38)
    y = C_c ξ    (10.39)

with

    φ = φ(x)|_{x = T⁻¹(z)},  w = w(x)|_{x = T⁻¹(z)}

where

    w(x) = (∂ψr/∂x) g(x),  φ(x) = − (∂ψr/∂x) f(x) / (∂ψr/∂x) g(x).

The normal form is conceptually important in that it splits the original state x into two parts, η and ξ, with the following properties: ξ represents the external dynamics, which can be linearized by the input

    u* = φ(x) + w⁻¹ v.

When the input u* is applied, equations (10.37)–(10.39) take the form

    η̇ = f0(η, ξ)
    ξ̇ = A_c ξ + B_c v
    y = C_c ξ

and thus η is unobservable from the output: η represents the internal dynamics. From here we conclude that the stability of the internal dynamics is determined by the autonomous equation

    η̇ = f0(η, 0).    (10.40)

This dynamical equation is referred to as the zero dynamics, which we now formally introduce.

Definition 10.11 Given a dynamical system of the form (10.37)–(10.39) (i.e., represented in normal form), the autonomous equation

    η̇ = f0(η, 0)

is called the zero dynamics.

Thus the zero dynamics can be defined as the internal dynamics of the system when the output is kept identically zero by a suitable input function. Before discussing some examples, we notice that setting y ≡ 0 in equations (10.37)–(10.39), we have that

    y ≡ 0  ⇒  ξ ≡ 0,  u(t) = φ(x),  η̇ = f0(η, 0).

This means that the zero dynamics can be determined without transforming the system into normal form. Because of the analogy with the linear system case, systems whose zero dynamics is stable are said to be minimum phase.

Summary: The stability properties of the zero dynamics play a very important role whenever input-output linearization is applied. Input-output linearization is achieved via partial cancelation of nonlinear terms. Two cases should be distinguished:

r = n: If the relative degree of the nonlinear system is the same as the order of the system, then the nonlinear system can be fully linearized, and input-output linearization can be successfully applied.

r < n: If the relative degree of the nonlinear system is lower than the order of the system, then only the external dynamics of order r is linearized. The remaining n − r states are unobservable from the output, and their stability properties are determined by the zero dynamics. If the zero dynamics is not asymptotically stable, then input-output linearization does not produce a control law of any practical use. Thus, whether input-output linearization can be applied successfully depends on the stability of the zero dynamics. This analysis, of course, ignores robustness issues that always arise as a result of imperfect modeling.

Example 10.18 Consider the system

    ẋ1 = −k x1 − 2 x2 u
    ẋ2 = x2 + x1 u
    y = x2.

First we find the relative degree. Differentiating y, we obtain

    ẏ = ẋ2 = x2 + x1 u.

Therefore, r = 1. To determine the zero dynamics, we notice that setting y ≡ 0:

    y = x2 ≡ 0  ⇒  ẏ = 0 = x2 + x1 u  ⇒  u = −x2/x1 = 0.

Therefore the zero dynamics is given by

    ẋ1 = −k x1

which is exponentially stable (globally) if k > 0, and unstable if k < 0. □

Example 10.19 Consider the system

    ẋ1 = x2 + x1²
    ẋ2 = x2 + u
    ẋ3 = x1 + x2 + a x3
    y = x1.

Differentiating y, we obtain

    ẏ = ẋ1 = x2 + x1²
    ÿ = 2x1 ẋ1 + ẋ2 = 2x1(x2 + x1²) + x2 + u.

Therefore r = 2. To find the zero dynamics, we proceed as follows:

    y ≡ 0  ⇒  x1 ≡ 0  ⇒  ẋ1 = 0 = x2 + x1²  ⇒  x2 = 0  ⇒  ẋ2 = x2 + u = 0  ⇒  u = −x2 = 0.

Therefore the zero dynamics is given by

    ẋ3 = a x3.

Moreover, the zero dynamics is asymptotically stable if a < 0, and unstable if a > 0. □
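The zero dynamics of Example 10.19 can be seen in simulation: starting on the output-zeroing manifold (x1 = x2 = 0) and applying the output-zeroing input u = −x2 − 2x1(x2 + x1²), the only motion left is ẋ3 = a x3. A minimal Euler sketch (initial x3 and the two test values of a are illustrative):

```python
# Zero dynamics of Example 10.19: keep y = x1 identically zero and watch x3.
def simulate(a, dt=1e-3, T=2.0):
    x1, x2, x3 = 0.0, 0.0, 1.0
    t = 0.0
    while t < T:
        u = -x2 - 2.0 * x1 * (x2 + x1**2)   # enforces ydot = yddot = 0
        dx1 = x2 + x1**2
        dx2 = x2 + u
        dx3 = x1 + x2 + a * x3
        x1 += dt * dx1
        x2 += dt * dx2
        x3 += dt * dx3
        t += dt
    return x1, x3

x1_s, x3_stable   = simulate(a=-1.0)   # x3 decays like e^{a t}
x1_u, x3_unstable = simulate(a=+1.0)   # x3 grows like e^{a t}
```

The output stays exactly zero in both runs; only the sign of a decides whether the hidden state x3 converges or diverges, i.e., whether the system is minimum phase.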
10.7 Conditions for Input-Output Linearization

According to our discussion in Section 10.6, the input-output linearization procedure depends on the existence of a transformation T(·) that converts the original system of the form

    ẋ = f(x) + g(x)u
    y = h(x)    (10.41)

    f, g : D ⊂ Rⁿ → Rⁿ,  h : D ⊂ Rⁿ → R

into the normal form (10.37)–(10.39). The following theorem states that such a transformation exists for any SISO system of relative degree r ≤ n.

Theorem 10.3 Consider the system (10.41) and assume that it has relative degree r ≤ n ∀x ∈ D0 ⊂ D. Then, for every x0 ∈ D0, there exist a neighborhood Ω of x0 and smooth functions μ1, ..., μ_{n−r} such that

(i) L_g μi(x) = 0, for 1 ≤ i ≤ n − r, ∀x ∈ Ω, and

(ii) the mapping

    T(x) = [μ1(x), ..., μ_{n−r}(x), ψ1, ..., ψr]ᵀ,  ψ1 = h(x), ψ2 = L_f h(x), ..., ψr = L_f^{r−1} h(x)

is a diffeomorphism on Ω.

Proof: See the Appendix.

10.8 Exercises

(10.1) Prove Lemma 10.

(10.2) Consider the magnetic suspension system of Example 10.11. Verify that this system is input-state linearizable.

(10.3) Consider again the magnetic suspension system of Example 10.11. To complete the design, proceed as follows:
(a) Compute the functions w and φ and express the system in the form ż = Az + Bv.
(b) Design a state feedback control law to stabilize the ball at a desired position.

(10.4) Determine whether the system

    ẋ1 = x2³
    ẋ2 = u

is input-state linearizable.

(10.5) Consider the following system:

    ẋ1 = x1 + x2
    ẋ2 = x3 + u
    ẋ3 = x1 + x2 + x3

(a) Determine whether the system is input-state linearizable.
(b) If the answer in part (a) is affirmative, find the linearizing law.

(10.6) Consider the following system:

    ẋ1 = x1 + x3
    ẋ2 = x1 x2
    ẋ3 = x1 sin x2 + u

(a) Determine whether the system is input-state linearizable.
(b) If the answer in part (a) is affirmative, find the linearizing law.

(10.7) ([36]) Consider the following system:

    ẋ1 = x3(1 + x2)
    ẋ2 = x1 + (1 + x2)u
    ẋ3 = x2(1 + x1) − x3 u

(a) Determine whether the system is input-state linearizable.
(b) If the answer in part (a) is affirmative, find the linearizing law.

(10.8) Consider the following system:

    ẋ1 = x1 + x2
    ẋ2 = x3 + u
    ẋ3 = x2 − x2³
    y = x1

(a) Find the relative degree.
(b) Determine whether the system is minimum phase.
(c) Using feedback linearization, design a control law to track a desired signal y = yref.

Notes and References

For a complete coverage of the material in this chapter, see the outstanding books of Isidori [36], Nijmeijer and van der Schaft [57], and Marino and Tomei [52]. The exact input-state linearization problem was solved by Brockett [12] for single-input systems. The multi-input case was developed by Jakubczyk and Respondek [39], Su [77], and Hunt et al. [34]. Our presentation follows Isidori [36], with help from References [68], [88], and [41]. The notion of zero dynamics was introduced by Byrnes and Isidori [14]. The linear time-invariant example of Section 10.6, used to introduce the notion of zero dynamics, follows Slotine and Li [68]. Section 10.7 follows closely References [36] and [11].
Chapter 11

Nonlinear Observers

So far, whenever discussing state space realizations, it was implicitly assumed that the state vector x was available. For example, state feedback was used in Chapter 5, along with the backstepping procedure, assuming that the vector x was measured. Similarly, the results of Chapter 10 on feedback linearization assume that the state x is available for manipulation. Unfortunately, the state x is seldom available, since rarely can one have a sensor on every state variable, and some form of state reconstruction from the available measured output is required. In this case, an observer can be used to obtain an estimate x̂ of the true state x. This reconstruction is possible provided certain "observability conditions" are satisfied. Aside from state feedback, knowledge of the state is sometimes important in problems of fault detection as well as system monitoring. While for linear time-invariant systems observer design is a well-understood problem that enjoys a well-established solution, the nonlinear case is much more challenging, and no universal solution exists. The purpose of this chapter is to provide an introduction to the subject and to collect some very basic results.

11.1 Observers for Linear Time-Invariant Systems

We start with a brief summary of essential results for linear time-invariant systems. Throughout this section we consider a state space realization of the form

    ẋ = Ax + Bu
    y = Cx,    A ∈ R^{n×n}, B ∈ R^{n×1}, C ∈ R^{1×n}    (11.1)

where, for simplicity, we assume that the system is single-input–single-output and that D = 0 in the output equation.
11.1.1 Observability

Definition 11.1 The state space realization (11.1) is said to be observable if, for any initial state x0 and fixed time t1 > 0, the knowledge of the input u and output y over [0, t1] suffices to uniquely determine the initial state x0.

Once x0 is determined, the state x(t1) can be reconstructed using the well-known solution of the state equation:

    x(t1) = e^{At1} x0 + ∫₀^{t1} e^{A(t1−τ)} B u(τ) dτ.    (11.2)

Also

    y(t1) = C x(t1) = C e^{At1} x0 + C ∫₀^{t1} e^{A(t1−τ)} B u(τ) dτ.    (11.3)

Notice that, for fixed u = u*, equation (11.3) defines a linear transformation L that maps x0 to y; accordingly, we can write y(t) = (L x0)(t). By definition, the state space realization (11.1) is observable if and only if the mapping L x0 is one-to-one. Indeed, if this is the case, then the inversion map x0 = L⁻¹ y uniquely determines x0.

Now consider two initial conditions x1 and x2. We have

    y1(t) = (L x1)(t) = C e^{At} x1 + C ∫₀^t e^{A(t−τ)} B u*(τ) dτ    (11.4)
    y2(t) = (L x2)(t) = C e^{At} x2 + C ∫₀^t e^{A(t−τ)} B u*(τ) dτ    (11.5)

and thus

    y1 − y2 = L x1 − L x2 = C e^{At}(x1 − x2).

Thus, the mapping L is one-to-one, and the realization "observable", if and only if

    y1 = y2  ⇒  x1 = x2.

For this to be the case, we must have

    C e^{At} x = 0  ⇒  x = 0

or, equivalently, the null space of Ce^{At} must be trivial. Notice that, according to this discussion, the observability properties of the state space realization (11.1) are independent of the input u and/or the matrix B. Therefore we can assume, without loss of generality, that u ≡ 0, in which case (11.1) reduces to

    ẋ = Ax
    y = Cx.    (11.6)

Given these conditions, we can redefine observability of linear time-invariant state space realizations as follows. From the discussion above, L is one-to-one [and so (11.6), or (11.1), is observable] if and only if

    y(t) = C e^{At} x0 ≡ 0  ⇔  x0 = 0.

We now show that this is the case if and only if

    rank(O) ≝ rank [C; CA; ...; CA^{n−1}] = n.    (11.7)

To see this, note that

    y(t) = C e^{At} x0 = C (I + tA + (t²/2!)A² + ···) x0.

By the Sylvester theorem, only the first n − 1 powers of A are linearly independent. Thus

    y ≡ 0  ⇔  C x0 = C A x0 = C A² x0 = ··· = C A^{n−1} x0 = 0

and the condition

    O x0 = [C; CA; ...; CA^{n−1}] x0 = 0  ⇔  x0 = 0

is satisfied if and only if rank(O) = n. The matrix O is called the observability matrix.
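The rank test (11.7) is easy to mechanize: stack C, CA, ..., CA^{n−1} and compute the rank by Gaussian elimination. The pair (A, C) below is an illustrative example (the second output choice is deliberately unobservable, since CA = −2C).

```python
# Observability matrix O = [C; CA; ...; CA^{n-1}] and a rank test.
def obsv(A, C):
    n = len(A)
    rows, r = [], C[:]
    for _ in range(n):
        rows.append(r)
        # next row = (previous row) * A, as a row vector
        r = [sum(r[k] * A[k][j] for k in range(n)) for j in range(n)]
    return rows

def rank(M, tol=1e-9):
    """Rank by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    m, n = len(M), len(M[0])
    rk = 0
    for col in range(n):
        piv = max(range(rk, m), key=lambda i: abs(M[i][col]), default=None)
        if piv is None or abs(M[piv][col]) < tol:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(rk + 1, m):
            fac = M[i][col] / M[rk][col]
            for j in range(col, n):
                M[i][j] -= fac * M[rk][j]
        rk += 1
    return rk

A = [[0.0, 1.0], [-2.0, -3.0]]
C_obs   = [1.0, 0.0]     # observable: O has rank 2
C_unobs = [1.0, 1.0]     # CA = [-2, -2] = -2*C, so O has rank 1
O1 = obsv(A, C_obs)
O2 = obsv(A, C_unobs)
```

The unobservable case corresponds exactly to a pole-zero cancelation in the associated transfer function.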
Definition 11.2 The state space realization (11.1) [or (11.6)] is said to be observable if

    rank [C; CA; ...; CA^{n−1}] = n.

11.1.2 Observer Form

The following result is well known. Consider the linear time-invariant state space realization (11.1), and let the transfer function associated with this state space realization be

    H(s) = C(sI − A)⁻¹B = (p_{n−1}s^{n−1} + p_{n−2}s^{n−2} + ··· + p0) / (sⁿ + q_{n−1}s^{n−1} + ··· + q0).

Assuming that (A, C) form an observable pair, there exists a nonsingular matrix T ∈ R^{n×n} such that, defining new coordinates x̄ = Tx, the state space realization (11.1) takes the so-called observer form:

    x̄̇ = [ 0 0 ... 0 −q0
           1 0 ... 0 −q1
           0 1 ... 0 −q2
           .............
           0 0 ... 1 −q_{n−1} ] x̄  +  [ p0 p1 ... p_{n−1} ]ᵀ u

    y = [0 ... 0 1] x̄.

11.1.3 Observers for Linear Time-Invariant Systems

Now consider the following observer structure:

    x̂̇ = A x̂ + B u + L(y − C x̂)    (11.8)

where L ∈ R^{n×1} is the so-called observer gain. Defining the observer error

    x̃ ≝ x − x̂    (11.9)

we have that

    x̃̇ = ẋ − x̂̇ = (Ax + Bu) − (A x̂ + Bu + L y − L C x̂) = (A − LC)(x − x̂)

or

    x̃̇ = (A − LC) x̃.    (11.10)
Thus, x̃ → 0 as t → ∞, provided that the so-called observer error dynamics (11.10) is asymptotically (exponentially) stable. This is, of course, the case provided that the eigenvalues of the matrix A − LC lie in the left half of the complex plane. It is a well-known result that, if the observability condition (11.7) is satisfied, then the eigenvalues of (A − LC) can be placed anywhere in the complex plane by suitable selection of the observer gain L.

11.1.4 Separation Principle

Assume that the system (11.1) is controlled via the following control law:

    u = K x̂    (11.11)
    x̂̇ = A x̂ + B u + L(y − C x̂).    (11.12)

Thus, a state feedback law is used in (11.11); however, short of having the true state x available for feedback, an estimate x̂ of the true state was used. Equation (11.12) is the observer equation. We have

    ẋ = Ax + BK x̂ = Ax + BKx − BK x̃ = (A + BK)x − BK x̃.

Also

    x̃̇ = (A − LC) x̃.

Thus

    [ẋ; x̃̇] = [ A + BK  −BK; 0  A − LC ] [x; x̃]

that is, ẋ̄ = Ā x̄, where we have defined x̄ = [x; x̃]ᵀ and Ā as the block matrix above. The eigenvalues of the matrix Ā are the union of those of (A + BK) and (A − LC). From this we conclude that:

(i) The eigenvalues of the observer are not affected by the state feedback, and vice versa.

(ii) The design of the state feedback and the observer can be carried out independently. This is called the separation principle.
11.2 Nonlinear Observability

Now consider the system

    \dot{x} = f(x) + g(x)u
    y = h(x)                                                             (11.13)

    f: R^n -> R^n,  g: R^n -> R^n,  h: R^n -> R.

For simplicity we restrict attention to single-output systems. We also assume that f(.), g(.), and h(.) are sufficiently smooth and that h(0) = 0. There are several subtleties in the observability of nonlinear systems and, in general, checking observability is much more involved than in the linear case. Throughout this section we will need the following notation:

    x_u(t, x0): represents the solution of (11.13) at time t originated by the input u and the initial state x0.
    y(x_u(t, x0)): represents the output y when the state is x_u(t, x0).

Clearly,

    y(x_u(t, x0)) = h(x_u(t, x0)).

Definition 11.3 A pair of states (x0, x1) is said to be distinguishable if there exists an input function u such that

    y(x_u(t, x0)) != y(x_u(t, x1)).

Definition 11.4 The state space realization (11.13) is said to be (locally) observable at x0 in R^n if there exists a neighborhood U0 of x0 such that every state x != x0 in U0 is distinguishable from x0. It is said to be locally observable if it is locally observable at each x0 in R^n.

There is no requirement in Definition 11.4 that distinguishability must hold for all input functions. In the following theorem we consider an unforced nonlinear system of the form

    \dot{x} = f(x)
    y = h(x)                                                             (11.14)

    f: D -> R^n,  h: D -> R,  D a subset of R^n

and look for observability conditions in a neighborhood of the origin x = 0.

Theorem 11.1 The state space realization (11.14) is locally observable in a neighborhood U0 in D containing the origin if

    rank [ \nabla h ; \nabla L_f h ; ... ; \nabla L_f^{n-1} h ] = n   for all x in U0.   (11.15)

Proof: The proof is omitted. See Reference [52] or [36].

We saw earlier that, for linear time-invariant systems, observability is independent of the input function and the B matrix in the state space realization. This property is a consequence of the fact that the mapping x0 -> y is linear.

Example 11.1 Let

    \dot{x} = Ax,   y = Cx.

Then h(x) = Cx and f(x) = Ax, and we have

    \nabla h(x) = C
    \nabla L_f h = \nabla (CAx) = CA
    ...
    \nabla L_f^{n-1} h = CA^{n-1}

and therefore the realization is observable if and only if the set S = {C, CA, CA^2, ..., CA^{n-1}} is linearly independent or, equivalently, if rank(O) = n. Thus, for linear time-invariant systems, condition (11.15) is equivalent to the observability condition (11.7). Of course, for nonlinear systems local observability does not, in general, imply global observability.

The following example shows that, for nonlinear systems, observability is in general not independent of the input. Nonlinear systems often exhibit singular inputs that can render the state space realization unobservable.

Example 11.2 Consider the following state space realization:

    \dot{x}_1 = x_2 (1 - u)
    \dot{x}_2 = x_1
    y = x_1

which is of the form

    \dot{x} = f(x) + g(x)u,   y = h(x)

with

    f(x) = [x_2 ; x_1],   g(x) = [-x_2 ; 0],   h(x) = x_1.

If u = 0, we have

    rank({\nabla h, \nabla L_f h}) = rank({[1 0], [0 1]}) = 2

and thus

    \dot{x} = f(x),   y = h(x)

is observable according to Definition 11.4 and Theorem 11.1. Now consider the same system, but assume that u = 1. Substituting this input function, we obtain the following dynamical equations:

    \dot{x}_1 = 0
    \dot{x}_2 = x_1
    y = x_1

A glimpse at the new linear time-invariant state space realization shows that observability has been lost.
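The rank condition of Theorem 11.1 can be checked symbolically. A sketch assuming SymPy is available, applied to the two vector fields of Example 11.2 (the unforced system and the system obtained with the singular input u = 1):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
h = x1  # output y = x1

def obs_matrix(f, h, x):
    """Stack the gradients of h, L_f h, ..., L_f^{n-1} h (condition (11.15))."""
    rows, lh = [], h
    for _ in range(len(x)):
        rows.append(sp.Matrix([lh]).jacobian(x))
        lh = (sp.Matrix([lh]).jacobian(x) * f)[0]  # next Lie derivative of h along f
    return sp.Matrix.vstack(*rows)

f0 = sp.Matrix([x2, x1])  # u = 0: x1' = x2, x2' = x1
f1 = sp.Matrix([0, x1])   # u = 1: x1' = 0,  x2' = x1

print(obs_matrix(f0, h, x).rank())  # 2: observable
print(obs_matrix(f1, h, x).rank())  # 1: observability lost
```

The drop in rank for u = 1 is exactly the loss of observability seen in Example 11.2.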
There are several ways to approach the nonlinear state reconstruction problem, each applicable to a particular class of systems, depending on the characteristics of the plant. A complete coverage of the subject is outside the scope of this textbook. In the next two sections we discuss two rather different approaches to nonlinear observer design.

11.3 Observers with Linear Error Dynamics

Motivated by the work on feedback linearization, it is tempting to approach nonlinear state reconstruction using the following three-step procedure:

(i) Find an invertible coordinate transformation that linearizes the state space realization.

(ii) Design an observer for the resulting linear system.

(iii) Recover the original state using the inverse coordinate transformation defined in (i).

More explicitly, suppose that, given a system of the form

    \dot{x} = f(x) + g(x, u)
    y = h(x)                                                             (11.16)

    x in R^n,  u in R,  y in R

there exists a diffeomorphism

    z = T(x),   T(0) = 0,   z in R^n                                     (11.17)

such that, after the coordinate transformation, the new state space realization has the form

    \dot{z} = A_0 z + \gamma(y, u)
    y = C_0 z                                                            (11.18)

where

    A_0 = [ 0 0 ... 0 0
            1 0 ... 0 0
            0 1 ... 0 0
            ...
            0 0 ... 1 0 ],     C_0 = [0 0 ... 0 1]                       (11.19)

    \gamma(y, u) = [ \gamma_1(y, u) ; \gamma_2(y, u) ; ... ; \gamma_n(y, u) ].   (11.20)

Then an observer can be constructed according to the following theorem.

Theorem 11.2 ([52]) If there exists a coordinate transformation mapping (11.16) into the form (11.18), then, defining the observer

    \dot{\hat{z}} = A_0 \hat{z} + \gamma(y, u) - K(y - C_0 \hat{z})      (11.21)

with K chosen such that the eigenvalues of (A_0 + K C_0) are in the left half of the complex plane, we have \hat{z} -> z as t -> infinity and, defining

    \hat{x} = T^{-1}(\hat{z}),

\hat{x} -> x as t -> infinity.

Proof: Let \tilde{z} = z - \hat{z}. Using (11.18) and (11.21), we have

    \dot{\tilde{z}} = [A_0 z + \gamma(y, u)] - [A_0 \hat{z} + \gamma(y, u) - K(y - C_0 \hat{z})]
                    = A_0 \tilde{z} + K C_0 (z - \hat{z}) = (A_0 + K C_0)\tilde{z}.

If the eigenvalues of (A_0 + K C_0) have negative real part, then \tilde{z} -> 0 as t -> infinity. Moreover, since T is a diffeomorphism,

    \tilde{x} = x - \hat{x} = T^{-1}(z) - T^{-1}(z - \tilde{z}) -> 0   as t -> infinity.
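For the companion pair (A_0, C_0) of (11.19), choosing K amounts to choosing the characteristic polynomial of A_0 + K C_0. A small numerical sketch for n = 2 (the gain values are illustrative, not from the text):

```python
import numpy as np

# Companion-form pair from (11.19) with n = 2
A0 = np.array([[0.0, 0.0], [1.0, 0.0]])
C0 = np.array([[0.0, 1.0]])

# With K = [-K1, -K2]^T, A0 + K C0 = [[0, -K1], [1, -K2]] has characteristic
# polynomial s^2 + K2 s + K1, so any K1, K2 > 0 is Hurwitz.
# Here K1 = 6, K2 = 5 places the error poles at -2 and -3.
K = np.array([[-6.0], [-5.0]])
poles = np.linalg.eigvals(A0 + K @ C0)
print(sorted(poles.real))  # [-3.0, -2.0]
```

Because the error dynamics (A_0 + K C_0) is exactly linear, this single pole-placement step fixes the convergence rate of \hat{z} regardless of the nonlinearity \gamma.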
Example 11.3 Consider the following dynamical system

    \dot{x}_1 = x_2
    \dot{x}_2 = x_1 x_2 + 2 x_1^3 + x_1^3 u                              (11.22)
    y = x_1

and define the coordinate transformation

    z_1 = x_2 - (1/2) x_1^2,   z_2 = x_1.

In the new coordinates, the system (11.22) takes the form

    \dot{z}_1 = 2 y^3 + y^3 u
    \dot{z}_2 = z_1 + (1/2) y^2
    y = z_2

which is of the form (11.18) with

    A_0 = [0 0 ; 1 0],   C_0 = [0 1],   \gamma(y, u) = [ 2y^3 + y^3 u ; (1/2) y^2 ].

The observer is

    \dot{\hat{z}}_1 = 2 y^3 + y^3 u + K_1 (y - \hat{z}_2)
    \dot{\hat{z}}_2 = \hat{z}_1 + (1/2) y^2 + K_2 (y - \hat{z}_2)

that is, (11.21) with K = [-K_1 ; -K_2]. The error dynamics is

    \dot{\tilde{z}} = [ 0  -K_1 ; 1  -K_2 ] \tilde{z}

whose characteristic polynomial is s^2 + K_2 s + K_1. Thus, \tilde{z} -> 0 for any K_1, K_2 > 0.

It should come as no surprise that, as in the case of feedback linearization, this approach to observer design is based on cancellation of nonlinearities and therefore assumes "perfect modeling." In practice, perfect modeling is never achieved, because system parameters cannot be identified with arbitrary precision. Thus, in general, the "expected" cancellations will not take place and the error dynamics will not be linear. The result is that this observer scheme is not robust with respect to parameter uncertainties, and convergence of the observer is not guaranteed in the presence of model uncertainties.
11.4 Lipschitz Systems

The nonlinear observer discussed in the previous section is inspired by the work on feedback linearization and belongs to the category of what can be called the differential geometric approach. In this section we show that observer design can also be studied using a Lyapunov approach. For simplicity, we restrict attention to the case of Lipschitz systems, defined below.

Consider a system of the form

    \dot{x} = Ax + f(x, u)
    y = Cx                                                               (11.23)

where A is in R^{n x n}, C is in R^{1 x n}, and f: R^n x R -> R^n is Lipschitz in x on an open set D in R^n, i.e., f satisfies the following condition:

    ||f(x_1, u*) - f(x_2, u*)|| <= \gamma ||x_1 - x_2||   for all x_1, x_2 in D.   (11.24)

Now consider the following observer structure:

    \dot{\hat{x}} = A\hat{x} + f(\hat{x}, u) + L(y - C\hat{x})           (11.25)

where L is in R^{n x 1}. The following theorem shows that, under these assumptions, the estimation error converges to zero as t -> infinity.

Theorem 11.3 Given the system (11.23) and the corresponding observer (11.25), if the Lyapunov equation

    P(A - LC) + (A - LC)^T P = -Q                                        (11.26)

where P = P^T > 0 and Q = Q^T > 0, is satisfied with

    \gamma < \lambda_min(Q) / (2 \lambda_max(P))                         (11.27)

then the observer error \tilde{x} = x - \hat{x} is asymptotically stable.
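Condition (11.27) is straightforward to test numerically for a candidate gain L: solve the Lyapunov equation (11.26) and compare the Lipschitz constant against the bound. A sketch assuming SciPy is available; the system matrices below are illustrative, not from the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lipschitz_margin(A, C, L, Q):
    """Right-hand side of (11.27): the observer tolerates any gamma below this."""
    Acl = A - L @ C
    # We need P*Acl + Acl^T*P = -Q. SciPy solves a*X + X*a^T = q, so pass a = Acl^T.
    P = solve_continuous_lyapunov(Acl.T, -Q)
    assert np.all(np.linalg.eigvalsh(P) > 0), "P must be positive definite"
    return np.linalg.eigvalsh(Q).min() / (2.0 * np.linalg.eigvalsh(P).max())

# Illustrative data: double integrator with L placing the poles of A - LC at -1, -2
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])
print(lipschitz_margin(A, C, L, np.eye(2)))  # ~0.382 for this choice of L
```

Note that the margin depends on L through P, which is precisely why choosing a good gain for Lipschitz observers is nontrivial.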
Proof: The error dynamics is

    \dot{\tilde{x}} = [Ax + f(x, u)] - [A\hat{x} + f(\hat{x}, u) + L(Cx - C\hat{x})]
                    = (A - LC)\tilde{x} + f(x, u) - f(\hat{x}, u).

To see that the error dynamics has an asymptotically stable equilibrium point at the origin, consider the Lyapunov function candidate

    V = \tilde{x}^T P \tilde{x}.

We have

    \dot{V} = \dot{\tilde{x}}^T P \tilde{x} + \tilde{x}^T P \dot{\tilde{x}}
            = -\tilde{x}^T Q \tilde{x} + 2 \tilde{x}^T P [f(\tilde{x} + \hat{x}, u) - f(\hat{x}, u)]

but

    -\tilde{x}^T Q \tilde{x} <= -\lambda_min(Q) ||\tilde{x}||^2

and

    |2 \tilde{x}^T P [f(\tilde{x} + \hat{x}, u) - f(\hat{x}, u)]| <= 2 ||\tilde{x}|| \lambda_max(P) \gamma ||\tilde{x}|| = 2 \gamma \lambda_max(P) ||\tilde{x}||^2.

Therefore, \dot{V} is negative definite provided that

    \lambda_min(Q) ||\tilde{x}||^2 > 2 \gamma \lambda_max(P) ||\tilde{x}||^2

or, equivalently,

    \gamma < \lambda_min(Q) / (2 \lambda_max(P)).

Example 11.4 Consider the following system:

    [ \dot{x}_1 ]   [ 0  1 ] [ x_1 ]   [ 0     ]
    [ \dot{x}_2 ] = [ 1 -2 ] [ x_2 ] + [ x_2^2 ]

    y = [1 0] [x_1 ; x_2].

Setting

    L = [0 ; 2]

we have that

    A - LC = [ 0  1 ; -1 -2 ].

Solving the Lyapunov equation

    P(A - LC) + (A - LC)^T P = -Q

with Q = I, we obtain

    P = [ 1.5  0.5 ; 0.5  0.5 ]

which is positive definite. The eigenvalues of P are \lambda_min(P) = 0.2929 and \lambda_max(P) = 1.7071. We now consider the function f. Denoting

    x_1 = [\xi_1 ; \xi_2],   x_2 = [\mu_1 ; \mu_2]

we have that

    ||f(x_1) - f(x_2)||^2 = (\xi_2^2 - \mu_2^2)^2

and

    |\xi_2^2 - \mu_2^2| = |(\xi_2 + \mu_2)(\xi_2 - \mu_2)| <= 2k |\xi_2 - \mu_2|

for all x in the set

    S = { x = [\xi_1 ; \xi_2]^T : |\xi_2| <= k }.

Thus

    ||f(x_1) - f(x_2)|| <= 2k ||x_1 - x_2||

for all x satisfying |\xi_2| <= k; i.e., \gamma = 2k and f is Lipschitz in S. Condition (11.27) then requires

    \gamma = 2k < \lambda_min(Q) / (2 \lambda_max(P)) = 1 / (2 x 1.7071)

or

    k < 1/6.8284.

The parameter k determines the region of the state space where the observer is guaranteed to work. Of course, this region is a function of the matrix P, and so a function of the observer gain L. How to maximize this region is not trivial (see "Notes and References" at the end of this chapter).
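The numbers in Example 11.4 can be reproduced directly. A sketch assuming SciPy, using the example's matrices as read above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [1.0, -2.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.0], [2.0]])
Acl = A - L @ C                              # [[0, 1], [-1, -2]]

# P(A-LC) + (A-LC)^T P = -I; SciPy's form is a*X + X*a^T = q, so pass a = Acl^T
P = solve_continuous_lyapunov(Acl.T, -np.eye(2))
print(P)                                     # [[1.5, 0.5], [0.5, 0.5]]

lam = np.linalg.eigvalsh(P)
print(lam)                                   # approx [0.2929, 1.7071]
print(1.0 / (4.0 * lam.max()))               # bound on k, since gamma = 2k < 1/(2*lam_max)
```

The eigenvalues 1 +/- sqrt(2)/2 of P confirm the bound k < 1/6.8284 quoted in the example.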
11.5 Nonlinear Separation Principle

In Section 11.1.4 we discussed the well-known separation principle for linear time-invariant (LTI) systems. This principle guarantees that output feedback can be approached in two steps:

(i) Design a state feedback law assuming that the state x is available.

(ii) Design an observer, and replace x with the estimate \hat{x} in the control law.

In general, nonlinear systems do not enjoy the same properties. Indeed, if the true state is replaced by the estimate given by an observer, then exponential stability of the observer does not, in general, guarantee closed-loop stability. To see this, consider the following example.

Example 11.5 ([47]) Consider the following system:

    \dot{x} = -x + x^4 + x^2 \xi                                         (11.28)
    \dot{\xi} = -k\xi + u,   k > 0.                                      (11.29)

We proceed to design a control law using backstepping. Using \xi as the input in (11.28), we choose the control law \phi_1(x) = -x^2, which gives

    \dot{x} = -x + x^4 + x^2 \phi_1(x) = -x.

Now define the error state variable

    z = \xi - \phi_1(x) = \xi + x^2.                                     (11.30)

With the new variable, the system (11.28)-(11.29) becomes

    \dot{x} = -x + x^2 z                                                 (11.31)
    \dot{z} = \dot{\xi} + 2x\dot{x} = -k\xi + u + 2x(-x + x^2 z).        (11.32)

Letting

    V = (1/2)(x^2 + z^2)                                                 (11.33)

we have

    \dot{V} = -x^2 + z[x^3 - k\xi + u + 2x(-x + x^2 z)]

and taking

    u = -cz - x^3 + k\xi - 2x(-x + x^2 z),   c > 0                       (11.34)

we obtain

    \dot{V} = -x^2 - c z^2                                               (11.35)

which implies that (x, z) = (0, 0) is a globally asymptotically stable equilibrium point of the system

    \dot{x} = -x + x^2 z
    \dot{z} = -cz - x^3.

This control law, of course, assumes that both x and \xi are measured. Now assume that only x is measured, and suppose that an observer is used to estimate \xi, that is, one that estimates only the nonavailable state \xi. This is a reduced-order observer. Let the observer be given by

    \dot{\hat{\xi}} = -k\hat{\xi} + u.

The estimation error is \tilde{\xi} = \xi - \hat{\xi}. We have

    \dot{\tilde{\xi}} = \dot{\xi} - \dot{\hat{\xi}} = -k\tilde{\xi}.

It follows that

    \tilde{\xi}(t) = \tilde{\xi}_0 e^{-kt}

which implies that \tilde{\xi} exponentially converges to zero; i.e., \hat{\xi} exponentially converges toward \xi.

Even though \tilde{\xi} converges to zero exponentially, the terms x^2 \tilde{\xi} and 2x^3 \tilde{\xi} that appear in the closed loop when the estimate \hat{\xi} is used in the control law (11.34) lead to finite escape time for certain initial conditions. To see this, note that writing \xi = \hat{\xi} + \tilde{\xi} and redefining z = \hat{\xi} + x^2, we obtain

    \dot{x} = -x + x^2 z + x^2 \tilde{\xi}.

Assume for simplicity that z is identically zero, so that

    \dot{x} = -x + x^2 \tilde{\xi},   \tilde{\xi} = \tilde{\xi}_0 e^{-kt}.

Solving for x, we obtain

    x(t) = x_0 (1 + k) / [ (1 + k - \tilde{\xi}_0 x_0) e^{t} + \tilde{\xi}_0 x_0 e^{-kt} ]

which implies that the state x grows to infinity in finite time for any initial condition satisfying \tilde{\xi}_0 x_0 > 1 + k.
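The finite escape in Example 11.5 can be read off the closed-form solution: the denominator of x(t) starts at 1 + k > 0 but crosses zero in finite time whenever \tilde{\xi}_0 x_0 > 1 + k. A pure-Python sketch with illustrative values:

```python
import math

k, x0, e0 = 1.0, 3.0, 1.0  # e0 = initial estimation error; e0*x0 = 3 > 1 + k = 2

def denom(t):
    # denominator of x(t) = x0*(1+k) / [(1+k - e0*x0)*exp(t) + e0*x0*exp(-k*t)]
    return (1.0 + k - e0 * x0) * math.exp(t) + e0 * x0 * math.exp(-k * t)

# bisect for the escape time, where the denominator vanishes and x(t) blows up
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if denom(mid) > 0 else (lo, mid)

print(lo)  # ~0.5493 = 0.5*ln(3) for these values
```

For these numbers the denominator is -e^t + 3e^{-t}, so the escape time solves e^{2t} = 3, even though the estimation error itself is decaying exponentially.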
Notes and References

Despite recent progress, nonlinear observer design is a topic that has no universal solution and one that remains a very active area of research.

Observers with linear error dynamics were first studied by Krener and Isidori [45] and Bestle and Zeitz [10] in the single-input-single-output case. Extensions to multivariable systems were obtained by Krener and Respondek [46] and Keller [42]. Section 11.3 is based on Reference [45] as well as References [52] and [36]. Besides robustness issues, one of the main problems with this approach to observer design is that the conditions required for the existence of these observers are very stringent and are satisfied by a rather narrow class of systems.

Observers for Lipschitz systems were first discussed by Thau [80]; Section 11.4 is based on this reference. The main problem with this approach is that choosing the observer gain L so as to satisfy condition (11.27) is not straightforward. See References [61], [90], and [96] for further insight into how to design the observer gain L.

Output feedback is a topic that has received a great deal of attention in recent years. As mentioned, the separation principle does not hold true for general nonlinear systems; conditions under which the principle is valid were derived by several authors. See References [4], [5], [8], [47], [60], [82], and [83].

There are several other approaches to nonlinear observer design, not covered in this chapter. The list of notable omissions includes adaptive observers [52]-[54], high-gain observers, and observer backstepping [4], [5].
Appendix A

Proofs

The purpose of this appendix is to provide a proof of several lemmas and theorems encountered throughout the book. For ease of reference, we restate the lemmas and theorems before each proof.

A.1 Chapter 3

Lemma 3.1: V: D -> R is positive definite if and only if there exist class K functions \alpha_1 and \alpha_2 such that

    \alpha_1(||x||) <= V(x) <= \alpha_2(||x||)   for all x in B_r in D.

Moreover, if D = R^n and V(.) is radially unbounded, then \alpha_1 and \alpha_2 can be chosen in the class K_infinity.

Proof: Given V(x): D in R^n -> R, we show that V(x) is positive definite if and only if such \alpha_i in K exist. It is clear that the existence of the \alpha_i is a sufficient condition for the positive definiteness of V. To prove necessity, define

    \chi(y) = min_{y <= ||x|| <= r} V(x),   for 0 < y <= r.

The function \chi(.) is well defined since V(.) is continuous and y <= ||x|| <= r defines a compact set in R^n. The function \chi(.) so defined has the following properties:

(i) \chi(||x||) <= V(x) for 0 < ||x|| <= r.
(ii) It is continuous.
(iii) It is positive definite [since V(x) > 0 for 0 < ||x|| <= r].
(iv) It satisfies \chi(0) = 0.

It is also nondecreasing since, as y increases, the set over which the minimum is computed keeps "shrinking" (Figure A.1). Thus \chi is "almost" in the class K; it is not, however, in K because, in general, it is not strictly increasing. Let \alpha_1(y) be a class K function such that \alpha_1(y) <= k\chi(y) with 0 < k < 1. Such a function can be constructed as follows:

    \alpha_1(s) = s min_{s <= y <= r} [\chi(y)/y].                       (A.1)

This function is strictly increasing, since it is the product of s and a positive, nondecreasing factor, and it satisfies \alpha_1(s) <= \chi(s). Thus

    \alpha_1(||x||) <= \chi(||x||) <= V(x)   for ||x|| <= r.

This proves that there exists \alpha_1 in K such that \alpha_1(||x||) <= V(x) for ||x|| <= r. The existence of \alpha_2 in K such that

    V(x) <= \alpha_2(||x||)   for ||x|| <= r

can be proved similarly.

Figure A.1: Asymptotically stable equilibrium point.
Figure A.2: Asymptotically stable equilibrium point.

To simplify the proof of the following lemma, we assume that the equilibrium point is the origin, x_e = 0.

Lemma 3.2: The equilibrium x_e = 0 of the system (3.1) is stable if and only if there exist a class K function \alpha(.) and a constant \delta > 0 such that

    ||x(0)|| < \delta  =>  ||x(t)|| <= \alpha(||x(0)||)   for all t >= 0.   (A.2)

Proof: Suppose first that (A.2) is satisfied. We must show that this condition implies that for each \epsilon_1 > 0 there exists \delta_1 = \delta_1(\epsilon_1) > 0 such that

    ||x(0)|| < \delta_1  =>  ||x(t)|| < \epsilon_1   for all t >= t_0.

Given \epsilon_1, choose \delta_1 = min{\delta, \alpha^{-1}(\epsilon_1)}. Thus, for ||x(0)|| < \delta_1, we have

    ||x(t)|| <= \alpha(||x(0)||) < \alpha(\alpha^{-1}(\epsilon_1)) = \epsilon_1.   (A.3)

For the converse, assume that the origin is stable. Given \epsilon_1, let A in R^+ be the set defined as follows:

    A = { \delta_1 in R^+ : ||x(0)|| < \delta_1 => ||x(t)|| < \epsilon_1 }.

This set is bounded above and therefore has a supremum. Let \delta^* = sup(A). Thus

    ||x(0)|| < \delta^*  =>  ||x(t)|| < \epsilon_1   for all t >= t_0.

We now define the mapping \Phi: R^+ -> R^+, \Phi: \epsilon_1 -> \delta^*(\epsilon_1), which satisfies the following: (i) \Phi(0) = 0; (ii) \Phi(\epsilon_1) > 0 for all \epsilon_1 > 0; and (iii) it is nondecreasing, but not necessarily continuous. We can find a class K function \hat{\phi} that satisfies \hat{\phi}(r) <= \Phi(r) for each r. Since the inverse of a function in the class K is also in the same class, we can define \alpha = \hat{\phi}^{-1} in K. Then, whenever ||x(0)|| < \hat{\phi}(\epsilon), we have ||x(t)|| < \epsilon = \alpha(\hat{\phi}(\epsilon)); that is, ||x(t)|| <= \alpha(||x(0)||), which is (A.2).

Lemma 3.3: The equilibrium x_e = 0 of the system (3.1) is asymptotically stable if and only if there exist a class KL function \beta(.,.) and a constant \delta > 0 such that

    ||x_0|| < \delta  =>  ||x(t)|| <= \beta(||x_0||, t)   for all t >= 0.   (A.4)

Proof: Suppose that (A.4) is satisfied. Then

    ||x(t)|| <= \beta(||x_0||, t) <= \beta(||x_0||, 0)   whenever ||x_0|| < \delta

and the origin is stable by Lemma 3.2, taking \alpha(.) = \beta(., 0) in K. Moreover, \beta(||x_0||, t) -> 0 as t -> infinity, so ||x(t)|| -> 0 as t -> infinity and x = 0 is also convergent. It then follows that x = 0 is asymptotically stable. The rest of the proof (i.e., the converse argument) requires a tedious construction of a class KL function \beta and is omitted. The reader can consult Reference [41] or [88] for a complete proof.

The next result is the lemma used in the variable gradient construction of Lyapunov functions in Chapter 3: a function g(x) is the gradient of a scalar function V(x), that is, g(x) = \nabla V(x), if and only if the matrix

    \partial g / \partial x = [ \partial g_i / \partial x_j ],   i, j = 1, ..., n

is symmetric, i.e.,

    \partial g_i / \partial x_j = \partial g_j / \partial x_i   for all i, j.
Proof: To prove necessity, assume that g(x) = \nabla V(x). Then

    \partial V / \partial x_i = g_i(x)   for all i

and thus

    \partial^2 V / (\partial x_j \partial x_i) = \partial g_i / \partial x_j

and the result follows, since the mixed partial derivatives of V are equal: \partial^2 V / (\partial x_j \partial x_i) = \partial^2 V / (\partial x_i \partial x_j).

To prove sufficiency, assume that

    \partial g_i / \partial x_j = \partial g_j / \partial x_i

and consider the integration as defined in (3.7), performed along the coordinate axes:

    V(x) = \int_0^x g(x) dx
         = \int_0^{x_1} g_1(s_1, 0, ..., 0) ds_1 + \int_0^{x_2} g_2(x_1, s_2, 0, ..., 0) ds_2
           + ... + \int_0^{x_n} g_n(x_1, x_2, ..., s_n) ds_n.                       (A.5)

We now compute the partial derivatives of this function:

    \partial V / \partial x_1 = g_1(x_1, 0, ..., 0) + \int_0^{x_2} (\partial g_2 / \partial x_1)(x_1, s_2, 0, ..., 0) ds_2
                                + ... + \int_0^{x_n} (\partial g_n / \partial x_1)(x_1, x_2, ..., s_n) ds_n.

Using the symmetry assumption, \partial g_i / \partial x_1 = \partial g_1 / \partial x_i, each integral can be evaluated; for instance,

    \int_0^{x_2} (\partial g_1 / \partial x_2)(x_1, s_2, 0, ..., 0) ds_2 = g_1(x_1, x_2, 0, ..., 0) - g_1(x_1, 0, ..., 0)

and the sum telescopes, giving

    \partial V / \partial x_1 = g_1(x_1, x_2, ..., x_n) = g_1(x).

Proceeding in the same fashion, it can be shown that

    \partial V / \partial x_i = g_i(x)   for all i.

Lemma 3.4: If the solution x(t, x_0, t_0) of the system (3.1) is bounded for t > t_0, then its (positive) limit set N is (i) bounded, (ii) closed, and (iii) nonempty. Moreover, the solution approaches N as t -> infinity.

Proof: Clearly N is bounded, since the solution itself is bounded. Now take an arbitrary infinite sequence {t_n} that tends to infinity with n. Since the solution is bounded, the sequence of points x(t_n, x_0, t_0) is bounded and, by the properties of bounded sequences, it must contain a convergent subsequence that converges to some point p in R^n. By Definition 3.12, p is in N, and thus N is nonempty.

To prove that N is closed, we must show that it contains all its limit points. Consider a sequence of points {q_i} in N that approaches a limit point q as i -> infinity. Given \epsilon > 0, there exists q_k in N such that

    ||q_k - q|| < \epsilon / 2.

Since q_k is in N, there is a sequence {t_n} such that t_n -> infinity as n -> infinity and

    ||x(t_n, x_0, t_0) - q_k|| < \epsilon / 2.

Thus

    ||x(t_n, x_0, t_0) - q|| < \epsilon

which implies that q is in N; thus N contains its limit points, and it is closed.

It remains to show that x(t, x_0, t_0) approaches N as t -> infinity. Suppose that this is not the case. Then there exist \epsilon > 0 and a sequence {t_n}, with t_n -> infinity, such that

    ||x(t_n, x_0, t_0) - p|| >= \epsilon

for every p in the closed set N. Since the infinite sequence x(t_n, x_0, t_0) is bounded, it contains a subsequence that converges to a point q, which by Definition 3.12 must belong to N. But this is not possible, since it contradicts the inequality above. Thus, the solution approaches N as t -> infinity.
Lemma 3.5: The positive limit set N of a solution x(t, x_0, t_0) of the autonomous system

    \dot{x} = f(x)                                                       (A.6)

is invariant with respect to (A.6).

Proof: We need to show that if p is in N, then x(t, p, t_0) is in N for all t >= t_0. Let p be in N. Then there exists a sequence {t_n}, with t_n -> infinity as n -> infinity, such that x(t_n, x_0, t_0) -> p. Since the solution of (A.6) is continuous with respect to the initial conditions, we have

    lim_{n -> infinity} x[t, x(t_n, x_0, t_0), t_0] = x(t, p, t_0).      (A.7)

Also, because the system is autonomous,

    x[t, x(t_n, x_0, t_0), t_0] = x[t + t_n, x_0, t_0]

and since x(t, x_0, t_0) approaches N as t -> infinity, (A.7) shows that x(t, p, t_0) is in N for all t.

A.2 Chapter 4

Theorem 4.6: Consider the system \dot{x} = A(t)x. The equilibrium state x = 0 is exponentially stable if and only if, for any given symmetric, positive definite, continuous, and bounded matrix Q(t), there exists a symmetric, positive definite, continuously differentiable, and bounded matrix P(t) such that

    -Q(t) = P(t)A(t) + A^T(t)P(t) + \dot{P}(t).

Proof: Sufficiency is straightforward. To prove necessity, let

    P(t) = \int_t^infinity \Phi^T(\tau, t) Q(\tau) \Phi(\tau, t) d\tau   (A.8)

where \Phi(\tau, t) is the state transition matrix of \dot{x} = A(t)x. That P is symmetric follows from the definition. Since the equilibrium is exponentially stable, there exist k_1, k_2 > 0 such that (see, for example, Chen [15], Chapter 4)

    ||\Phi(\tau, t)|| <= k_1 e^{-k_2 (\tau - t)}.

The boundedness of Q(t) implies that there exists M such that ||Q(t)|| <= M for all t. Thus

    x^T P x = \int_t^infinity x^T \Phi^T(\tau, t) Q(\tau) \Phi(\tau, t) x d\tau
            <= \int_t^infinity k_1^2 M e^{-2k_2(\tau - t)} d\tau ||x||^2 = (k_1^2 M / 2k_2) ||x||^2

so x^T P x <= \lambda_2 ||x||^2 with \lambda_2 = k_1^2 M / (2k_2). On the other hand, since Q is positive definite, there exists a > 0 such that x^T Q x >= a x^T x. Also, A bounded implies that ||A(t)|| <= N for all t; hence, along the solutions x(\tau) = \Phi(\tau, t)x,

    | d/d\tau (x^T x) | = | x^T (A^T(\tau) + A(\tau)) x | <= 2N x^T x

so that ||x(\tau)||^2 >= e^{-2N(\tau - t)} ||x(t)||^2, and therefore

    x^T P x = \int_t^infinity x(\tau)^T Q(\tau) x(\tau) d\tau >= a \int_t^infinity ||x(\tau)||^2 d\tau >= (a / 2N) ||x||^2.

Thus \lambda_1 ||x||^2 <= x^T P x <= \lambda_2 ||x||^2, which shows that P is positive definite and bounded. Finally, there remains to show that P satisfies the Lyapunov equation. Differentiating (A.8), and using the identity \partial \Phi(\tau, t)/\partial t = -\Phi(\tau, t)A(t), we obtain

    \dot{P}(t) = -Q(t) + \int_t^infinity [ (\partial \Phi^T/\partial t) Q \Phi + \Phi^T Q (\partial \Phi/\partial t) ] d\tau
               = -Q(t) - A^T(t) P(t) - P(t) A(t)

which completes the proof.
A.3 Chapter 6

Theorem 6.1: Consider a function F(s) in R(s), and let n(s) and d(s) in R[s] be, respectively, the numerator and denominator polynomials of F(s). Then F(s) is in \hat{A} if and only if (i) F(s) is proper, and (ii) all poles of F(s) lie in the left half of the complex plane.

Proof: Assume first that F(s) satisfies (i) and (ii). Dividing n(s) by d(s), we can express F(s) as

    F(s) = k + \hat{n}(s)/d(s) = k + G(s)                                (A.9)

where k is in R and G(s) is strictly proper. Expanding G(s) in partial fractions and antitransforming (A.9), we have

    f(t) = L^{-1}{F(s)} = k \delta(t) + \sum_{i=1}^{n} g_i(t)

where n is the degree of d(s) and each g_i has one of the following forms [u(t) represents the unit step function]: (i) k t^{i-1} e^{\lambda t} u(t), if \lambda < 0 is a real pole of multiplicity m; (ii) k t^{i-1} e^{(\sigma + j\omega)t} u(t), with \sigma < 0, if \lambda = (\sigma + j\omega) is a complex pole. It follows that f(t) is in A, and so F(s) is in \hat{A}. To prove the converse, notice that if (i) does not hold, then f(.) contains derivatives of impulses and so does not belong to A. Also, if (ii) does not hold, then, for some i, g_i is not in L_1, and then f(t) is not in A and F(s) is not in \hat{A}.

Theorem 6.2: Consider a linear time-invariant system H, and let h(t) = h_0 \delta(t) + h_a(t) represent its impulse response. Then H is L_p stable if and only if h is in A; moreover, if H is L_p stable, then ||Hx||_{L_p} <= ||h||_A ||x||_{L_p}.

Proof: The necessity is obvious. To prove sufficiency, assume h is in A, and consider the output of H to an input u in L_p:

    h(t) * u(t) = h_0 u(t) + \int_0^t h_a(t - \tau) u(\tau) d\tau = g_1 + g_2.

We analyze both terms separately. Clearly

    ||g_{1T}||_{L_p} = |h_0| ||u_T||_{L_p} <= |h_0| ||u||_{L_p}.

For the second term, we have that

    |g_2(t)| = | \int_0^t h_a(t - \tau) u(\tau) d\tau | <= \int_0^t |h_a(t - \tau)| |u(\tau)| d\tau   (A.10)

and, choosing p and q in R^+ such that 1/p + 1/q = 1 and writing

    |h_a(t - \tau)| |u(\tau)| = |h_a(t - \tau)|^{1/q} |h_a(t - \tau)|^{1/p} |u(\tau)|

by Hoelder's inequality (6.5) we have that

    |g_2(t)| <= ( \int_0^t |h_a(t - \tau)| d\tau )^{1/q} ( \int_0^t |h_a(t - \tau)| |u(\tau)|^p d\tau )^{1/p}
             <= ( ||(h_a)_T||_{L_1} )^{1/q} ( \int_0^t |h_a(t - \tau)| |u(\tau)|^p d\tau )^{1/p}.   (A.11)

Thus

    ( ||(g_2)_T||_{L_p} )^p = \int_0^T |g_2(s)|^p ds
        <= ( ||(h_a)_T||_{L_1} )^{p/q} \int_0^T ( \int_0^s |h_a(s - \tau)| |u(\tau)|^p d\tau ) ds

and, reversing the order of integration,

    ( ||(g_2)_T||_{L_p} )^p <= ( ||(h_a)_T||_{L_1} )^{p/q} \int_0^T |u(\tau)|^p ( \int_\tau^T |h_a(s - \tau)| ds ) d\tau
                            <= ( ||(h_a)_T||_{L_1} )^{p/q} ||(h_a)_T||_{L_1} ( ||u_T||_{L_p} )^p
                            = ( ||(h_a)_T||_{L_1} ||u_T||_{L_p} )^p.                      (A.12)

It follows that ||(g_2)_T||_{L_p} <= ||(h_a)_T||_{L_1} ||u_T||_{L_p} and, from (A.10)-(A.12), we conclude that

    ||(h * u)_T||_{L_p} <= ||(g_1)_T||_{L_p} + ||(g_2)_T||_{L_p} <= ( |h_0| + ||(h_a)_T||_{L_1} ) ||u_T||_{L_p}

and since, by assumption, u is in L_p and h_a is in L_1, we can take limits as T -> infinity to obtain

    ||Hu||_{L_p} = ||h * u||_{L_p} <= ( |h_0| + ||h_a||_{L_1} ) ||u||_{L_p} = ||h||_A ||u||_{L_p}.

Theorem 6.6: Consider the feedback interconnection of the subsystems H_1 and H_2: L_{2e} -> L_{2e}, and let H_1 be a linear time-invariant system with transfer function G(s) = n(s)/d(s), where G(s) satisfies assumptions (i)-(iii) stated in Theorem 6.3. Assume H_2 is a nonlinearity \phi in the sector [\alpha, \beta]. Under these assumptions, if one of the following conditions is satisfied, then the system is L_2 stable:

(a) If 0 < \alpha <= \beta: The Nyquist plot of G(s) is bounded away from the critical circle C*, centered on the real line and passing through the points (-\alpha^{-1} + j0) and (-\beta^{-1} + j0), and encircles it \nu times in the counterclockwise direction, where \nu is the number of poles of G(s) in the open right half plane.

(b) If 0 = \alpha < \beta: G(s) has no poles in the open right half plane, and the Nyquist plot of G(s) remains to the right of the vertical line with abscissa -\beta^{-1} for all \omega in R.

(c) If \alpha < 0 < \beta: G(s) has no poles in the closed right half of the complex plane, and the Nyquist plot of G(s) is contained entirely within the interior of the circle C*.
.23) and (6.
(A. = IIh * ullc.6) and define the
system SK by applying a loop transformation of the Type I with K = q defined as follows:
o
f (l3 + a) 2
.6.
15)
By Theorem 6.9.
We have
Hi
H2'
= H [1+ q Hi ]s
s
=
G(s)
[1 + qd (s)]
(A
.
(ii) 7(Hi)7(Hz) < 1. Namely. the gain of H2 is
2
a)
(see Figure A.3: Characteristics of H2 and H2.318
APPENDIX A.
'Here condition (ii) actually implies condition (i). By lemma 6. if H2 = ¢ E [a.
.
(A. Separation into two conditions will help clarify the rest of the proof. the stability of the original system S can be analyzed using the modified system SK. The significance is that the type I loop transformation has a considerable effect on the nonlinearity 0.4.
Condition (i): H2 is bounded by assumption. if the following two conditions are satisfied. and moreover. it is easy to see that H2 = 0' E [r. (qs Hi is G2 stable if and only if the Nyquist plot of d(s) encircles the point + 30) v
times in counterclockwise direction.
14)
= H2q.
(i) Hl and H2 are G2 stable. 7(H2) = r. I3].3. PROOFS
x
Figure A.16)
According to the small gain theorem.)
y(H2) = r
(A. r]. where
r f (Q
Thus.1. then the system is stables.
Therefore.
. i.19)
Let G(3w) = x + jy. then C satisfies (x + a)2 = R2.r2) + y2(g2 .6. CHAPTER 6
319
Condition (ii): H1 and Hi are linear timeinvariant systems. To satisfy the stability condition (i). thus. the gain condition y(Hi)y(H2) < 1 is satisfied if the Nyquist plot of G(s) lies outside the circle C*.
which has the following solutions: x1 = (/31 + 30) and x2 = (a1 + 30).7w)J < 1
or
(A.19) can be written in the form
r2(x2 + y2) x2(82 .23)
Inequality (A.20) by a.r2) + 2qx + 1
<
>
(1 + qx)2 + g2y2
> 0
0.3: In this case we can divide equation (A.20)
a.22) is satisfied by points outside the critical circle. C is equal to the critical circle C' defined in Theorem 6.3 to obtain
x2 + y2 + (0.22) divides the complex plane into two regions separated by the circle C of center (a + 30) and radius R.
(A.A. Notice that if y = 0.3
(A. Then equation (A. the Nyquist plot of d(s) must encircle the point (q1 +30). Since the critical point
(q1 +. j0) is located inside the circle C*.C2 gain of H1 is
'Y(H1) = j1G1j. = sup IG(7w)Iw
Thus
ry(Hl)ry(H2) < 1
(A. the . and part (a) is proved. v times in counterclockwise direction. Thus
C = C'.e.3(x2 + y2) + (a + 3)x + 1
We now analyze conditions (a)(c) separately:
(a) 0 < a <.7w)l
dwER.22)
where
a
_ a+Q 2af
R
_ (/3a)
2a.21)
which can be expressed in the following form
(x + a)2 + y2 > R2
(A. It is easy to see that inequality (A.3. a)3) x +
>0
(A. it follows that the Nyquist plot of d(s) must encircle the entire critical circle C' v times in the counterclockwise direction.
(A.18)
rlG(yw)l < l1+gG(.17)
if and only if
G(7w) r [sup l[1+gG(.
24) is satisfied by points inside C.4: We will only sketch the proof of the theorem. This completes the
proof of Theorem 6.4
Chapter 7
Theorem 7. condition (ii) is satisfied if the Nyquist plot of d(s) lies entirely to the right of the vertical line passing through the point (/31 + 30).24). in this case (q1 +30) lies outside the circle C'. and that the function f (x.1). However. because of the change in sign in the inequality. Under these conditions (7.25)
where a and R2 are given by equation (A. But q1 = 2(a +.31. Equation (A. or Q[((3w)] > Q1 Thus.26)
(A 27)
. q1 = 2. 0).6.0)'. To
satisfy condition (i) the Nyquist plot of C(s) must encircle the point (q1 + 30) v times in the counterclockwise direction.
Proof of theorem 7. the critical point is inside the forbidden region and therefore d(s) must have no poles in the open right half plane.23). Assume that the origin is an asymptotically
stable equilibrium point for the autonomous system . PROOFS
(b) 0 = a <)3: In this case inequality (A.01.22) reduces to x >. Poles on the imaginary axis are not permitted since in this case the Nyquist diagram tends to infinity as w tends to zero. As in case (a) this circle coincides with the critical circle C`. violating condition (A. the converse Lyapunov theorem guarantees existence of a Lyapunov function V satisfying
al(IIxII) < V(x(t)) < a2(jjxjj)
OV (x) f (x 0) < c' (11X11) 8x
dx E D
Vx E D
(A.1) is locally inputtostatestable. (A. we obtain
x2 + y2 + ( a+
0) x +
a
<0
(A. Given that x = 0 is asymptotically stable.
A. u) is continuously differentiable. and if a = 0.24) can be written in the form
(x + a)2 + y2 < R2
(A.24)
where we notice the change of the inequality sign with respect to case (a).
(c) a < 0 <. The reader should consult Reference [74] for a detailed discussion of the ISS property and its relationship to other forms of stability.22) by a/3.4: Consider the system (7. however. It follows that d(s) cannot have poles in the open right half plane.320
APPENDIX A. It follows that condition (ii) is satisfied if and only if the Nyquist plot of d(s) is contained entirely within the interior of the circle C'.k = f (x. Thus.3: In this case dividing (A.
I&
!1x E Rn.u) < a(Ilxll)+a(Ilull)
We now show that given o E 1C.u E Rm.u E R'n.
E>0.A.. under the assumptions of the theorem. i.u)Il < Ilf(x.31)
.X(Ilull)
With a so defined.1) is inputtostate stable. Assume that the origin is an exponentially stable equilibrium point for the autonomous system t = f (x. there exist a E K such that
a f(x. and that the function f (x. we have that. By the assumptions of the theorem.
dx : lxll > a(IIull)
To this end we reason as follows. f (.
(A.f (x.a3(I1xll) +E.
VV(x) f(x. 0)II < L Ilull.
(A.
dx : IIxII >.u) < a(Ilxll) +a(Ilull)
Vx E W . u).
Now consider a function a E IC with the property that
a(llxll) >.
eX
f(x. a) is a supply pair with corresponding ISS Lyapunov function V.) satisfies
ll f (x.5: The proof follows the same lines as that of Theorem 7. Under these conditions (7.4.
Proof of theorem 7.
Theorem 7.
kl >0
(A.u) < a(IIxID.29)
the set of points defined by u : u(t) E Rm and lull < b form a compact set. u) is continuously differentiable and globally Lipschitz in (x.1).e.30)
VV(x) f(x..29) implies that in this set
lIf(x.4 and is
omitted.5: Consider the system (7.
L>0
(A.
Proof of Theorem 7.28)
We need to show that. 0).0)1I+E.7: Assume that (a. CHAPTER 7
and
321
av
ax
< ki
lxll.u) < a(IIxII). u) .
Vx : II4II ? X(Ilull)
and the result follows. Inequality
Proof of Theorem 7.7: Assume that (α, σ) is a supply pair with corresponding ISS Lyapunov function V, that is,

    ∇V(x)·f(x, u) ≤ −α(‖x‖) + σ(‖u‖).   (A.32)

We consider a new ISS Lyapunov function candidate W, defined as follows:

    W = p ∘ V = p(V(x)),   p(s) = ∫₀ˢ q(t) dt   (A.33)

where q : ℝ⁺ → ℝ⁺ is a positive definite, smooth, nondecreasing function. Using (A.32) we have that

    Ẇ = (∂W/∂x)·f(x, u) = q[V(x)] ∇V(x)·f(x, u) ≤ q[V(x)][σ(‖u‖) − α(‖x‖)].   (A.34)

Now define

    θ = ᾱ ∘ α⁻¹ ∘ (2σ),   that is,   θ(s) = ᾱ(α⁻¹(2σ(s))).   (A.35)

We now show that the right-hand side of (A.34) is bounded by

    −(1/2) q[V(x)] α(‖x‖) + q[θ(‖u‖)] σ(‖u‖).   (A.36)

To see this we consider two separate cases:

(i) σ(‖u‖) ≤ (1/2)α(‖x‖): In this case the right-hand side of (A.34) is bounded by −(1/2) q[V(x)] α(‖x‖).

(ii) (1/2)α(‖x‖) < σ(‖u‖): In this case we have that V(x) ≤ ᾱ(‖x‖) ≤ θ(‖u‖), and since q is nondecreasing, the right-hand side of (A.34) is bounded by q[θ(‖u‖)] σ(‖u‖).

From (i) and (ii), the right-hand side of (A.34) is bounded by (A.36) in either case. The rest of the proof can be easily completed if we can show that there exist α̂, σ̂ ∈ K such that

    Ẇ ≤ −α̂(‖x‖) + σ̂(‖u‖).   (A.37)

To this end, notice that for any β, β̃ ∈ K the following properties hold:

Property 1: if β̃(r) = O(β(r)) as r → ∞, then there exists a positive definite, smooth, and nondecreasing function q(·) such that

    q(r) β̃(r) ≤ β(r)   ∀r ∈ [0, ∞).   (A.38)

Property 2: if β̃(r) = O(β(r)) as r → 0⁺, then there exists a positive definite, smooth, and nondecreasing function q(·) such that

    β̃(r) ≤ q(r) β(r)   ∀r ∈ [0, ∞).   (A.39)

With these properties in mind, define β(r) = σ̂(θ⁻¹(r)) ∈ K. By Property 1, there exists q such that

    q[θ(r)] σ(r) ≤ σ̂(r)   ∀r ∈ [0, ∞).   (A.40)

Finally, defining

    α̂(s) = (1/2) q(α̲(s)) α(s),

where α̲ is the class K lower bound on V, we have that (1/2) q[V(x)] α(‖x‖) ≥ α̂(‖x‖), since q is nondecreasing. Thus (A.37) is satisfied, and the theorem is proved substituting (A.36) into (A.34).

Proof of Theorem 7.8: The proof proceeds as in the case of Theorem 7.7, except that Property 2 is used in place of Property 1: defining β(r) = ᾱ(θ⁻¹(r)) ∈ K, Property 2 guarantees the existence of the required function q, and the result follows.
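The two-case bound in the proof of Theorem 7.7 can be spot-checked numerically. The concrete choices below are illustrative assumptions, not from the text: V(x) = x², α(r) = ᾱ(r) = r², σ(r) = r², and q(t) = t, for which θ(s) = ᾱ(α⁻¹(2σ(s))) = 2s².

```python
# Claimed bound on the right-hand side of the W-dot estimate:
#   q(V)*(sigma(|u|) - alpha(|x|)) <= -0.5*q(V)*alpha(|x|) + q(theta(|u|))*sigma(|u|)
# with the illustrative choices V = x^2, alpha = alpha_bar = sigma = r^2, q(t) = t.

def check_bound(x, u):
    V = x * x
    alpha_x = x * x          # alpha(|x|)
    sigma_u = u * u          # sigma(|u|)
    theta_u = 2.0 * u * u    # theta(|u|) = alpha_bar(alpha^{-1}(2*sigma(|u|)))
    q = lambda t: t          # positive definite, smooth, nondecreasing
    lhs = q(V) * (sigma_u - alpha_x)
    rhs = -0.5 * q(V) * alpha_x + q(theta_u) * sigma_u
    return lhs <= rhs + 1e-12

grid = [i * 0.25 - 3.0 for i in range(25)]
ok = all(check_bound(x, u) for x in grid for u in grid)
assert ok
```

For these choices the inequality reduces to x²u² ≤ x⁴/2 + 2u⁴, which holds for all x and u, so the check passes on the whole grid.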
(A.Hx>T
IISyIIT = IIyIIT .49)
= (y.I)x >T
(Hxx. and (A.
. if the gain of S is less than or equal to 1.4(x.Hxx>T
= IIHxliT + IIxIIT . y >T ((I + H)x.2(x.50) from (A.48)
IISyIIT = ((H . (x.43)
(a) H is passive if and only if the gain of S is at most 1.IIyIIT =
IISyIIT <.45) (A. then (A.I)x
(A.IIyIIT
(A. so we can define
xr (I + H)1y
(I + H)x
(A.5
Chapter 8
H)1
Theorem 8..47)
= =
y
Hx =
= Sy
yx
(H . subtracting (A. Hx >T
(A. This completes the proof of part (a). and assume that (I + H) is invertible in Xe .51)
Now assume that H is passive.IISyIIT .49).I)x.3: Let H : Xe > Xe .51) implies that
IISyIIT
. In this case.52)
which implies that (A. that is. Hx >T= IIyIIT . i. Hx >T
IIyIIT
(A. (H .44) is satisfied. S is such that
II(Sx)TIIx s IIxTIIx
Vx E XefVT E Xe
(A. On the other hand. Define the function S : Xe > Xe :
S = (H . (I + H)x >T IIHxIIT+IIxIIT+2(x.0
and thus H is passive. PROOFS
A. (I + H) is invertible.I)(I +
H)1
(A.e.50)
Thus. assume
that (I +
We have
: Xe > Xe .46) (A.324
APPENDIX A.44)
(b) H is strictly passive and has finite gain if and only if the gain of S is less than 1.51) implies that
4(x.
(a) : By assumption.I)(I + H)1y = (H . Hx >T> 0.
we have
IIyIIT . 45(1 + y(H) )2].55)
where
0 < 5' < min[l.56)
and since 0 < 5' < 1.53) in (A. we see that
ISyIIT +5'IIyIIT
=
<<
IIyIIT
IISyIIT
(1.
(A. assume that S has gain less than 1.5.Hx >T >
b'IIyIIT
(A.55) in (A. The finite gain of H
implies that
IIHxIIT << y(H)IIxIIT
Substituting (A.
(A. if H is strictly passive.57)
Substituting IISyIIT by its equivalent expression in (A.IIyIIT
I
y(H)IIxIIT
IIyIIT
(1+y(H))IIxIIT
or
(1 + y(H))l IIyIIT <_ IIyIIT.51).Hx>T >_ SIxIT 4(x.A.5')IIyIIT
0 < 5' < 1. We can write
ISyIIT
(1 . Hx >T
IIyIIT
5'11yI
= 4(x.5')112IIyIIT
(A. then
(x.4(x.
Thus. which implies that S has gain less than 1.Hx >T >

51IIyII2
T
.51)1/2 < 1.51).47) in the last equation. we have
II Y . we obtain
(A.54)
4(x.54). For the converse. substituting (A. Hx >T > 461I x I T
Substituting (A.XIIT
y(H)IIXIIT
<
IIYIIT . CHAPTER 8
(b)
:
325
Assume first that H is strictly passive and has finite gain.53)
Also. we have that 0 < (1 . Hx >T > 45(1 + y(H))211YIIT
or
4(x.
Theorem 8.10: Consider the feedback interconnection of Figure 7.5, and assume that

(i) H₁ is linear time-invariant, strictly proper, and SPR;

(ii) H₂ is passive (and possibly nonlinear).

Under these assumptions, the feedback interconnection is input-output stable.

Proof: We need to show that y ∈ X whenever u ∈ X. To this end we start by applying a loop transformation of the type shown in Figure A.4, with K = εI, ε > 0. We know that the feedback system S of Figure 7.5 is input-output stable if and only if the system S_ε of Figure A.4 is input-output stable.

Figure A.4: The feedback system S_ε.

First consider the subsystem

    H₂′ = H₂ + εI,   ε > 0.

We have

    ⟨x_T, (H₂ + εI)x_T⟩ = ⟨x_T, H₂x_T⟩ + ε⟨x_T, x_T⟩ ≥ ε‖x_T‖².

It follows that H₂′ is strictly passive for any ε > 0. Now consider the subsystem

    H₁′ = H₁(I − εH₁)⁻¹.

Figure A.5: The subsystems H₁′ = H₁(I − εH₁)⁻¹ and H₂′ = H₂ + εI.

We now argue that H₁′ is SPR for sufficiently small ε. To see this, notice that, since H₁ is SPR, there exist positive definite matrices P and L, a real matrix Q, and μ sufficiently small such that (8.29)-(8.30) are satisfied. For a realization (A, B, C) of H₁ we then have

    P(A + εBC) + (A + εBC)ᵀP = −QQᵀ − μL + 2εCᵀC

and, provided that 0 < ε < ε′ = μλ_min[L] / (2λ_max[CᵀC]), the matrix μL − 2εCᵀC is positive definite. We conclude that for all ε ∈ (0, ε′), H₁′ is stable and SPR, hence passive with finite gain. Thus H₁′ is passive with finite gain and H₂′ is strictly passive, and the result follows from the passivity theorem of Chapter 8.

A.6 Chapter 9

Theorem 9.2: The nonlinear system Σ given by (9.13) is QSR-dissipative [i.e., dissipative with supply rate given by (9.6)] if and only if there exist a differentiable function φ : ℝⁿ → ℝ and functions L : ℝⁿ → ℝᵍ and W : ℝⁿ → ℝ^(q×m) satisfying

    φ(x) ≥ 0,   φ(0) = 0   (A.58)
    (∂φ/∂x) f(x) = hᵀ(x)Q h(x) − Lᵀ(x)L(x)   (A.59)
    (1/2) gᵀ(x)(∂φ/∂x)ᵀ = Sᵀh(x) − WᵀL(x)   (A.60)
    R = WᵀW.   (A.61)

Proof: Sufficiency was proved in Chapter 9. To prove necessity we show that the available storage φₐ is itself a storage function and a solution of equations (A.58)-(A.61) for appropriate functions L and W.

Notice first that for any state x₀ there exists a control input u ∈ U that takes the state from any initial state x(t₋₁) at time t = t₋₁ to x₀ at t = 0, whereas u can be chosen arbitrarily on [0, T]. Since the system is dissipative, we have that

    ∫ from t₋₁ to 0 of w(t) dt + ∫ from 0 to T of w(t) dt = ∫ from t₋₁ to T of w(t) dt ≥ 0.

In particular,

    ∫ from 0 to T of w(t) dt ≥ −∫ from t₋₁ to 0 of w(t) dt.

The right-hand side of the last inequality depends only on x₀. Hence there exists a bounded function C : ℝⁿ → ℝ such that

    ∫ from 0 to T of w(t) dt ≥ C(x₀) > −∞

which implies that φₐ is bounded. By Theorem 9.1 the available storage is itself a storage function, i.e.,

    ∫ from 0 to t of w(s) ds ≥ φₐ(x(t)) − φₐ(x₀)   ∀t ≥ 0.

Since φₐ is differentiable by the assumptions of the theorem, this implies that the dissipation rate

    d(x, u) ≜ w(u, y) − dφₐ/dt ≥ 0.

Substituting y = h(x) and ẋ = f(x) + g(x)u, we obtain

    d(x, u) = −(∂φₐ/∂x) f(x) − (∂φₐ/∂x) g(x)u + hᵀQh + 2hᵀSu + uᵀRu

and we notice that d(x, u) has the following properties: (i) d(x, u) ≥ 0 for all x, u; and (ii) it is quadratic in u.
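The factorization asserted in Theorem 9.2 can be checked on the simplest passive example. The system below is an illustrative assumption, not from the text: ẋ = −x + u, y = x, supply rate w(u, y) = uy (so Q = 0, S = 1/2, R = 0), with storage φ(x) = x²/2 and factors L(x) = x, W = 0.

```python
# Scalar passivity sketch: x' = -x + u, y = x, w(u, y) = u*y.
# Candidate storage phi(x) = x^2/2 with L(x) = x and W = 0.

def conditions_hold(x, u, tol=1e-12):
    f, g, h = -x, 1.0, x
    Q, S, R = 0.0, 0.5, 0.0
    dphi = x                    # d(phi)/dx for phi = x^2/2
    L, W = x, 0.0
    c1 = abs(dphi * f - (h * Q * h - L * L)) < tol        # (A.59)
    c2 = abs(0.5 * g * dphi - (S * h - W * L)) < tol      # (A.60)
    c3 = abs(R - W * W) < tol                             # (A.61)
    # The dissipation rate is the perfect square [L + W*u]^2:
    w = u * h                                             # supply rate u*y
    phidot = dphi * (f + g * u)
    c4 = abs((w - phidot) - (L + W * u) ** 2) < tol
    return c1 and c2 and c3 and c4

assert all(conditions_hold(x, u)
           for x in [-2.0, -0.3, 0.0, 1.5]
           for u in [-1.0, 0.0, 2.0])
```

Here w − φ̇ = ux − x(−x + u) = x² = L(x)², which is exactly the quadratic-in-u factorization used in the proof.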
It then follows that d(x, u) can be factored as

    d(x, u) = [L(x) + Wu]ᵀ[L(x) + Wu] = LᵀL + 2LᵀWu + uᵀWᵀWu

which implies that

    R = WᵀW
    (1/2) gᵀ(x)(∂φₐ/∂x)ᵀ = Sᵀh(x) − WᵀL(x)
    (∂φₐ/∂x) f(x) = hᵀ(x)Qh(x) − Lᵀ(x)L(x).

This completes the proof.

A.7 Chapter 10

Theorem 10.2: The system (10.28) is input-state linearizable on D₀ ⊂ D if and only if the following conditions are satisfied:

(i) The vector fields {g(x), ad_f g(x), ..., ad_f^(n−1) g(x)} are linearly independent in D₀. Equivalently, the matrix

    C = [g(x), ad_f g(x), ..., ad_f^(n−1) g(x)]_(n×n)

has rank n for all x ∈ D₀.

(ii) The distribution Δ = span{g, ad_f g, ..., ad_f^(n−2) g} is involutive in D₀.

Proof: Assume first that the system (10.28) is input-state linearizable. Then there exists a coordinate transformation z = T(x) that transforms (10.28) into a system of the form ż = Az + Bv, with A = A_c and B = B_c, as defined in Section 10.4. From (10.17) and (10.18) we know that T is such that

    (∂T₁/∂x) f(x) = T₂(x)
    (∂T₂/∂x) f(x) = T₃(x)
      ⋮
    (∂T_(n−1)/∂x) f(x) = Tₙ(x)   (A.62)

    (∂Tᵢ/∂x) g(x) = 0, i = 1, ..., n−1;   (∂Tₙ/∂x) g(x) ≠ 0.   (A.63)

Equations (A.62) and (A.63) can be rewritten as follows:

    L_f Tᵢ = T_(i+1),   i = 1, ..., n−1   (A.64)
    L_g T₁ = L_g T₂ = ⋯ = L_g T_(n−1) = 0,   L_g Tₙ ≠ 0.   (A.65)
By the Jacobi identity we have that

    ∇T₁·[f, g] = L_f(L_g T₁) − L_g(L_f T₁) = −L_g T₂ = 0

or

    ∇T₁·ad_f g = 0.

Similarly,

    ∇T₁·ad_f^i g = 0,   i = 0, 1, ..., n−2   (A.66)
    ∇T₁·ad_f^(n−1) g ≠ 0.   (A.67)
We now claim that (A.66)-(A.67) imply that the vector fields g, ad_f g, ..., ad_f^(n−1) g are linearly independent. To see this, we use a contradiction argument. Assume that (A.66)-(A.67) are satisfied but g, ad_f g, ..., ad_f^(n−1) g are not all linearly independent. Then, for some i ≤ n − 1, there exist scalar functions λ₀(x), λ₁(x), ..., λ_(i−1)(x) such that

    ad_f^i g = Σ_(k=0)^(i−1) λ_k ad_f^k g

and then, for suitable scalar functions λ̄_k,

    ad_f^(n−1) g = Σ_(k=n−i−1)^(n−2) λ̄_k ad_f^k g.

Taking account of (A.66), we conclude that

    ∇T₁·ad_f^(n−1) g = Σ_(k=n−i−1)^(n−2) λ̄_k ∇T₁·ad_f^k g = 0

which contradicts (A.67). This proves that (i) is satisfied. To prove that the second property is satisfied, notice that (A.66) can be written as follows:

    ∇T₁·[g(x), ad_f g(x), ..., ad_f^(n−2) g(x)] = 0   (A.68)

that is, there exists T₁ whose partial derivatives satisfy (A.68). Hence Δ is completely integrable and must be involutive by the Frobenius theorem.
Assume now that conditions (i) and (ii) of Theorem 10.2 are satisfied. By the Frobenius theorem, there exists T₁(x) satisfying

    L_g T₁(x) = L_(ad_f g) T₁ = ⋯ = L_(ad_f^(n−2) g) T₁ = 0

and, taking into account the Jacobi identity, this implies that

    L_g T₁(x) = L_g L_f T₁(x) = ⋯ = L_g L_f^(n−2) T₁(x) = 0.

But then we have that

    ∇T₁(x)·C = ∇T₁(x)·[g, ad_f g(x), ..., ad_f^(n−1) g(x)] = [0, ..., 0, L_(ad_f^(n−1) g) T₁(x)].

The columns of the matrix [g, ad_f g(x), ..., ad_f^(n−1) g(x)] are linearly independent on D₀, so rank(C) = n by condition (i) of Theorem 10.2, and since ∇T₁(x) ≠ 0, it must be true that

    L_(ad_f^(n−1) g) T₁(x) ≠ 0

which implies, by the Jacobi identity, that

    L_g L_f^(n−1) T₁(x) ≠ 0.

This completes the proof.
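The Jacobi identity L_[f,g] λ = L_f L_g λ − L_g L_f λ, used repeatedly above, can be verified on a concrete pair of fields. The fields f = (x₂, −x₁), g = (0, 1) and the function λ(x) = x₁ are illustrative assumptions, not from the text; all derivatives in the sketch are computed by hand from these choices.

```python
# Toy fields on R^2: f(x) = (x2, -x1), g(x) = (0, 1), lam(x) = x1.

def jacobi_residual(x1, x2):
    # Lie bracket [f, g] = (dg/dx) f - (df/dx) g = (-1, 0), constant here.
    bracket = (-1.0, 0.0)
    grad_lam = (1.0, 0.0)                  # gradient of lam = x1
    L_bracket = grad_lam[0] * bracket[0] + grad_lam[1] * bracket[1]

    # L_g lam = grad_lam . g = 0, hence L_f (L_g lam) = 0.
    LfLg = 0.0
    # L_f lam = x2, hence L_g (L_f lam) = grad(x2) . g = 1.
    LgLf = 1.0
    return L_bracket - (LfLg - LgLf)       # Jacobi identity => residual 0

assert all(abs(jacobi_residual(a, b)) < 1e-12
           for a in [-1.0, 0.0, 2.0] for b in [-3.0, 0.5])
```

Both sides evaluate to −1 for every point, so the residual vanishes identically, as the identity requires.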
Proof of Theorem 10.3: The proof of Theorem 10.3 requires some preliminary results. In the following lemmas we consider a system of the form

    ẋ = f(x) + g(x)u,   f, g : D ⊂ ℝⁿ → ℝⁿ
    y = h(x),   h : D ⊂ ℝⁿ → ℝ   (A.69)

and assume that it has relative degree r ≤ n.

Lemma A.1: If the system (A.69) has relative degree r ≤ n in Ω, then

    L_(ad_f^j g) L_f^k h(x) = 0,   0 ≤ j + k < r − 1
    L_(ad_f^j g) L_f^k h(x) = (−1)^j L_g L_f^(r−1) h(x) ≠ 0,   j + k = r − 1   (A.70)

∀x ∈ Ω, ∀j ≤ r − 1, k ≥ 0.

Proof: We use the induction algorithm on j.

(i) j = 0: For j = 0, condition (A.70) becomes

    L_g L_f^k h(x) = 0,   0 ≤ k < r − 1
    L_g L_f^(r−1) h(x) ≠ 0,   k = r − 1

which is satisfied by the definition of relative degree.

(ii) j = i: Continuing with the induction algorithm, we assume that

    L_(ad_f^i g) L_f^k h(x) = 0,   0 ≤ i + k < r − 1
    L_(ad_f^i g) L_f^k h(x) = (−1)^i L_g L_f^(r−1) h(x),   i + k = r − 1   (A.71)

is satisfied, and show that this implies that it must be satisfied for j = i + 1. From the Jacobi identity we have that

    L_([f, β]) λ = L_f L_β λ − L_β L_f λ

for any smooth function λ(x) and any smooth vector fields f(x) and β(x). Defining

    λ = L_f^k h(x),   β = ad_f^i g

we have that

    L_(ad_f^(i+1) g) L_f^k h(x) = L_f L_(ad_f^i g) L_f^k h(x) − L_(ad_f^i g) L_f^(k+1) h(x).   (A.72)

Now consider any integer k that satisfies i + 1 + k ≤ r − 1. The first summand on the right-hand side of (A.72) vanishes, by (A.71). The second term on the right-hand side is

    L_(ad_f^i g) L_f^(k+1) h(x) = 0,   0 ≤ i + k + 1 < r − 1
    L_(ad_f^i g) L_f^(k+1) h(x) = (−1)^i L_g L_f^(r−1) h(x),   i + k + 1 = r − 1

and thus the lemma is proved.
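Lemma A.1 can be checked by hand on the double integrator, an illustrative assumption not taken from the text: f = (x₂, 0), g = (0, 1), h(x) = x₁, which has relative degree r = 2 everywhere.

```python
# Double integrator: f = (x2, 0), g = (0, 1), h(x) = x1, relative degree r = 2.

# j = 0, k = 0 (j + k < r - 1):  L_g h = grad(x1) . (0, 1) = 0.
Lg_h = 0.0
# j = 0, k = 1 (j + k = r - 1):  L_f h = x2, so L_g L_f h = grad(x2) . (0, 1) = 1.
LgLf_h = 1.0
# j = 1, k = 0 (j + k = r - 1):  ad_f g = [f, g] = -(df/dx) g = (-1, 0),
# so L_{ad_f g} h = grad(x1) . (-1, 0) = -1.
Ladfg_h = -1.0

# Lemma A.1: terms with j + k < r - 1 vanish, and at j + k = r - 1 the value
# equals (-1)^j L_g L_f^{r-1} h, alternating in sign with j.
assert Lg_h == 0.0
assert LgLf_h != 0.0
assert Ladfg_h == (-1.0) ** 1 * LgLf_h
```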
Lemma A.2: If the relative degree of the system (A.69) is r in Ω, then ∇h(x), ∇L_f h(x), ..., ∇L_f^(r−1) h(x) are linearly independent in Ω.

Proof: Assume the contrary; specifically, assume that ∇h(x), ∇L_f h(x), ..., ∇L_f^(r−1) h(x) are not linearly independent in Ω. Then there exist smooth functions a₁(x), ..., a_r(x) such that

    a₁∇h(x) + a₂∇L_f h(x) + ⋯ + a_r∇L_f^(r−1) h(x) = 0.   (A.73)

Multiplying (A.73) by g = ad_f⁰ g we obtain

    a₁L_g h(x) + a₂L_g L_f h(x) + ⋯ + a_r L_g L_f^(r−1) h(x) = 0.   (A.74)

By assumption, the system has relative degree r. Thus

    L_g L_f^i h(x) = 0   for 0 ≤ i < r − 1,   L_g L_f^(r−1) h(x) ≠ 0.

Thus (A.74) becomes

    a_r L_g L_f^(r−1) h(x) = 0

and since L_g L_f^(r−1) h ≠ 0 we conclude that a_r must be identically zero in Ω. Next, we multiply (A.73) by ad_f g and obtain

    a₁L_(ad_f g) h(x) + a₂L_(ad_f g) L_f h(x) + ⋯ + a_r L_(ad_f g) L_f^(r−1) h(x) = a_(r−1) L_(ad_f g) L_f^(r−2) h(x) = 0

where Lemma A.1 was used. Thus a_(r−1) = 0. Continuing with this process (multiplying each time by ad_f² g, ..., ad_f^(r−1) g) we conclude that a₁, ..., a_r must be identically zero on Ω, and the lemma is proven.
Theorem 10.3: Consider the system (10.41) and assume that it has relative degree r ≤ n ∀x ∈ D₀ ⊂ D. Then, for every x₀ ∈ D₀ there exist a neighborhood Ω of x₀ and smooth functions μ₁(x), ..., μ_(n−r)(x) such that

(i) L_g μᵢ(x) = 0 for 1 ≤ i ≤ n − r, ∀x ∈ Ω, and

(ii) the map

    T(x) = [μ₁(x), ..., μ_(n−r)(x), φ₁, ..., φ_r]ᵀ

with

    φ₁ = h(x),   φ₂ = L_f h(x),   ...,   φ_r = L_f^(r−1) h(x)

is a diffeomorphism on Ω.

Proof of Theorem 10.3: To prove the theorem, we proceed as follows. The single vector g is clearly involutive, and thus the Frobenius theorem guarantees that for each x₀ ∈ D there exist a neighborhood Ω of x₀ and n − 1 linearly independent smooth functions μ₁(x), ..., μ_(n−1)(x) such that

    L_g μᵢ(x) = 0   for 1 ≤ i ≤ n − 1, ∀x ∈ Ω.

Also, by Lemma A.2, ∇h(x), ..., ∇L_f^(r−1) h(x) are linearly independent. Thus, defining T(x) with h(x), ..., L_f^(r−1) h(x) in the last r rows, we have that

    ∇L_f^(r−1) h(x₀) ∉ span{∇μ₁, ..., ∇μ_(n−1)}

and

    rank( (∂T/∂x)(x₀) ) = n

which implies that det (∂T/∂x)(x₀) ≠ 0. Thus, T is a diffeomorphism in a neighborhood of x₀, and the theorem is proved.
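A minimal instance of Theorem 10.3 with r < n can be worked by hand. The system below is an illustrative assumption, not from the text: f = (x₂, −x₁), g = (0, 1), y = h(x) = x₂, so L_g h = 1 ≠ 0 and r = 1 < n = 2; the single complementary function μ₁(x) = x₁ satisfies L_g μ₁ = 0.

```python
# Sketch for Theorem 10.3, n = 2:
#   f = (x2, -x1), g = (0, 1), h(x) = x2  =>  L_g h = 1 != 0, so r = 1 < n.
# We need n - r = 1 function mu1 with L_g mu1 = 0; mu1(x) = x1 works.

Lg_h = 1.0          # grad(x2) . (0, 1): relative degree r = 1
Lg_mu1 = 0.0        # grad(x1) . (0, 1): condition (i) of the theorem

# T(x) = (mu1(x), phi1(x)) = (x1, h(x)) = (x1, x2); its Jacobian is I.
jacobian = [[1.0, 0.0], [0.0, 1.0]]
det = jacobian[0][0] * jacobian[1][1] - jacobian[0][1] * jacobian[1][0]

assert Lg_h != 0.0
assert Lg_mu1 == 0.0
assert det != 0.0   # T is a diffeomorphism (here, globally)
```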
.
.
. .
..
. .
.
. .
.11 Stable limit cycle: (a) vector field diagram. Trajectories for the system of Example 1.
. .... .15 Magnetic suspension system .
.
.
.6 .
.
.
.
.. .
1.
.
.
. . .
.
.
.7
1.
.
.
.
.
.
. ...
.. .
...
..
.
.
.
.
. . ..
. .
.
..
..
.
..
..
. .
.
.
.
.
1.
.13 Unstable limit cycle . . .1
massspring system .
.
..
.
. . .
.
.
.
. ...
.
. .
..8 (a) uncoupled system..
1.
. .
..
. .
.
.
.. .
.
. .
.4
1..
. . (b) the closed orbit. .
.
.14 (a) Threedimensional view of the trajectories of Lorenz' chaotic system..
. (b) original system
System trajectories for the system of Example 1.
. .
. .
2
7
1.
..5) .
.
.. . .
.
.
.
..
.
.
.
..
1.
.
20
.
.
..
.
.3 1.
..
.
.
.
..
.
.9
1.
.
.
.5 1.
... .
. .
. .
.
.
. ..
..
.
.
.10 .. (b) original system System trajectories of Example 1.
. . . .
..12 Nonlinear RLCcircuit . .7: (a) original system.
.
. . .
.2
The system i = cos x ..
. .List of Figures
1.
. . .
.
..
.
.
.
.12
1.
.
. ..
.
System trajectories of Example 1. ...
System trajectories of Example 1. r < 0 .
12
13
14
1.
.11
.
.
.
. (b) twodimensional projection of the trajectory of Lorenz' system..
.
.
.
.
..
.
.
. ..
. ..
.
..
.
23
25 26
1. .
..
.
..
. .
.
. .
.
..
.
..
.
.
. .
.
.
.
.
.
.
.
.
1.
.
.
. .
. .
.17 Freebody diagrams of the pendulumonacart system .
1..
.
.
. . .
.
.
.
.
.
.
. .8
15 16 17 19
1. . .
..
345
..
.
..
. .
.
. .
1.. .6
The system ± = r + x2.
.
. .
.
. .
.
.
.
.
. .
.
.
..
..18 Ballandbeam experiment . . .
27
29
..
.
.
.
.
20
22
1. (b) uncoupled sys
tem . .
.16 Pendulumonacart experiment .
.
...
.10 Trajectories for the system of Example 1. .
..
. .
.9 (a) uncoupled system.
. .
8
10
Vector field diagram form the system of Example 1.
. . . . . .
.
.
.
.19 Doublependulum of Exercise (1. ... ...
. .
. .
.
... . .
.
.
.
.. .
Static nonlinearity N(. .
The nonlinearity N(. .
.
. . .
. .
Discretetime system Ed ..
...6
4. . .
.
. . .
. .
.
. Experiment 2: input u(t) = . .
.
.
. .
.
67
69 74 79
95
126
.) ..
..
.3
Experiment 1: input u(t) applied to system H. ..
.
.
.
.. .
.6
..
.
.
.. . .
. .
.
.
. . . . .
.. . .
. . . (b) modified system after introducing O(x).
..
.
.. .
.
. . ..
.
.
. ..
..
.
.
.11 The Feedback System 5M .2 6..
.
..
.. .
.
. ...
. . . (c) "backstepping" of O(x). .
.5 6.
.
The system H . .
.
.
.
.
. . . ..
.
.
.
. ..
. .
..
.
.
.
.
169 170
173 175 176
6.
.
.
..
. .. .
.
. .1
LIST OF FIGURES
Stable equilibrium point .
. . . ..
Currentdriven magnetic suspension system .
.
Bode plot of JH(yw)j. . . . .
.
151 155
.
.
Massspring system . . .. .
. .
.
.
142
.
. .
.
. . .346
3..
.
. .
..
. .
.
.
. . .
.. .
.
.
. . .
5.
.
.
. .
.
.
.
. .
. . . .
.
.
.
. ..
.8) . . .
. .. (b) discretetime system Ed.
LIST OF FIGURES

Asymptotically stable equilibrium point
System trajectories in Example 3.2
Pendulum without friction
The curves V(x) = …
(a) The system (5.7)-(5.8); … ; (d) the final system after the change of variables
Causal systems: (a) input u(t); (b) the response y(t) = Hu(t); (c) truncation of the response y(t); (d) truncation of the function u(t); (e) response of the system when the input is the truncated input uT(t); (f) truncation of the system response in part (e)
The systems H1u = u^2 and H2u = e^u - 1
uT(t) applied to system H
… indicating the ‖H‖∞ norm of H
The Feedback System S
The Feedback System SK
Notice that this figure corresponds to the left-hand side of equation (6.8)
Notice that this figure corresponds to the right-hand side of equation (6.8)
Cascade connection of ISS systems
Cascade connection of ISS systems with input u = 0
Mass-spring system
Passive network
A Passive system
Standard setup
H = H1 + H2 + …
Characteristics of H2 and H2A
The Feedback System S1
The feedback system Sf
Feedback interconnection
Feedback interconnection used in Example 9.
The nonlinearity φ(t, x) in the sector
(a) Continuous-time system
Action of the hold device H
Index

Absolute stability
Asymptotic stability in the large
Autonomous system
Available storage
Backstepping: basic feedback stabilization; chain of integrators; integrator backstepping; strict feedback systems
Ball-and-beam system
Basis
Bounded linear operators
Bounded set
Cartesian product
Cauchy sequence
Causality
Center
Chaos
Chetaev's theorem
Circle criterion
Class K function
Class KL function
Class K∞ function
Closed-loop input-output stability
Compact set
Complement (of a set)
Complete integrability: algebraic condition for
Contraction mapping
Converse theorems
Convex set
Coordinate transformations
Covector field
Diagonal form of a matrix
Diffeomorphism
Differentiability
Differentiable function
Differential
Discrete-time systems
Discretization
Dissipation inequality
Dissipative system: QSR
Dissipativity
Distinguishable states
Distribution: involutive; nonsingular; regular point; singular point; variable dimension
Distributions
Eigenvalues
Eigenvectors
Equilibrium point: asymptotically stable; exponentially stable; stable; unstable
Euclidean norm
Exponential stability
Extended spaces
Feedback interconnections
Feedback linearization
Feedback systems
Finite-gain-stable system
Fixed-point theorem
Focus: stable
Fractal
Frobenius theorem
Function spaces
Functions: bijective; continuous; domain; injective; operator; range; restriction; surjective; transformation; uniformly continuous
Gradient
Hilbert space
Hölder's inequality
Inner product
Inner product space
Input-output linearization
Input-output stability
Input-output systems
Input-to-state stability
Instability
Interior point
Internal dynamics
Invariance principle
Invariant set
Inverse function theorem
Inverted pendulum on a cart
ISS Lyapunov function
Jacobian matrix
Kalman-Yakubovich lemma
L stability
L2 gain of an LTI system
L2 gain of a nonlinear system
L∞ gain of an LTI system
Lagrange stable equilibrium point
LaSalle's theorem
Lie bracket
Lie derivative
Limit cycles
Limit point
Limit set
Linear independence
Linear map
Linear time-invariant systems
Linearization: input-state
Linearization principle
Lipschitz continuity
Lipschitz systems
Loop transformations
Lyapunov function: for input-to-state stability
Lyapunov stability: autonomous systems; globally uniformly asymptotically stable; uniformly asymptotically stable; uniformly stable
Magnetic suspension system
Mass-spring system
Matrices: inverse; orthogonal; skew symmetric; symmetric; transpose
Matrix norms
Mean-value theorem
Memoryless systems
Metric spaces
Minimum phase systems
Minkowski's inequality
Neighborhood
Node: stable; unstable
Nonautonomous systems
Nonlinear observability
Nonlinear observers
Nonlinear systems: first order
Normed vector spaces
Observability: linear systems
Observers: Lipschitz systems; with linear error dynamics
Open set
Partial derivatives
Passive system
Passivity
Passivity and small gain
Perturbation analysis
Phase-plane analysis
Poincaré-Bendixson theorem
Positive definite functions: decrescent; time-dependent
Positive real rational functions
Positive system
Quadratic forms
Radially unbounded function
Rayleigh inequality
Region of attraction
Relative degree
Riccati equation
Saddle
Schwarz's inequality
Second order nonlinear systems
Sequences: convergent; uniformly convergent
Sets
Small gain theorem
Stability: input-to-state; of dissipative systems; of linear time-invariant systems
State vector
Static systems
Storage function
Strange attractor
Strict feedback systems
Strictly output-passive system
Strictly passive system
Strictly positive real rational functions
Strictly positive system
Subspaces
Supply rate
System gain
Topology: in R^n
Total derivative
Ultimate bound
Unforced system
Van der Pol oscillator
Vector field: diagram
Vector spaces
Very strictly passive system
Zero dynamics