
NONLINEAR CONTROL SYSTEMS

HORACIO J. MARQUEZ
Department of Electrical and Computer Engineering, University of Alberta, Canada

WILEY-INTERSCIENCE
A JOHN WILEY & SONS, INC., PUBLICATION

Copyright © 2003 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail: permreq@wiley.com.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data is available.
ISBN 0-471-42799-3 Printed in the United States of America.

10 9 8 7 6 5 4 3

To my wife, Goody (Christina); son, Francisco; and daughter, Madison

Contents

1 Introduction
  1.1 Linear Time-Invariant Systems
  1.2 Nonlinear Systems
  1.3 Equilibrium Points
  1.4 First-Order Autonomous Nonlinear Systems
  1.5 Second-Order Systems: Phase-Plane Analysis
  1.6 Phase-Plane Analysis of Linear Time-Invariant Systems
  1.7 Phase-Plane Analysis of Nonlinear Systems
    1.7.1 Limit Cycles
  1.8 Higher-Order Systems
    1.8.1 Chaos
  1.9 Examples of Nonlinear Systems
    1.9.1 Magnetic Suspension System
    1.9.2 Inverted Pendulum on a Cart
    1.9.3 The Ball-and-Beam System
  1.10 Exercises

2 Mathematical Preliminaries
  2.1 Sets
  2.2 Metric Spaces
  2.3 Vector Spaces
    2.3.1 Linear Independence and Basis
    2.3.2 Subspaces
    2.3.3 Normed Vector Spaces
  2.4 Matrices
    2.4.1 Eigenvalues, Eigenvectors, and Diagonal Forms
    2.4.2 Quadratic Forms
  2.5 Basic Topology
    2.5.1 Basic Topology in R^n
  2.6 Sequences
  2.7 Functions
    2.7.1 Bounded Linear Operators and Matrix Norms
  2.8 Differentiability
    2.8.1 Some Useful Theorems
  2.9 Lipschitz Continuity
  2.10 Contraction Mapping
  2.11 Solution of Differential Equations
  2.12 Exercises

3 Lyapunov Stability I: Autonomous Systems
  3.1 Definitions
  3.2 Positive Definite Functions
  3.3 Stability Theorems
  3.4 Examples
  3.5 Asymptotic Stability in the Large
  3.6 Positive Definite Functions Revisited
    3.6.1 Exponential Stability
  3.7 Construction of Lyapunov Functions
  3.8 The Invariance Principle
  3.9 Region of Attraction
  3.10 Analysis of Linear Time-Invariant Systems
    3.10.1 Linearization of Nonlinear Systems
  3.11 Instability
  3.12 Exercises

4 Lyapunov Stability II: Nonautonomous Systems
  4.1 Definitions
  4.2 Positive Definite Functions
    4.2.1 Examples
  4.3 Stability Theorems
  4.4 Proof of the Stability Theorems
  4.5 Analysis of Linear Time-Varying Systems
    4.5.1 The Linearization Principle
  4.6 Perturbation Analysis
  4.7 Converse Theorems
  4.8 Discrete-Time Systems
  4.9 Discretization
  4.10 Stability of Discrete-Time Systems
    4.10.1 Definitions
    4.10.2 Discrete-Time Positive Definite Functions
    4.10.3 Stability Theorems
  4.11 Exercises

5 Feedback Systems
  5.1 Basic Feedback Stabilization
  5.2 Integrator Backstepping
  5.3 Backstepping: More General Cases
    5.3.1 Chain of Integrators
    5.3.2 Strict Feedback Systems
  5.4 Example
  5.5 Exercises

6 Input-Output Stability
  6.1 Function Spaces
    6.1.1 Extended Spaces
  6.2 Input-Output Stability
  6.3 Linear Time-Invariant Systems
  6.4 Lp Gains for LTI Systems
    6.4.1 L-infinity Gain
    6.4.2 L2 Gain
  6.5 Closed-Loop Input-Output Stability
  6.6 The Small Gain Theorem
  6.7 Loop Transformations
  6.8 The Circle Criterion
  6.9 Exercises

7 Input-to-State Stability
  7.1 Motivation
  7.2 Definitions
  7.3 Input-to-State Stability (ISS) Theorems
    7.3.1 Examples
  7.4 Input-to-State Stability Revisited
  7.5 Cascade-Connected Systems
  7.6 Exercises

8 Passivity
  8.1 Power and Energy: Passive Systems
  8.2 Definitions
  8.3 Interconnections of Passivity Systems
    8.3.1 Passivity and Small Gain
  8.4 Stability of Feedback Interconnections
  8.5 Passivity of Linear Time-Invariant Systems
  8.6 Strictly Positive Real Rational Functions
  8.7 Exercises

9 Dissipativity
  9.1 Dissipative Systems
  9.2 Differentiable Storage Functions
    9.2.1 Back to Input-to-State Stability
  9.3 QSR Dissipativity
  9.4 Examples
    9.4.1 Mass-Spring System with Friction
    9.4.2 Mass-Spring System without Friction
  9.5 Available Storage
  9.6 Algebraic Condition for Dissipativity
    9.6.1 Special Cases
  9.7 Stability of Dissipative Systems
  9.8 Feedback Interconnections
  9.9 Nonlinear L2 Gain
    9.9.1 Linear Time-Invariant Systems
    9.9.2 Strictly Output Passive Systems
  9.10 Some Remarks about Control Design
  9.11 Nonlinear L2-Gain Control
  9.12 Exercises

10 Feedback Linearization
  10.1 Mathematical Tools
    10.1.1 Lie Derivative
    10.1.2 Lie Bracket
    10.1.3 Diffeomorphism
    10.1.4 Coordinate Transformations
    10.1.5 Distributions
  10.2 Input-State Linearization
    10.2.1 Systems of the Form \dot{x} = Ax + B\omega(x)[u - \phi(x)]
    10.2.2 Systems of the Form \dot{x} = f(x) + g(x)u
  10.3 Examples
  10.4 Conditions for Input-State Linearization
  10.5 Input-Output Linearization
  10.6 The Zero Dynamics
  10.7 Conditions for Input-Output Linearization
  10.8 Exercises

11 Nonlinear Observers
  11.1 Observers for Linear Time-Invariant Systems
    11.1.1 Observability
    11.1.2 Observer Form
    11.1.3 Observers for Linear Time-Invariant Systems
    11.1.4 Separation Principle
  11.2 Nonlinear Observability
  11.3 Observers with Linear Error Dynamics
  11.4 Lipschitz Systems
  11.5 Nonlinear Separation Principle

A Proofs
  A.1 Chapter 3
  A.2 Chapter 4
  A.3 Chapter 6
  A.4 Chapter 7
  A.5 Chapter 8
  A.6 Chapter 9
  A.7 Chapter 10

Bibliography
List of Figures
Index

Preface

I began writing this textbook several years ago. At that time my intention was to write a research monograph with focus on the input-output theory of systems and its connection with robust control. In the middle of that venture I began teaching a first-year graduate-level course in nonlinear control, and my interests quickly shifted into writing something more useful to my students. The result of this effort is the present book, which doesn't even resemble the original plan. I have tried to write the kind of textbook that I would have enjoyed myself as a student. My goal was to write something that is thorough, yet readable.

The first chapter discusses linear and nonlinear systems and introduces phase plane analysis. Chapter 2 introduces the notation used throughout the book and briefly summarizes the basic mathematical notions needed to understand the rest of the book. This material is intended as a reference source and not a full coverage of these topics.

Chapters 3-5 and 6 present two complementary views of the notion of stability: Lyapunov, where the focus is on the stability of equilibrium points of unforced systems (i.e., without external excitations), and the input-output theory, where systems are assumed to be relaxed (i.e., with zero initial conditions) and subject to an external input. Chapters 3 and 4 contain the essentials of the Lyapunov stability theory. Autonomous systems are discussed in Chapter 3 and nonautonomous systems in Chapter 4. I have chosen this separation because I am convinced that the subject is better understood by developing the main ideas and theorems for the simpler case of autonomous systems, leaving the more subtle technicalities for later. Chapter 5 briefly discusses feedback stabilization based on backstepping, an active area of research; I find that introducing this technique right after the main stability concepts greatly increases students' interest in the subject. Chapter 6 considers input-output systems. The approach in this chapter is classical: input-output systems are considered without assuming the existence of an internal (i.e., state space) description. The chapter begins with the basic notions of extended spaces, causality, and system gains and introduces the concept of input-output stability. The same chapter also discusses the stability of feedback interconnections via the celebrated small gain theorem. Chapter 7 focuses on the important concept of input-to-state stability and thus starts to bridge across the two alternative views of stability.

In Chapters 8 and 9 we pursue a rather complete discussion of dissipative systems, including a thorough discussion of passivity and dissipativity of systems and its importance in the so-called nonlinear L2 gain control problem. Passive systems are studied first in Chapter 8, along with some of the most important results that derive from this concept. Chapter 9 generalizes these ideas and introduces the notion of dissipative system. I have chosen this presentation for historical reasons and also because it makes the presentation easier and enhances the student's understanding of the subject. Chapters 10 and 11 provide a brief introduction to feedback linearization and nonlinear observers, respectively.

Although some aspects of control design are covered in Chapters 5, 9, and 10, the emphasis of the book is on analysis and covers the fundamentals of the theory of nonlinear control. I have restrained myself from falling into the temptation of writing an encyclopedia of everything ever written on nonlinear control, and focused on those parts of the theory that seem more fundamental. In fact, I would argue that most of the material in this book is essential enough that it should be taught to every graduate student majoring in control systems.

There are many examples scattered throughout the book. Most of them are not meant to be real-life applications, but have been designed to be pedagogical. My philosophy is that real physical examples tend to be complex, require elaboration, and often distract the reader's attention from the main point of the book, which is the explanation of a particular technique or a discussion of its limitations.

Like most authors, I have tried my best to clean up all the typographical errors as well as the more embarrassing mistakes that I found in my early writing. However, like many before me, I am sure that I have failed! I would very much appreciate to hear of any error found by the readers. Please email your comments to marquez@ee.ualberta.ca. I will keep an up-to-date errata list on my website: http://www.ee.ualberta.ca/~marquez

I have tried to acknowledge those references that have drawn my attention during the preparation of my lectures and later during the several stages of the writing of this book. I have not attempted to list every article by every author who has made a contribution to nonlinear control, simply because this would be impossible. I sincerely apologize to every author who may feel that his or her work has not been properly acknowledged here and encourage them to write to me.

I owe much to many people who directly or indirectly had an influence in the writing of this textbook. I will not provide a list because I do not want to forget anyone, but I would like to acknowledge four people to whom I feel specially indebted: Panajotis Agathoklis (University of Victoria), Chris Damaren (University of Toronto), and Chris Diduch and Rajamani Doraiswami (both of the University of New Brunswick). Each one of them had a profound impact in my career. I would also like to thank the many researchers in the field, most of whom I never had the pleasure to meet in person, for the beautiful things that they have published, and the many that will come after. It was through their writings that I became interested in the subject, and without their example this book would have never been written.

Finally, I am deeply grateful to the University of Alberta for providing me with an excellent working environment and to the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting my research. I am also thankful to John Wiley and Son's representatives: John Telecki, Kirsten Rohstedt, Kristin Cooke Fasano, and Brendan Cody, for their professionalism and assistance.

I would like to thank my wife Goody for her encouragement during the writing of this book, as well as my son Francisco and my daughter Madison. To all three of them I owe many hours of quality time. Guess what guys? It's over (until the next project). Tonight I'll be home early.

Horacio J. Marquez
Edmonton, Alberta

Chapter 1

Introduction

This first chapter serves as an introduction to the rest of the book. We present several simple examples of dynamical systems, showing the evolution from linear time-invariant to nonlinear. Phase-plane analysis is used to show some of the elements that characterize nonlinear behavior. We also define the several classes of systems to be considered throughout the rest of the book.

1.1 Linear Time-Invariant Systems

In this book we are interested in nonlinear dynamical systems. The reader is assumed to be familiar with the basic concepts of state space analysis for linear time-invariant (LTI) systems. We recall that a state space realization of a finite-dimensional LTI system has the following form:

    \dot{x} = Ax + Bu    (1.1)
    y = Cx + Du          (1.2)

where A, B, C, and D are (real) constant matrices of appropriate dimensions. Equation (1.1) determines the dynamics of the response. Equation (1.2) is often called the read-out equation and gives the desired output as a linear combination of the states.

Example 1.1 Consider the mass-spring system shown in Figure 1.1. Using Newton's second law, we obtain the equilibrium equation

INTRODUCTION Figure 1. then y=x1 = [1 0 thus. Thus. we obtain the following state space realization i1 = x2 1 x2=-mkxl-mx2+g or i1 X1 1 X2 X2 I If our interest is in the displacement y. x2 = y.1: mass-spring system. my+I3'+ky=mg. and fk = ky. we have that fp = Qy. fp is the viscous friction force.fk . Defining states x1 = y. a state space realization for the mass-spring systems is given by i = Ax+Bu y = Cx+Du . Assuming linear properties. my = E forces = f(t) .2 CHAPTER 1.fp where y is the displacement from the reference position. and fk represents the restoring force of the spring.

with

    A = \begin{bmatrix} 0 & 1 \\ -k/m & -\beta/m \end{bmatrix}, \quad
    B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad
    C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad
    D = 0.

1.2 Nonlinear Systems

Most of the book focuses on nonlinear systems that can be modeled by a finite number of first-order ordinary differential equations:

    \dot{x}_1 = f_1(x_1, \ldots, x_n, t, u_1, \ldots, u_p)
      \vdots
    \dot{x}_n = f_n(x_1, \ldots, x_n, t, u_1, \ldots, u_p)    (1.3)

Defining vectors

    x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \quad
    u = \begin{bmatrix} u_1 \\ \vdots \\ u_p \end{bmatrix}, \quad
    f(x, t, u) = \begin{bmatrix} f_1(x, t, u) \\ \vdots \\ f_n(x, t, u) \end{bmatrix}

we can rewrite equation (1.3) as follows:

    \dot{x} = f(x, t, u).    (1.4)

Equation (1.4) is a generalization of equation (1.1) to nonlinear systems. The vector x is called the state vector of the system, and the function u is the input. Similarly, the system output is obtained via the so-called read-out equation

    y = h(x, t, u).    (1.5)

Equations (1.4) and (1.5) are referred to as the state space realization of the nonlinear system.

Special Cases: An important special case of equation (1.4) is when the input u is identically zero. In this case, the equation takes the form

    \dot{x} = f(x, t, 0) = f(x, t).    (1.6)

This equation is referred to as the unforced state equation. Notice that, in general, there is no difference between the unforced system with u = 0 or any other given function u = \gamma(x, t) (i.e., u is not an arbitrary variable). Indeed, if u = \gamma(x, t), substituting u in equation (1.4) eliminates u and yields the unforced state equation.

The second special case occurs when f(x, t) is not a function of time. In this case we can write

    \dot{x} = f(x)    (1.7)

in which case the system is said to be autonomous. Autonomous systems are invariant to shifts in the time origin in the sense that changing the time variable from t to \tau = t - a does not change the right-hand side of the state equation. Throughout the rest of this chapter we will restrict our attention to autonomous systems.

Example 1.2 Consider again the mass-spring system of Figure 1.1. In Example 1.1 we assumed linear properties for the spring. We now consider the more realistic case of a hardening spring in which the force strengthens as y increases. We can approximate this model by taking f_k = ky(1 + a^2 y^2). With this f_k, the differential equation results in the following:

    m\ddot{y} + \beta\dot{y} + ky + k a^2 y^3 = f(t).

Defining state variables x_1 = y, x_2 = \dot{y} results in the following state space realization:

    \dot{x}_1 = x_2
    \dot{x}_2 = -\frac{k}{m}x_1 - \frac{k a^2}{m}x_1^3 - \frac{\beta}{m}x_2 + \frac{f(t)}{m}

which is of the form \dot{x} = f(x, t).
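As a quick numerical comparison of Examples 1.1 and 1.2, the Python sketch below Euler-integrates both spring models under the constant force f(t) = mg and reports where the mass settles. The parameter values, the forward-Euler solver, and the step size are illustrative choices made here, not taken from the text:

```python
# Euler simulation of the mass-spring system with a linear spring
# (Example 1.1) and with the hardening spring of Example 1.2.
# All parameter values are illustrative, not from the text.
m, k, beta, g, a = 1.0, 2.0, 0.5, 9.8, 0.8

def simulate(hardening, dt=1e-3, t_final=40.0):
    x1, x2 = 0.0, 0.0                       # start at rest at the reference position
    for _ in range(int(t_final / dt)):
        fk = k * x1 * (1 + a**2 * x1**2) if hardening else k * x1
        dx2 = (m * g - beta * x2 - fk) / m  # f(t) = mg (weight of the mass)
        x1, x2 = x1 + dt * x2, x2 + dt * dx2
    return x1

print(simulate(False))  # linear spring settles near y = mg/k = 4.9
print(simulate(True))   # hardening spring settles at a smaller displacement
```

The linear model comes to rest where ky balances mg; the hardening term ka²y³ stiffens the spring, so the same load produces a noticeably smaller steady-state displacement.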

1.3 Equilibrium Points

An important concept when dealing with the state equation is that of equilibrium point.

Definition 1.1 A point x = x_e in the state space is said to be an equilibrium point of the autonomous system \dot{x} = f(x) if it has the property that whenever the state of the system starts at x_e, it remains at x_e for all future time.

According to this definition, the equilibrium points of (1.7) are the real roots of the equation f(x_e) = 0. This is clear from equation (1.7): indeed, if

    \dot{x} = \frac{dx}{dt} = f(x_e) = 0

it follows that x_e is constant and, by definition, it is an equilibrium point. Equilibrium points for unforced nonautonomous systems can be defined similarly, although the time dependence brings some subtleties into this concept. See Chapter 4 for further details.

Example 1.3 Consider the following first-order system

    \dot{x} = r + x^2

where r is a parameter. To find the equilibrium points of this system, we solve the equation r + x^2 = 0 and immediately obtain that:

(i) If r < 0, the system has two equilibrium points, namely x = \pm\sqrt{-r}.

(ii) If r = 0, both of the equilibrium points in (i) collapse into one and the same, and the unique equilibrium point is x = 0.

(iii) Finally, if r > 0, then the system has no equilibrium points.

1.4 First-Order Autonomous Nonlinear Systems

It is often important and illustrative to compare linear and nonlinear systems. It will become apparent that the differences between linear and nonlinear behavior accentuate as the order of the state space realization increases. In this section we consider the simplest

case, which is that of first-order (linear and nonlinear) autonomous systems. Thus, we consider a system of the form

    \dot{x} = f(x)    (1.8)

where x(t) is a real-valued function of time. We also assume that f(\cdot) is a continuous function of x.

A very special case of (1.8) is that of a first-order linear system. In this case, f(x) = ax, and (1.8) takes the form

    \dot{x} = ax.    (1.9)

A solution of the differential equation (1.8) or (1.9) starting at x_0 is called a trajectory. It is immediately evident from (1.9) that the only equilibrium point of the first-order linear system is the origin x = 0. The simplicity associated with the linear case originates in the simple form of the differential equation (1.9). Indeed, the solution of this equation with an arbitrary initial condition x_0 \neq 0 is given by

    x(t) = e^{at} x_0.    (1.10)

According to (1.10), the trajectories of a first-order linear system behave in one of two possible ways:

Case (1), a < 0: Starting at x_0, x(t) exponentially converges to the origin.

Case (2), a > 0: Starting at x_0, x(t) diverges to infinity as t tends to infinity.

Thus, the equilibrium point of a first-order linear system can be either attractive or repelling. Attractive equilibrium points are called stable¹, while repellers are called unstable.

Consider now the nonlinear system (1.8). Our analysis in the linear case was guided by the luxury of knowing the solution of the differential equation (1.9). Unfortunately, most nonlinear equations cannot be solved analytically². Short of a solution, we look for a qualitative understanding of the behavior of the trajectories. One way to do this is to acknowledge the fact that the differential equation \dot{x} = f(x) represents a vector field on the line; that is, at each x, f(x) dictates the "velocity vector" \dot{x}, which determines how "fast" x is changing. In other words, representing \dot{x} = f(x) in a two-dimensional plane with axes x and \dot{x}, the sign of \dot{x} indicates the direction of the motion of the trajectory x(t).

¹See Chapter 3 for a more precise definition of the several notions of stability.
²This point is discussed in some detail in Chapter 2.
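The closed-form solution (1.10) can be checked against a direct numerical integration. The following is a minimal Python sketch; the values of a and x_0 and the step size are arbitrary choices for illustration:

```python
import math

a, x0 = -1.5, 2.0            # a < 0: the trajectory decays to the origin
dt, n = 1e-4, 10_000         # integrate x' = a*x up to t = 1.0

x = x0
for _ in range(n):
    x += dt * (a * x)        # forward Euler step

exact = x0 * math.exp(a * n * dt)   # closed-form solution x(t) = e^{a t} x0
print(x, exact)              # the two values agree closely (Euler error is O(dt))
```

Repeating the experiment with a > 0 shows the diverging trajectory of Case (2).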
We also assume that f () is a continuous function of x. The simplicity associated with the linear case originates in the simple form of the differential equation (1. Case (2).

Example 1.4 Consider the system

ẋ = cos x.    (1.11)

To analyze the trajectories of this system, we plot ẋ versus x, as shown in Figure 1.2. The arrows on the horizontal axis indicate the direction of the motion. From the figure we notice the following: the points where ẋ = 0, that is, where cos x intersects the real axis, are the equilibrium points of (1.11). Thus, all points of the form x = (1 + 2k)π/2, k = 0, ±1, ±2, ..., are equilibrium points of (1.11). Whenever ẋ > 0, the trajectories move to the right-hand side, and vice versa. Notice that ẋ can be either positive or negative, but it cannot change sign without passing through an equilibrium point. Thus, trajectories are forced to either converge to or diverge from an equilibrium point monotonically; in particular, oscillations around an equilibrium point can never exist in first-order systems.

Figure 1.2: The system ẋ = cos x.

From this analysis we conclude the following:

1. The system (1.11) has an infinite number of equilibrium points.

2. Exactly half of these equilibrium points are attractive, or stable, and the other half are unstable, or repellers.

The behavior described in Example 1.4 is typical of first-order autonomous nonlinear systems, in the sense that the only events that can occur to the trajectories is that either (1) they approach an equilibrium point or (2) they diverge to infinity. Recall from (1.10) that a similar behavior was found in the case of linear first-order systems. Indeed, a bit of thinking will reveal that the dynamics of these systems is dominated by the equilibrium points.
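The graphical sign analysis has a simple computational counterpart: an equilibrium x_e of ẋ = f(x) is attractive when f′(x_e) < 0 (the flow points toward x_e on both sides) and repelling when f′(x_e) > 0. A minimal sketch (the helper names are our own, not from the text) classifying the equilibria of ẋ = cos x:

```python
import math

def classify(fprime, xe):
    """Classify an equilibrium of x' = f(x) from the sign of f'(xe)."""
    return "stable" if fprime(xe) < 0 else "unstable"

fprime = lambda x: -math.sin(x)        # derivative of f(x) = cos x
eq = [(1 + 2 * k) * math.pi / 2 for k in range(-2, 3)]
labels = [classify(fprime, xe) for xe in eq]
# stability alternates along the line: exactly half the equilibria are stable
assert labels == ["stable", "unstable", "stable", "unstable", "stable"]
```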

Example 1.5 Consider again the system of Example 1.3, that is,

ẋ = r + x²,    r < 0.

Plotting ẋ versus x under the assumption that r < 0, we obtain the diagram shown in Figure 1.3. The two equilibrium points, x_e1 = -√(-r) and x_e2 = √(-r), are shown in the figure. From the analysis of the sign of ẋ, we see that x_e1 is attractive, while x_e2 is a repeller. Any trajectory starting in the interval x0 ∈ (-∞, -√(-r)) ∪ (-√(-r), √(-r)) monotonically converges to x_e1. Trajectories initiating in the interval x0 ∈ (√(-r), ∞), on the other hand, diverge toward infinity.

Figure 1.3: The system ẋ = r + x².

1.5 Second-Order Systems: Phase-Plane Analysis

In this section we consider second-order systems. This class of systems is useful in the study of nonlinear systems because they are easy to understand and, unlike first-order systems, they can be used to explain interesting features encountered in the nonlinear world. Consider a second-order autonomous system of the form

ẋ1 = f1(x1, x2)    (1.12)
ẋ2 = f2(x1, x2)    (1.13)

or, more compactly, ẋ = f(x).    (1.14)

Throughout this section we assume that the differential equation (1.14) with initial condition x(0) = x0 = [x10, x20]^T has a unique solution of the form x(t) = [x1(t), x2(t)]^T.

As in the case of first-order systems, when dealing with second-order systems it is useful to visualize the trajectories corresponding to various initial conditions in the x1-x2 plane. The technique is known as phase-plane analysis, and the x1-x2 plane is usually referred to as the phase plane. Notice that if x(t) = [x1(t), x2(t)]^T is a solution of the differential equation ẋ = f(x) starting at a certain initial state x0, then ẋ = f(x) represents the tangent vector to the curve; this solution is called a trajectory from x0 and can be represented graphically in the x1-x2 plane. The function f(x) is called a vector field on the state plane. This means that to each point x* in the plane we can assign a vector with the amplitude and direction of f(x*). For easy visualization we can represent f(x) as a vector based at x; that is, we assign to x the directed line segment from x to x + f(x). Repeating this operation at every point in the plane, we obtain a vector field diagram. From equations (1.12)-(1.13) we have that the slope of a trajectory at x is

ẋ2/ẋ1 = f2(x)/f1(x)

so the vector field at each point is tangent to the trajectory through that point. Thus it is possible to construct the trajectory starting at an arbitrary point x0 from the vector field diagram: given any initial condition x0 on the plane, from the phase diagram it is easy to sketch the trajectories from x0. In this book we do not emphasize the manual construction of these diagrams; several computer packages can be used for this purpose.

Example 1.6 Consider the second-order system

ẋ1 = x2
ẋ2 = -x1 - x2.

Figure 1.4 shows a phase-plane diagram of trajectories of this system, along with the vector field diagram. This plot, as well as many similar ones presented throughout this book, was obtained using MAPLE 7.

Figure 1.4: Vector field diagram for the system of Example 1.6.

1.6 Phase-Plane Analysis of Linear Time-Invariant Systems

Now consider a linear time-invariant system of the form

ẋ = Ax,    A ∈ ℝ^{2×2}    (1.15)

where the symbol ℝ^{2×2} indicates the set of 2 × 2 matrices with real entries. These systems are well understood, and the solution of this differential equation starting at t = 0 with an initial condition x0 has the following well-established form:

x(t) = e^{At} x0.

We are interested in a qualitative understanding of the form of the trajectories. To this end, we consider several cases, depending on the properties of the matrix A. Throughout this section we denote by λ1, λ2 the eigenvalues of the matrix A, and by v1, v2 the corresponding eigenvectors.

CASE 1: Diagonalizable Systems

Consider the system (1.15). Assume that the eigenvalues of the matrix A are real, and define the following coordinate transformation:

x = Ty,    T ∈ ℝ^{2×2}, T nonsingular.    (1.16)

Given that T is nonsingular, its inverse, denoted T⁻¹, exists, and we can write

ẏ = T⁻¹ATy = Dy.    (1.17)

Transformations of the form D = T⁻¹AT are very well known in linear algebra and are called similarity transformations; the matrices A and D are said to be similar, and they share several interesting properties:

Property 1: The matrices A and D share the same eigenvalues λ1 and λ2.

Property 2: Assume that the eigenvectors v1, v2 associated with the real eigenvalues λ1, λ2 are linearly independent. In this case the matrix A is similar to the diagonal matrix

D = [ λ1   0 ]
    [ 0   λ2 ]

that is, A is diagonalizable, and the matrix T defined in (1.16) can be formed by placing the eigenvectors v1 and v2 as its columns. The importance of this transformation is that in the new coordinates y = [y1, y2]^T the system is uncoupled:

ẏ1 = λ1 y1    (1.18)
ẏ2 = λ2 y2.    (1.19)

Both equations can be solved independently, and the general solution of each is given by (1.10). This means that the trajectories along each of the coordinate axes y1 and y2 are independent of one another. Several interesting cases can be distinguished, depending on the sign of the eigenvalues λ1 and λ2. The equilibrium point of a system where both eigenvalues have the same sign is called a node. The following examples clarify this point.

Example 1.7 Consider the system

[ẋ1]   [ 0   1 ] [x1]
[ẋ2] = [-2  -3 ] [x2]

The eigenvalues in this case are λ1 = -1, λ2 = -2; A is diagonalizable, and D = T⁻¹AT with

D = [ -1   0 ],    T = [  1   1 ]
    [  0  -2 ]         [ -1  -2 ]

where the columns of T are the eigenvectors associated with λ1 and λ2.

In the new coordinates y = T⁻¹x, the modified system is

ẏ1 = -y1
ẏ2 = -2y2

or ẏ = Dy, which is uncoupled. Figure 1.5 shows the trajectories of both the original system [part (a)] and the uncoupled system after the coordinate transformation [part (b)]. It is clear from part (b) that the origin is attractive in both directions, as expected given that both eigenvalues are negative. Part (a) retains this property, only with a distortion of the coordinate axes. It is in fact worth noting that Figure 1.5(a) can be obtained from Figure 1.5(b) by applying the linear transformation of coordinates x = Ty. The equilibrium point is thus said to be a stable node.

Figure 1.5: System trajectories of Example 1.7: (a) original system; (b) uncoupled system.

Example 1.8 Consider the system

[ẋ1]   [ 1   1 ] [x1]
[ẋ2] = [ 0   2 ] [x2]

The eigenvalues in this case are λ1 = 1, λ2 = 2. Applying the linear coordinate transformation x = Ty, we obtain

ẏ1 = y1
ẏ2 = 2y2.

Figure 1.6 shows the trajectories of both the original and the uncoupled systems after the coordinate transformation. It is clear from the figures that the origin is repelling in both directions, as expected given that both eigenvalues are positive. The equilibrium point in this case is said to be an unstable node.

Figure 1.6: System trajectories of Example 1.8: (a) uncoupled system; (b) original system.

Example 1.9 Finally, consider the system

[ẋ1]   [ -1   1 ] [x1]
[ẋ2] = [  0   2 ] [x2]

The eigenvalues in this case are λ1 = -1, λ2 = 2. Applying the linear coordinate transformation x = Ty, we obtain

ẏ1 = -y1
ẏ2 = 2y2.

Figure 1.7 shows the trajectories of both the original and the uncoupled systems after the coordinate transformation. Given the different signs of the eigenvalues, the equilibrium point is attractive in one direction but repelling in the other. The equilibrium point in this case is said to be a saddle.

CASE 2: Nondiagonalizable Systems

Assume that the eigenvalues of the matrix A are real and identical (i.e., λ1 = λ2 = λ). In this case it may or may not be possible to associate two linearly independent eigenvectors v1 and v2 with the sole eigenvalue λ. If this is possible, the matrix A is diagonalizable and the trajectories can be analyzed by the previous method. If, on the other hand, only one linearly independent eigenvector v can be associated with λ, then the matrix A is not diagonalizable.

Figure 1.7: System trajectories of Example 1.9: (a) uncoupled system; (b) original system.

In this case there always exists a similarity transformation P such that

P⁻¹AP = J = [ λ   1 ]
            [ 0   λ ]

The matrix J is in the so-called Jordan canonical form. The transformed system is ẏ = P⁻¹APy, or

ẏ1 = λy1 + y2
ẏ2 = λy2

and the solution of this system of equations with initial condition y0 = [y10, y20]^T is as follows:

y1 = y10 e^{λt} + y20 t e^{λt}
y2 = y20 e^{λt}.

The shape of the solution is a somewhat distorted form of those encountered for diagonalizable systems. The equilibrium point is called a stable node if λ < 0 and an unstable node if λ > 0.

Example 1.10 Consider the system

[ẋ1]   [ -2   1 ] [x1]
[ẋ2] = [  0  -2 ] [x2]

The eigenvalues in this case are λ1 = λ2 = λ = -2, and the matrix A is not diagonalizable. Figure 1.8 shows the trajectories of the system. In this example the eigenvalue λ < 0, and thus the equilibrium point [0, 0] is a stable node.

Figure 1.8: System trajectories for the system of Example 1.10.

CASE 3: Systems with Complex Conjugate Eigenvalues
The most interesting case occurs when the eigenvalues of the matrix A are complex conjugate, λ1,2 = α ± jβ. It can be shown that in this case a similarity transformation M can be found that renders the following similar matrix:

M⁻¹AM = Q = [ α  -β ]
            [ β   α ]

Thus the transformed system has the form

ẏ1 = αy1 - βy2    (1.20)
ẏ2 = βy1 + αy2.    (1.21)

The solution of this system of differential equations can be greatly simplified by introducing polar coordinates:

ρ = √(y1² + y2²),    θ = tan⁻¹(y2/y1).

Converting (1.20) and (1.21) to polar coordinates, we obtain

ρ̇ = αρ,    θ̇ = β

which has the following solution:

ρ = ρ0 e^{αt}    (1.22)
θ = θ0 + βt.    (1.23)

Figure 1.9: Trajectories for the system of Example 1.11.

From here we conclude the following:

In the polar coordinate system, ρ either increases exponentially, decreases exponentially, or stays constant, depending on whether the real part α of the eigenvalues λ1,2 is positive, negative, or zero.

The phase angle increases linearly with a "velocity" that depends on the imaginary part β of the eigenvalues λ1,2.

In the y1-y2 coordinate system, (1.22)-(1.23) represent an exponential spiral. If α > 0, the trajectories diverge from the origin as t increases. If α < 0, on the other hand, the trajectories converge toward the origin. The equilibrium [0, 0] in this case is said to be a stable focus (if α < 0) or an unstable focus (if α > 0). If α = 0, the trajectories are closed ellipses; in this case the equilibrium [0, 0] is said to be a center.
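The closed-form polar solution (1.22)-(1.23) can be checked against a direct numerical integration of (1.20)-(1.21); the values of α, β, the step size, and the horizon below are arbitrary illustrative choices, not from the text:

```python
import math

def rk4_step(f, x, dt):
    """One classical Runge-Kutta 4 step for x' = f(x)."""
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

alpha, beta = -0.3, 2.0                       # alpha < 0: a stable focus
f = lambda y: [alpha * y[0] - beta * y[1],    # equation (1.20)
               beta * y[0] + alpha * y[1]]    # equation (1.21)
y = [1.0, 0.0]                                # so rho0 = 1
t, dt = 0.0, 0.001
for _ in range(4000):                         # integrate to t = 4
    y = rk4_step(f, y, dt)
    t += dt
rho = math.hypot(y[0], y[1])
assert abs(rho - math.exp(alpha * t)) < 1e-6  # rho(t) = rho0 * e^{alpha t}
```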

Example 1.11 Consider the following system:

[ẋ1]   [  0   1 ] [x1]
[ẋ2] = [ -1   0 ] [x2]    (1.24)

The eigenvalues of the A matrix are λ1,2 = ±j. Figure 1.9 shows that the trajectories in this case are closed ellipses. This means that the dynamical system (1.24) is oscillatory. The amplitude of the oscillations is determined by the initial conditions.

Example 1.12 Consider the following system:

[ẋ1]   [ 0.5    1  ] [x1]
[ẋ2] = [ -1    0.5 ] [x2]

Figure 1.10: Trajectories for the system of Example 1.12.

The eigenvalues of the A matrix are λ1,2 = 0.5 ± j; thus the origin is an unstable focus. Figure 1.10 shows the spiral behavior of the trajectories. The system in this case is also oscillatory, but the amplitude of the oscillations grows exponentially with time, because of the presence of the nonzero α term.

The following table summarizes the different cases:

Eigenvalues                                   Equilibrium point
λ1, λ2 real and negative                      stable node
λ1, λ2 real and positive                      unstable node
λ1, λ2 real, opposite signs                   saddle
λ1, λ2 complex with negative real part        stable focus
λ1, λ2 complex with positive real part        unstable focus
λ1, λ2 imaginary                              center
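Since λ1 + λ2 = tr A and λ1λ2 = det A, the table can be turned into a small classifier that never computes the eigenvalues explicitly. The sketch below (our own helper, not from the text) ignores the degenerate boundary cases such as a zero eigenvalue:

```python
def classify_equilibrium(a, b, c, d):
    """Classify the origin of x' = [[a, b], [c, d]] x from trace and determinant."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det          # discriminant of the characteristic polynomial
    if disc >= 0:                     # real eigenvalues
        r1 = (tr + disc ** 0.5) / 2
        r2 = (tr - disc ** 0.5) / 2
        if r1 * r2 < 0:
            return "saddle"
        return "stable node" if tr < 0 else "unstable node"
    if tr == 0:                       # purely imaginary eigenvalues
        return "center"
    return "stable focus" if tr < 0 else "unstable focus"

assert classify_equilibrium(0, 1, -2, -3) == "stable node"        # eigenvalues -1, -2
assert classify_equilibrium(0, 1, -1, 0) == "center"              # eigenvalues +-j
assert classify_equilibrium(0.5, 1, -1, 0.5) == "unstable focus"  # eigenvalues 0.5 +- j
```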

As a final remark, we notice that the study of the trajectories of linear systems about the origin is important because, as we will see, in a neighborhood of an equilibrium point the behavior of a nonlinear system can often be determined by linearizing the nonlinear equations and studying the trajectories of the resulting linear system.


1.7 Phase-Plane Analysis of Nonlinear Systems

We mentioned earlier that nonlinear systems are more complex than their linear counterparts, and that their differences accentuate as the order of the state space realization increases. The question then is: What features characterize second-order nonlinear equations not already seen in the linear case? The answer is: oscillations! We will say that a system oscillates when it has a nontrivial periodic solution, that is, a nonstationary trajectory for which there exists T > 0 such that

x(t + T) = x(t)    ∀t ≥ 0.
Oscillations are indeed a very important phenomenon in dynamical systems.
In the previous section we saw that if the eigenvalues of a second-order linear time-invariant (LTI) system are imaginary, the equilibrium point is a center and the response is oscillatory. In practice, however, LTI systems do not constitute oscillators of any practical use. The reason is twofold: (1) as noticed in Example 1.11, the amplitude of the oscillations is determined by the initial conditions; and (2) the very existence and maintenance of the oscillations depend on the existence of purely imaginary eigenvalues of the A matrix in the state space realization of the dynamical equations. If the real part of the eigenvalues is not identically zero, then the trajectories are not periodic: the oscillations will either be damped out and eventually disappear, or the solutions will grow unbounded. This means that the oscillations in linear systems are not structurally stable. Small friction forces or neglected viscous forces often introduce damping that, however small, adds a negative component to the eigenvalues and consequently damps the oscillations. Nonlinear systems, on the other hand, can have self-excited oscillations, known as limit cycles.

1.7.1 Limit Cycles

Consider the following system, commonly known as the Van der Pol oscillator:

ÿ - µ(1 - y²)ẏ + y = 0,    µ > 0.    (1.25)

Defining state variables x1 = y and x2 = ẏ, we obtain

ẋ1 = x2
ẋ2 = -x1 + µ(1 - x1²)x2.    (1.26)

Notice that if µ = 0 in equation (1.26), then the resulting system is

[ẋ1]   [  0   1 ] [x1]
[ẋ2] = [ -1   0 ] [x2]


Figure 1.11: Stable limit cycle: (a) vector field diagram; (b) the closed orbit.

which is linear time-invariant. Moreover, the eigenvalues of the A matrix are λ1,2 = ±j, which implies that the equilibrium point [0, 0] is a center. The term µ(1 - x1²)x2 in equation (1.26) provides additional dynamics that, as we will see, contribute to maintaining the oscillations. Figure 1.11(a) shows the vector field diagram for the system (1.25)-(1.26) assuming µ = 1. Notice the difference between the Van der Pol oscillator of this example and the center of Example 1.11. In Example 1.11 there is a continuum of closed orbits: a trajectory initiating at an initial condition x0 at t = 0 is confined to the orbit passing through x0 for all future time. In the Van der Pol oscillator of this example there is only one isolated orbit, and all trajectories converge to this orbit as t → ∞. An isolated orbit such as this is called a limit cycle. Figure 1.11(b) shows a clearer picture of the limit cycle.
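The convergence of trajectories to the limit cycle can be observed numerically. The sketch below (step sizes and horizons are arbitrary choices, not from the text) integrates (1.25)-(1.26) with µ = 1 from one initial condition inside the orbit and one outside; after the transient, both settle on an oscillation whose amplitude is close to 2, the well-known amplitude of the Van der Pol limit cycle:

```python
def rk4_step(f, x, dt):
    """One classical Runge-Kutta 4 step for x' = f(x)."""
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

mu = 1.0
f = lambda x: [x[1], -x[0] + mu * (1 - x[0] ** 2) * x[1]]   # (1.25)-(1.26)

for x0 in ([0.1, 0.0], [4.0, 0.0]):    # one orbit starting inside, one outside
    x = x0
    for _ in range(5000):              # let the transient die out (t = 50)
        x = rk4_step(f, x, 0.01)
    amp = 0.0
    for _ in range(1000):              # track more than one full period (t = 10)
        x = rk4_step(f, x, 0.01)
        amp = max(amp, abs(x[0]))
    assert 1.8 < amp < 2.3             # both settle on the same limit cycle
```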

We point out that the Van der Pol oscillator discussed here is not merely a theoretical example. These equations derive from simple electric circuits encountered in the first radios. Figure 1.12 shows a schematic of such a circuit, where R represents a nonlinear resistance. See Reference [84] for a detailed analysis of the circuit.

As mentioned, the Van der Pol oscillator of this example has the property that all trajectories converge toward the limit cycle. An orbit with this property is said to be a stable limit cycle. There are three types of limit cycles, depending on the behavior of the trajectories in the vicinity of the orbit: (1) stable, (2) unstable, and (3) semistable. A limit cycle is said to be unstable if all trajectories in the vicinity of the orbit diverge from it as t → ∞. It is said to be semistable if the trajectories on one side of the orbit (either inside or outside) converge to it while those on the other side diverge from it. An example of an unstable limit cycle can be obtained by modifying the previous example as follows:

ẋ1 = -x2
ẋ2 = x1 - µ(1 - x1²)x2.


Figure 1.12: Nonlinear RLC circuit.

Figure 1.13: Unstable limit cycle.

Figure 1.13 shows the vector field diagram of this system with µ = 1. As can be seen in the figure, all trajectories diverge from the orbit and the limit cycle is unstable.

1.8 Higher-Order Systems

When the order of the state space realization is greater than or equal to 3, nothing significantly different happens with linear time-invariant systems. The solution of the state equation with initial condition x0 is still x(t) = e^{At} x0, and the eigenvalues of the A matrix still determine the behavior of the trajectories. Nonlinear equations, on the other hand, have much more room in which to maneuver. When the dimension of the state space realization increases from 2 to 3, a new phenomenon is encountered: namely, chaos.


1.8.1 Chaos

Consider the following system of nonlinear equations:

ẋ = σ(y - x)
ẏ = rx - y - xz
ż = xy - bz

where σ, r, b > 0. This system was introduced by Ed Lorenz in 1963 as a model of convection rolls in the atmosphere. Since Lorenz's publication, similar equations have been found to appear in lasers and other systems. We now consider the following set of values: σ = 10, b = 8/3, and r = 28, which are the original parameters considered by Lorenz. It is easy to prove that the system has three equilibrium points. A more detailed analysis reveals that, with these values of the parameters, none of these three equilibrium points is actually stable, but that nonetheless all trajectories are contained within a certain ellipsoidal region in ℝ³.

Figure 1.14(a) shows a three-dimensional view of a trajectory starting at a randomly selected initial condition, while Figure 1.14(b) shows a projection of the same trajectory onto the x-z plane. It is apparent from both figures that the trajectory follows a recurrent, although not periodic, motion switching between two surfaces. It can be seen in Figure 1.14(a) that each of these two surfaces constitutes a very thin set of points, almost defining a two-dimensional plane. This set is called a "strange attractor," and the two surfaces, which together resemble a pair of butterfly wings, are much more complex than they appear in our figure. Each surface is in reality formed by an infinite number of complex surfaces forming what today is called a fractal.

It is difficult to define what constitutes a chaotic system; in fact, to the present time no universally accepted definition has been proposed. Nevertheless, the essential elements constituting chaotic behavior are the following:

A chaotic system is one where trajectories present aperiodic behavior and are critically sensitive with respect to initial conditions. Here aperiodic behavior implies that the trajectories never settle down to fixed points or to periodic orbits. Sensitive dependence with respect to initial conditions means that very small differences in initial conditions can lead to trajectories that deviate exponentially rapidly from each other.

Both of these features are indeed present in Lorenz's system, as is apparent in Figures 1.14(a) and 1.14(b). It is of great theoretical importance that chaotic behavior cannot exist in autonomous systems of dimension less than 3. The justification of this statement comes from the well-known Poincaré-Bendixson theorem, which we state below without proof.
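Both features can be reproduced with a straightforward numerical experiment on Lorenz's equations. In the sketch below (the integration step, horizon, and perturbation size are arbitrary choices, not from the text), two trajectories whose initial conditions differ by 10⁻⁸ remain bounded yet end up far apart:

```python
def rk4_step(f, x, dt):
    """One classical Runge-Kutta 4 step for x' = f(x)."""
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

sigma, r, b = 10.0, 28.0, 8.0 / 3.0           # Lorenz's original parameters
f = lambda s: [sigma * (s[1] - s[0]),
               r * s[0] - s[1] - s[0] * s[2],
               s[0] * s[1] - b * s[2]]

p = [1.0, 1.0, 1.0]
q = [1.0, 1.0, 1.0 + 1e-8]                    # perturbed initial condition
for _ in range(40000):                        # integrate to t = 40
    p = rk4_step(f, p, 0.001)
    q = rk4_step(f, q, 0.001)
    assert all(abs(c) < 100 for c in p)       # trajectory stays bounded
gap = sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5
assert gap > 1.0   # the 10^-8 difference has grown by many orders of magnitude
```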

Figure 1.14: (a) Three-dimensional view of the trajectories of Lorenz's chaotic system; (b) two-dimensional projection of the trajectory of Lorenz's system.

Theorem 1.1 [76] Consider the two-dimensional system

ẋ = f(x)

where f : ℝ² → ℝ² is continuously differentiable in D ⊂ ℝ², and assume that

(1) R ⊂ D is a closed and bounded set that contains no equilibrium points of ẋ = f(x);

(2) there exists a trajectory x(t) that is confined to R, that is, one that starts in R and remains in R for all future time.

Then either R is a closed orbit, or the trajectory x(t) converges toward a closed orbit as t → ∞.

According to this theorem, in two dimensions the Poincaré-Bendixson theorem predicts that a trajectory enclosed by a closed bounded region containing no equilibrium points must eventually approach a limit cycle. In higher-order systems the new dimension adds an extra degree of freedom that allows trajectories to never settle down to an equilibrium point or closed orbit, as seen in the Lorenz system.

1.9 Examples of Nonlinear Systems

We conclude this chapter with a few examples of "real" dynamical systems and their nonlinear models. Our intention at this point is simply to show that nonlinear equations arise frequently in dynamical systems commonly encountered in real life. The examples in this section are in fact popular laboratory experiments used in many universities around the world.

1.9.1 Magnetic Suspension System

Magnetic suspension systems are a familiar setup that is receiving increasing attention in applications where it is essential to reduce the friction force due to mechanical contact. Magnetic suspension systems are commonly encountered in high-speed trains and magnetic bearings, as well as in gyroscopes and accelerometers. The basic configuration is shown in Figure 1.15.

Figure 1.15: Magnetic suspension system.

According to Newton's second law of forces, the equation of the motion of the ball is

m ÿ = -f_k + mg + F    (1.27)

where m is the mass of the ball, g the acceleration due to gravity, f_k is the friction force, and F is the electromagnetic force due to the current i. To complete the model, we need to find a proper model for the magnetic force F. To this end we notice that the energy stored in the electromagnet is given by E = (1/2)Li², where L is the inductance of the electromagnet. This parameter is not constant, since it depends on the position of the ball. We can approximate L as follows:

L = L(y) = λ/(1 + µy).    (1.28)

This model considers the fact that as the ball approaches the magnetic core of the coil, the flux in the magnetic circuit is affected, resulting in an increase of the value of the inductance. The energy in the magnetic circuit is thus E = E(i, y) = (1/2)L(y)i², and the force F = F(i, y) is given by

F(i, y) = ∂E/∂y = (i²/2) ∂L(y)/∂y = -λµi² / (2(1 + µy)²).    (1.29)

Assuming that the friction force has the form

f_k = k ẏ    (1.30)

where k > 0 is the viscous friction coefficient, and substituting (1.29) and (1.30) into (1.27), we obtain the following equation of motion of the ball:

m ÿ = -k ẏ + mg - λµi² / (2(1 + µy)²).    (1.31)

To complete the model, we recognize that the external circuit obeys Kirchhoff's voltage law, and thus we can write

v = Ri + d/dt (Li)    (1.32)

where

d/dt (Li) = d/dt [λi / (1 + µy)] = -[λµi / (1 + µy)²] dy/dt + [λ / (1 + µy)] di/dt.    (1.33)

Substituting (1.33) into (1.32), we obtain

v = Ri - [λµi / (1 + µy)²] dy/dt + [λ / (1 + µy)] di/dt.    (1.34)

Defining state variables x1 = y, x2 = ẏ, x3 = i, we obtain the following state space model:

ẋ1 = x2
ẋ2 = g - (k/m) x2 - λµx3² / (2m(1 + µx1)²)
ẋ3 = [(1 + µx1)/λ] [ -Rx3 + λµ x2 x3 / (1 + µx1)² + v ].
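As a quick consistency check on the magnetic-suspension state model, at an equilibrium we must have x2 = 0 and ẋ2 = 0, which gives the coil current i_eq = (1 + µ y_eq)√(2mg/(λµ)) that holds the ball at a position y_eq, with constant input v_eq = R i_eq. The parameter values in the sketch below are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical parameter values (not from the text), for illustration only.
m, g, k, lam, mu, R = 0.1, 9.81, 0.01, 0.5, 2.0, 1.0
y_eq = 0.05                                   # desired ball position

# Balancing gravity against the magnetic force in the x2 equation:
i_eq = (1 + mu * y_eq) * math.sqrt(2 * m * g / (lam * mu))
v_eq = R * i_eq                               # constant voltage holding the ball

x = [y_eq, 0.0, i_eq]                         # candidate equilibrium state
dx1 = x[1]
dx2 = g - (k / m) * x[1] - lam * mu * x[2] ** 2 / (2 * m * (1 + mu * x[0]) ** 2)
dx3 = (1 + mu * x[0]) / lam * (
    -R * x[2] + lam * mu * x[1] * x[2] / (1 + mu * x[0]) ** 2 + v_eq)

# All three derivatives vanish, so (y_eq, 0, i_eq) with input v_eq is an equilibrium.
assert abs(dx1) < 1e-12 and abs(dx2) < 1e-9 and abs(dx3) < 1e-9
```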

1.9.2 Inverted Pendulum on a Cart

Consider the pendulum on a cart shown in Figure 1.16.

Figure 1.16: Pendulum-on-a-cart experiment.

Here M represents the mass of the cart, G represents the center of gravity of the pendulum, and L = 2l, m, and J denote the length, mass, and moment of inertia about the center of gravity of the pendulum, respectively. We denote by θ the angle with respect to the vertical and by x the position of the cart. The horizontal and vertical coordinates of G are given by

x_G = x + (L/2) sin θ = x + l sin θ    (1.35)
y_G = (L/2) cos θ = l cos θ.    (1.36)

The free-body diagrams of the cart and pendulum are shown in Figure 1.17, where F_x and F_y represent the reaction forces at the pivot point. Summing forces we obtain the following equations:

F_x = m ẍ + m l θ̈ cos θ - m l θ̇² sin θ    (1.37)
F_y - mg = -m l θ̈ sin θ - m l θ̇² cos θ    (1.38)
F_y l sin θ - F_x l cos θ = J θ̈.    (1.39)

Considering the horizontal forces acting on the cart, we have that

M ẍ = f_x - F_x.    (1.40)

Substituting (1.40) into (1.37)-(1.39), and defining state variables x1 = θ, x2 = θ̇, we obtain

ẋ1 = x2

Figure 1.17: Free-body diagrams of the pendulum-on-a-cart system.

ẋ2 = [ g sin x1 - a m l x2² sin(2x1) - 2a cos(x1) f_x ] / [ 4l/3 - 2 a m l cos²(x1) ]

where we have substituted J = (1/3) m l², and where a ≜ 1 / (2(m + M)).

1.9.3 The Ball-and-Beam System

The ball-and-beam system is another interesting and very familiar experiment commonly encountered in control systems laboratories in many universities. Figure 1.18 shows a schematic of this system. The beam can rotate by applying a torque at the center of rotation, and the ball can move freely along the beam. Assuming that the ball is always in contact with the beam and that rolling occurs without slipping, the Lagrange equations of motion are (see [28] for further details)

0 = (J_b/R² + m) r̈ + mg sin θ - m r θ̇²
τ = (m r² + J + J_b) θ̈ + 2 m r ṙ θ̇ + m g r cos θ

where J represents the moment of inertia of the beam; R, m, and J_b are the radius, mass, and moment of inertia of the ball; the acceleration of gravity is represented by g; and r and θ are shown in Figure 1.18.

Figure 1.18: Ball-and-beam experiment.

Defining now state variables x1 = r, x2 = ṙ, x3 = θ, and x4 = θ̇, we obtain the following state space realization:

ẋ1 = x2
ẋ2 = [ -mg sin x3 + m x1 x4² ] / ( m + J_b/R² )
ẋ3 = x4
ẋ4 = [ τ - m g x1 cos x3 - 2 m x1 x2 x4 ] / ( m x1² + J + J_b )

1.10 Exercises

(1.1) For the following dynamical systems, plot ẋ = f(x) versus x. From each graph, find the equilibrium points and analyze their stability:

(a) ẋ = x² - 2
(b) ẋ = x³ - 2x² - x + 2
(c) ẋ = tan x,    -π/2 < x < π/2

(1.2) Given the following linear systems, you are asked to (i) find the eigenvalues of the A matrix and classify the stability of the origin, and (ii) draw the phase portrait and verify your conclusions in part (i).

INTRODUCTION (a) [i2J-[0 (b) 4J[x2] [i2J-[ (c) 0 -4][x2 1 [x2J-[ 01 41 [xz (d) I 1 iz1 -[0 41[x2J (e) [2] =[2 01][X2] (f) [x2J-[2 (g) -2J[x2 [ xl ±2 1 -[ 2 0 0 1 1 lIxl 2 -1 X21 (1.28 CHAPTER 1.3) Repeat problem (1.2) for the following three-dimensional systems: (a) zl X2 = -6 5 -4 5 0 6 xl X2 ±3 (b) x3 xl i2 23 (c) -2 -2 -1 I = 0 0 -4 -1 0 1 1 xl x2 x3 -6 xl x2 -2 6 0 0 xl X2 X3 4 0 23 6 .

(1.4) For each of the following systems you are asked to (i) find the equilibrium points, (ii) find the phase portrait, and (iii) classify each equilibrium point as stable or unstable, based on the analysis of the trajectories:

(a) ẋ1 = -x1 + x2
    ẋ2 = -x2

(b) ẋ1 = -x2 + 2x1(x1 + x2)
    ẋ2 = x1 + 2x2(x1 + x2)

(c) ẋ1 = cos x2
    ẋ2 = sin x1

(1.5) Find a state space realization for the double-pendulum system shown in Figure 1.19. Define state variables x1 = θ1, x2 = θ̇1, x3 = θ2, x4 = θ̇2.

Figure 1.19: Double-pendulum of Exercise (1.5).

Notes and References

We will often refer to linear time-invariant systems throughout the rest of the book. There are many good references on state space theory of LTI systems; see, for example, References [15], [2], [22], and [58]. First-order systems are harder to find in the literature; Section 1.4 follows Strogatz [76]. Good sources on phase-plane analysis of second-order systems are References [32] and [59]; see also Arnold [3] for an excellent in-depth treatment of phase-plane analysis of nonlinear systems. The literature on chaotic systems is very extensive; excellent sources on chaotic dynamical systems are References [26], [64], and [40]. See Strogatz [76] for an inspiring, remarkably readable introduction to the subject; Section 1.8 is based mainly on this reference. The magnetic suspension system of Section 1.9.1 is a slightly modified version of a laboratory experiment used at the University of Alberta, prepared by Drs. A. Lynch and Q. Zhao; our modification follows the model used in Reference [81]. The pendulum-on-a-cart example follows References [9] and [13]. Our simple model of the ball-and-beam experiment was taken from Reference [43]. This model is not very accurate since it neglects the moment of inertia of the ball; see Reference [28] for a more complete version of this model.

Chapter 2

Mathematical Preliminaries

This chapter collects some background material needed throughout the book. As the material is standard and is available in many textbooks, few proofs are offered; the emphasis has been placed on explaining the concepts and pointing out their importance in later applications. More detailed expositions can be found in the references listed at the end of the chapter; they are, however, not essential for the understanding of the rest of the book.

2.1 Sets

We assume that the reader has some acquaintance with the notion of set. A set is a collection of objects, sometimes called elements or points. If A is a set and x is an element of A, we write x ∈ A. If A and B are sets and if every element of A is also an element of B, we say that B includes A and that A is a subset of B, and we write A ⊂ B or B ⊃ A. The empty set, denoted ∅, has no elements; it is thus contained in every set, and we can write ∅ ⊂ A. The union and intersection of A and B are defined by

A ∪ B = {x : x ∈ A or x ∈ B}    (2.1)
A ∩ B = {x : x ∈ A and x ∈ B}.    (2.2)

Assume now that A and B are nonempty sets. The Cartesian product A × B of A and B is the set of all ordered pairs of the form (a, b) with a ∈ A and b ∈ B:

A × B = {(a, b) : a ∈ A and b ∈ B}.    (2.3)

2.2 Metric Spaces

In real and complex analysis many results depend solely on the idea of distance between numbers x and y. Metric spaces form a natural generalization of this concept.

Definition 2.1 A metric space is a pair (X, d) of a nonempty set X and a metric or distance function d : X × X → ℝ such that, for all x, y, z ∈ X, the following conditions hold:

(i) d(x, y) = 0 if and only if x = y;

(ii) d(x, y) = d(y, x);

(iii) d(x, z) ≤ d(x, y) + d(y, z).

Defining property (iii) is called the triangle inequality. Notice that, letting x = z in (iii) and taking account of (i) and (ii), we have

0 = d(x, x) ≤ d(x, y) + d(y, x) = 2d(x, y)

from which it follows that d(x, y) ≥ 0 for all x, y ∈ X.

Throughout the rest of the book, ℝ and ℂ denote the fields of real and complex numbers, respectively; ℤ represents the set of integers; and ℝ⁺ and ℤ⁺ represent the subsets of nonnegative elements of ℝ and ℤ, respectively. Finally, ℝ^{m×n} denotes the set of real matrices with m rows and n columns.

2.3 Vector Spaces

So far we have been dealing with metric spaces, where the emphasis was placed on the notion of distance. The next step consists of providing the space with a proper algebraic structure. If we define addition of elements of the space and also multiplication of elements of the space by real or complex numbers, we arrive at the notion of vector space. Alternative names for vector spaces are linear spaces and linear vector spaces. In the following definition F denotes a field of scalars, which can be either the real or the complex number system.

Definition 2.2 A vector space over F is a nonempty set X with a function "+" : X × X → X and a function "·" : F × X → X such that, for all λ, µ ∈ F and x, y, z ∈ X, the following conditions hold:

(1) x + y = y + x (addition is commutative);

(2) x + (y + z) = (x + y) + z (addition is associative);

(3) ∃ 0 ∈ X : x + 0 = x ("0" is the neutral element in the operation of addition).
(4) ∃ −x ∈ X : x + (−x) = 0 (every x ∈ X has a negative −x ∈ X such that their sum is the neutral element defined in (3)).
(5) ∃ 1 ∈ F : 1 · x = x ("1" is the neutral element in the operation of scalar multiplication).
(6) λ(x + y) = λx + λy (first distributive property).
(7) (λ + μ)x = λx + μx (second distributive property).
(8) λ(μx) = (λμ)x (scalar multiplication is associative).

A vector space is called real or complex according to whether the field F is the real or the complex number system. We will restrict our attention to real vector spaces, so from now on we assume that F = R.

According to this definition, a linear space is a structure formed by a set X furnished with two operations: vector addition and scalar multiplication. The essential feature of the definition is that the set X is closed under these two operations. This means that when two vectors x, y ∈ X are added, the resulting vector z = x + y is also an element of X; similarly, when a vector x ∈ X is multiplied by a scalar a ∈ R, the resulting scaled vector ax is also in X.

A simple and very useful example of a linear space is the n-dimensional "Euclidean" space R^n, consisting of vectors of the form

    x = [x1, x2, ..., xn]^T.

More precisely, if X = R^n and addition and scalar multiplication are defined as the usual coordinatewise operations,

    x + y = [x1 + y1, x2 + y2, ..., xn + yn]^T,     λx = [λx1, λx2, ..., λxn]^T,

then it is straightforward to show that R^n satisfies properties (1)-(8) in Definition 2.2. In the sequel, we will denote by x^T the transpose of the vector x.

If x = [x1, x2, ..., xn]^T, then x^T is the "row vector" x^T = [x1, x2, ..., xn]. The inner product of two vectors x, y ∈ R^n is x^T y = Σ_{i=1}^n x_i y_i.

Throughout the rest of the book we also encounter function spaces, namely spaces where the vectors in X are functions of time. Our next example is perhaps the simplest space of this kind.

Example 2.1 Let X be the space of continuous real functions x = x(t) over the closed interval 0 ≤ t ≤ 1. It is easy to see that this X is a (real) linear space. Notice that it is closed with respect to addition, since the sum of two continuous functions is once again continuous.

2.3.1 Linear Independence and Basis

We now look at the concept of vector space in more detail. The following definition introduces the fundamental notion of linear independence.

Definition 2.3 A finite set {x_i} of vectors is said to be linearly dependent if there exists a corresponding set {α_i} of scalars, not all zero, such that

    Σ_i α_i x_i = 0.

On the other hand, if Σ_i α_i x_i = 0 implies that α_i = 0 for each i, the set {x_i} is said to be linearly independent.

Example 2.2 Every set containing a linearly dependent subset is itself linearly dependent.

Example 2.3 Consider the space R^n and let

    e_i = [0, ..., 0, 1, 0, ..., 0]^T,

where the 1 element is in the ith row and there are zeros in the other n − 1 rows. Then {e1, e2, ..., en} is called the set of unit vectors in R^n. This set is linearly independent, since

    λ1 e1 + λ2 e2 + ··· + λn en = [λ1, λ2, ..., λn]^T = 0

is equivalent to λ1 = λ2 = ··· = λn = 0.

Definition 2.4 A basis in a vector space X is a set B of linearly independent vectors such that every vector in X is a linear combination of elements in B.

Example 2.4 Consider the space R^n. The set of unit vectors e_i, i = 1, ..., n, forms a basis for this space, since the e_i are linearly independent and, moreover, any vector x ∈ R^n can be obtained as a linear combination of the e_i's: x = Σ_{i=1}^n x_i e_i.

It is an important property of any finite-dimensional vector space with basis {b1, b2, ..., bn} that the linear combination of the basis vectors that produces a given vector x is unique. To prove that this is the case, assume that we have two different linear combinations producing the same x, that is,

    x = Σ_{i=1}^n λ_i b_i = Σ_{i=1}^n η_i b_i.

But then, by subtraction, we have that

    Σ_{i=1}^n (λ_i − η_i) b_i = 0,

and since the b_i, being a basis, are linearly independent, we must have λ_i − η_i = 0 for i = 1, ..., n, which means that the λ_i's are the same as the η_i's. In general, a finite-dimensional vector space can have an infinite number of bases.

Definition 2.5 The dimension of a vector space X is the number of elements in any of its bases.

For completeness, we now state the following theorem, which is a corollary of previous results.

Theorem 2.1 Every set of n + 1 vectors in an n-dimensional vector space X is linearly dependent. A set of n vectors in X is a basis if and only if it is linearly independent.

The reader is encouraged to complete the details of the proof.

2.3.2 Subspaces

Definition 2.6 A nonempty subset M of a vector space X is a subspace if for any pair of scalars λ and μ, λx + μy ∈ M whenever x and y ∈ M.

According to this definition, a subspace M in a vector space X is itself a vector space. Notice that, along with any vector x, a subspace also contains 0 = 1·x + (−1)·x.

Example 2.5 In any vector space X, X is a subspace of itself.

Example 2.6 In the three-dimensional space R³, any line passing through the origin is a one-dimensional subspace of R³. Similarly, any two-dimensional plane passing through the origin is a subspace. The necessity of passing through the origin comes from the fact that a subspace must contain the zero vector.

Definition 2.7 Given a set of vectors S in a vector space X, the intersection of all subspaces containing S is called the subspace generated or spanned by S, or simply the span of S.

The next theorem gives a useful characterization of the span of a set of vectors.

Theorem 2.2 Let S be a set of vectors in a vector space X. The subspace M spanned by S is the set of all linear combinations of the members of S.

Proof: First we need to show that the set of all linear combinations of elements of S is a subspace of X. This is straightforward, since linear combinations of linear combinations of elements of S are again linear combinations of the elements of S. Denote this subspace by N. It is immediate that N contains every element of S; since M is the intersection of all subspaces containing S, it follows that M ⊂ N. For the converse, notice that M is also a subspace which contains S, and therefore contains all linear combinations of the elements of S. Thus N ⊂ M, and the theorem is proved.

2.3.3 Normed Vector Spaces

As defined so far, vector spaces introduce a very useful algebraic structure by incorporating the operations of vector addition and scalar multiplication. The limitation of the concept of vector space is that the notion of distance associated with metric spaces has been lost. To recover this notion, we now introduce the concept of normed vector space.

Definition 2.8 A normed vector space (or simply a normed space) is a pair (X, || ||) consisting of a vector space X and a norm || || : X → R such that

(i) ||x|| = 0 if and only if x = 0.
(ii) ||λx|| = |λ| ||x||  ∀λ ∈ R, ∀x ∈ X.
(iii) ||x + y|| ≤ ||x|| + ||y||  ∀x, y ∈ X (the triangle inequality).

Notice that, by defining property (iii), letting y = −x ≠ 0 in (iii) and taking account of (i) and (ii), we have

    0 = ||x + (−x)|| ≤ ||x|| + ||−x|| = ||x|| + |−1| ||x|| = 2 ||x||;

thus the norm of a vector x is nonnegative. Also, since

    ||x − y|| = ||x − z + z − y|| ≤ ||x − z|| + ||z − y||  ∀x, y, z ∈ X,

every normed linear space may be regarded as a metric space with distance defined by

    d(x, y) = ||x − y||.     (2.4)

The following example introduces the most commonly used norms in the Euclidean space R^n.

Example 2.7 Consider again the vector space R^n. For each p, 1 ≤ p < ∞, the function || ||_p defined by

    ||x||_p = (|x1|^p + ··· + |xn|^p)^{1/p},     (2.6)

known as the p-norm in R^n, makes this space a normed vector space. In particular,

    ||x||_1 = |x1| + ··· + |xn|
    ||x||_2 = (|x1|² + ··· + |xn|²)^{1/2}.       (2.7)

The 2-norm is the so-called Euclidean norm. Also, the ∞-norm is defined as follows:

    ||x||_∞ = max_i |x_i|.                       (2.8)

By far the most commonly used of the p-norms in R^n is the 2-norm. Many of the theorems encountered throughout the book, as well as some of the properties of functions and sequences (such as continuity and convergence), depend only on the three defining properties of a norm, and not on the specific norm adopted. In these cases, to simplify notation, it is customary to drop the subscript p, to indicate that the norm can be any p-norm. The distinction is somewhat superfluous, in that all p-norms in R^n are equivalent in the sense that, given any two norms || ||_a and || ||_b on R^n, there exist constants k1 and k2 such that (see Exercise 2.6)

    k1 ||x||_a ≤ ||x||_b ≤ k2 ||x||_a,  ∀x ∈ R^n.

Two frequently used inequalities involving p-norms in R^n are the following:

Hölder's inequality: Let p ∈ R, p > 1, and let q ∈ R be such that

    1/p + 1/q = 1.

Then

    |x^T y| ≤ ||x||_p ||y||_q,  ∀x, y ∈ R^n.     (2.9)

Minkowski's inequality: Let p ∈ R, p ≥ 1. Then

    ||x + y||_p ≤ ||x||_p + ||y||_p,  ∀x, y ∈ R^n.     (2.10)
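These definitions and inequalities are easy to check numerically. The following sketch (plain Python; the sample vectors are illustrative choices, not taken from the text) computes p-norms and spot-checks the Cauchy-Schwarz case of Hölder's inequality, Minkowski's inequality, and the equivalence of the 2- and ∞-norms:

```python
import math

def p_norm(x, p):
    # p-norm of a vector x; p = float("inf") gives the max (infinity) norm
    if math.isinf(p):
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

x = [3.0, -4.0, 0.0]
y = [1.0, 2.0, -2.0]

# Hölder with p = q = 2 (Cauchy-Schwarz): |x^T y| <= ||x||_2 ||y||_2
inner = sum(xi * yi for xi, yi in zip(x, y))
assert abs(inner) <= p_norm(x, 2) * p_norm(y, 2)

# Minkowski (triangle inequality for the p-norm), here with p = 3
s = [xi + yi for xi, yi in zip(x, y)]
assert p_norm(s, 3) <= p_norm(x, 3) + p_norm(y, 3)

# Norm equivalence on R^n: ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
n = len(x)
assert p_norm(x, float("inf")) <= p_norm(x, 2) <= math.sqrt(n) * p_norm(x, float("inf"))
```

The equivalence constants k1 = 1 and k2 = sqrt(n) used in the last check are the standard ones relating the ∞- and 2-norms.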

2.4 Matrices

We assume that the reader has some acquaintance with the elementary theory of matrices and matrix operations. We now introduce some notation and terminology as well as some useful properties.

Every matrix A ∈ R^{m×n} can be considered as a linear function A : R^n → R^m; that is, the mapping y = Ax maps the vector x ∈ R^n into the vector y ∈ R^m.

Transpose: If A is an m × n matrix, its transpose, denoted A^T, is the n × m matrix obtained by interchanging the rows and columns of A. The following properties are straightforward to prove:

    (A^T)^T = A
    (A + B)^T = A^T + B^T   (transpose of the sum of two matrices)
    (AB)^T = B^T A^T        (transpose of the product of two matrices).

Symmetric matrix: A is symmetric if A = A^T.

Skew-symmetric matrix: A is skew-symmetric if A = −A^T.

Orthogonal matrix: A matrix Q is orthogonal if Q^T Q = Q Q^T = I, or equivalently, if Q^T = Q^{−1}.

Inverse matrix: A matrix A^{−1} ∈ R^{n×n} is said to be the inverse of the square matrix A ∈ R^{n×n} if A A^{−1} = A^{−1} A = I. It can be verified that (A^{−1})^{−1} = A and that (AB)^{−1} = B^{−1} A^{−1}, provided that A and B are square of the same size and invertible.

Rank of a matrix: The rank of a matrix A, denoted rank(A), is the maximum number of linearly independent columns in A.

Definition 2.9 The null space of a linear function A : X → Y is the set N(A), defined by

    N(A) = {x ∈ X : Ax = 0}.

It is straightforward to show that N(A) is a vector space. The dimension of this vector space, denoted dim N(A), is important. We now state the following theorem without proof.

Theorem 2.3 Let A ∈ R^{m×n}. Then A has the following property:

    rank(A) + dim N(A) = n.

2.4.1 Eigenvalues, Eigenvectors, and Diagonal Forms

Definition 2.10 Consider a matrix A ∈ R^{n×n}. A scalar λ ∈ F is said to be an eigenvalue, and a nonzero vector x an eigenvector of A associated with this eigenvalue, if

    Ax = λx,  or equivalently  (A − λI)x = 0;

thus x is an eigenvector associated with λ if and only if x is in the null space of (A − λI).

Eigenvalues and eigenvectors are fundamental in matrix theory and have numerous applications. We first analyze their use in the most elementary form of diagonalization.

Theorem 2.4 If A ∈ R^{n×n} has n linearly independent eigenvectors v1, v2, ..., vn, with eigenvalues λ1, λ2, ..., λn, then it can be expressed in the form

    A = S D S^{−1},

where D = diag{λ1, λ2, ..., λn} and S = [v1 v2 ··· vn].

Proof: The columns of the matrix S are the eigenvectors of A, which are, by assumption, linearly independent. By definition, we have

    AS = A [v1 ··· vn] = [λ1 v1 ··· λn vn],

and we can rewrite the last matrix in the following form:

    [λ1 v1 ··· λn vn] = [v1 ··· vn] diag{λ1, ..., λn} = SD.

Thus, AS = SD. Because its columns are linearly independent, S is invertible, and we can write A = S D S^{−1}, or also D = S^{−1} A S. This completes the proof of the theorem.

Special Case: Symmetric Matrices

Symmetric matrices have several important properties. Here we mention two of them, without proof:

(i) The eigenvalues of a symmetric matrix A ∈ R^{n×n} are all real.

(ii) Every symmetric matrix A is diagonalizable. Moreover, if A is symmetric, then the diagonalizing matrix S can be chosen to be an orthogonal matrix P. Thus, if A = A^T, then there exists a matrix P satisfying P^T = P^{−1} such that

    P^{−1} A P = P^T A P = D.
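As a concrete instance of Theorem 2.4 in its symmetric special case, the following hand-worked sketch (plain Python; the 2 × 2 matrix is an illustrative choice, not an example from the text) rebuilds A = [[2, 1], [1, 2]] from its eigenvalues 1 and 3 and the orthonormal eigenvectors (1, −1)/√2 and (1, 1)/√2, verifying A = P D P^T:

```python
import math

s = 1.0 / math.sqrt(2.0)
P = [[s, s], [-s, s]]            # columns are the orthonormal eigenvectors
D = [[1.0, 0.0], [0.0, 3.0]]     # diag of the eigenvalues
PT = [[P[j][i] for j in range(2)] for i in range(2)]   # P^T = P^{-1}

def matmul(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = matmul(matmul(P, D), PT)     # reconstruct A = P D P^{-1} = P D P^T
for i in range(2):
    for j in range(2):
        expected = 2.0 if i == j else 1.0
        assert abs(A[i][j] - expected) < 1e-12
```

The check P^T = P^{−1} holds here because the eigenvectors were chosen orthonormal, exactly as property (ii) above guarantees for symmetric matrices.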

2.4.2 Quadratic Forms

Given a matrix A ∈ R^{n×n}, a function q : R^n → R of the form q(x) = x^T A x, x ∈ R^n, is called a quadratic form. The matrix A in this definition can be any real matrix. There is, however, no loss of generality in restricting this matrix to be symmetric. To see this, notice that any matrix A ∈ R^{n×n} can be rewritten as the sum of a symmetric and a skew-symmetric matrix, as shown below:

    A = (1/2)(A + A^T) + (1/2)(A − A^T).

Clearly,

    B = (1/2)(A + A^T) = B^T   and   C = (1/2)(A − A^T) = −C^T;

thus B is symmetric, whereas C is skew-symmetric. For the skew-symmetric part, we have that

    x^T C x = (x^T C x)^T = x^T C^T x = −x^T C x.

Hence, the real number x^T C x must be identically zero. This means that the quadratic form associated with a skew-symmetric matrix is identically zero, so only the symmetric part of A contributes to the quadratic form.

Definition 2.11 Let A ∈ R^{n×n} be a symmetric matrix and let x ∈ R^n. Then A is said to be:

(i) Positive definite if x^T A x > 0 ∀x ≠ 0.
(ii) Positive semidefinite if x^T A x ≥ 0 ∀x ≠ 0.
(iii) Negative definite if x^T A x < 0 ∀x ≠ 0.
(iv) Negative semidefinite if x^T A x ≤ 0 ∀x ≠ 0.
(v) Indefinite if x^T A x can take both positive and negative values.

It is immediate that the positive/negative character of a symmetric matrix is determined completely by its eigenvalues. Indeed, given A = A^T, there exists P such that P^{−1} A P = P^T A P = D. Thus, defining x = P y, we have that

    x^T A x = y^T P^T A P y = y^T P^{−1} A P y = y^T D y = λ1 y1² + λ2 y2² + ··· + λn yn²,

where λ_i, i = 1, ..., n, are the eigenvalues of A. From this construction, we obtain the following characterization.

(i) A is positive definite if and only if all of its (real) eigenvalues are positive.
(ii) A is positive semidefinite if and only if λ_i ≥ 0, ∀i = 1, ..., n.
(iii) A is negative definite if and only if all of its eigenvalues are negative.
(iv) A is negative semidefinite if and only if λ_i ≤ 0, ∀i = 1, ..., n.
(v) A is indefinite if and only if it has both positive and negative eigenvalues.

The following theorem will be useful in later sections.

Theorem 2.5 (Rayleigh Inequality) Consider a nonsingular symmetric matrix Q ∈ R^{n×n}, and let λ_min(Q) and λ_max(Q) be, respectively, the minimum and maximum eigenvalues of Q. Then, for any x ∈ R^n,

    λ_min(Q) ||x||² ≤ x^T Q x ≤ λ_max(Q) ||x||².     (2.11)

Proof: The matrix Q, which is symmetric, is diagonalizable, and it must have a full set of linearly independent eigenvectors u1, u2, ..., un associated with the eigenvalues λ1, λ2, ..., λn. Moreover, we can always assume that the eigenvectors u1, ..., un are orthonormal. Under these conditions, the set of eigenvectors {u1, u2, ..., un} forms a basis in R^n, and every vector x can be written as a linear combination of the elements of this set.

Consider an arbitrary vector x. We can assume that ||x|| = 1 (if this is not the case, divide by ||x||). We can write

    x = x1 u1 + x2 u2 + ··· + xn un

for some scalars x1, x2, ..., xn. Thus,

    x^T Q x = x^T Q [x1 u1 + ··· + xn un]
            = x^T [λ1 x1 u1 + ··· + λn xn un]
            = λ1 |x1|² + ··· + λn |xn|²,

where the last step uses the orthonormality of the u_i's. Since

    Σ_{i=1}^n |x_i|² = ||x||² = 1,     (2.12)

this implies that

    λ_min(Q) ≤ x^T Q x ≤ λ_max(Q).

Finally, it is worth noting that this is the special case of (2.11) in which the norm of x is 1; the general case follows by scaling.
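A quick numerical spot-check of the Rayleigh inequality (2.11), using an illustrative 2 × 2 matrix whose eigenvalues are known in closed form (Q = [[2, 1], [1, 2]] has λ_min = 1 and λ_max = 3; this matrix is an assumption for the sketch, not taken from the text):

```python
Q = [[2.0, 1.0], [1.0, 2.0]]
lam_min, lam_max = 1.0, 3.0

def quad_form(Q, x):
    # x^T Q x for a 2-vector x
    return sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))

for x in [[1.0, 0.0], [0.7, -0.7], [2.0, 5.0], [-3.0, 1.0]]:
    norm_sq = x[0] ** 2 + x[1] ** 2
    q = quad_form(Q, x)
    # Rayleigh inequality (2.11): lam_min ||x||^2 <= x^T Q x <= lam_max ||x||^2
    assert lam_min * norm_sq <= q <= lam_max * norm_sq
```

Note that the bounds are attained exactly on the eigenvectors: x = (1, 1) gives x^T Q x = 3 ||x||², and x = (1, −1) gives x^T Q x = ||x||².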

2.5 Basic Topology

A few elements of basic topology will be needed throughout the book. We emphasize those concepts that we will use most frequently. Let X be a metric space. We say that:

(a) A neighborhood of a point p ∈ X is a set N_r(p) ⊂ X consisting of all points q ∈ X such that d(p, q) < r.

(b) Let A ⊂ X and consider a point p ∈ X. Then p is said to be a limit point of A if every neighborhood of p contains a point q ≠ p such that q ∈ A. It is important to notice that p itself need not be in the set A.

(c) A point p is an interior point of a set A ⊂ X if there exists a neighborhood N of p such that N ⊂ A.

(d) A set A ⊂ X is said to be open if every point of A is an interior point.

(e) The complement of A ⊂ X is the set A^c = {p ∈ X : p ∉ A}.

(f) A set A ⊂ X is said to be closed if it contains all of its limit points. Equivalently, A is closed if and only if A^c is open.

(g) A set A ⊂ X is bounded if there exist a real number b and a point q ∈ A such that d(p, q) < b for all p ∈ A.

2.5.1 Basic Topology in R^n

All the previous concepts can be specialized to the Euclidean space R^n. Now consider a set A ⊂ R^n.

Neighborhood: A neighborhood of a point p ∈ A ⊂ R^n is the set B_r(p) defined as follows:

    B_r(p) = {x ∈ R^n : ||x − p|| < r}.

Neighborhoods of this form will be used very frequently and will sometimes be referred to as an open ball with center p and radius r.

Open set: A set A ⊂ R^n is said to be open if for every p ∈ A one can find a neighborhood B_r(p) ⊂ A.

Bounded set: A set A ⊂ R^n is said to be bounded if there exists a real number M > 0 such that ||x|| ≤ M ∀x ∈ A.

Compact set: A set A ⊂ R^n is said to be compact if it is closed and bounded.

Convex set: A set A ⊂ R^n is said to be convex if, whenever x1, x2 ∈ A, the point θx1 + (1 − θ)x2, 0 ≤ θ ≤ 1, also belongs to A.

2.6 Sequences

A sequence of vectors x0, x1, x2, ... in a metric space (X, d) is denoted {xn}.

Definition 2.12 A sequence {xn} in a metric space (X, d) is said to converge if there is a point x0 ∈ X with the property that for every real number ε > 0 there is an integer N such that n ≥ N implies that d(xn, x0) < ε. We then write x0 = lim xn, or xn → x0, and call x0 the limit of the sequence {xn}.

It is important to notice that in Definition 2.12 convergence must be taking place in the metric space (X, d). In other words, if a sequence is trying to converge to a limit x* such that x* ∉ X, then {xn} is not convergent (see Example 2.10).

Example 2.8 Let X1 = R, with d(x, y) = |x − y|, and consider the sequence {xn} = {1, 1.4, 1.41, 1.414, ...} (each term of the sequence is found by adding the corresponding digit in √2). We have that xn → √2, and since √2 ∈ R, we conclude that {xn} is convergent in (X1, d).

Example 2.9 Let X2 = Q, the set of rational numbers (x ∈ Q ⇒ x = a/b, with a, b ∈ Z, b ≠ 0), again with d(x, y) = |x − y|, and consider the sequence of the previous example. Once again {xn} is trying to converge to √2; in this case, however, √2 ∉ Q, and thus we conclude that {xn} is not convergent in (X2, d).

Definition 2.13 A sequence {xn} in a metric space (X, d) is said to be a Cauchy sequence if for every real ε > 0 there is an integer N such that d(xn, xm) < ε whenever n, m ≥ N.

It is easy to show that every convergent sequence is a Cauchy sequence. The converse is, however, not true; that is, a Cauchy sequence is not necessarily convergent, as shown in the following example.

Example 2.10 Let X = (0, 1) (i.e., X = {x ∈ R : 0 < x < 1}), and let d(x, y) = |x − y|. Consider the sequence {xn} = {1/n}. We have

    d(xn, xm) = |1/n − 1/m| ≤ 1/n + 1/m ≤ 2/N,

where N = min(n, m). It follows that {xn} is a Cauchy sequence, since d(xn, xm) < ε provided that n, m > 2/ε. It is not, however, convergent in X, since lim_{n→∞} 1/n = 0 ∉ X. In other words, the sequence is "trying" to converge to a point that does not belong to the space.

An important class of metric spaces are the so-called complete metric spaces.

Definition 2.14 A metric space (X, d) is called complete if and only if every Cauchy sequence converges (to a point of X). In other words, (X, d) is a complete metric space if for every sequence {xn} satisfying d(xn, xm) < ε for n, m ≥ N there exists x ∈ X such that d(xn, x) → 0 as n → ∞.

The simplest example of a complete metric space is the real-number system with the metric d = |x − y|. We will encounter several other important examples in the sequel. If a space is known to be complete, then to check the convergence of a sequence to some point of the space, it is sufficient to check whether the sequence is Cauchy. If a space is incomplete, then it has "holes": a sequence might be "trying" to converge to a point that does not belong to the space, and thus not converging. In incomplete spaces, one needs to "guess" the limit of a sequence to prove convergence.

2.7 Functions

Definition 2.15 Let A and B be abstract sets. A function from A to B is a set f of ordered pairs in the Cartesian product A × B with the property that if (a, b) and (a, c) are elements of f, then b = c. In other words, a function is a subset of the Cartesian product of A and B where each argument can have one and only one image. The set of elements of A that can occur as first members of elements in f is called the domain of f. The set of elements of B that can occur as second members of elements of f is called the range of f. Alternative names for functions used in this book are map, mapping, operator, and transformation.

A function f is called injective if f(x1) = f(x2) implies that x1 = x2 for every x1, x2 ∈ A. A function f is called surjective if the range of f is the whole of B. It is called bijective if it is both injective and surjective.
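Definition 2.15 models a function literally as a set of ordered pairs with a single image per argument. A Python dict enforces exactly that constraint, which makes the injective/surjective distinction easy to illustrate (the small sets A, B and the map f below are illustrative choices, not from the text):

```python
A = {1, 2, 3}
B = {"a", "b", "c"}
f = {1: "a", 2: "b", 3: "a"}      # a function from A to B: one image per argument

domain = set(f.keys())             # first members that occur in f
rng = set(f.values())              # second members that occur in f (the range)
assert domain == A and rng == {"a", "b"}

injective = len(rng) == len(domain)   # no two arguments share an image
surjective = rng == B                 # range is the whole of B
assert not injective and not surjective   # f maps 1 and 3 to "a", misses "c"
```

Trying to add a second pair with the same first member, e.g. `f[1] = "c"`, simply replaces the old image, so the single-image property of Definition 2.15 can never be violated by a dict.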

Indeed. and if a function is uniformly continuous on a set A. x > 0. where x. d2) be metric spaces. uniform continuity is stronger than continuity. not every continuous function is uniformly continuous on the same set. If f is continuous at every point of X. oo) but not uniformly continuous.18 Let (X. d2) be metric spaces and consider a function f : X -> Y. defined by xEDI (f2 0 f1)(x) = f2[fl(x)] Definition 2. that is. This definition is clearly local. Remarks: The difference between ordinary continuity and uniform continuity is that in the former 6(e. di). We say that f is continuous at xo if for every real e > 0 there exists a real 6 = 8(e. D2 C iR0.7. then it is often useful to define a new function fl with domain D1 as follows: fl = {(a. Consider for example the function f (x) = 2. x0) depends on both a and the particular x0 E X. functions mapping Euclidean spaces. Clearly. The function fl is called a restriction of f to the set D1. that is. y) < b implies that d2 (f (x). such that d1 (x. The converse is in general not true. f : X -> Y is continuous if for every sequence {xn} that converges to x. If fl(Di) C D2. then the composition of f2 and fl is the function f2 o fl. Equivalently. The exception to this occurs when working with compact sets. consider a function f mapping a compact set X into a metric space Y. and consider functions fl and f2 of the form fl : D1 -4 II8n and f2 : D1 -> IR". Then f is uniformly continuous if and only if it is continuous. xo) < 6 implies that d2(f (x). while in the latter b(e) is only a function of E.b) E f : a E Dl}.17 Let (X. f (y)) < e. f (xo)) < e. Definition 2. (Y. FUNCTIONS 47 If f is a function with domain D and D1 is a subset of D.16 Let D1. then it is continuous on A. .2. that is. it corresponds to pointwise convergence.f(y) I < E. then f is said to be continuous on X. then we say that f is continuous at x E 1R" if given e > 0 there exists b > 0 such that iix -yMM < b => 1f(x) . y c X. 
Then f : X -i Y is called uniformly continuous on X if for every e > 0 there exists b = 6(e) > 0. d1) and (Y. xo) such that d(x. In the special case of functions of the form f : IR0 -* IR'. the corresponding sequence If (xn)} converges to y = f (x). is continuous over (0. Definition 2.

2.7.1 Bounded Linear Operators and Matrix Norms

Now consider a function L mapping a vector space X into a vector space Y.

Definition 2.19 A function L : X → Y is said to be a linear operator (or a linear map, or a linear transformation) if and only if, given any x1, x2 ∈ X and any λ, μ ∈ R,

    L(λx1 + μx2) = λL(x1) + μL(x2).     (2.13)

The function L is said to be a bounded linear operator if there exists a constant M such that

    ||L(x)|| ≤ M ||x||  ∀x ∈ X.         (2.14)

The smallest constant M satisfying (2.14) is called the operator norm. A special case of interest is that where the vector spaces X and Y are R^n and R^m, respectively. In this case all linear functions A : R^n → R^m are of the form

    y = Ax,  x ∈ R^n,  y ∈ R^m,

where A is an m × n matrix of real elements. The operator norm applied to this case originates a matrix norm. Indeed, given A ∈ R^{m×n}, we define

    ||A||_p = sup_{x ≠ 0} ||Ax||_p / ||x||_p = max_{||x||_p = 1} ||Ax||_p,     (2.15)

where all the norms on the right-hand side of (2.15) are vector norms. This norm is sometimes called the induced norm, because it is "induced" by the p vector norm. Important special cases are p = 1, 2, and ∞. It is not difficult to show that

    ||A||_1 = max_{||x||_1 = 1} ||Ax||_1 = max_j Σ_{i=1}^m |a_ij|   (maximum column sum)     (2.16)

    ||A||_2 = max_{||x||_2 = 1} ||Ax||_2 = [λ_max(A^T A)]^{1/2}                              (2.17)

    ||A||_∞ = max_{||x||_∞ = 1} ||Ax||_∞ = max_i Σ_{j=1}^n |a_ij|   (maximum row sum)        (2.18)

where λ_max(A^T A) represents the maximum eigenvalue of A^T A.
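Formulas (2.16) and (2.18) are easy to exercise directly. The sketch below (plain Python; the matrix A = [[1, -2], [3, 4]] is an illustrative choice, not from the text) computes the 1- and ∞-induced norms as column/row sums and spot-checks the defining bound ||Ax|| ≤ ||A|| ||x||:

```python
A = [[1.0, -2.0], [3.0, 4.0]]

# (2.16): induced 1-norm = largest absolute column sum -> max(1+3, 2+4) = 6
norm1 = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))
# (2.18): induced inf-norm = largest absolute row sum -> max(1+2, 3+4) = 7
norm_inf = max(sum(abs(A[i][j]) for j in range(2)) for i in range(2))
assert norm1 == 6.0 and norm_inf == 7.0

# The defining bound (2.14)/(2.15), ||Ax||_1 <= ||A||_1 ||x||_1, on a sample x:
x = [2.0, -1.0]
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
assert sum(abs(v) for v in Ax) <= norm1 * sum(abs(v) for v in x)
```

The 2-norm case (2.17) needs the eigenvalues of A^T A and is therefore left out of this plain-Python sketch; a linear algebra library would normally be used for it.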

2.8 Differentiability

Definition 2.20 A function f : R → R is said to be differentiable at x if f is defined in an open interval (a, b) ⊂ R containing x and the limit

    f'(x) = lim_{h→0} [f(x + h) − f(x)] / h

exists. The limit f'(x) is called the derivative of f at x.

Now consider the case of a function f : R^n → R^m. In the following discussion we denote by f_i, 1 ≤ i ≤ m, the components of the function f, and by {e1, e2, ..., en} the standard basis in R^n:

    f(x) = [f1(x), f2(x), ..., fm(x)]^T,   e1 = [1, 0, ..., 0]^T, ..., en = [0, ..., 0, 1]^T.

Definition 2.21 A function f : R^n → R^m is said to be differentiable at a point x if f is defined in an open set D ⊂ R^n containing x and there exists a linear map f'(x) such that

    lim_{h→0} ||f(x + h) − f(x) − f'(x)h|| / ||h|| = 0.

The function f is said to be differentiable if it is differentiable at each x in its domain.

Notice, of course, that in Definition 2.21, h ∈ R^n. If x + h ∈ D, then

    f(x + h) − f(x) = f'(x)h + r(h),

where the "remainder" r(h) is small in the sense that

    lim_{h→0} ||r(h)|| / ||h|| = 0.

This makes sense since D is open, so f(x + h) is defined provided that ||h|| is small enough. The derivative f'(x) defined in Definition 2.21 is called the differential or the total derivative of f at x, to distinguish it from the partial derivatives that we discuss next.

Definition 2.22 Consider a function f : R^n → R^m and let D be an open set in R^n. For x ∈ D ⊂ R^n, 1 ≤ i ≤ m, and 1 ≤ j ≤ n, we define

    ∂f_i/∂x_j = lim_{Δ→0} [f_i(x + Δ e_j) − f_i(x)] / Δ  =  D_j f_i,

provided that the limit exists. The functions D_j f_i (or ∂f_i/∂x_j) are called the partial derivatives of f.

Differentiability of a function, as defined in Definition 2.21, is not implied by the existence of the partial derivatives of the function. Indeed, there exist functions f : R² → R for which both ∂f/∂x1 and ∂f/∂x2 exist at every point in R², yet f is not continuous at (0, 0) and so it is not differentiable there. Even for continuous functions, the existence of all partial derivatives does not imply differentiability in the sense of Definition 2.21. On the other hand, if f is known to be differentiable at a point x, then the partial derivatives exist at x, and they determine f'(x): f'(x) is given by the Jacobian matrix, or Jacobian transformation, [f'(x)]:

    [f'(x)] = [ ∂f1/∂x1  ···  ∂f1/∂xn ]
              [    ⋮              ⋮   ]
              [ ∂fm/∂x1  ···  ∂fm/∂xn ].

If a function f : R^n → R^m is differentiable on an open set D ⊂ R^n, then it is continuous on D. The derivative of the function, f', on the other hand, may or may not be continuous. The following definition introduces the concept of continuously differentiable function.

Definition 2.23 A differentiable mapping f of an open set D ⊂ R^n into R^m is said to be continuously differentiable in D if f' is continuous on D, that is, if for every ε > 0 there exists δ > 0 such that

    ||f'(y) − f'(x)|| < ε  provided that x, y ∈ D and ||x − y|| < δ.

The following theorem, stated without proof, implies that the continuously differentiable property can be evaluated directly by studying the partial derivatives of the function.

Theorem 2.6 Suppose that f maps an open set D ⊂ R^n into R^m. Then f is continuously differentiable in D if and only if the partial derivatives ∂f_i/∂x_j, 1 ≤ i ≤ m, 1 ≤ j ≤ n, exist and are continuous on D.

Definition 2.23 is often restated by saying that a function f : R^n → R^m is continuously differentiable at a point x0 if the partial derivatives ∂f_i/∂x_j, 1 ≤ i ≤ m, 1 ≤ j ≤ n, exist and are continuous at x0. A function f : R^n → R^m is said to be continuously differentiable on a set D ⊂ R^n if it is continuously differentiable at every point of D.

It is easy to show that the set of functions f : R^n → R^m with continuous partial derivatives, together with the operations of addition and scalar multiplication, forms a vector space. This vector space is denoted C¹. In general, if a function f : R^n → R^m has continuous partial derivatives up to order k, the function is said to be in C^k, and we write f ∈ C^k. If a function f has continuous partial derivatives of any order, then it is said to be smooth, and we write f ∈ C^∞. Abusing this terminology and notation slightly, f is said to be sufficiently smooth when f has continuous partial derivatives of any required order.

Summary: Given a function f : R^n → R^m with continuous partial derivatives, abusing the notation slightly, we shall use f'(x) or Df(x) to represent both the Jacobian matrix and the total derivative of f at x. If f : R^n → R, then the Jacobian matrix is the row vector

    [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn].

We will frequently denote this vector by either ∂f/∂x or ∇f(x):

    ∇f(x) = ∂f/∂x = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn].

This vector is called the gradient of f, because it identifies the direction of steepest ascent of f.
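The agreement between the gradient formula and Definition 2.22 can be checked with finite differences. This sketch (plain Python; the particular function f(x) = x1² + 3 x1 x2 is an illustrative choice, not from the text) compares the analytic gradient with central differences in each coordinate:

```python
def f(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1]

def grad_f(x):
    # analytic gradient: [df/dx1, df/dx2] = [2*x1 + 3*x2, 3*x1]
    return [2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]]

x = [1.0, 2.0]
h = 1e-6
for j in range(2):
    xp = list(x); xp[j] += h
    xm = list(x); xm[j] -= h
    fd = (f(xp) - f(xm)) / (2.0 * h)   # central difference along e_j
    assert abs(fd - grad_f(x)[j]) < 1e-6
```

The central difference approximates the limit in Definition 2.22 with an O(h²) error, which is why a loose tolerance of 1e-6 is comfortably met at h = 1e-6.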

2.8.1 Some Useful Theorems

We collect a number of well-known results that are often useful.

Property 2.1 (Chain Rule) Let f1 : D1 ⊂ R^n → R^n and f2 : D2 ⊂ R^n → R^n. If f2 is differentiable at a ∈ D2 and f1 is differentiable at f2(a), then f1 ∘ f2 is differentiable at a, and

    D(f1 ∘ f2)(a) = Df1(f2(a)) Df2(a).

Theorem 2.7 (Mean-Value Theorem) Let f : [a, b] → R be continuous on the closed interval [a, b] and differentiable in the open interval (a, b). Then there exists a point c in (a, b) such that

    f(b) − f(a) = f'(c)(b − a).

A useful extension of this result to functions f : R^n → R^m is given below. In the following theorem, Ω represents an open subset of R^n.

Theorem 2.8 Consider the function f : Ω ⊂ R^n → R^m, and suppose that the open set Ω contains the points a and b and the line segment S joining these points; assume also that f is differentiable at every point of S. Then there exists a point c on S such that

    ||f(b) − f(a)|| = ||f'(c)(b − a)||.

Theorem 2.9 (Inverse Function Theorem) Let f : R^n → R^n be continuously differentiable in an open set D containing the point x0 ∈ R^n, and let f'(x0) be nonsingular. Then there exist an open set U0 containing x0 and an open set W0 containing f(x0) such that f : U0 → W0 has a continuous inverse f^{−1} : W0 → U0 that is differentiable and, for all y = f(x) ∈ W0, satisfies

    Df^{−1}(y) = [Df(x)]^{−1} = [Df(f^{−1}(y))]^{−1}.

2.9 Lipschitz Continuity

We defined continuous functions earlier. We now introduce a stronger form of continuity, known as Lipschitz continuity. As will be seen later in the book, this property plays a major role in the study of the solutions of differential equations.

Definition 2.24 A function f(x) : R^n → R^m is said to be locally Lipschitz on D if every point of D has a neighborhood D0 ⊂ D over which the restriction of f with domain D0 satisfies

    ||f(x1) − f(x2)|| ≤ L ||x1 − x2||.     (2.19)

It is said to be Lipschitz on an open set D ⊂ R^n if it satisfies (2.19) for all x1, x2 ∈ D with the same Lipschitz constant L. Finally, f is said to be globally Lipschitz if it satisfies (2.19) with D = R^n.

Notice that if f : R^n → R^m is Lipschitz on D ⊂ R^n, then, given ε > 0, we can define δ = ε/L, and we have that

    ||x1 − x2|| < δ  ⟹  ||f(x1) − f(x2)|| ≤ L ||x1 − x2|| < Lδ = ε,  ∀x1, x2 ∈ D,

which implies that f is uniformly continuous. However, the converse is not true: not every uniformly continuous function is Lipschitz, and it is in fact very easy to find counterexamples showing that this is the case.

The next theorem gives an important sufficient condition for Lipschitz continuity. Moreover, Theorem 2.10 provides a mechanism to calculate the Lipschitz constant (equation (2.20)).

Theorem 2.10 If a function f : R^n → R^m is continuously differentiable on an open set D ⊂ R^n, then it is locally Lipschitz on D. Moreover, the Lipschitz constant can be estimated from any bound L satisfying

    ||∂f/∂x|| ≤ L     (2.20)

on the neighborhood in question.

Proof: Consider x0 ∈ D and let r > 0 be small enough to ensure that B_r(x0) ⊂ D, where

    B_r(x0) = {x ∈ D : ||x − x0|| ≤ r},

and consider two arbitrary points x1, x2 ∈ B_r. Noticing that the closed ball B_r is a convex set, we conclude that the line segment γ(θ) = θx1 + (1 − θ)x2, 0 ≤ θ ≤ 1, is contained in B_r(x0). Now define the function φ : [0, 1] → R^m as follows:

    φ(θ) = (f ∘ γ)(θ) = f(γ(θ)).

By the mean-value theorem (Theorem 2.8), there exists θ1 ∈ (0, 1) such that

    ||φ(1) − φ(0)|| = ||φ'(θ1)||.

Thus, calculating φ'(θ) using the chain rule, and substituting φ(1) = f(x1) and φ(0) = f(x2), we obtain

    ||f(x1) − f(x2)|| = || (∂f/∂x)(γ(θ1)) (x1 − x2) || ≤ || ∂f/∂x || ||x1 − x2|| ≤ L ||x1 − x2||.
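Theorem 2.10 is easy to illustrate in one dimension. On the ball B_r = [−r, r], the function f(x) = x² (an illustrative choice, not from the text) has |f'(x)| = |2x| ≤ 2r, so by (2.20) the constant L = 2r works on that ball; the sketch below checks the Lipschitz estimate over a grid of sample points:

```python
r = 3.0
L = 2.0 * r        # bound on |f'(x)| = |2x| over [-r, r], per (2.20)

def f(x):
    return x * x

pts = [-3.0, -1.5, -0.2, 0.0, 0.9, 2.4, 3.0]   # sample points in B_r
for x1 in pts:
    for x2 in pts:
        # Lipschitz estimate (2.19) with constant L on the ball
        assert abs(f(x1) - f(x2)) <= L * abs(x1 - x2) + 1e-12
```

Note that the same f(x) = x² is only *locally* Lipschitz on R: as r grows, so does the required L, which is why the theorem yields a local, not global, conclusion.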

Definition 2.25 A function f(x, t) : R^n x R -> R^n is said to be locally Lipschitz in x on an open set D x [t0, T] c R^n x R if every point of D has a neighborhood D1 c D over which the restriction of f with domain D1 x [t0, T] satisfies (2.20). It is said to be locally Lipschitz on D c R^n x [t0, oo) if it is locally Lipschitz in x on every D1 x [t0, T] c D x [t0, oo). It is said to be Lipschitz in x on D x [t0, T] if it satisfies (2.20) for all x1, x2 in D and all t in [t0, T].

Theorem 2.11 Let f : R^n x R -> R^n be continuously differentiable on D x [t0, T], and assume that the derivative of f satisfies

    || (df/dx)(x, t) || <= L    (2.21)

on D x [t0, T]. Then f is Lipschitz continuous on D with constant L:

    ||f(x, t) - f(y, t)|| <= L ||x - y||,  for all x, y in D, for all t in [t0, T].    (2.22)

2.10 Contraction Mapping

In this section we discuss the contraction mapping principle, which we use later to analyze the existence and uniqueness of solutions of a class of nonlinear differential equations.

Definition 2.26 Let (X, d) be a metric space, and let S c X. A mapping f : S -> S is said to be a contraction on S if there exists a number rho < 1 such that

    d(f(x), f(y)) <= rho d(x, y),  for all x, y in S.    (2.23)

It is a straightforward exercise to show that every contraction is continuous (in fact, uniformly continuous) on S.

Theorem 2.12 (Contraction Mapping Principle) Let S be a closed subset of the complete metric space (X, d). Every contraction mapping f : S -> S has one and only one x in S such that f(x) = x.

A point x0 in X satisfying f(x0) = x0 is called a fixed point. Thus, the contraction mapping principle is sometimes called the fixed-point theorem.

Proof: Let x0 be an arbitrary point in S, and denote

    x1 = f(x0),  x2 = f(x1) = f(f(x0)),  ...,  x_{n+1} = f(x_n).

This construction defines a sequence {x_n} = {f(x_{n-1})}. Since f maps S into itself, x_k in S for all k. We have

    d(x_{n+1}, x_n) = d(f(x_n), f(x_{n-1})) <= rho d(x_n, x_{n-1})    (2.24)

and then, since f is a contraction, by induction

    d(x_{n+1}, x_n) <= rho^n d(x1, x0).    (2.25)

Now suppose m > n > N. Then, by successive applications of the triangle inequality, we obtain

    d(x_n, x_m) <= sum_{i=n+1}^{m} d(x_i, x_{i-1})
               <= (rho^n + rho^{n+1} + ... + rho^{m-1}) d(x1, x0)
                = rho^n (1 + rho + ... + rho^{m-n-1}) d(x1, x0).    (2.26)

The series (1 + x + x^2 + ...) converges to 1/(1 - x) for all |x| < 1. Thus, noticing that all the summands in equation (2.26) are positive, we have

    d(x_n, x_m) <= rho^n (1/(1 - rho)) d(x1, x0).    (2.27)

Given eps > 0, we can therefore choose N such that d(x_n, x_m) < eps for all n, m > N. In other words, {x_n} is a Cauchy sequence. Since the metric space (X, d) is complete, {x_n} has a limit, lim_{n->oo} x_n = x, for some x in X. Moreover, we have seen that x_n in S c X, and since S is closed, it follows that x in S. Now, since f is a contraction, f is continuous, and it follows that

    f(x) = f(lim x_n) = lim f(x_n) = lim x_{n+1} = x.

Thus, the existence of a fixed point is proved. To prove uniqueness, suppose x and y are two different fixed points. We have

    f(x) = x,  f(y) = y

and, because f is a contraction,

    d(x, y) = d(f(x), f(y)) <= rho d(x, y)    (2.28)

where rho < 1. Since rho < 1, (2.28) can be satisfied if and only if d(x, y) = 0. Therefore, x = y. This completes the proof.
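The successive-approximation construction used in the proof is easy to run numerically. A minimal sketch, with a hypothetical example: f(x) = cos(x) maps the closed set S = [0, 1] into itself and is a contraction there, since |f'(x)| = |sin x| <= sin(1) < 1, so the iterates x_{n+1} = f(x_n) converge to the unique fixed point x = cos(x).

```python
import math

# f(x) = cos(x) is a contraction on S = [0, 1]: |f'(x)| = |sin x| <= sin(1) < 1.
rho = math.sin(1.0)
assert rho < 1.0

x = 0.0                      # x0: an arbitrary starting point in S
for _ in range(100):         # the sequence x_{n+1} = f(x_n) from the proof
    x = math.cos(x)

# x is (numerically) the unique fixed point satisfying x = cos(x).
print(x)
```

The a-priori estimate (2.27) also tells in advance how many iterations guarantee a given accuracy, since d(x_n, x) <= rho^n d(x1, x0)/(1 - rho).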

2.11 Solution of Differential Equations

When dealing with ordinary linear differential equations, it is usually possible to derive a closed-form expression for the solution of a differential equation. For example, given the state space realization of a linear time-invariant system

    xdot = A x + B u,  x(0) = x0

we can find the following closed-form solution for a given u:

    x(t) = e^{At} x0 + int_0^t e^{A(t - tau)} B u(tau) dtau.

In general, this is not the case for nonlinear differential equations. Indeed, when dealing with nonlinear differential equations, two issues are of importance: (1) existence and (2) uniqueness of the solution. In this section we derive sufficient conditions for the existence and uniqueness of the solution of a differential equation of the form

    xdot = f(t, x),  x(t0) = x0.    (2.29)

Theorem 2.13 (Local Existence and Uniqueness) Consider the nonlinear differential equation

    xdot = f(x, t),  x(t0) = x0    (2.30)

and assume that f(x, t) is piecewise continuous in t and satisfies

    ||f(x1, t) - f(x2, t)|| <= L ||x1 - x2||,  for all x1, x2 in B = {x in R^n : ||x - x0|| <= r}, for all t in [t0, t1].

Then there exists some delta > 0 such that (2.30) has a unique solution in [t0, t0 + delta].

Proof: Notice in the first place that if x(t) is a solution of (2.30), then x(t) satisfies

    x(t) = x0 + int_{t0}^{t} f[x(tau), tau] dtau    (2.31)

which is of the form

    x(t) = (Fx)(t)    (2.32)

where Fx is a continuous function of t. Equation (2.32) implies that a solution of the differential equation (2.30) is a fixed point of the mapping F that maps x into Fx. The existence of such a fixed point, and thus of a unique solution in S, can be verified using the contraction mapping theorem. To complete the proof, we will proceed in three steps. In step (1) we show that F : S -> S. In step (2) we show that F is a contraction from S into S; this, in turn, implies that there exists one and only one fixed point x = Fx in S. The final step, step (3), consists of showing that any possible solution of (2.30) in X must be in S.

To start with, we define the sets X and S c X as follows:

    X = C[t0, t0 + delta];

thus X is the set of all continuous functions defined on the interval [t0, t0 + delta]. Given x in X, we denote

    ||x||_C = max_{t in [t0, t0 + delta]} ||x(t)||    (2.33)

and finally

    S = {x in X : ||x - x0||_C <= r}.

Clearly, S c X. It can also be shown that S is closed and that X with the norm (2.33) is a complete metric space. Because we are interested in solutions of (2.30) in X (not only in S), step (3) is needed to show that any solution of (2.30) in X must lie in S.

Step (1): From (2.31), we obtain

    (Fx)(t) - x0 = int_{t0}^{t} f(x(tau), tau) dtau.

The function f(x0, t) is bounded on [t0, t1] (since it is piecewise continuous). It follows that we can find c such that

    max_{t in [t0, t1]} ||f(x0, t)|| <= c.

Thus

    ||Fx - x0|| <= int_{t0}^{t} [ ||f(x(tau), tau) - f(x0, tau)|| + ||f(x0, tau)|| ] dtau
               <= int_{t0}^{t} [ L ||x(tau) - x0|| + c ] dtau

and since for each x in S, ||x - x0|| <= r, we have

    ||Fx - x0|| <= int_{t0}^{t} [L r + c] dtau <= (t - t0)(L r + c).

It follows that

    max_{t in [t0, t0 + delta]} ||Fx - x0|| <= delta (L r + c)

and then, choosing delta <= r/(L r + c) means that F maps S into S.

Step (2): To show that F is a contraction on S, we consider x1, x2 in S and proceed as follows:

    ||(Fx1)(t) - (Fx2)(t)|| = || int_{t0}^{t} [ f(x1(tau), tau) - f(x2(tau), tau) ] dtau ||
                           <= int_{t0}^{t} || f(x1(tau), tau) - f(x2(tau), tau) || dtau
                           <= int_{t0}^{t} L ||x1(tau) - x2(tau)|| dtau
                           <= L ||x1 - x2||_C int_{t0}^{t} dtau.

It follows that

    ||Fx1 - Fx2||_C <= L delta ||x1 - x2||_C <= rho ||x1 - x2||_C  for delta <= rho/L.

Choosing rho < 1 and delta <= rho/L, we conclude that F is a contraction. This implies that there is a unique solution of the nonlinear equation (2.30) in S.

Step (3): Given that S c X, to complete the proof, we must show that any solution of (2.30) in X must lie in S. Starting at x0 at t0, a solution can leave S if and only if at some t = t1, x(t) crosses the border B for the first time. For this to be the case, we must have

    ||x(t1) - x0|| = r.

However, for all t <= t1 we have that

    ||x(t) - x0|| <= int_{t0}^{t} [ ||f(x(tau), tau) - f(x0, tau)|| + ||f(x0, tau)|| ] dtau
                 <= int_{t0}^{t} [ L ||x(tau) - x0|| + c ] dtau
                 <= int_{t0}^{t} [L r + c] dtau.

It follows that

    r = ||x(t1) - x0|| <= (t1 - t0)(L r + c).

Denoting t1 = t0 + mu, we have that a crossing can occur only if mu >= r/(L r + c); thus, if mu is such that

    mu < r/(L r + c)

then the solution x(t) is confined to B. This completes the proof.

It is important to notice that Theorem 2.13 provides a sufficient but not necessary condition for the existence and uniqueness of the solution of the differential equation (2.30). Also, according to the theorem, the solution is guaranteed to exist only locally, i.e., in the interval [t0, t0 + delta]. The local Lipschitz condition of Theorem 2.13, on the other hand, is not very restrictive and is satisfied by any function f(x, t) satisfying somewhat mild smoothness conditions. For completeness, below we state (but do not prove) a slightly modified version of this theorem that provides a condition for the global existence and uniqueness of the solution of the same differential equation. The price paid for this generalization is a stronger and much more conservative condition imposed on the differential equation. Indeed, notice that the conditions of the theorem imply the existence of a global Lipschitz constant, which is very restrictive; Theorem 2.14 is usually too conservative to be of any practical use.

Theorem 2.14 (Global Existence and Uniqueness) Consider again the nonlinear differential equation (2.30), and assume that f(x, t) is piecewise continuous in t and satisfies

    ||f(x1, t) - f(x2, t)|| <= L ||x1 - x2||,  for all x1, x2 in R^n
    ||f(x0, t)|| <= c,  for all t in [t0, t1].

Then (2.30) has a unique solution in [t0, t1].

2.12 Exercises

(2.1) Consider the set of 2 x 2 real matrices. With addition and scalar multiplication defined in the usual way, is this set a vector space over the field of real numbers? If so, find a basis.

(2.2) Let x, y, and z be linearly independent vectors in the vector space X. Is it correct to infer that x + y, y + z, and z + x are also linearly independent?

(2.3) Under what conditions on the scalars alpha and beta in C^2 are the vectors [1, alpha]^T and [1, beta]^T linearly dependent?

(2.e. 'This norm is called the Frobeneus norm.8) Consider a matrix A E R' ' (i) Show that the functions IIAll1. and (2.4) Prove Theorem 2. (iii) Show that the function I - n IIAII = trace(ATA) -1: 1a. and 11 1141 1x111 . namely.e.5) For each of the norms 11x111.6) Show that the vector norms II IIxl12 111. it satisfies properties (i)-(iii) in Definition 2. and IIxII. in not an operato norm since it is not induced by any vector norm.8. and IlXlloo in lR . xn its v1 11x112 (2. An be its eigenvalues and X1. II - 112. (2.11x112. .. (ii) Show that IIIIIP = 1 for any induced norm lip. (2.16). sketch the "open unit ball" centered at 0 = [0. Show that (i) X and the empty set 0 are open. (ii) The intersection of a finite number of open sets is open. however..12 is a matrix norm' (i.Iloo satisfy the following: < <_ :5 Ilxll.0]T. IIA112. explain why. (2. are norms.18). the sets IIxiii < 1.1. Show that this norm. (i) Assuming that A is nonsingular. respectively. what can be said about its eigenvalues? (ii) Under these assumptions.17). If the answer is positive.7) Consider a matrix A E R1 ' and let )q. . d) be a metric space.. (not necessarily linearly independent) eigenvecti irs. < 1. i. N ATHEMATICAL PRELIMINARIES (2. and IIAIIm defined in equations (2.60 CHAPTER 2. lIx112 < 1. (iii) The union of any collection of open sets is open. (2. is it possible to ex press the eigenvalues and eigenvectors of A-' in terms of those of A? If the am wer is negative.8). satisfy properties (i)-(iii) in Definition 2. find the eigenvalues and eigenvectors of A-'.5 IIx1Io IIxII2 < n 11x112 < V"n IIxI12 .9) Let (X.

(2.10) Show that a set A in a metric space (X, d) is closed if and only if its complement is open.

(2.11) Let (X, d) be a metric space. Show that
(i) X and the empty set 0 are closed.
(ii) The intersection of any number of closed sets is closed.
(iii) The union of a finite collection of closed sets is closed.

(2.12) Let (X, d1) and (Y, d2) be metric spaces and consider a function f : X -> Y. Show that f is continuous if and only if the inverse image of every open set in Y is open in X.

(2.13) Determine the values of the following limits, whenever they exist, and determine whether each function is continuous at (0, 0):
(i) lim_{x->0, y->0} (x^2 - y^2)/(1 + x^2 + y^2)
(ii) lim_{x->0, y->0} x/(x + y^2)
(iii) lim_{x->0, y->0} (1 + y^2)(sin x)/x

(2.14) Consider the function

    f(x, y) = x^2 y/(x^4 + y^2)  for (x, y) != (0, 0)
    f(x, y) = 0                  for x = y = 0.

Show that f(x, y) is not continuous (and thus not differentiable) at the origin. Proceed as follows:
(i) Show that if x -> 0 and y -> 0 along any straight line through the origin, then lim f(x, y) = 0.
(ii) Show that if x -> 0 and y -> 0 along the parabola y = x^2, then lim f(x, y) = 1/2, thus the result.

(2.15) Given the function f(x, y) of exercise (2.14), show that the partial derivatives df/dx and df/dy both exist at (0, 0). This shows that existence of the partial derivatives does not imply continuity of the function.

(2.16) Determine whether the function

    f(x, y) = x^2 y^2/(x^2 + y^2)  for (x, y) != (0, 0)
    f(x, y) = 0                    for x = y = 0

is continuous at (0, 0). (Suggestion: Notice that x^2 y^2/(x^2 + y^2) <= x^2.)

(2.17) Given the following functions, find the partial derivatives df/dx and df/dy:
(i) f(x, y) = e^{xy} cos x sin y
(ii) f(x, y) = x^2 + y^2

(2.18) Use the chain rule to obtain the indicated partial derivatives dz/dr and dz/ds:
(i) z = x^3 + y^3, x = 2r + s, y = 3r - s
(ii) z = sqrt(x^2 + y^2), x = r cos s, y = r sin s

(2.19) Show that the function f = 1/x is not uniformly continuous on E = (0, 1).

(2.20) Show that f = 1/x does not satisfy a Lipschitz condition on E = (0, 1).

(2.21) Given the following functions f : R -> R, determine in each case whether f is (a) continuous at x = 0, (b) continuously differentiable at x = 0, and (c) locally Lipschitz at x = 0:
(i) f(x) = e^{x^2}
(ii) f(x) = cos x
(iii) f(x) = sat(x)
(iv) f(x) = sin(1/x)

(2.22) For each of the following functions f : R^2 -> R^2, determine whether f is (a) continuous at x = 0, (b) continuously differentiable at x = 0, (c) locally Lipschitz at x = 0, and (d) Lipschitz on some D c R^2:

(i)  x1dot = x2 - x1(x1^2 + x2^2)
     x2dot = -x1 - x2(x1^2 + x2^2)

(ii) x1dot = x2 + x1(beta^2 - x1^2 - x2^2)
     x2dot = -x1 + x2(beta^2 - x1^2 - x2^2)

(iii) x1dot = x2
      x2dot = -(k/m) x1

2.13 Notes and References

There are many good references for the material in this chapter. For general background in mathematical analysis, we refer to Bartle [7], Maddox [51], and Rudin [62]. For a complete, remarkably well-written account of vector spaces, see Halmos [33]. We have followed References [32], [88], and [41]; Section 2.10 is based on Hirsch [32]. The material on existence and uniqueness of the solution of differential equations, including Theorem 2.10, can be found in most textbooks on ordinary differential equations; see also References [59] and [55].


Chapter 3

Lyapunov Stability I: Autonomous Systems

In this chapter we look at the important notion of stability in the sense of Lyapunov. Indeed, there are many definitions of stability of systems. In all cases the idea is: given a set of dynamical equations that represent a physical system, try to determine whether such a system is well behaved in some conceivable sense. Exactly what constitutes a meaningful notion of good behavior is certainly a very debatable topic. The problem lies in how to convert the intuitive notion of good behavior into a precise mathematical definition that can be applied to a given dynamical system. In this chapter, we explore the notion of stability in the sense of Lyapunov, which applies to equilibrium points. Throughout this chapter we restrict our attention to autonomous systems; the more general case of nonautonomous systems is treated in the next chapter. Other notions of stability will be explored in Chapters 6 and 7.

3.1 Definitions

Consider the autonomous system(1)

    xdot = f(x),  f : D -> R^n    (3.1)

where D is an open and connected subset of R^n and f is a locally Lipschitz map from D into R^n. In the sequel we will assume that x = xe is an equilibrium point of (3.1). In other words, xe is such that f(xe) = 0.

(1) Notice that (3.1) represents an unforced system.

Figure 3.1: Stable equilibrium point.

Recall that what we are trying to capture is the concept of good behavior in a dynamical system. We now introduce the following definition.

Definition 3.1 The equilibrium point x = xe of the system (3.1) is said to be stable if for each eps > 0 there exists delta = delta(eps) > 0 such that

    ||x(0) - xe|| < delta  =>  ||x(t) - xe|| < eps  for all t >= t0;

otherwise, the equilibrium point is said to be unstable.

This definition captures the following concept: we want the solution of (3.1) to be near the equilibrium point xe for all t >= t0, and we say that we want the solutions of (3.1) to remain inside the open region delimited by ||x(t) - xe|| < eps. If this objective is accomplished by starting from an initial state x(0) that is close to the equilibrium xe, that is, ||x(0) - xe|| < delta, then the equilibrium point is said to be stable (see Figure 3.1).

This definition represents the weakest form of stability introduced in this chapter. The main limitation of this concept is that solutions are not required to converge to the equilibrium xe. Very often, staying close to xe is simply not enough.

Definition 3.2 The equilibrium point x = xe of the system (3.1) is said to be convergent if there exists delta_1 > 0 such that

    ||x(0) - xe|| < delta_1  =>  lim_{t->oo} x(t) = xe.

Figure 3.2: Asymptotically stable equilibrium point.

Equivalently, xe is convergent if for any given eps_1 > 0, there exist delta_1 and T such that

    ||x(0) - xe|| < delta_1  =>  ||x(t) - xe|| < eps_1  for all t >= t0 + T.

A convergent equilibrium point xe is one where every solution starting sufficiently close to xe will eventually approach xe as t -> oo. It is important to realize that stability and convergence, as defined in Definitions 3.1 and 3.2, are two different concepts, and neither one of them implies the other. Indeed, it is not difficult to construct examples where an equilibrium point is convergent, yet does not satisfy the conditions of Definition 3.1 and is therefore not stable in the sense of Lyapunov.

Definition 3.3 The equilibrium point x = xe of the system (3.1) is said to be asymptotically stable if it is both stable and convergent.

Asymptotic stability (Figure 3.2) is the desirable property in most applications. The principal weakness of this concept is that it says nothing about how fast the trajectories approach the equilibrium point. There is a stronger form of asymptotic stability, referred to as exponential stability, which makes this idea precise.

Definition 3.4 The equilibrium point x = xe of the system (3.1) is said to be (locally) exponentially stable if there exist two real constants alpha, lambda > 0 such that

    ||x(t) - xe|| <= alpha ||x(0) - xe|| e^{-lambda t}  for all t > 0    (3.2)

whenever ||x(0) - xe|| < delta. It is said to be globally exponentially stable if (3.2) holds for any x in R^n.
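The bound (3.2) can be checked along a simulated trajectory. The sketch below uses an illustrative linear system chosen here (not from the text); the constants lambda = 1 and alpha = 10 are assumptions, verified numerically along one trajectory.

```python
import numpy as np

# Illustrative check of Definition 3.4: xdot = A x with A = [[0, 1], [-2, -3]]
# has eigenvalues -1 and -2, so we expect ||x(t)|| <= alpha ||x(0)|| e^{-lambda t}
# with lambda = 1 and some alpha > 0 (alpha = 10 is an assumed overshoot constant).

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
lam, alpha = 1.0, 10.0

x0 = np.array([1.0, 1.0])
x = x0.copy()
dt = 1e-3
ok = True
for n in range(10000):                      # forward-Euler integration over 10 s
    x = x + dt * (A @ x)
    t = (n + 1) * dt
    ok = ok and np.linalg.norm(x) <= alpha * np.linalg.norm(x0) * np.exp(-lam * t)
print(ok)
```

A single trajectory cannot prove exponential stability, of course; for linear systems the bound follows from the eigenvalues, and for nonlinear systems from the Lyapunov results developed below.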

Clearly, exponential stability is the strongest form of stability seen so far. It is also immediate that exponential stability implies asymptotic stability. The converse is, however, not true.

The several notions of stability introduced so far refer to stability of equilibrium points. In general, the same dynamical system can have more than one isolated equilibrium point. Very often, in the definitions and especially in the proofs of the stability theorems, it is assumed that the equilibrium point under study is the origin, xe = 0. There is no loss of generality in doing so. Indeed, if this is not the case, we can perform a change of variables and define a new system with an equilibrium point at x = 0. To see this, consider the equilibrium point xe of the system (3.1) and define

    y = x - xe.

Then

    ydot = xdot = f(x) = f(y + xe) =: g(y).

Since g(0) = f(0 + xe) = f(xe) = 0, the equilibrium point ye of the new system ydot = g(y) is ye = 0. Thus, studying the stability of the equilibrium point xe for the system xdot = f(x) is equivalent to studying the stability of the origin for the system ydot = g(y). Given this property, in the sequel we will state the several stability theorems assuming that xe = 0.

Example 3.1 Consider the mass-spring system shown in Figure 3.3. We have

    m yddot + beta ydot + k y = m g.

Defining states x1 = y, x2 = ydot, we obtain the following state space realization:

    x1dot = x2
    x2dot = -(k/m) x1 - (beta/m) x2 + g

which has a unique equilibrium point xe = (mg/k, 0). Now define the transformation z = x - xe. According to this,

    z1 = x1 - mg/k
    z2 = x2.
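The claim that the shifted system has its equilibrium at the origin is easy to verify numerically. In the sketch below the parameter values (m, k, beta, g) are made up for illustration.

```python
# Numerical check of the change of variables z = x - xe for the mass-spring
# system; the parameter values below are made up for illustration.
m, k, beta, g = 1.0, 4.0, 0.5, 9.8

def f(x1, x2):
    # original dynamics: x1' = x2, x2' = -(k/m) x1 - (beta/m) x2 + g
    return (x2, -(k / m) * x1 - (beta / m) * x2 + g)

xe = (m * g / k, 0.0)                 # the unique equilibrium: f(xe) = 0

def g_shifted(z1, z2):
    # the same dynamics written in the shifted variable z = x - xe
    return f(z1 + xe[0], z2 + xe[1])

# The equilibrium of the shifted system sits at the origin, as claimed.
assert f(*xe) == (0.0, 0.0)
assert g_shifted(0.0, 0.0) == (0.0, 0.0)
```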

Figure 3.3: Mass-spring system.

In the new variables,

    z1dot = z2
    z2dot = -(k/m)(z1 + mg/k) - (beta/m) z2 + g

or

    z1dot = z2
    z2dot = -(k/m) z1 - (beta/m) z2.

Thus, zdot = g(z), and g(z) has a single equilibrium point at the origin.

3.2 Positive Definite Functions

Now that the concept of stability has been defined, the next step is to study how to analyze the stability properties of an equilibrium point. This is the center of the Lyapunov stability theory. The core of this theory is the analysis and construction of a class of functions to be defined, and of their derivatives along the trajectories of the system under study. We start by introducing the notion of positive definite functions. In the following definition, D represents an open and connected subset of R^n.

Definition 3.5 A function V : D -> R is said to be positive semidefinite in D if it satisfies the following conditions:

(i) 0 in D and V(0) = 0.

(ii) V(x) >= 0, for all x in D.

V : D -> R is said to be positive definite in D if condition (ii) is replaced by (ii'):

(ii') V(x) > 0 in D - {0}.

Finally, V : D -> R is said to be negative definite (semidefinite) in D if -V is positive definite (semidefinite). We will often abuse the notation slightly and write V > 0, V >= 0, and V < 0 in D to indicate that V is positive definite, positive semidefinite, and negative definite in D, respectively.

Positive definite functions (PDFs) constitute the basic building block of the Lyapunov theory. As we will see, PDFs can be seen as an abstraction of the total "energy" stored in a system. All of the Lyapunov stability theorems focus on the study of the time derivative of a positive definite function along the trajectories of (3.1).

Example 3.2 The simplest and perhaps most important class of positive definite functions is defined as follows:

    V(x) : R^n -> R = x^T Q x,  Q in R^{n x n},  Q = Q^T.

In this case, V(.) defines a quadratic form. Since, by assumption, Q is symmetric (i.e., Q = Q^T), we know that its eigenvalues lambda_i, i = 1, ..., n, are all real. Thus we have that

    V(.) positive definite      <=>  x^T Q x > 0,  for all x != 0  <=>  lambda_i > 0,  for all i = 1, ..., n
    V(.) positive semidefinite  <=>  x^T Q x >= 0, for all x != 0  <=>  lambda_i >= 0, for all i = 1, ..., n
    V(.) negative definite      <=>  x^T Q x < 0,  for all x != 0  <=>  lambda_i < 0,  for all i = 1, ..., n
    V(.) negative semidefinite  <=>  x^T Q x <= 0, for all x != 0  <=>  lambda_i <= 0, for all i = 1, ..., n.

For example:

    V1(x) : R^2 -> R = a x1^2 + b x2^2 = [x1, x2] [[a, 0], [0, b]] [x1, x2]^T > 0,  for all a, b > 0

so V1 is positive definite, whereas

    V2(x) : R^2 -> R = a x1^2 = [x1, x2] [[a, 0], [0, 0]] [x1, x2]^T,  a > 0,

is not positive definite, since for any x2 != 0, any x of the form x* = [0, x2]^T != 0 gives V2(x*) = 0; V2 is only positive semidefinite.
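The eigenvalue characterization above is how definiteness of a quadratic form is usually checked in practice. A small sketch (the matrices here are illustrative choices, except that Q2 is V2 with a = 1):

```python
import numpy as np

# Eigenvalue test for quadratic forms V(x) = x^T Q x with Q = Q^T.
Q1 = np.array([[2.0, -1.0],
               [-1.0, 2.0]])        # eigenvalues 1 and 3  -> positive definite
Q2 = np.array([[1.0, 0.0],
               [0.0, 0.0]])         # eigenvalues 1 and 0  -> only semidefinite
                                    # (this is V2 with a = 1)

def positive_definite(Q):
    # eigvalsh handles symmetric matrices and returns real eigenvalues
    return bool(np.all(np.linalg.eigvalsh(Q) > 0.0))

assert positive_definite(Q1)
assert not positive_definite(Q2)

# V2 vanishes on the nonzero vector x* = [0, 1]^T, exactly as in the text:
x_star = np.array([0.0, 1.0])
print(x_star @ Q2 @ x_star)
```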

Given an autonomous system of the form (3.1), we will first construct a positive definite function V(x) and study Vdot(x), given by

    Vdot(x) = dV/dt = (dV/dx)(dx/dt) = [dV/dx1, dV/dx2, ..., dV/dxn] [f1(x), f2(x), ..., fn(x)]^T = grad(V) . f(x).

The following definition introduces a useful and very common way of representing this derivative.

Definition 3.6 Let V : D -> R and f : D -> R^n. The Lie derivative of V along f, denoted by LfV, is defined by

    LfV(x) = (dV/dx) f(x).

Thus, according to this definition, we have that

    Vdot(x) = (dV/dx) f(x) = grad(V) . f(x) = LfV(x).

Example 3.3 Let

    x1dot = -a x1
    x2dot = -b x2 + cos x1

and define V = x1^2 + x2^2. Then we have

    Vdot(x) = LfV(x) = [2x1, 2x2] [-a x1, -b x2 + cos x1]^T = -2a x1^2 - 2b x2^2 + 2 x2 cos x1.

It is clear from this example that Vdot(x) depends on the system's equation f(x), and thus it will be different for different systems.

3.3 Stability Theorems

Theorem 3.1 (Lyapunov Stability Theorem) Let x = 0 be an equilibrium point of xdot = f(x), f : D -> R^n, and let V : D -> R be a continuously differentiable function such that

(i) V(0) = 0,
(ii) V(x) > 0 in D - {0},
(iii) Vdot(x) <= 0 in D - {0};

then x = 0 is stable.

In other words, the theorem implies that a sufficient condition for the stability of the equilibrium point x = 0 is that there exists a continuously differentiable, positive definite function V(x) such that Vdot(x) is negative semidefinite in a neighborhood of x = 0.

Theorem 3.2 (Asymptotic Stability Theorem) Under the conditions of Theorem 3.1, if

(i) V(0) = 0,
(ii) V(x) > 0 in D - {0},
(iii) Vdot(x) < 0 in D - {0};

then x = 0 is asymptotically stable.

As mentioned earlier, positive definite functions can be seen as generalized energy functions. The condition V(x) = c for constant c defines what is called a Lyapunov surface. A Lyapunov surface defines a region of the state space that contains all Lyapunov surfaces of lesser value; that is, given a Lyapunov function and defining

    Omega_1 = {x in Br : V(x) <= c1}
    Omega_2 = {x in Br : V(x) <= c2}

where Br = {x in R^n : ||x|| < r} and c1 > c2 are chosen such that Omega_i c Br, i = 1, 2, then we have that Omega_2 c Omega_1. The condition Vdot <= 0 implies that when a trajectory crosses a Lyapunov surface V(x) = c, it can never come out again. Thus a trajectory satisfying this condition is actually confined to the closed region Omega = {x : V(x) <= c}. This implies that the equilibrium point is stable, and makes Theorem 3.1 intuitively very simple.

Now suppose that Vdot(x) is assumed to be negative definite. In this case, a trajectory can only move from a Lyapunov surface V(x) = c into an inner Lyapunov surface with smaller c. This clearly represents a stronger stability condition. In other words, the theorem says that asymptotic stability is achieved if the conditions of Theorem 3.1 are strengthened by requiring Vdot(x) to be negative definite, rather than semidefinite.

The discussion above is important since it elaborates on the ideas and motivation behind all the Lyapunov stability theorems. We now provide a proof of Theorems 3.1 and 3.2. These proofs will clarify certain technicalities used later on to distinguish between local and global stability, and also in the discussion of the region of attraction.

Proof of Theorem 3.1: Choose r > 0 such that the closed ball

    Br = {x in R^n : ||x|| <= r}

is contained in D. Let

    alpha = min_{||x|| = r} V(x)

(thus alpha > 0, by the fact that V(x) > 0 in D). Now choose beta in (0, alpha) and denote

    Omega_beta = {x in Br : V(x) <= beta}.

Thus, by construction, Omega_beta c Br. By assumption (iii) of the theorem we have that

    Vdot(x) <= 0  =>  V(x(t)) <= V(x(0)) <= beta  for all t >= 0.

It then follows that any trajectory starting in Omega_beta at t = 0 stays inside Omega_beta for all t >= 0. Moreover, by the continuity of V(x), it follows that there exists delta > 0 such that

    ||x|| < delta  =>  V(x) < beta  (so that B_delta c Omega_beta c Br).

It then follows that

    ||x(0)|| < delta  =>  x(t) in Omega_beta c Br  for all t >= 0

and then

    ||x(0)|| < delta  =>  ||x(t)|| < r <= eps  for all t >= 0

which means that the equilibrium x = 0 is stable.

Proof of Theorem 3.2: Under the assumptions of the theorem, V(x) actually decreases along the trajectories of f(x). Using the same argument used in the proof of Theorem 3.1, for every real number a > 0 we can find b > 0 such that Omega_b c B_a, and whenever the initial condition is inside Omega_b, the solution will remain inside Omega_b. Therefore, to prove asymptotic stability, all we need to show is that Omega_b shrinks to a single point as t -> oo, since by assumption Vdot(x) < 0 in D; in other words, V(x) tends steadily to zero along the solutions of f(x). This completes the proof.

Remarks: The first step when studying the stability properties of an equilibrium point consists of choosing a positive definite function V(x). Finding a positive definite function is fairly easy; this part is straightforward. However, what is rather tricky is to select a V whose derivative along the trajectories near the equilibrium point is either negative definite or semidefinite. The reason, of course, is that V is independent of the dynamics of the differential equation under study, while Vdot depends on this dynamics in an essential manner. For this reason, when a function V is proposed as a possible candidate to prove any form of stability, such a V is said to be a Lyapunov function candidate. If in addition Vdot happens to be negative definite (or semidefinite), then V is said to be a Lyapunov function for that particular equilibrium point.

Example 3.4 (Pendulum Without Friction)

Figure 3.4: Pendulum without friction.

Using Newton's second law of motion we have

    m a = -m g sin(theta),  a = l alpha = l thetaddot

where l is the length of the pendulum and alpha is the angular acceleration. Thus

    m l thetaddot + m g sin(theta) = 0

or

    thetaddot + (g/l) sin(theta) = 0.

Choosing state variables

    x1 = theta
    x2 = thetadot

we have

    x1dot = x2
    x2dot = -(g/l) sin x1

which is of the desired form xdot = f(x). The origin is an equilibrium point (since f(0) = 0). To study the stability of the equilibrium at the origin, we need to propose a Lyapunov function candidate V(x) and show that it satisfies the properties of one of the stability theorems seen so far. In general, choosing this function is rather difficult; however, in this case we proceed inspired by our understanding of the physical system. Namely, we compute the total energy of the pendulum (which is a positive function), and use this quantity as our Lyapunov function candidate. We have

    E = K + P = (1/2) m (omega l)^2 + m g h  (kinetic plus potential energy)

where

    omega = thetadot = x2
    h = l(1 - cos(theta)) = l(1 - cos x1).

Thus

    E = (1/2) m l^2 x2^2 + m g l (1 - cos x1).

We now define V(x) = E and investigate whether V and its derivative satisfy the conditions of Theorems 3.1 and/or 3.2. Clearly, V(0) = 0; thus, property (i) is satisfied in both theorems. With respect to (ii), we see that because of the periodicity of cos(x1), we have that V(x) = 0 whenever x = (x1, x2)^T = (2k pi, 0)^T, k = 1, 2, ...; thus, V is not positive definite. This situation, however, can be easily remedied by restricting the domain of x1 to the interval (-2 pi, 2 pi); namely, we take V : D -> R, with D = (-2 pi, 2 pi) x R.

With this restriction, V : D -> R is indeed positive definite:

    V(x) = (1/2) m l^2 x2^2 + m g l (1 - cos x1) > 0  in D - {0}.

There remains to evaluate the derivative of V along the trajectories of f(x). We have

    Vdot(x) = grad(V) . f(x) = [dV/dx1, dV/dx2] [f1(x), f2(x)]^T
            = [m g l sin x1, m l^2 x2] [x2, -(g/l) sin x1]^T
            = m g l x2 sin x1 - m g l x2 sin x1 = 0.

Thus Vdot(x) = 0, and the origin is stable by Theorem 3.1.

The result of Example 3.4 is consistent with our physical observations. Indeed, a simple pendulum without friction is a conservative system. This means that the sum of the kinetic and potential energy remains constant. The pendulum will continue to balance without changing the amplitude of the oscillations and thus constitutes a stable system.

In our next example we add friction to the dynamics of the pendulum. The added friction leads to a loss of energy that results in a decrease in the amplitude of the oscillations. In the limit, all the initial energy supplied to the pendulum will be dissipated by the friction force and the pendulum will remain at rest. Thus, this version of the pendulum constitutes an asymptotically stable equilibrium at the origin.

Example 3.5 (Pendulum with Friction) We now modify the previous example by adding the friction force k l thetadot:

    m a = -m g sin(theta) - k l thetadot.

Defining the same state variables as in Example 3.4, we have

    x1dot = x2
    x2dot = -(g/l) sin x1 - (k/m) x2.

Again x = 0 is an equilibrium point. The energy is the same as in Example 3.4:

    V(x) = (1/2) m l^2 x2^2 + m g l (1 - cos x1) > 0  in D - {0}.

Thus

    Vdot(x) = grad(V) . f(x) = [dV/dx1, dV/dx2] [f1(x), f2(x)]^T
            = [m g l sin x1, m l^2 x2] [x2, -(g/l) sin x1 - (k/m) x2]^T
            = -k l^2 x2^2.

Thus V'(x) is negative semi-definite. It is not negative definite, since V'(x) = 0 for x2 = 0 regardless of the value of x1 (thus V'(x) = 0 along the x1 axis); that is, V'(x) is not negative definite in a neighborhood of x = 0. Accordingly, we conclude that the origin is stable by Theorem 3.1, but we cannot conclude asymptotic stability, since we were not able to establish the conditions of Theorem 3.2. The result is indeed disappointing, since we know that a pendulum with friction has an asymptotically stable equilibrium point at the origin. This example emphasizes the fact that all of the theorems seen so far provide sufficient, but by no means necessary, conditions for stability.

3.5 Asymptotic Stability in the Large

A quick look at the definitions of stability seen so far reveals that all of these concepts are local in character. Consider, for example, the definition of stability: the equilibrium xe is said to be stable if, given eps > 0, there exists delta > 0 such that

||x(0) - xe|| < delta  ==>  ||x(t) - xe|| < eps

or, in words, starting "near" xe, the solution will remain "near" xe. When the equilibrium is asymptotically stable, the solution not only stays within eps but also converges to xe in the limit. More important is the case of asymptotic stability, where it is often important to know under what conditions an initial state will converge to the equilibrium point. In the best possible case, any initial state converges to the equilibrium point. An equilibrium point that has this property is said to be globally asymptotically stable, or asymptotically stable in the large.

Example 3.6 Consider the following system:

x1' = x1(x1^2 + x2^2 - beta^2) + x2
x2' = -x1 + x2(x1^2 + x2^2 - beta^2).

To study the equilibrium point at the origin, we define V(x) = (1/2)(x1^2 + x2^2). We have

V'(x) = grad V . f(x) = [x1, x2] [x1(x1^2 + x2^2 - beta^2) + x2, -x1 + x2(x1^2 + x2^2 - beta^2)]^T
      = x1^2(x1^2 + x2^2 - beta^2) + x2^2(x1^2 + x2^2 - beta^2)
      = (x1^2 + x2^2)(x1^2 + x2^2 - beta^2).

Thus V(x) > 0 and V'(x) < 0, provided that (x1^2 + x2^2) < beta^2, and it follows that the origin is an asymptotically stable equilibrium point.
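The sign pattern of V' in Example 3.6 is easy to verify numerically. The sketch below (illustrative; beta = 1 is an arbitrary choice) evaluates V'(x) = (x1^2 + x2^2)(x1^2 + x2^2 - beta^2) and also confirms the algebra by recomputing it as grad V . f(x).

```python
# Numerical check of the sign of Vdot for Example 3.6, taking beta = 1 (illustrative).
beta = 1.0

def vdot(x1, x2):
    r2 = x1**2 + x2**2
    # Vdot(x) = (x1^2 + x2^2)(x1^2 + x2^2 - beta^2)
    return r2 * (r2 - beta**2)

# Inside the circle of radius beta (x != 0): Vdot < 0.
assert vdot(0.3, 0.4) < 0            # r = 0.5 < beta
# On the circle: Vdot = 0.
assert vdot(1.0, 0.0) == 0.0         # r = beta
# Outside the circle: Vdot > 0, so the Lyapunov argument gives no conclusion there.
assert vdot(3.0, 4.0) > 0            # r = 5 > beta

# Cross-check: Vdot must equal grad V . f(x) = x1*f1 + x2*f2 at any point.
for (a, b) in [(0.3, -0.7), (1.2, 0.5)]:
    f1 = a * (a**2 + b**2 - beta**2) + b
    f2 = -a + b * (a**2 + b**2 - beta**2)
    assert abs(a * f1 + b * f2 - vdot(a, b)) < 1e-12
```

This makes the local character of the conclusion concrete: the negativity of V' is guaranteed only inside the circle of radius beta.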

According to this analysis, at this point it is tempting to infer that if the conditions of Theorem 3.2 hold in the whole space R^n, then any initial state converges to the equilibrium point; that is, the asymptotic stability of the equilibrium is global. While this condition is clearly necessary, it is however not sufficient. The reason is that the proof of Theorem 3.1 (and so also that of Theorem 3.2) relies on the fact that the positive definiteness of V(x), coupled with the negative definiteness of V'(x), ensures that V(x(t)) <= V(x0). In Theorem 3.1 we started by choosing a ball Br = {x in R^n : ||x|| <= r} and then showed that the set Omega_beta = {x in Br : V(x) <= beta} satisfies Omega_beta c Br. Both sets Omega_beta and Br define a closed and bounded region; they are closed sets, and so compact, since they are also bounded. If now Br is allowed to be the entire space R^n, the situation changes, since the condition V(x) <= beta does not, in general, define a bounded region: even if V'(x) <= 0 whenever V(x) <= beta, the surface V(x) = beta may be open. This in turn implies that Omega_beta is not a compact region, and so it is possible for state trajectories to drift away from the equilibrium point while moving toward lower energy curves. The following example shows precisely this.

Example 3.7 Consider the following positive definite function:

V(x) = x1^2 / (1 + x1^2) + x2^2.

The region V(x) <= beta is closed for values of beta < 1. However, if beta > 1, the surface is open. Figure 3.5 shows that an initial state can diverge from the equilibrium state at the origin while moving toward lower energy curves.

The solution to this problem is to include an extra condition that ensures that V(x) = beta is a closed curve for every beta. This can be achieved by considering only functions V(.) that grow unbounded as ||x|| -> oo. These functions are called radially unbounded.

Definition 3.7 The equilibrium state xe is said to be asymptotically stable in the large, or globally asymptotically stable, if it is stable and every motion converges to the equilibrium as t -> oo.

Definition 3.8 Let V : D -> R be a continuously differentiable function. Then V(x) is said to be radially unbounded if

V(x) -> oo  as  ||x|| -> oo.

Theorem 3.3 (Global Asymptotic Stability) Suppose there exists a continuously differentiable function V : R^n -> R such that

(i) V(0) = 0,
(ii) V(x) > 0, for all x != 0,
(iii) V'(x) < 0, for all x != 0,
(iv) V(x) is radially unbounded.

Then x = 0 is globally asymptotically stable.

Proof: The proof is similar to that of Theorem 3.2. We only need to show that, given an arbitrary beta > 0, the condition Omega_beta = {x in R^n : V(x) <= beta} defines a set that is contained in the ball Br = {x in R^n : ||x|| <= r}, for some r > 0. To see this, notice that the radial unboundedness of V implies that for any beta > 0 there exists r > 0 such that V(x) > beta whenever ||x|| > r. Thus Omega_beta c Br, which implies that Omega_beta is bounded.

Figure 3.5: The curves V(x) = beta for Example 3.7.

Example 3.8 Consider the following system:

x1' = x2 - x1(x1^2 + x2^2)
B. a) -> R+ is said to be in the class K if (i) a(0) = 0. We now introduce a new class of functions. known as class 1C. Moreover. a is said to be in the class KQO if in addition a : ]EF+ -+ IIt+ and a(r) -+ oo as r -+ oo. and show that positive definite functions can be characterized in terms of this class of functions. V(x) > 0 and V(x) < 0 for all x E R2. Proof: See the Appendix.-21 -22(21 -f-22)]T -2(x1 + 22)2 Thus. Moreover. .2201 2 22). (ii) It is strictly increasing. + 2 To study the equilibrium point at the origin. since it follows that the origin is globally asymptotically stable. Definition 3. Lemma 3. LYAPUNOV STABILITY I. This new characterization is useful in many occasions.. We have (2) = 7f(x) 2 2[xlix2][x2 -21(21 2 +22).1 V : D --> R is positive definite if and only if there exists class K functions a1 and a2 such that al(II2II) 5 V(x) < a2(IIxII) Vx E Br C D.80 CHAPTER 3.6 Positive Definite Functions Revisited We have seen that positive definite functions play an important role in the Lyapunov theory. AUTONOMOUS SYSTEMS x2 = -21 . In the sequel. is radially unbounded. we define V (x) = x2 + x2. represents the ball Br= {xER":IxII<r}.9 A continuous function a : [0. if D = 1R' and is radially unbounded then a1 and a2 can be chosen in the class K. 3.

xell <_ 0(I1x(0) .min(P)Ilxl12 a2(x) = \max(P)Ilxll2.10 A continuous function 3 : (0.3. a) x pg+ -+ R+ is said to be in the class KL if (i) For fixed s. This function is positive definite if and only if the eigenvalues of the symmetric matrix P are strictly positive. A stronger class of functions is needed in the definition of asymptotic stability. s) is decreasing with respect to s. s) is in the class IC with respect to r. It then follows that Am.6. (ii) For fixed r.n(P)Ilxll2 V(x) < Amax(P)Ilxll2.xell < S = 1x(t) .1) is stable if and only if there exists a class IC function a(. 33(r.ax(P) the minimum and maximum eigenvalues of P. 3(r.2 The equilibrium xe of the system (3.xell <_ a(I1*0) . Vt > 0. al. Thus. Definition 3. oc) --> R+. we now show that it is possible to re state the stability definition in terms of class 1C of functions.m. t) Proof: See the Appendix. For completeness.xell) Vt > 0.1) is asymptotically stable if and only if there exists a class ICL function and a constant e such that 1x(0) . (3. Lemma 3.9 Let V (x) = xT Px.m(P)Ilxll2 < xTPx < < Amax(P)Ilxll2 .3 The equilibrium xe of the system (3. (3.4) Proof: See the Appendix. POSITIVE DEFINITE FUNCTIONS REVISITED 81 Example 3.xell. a2 : [0.5) .) and a constant a such that 1x(0) . and are defined by al(x) = .xell < a = lx(t) . respectively. (iii) 0(r. where P is a symmetric matrix. Lemma 3.s)---*0 ass -oc. Denote )m%n(P) and ).

K2.2 are satisfied. Our next theorem gives a sufficient condition for exponential stability.7 Construction of Lyapunov Functions The main shortcoming of the Lyapunov theory is the difficulty associated with the construction of suitable Lyapunov functions. ." This method is applicable to autonomous systems and often but not always leads to a desired Lyapunov function for a given system. The advantage of this notion is that it makes precise the rate at which trajectories converge to the equilibrium point.1 with al and a2(). by assumption Klllxllp < V(x) V(x) < -K3IIxllP <_ K2llxjIP < -KV(x) V(x) < -V(x) [V xo e-cK3/K2)t]1/p . Theorem 3. Then the origin is exponentially stable. the x = 0 is globally exponentially stable. 1 V(x) < V(xo)e-(K3/K2)t V x ]1/p < = Ilxll < or lix(t)ll <_ llxoll [K2]1/P e-(K3/2K2)t.4 Suppose that all the conditions of Theorem 3. exponential stability is the strongest form of stability seen so far. LYAPUNOV STABILITY I. known as the "variable gradient. the function V (x) satisfies Lemma 3. K3 and p such that V(x) < K21IxljP Klllxllp < V(x) -K3IIxIlP. In this section we study one approach to this problem. AUTONOMOUS SYSTEMS 3.6.4. Moreover. satisfying somewhat strong conditions. 3.82 CHAPTER 3.1 Exponential Stability As mentioned earlier. and in addition assume that there exist positive constants K1. Proof: According to the assumptions of Theorem 3. if the conditions hold globally. Indeed.

In other words. . we have that V(Xb) . CONSTRUCTION OF LYAPUNOV FUNCTIONS 83 The Variable Gradient: The essence of this method is to assume that the gradient of the (unknown) Lyapunov function V(. could be 9(X)=[91. (= V (x) = VV (x) ..5 A function g(x) is the gradient of a scalar function V(x) if and only if the matrix ax.7. This property is often used to obtain V by integrating VV(x) along the coordinate axis: xl V (X) = f 9(x) dx = J0 0X gl (s1. x2.6) VV(x) = g(x) it follows that g(x)dx = VV(x)dx = dV(x) thus.3. . 0) dS2 + . and itself by integrating the assumed gradient. for a dynamical system with 2 states x1 and x2. An example of such a function. S2.. 0) dsl x. fzZ + J0 92(x1. is symmetric. . Given that (3.. The power of this method relies on the following fact.V(xa) depends on the initial and final states xa and xb and not on the particular path followed when going from xa to xb. the difference V(Xb) . f (x) = g(x) . + 1 9n(x1. Theorem 3... .V (xa) = Xb Xb f VV(x) dx = J2 g(x) dx that is. 0... 0 Sn) dSn. 0.921 = [hixl + hix2. we start out then finding by assuming that V V (x) = 9(x)..7) The free parameters in the function g(x) are constrained to satisfy certain symmetry conditions. (3. hzxl + h2x2].) is known up to some adjustable parameters. satisfied by all gradients of a scalar function. f (x)) and propose a possible function g(x) that contains some adjustable parameters. The following theorem details these conditions.

In our case we have 1991 ax2 xl ahl + h2 + x2 ax2 ax2 1 ahl 2 19x1 = h2 + x1 ax. We now put these ideas to work using the following example. or. 02V axtax.i1 = -axl 2 = bx2 + xlx2. ax. Clearly. the origin is an equilibrium point. we attempt to solve the problem assuming that the J. we have 9(x) = [91. we proceed to find a Lyapunov function as follows. To simplify the solution.84 CHAPTER 3. then ahi = ahi = ah2 = ah2 ax2 ax2 1992 19x1 =0 axl and we have that: 1991 ax2 = 19x1 2 hl =1 = h2 k g(x) = [hlxl + kx2i kxl + h2x2].10 Consider the following system: . 2 = aV ax. Step 1: Assume that VV(x) = g(x) has the form g(x) = [hlxl + hix2.8) Step 2: Impose the symmetry conditions. If this is the case. 2 (3. equivalently as. 's are constant. ax. AUTONOMOUS SYSTEMS Proof: See the Appendix. In particular. 92] = [hlxl. choosing k = 0.ax.. Example 3. h2x2] . + x2 axl .ag. To study the stability of this equilibrium point. hl2 x1 + h2x2]. . . LYAPUNOV STABILITY I.

Step 4: Find V from VV by integration. Step 5: Verify that V > 0 and V < 0. In this case V(x) = -axe + (b+x1x2)x2 assume now that a > 0. Integrating along the axes.5). Assume then that hi = h2 = 1. However. 3. THE INVARIANCE PRINCIPLE 85 Step 3: Find V: V(x) = VV f(x) = 9(x) f(x) = [hix1.9).3.8. h2x2]f (x) = -ahixi +h2 (b + xlx2)x2. V (x) > 0 if and only if hi. under these conditions. it is often the case that a Lyapunov function candidate fails to identify an asymptotically stable equilibrium point by having V (x) negative semi definite. h2 > 0. s2) ds2 0 X2 i 1 x1 hiss dsl + 10 1 h2s2 ds2 2h1x1 2 1 2X2 + 2h2x2. An example of this is that of the pendulum with friction (Example 3. This shortcoming was due to the fact that when studying the . In this case V(x) = -axi . 0) dsl + JO 92(x1. we have that V(x) = I hix2 + 1 h2x2 V(x) -ahixi+h2(b+xlx2)x2 From (3.8 The Invariance Principle Asymptotic stability is always more desirable that stability.(b . we have that x1 fx2 91(sl.x1x2)x2 and we conclude that. the origin is (locally) asymptotically stable. and b < 0.

. An extension of Lyapunov's theorem due to LaSalle studies this problem in great detail. initialized at t = 0. M is the set of points such that if a solution of i = f (x) belongs to M at some instant. Those variables. Remarks: In the dynamical system literature.86 CHAPTER 3.12 For autonomous systems.15 The whole space R" is an invariant set. Definition 3. since if at t = 0 we have x(0) = x. AUTONOMOUS SYSTEMS properties of the function V we assumed that the variables xl and x2 are independent. Example 3. Example 3. LYAPUNOV STABILITY I. Example 3. In other words. then x(t) = xe Vt > 0. and a set satisfying the definition above is called positively invariant.12). then the set Sgt defined by SZi={xE1R' :V(x)<l} is an invariant set. one often views a differential equation as being defined for all t rather than just all the nonnegative t. 11 Example 3. Notice that the condition V < 0 implies that if a trajectory crosses a Lyapunov surface V(x) = c it can never come out again. any trajectory is an invariant set. The central idea is a generalization of the concept of equilibrium point called invariant set. are related by the pendulum equations and so they are not independent of one another.11 A set M is said to be an invariant set with respect to the dynamical system i = f(x) if' x(0)EM = x(t)EM VtER+. The following are some examples of invariant sets of the dynamical system i = f (x). Example 3. however.14 If V(x) is continuously differentiable (not necessarily positive definite) and satisfies V(x) < 0 along the solutions of i = f (x).13 A limit cycle is an invariant set (this is a special case of Example 3. then it belongs to M for all future time.11 Any equilibrium point is an invariant set.

4 If the solution x(t. LaSalle's invariance principle removes this problem and it actually allows us to prove that x = 0 is indeed asymptotically stable.1). Lemma 3. failed to recognize that x = 0 is actually asymptotically stable..4. the solution approaches N as t .3.5 The positive limit set N of a solution x(t. then its (positive) limit set N is (i) bounded.1) is bounded for t > to. THE INVARIANCE PRINCIPLE 87 Definition 3. our analysis.8. Roughly speaking. We start with the simplest and most useful result in LaSalle's theory. Following energy considerations we constructed a Lyapunov function that turned out to be useful to prove that x = 0 is a stable equilibrium point. Invariant sets play a fundamental role in an extension of Lyapunov's work produced by LaSalle. The following lemma can be seen as a corollary of Lemma 3. the limit set N of x(t) is whatever x(t) tends to in the limit. The difference . and (iii) nonempty. The set N is called the limit set (or positive limit set) of x(t) if for any p E N there exist a sequence of times {tn} E [0. The problem is the following: recall the example of the pendulum with friction. equivalently as t -*oc urn IJx(tn) -p1l = 0.oc. x0i to) of the system (3. Proof: See the Appendix. However.)->p or. (ii) closed. something that we know thanks to our understanding of this rather simple system. oc) such that x(t. Example 3. Proof: See the Appendix. Lemma 3. Moreover. Theorem 3. x0i to) of the autonomous system (3.1) is invariant with respect to (3. based on this Lyapunov function.16 An asymptotically stable equilibrium point is the limit set of any solution starting sufficiently near the equilibrium point.12 Let x(t) be a trajectory of the dynamical system ± = f (x).6 can be considered as a corollary of LaSalle's theorem as will be shown later.17 A stable limit cycle is the positive limit set of any solution starting sufficiently near it. Example 3.

LYAPUNOV STABILITY I: AUTONOMOUS SYSTEMS between theorems 3. Conditions (i) and (ii) of Theorem 3. By (3. for any a E IR+. (x) _ -kl2x2 .88 CHAPTER 3.5: xl X2 x2 . and -a < X2 < a.11)-(3. (iii) V(x) does not vanish identically along any trajectory in R. where we assume that 0 E D. The key of this step is the analysis of the condition V = 0 using the system equations (3. Theorem 3. -7r) x IR.l slnx1 . that is.12).6 is allowed to be only positive semi-definite. (ii) 1 (x) is negative semi definite in a bounded region R C D.6 are satisfied in the region R X2 l with -7r < xl < 7r. Example 3. assume that V(x) is identically zero over a nonzero time interval.6 The equilibrium point x = 0 of the autonomous system (3.18 Consider again the pendulum with friction of Example 3. Indeed. (3.13) which is negative semi definite since V (x) = 0 for all x = [x1i 0]T Thus. .13) we have V (x) = 0 l2x 0 = -k2 b X2 = 0 . with V short of being negative definite. the Lyapunov theory fails to predict the asymptotic stability of the origin expected from the physical understanding of the problem. g k m Again V (X) >0 Vx E (-7r.1) is asymptotically stable if there exists a function V(x) satisfying (i) V(x) positive definite Vx E D. We now look to see whether application of Theorem 3.-x2.6 and theorem 3. other than the null solution. We now check condition (iii) of the same theorem.6 leads to a better result.2 is that in Theorem 3. other than the null solution x=0. something that will remove part of the conservativism associated with certain Lyapunov functions. we check whether V can vanish identically along the trajectories trapped in R.

and the origin is (locally) asymptotically stable by Theorem 3.7 The null solution x = 0 of the autonomous system (3.ax1 . and V(.. . It follows that V (x) does not vanish identically along any solution other than x = 0.3. x0i t0) of (3. N is an invariant set with respect to (3.(x1 + x2)2x2.12). by assumption.1) that starts in B6 is bounded and tends to its limit set N that is contained in B. THE INVARIANCE PRINCIPLE 89 thus x2 = 0 Vt = x2 = 0 and by (3. V(x) = L Vx in the limit set N.1) is asymptotically stable in the large if the assumptions of theorem 3.axl .4).3 and is omitted. Notice also that V (x) is continuous and thus. x2] [x2.. Proof: The proof follows the same argument used in the proof of Theorem 3. Example 3.6 hold in the entire state space (i. we know that for each e > 0 there exist b > 0 1x011 < b => lx(t)II < E that is.5. Tr) the last condition can be satisfied if and only if xl = 0. we obtain 0 = 9 sinxl k . and thus is bounded from below in B.e.f (x) 2[ax1.(x1 + x2)2x2]T -2x2[1 + (x1 +X2)2].19 Consider the following system: xl = x2 x2 = -x2 . We have (x) = 8.6.. R = R). But along that solution. Theorem 3. It is also non increasing by assumption and thus tends to a non negative limit L as t -* oo. Hence any solution x(t.1). -x2 . any solution starting inside the closed ball Bs will remain within the closed ball B. (by Lemma 3. Also V(x) is continuous on the compact set B. which means that any solution that starts in N will remain there for all future time.6: By the Lyapunov stability theorem (Theorem 3. Proof of Theorem 3.8. Thus. Also by lemma 3. To study the equilibrium point at the origin we define V (x) = axe + x2.-n x2 and thus x2 = 0 = sin x1 = 0 restricting xl to the interval xl E (-7r. N is the origin of the state space and we conclude that any solution starting in R C B6 converges to x = 0 as t -4 oo.1).) is radially unbounded. V (x) = 0 since V (x) is constant (= L) in N.

Proceeding as in the previous example.90 CHAPTER 3. (ii) V < 0 in M. Let w be the limit set of this trajectory.) is radially unbounded. Also. Since x(t) is bounded. V(x) > 0 and V(x) < 0 since V(x) = 0 for x = (xii0). by Lemma 3. it is bounded from below in the compact set M. Proof: Consider a solution x(t) of (3. and moreover V (x) = 0 on w (since V(x) is constant on w). Then every solution starting in M approaches N as t -+ oo.a X1 . E is the set of all points of M such that V = 0. In the first place. Remarks: LaSalle's theorem goes beyond the Lyapunov stability theorems in two important aspects. It follows that V (x) does not vanish identically along any solution other than x = [0.1).) is a continuous function.4 implies that x(t) approaches w (its positive limit set) as t -+ oo. we assume that V = 0 and conclude that V=0 x2=0. that is. invariant with respect to the solutions of (3. It follows that V(x(t)) has a limit as t -+ oo. V(x) is a decreasing function of t. V(-) is required to be continuously differentiable (and so .(x1 + x2)2x2 = 0 and considering the fact that x2 = 0. Since V(x) < 0 E M. It follows that w C M since M is (an invariant) closed set.oo Hence. For any p E w3 a sequence to with to -+ oo and x(tn) -+ p. AUTONOMOUS SYSTEMS Thus. 0]T . (iii) E : {x : x E M. (iv) N: is the largest invariant set in E.5 w is an invariant set. the last equation implies that xl = 0. Theorem 3. V (x) = a on w. we conclude that the origin is globally asymptotically stable. n.8 (LaSalle's theorem) Let V : D -* R be a continuously differentiable function and assume that (i) M C D is a compact set. Hence x(t) approaches N as t -> oo. since V(. since V(. we have that V(p) = lim V(x(tn)) = a (a constant). LYAPUNOV STABILITY I. By continuity of V(x). and V = 0}.1) starting in M. Lemma 3. Moreover. Also. It follows that wCNCEcM. x2=0 Vt 22=0 X2=0 = -x2 .

LaSalle's result applies not only to equilibrium points as in all the Lyapunov theorems. and is globally asymptotically stable.20.xi . Perhaps more important. THE INVARIANCE PRINCIPLE 91 bounded).2 If D = Rn in Corollary 3.12. It follows that any trajectory initiating on the circle stays on the circle for all future time. Corollary 3.20 [68] Consider the system defined by it = x2 + xl(Q2 . and thus the set of points of the form {x E R2 : xl + x2 = 32} constitute an invariant set.8. then the origin Proof: See Exercise 3. 0) is an equilibrium point. Let S = {x E D : V (x) = 0} and suppose that no solution can stay identically in S other than the trivial one. Also. we notice that some useful corollaries can be found is assumed to be positive definite.3. is radially unbounded. at the end of this section. but it is not required to be positive definite. Then the origin is asymptotically stable.Q2] = (2x1.x2) for all points on the set.x ztx2=R2 2 r x1 = x2 l x2 = -xl. along the solution of i = f (x): T [x2 + x2 . .x1 . Example 3.1. and assume that V (x) < 0 E D. Corollary 3. 2x2)f (x) 2(xi + x2)(b2 . The trajectories on this invariant set are described by the solutions of i = f(x)I. To see this.x2) It is immediate that the origin x = (0. Example 3. the set of points defined by the circle x1 + x2 = (32 constitute an invariant set. if Before looking at some examples.x2) i2 = -xi + x2(02 . 1 x.11). Proof: Straightforward (see Exercise 3. but also to more general dynamic behaviors such as limit cycles.1 Let V : D -4 R be a continuously differentiable positive definite function in a domain D containing the origin x = 0.xi . we compute the time derivative of points on the circle. emphasizes this point.

0)U{xER2:xi+x2=j32} that is. and by LaSalle's theorem. Also 1av' av axl axl f(x) -(x2 +x 2)(xl + x2 . 0) is an equilibrium point). To this end..e. since (0. since E is the union of the origin (an invariant set. Step #2: Find E = {x E M : V(x) = 0}. Step #3: Find N. the circle is actually a limit cycle along which the trajectories move in the clockwise direction. . By construction M is closed and bounded (i. V(x) = 0 if and only if one of the following conditions is satisfied: (a)xi+x2=0 (b) xl+x2-a2=0.92 CHAPTER 3. This is trivial.a2)2 < 0. Moreover. we conclude that N = E. is positive semi definite in R2. the largest invariant set in E. and thus any trajectory staring from an arbitrary point xo E M will remain inside M and M is therefore an invariant set. compact). E is the union of the origin and the limit cycle. Step #1: Given any real number c > f3. We now apply LaSalle's theorem. every motion staring in M converges to either the origin or the limit cycle. V = 0 either at the origin or on the circle of radius p. Thus. Also V(x) < OVx E M. and the invariant set x1 +x2 = Q2. LYAPUNOV STABILITY I: AUTONOMOUS SYSTEMS Thus. In other words. consider the following function: V(x) = 1(x2 +x2 -o2)2 Clearly. we have E=(0. Clearly. define the set M as follows: M={xEIR2:V(x)<c}. We now investigate the stability of this limit cycle using LaSalle's theorem.

10d . The same argument also shows that the origin is unstable since any motion starting arbitrarily near (0. and xe = (v'. asymptotic stability if often the most desirable form of stability and the focus of the stability analysis.bxl + cx1x2 + dx2 with a. In order to do so. 0) converges to the limit cycle. we consider the following example. We are interested in the stability of the equilibrium point at the origin. we propose the following Lyapunov function candidate: V(x) = axe .9. c. or attractive. thus diverging from the origin.5cxi we now choose { 2d . REGION OF ATTRACTION 93 We can refine our argument by noticing that the function V was designed to measure the distance from a point to the limit cycle: V(x) = 2(x1 +x2 . choosing c : 0 < c < 1/204 we have that M = {x E JR2 : V(x) < c} includes the limit cycle but not the origin.14) .2 to estimate the region of asymptotic stability.12b 6a-10d-2c = 0 = 0 (3. 22 = -5x1 + x1 . d constants to be determined.2c)xlx2 + cxi . b. 5 Thus. xe = (-f .2x2. 0). 3. Example 3. To study this equilibrium point. Throughout this section we restrict our attention to this form of stability and discuss the possible application of Theorem 3.4d)x2 + (2d .12b)x3 x2 + (6a .9 Region of Attraction As discussed at the beginning of this chapter. 0). 0). Differentiating we have that V = (3c .21 Consider the system defined by r 1 = 3x2 1. Thus application of LaSalle's theorem with any c satisfying c : e < c < 1/234 with e arbitrarily small shows that any motion starting in M converges to the limit cycle. and the limit cycle is said to be convergent.32)2 whenever x2 + x2 = /j2 r V (x) = 0 l V (x) = 2/34 when x = (0. This system has three equilibrium points: xe = (0. 0).3.

x2 = 4 is quickly divergent from the origin even though the point (0. According to this theorem. so good. We now study how to estimate this region. AUTONOMOUS SYSTEMS which can be satisfied choosing a = 12. we plot the trajectories of the system as shown in Figure 3. 4) E D.15) V(x) = -6x2 . .6} (3.2. Definition 3. The point neglected in our analysis is as follows: Even though trajectories starting in D satisfy the conditions V(x) > 0 and V < 0.2. Strictly speaking.94 CHAPTER 3. thus suggesting that any trajectory initiating within D will converge to the origin. we obtain V (x) = 12x2 .16) So far. and d = 6. Theorem 3. this theorem says that the origin is locally asymptotically stable. c = 6. but the region of the plane for which trajectories converge to the origin cannot be determined from this theorem alone.2 simply guarantees existence of a possibly small neighborhood of the equilibrium point where such an attraction takes place. that our conclusions are incorrect. LYAPUNOV STABILITY I. We begin with the following definition. t) be the trajectories of the systems (3. b = 1. Using these values.17) we have that V(x) > 0 and V < 0.{0}. we again check (3. The problem is that in our example we tried to infer too much from Theorem 3. once a trajectory crosses the border xiI = f there are no guarantees that V(x) will be negative. is defined by RA={xED:1/i(x. then the equilibrium point at the origin is "locally" asymptotically stable. It is therefore "tempting" to conclude that the origin is locally asymptotically stable and that any trajectory starting in D will move from a Lyapunov surface V(xo) = cl to an inner Lyapunov surface V(xl) = c2 with cl > C2. however. We now apply Theorem 3. Vx E D .t)-->xei as t --goo}.x1 (3. In general. The region of attraction to the equilibrium point x. (3.30x2 + 6xi.6<xl <1.1) with initial condition x at t = 0. the trajectory initiating at the point x1 = 0. For example.6. 
D is not an invariant set and there are no guarantees that these trajectories will stay within D. To check these conclusions. thus moving to Lyapunov surfaces of lesser values.13 Let 1li(x. A quick inspection of this figure shows.16) and conclude that defining D by D = {xEIIF2:-1. Thus. if V(x) > 0 and V < 0 in D . The question here is: What is the meaning of the word "local"? To investigate this issue. this region can be a very small neighborhood of the equilibrium point.{0}.15) and (3. In summary: Estimating the so-called "region of attraction" of an asymptotically stable equilibrium point is a difficult problem.xi + 6x1x2 + 6x2 = 3(x1 + 2x2)2 + 9x2 + 3x2 . denoted RA.

Figure 3.6: System trajectories in Example 3.21.

In general, the exact determination of the region of attraction RA can be a very difficult task. In this section we discuss one way to "estimate" this region, which is based entirely on LaSalle's invariance principle (Theorem 3.8). The following theorem outlines the details.

Theorem 3.9 Let xe be an equilibrium point for the system (3.1). Let V : D -> R be a continuously differentiable function and assume that

(i) M c D is a compact set containing xe, invariant with respect to the solutions of (3.1);
(ii) V' is such that V' < 0 for all x != xe, x in M, and V' = 0 if x = xe.

Under these conditions we have that M c RA.

Proof: Under the assumptions, we have that E = {x : x in M, and V' = 0} = {xe}. It then follows that N = the largest invariant set in E is also {xe}, and the result follows from LaSalle's Theorem 3.8.

Theorem 3.9 states that if M is an invariant set and V is such that V' < 0 inside M, then M itself provides an "estimate" of RA.
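The sublevel-set estimate can be carried out numerically for the Lyapunov function of Example 3.21. The sketch below (a numerical check, with grid resolutions chosen for illustration) finds the minimum of V on the lines x1 = +/-1.6 and verifies that V' < 0 on the portion of the corresponding sublevel set with |x1| <= 1.6, i.e., on the component containing the origin.

```python
# Numerical sketch of the estimate of R_A for Example 3.21, using
# V = 12 x1^2 - x1^4 + 6 x1 x2 + 6 x2^2 and Vdot = -6 x2^2 - 30 x1^2 + 6 x1^4.
def V(x1, x2):
    return 12 * x1**2 - x1**4 + 6 * x1 * x2 + 6 * x2**2

def Vdot(x1, x2):
    return -6 * x2**2 - 30 * x1**2 + 6 * x1**4

# Minimum of V on the lines x1 = +/-1.6 (grid search over x2):
grid = [i / 1000.0 for i in range(-3000, 3001)]
c = min(V(1.6, x2) for x2 in grid)
assert abs(c - 20.3264) < 1e-3                            # attained at x2 = -0.8
assert abs(min(V(-1.6, x2) for x2 in grid) - c) < 1e-9    # symmetric case, x2 = 0.8

# On the component of {V <= c} with |x1| <= 1.6, Vdot < 0 away from the origin:
for i in range(-160, 161):
    for j in range(-300, 301):
        x1, x2 = i / 100.0, j / 100.0
        if (x1, x2) != (0.0, 0.0) and V(x1, x2) <= c - 1e-6:
            assert Vdot(x1, x2) < 0
```

This reproduces the numbers used in Example 3.22 below: the boundary minimum is attained at (1.6, -0.8) and (-1.6, 0.8), so any sublevel set strictly below that value is an invariant estimate of the region of attraction.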

Example 3.22 Consider again the system of Example 3.21:

x1' = 3 x2
x2' = -5 x1 + x1^3 - 2 x2

with

V(x) = 12 x1^2 - x1^4 + 6 x1 x2 + 6 x2^2
V'(x) = -6 x2^2 - 30 x1^2 + 6 x1^4.

We know that V > 0 and V' < 0 for all {x in R^2 : -1.6 < x1 < 1.6}. To estimate the region of attraction RA, we now find the minimum of V(x) at the very edge of this condition (i.e., on x1 = +/-1.6). We have

V|x1=1.6 = 24.16 + 9.6 x2 + 6 x2^2  ==>  d/dx2 (V|x1=1.6) = 9.6 + 12 x2 = 0  ==>  x2 = -0.8.

Similarly,

V|x1=-1.6 = 24.16 - 9.6 x2 + 6 x2^2  ==>  d/dx2 (V|x1=-1.6) = -9.6 + 12 x2 = 0  ==>  x2 = 0.8.

It is immediate that V(1.6, -0.8) = V(-1.6, 0.8) = 24.16 - 3.84 ~ 20.33. From here we can conclude that, given any eps > 0, the region defined by

M = {x in R^2 : V(x) <= 20.33 - eps}

is an invariant set and satisfies the conditions of Theorem 3.9. This means that M c RA.

3.10 Analysis of Linear Time-Invariant Systems

Linear time-invariant systems constitute an important class, which has been extensively analyzed and is well understood. It is well known that, given an LTI system of the form

x' = Ax,  A in R^{n x n},  x(0) = x0   (3.18)

the origin is stable if and only if all eigenvalues lambda_i of A satisfy Re(lambda_i) <= 0, where Re(lambda_i) represents the real part of the eigenvalue lambda_i, and every eigenvalue with Re(lambda_i) = 0 has an associated Jordan block of order 1. The equilibrium point x = 0 is exponentially stable if

and only if all the eigenvalues of the matrix A satisfy Re(lambda_i) < 0. Moreover, the solution of the differential equation (3.18) can be expressed in a rather simple closed form:

x(t) = e^{At} x0.

Given these facts, it seems unnecessary to investigate the stability of LTI systems via Lyapunov methods. This is, however, exactly what we will do in this section! There is more than one good reason for doing so. In the first place, the Lyapunov analysis permits studying linear and nonlinear systems under the same formalism, where LTI is a special case. Second, we will introduce a very useful class of Lyapunov functions that appears very frequently in the literature. Finally, we will study the stability of nonlinear systems via the linearization of the state equation and try to get some insight into the limitations associated with this process.

Consider the autonomous linear time-invariant system given by

x' = Ax,  A in R^{n x n}   (3.19)

and let V(.) be defined as follows:

V(x) = x^T P x   (3.20)

where P in R^{n x n} is (i) symmetric and (ii) positive definite. With these assumptions, V(.) is positive definite. We also have that

V' = x'^T P x + x^T P x' = x^T A^T P x + x^T P A x = x^T (A^T P + PA) x

by (3.19). Thus, defining Q through

PA + A^T P = -Q   (3.22)

we have

V' = -x^T Q x.   (3.21)

Here the matrix Q is symmetric, since

Q^T = -(PA + A^T P)^T = -(A^T P + PA) = Q.

If Q is positive definite, then V' is negative definite and the origin is (globally) asymptotically stable. Thus, analyzing the asymptotic stability of the origin for the LTI system (3.19) reduces to analyzing the positive definiteness of the pair of matrices (P, Q). Equation (3.22) appears very frequently in the literature and is called the Lyapunov equation. This is done in two steps:

(i) Choose an arbitrary symmetric, positive definite matrix Q.
(ii) Find P that satisfies equation (3.22) and verify that it is positive definite.

Step (ii) requires solving the Lyapunov equation for P. The following theorem guarantees the existence of such a solution.

Theorem 3.10 The eigenvalues lambda_i of a matrix A in R^{n x n} satisfy Re(lambda_i) < 0 if and only if for any given symmetric positive definite matrix Q there exists a unique positive definite symmetric matrix P satisfying the Lyapunov equation (3.22).

Proof: Assume first that, given Q > 0, there exists P > 0 satisfying (3.22). Thus V = x^T P x > 0 and V' = -x^T Q x < 0, and asymptotic stability follows from Theorem 3.2. For the converse, assume that Re(lambda_i) < 0 and, given Q, define P as follows:

P = int_0^oo e^{A^T t} Q e^{At} dt.

This P is well defined, given the assumptions on the eigenvalues of A. The matrix P is also symmetric, since Q is symmetric and (e^{At})^T = e^{A^T t}.

Remarks: There are two important points to notice here. In the first place, the approach just described may seem unnecessarily complicated. Indeed, it seems to be easier to first select a positive definite P and use this matrix to find Q, thus eliminating the need for solving the Lyapunov equation. This approach may, however, lead to inconclusive results. Consider, for example, the system with the following A matrix:

A = [  0    4 ]
    [ -8  -12 ].

Taking P = I, we have that

-Q = PA + A^T P = [  0   -4 ]
                  [ -4  -24 ]

and the resulting Q is not positive definite. Therefore no conclusion can be drawn from this regarding the stability of the origin of this system. The second point to notice is that clearly the procedure described above for the stability analysis based on the pair (P, Q) depends on the existence of a unique solution of the Lyapunov equation for a given matrix A.

Returning to the proof, we claim that P as defined by the integral above is positive definite. To see

To see that this is the case, we reason by contradiction and assume that the opposite is true, that is, that there exists x ≠ 0 such that x^T P x = 0. But then

    x^T P x = 0  ⇒  ∫_0^∞ x^T e^{A^T t} Q e^{A t} x dt = 0
              ⇒  ∫_0^∞ y^T Q y dt = 0,  with y = e^{A t} x
              ⇒  y = e^{A t} x = 0  ∀t ≥ 0
              ⇒  x = 0,  since e^{A t} is nonsingular ∀t.

This contradicts the assumption, and thus we have that P is indeed positive definite. We now show that P satisfies the Lyapunov equation:

    P A + A^T P = ∫_0^∞ e^{A^T t} Q e^{A t} A dt + ∫_0^∞ A^T e^{A^T t} Q e^{A t} dt
                = ∫_0^∞ d/dt ( e^{A^T t} Q e^{A t} ) dt
                = [ e^{A^T t} Q e^{A t} ]_0^∞
                = -Q

which shows that P is indeed a solution of the Lyapunov equation. To complete the proof, there remains to show that this P is unique. Suppose that there is another solution P̄ ≠ P. Then

    (P - P̄) A + A^T (P - P̄) = 0
    ⇒  e^{A^T t} [ (P - P̄) A + A^T (P - P̄) ] e^{A t} = 0
    ⇒  d/dt [ e^{A^T t} (P - P̄) e^{A t} ] = 0

which implies that e^{A^T t} (P - P̄) e^{A t} is constant ∀t. Evaluating this constant matrix at t = 0 gives P - P̄, while letting t → ∞ (where the exponentials vanish) gives 0; hence P - P̄ = 0, that is, P̄ = P. This completes the proof. □

3.10.1 Linearization of Nonlinear Systems

Now consider the nonlinear system

    ẋ = f(x),  f : D → R^n  (3.23)

assume that x = x_e ∈ D is an equilibrium point, and assume that f is continuously differentiable in D. The Taylor series expansion of f about the equilibrium point x_e has the form

    f(x) = f(x_e) + (∂f/∂x)(x_e) (x - x_e) + higher-order terms.

Neglecting the higher-order terms (HOTs) and recalling that, by assumption, f(x_e) = 0, we have that

    f(x) ≈ (∂f/∂x)(x_e) (x - x_e).  (3.24)

Now defining x̃ = x - x_e, we have that x̃̇ = ẋ, and moreover

    x̃̇ = A x̃,  where  A = (∂f/∂x)(x_e).  (3.25)-(3.26)

We now ask whether it is possible to investigate the local stability of the nonlinear system (3.23) about the equilibrium point x_e by analyzing the properties of the linear time-invariant system (3.26). The following theorem, known as Lyapunov's indirect method, shows that if the linearized system (3.26) is exponentially stable, then it is indeed the case that for the original system (3.23) the equilibrium x_e is locally exponentially stable. To simplify our notation, we assume that the equilibrium point is the origin.

Theorem 3.11 Let x = 0 be an equilibrium point for the system (3.23). Assume that f is continuously differentiable in D, and let A be defined as in (3.26). Then if the eigenvalues λ_i of the matrix A satisfy Re(λ_i) < 0, the origin is an exponentially stable equilibrium point for the system (3.23).

The proof is omitted since it is a special case of Theorem 4.7 in the next chapter (see Section 4.5 for the proof of the time-varying equivalent of this result).

3.11 Instability

So far we have investigated the problem of stability. All the results seen so far are, however, sufficient conditions for stability. Thus the usefulness of these results is limited by our ability to find a function V(·) that satisfies the conditions of one of the stability theorems seen so far. If our attempt to find this function fails, then no conclusions can be drawn with respect to the stability properties of the particular equilibrium point under study. In these circumstances it is useful to study the opposite problem, namely, whether it is possible to show that the origin is actually unstable. The literature on instability is almost as extensive as that on stability. Perhaps the most famous and useful result is a theorem due to Chetaev, given next.

Theorem 3.12 (Chetaev) Consider the autonomous dynamical system (3.1) and assume that x = 0 is an equilibrium point. Let V : D → R have the following properties:

(i) V(0) = 0.

(ii) There exist points x_0 ∈ R^n, arbitrarily close to x = 0, such that V(x_0) > 0.

(iii) V̇ > 0 ∀x ∈ U, where the set U is defined as follows:

    U = { x ∈ D : ||x|| ≤ ε, and V(x) > 0 }

and where ε > 0 can be chosen arbitrarily small. Under these conditions, x = 0 is unstable.

Remarks: Before proving the theorem, we briefly discuss conditions (ii) and (iii). Condition (ii) guarantees that the set U is not empty: V(·) is such that V(x_0) > 0 for some points inside every ball B_δ = {x ∈ D : ||x|| ≤ δ}, with δ arbitrarily small. Condition (iii) says that V̇ is positive in the set U. No claim, however, is made about V̇ being positive definite in a whole neighborhood of x = 0. Notice also that U is clearly bounded, and that its boundary consists of the points on the sphere ||x|| = ε and the surface defined by V(x) = 0.

Proof: The proof consists of showing that a trajectory initiating at a point x_0 arbitrarily close to the origin in the set U will eventually cross the sphere defined by ||x|| = ε. This implies that, given ε > 0, we cannot find δ > 0 such that

    ||x_0|| < δ  ⇒  ||x(t)|| < ε  ∀t ≥ 0

and, given that ε is arbitrary, x = 0 is unstable.

Consider an interior point x_0 ∈ U. By assumption (ii), V(x_0) > 0, and taking account of assumption (iii) we can conclude that the trajectory x(t) starting at x_0 satisfies V(x(t)) ≥ V(x_0) > 0 for all t ≥ 0. This condition must hold as long as x(t) is inside the set U. Define now the set of points Q as follows:

    Q = { x ∈ U : ||x|| ≤ ε and V(x) ≥ V(x_0) }.

This set is compact: it is clearly bounded (it consists of points inside the ball B_ε, i.e., the set of points satisfying ||x|| ≤ ε), and it is also closed, since it contains its boundary points ||x|| = ε and V(x) = V(x_0). It then follows that V̇(x), which is a continuous function on the compact set Q, has a minimum value and a maximum value in Q. Define

    γ = min { V̇(x) : x ∈ Q } > 0.

This minimum exists since V̇(x) is a continuous function on the compact set Q. The same argument also shows that V(x) is bounded in Q.
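The escape mechanism used in this argument can be watched numerically. The sketch below is an illustration not taken from the text: it uses the simple saddle ẋ1 = x1, ẋ2 = -x2 with V(x) = (x1² - x2²)/2, for which V̇ = x1² + x2² > 0 on the set where V > 0; a trajectory starting arbitrarily close to the origin inside that set leaves any ball of radius ε. NumPy and a hand-rolled RK4 integrator are assumed.

```python
import numpy as np

def f(x):
    # Saddle dynamics: x1' = x1, x2' = -x2 (illustrative, not from the text).
    return np.array([x[0], -x[1]])

def V(x):
    # Chetaev function: positive on the cone |x1| > |x2|, and Vdot = x1^2 + x2^2 > 0 there.
    return 0.5 * (x[0]**2 - x[1]**2)

def rk4(f, x, h, steps):
    # Classical fixed-step Runge-Kutta 4 integration of x' = f(x).
    for _ in range(steps):
        k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

eps = 1.0
x0 = np.array([1e-6, 0.0])          # arbitrarily close to the origin, inside {V > 0}
assert V(x0) > 0
x_final = rk4(f, x0, 0.01, 2000)    # integrate to t = 20: x1(t) grows like e^t
assert np.linalg.norm(x_final) > eps  # the trajectory has left the ball ||x|| <= eps
```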

Since V̇ ≥ γ in Q, for the trajectory starting at x_0 we can write

    V(x(t)) = V(x_0) + ∫_0^t V̇(x(τ)) dτ ≥ V(x_0) + γ t.

It then follows that x(t) cannot stay forever inside the set U, since V(x) is bounded in U. Moreover, since V(x(t)) ≥ V(x_0) > 0 along the trajectory, x(t) cannot leave U through the surface V(x) = 0. Thus a trajectory x(t) initiating arbitrarily close to the origin must intersect the boundary of U through the sphere ||x|| = ε. This completes the proof. □

Example 3.23 Consider again the system of Example 3.20:

    ẋ1 = x2 + x1(β² - x1² - x2²)
    ẋ2 = -x1 + x2(β² - x1² - x2²).

We showed in Section 3.8 that the origin of this system is an unstable equilibrium point. We now verify this result using Chetaev's result. Let V(x) = ½(x1² + x2²). Thus we have that V(0) = 0, and moreover V(x) > 0 ∀x ∈ R², x ≠ 0. Also

    V̇ = (x1, x2) f(x) = (x1² + x2²)(β² - x1² - x2²).

Defining the set U by

    U = { x ∈ R² : ||x|| ≤ ε,  0 < ε < β }

we have that V̇(x) > 0 ∀x ∈ U, x ≠ 0. Thus, by Chetaev's result, x = 0 is unstable, since given ε > 0 we cannot find δ > 0 such that ||x_0|| < δ ⇒ ||x(t)|| < ε. □

3.12 Exercises

(3.1) Consider the following dynamical system:

    ẋ1 = x2
    ẋ2 = -x1 + x1³

(a) Find all of its equilibrium points.
(b) Find the linear approximation about each equilibrium point.
(c) Using a computer package, construct the phase portrait of each linear approximation and compare it with the phase portrait of the nonlinear system.

(3.2) Given the systems (i) and (ii) below, study the stability of the equilibrium point at the origin:

    (i)  ẋ1 = -x1 - x2²
         ẋ2 = -x2 - x2³ - x1x2

    (ii) ẋ1 = x2
         ẋ2 = -x1 - 2 tan⁻¹(x1 + x2)

(3.3) Consider the magnetic suspension system of Section 1.9:

    ẋ1 = x2
    ẋ2 = g - (k/m) x2 - (λμ/2m) x3² / (1 + μx1)²
    ẋ3 = ((1 + μx1)/λ) [ -R x3 + (λμ/(1 + μx1)²) x2 x3 + v ].

(a) Find the input voltage v = v_0 necessary to keep the ball at an arbitrary position y = y_0 (and so x1 = y_0). Find the equilibrium point x_e = [x_e1, x_e2, x_e3] corresponding to this input.
(b) Find the linear approximation about this equilibrium point and analyze its stability.

(3.4) For each of the following systems, proceed as follows:
(a) Find all of their equilibrium points.
(b) For each equilibrium point x_e different from zero, perform a change of variables y = x - x_e and show that the resulting system ẏ = g(y) has an equilibrium point at the origin.
(c) Find the linear approximation about each equilibrium point, find the eigenvalues of the resulting A matrix, and classify the stability of each equilibrium point.
(d) Using a computer package, construct the phase portrait of each nonlinear system and compare it with the results in part (c). What can you conclude about the "accuracy" of the linear approximations as the trajectories deviate from the equilibrium points? Make sure that your analysis contains information about all the equilibrium points of these systems.

    (i)  ẋ1 = x2
         ẋ2 = x1 - x1x2

    (ii) ẋ1 = x2
         ẋ2 = -x1 + x1³ - x1x2

(3.5) For each of the following systems, study the stability of the equilibrium point at the origin:

    (i)  ẋ1 = x2 - 2x1(x1² + x2²)
         ẋ2 = -x1 - 2x2(x1² + x2²)

    (ii) ẋ1 = -x1 + x1x2
         ẋ2 = -x2

In each case, use a computer package to study the trajectories for several initial conditions.

(3.6) Consider the following system:

    ẋ1 = -x2 + a x1(x1² + x2²)
    ẋ2 = x1 + a x2(x1² + x2²).

(a) Verify that the origin is an equilibrium point.
(b) Find the linear approximation about the origin, find the eigenvalues of the resulting A matrix, and classify the stability of the equilibrium point at the origin.
(c) Assuming that the parameter a > 0, use a computer package to study the trajectories for several initial conditions. What can you conclude about your answers from the linear analysis in part (b)?
(d) Repeat part (c) assuming a < 0.

(3.7) It is known that a given dynamical system has an equilibrium point at the origin. For this system, a function V(·) has been proposed, and its derivative V̇(·) has been computed. Assuming that V(·) and V̇(·) are as given below, you are asked to classify the origin, in each case, as (a) stable, (b) locally asymptotically stable, (c) globally asymptotically stable, (d) unstable, and/or (e) inconclusive information. Explain your answer in each case.

    (a) V(x) = (x1² + x2²),       V̇(x) = -(x1² + x2²)
    (b) V(x) = (x1² + x2² - 1),   V̇(x) = -(x1² + x2²)
    (c) V(x) = (x1² + x2² - 1)²,  V̇(x) = -(x1² + x2²)
    (d) V(x) = (x1² + x2² - 1)²,  V̇(x) = (x1² + x2²)
    (e) V(x) = (x1² + x2² - 1),   V̇(x) = (x1² + x2²)
    (f) V(x) = (x2 - x1)²,        V̇(x) = -(x1² + x2²)
    (g) V(x) = (x2 - x1)²,        V̇(x) = (x1² + x2²)
    (h) V(x) = (x1² + x2²),       V̇(x) = (x1² + x2²)

(3.8) Prove the following properties of class K and class K∞ functions:
(i) If α : [0, a) → R ∈ K, then α⁻¹ : [0, α(a)) → R ∈ K.
(ii) If α1, α2 ∈ K, then α1 ∘ α2 ∈ K.
(iii) If α ∈ K∞, then α⁻¹ ∈ K∞.
(iv) If α1, α2 ∈ K∞, then α1 ∘ α2 ∈ K∞.

(3.9) Consider the system defined by the following equations:

    ẋ1 = x2 + β x1³
    ẋ2 = -x1 + β x2³.

Study the stability of the equilibrium point x_e = (0, 0) in the following cases: (i) β > 0; (ii) β = 0; (iii) β < 0.

(3.10) Provide a detailed proof of Theorem 3.7.

(3.11) Provide a detailed proof of Corollary 3.1.

(3.12) Provide a detailed proof of Corollary 3.2.

(3.13) Consider the system defined by the following equations:

    ẋ1 = x2
    ẋ2 = -x2 - a x1 - (2x2 + 3x1)² x2.

Study the stability of the equilibrium point x_e = (0, 0).

(3.14) Consider the system defined by the following equations:

    ẋ1 = 3x2
    ẋ2 = -x1 + x2(1 - 3x1 - 2x2).

(i) Show that (a) the point x = (0, 0) and (b) the set defined by 1 - 3x1 - 2x2 = 0 are invariant sets.
(ii) Study the stability of the origin x = (0, 0).
(iii) Study the stability of the invariant set 1 - 3x1 - 2x2 = 0.

(3.15) Given the following system, discuss the stability of the equilibrium point at the origin:

    ẋ1 = x1²x2 + 2x1x2 + x1³
    ẋ2 = -x2³ + x2.

(3.16) (Lagrange stability) Consider the following notion of stability:

Definition 3.14 The equilibrium point x = 0 of the system (3.1) is said to be bounded, or Lagrange stable, if there exists a bound A such that

    ||x(t)|| < A  ∀t ≥ 0.

Prove the following theorem:

Theorem 3.13 [49] (Lagrange stability theorem) Let Ω be a bounded neighborhood of the origin and let Ω^c be its complement. Let V(x) : R^n → R be continuously differentiable in Ω^c and satisfy:
(i) V(x) > 0 ∀x ∈ Ω^c;
(ii) V̇(x) ≤ 0 ∀x ∈ Ω^c;
(iii) V is radially unbounded.
Then the equilibrium point at the origin is Lagrange stable.

Notes and References

Good sources for the material of this chapter are References [27], [41], [48], [49], [68], [88], and [95], among others. Sections 3.4 and 3.5 follow closely the presentation in Reference [95]. Section 3.1 is based on Reference [32]. The proof of Theorem 3.8 is based on LaSalle [49] and Khalil [41]. The beautiful Example 3.20 was taken from Reference [68].
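Several of the exercises above call for a computer study. As one sketch of how such a study can go (Python with NumPy assumed, and a hand-rolled RK4 integrator in place of any particular package), the system of Exercise (3.6) can be simulated directly: in polar coordinates it gives ṙ = a r³, so trajectories spiral inward for a < 0 and outward for a > 0, even though the linearization at the origin is a center in both cases.

```python
import numpy as np

def f(x, a):
    # Exercise (3.6) dynamics: a harmonic oscillator plus a cubic radial term.
    r2 = x[0]**2 + x[1]**2
    return np.array([-x[1] + a*x[0]*r2, x[0] + a*x[1]*r2])

def simulate(a, x0, h=1e-3, steps=1500):
    # Plain RK4 integration up to t = steps*h.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = f(x, a); k2 = f(x + h/2*k1, a)
        k3 = f(x + h/2*k2, a); k4 = f(x + h*k3, a)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

x0 = [0.5, 0.0]
r_stable   = np.linalg.norm(simulate(-1.0, x0))  # a < 0: radius shrinks
r_unstable = np.linalg.norm(simulate(+1.0, x0))  # a > 0: radius grows
assert r_stable < 0.5 < r_unstable
```

Plotting many such trajectories over a grid of initial conditions yields the phase portraits requested in Exercises (3.1), (3.4), and (3.5).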

Chapter 4

Lyapunov Stability II: Nonautonomous Systems

In this chapter we extend the results of Chapter 3 to nonautonomous systems. We start by reviewing the several notions of stability, suitably extended to the nonautonomous case.

4.1 Definitions

We now extend the several notions of stability from Chapter 3 to nonautonomous systems. Consider the nonautonomous system

    ẋ = f(x, t),  f : D × [0, ∞) → R^n  (4.1)

where f is locally Lipschitz in x and piecewise continuous in t on D × [0, ∞). The system (4.1) represents an unforced system. We will say that the origin x = 0 ∈ D is an equilibrium point of (4.1) at t = t_0 if

    f(0, t) = 0  ∀t ≥ t_0.

For autonomous systems, equilibrium points are the real roots of the equation f(x_e) = 0. Visualizing equilibrium points for nonautonomous systems is not as simple. In general, the initial time instant t_0 warrants special attention. This issue will originate several technicalities, as well as the notion of uniform stability, to be defined.¹

¹Notice that, as in Chapter 3, in this chapter we state all our definitions and theorems assuming that the equilibrium point of interest is the origin, xe = 0.

In this case, x_e = 0 can be a translation of a nonzero trajectory. Indeed, consider the nonautonomous system

    ẋ = f(x, t)  (4.2)

and assume that x̄(t) is a trajectory, or a solution of the differential equation (4.2), for t ≥ 0. Consider the change of variable y = x - x̄(t). We have

    ẏ = ẋ - x̄̇(t) = f(x, t) - f(x̄(t), t) = f(y + x̄(t), t) - f(x̄(t), t) ≝ g(y, t).

Thus g(y, t) = 0 if y = 0; that is, the origin y = 0 is an equilibrium point of the new system ẏ = g(y, t) at t = 0.

Definition 4.1 The equilibrium point x = 0 of the system (4.1) is said to be

• Stable at t_0 if given ε > 0, ∃δ = δ(ε, t_0) > 0:

    ||x(0)|| < δ  ⇒  ||x(t)|| < ε  ∀t ≥ t_0 ≥ 0.

• Convergent at t_0 if there exists δ_1 = δ_1(t_0) > 0:

    ||x(0)|| < δ_1  ⇒  lim_{t→∞} x(t) = 0.

Equivalently (and more precisely), x = 0 is convergent at t_0 if for any given ε_1 > 0, ∃T = T(ε_1, t_0) such that

    ||x(0)|| < δ_1  ⇒  ||x(t)|| < ε_1  ∀t ≥ t_0 + T.  (4.5)

• Asymptotically stable at t_0 if it is both stable and convergent.

• Unstable if it is not stable.

All of these definitions are similar to their counterparts in Chapter 3. The difference is in the inclusion of the initial time t_0. This dependence on the initial time is not desirable and motivates the introduction of the several notions of uniform stability.

Definition 4.2 The equilibrium point x = 0 of the system (4.1) is said to be

• Uniformly stable if for any given ε > 0, ∃δ = δ(ε) > 0, independent of t_0:

    ||x(0)|| < δ  ⇒  ||x(t)|| < ε  ∀t ≥ t_0 ≥ 0.  (4.6)

• Uniformly convergent if there is δ_1 > 0, independent of t_0, such that

    ||x_0|| < δ_1  ⇒  x(t) → 0  as t → ∞.

Equivalently, x = 0 is uniformly convergent if for any given ε_1 > 0, ∃T = T(ε_1), independent of t_0, such that

    ||x(0)|| < δ_1  ⇒  ||x(t)|| < ε_1  ∀t ≥ t_0 + T.  (4.7)

• Uniformly asymptotically stable if it is uniformly stable and uniformly convergent.

• Globally uniformly asymptotically stable if it is uniformly asymptotically stable and every motion converges to the origin.

As in the case of autonomous systems, it is often useful to restate the notions of uniform stability and uniform asymptotic stability using class K and class KL functions. The following lemmas outline the details. The proofs of both of these lemmas are almost identical to their counterparts for autonomous systems and are omitted.

Lemma 4.1 The equilibrium point x = 0 of the system (4.1) is uniformly stable if and only if there exist a class K function α(·) and a constant c > 0, independent of t_0, such that

    ||x(0)|| < c  ⇒  ||x(t)|| ≤ α(||x(0)||)  ∀t ≥ t_0.  (4.8)

Lemma 4.2 The equilibrium point x = 0 of the system (4.1) is uniformly asymptotically stable if and only if there exist a class KL function β(·,·) and a constant c > 0, independent of t_0, such that

    ||x(0)|| < c  ⇒  ||x(t)|| ≤ β(||x(0)||, t - t_0)  ∀t ≥ t_0.

Definition 4.3 The equilibrium point x = 0 of the system (4.1) is (locally) exponentially stable if there exist positive constants α and λ such that

    ||x(t)|| ≤ α ||x_0|| e^{-λt}  (4.9)

whenever ||x(0)|| < δ. It is said to be globally exponentially stable if (4.9) is satisfied for any x ∈ R^n.
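The role of t_0 in these definitions can be seen in a small numerical sketch. The example below is an illustration, not from the text: for the scalar system ẋ = -x/(1 + t), the solution is x(t) = x_0 (1 + t_0)/(1 + t), so every solution converges to zero, yet the waiting time T needed to enter |x| < ε grows with t_0 — convergence, but not uniform convergence.

```python
# Scalar example x' = -x/(1+t): solution x(t) = x0*(1+t0)/(1+t).
# |x(t)| < eps  <=>  t > (1+t0)*|x0|/eps - 1, so the waiting time
# T = t - t0 needed after the initial instant depends on t0.
def time_to_reach(eps, t0, x0=1.0):
    return (1 + t0) * abs(x0) / eps - 1 - t0

T0 = time_to_reach(0.01, t0=0.0)
T100 = time_to_reach(0.01, t0=100.0)
assert T100 > T0   # the required waiting time grows with t0: not uniform
```

Contrast this with ẋ = -x, where x(t) = x_0 e^{-(t - t_0)} and the same waiting time works for every t_0.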

4.2 Positive Definite Functions

As seen in Chapter 3, positive definite functions play a crucial role in the Lyapunov theory. In this section we introduce time-dependent positive definite functions, i.e., scalar functions W(x, t) of two variables: the vector x ∈ D and the time variable t. In the following definitions we consider a function W : D × R⁺ → R. Furthermore we assume that (i) 0 ∈ D, and (ii) W(x, t) is continuous and has continuous partial derivatives with respect to all of its arguments.

Definition 4.4 W(·,·) is said to be positive semidefinite in D if
(i) W(0, t) = 0 ∀t ∈ R⁺, and
(ii) W(x, t) ≥ 0 ∀x ≠ 0, x ∈ D.

Definition 4.5 W(·,·) is said to be positive definite in D if
(i) W(0, t) = 0 ∀t ∈ R⁺, and
(ii) there exists a time-invariant positive definite function V1(x) such that V1(x) ≤ W(x, t) ∀x ∈ D.

Definition 4.6 W(·,·) is said to be decrescent in D if there exists a positive definite function V2(x) such that

    |W(x, t)| ≤ V2(x)  ∀x ∈ D.

It is immediate that every time-invariant positive definite function is decrescent. The essence of Definition 4.6 is to render the decay of W(x, t) toward zero a function of x only, and not of t; equivalently, W(x, t) is decrescent in D if it tends to zero uniformly with respect to t as ||x|| → 0.

Definition 4.7 W(·,·) is radially unbounded if W(x, t) → ∞ as ||x|| → ∞,

uniformly on t. Equivalently, W(·,·) is radially unbounded if, given M > 0, there exists N > 0 such that W(x, t) ≥ M for all t, provided that ||x|| > N.

Remarks: Consider now a function W(x, t). By Definition 4.5, W(·,·) is positive definite in D if and only if there exists V1(x) such that

    V1(x) ≤ W(x, t)  ∀x ∈ D  (4.10)

and by Lemma 3.1 this implies the existence of α1 ∈ K such that

    α1(||x||) ≤ V1(x) ≤ W(x, t)  ∀x ∈ B_r ⊂ D.  (4.11)

If in addition W(·,·) is decrescent, then by Definition 4.6 there exists V2 such that

    W(x, t) ≤ V2(x)  ∀x ∈ D  (4.12)

and by Lemma 3.1 this implies the existence of α2 ∈ K such that

    W(x, t) ≤ V2(x) ≤ α2(||x||)  ∀x ∈ B_r ⊂ D.  (4.13)

It follows that W(·,·) is positive definite and decrescent if and only if there exist time-invariant positive definite functions V1(·) and V2(·) such that

    V1(x) ≤ W(x, t) ≤ V2(x)  ∀x ∈ D  (4.14)

and α1, α2 ∈ K such that

    α1(||x||) ≤ W(x, t) ≤ α2(||x||)  ∀x ∈ B_r ⊂ D.  (4.15)

Finally, W(·,·) is positive definite, decrescent, and radially unbounded if and only if α1(·) and α2(·) can be chosen in the class K∞.

4.2.1 Examples

In the following examples we assume that x = [x1, x2]^T and study several functions W(x, t).

Example 4.1 Let

    W1(x, t) = (x1² + x2²) e^{-at},  a > 0.

This function satisfies (i) W1(0, t) = 0 ∀t ∈ R, and (ii) W1(x, t) > 0 ∀x ≠ 0, ∀t. However, since e^{-at} → 0 as t → ∞, there is no positive definite function V1(x) satisfying V1(x) ≤ W1(x, t) for all t;

thus W1(·,·) is positive semidefinite, but not positive definite. It is decrescent, since |W1(x, t)| ≤ V(x) ≝ (x1² + x2²) ∀x, ∀t.

Example 4.2 Let

    W2(x, t) = (x1² + x2²)(t² + 1)/(x1² + 1) = V2(x)(t² + 1),  V2(x) ≝ (x1² + x2²)/(x1² + 1).

Thus V2(x) > 0 ∀x ∈ R², x ≠ 0, and moreover W2(x, t) ≥ V2(x) ∀x ∈ R², which implies that W2(·,·) is positive definite. However,

    lim_{t→∞} W2(x, t) = ∞  ∀x ∈ R², x ≠ 0.

Thus it is not possible to find a positive definite function V(·) such that |W2(x, t)| ≤ V(x) ∀x, ∀t, and W2(x, t) is not decrescent. It is also not radially unbounded, since it does not tend to infinity along the x1 axis.

Example 4.3 Let

    W3(x, t) = (x1² + x2²)(t² + 1) = V3(x)(t² + 1),  V3(x) ≝ (x1² + x2²).

Following a procedure identical to that in Example 4.2, we have that W3(·,·) is positive definite and radially unbounded, but not decrescent.

Example 4.4 Let

    W4(x, t) = (x1² + x2²)/(x1² + 1).

It is not time-dependent, and thus it is trivially decrescent; it is also positive definite. It is not radially unbounded, since it does not tend to infinity along the x1 axis.

Example 4.5 Let

    W5(x, t) = (x1² + x2²)(t² + 1)/(t² + 2) = V5(x)(t² + 1)/(t² + 2),  V5(x) ≝ (x1² + x2²).

Thus W5(x, t) ≥ k1 V5(x) for some constant k1 > 0 (indeed, (t² + 1)/(t² + 2) ≥ ½ for all t), which implies that W5(·,·) is positive definite. It is decrescent, since |W5(x, t)| ≤ k2 V5(x) ∀x ∈ R², and it is also radially unbounded, since W5(x, t) → ∞ as ||x|| → ∞, uniformly in t.

4.3 Stability Theorems

We now look for a generalization of the stability results of Chapter 3 to nonautonomous systems. Consider the system (4.1) and assume that the origin is an equilibrium state:

    f(0, t) = 0  ∀t ∈ R.

Theorem 4.1 (Lyapunov Stability Theorem) If in a neighborhood D of the equilibrium state x = 0 there exists a differentiable function W(·,·) : D × [0, ∞) → R such that

(i) W(x, t) is positive definite, and
(ii) the derivative of W(·,·) along any solution of (4.1) is negative semidefinite in D,

then the equilibrium state is stable. Moreover, if W(x, t) is also decrescent, then the origin is uniformly stable.

Theorem 4.2 (Lyapunov Uniform Asymptotic Stability) If in a neighborhood D of the equilibrium state x = 0 there exists a differentiable function W(·,·) : D × [0, ∞) → R such that

(i) W(x, t) is (a) positive definite and (b) decrescent, and
(ii) the derivative of W(·,·) is negative definite in D,

then the equilibrium state is uniformly asymptotically stable.

Remarks: There is an interesting difference between Theorems 4.1 and 4.2. Indeed, in Theorem 4.1 the assumption of W(·,·) being decrescent is optional: if W(·,·) is

decrescent, we have uniform stability, whereas if this is not the case, then we settle for stability. The situation in Theorem 4.2 is different: if the decrescent assumption is removed, the remaining conditions are not sufficient to prove asymptotic stability. This point was clarified in 1949 by Massera, who found a counterexample.

Notice also that, given a positive definite and decrescent function W(x, t), there exist positive definite functions V1(x) and V2(x), and class K functions α1 and α2, such that, according to inequalities (4.14)-(4.15),

    V1(x) ≤ W(x, t) ≤ V2(x)          ∀x ∈ D  (4.16)
    α1(||x||) ≤ W(x, t) ≤ α2(||x||)  ∀x ∈ B_r.  (4.17)

With this in mind, Theorem 4.2 can be restated as follows:

Theorem 4.2 (Uniform Asymptotic Stability Theorem, restated) If in a neighborhood D of the equilibrium state x = 0 there exists a differentiable function W(·,·) : D × [0, ∞) → R such that

(i) V1(x) ≤ W(x, t) ≤ V2(x)  ∀x ∈ D, ∀t, and
(ii) ∂W/∂t + ∇W · f(x, t) ≤ -V3(x)  ∀x ∈ D, ∀t,

where V_i, i = 1, 2, 3, are positive definite functions in D, then the equilibrium state is uniformly asymptotically stable.

Theorem 4.3 (Global Uniform Asymptotic Stability) If there exists a differentiable function W(·,·) : R^n × [0, ∞) → R such that

(i) W(x, t) is (a) positive definite, (b) decrescent, and (c) radially unbounded ∀x ∈ R^n, and
(ii) the derivative of W(x, t) is negative definite ∀x ∈ R^n,

then the equilibrium state at x = 0 is globally uniformly asymptotically stable.

For completeness, the following theorem extends Theorem 3.4 on exponential stability to the case of nonautonomous systems. The proof is almost identical to that of Theorem 3.4 and is omitted.

Theorem 4.4 Suppose that all the conditions of Theorem 4.2 are satisfied, and in addition assume that there exist positive constants K1, K2, and K3 such that

    K1 ||x||^p ≤ W(x, t) ≤ K2 ||x||^p,   Ẇ(x, t) ≤ -K3 ||x||^p.

Then the origin is exponentially stable. Moreover, if the conditions hold globally, then the origin is globally exponentially stable.

Example 4.6 Consider the following system:

    ẋ1 = -x1 - e^{-2t} x2
    ẋ2 = x1 - x2.

To study the stability of the origin for this system, we consider the following Lyapunov function candidate:

    W(x, t) = x1² + (1 + e^{-2t}) x2².

Clearly

    V1(x) ≝ (x1² + x2²) ≤ W(x, t) ≤ (x1² + 2x2²) ≝ V2(x)

with V1 positive definite in R²; thus, since V1(x) ≤ W(x, t), W(x, t) is positive definite. Also V2 is positive definite in R², and since W(x, t) ≤ V2(x), we have that W(x, t) is decrescent. Moreover,

    Ẇ(x, t) = (∂W/∂x) f(x, t) + ∂W/∂t = -2[ x1² - x1x2 + x2²(1 + 2e^{-2t}) ] ≤ -2[ x1² - x1x2 + x2² ].

It follows that Ẇ(x, t) is negative definite, and the origin is globally uniformly asymptotically stable. Indeed, since all of the bounds above are quadratic, Theorem 4.4 applies with p = 2, and x = 0 is globally exponentially stable. □

4.4 Proof of the Stability Theorems

We now elaborate the proofs of Theorems 4.1-4.3.

Proof of Theorem 4.1: Choose R > 0 such that the closed ball

    B_R = { x ∈ R^n : ||x|| ≤ R }

is contained in D. By the assumptions of the theorem, W(·,·) is positive definite, and thus there exist a time-invariant positive definite function V1 and a class K function α1 satisfying

    α1(||x||) ≤ V1(x) ≤ W(x, t)  ∀x ∈ B_R.  (4.18)

Also, W is continuous with respect to x and satisfies W(0, t_0) = 0. Thus, given t_0, we can find δ > 0 such that

    ||x_0|| < δ  ⇒  W(x_0, t_0) < α1(R).

By assumption, Ẇ(x, t) ≤ 0, which means that W(x, t) cannot increase along any motion. Thus

    W(x(t), t) ≤ W(x_0, t_0) < α1(R)  ∀t ≥ t_0
    ⇒  α1(||x(t)||) ≤ W(x(t), t) < α1(R)

which, by the properties of functions in the class K, implies that ||x(t)|| < R ∀t ≥ t_0. This proves stability.

If, in addition, W(x, t) is decrescent, then there exists a positive definite function V2(x) such that |W(x, t)| ≤ V2(x), and then there exists α2 in the class K such that

    α1(||x||) ≤ V1(x) ≤ W(x, t) ≤ V2(x) ≤ α2(||x||)  ∀x ∈ B_R.

By the properties of functions in the class K, given R we can choose δ = δ(R) such that α2(δ) < α1(R). Then

    ||x_0|| < δ  ⇒  α1(R) > α2(δ) ≥ W(x_0, t_0) ≥ W(x(t), t) ≥ α1(||x(t)||)

which means that ||x(t)|| < R ∀t ≥ t_0. However, this δ is a function of R alone, and not of t_0 as before. Thus we conclude that the stability of the equilibrium is uniform. This completes the proof. □

Proof of Theorem 4.2: Choose R > 0 such that the closed ball

    B_R = { x ∈ R^n : ||x|| ≤ R }

is contained in D. By the assumptions of the theorem there exist class K functions α1, α2, and α3 satisfying

    α1(||x||) ≤ W(x, t) ≤ α2(||x||)  ∀x ∈ B_R, ∀t  (4.19)
    Ẇ(x, t) ≤ -α3(||x||)            ∀x ∈ B_R, ∀t.  (4.20)

Given that the α_i's are strictly increasing and satisfy α_i(0) = 0, for any given ε > 0 we can find δ1, δ2 > 0 such that

    α2(δ1) < α1(R)  (4.21)
    α2(δ2) < min[ α1(ε), α2(δ1) ]  (4.22)

where we notice that δ1 and δ2 are functions of ε but are independent of t_0. Notice also that inequality (4.22) implies that δ1 > δ2. Now define

    T = α2(δ1) / α3(δ2).  (4.23)

Conjecture: We claim that ||x_0|| < δ1 implies ||x(t*)|| < δ2 for some t = t* in the interval t_0 ≤ t* ≤ t_0 + T. To see this, we reason by contradiction and assume that ||x_0|| < δ1 but ||x(t)|| ≥ δ2 for all t in the interval t_0 ≤ t ≤ t_0 + T. By (4.20) and the assumption ||x(t)|| ≥ δ2,

    Ẇ(x, t) ≤ -α3(δ2)  for t_0 ≤ t ≤ t_0 + T

and thus

    0 < α1(δ2) ≤ W(x(t_0 + T), t_0 + T) ≤ W(x(t_0), t_0) - T α3(δ2) ≤ α2(δ1) - α2(δ1) = 0

where W(x(t_0), t_0) ≤ α2(||x(t_0)||) ≤ α2(δ1) by (4.19), since ||x_0|| < δ1. This is a contradiction. It then follows that our conjecture is indeed correct.

Now assume that t > t*. Since W(x(t), t) is a decreasing function of t, we have

    α1(||x(t)||) ≤ W(x(t), t) ≤ W(x(t*), t*) ≤ α2(||x(t*)||) ≤ α2(δ2) < α1(ε).

This implies that ||x(t)|| < ε for all t ≥ t_0 + T. This proves that all motions converge uniformly to the origin: given ε > 0 there is T = T(ε), independent of t_0, such that ||x_0|| < δ1 implies ||x(t)|| < ε ∀t ≥ t_0 + T. Together with the uniform stability established in the proof of Theorem 4.1, this shows that the origin is uniformly asymptotically stable. □

Proof of Theorem 4.3: Let α1, α2, and α3 be as in the proof of Theorem 4.2. Given that in this case W(·,·) is radially unbounded, we must have that α1(||x||) → ∞ as ||x|| → ∞. Thus, for any a > 0 there exists b > 0 such that α1(b) > α2(a). If ||x_0|| < a, then

    α1(b) > α2(a) ≥ W(x_0, t_0) ≥ W(x(t), t) ≥ α1(||x(t)||)

so that ||x(t)|| < b ∀t ≥ t_0, and we have that all motions are uniformly bounded. Now consider ε1 > 0 and define δ and T as follows:

    α2(δ) < α1(ε1),   T = α2(b) / α3(δ).

Using an argument similar to the one used in the proof of Theorem 4.2, we can show that, for t ≥ t_0 + T,

    ||x(t)|| < ε1  provided that ||x_0|| < a.

It follows that any motion with initial state ||x_0|| < a is uniformly convergent to the origin; since a is arbitrary, the origin is asymptotically stable in the large, i.e., globally uniformly asymptotically stable. □
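The uniform decay guaranteed by these theorems can be checked numerically on Example 4.6. A minimal sketch (NumPy and a hand-rolled RK4 integrator assumed): starting from several initial times t_0, the contraction of ||x(t)|| over a fixed window of length 5 does not degrade as t_0 grows, as uniformity requires.

```python
import numpy as np

def f(t, x):
    # Example 4.6 dynamics: x1' = -x1 - e^{-2t} x2, x2' = x1 - x2.
    return np.array([-x[0] - np.exp(-2*t)*x[1], x[0] - x[1]])

def simulate(t0, x0, h=1e-3, steps=5000):
    # RK4 for the nonautonomous system, integrating a window of length steps*h.
    t, x = t0, np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = f(t, x); k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2); k4 = f(t + h, x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return x

for t0 in (0.0, 5.0, 50.0):
    xf = simulate(t0, [1.0, 1.0])
    # After a window of length 5, the state has contracted by a large factor,
    # regardless of the starting time t0.
    assert np.linalg.norm(xf) < 0.1
```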

We now endeavor to prove the existence of a Lyapunov function that guarantees exponential stability of the origin for the system (4. t) = iT P(t)x + xT Pi + xT P(t)x = xTATPX+xTPAx+xTP(t)x = xT[PA+ATP+P(t)]x . and (iv) positive definite.to)II < kie-kz(t-t0) where for any to > 0.25).Vx c 1R". page 404.25) is exponentially stable if and only if there exist positive numbers kl and k2 such that IF(t. W (x. to)xo.) is an n x n matrix whose entries are real-valued continuous functions of t E R. details the necessary and sufficient conditions for stability of the origin for the nonautonomous linear system (4. Proof: See Chen [15].t) < k2IIXII2 This implies that W (x. k2 E R+ satisfying kixTx < xTP(t)x < k2xTx or Vt > 0.4. x(t) = 4i(t. (4. Consider the system: ± = A(t)x (4. decrescent.28) where P(t) satisfies the assumptions that it is (i) continuously differentiable. t) is positive definite. Under these assumptions. ). Indeed. the solution of (4. given without proof. It is a well-established result that the solution of the state equation with initial condition xo is completely characterized by the state transition matrix <1(. Theorem 4. and radially unbounded. (iii) bounded.27) II'(t. there exist constants k1. ANALYSIS OF LINEAR TIME-VARYING SYSTEMS 119 4. to) II = Iml x II4(t.25) with initial condition xo is given by. Consider the function W (x' t) = xT P(t)x (4.5 Analysis of Linear Time-Varying Systems In this section we review the stability of linear time-varying systems using Lyapunov tools. Vt > to (4.Vx E R" Vt > 0.26) The following theorem..5.25) where A(. k1IIx1I2 < W(x. (ii) symmetric.25).5 The equilibrium x = 0 of the system (4. to)xll.

oo) -> R' (4. According to theorem 4. Indeed. f (0. t) + of Ix-o x + HOT where HOT= higher-order terms. positive definite.1 The Linearization Principle Linear time-varying system often arise as a consequence of linearizing the equations of a nonlinear system. k4 E 1(P+ such that k3xTx < xTQ(t)x < k4xTx Moreover. continuous. if Q(t) is positive definite. and bounded matrix Q(t). Proof: See the Appendix.120 or CHAPTER 4.Vx E IRT.6 Consider the system (4. then the origin is uniformly asymptotically stable.t) = -xTQ(t)xT where we notice that Q(t) is symmetric by construction. Vt > 0. given a nonlinear system of the form x = f (x.31) .5. t) f : D x [0. there exist a symmetric. t) having continuous partial derivatives of all orders with respect to x. continuously differentiable.25). t) using Taylor's series about the equilibrium point x = 0 to obtain X = f (x. then it is possible to expand f (x. LYAPUNOV STABILITY II: NONAUTONOMOUS SYSTEMS W(x.Vx E Rn and then the origin is exponentially stable by Theorem 4. The equilibrium state x = 0 is exponentially stable if and only if for any given symmetric. This is the case if there exist k3. (4. thus we can write f(x.30) with f (x. Given that x = 0 is an equilibrium point.29) 4. positive definite.4.t) N Of x=O or x = A(t)x + HOT (4. t) = 0. Theorem 4. t) = f (0. and bounded matrix P(t) such that -Q(t) = P(t)A(t) + AT(t)p(t) + P(t). then k3IIxII2 < Q(t) < k4IIxII2 Vt > 0.2. if these conditions are satisfied.

t) satisfies the following condition. More explicitly. We will denote g(x. t) ax since the higher-order terms tend to be negligible "near" the equilibrium point. We now investigate this idea in more detail. ANALYSIS OF LINEAR TIME-VARYING SYSTEMS where 121 f of (o. P so defined is positive and bounded. Proof: Let 4) (t. (ii) The function g(x. t). t) is uniformly asymptotically stable if (i) The linear system a = A(t)x is exponentially stable. Thus x = f (x.5.4. we study under what conditions stability of the nonlinear system (4. the quadratic function W (x. t) = 0 i=u-4a Ilxil uniformly with respect to t. t) 4) (r. (4.31) seems A(t) = of ax l==o to imply that "near" x = 0. (4.31).32) lim g(x. t) d7 c according to Theorem 4.31). t) 11 ilxll <E.7 The equilibrium point x = 0 of the nonlinear system x = A(t)x(t) + g(x. t) = xT Px . there exists 5 > 0.A(t)x.6 (with Q = I): P(t) = J 4T (T. and define P as in the proof of Theorem 4.30) can be inferred from the linear approximation (4.30) is similar to that of the so-called linear approximation (4. the behavior of the nonlinear system (4. t) . such that Ilxll <6 This means that Ilg(x. Also. t) = A(t)x(t) + g(x. t) de f f (x. independent of t.6. Given e > 0. to) be the transition matrix of x = A(t)x. Theorem 4.

is (positive definite and) decrescent. Given that P(t) is bounded and that condition (4.32) holds uniformly in t, there exists δ > 0 such that

    ||g(x, t)|| / ||x|| < 1 / (2||P||),  that is,  2||g(x, t)|| ||P|| <= θ||x|| < ||x||,  with 0 < θ < 1

provided that ||x|| < δ. We now compute Ẇ along the trajectories of the system:

    Ẇ(x, t) = ẋ^T P x + x^T Ṗ x + x^T P ẋ
            = (x^T A^T + g^T) P x + x^T P (Ax + g) + x^T Ṗ x
            = x^T (A^T P + P A + Ṗ) x + g^T P x + x^T P g      (A^T P + P A + Ṗ = -I)
            = -x^T x + g^T P x + x^T P g.

Thus, using the bound above, we can write

    Ẇ(x, t) <= -||x||^2 + 2||g|| ||P|| ||x|| = -||x||^2 + θ||x||^2 <= -(1 - θ)||x||^2,  0 < θ < 1

provided that ||x|| < δ. It then follows that Ẇ(x, t) is negative definite in a neighborhood of the origin, and W satisfies the conditions of Theorem 4.4 (with p = 2). This completes the proof.

4.6 Perturbation Analysis

Inspired by the linearization principle of the previous section, we now take a look at an important issue related to mathematical models of physical systems. In practice, it is not realistic to assume that a mathematical model is a true representation of a physical device. At best, a model can "approximate" the true system, and the difference between mathematical model and system is loosely referred to as uncertainty. How to deal with model uncertainty

and related design issues is a difficult problem that has spurred a lot of research in recent years. In our first look at this problem we consider a dynamical system of the form

    ẋ = f(x, t) + g(x, t)   (4.33)

where g(x, t) is a perturbation term used to estimate the uncertainty between the true system and its model ẋ = f(x, t). In our first pass we will seek an answer to the following question: Suppose that ẋ = f(x, t) has an asymptotically stable equilibrium point; what can be said about the perturbed system ẋ = f(x, t) + g(x, t)?

Theorem 4.8 [41] Let x = 0 be an equilibrium point of the system (4.33), and assume that there exists a differentiable function W(., .) : D x [0, oo) -> R such that

(i) k1||x||^2 <= W(x, t) <= k2||x||^2,
(ii) ∂W/∂t + ∇W f(x, t) <= -k3||x||^2,
(iii) ||∇W|| <= k4||x||.

Then, if the perturbation g(x, t) satisfies the bound

(iv) ||g(x, t)|| <= k5||x||,  with (k3 - k4k5) > 0,

the origin is exponentially stable. Moreover, if all the assumptions hold globally, then the exponential stability is global.

Proof: Notice first that assumption (i) implies that W(x, t) is positive definite and decrescent, with α1(||x||) = k1||x||^2 and α2(||x||) = k2||x||^2. Assumption (ii) implies that Ẇ(x, t) is negative definite along the trajectories of the nominal system ẋ = f(x, t), ignoring the perturbation term g(x, t). Thus, assumptions (i) and (ii) together imply that the origin is a uniformly asymptotically stable equilibrium point for the nominal system ẋ = f(x, t). We now find Ẇ along the trajectories of the perturbed system (4.33):

    Ẇ = ∂W/∂t + ∇W f(x, t) + ∇W g(x, t)
      <= -k3||x||^2 + k4k5||x||^2
    Ẇ <= -(k3 - k4k5)||x||^2 < 0

since (k3 - k4k5) > 0 by assumption. The result then follows by Theorems 4.2-4.3 (along with Theorem 4.4). Moreover, if all the assumptions hold globally, then the left-hand side of inequality (i) implies that W(., .) is radially unbounded, and the exponential stability is global.
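A scalar illustration of this robustness result (a sketch with assumed values, not from the text): the nominal system ẋ = -x with W(x) = x^2 satisfies (i)-(iii) with k1 = k2 = 1, k3 = 2, k4 = 2, and the hypothetical perturbation g(x, t) = 0.5 x sin t gives k5 = 0.5, so k3 - k4 k5 = 1 > 0 and the theorem predicts exponential stability of the perturbed system.

```python
import numpy as np

T, N = 1e-3, 10_000      # Euler step and number of steps (t goes up to 10)
x, t = 1.0, 0.0
for _ in range(N):
    # perturbed system: xdot = -x + 0.5*x*sin(t); the perturbation is
    # linearly bounded, so it cannot destroy exponential stability
    x += T * (-x + 0.5 * x * np.sin(t))
    t += T
# exact solution: |x(t)| = exp(-t + 0.5*(1 - cos t)) <= e * exp(-t)
```

Note that the perturbation itself need not be known; only the bound |g| <= 0.5|x| matters.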

The importance of Theorem 4.8 is that it shows that exponential stability is robust with respect to a class of perturbations. Notice also that g(x, t) need not be known: only the bound (iv) is necessary.

Special Case: Consider a system of the form

    ẋ = Ax + g(x, t)   (4.34)

where A in R^{n x n}. Denote by λi, i = 1, 2, ..., n, the eigenvalues of A, and assume that Re(λi) < 0, for all i. Theorem 3.10 guarantees that for any Q = Q^T > 0 there exists a unique matrix P = P^T > 0 that is the solution of the Lyapunov equation

    PA + A^T P = -Q.

The matrix P has the property that, defining V(x) = x^T P x and denoting λmin(P) and λmax(P) respectively as the minimum and maximum eigenvalues of the matrix P, we have that

    λmin(P)||x||^2 <= V(x) <= λmax(P)||x||^2,  for all x in R^n.

Thus V(x) is positive definite, and it is a Lyapunov function for the linear system ẋ = Ax, since

    ∂V/∂x Ax = -x^T Q x <= -λmin(Q)||x||^2.

Also

    ∂V/∂x = x^T P + x^T P^T = 2x^T P,  ||2x^T P|| <= 2λmax(P)||x||.

Assume now that the perturbation term g(x, t) satisfies the bound

    ||g(x, t)|| <= γ||x||,  for all t >= 0.

It follows that

    V̇(x) = ∂V/∂x Ax + ∂V/∂x g(x, t)
         <= -λmin(Q)||x||^2 + [2λmax(P)||x||][γ||x||]
    V̇(x) <= [-λmin(Q) + 2γλmax(P)] ||x||^2.

It follows that the origin is asymptotically stable if -λmin(Q) + 2γλmax(P) < 0,
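This special case is easy to evaluate numerically. The sketch below uses illustrative values (the matrix A and Q = I are assumptions, not from the text): it solves the Lyapunov equation for a Hurwitz A and computes the largest perturbation gain γ tolerated by the sufficient condition.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)

# solve_continuous_lyapunov(M, C) solves M X + X M^T = C; taking M = A^T
# returns the P satisfying A^T P + P A = -Q, i.e. P A + A^T P = -Q
P = solve_continuous_lyapunov(A.T, -Q)

lam_min_Q = np.linalg.eigvalsh(Q).min()
lam_max_P = np.linalg.eigvalsh(P).max()
# any perturbation with ||g(x,t)|| <= gamma*||x|| and gamma < gamma_bound
# leaves the origin exponentially stable
gamma_bound = lam_min_Q / (2 * lam_max_P)
```

The quantity `gamma_bound` is exactly the right-hand side of the bound derived next.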

that is,

    λmin(Q) > 2γλmax(P)

or, equivalently,

    γ < λmin(Q) / (2λmax(P)).   (4.35)

From equation (4.35) we see that the origin is exponentially stable provided that the value of γ satisfies the bound (4.35).

4.7 Converse Theorems

All the stability theorems seen so far provide sufficient conditions for stability. Indeed, all of these theorems read more or less as follows: if there exists a function W(x, t) that satisfies ..., then the equilibrium point xe satisfies one of the stability definitions. None of these theorems, however, provides a systematic way of finding the Lyapunov function. Thus, unless one can "guess" a suitable Lyapunov function, one can never conclude anything about the stability properties of the equilibrium point. An important question clearly arises here, namely: suppose that an equilibrium point satisfies one of the forms of stability; does this imply the existence of a Lyapunov function that satisfies the conditions of the corresponding stability theorem? If so, then the search for the suitable Lyapunov function is not in vain. In fact, the question above can be answered affirmatively, and the theorems related to this question are known as converse theorems. The main shortcoming of these theorems is that their proof invariably relies on the construction of a Lyapunov function that is based on knowledge of the state trajectory, and thus on the solution of the nonlinear differential equation. This fact makes the converse theorems not very useful in applications since, as discussed earlier, few nonlinear equations can be solved analytically, and the whole point of the Lyapunov theory is to provide an answer to the stability analysis without solving the differential equation. Nevertheless, the theorems have at least conceptual value, and we now state them for completeness.

Theorem 4.9 Consider the dynamical system

    ẋ = f(x, t).   (4.36)

Assume that f satisfies a Lipschitz continuity condition in D ⊂ R^n, and that 0 in D is an equilibrium state. Then we have:

- If the equilibrium is uniformly stable, then there exists a function W(., .) : D x [0, oo) -> R that satisfies the conditions of Theorem 4.1.
few nonlinear equation can be solved analytically. In all cases. then there exists a function W(-. however.9 Consider the dynamical system i = f (x.35) From equation (4. and that 0 E D is an equilibrium state.

Proof: The proof is omitted. then there exists a function W(.t) (4....3.37) where k E Z+...« E . In this section we consider discrete-time systems of the form x(k + 1) = f(x(k)..k) (4.. W x(k) Figure 4. then there exists a function W(.......JR that satisfies the conditions of Theorem 4. ) : D x [0.... If the equilibrium is globally uniformly stable. oo) -4 ]R that satisfies the conditions of Theorem 4. .......126 CHAPTER 4. (b) discrete-time system Ed- e If the equilibrium is uniformly asymptotically stable. is a continuous variable).. ) D x [0.e.. LYAPUNOV STABILITY H. systems defined by a vector differential equation of the form i = f(x.8 Discrete-Time Systems So far our attention has been limited to the stability of continuous-time systems.1: (a) Continuous-time system E. and f : ]R" x Z -+ 1R".. and study the stability of these systems using tools analogous to those encountered for continuous-time systems..... however. NONAUTONOMOUS SYSTEMS (a) u(t) E u(k) (b) .... oo) ..36) where t E 1R (i. that is. Before doing so.. 4..2. or [95] for details.... we digress momentarily to briefly discuss discretization of continuous-time nonlinear plants. x(k) E 1R". [41]. See References [88].

4. Clearly. seen as a mapping from the input u to the state x. Case (1): LTI Plants. as shown in Figure 4. then the output x(k) predicted by the model Ed corresponds to the samples of the continuous-time state x(t) at the same sampling instances. This is an idealized process in which. given by u(t) = u(k) for kT < t < (k + 1)T.e.1(a)) we seek a new system Ed : u(k) ---> x(k) (a dynamical system that maps the discrete-time signal u(k) into the discrete-time state x(k).2. into the continuous-time signal u(t). given u the mapping E : u -a x determines the trajectory x(t) by solving the differential equation x = f (x. we have used continuous and dotted lines in Figure 4. The system Ed is of the form (4. H represents a hold device that converts the discrete-time signal. given a continuous-time system E : u . We assume the an ideal conversion process takes place in which H "holds" the value of the input sequence between samples (Figure 4. given by x(k) = x(kT). E. this block is an idealization of the operation of an analog-to-digital converter. as shown in Figure 4. DISCRETIZATION 127 4. i. u). If u(k) is constructed by taking "samples" every T seconds of the continuous-time signal u. Clearly this block is implemented by using a digital-to-analog converter.3). Finding Ed is fairly straightforward when the system (4. For easy visualization. that is. If the plant is LTI. where each block in the figure represents the following: S represents a sampler.1(b)).u)=Ax+Bu (4.36) is linear time-invariant (LTI). we use the scheme shown in Figure 4.9 Discretization Often times discrete-time systems originate by "sampling" of a continuous-time system.37). and S.9.38) finding the discrete-time model of the plant reduces to solving the differential equation with initial condition x(kT) at time t(O) = kT and the input is the constant signal u(kT). or sequence u(k).2 to represent continuous and discrete-time signal. E represents the plant. 
which consists of the cascade combination of the blocks H. respectively. but not in the present case.x (a dynamical system that maps the input u into the state x. a device that reads the continues variable x every T seconds and produces the discrete-time output x(k). To develop such a model. The . Both systems E and Ed are related in the following way. then we have that e=f(x.

Figure 4.2: Discrete-time system Σd.

Figure 4.3: Action of the hold device H.

The differential equation (4.38) has the well-known solution

    x(t) = e^{A(t - t0)} x(t0) + ∫_{t0}^{t} e^{A(t - τ)} B u(τ) dτ.

In our case we can make direct use of this solution with

    t0 = kT,  x(t0) = x(k),  t = (k + 1)T,  x(t) = x[(k + 1)T],
    u(τ) = u(k) constant for kT <= τ < (k + 1)T.

Therefore

    x(k + 1) = x[(k + 1)T] = e^{AT} x(kT) + ∫_{kT}^{(k+1)T} e^{A[(k+1)T - τ]} dτ B u(kT)
             = e^{AT} x(k) + ( ∫_0^T e^{Aτ} dτ ) B u(k).

Case (2): Nonlinear Plants. In the more general case of a nonlinear plant Σ given by (4.36), the exact model is usually impossible to find. The reason, of course, is that finding the exact solution requires solving the nonlinear differential equation (4.36), which is very difficult if not impossible. Given this fact, one is usually forced to use an approximate model. There are several methods to construct approximate models, with different degrees of accuracy and complexity. The simplest and most popular is the so-called Euler approximation, which consists of acknowledging the fact that, if T is small, then

    ẋ = dx/dt = lim_{ΔT -> 0} [x(t + ΔT) - x(t)] / ΔT ≈ [x(t + T) - x(t)] / T.

Thus ẋ = f(x, u) can be approximated by

    x(k + 1) ≈ x(k) + T f[x(k), u(k)].
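For an LTI plant both models are easy to compute and compare. The sketch below uses illustrative values (the matrices A, B and the period T are assumptions, not from the text): it forms the exact sampled model x(k+1) = Ad x(k) + Bd u(k) and the Euler model, and checks that they agree for a small sampling period.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.01

# exact zero-order-hold discretization; since A is invertible here,
# the input integral evaluates to  ∫_0^T e^{Aτ} dτ = A^{-1}(e^{AT} - I)
Ad = expm(A * T)
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B

# Euler approximation: x(k+1) ≈ x(k) + T*(A x(k) + B u(k))
Ad_euler = np.eye(2) + T * A
Bd_euler = T * B
```

The discrepancy between the two models is of order T^2, which is why the Euler model is acceptable only for small T.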

4.10 Stability of Discrete-Time Systems

In this section we consider discrete-time systems of the form (4.37); that is, we assume that

    x(k + 1) = f(x(k), k)

where k in Z+, x(k) in R^n, and f : R^n x Z -> R^n. It is refreshing to notice that, unlike the continuous-time case, this equation always has exactly one solution corresponding to a given initial condition x0. As in the continuous-time case, we consider the stability of an equilibrium point xe, which is defined exactly as in the continuous-time case.

4.10.1 Definitions

In Section 4.2 we introduced several stability definitions for continuous-time systems. For completeness, we now restate these definitions for discrete-time systems.

Definition 4.8 The equilibrium point x = 0 of the system (4.37) is said to be:

- Stable at k0 if given ε > 0, there exists δ = δ(ε, k0) > 0 : ||x0|| < δ => ||x(k)|| < ε, for all k >= k0 >= 0.   (4.39)

- Uniformly stable if given any ε > 0, there exists δ = δ(ε) > 0 : ||x0|| < δ => ||x(k)|| < ε, for all k >= k0 >= 0.   (4.40)

- Convergent at k0 if there exists δ1 = δ1(k0) > 0 : ||x0|| < δ1 => lim_{k -> oo} x(k) = 0.   (4.41)

- Uniformly convergent if for any given ε1 > 0, there exists M = M(ε1) such that ||x0|| < δ1 => ||x(k)|| < ε1, for all k >= k0 + M.

- Asymptotically stable at k0 if it is both stable and convergent.

- Uniformly asymptotically stable if it is both uniformly stable and uniformly convergent.

- Unstable if it is not stable.

4.10.2 Discrete-Time Positive Definite Functions

Time-independent positive definite functions are uninfluenced by whether the system is continuous- or discrete-time. Time-dependent positive functions can be defined as follows.

Definition 4.9 A function W : R^n x Z -> R is said to be:

- Positive semidefinite in D ⊂ R^n if (i) W(0, k) = 0, for all k >= 0, and (ii) W(x, k) >= 0, for all x in D and all k.

- Positive definite in D ⊂ R^n if (i) W(0, k) = 0, for all k >= 0, and (ii) there exists a time-invariant positive definite function V1(x) such that V1(x) <= W(x, k), for all x in D and all k. It then follows by Lemma 3.1 that W(x, k) is positive definite in Br ⊂ D if and only if there exists a class K function α1(.) such that α1(||x||) <= W(x, k), for all x in Br and all k.

- Decrescent in D ⊂ R^n if there exists a time-invariant positive definite function V2(x) such that W(x, k) <= V2(x), for all x in D. It then follows by Lemma 3.1 that W(x, k) is decrescent in Br ⊂ D if and only if there exists a class K function α2(.) such that W(x, k) <= α2(||x||), for all x in Br.

- Radially unbounded if W(x, k) -> oo as ||x|| -> oo, uniformly on k. This means that given M > 0, there exists N > 0 such that W(x, k) > M provided that ||x|| > N.

4.10.3 Stability Theorems

Definition 4.10 The rate of change, ΔW(x, k), of the function W(x, k) along the trajectories of the system (4.37) is defined by

    ΔW(x, k) = W(x(k + 1), k + 1) - W(x(k), k).

With this definition, we can now state and prove several stability theorems for discrete-time systems. Roughly speaking, all the theorems studied in Chapters 3 and 4 can be restated for discrete-time systems, with ΔW(x, k) replacing Ẇ(x, t). The proofs are nearly identical to their continuous-time counterparts and are omitted.

Theorem 4.10 (Lyapunov Stability Theorem for Discrete-Time Systems) If in a neighborhood D of the equilibrium state x = 0 of the system (4.37) there exists a function W(., .) : D x Z+ -> R such that

(i) W(x, k) is positive definite, and
(ii) the rate of change ΔW(x, k) along any solution of (4.37) is negative semidefinite in D,

then the equilibrium state is stable. Moreover, if W(x, k) is also decrescent, then the origin is uniformly stable.

Theorem 4.11 (Lyapunov Uniform Asymptotic Stability for Discrete-Time Systems) If in a neighborhood D of the equilibrium state x = 0 of the system (4.37) there exists a function W(., .) : D x Z+ -> R such that

(i) W(x, k) is (a) positive definite and (b) decrescent, and
(ii) the rate of change ΔW(x, k) is negative definite in D,

then the equilibrium state is uniformly asymptotically stable.

Example 4.7 Consider the following discrete-time system:

    x1(k + 1) = x1(k) + x2(k)   (4.42)
    x2(k + 1) = a x1^3(k) + (1/2) x2(k).   (4.43)

To study the stability of the origin, we consider the (time-independent) Lyapunov function candidate

    V(x) = (1/2) x1^2(k) + 2 x1(k) x2(k) + 4 x2^2(k)

which can be easily seen to be positive definite. We need to find ΔV(x) = V(x(k + 1)) - V(x(k)). In this case we have

    V(x(k + 1)) = (1/2)[x1(k) + x2(k)]^2 + 2[x1(k) + x2(k)][a x1^3(k) + (1/2) x2(k)]
                  + 4[a x1^3(k) + (1/2) x2(k)]^2
    V(x(k)) = (1/2) x1^2 + 2 x1 x2 + 4 x2^2.

From here, after some trivial manipulations, we conclude that

    ΔV(x) = V(x(k + 1)) - V(x(k)) = -(3/2) x2^2 + 2a x1^4 + 6a x1^3 x2 + 4a^2 x1^6.

Therefore we have the following cases of interest:

- a < 0. In this case, ΔV(x) is negative definite in a neighborhood of the origin, and the origin is locally asymptotically stable (uniformly, since the system is autonomous).

- a = 0. In this case

    ΔV(x) = V(x(k + 1)) - V(x(k)) = -(3/2) x2^2 <= 0

and thus the origin is stable.

4.11 Exercises

(4.1) Prove Lemma 4.1.

(4.2) Prove Lemma 4.2.

(4.3) Prove Theorem 4.4.

(4.4) Characterize each of the following functions W : R^2 x R -> R as: (a) positive definite or not; (b) decrescent or not; (c) radially unbounded or not.

(i) W1(x, t) = (x1^2 + x2^2)
(ii) W2(x, t) = (x1^2 + x2^2) e^t
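The algebra in this example can be double-checked numerically. The sketch below assumes the system x1(k+1) = x1(k) + x2(k), x2(k+1) = a x1^3(k) + x2(k)/2 with V(x) = x1^2/2 + 2 x1 x2 + 4 x2^2 (as reconstructed here; the extracted text is ambiguous about the fractions), and verifies the closed-form ΔV at random points near the origin for the case a < 0.

```python
import numpy as np

a = -1.0   # the case a < 0

def step(x):
    # one iteration of the discrete-time system (4.42)-(4.43)
    x1, x2 = x
    return np.array([x1 + x2, a * x1**3 + 0.5 * x2])

def V(x):
    x1, x2 = x
    return 0.5 * x1**2 + 2 * x1 * x2 + 4 * x2**2

def dV(x):
    # closed-form rate of change:
    # ΔV = -(3/2) x2^2 + 2a x1^4 + 6a x1^3 x2 + 4a^2 x1^6
    x1, x2 = x
    return -1.5 * x2**2 + 2*a*x1**4 + 6*a*x1**3*x2 + 4*a**2*x1**6

rng = np.random.default_rng(0)
pts = rng.uniform(-0.3, 0.3, size=(100, 2))   # a neighborhood of the origin
```

Evaluating V(step(x)) - V(x) at each sample and comparing with dV(x) confirms the expansion, and the sampled values of ΔV are nonpositive in this neighborhood, as the case analysis predicts.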

(iii) W3(x, t) = (x1^2 + x2^2) e^{-t}
(iv) W4(x, t) = (x1^2 + x2^2)(1 + e^{-t})
(v) W5(x, t) = (x1^2 + x2^2) cos^2 ωt
(vi) W6(x, t) = (x1^2 + x2^2)(1 + cos^2 ωt)
(vii) W7(x, t) = (x1^2 + x2^2)/(x1^2 + 1)

(4.5) It is known that a given dynamical system has an equilibrium point at the origin. For this system, a function W(x, t) has been proposed and its derivative Ẇ(x, t) along the trajectories has been computed. Assuming that W and Ẇ are as given below, you are asked to classify the origin, in each case, as: (a) stable; (b) locally uniformly asymptotically stable; and/or (c) globally uniformly asymptotically stable. Explain your answer in each case.

(i) W1(x, t) = (x1^2 + x2^2), Ẇ1(x, t) = -x1^2
(ii) W2(x, t) = (x1^2 + x2^2), Ẇ2(x, t) = -(x1^2 + x2^2)(1 + sin^2 t)
(iii) W3(x, t) = (x1^2 + x2^2), Ẇ3(x, t) = -x1^2 e^{-t}
(iv) W4(x, t) = (x1^2 + x2^2)(1 + cos^2 ωt), Ẇ4(x, t) = -(x1^2 + x2^2)(1 + sin^2 t)
(v) W5(x, t) = (x1^2 + x2^2) e^{-t}, Ẇ5(x, t) = -(x1^2 + x2^2) e^{-t}
(vi) W6(x, t) = (x1^2 + x2^2)(1 + e^{-t}), Ẇ6(x, t) = -(x1^2 + x2^2)(1 + e^{-t})
(vii) W7(x, t) = (x1^2 + x2^2)(1 + cos^2 ωt), Ẇ7(x, t) = -(x1^2 + x2^2) e^t
(viii) W8(x, t) = (x1^2 + x2^2), Ẇ8(x, t) = -(x1^2 + x2^2)
(ix) W9(x, t) = (x1^2 + x2^2) e^t, Ẇ9(x, t) = -(x1^2 + x2^2) e^t
(x) W10(x, t) = (x1^2 + x2^2) cos^2 ωt, Ẇ10(x, t) = -(x1^2 + x2^2)
(xi) W11(x, t) = (x1^2 + x2^2)(1 + e^{-t}), Ẇ11(x, t) = -(x1^2 + x2^2)

(4.6) Given the following system, study the stability of the equilibrium point at the origin:

    ẋ1 = -x1 - x1 x2 cos^2 t
    ẋ2 = -x2 - x1 x2 sin^2 t

(4.7) Prove Theorem 4.10.

(4.8) Prove Theorem 4.11.

(4.9) Given the following system, study the stability of the equilibrium point at the origin:

    ẋ1 = x2 + x1^2 + x2^2
    ẋ2 = -2x1 - 3x2

Hint: Notice that the given dynamical equations can be expressed in the form

    ẋ = Ax + g(x).

Notes and References

Good sources for the material of this chapter include References [27], [41], [68], [88], and [95]. Classical references on the subject are Hahn [27] and Krasovskii [44]. Section 4.5 is based on Vidyasagar [88] and Willems [95]. Section 4.6 is based on Khalil [41]. Perturbation analysis has been a subject of much research in nonlinear control.


Chapter 5

Feedback Systems

So far, our attention has been restricted to open-loop systems. In a typical control problem, however, our interest is usually in the analysis and design of feedback control systems. Feedback systems can be analyzed using the same tools elaborated so far after incorporating the effect of the input u on the system dynamics. To start, consider the system

    ẋ = f(x, u)   (5.1)

and assume that the origin x = 0 is an equilibrium point of the unforced system ẋ = f(x, 0). Now suppose that u is obtained using a state feedback law of the form

    u = φ(x).   (5.2)

To study the stability of this system, we substitute (5.2) into (5.1) to obtain

    ẋ = f(x, φ(x)).   (5.3)

According to the stability results in Chapter 3, if the origin of the closed-loop system (5.3) is asymptotically stable, then we can find a positive definite function V whose time derivative along the trajectories of (5.3) is negative definite in a neighborhood of the origin. It seems clear from this discussion that the stability of feedback systems can be studied using the same tools discussed in Chapters 3 and 4. In this chapter we look at several examples of feedback systems and introduce a simple design technique for stabilization known as backstepping.

137

5.1 Basic Feedback Stabilization

In this section we look at several examples of stabilization via state feedback. These examples will provide valuable insight into the backstepping design of the next section.

Example 5.1 Consider the first-order system given by

    ẋ = ax^2 + u.   (5.4)

We look for a state feedback law of the form u = φ(x) that makes the equilibrium point at the origin "asymptotically stable." One rather obvious way to approach the problem is to choose a control law u that "cancels" the nonlinear term ax^2. Indeed, setting

    u = -ax^2 - x   (5.5)

and substituting (5.5) into (5.4), we obtain

    ẋ = -x

which is linear and globally asymptotically stable, as desired.

We mention in passing that this is a simple example of a technique known as feedback linearization. While the idea works quite well in our example, it does come at a certain price. We notice two things:

(i) The design is based on exact cancelation of the nonlinear term ax^2. This is undesirable since in practice system parameters such as "a" in our example are never known exactly. In a more realistic scenario, what we would obtain at the end of the design process with our control u is a system of the form

    ẋ = (a - â)x^2 - x

where a represents the true system parameter and â the actual value used in the feedback law. In this case the true system is also asymptotically stable, but only locally, because of the presence of the term (a - â)x^2.

(ii) Even assuming perfect modeling, it may not be a good idea to follow this approach and cancel "all" nonlinear terms that appear in the dynamical system. The reason is that nonlinearities in the dynamical equation are not necessarily bad. To see this, consider the following example.
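The cancelation in this example can be seen in a quick simulation (a sketch with assumed values, not from the text): with the feedback u = -a x^2 - x, the closed loop reduces exactly to ẋ = -x, so the state decays regardless of the size of the canceled nonlinearity.

```python
a = 2.0                    # plant parameter (illustrative; exact knowledge assumed)

def u(x):
    # feedback-linearizing law (5.5): cancel a*x^2, add the stabilizing -x
    return -a * x**2 - x

T, N = 1e-3, 5_000         # Euler integration up to t = 5
x = 0.5
for _ in range(N):
    x += T * (a * x**2 + u(x))   # closed loop is exactly xdot = -x
```

With an imperfect estimate of a, the cancelation would leave a residual (a - â)x^2 term, which is exactly the robustness concern raised in point (i).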

Example 5.2 Consider the system given by

    ẋ = ax^2 - x^3 + u.

Following the approach in Example 5.1, we can set

    u = u1 = -ax^2 + x^3 - x

which leads to

    ẋ = -x.

Once again, u1 is the feedback linearization law that renders a globally asymptotically stable linear system. Let's now examine in more detail how this result was accomplished. Notice first that our control law u1 was chosen to cancel both nonlinear terms ax^2 and -x^3. These two terms are, however, quite different:

- The presence of terms of the form x^i with i even in a dynamical equation is never desirable. Indeed, even powers of x do not discriminate the sign of the variable x, and thus have a destabilizing effect that should be avoided whenever possible.

- Terms of the form -x^j with j odd, on the other hand, greatly contribute to the feedback law by providing additional damping for large values of x and are usually beneficial.

Notice that the cancelation of the term -x^3 was achieved by incorporating the term x^3 in the feedback law; thus the presence of the term x^3 in the input u is not desirable. The presence of this variable in the control input u can lead to very large values of the input. At the same time, the physical characteristics of the actuators may place limits on the amplitude of this function. Not happy with this solution, we look for an alternative. To find an alternate solution, we proceed as follows. Given the system

    ẋ = f(x, u),  x in R, u in R,  f(0, 0) = 0

we proceed to find a feedback law of the form u = φ(x) such that the feedback system

    ẋ = f(x, φ(x))   (5.6)

has an asymptotically stable equilibrium point at the origin. To show that this is the case, we will construct a function V1 = V1(x) : D -> R satisfying

(i) V1(0) = 0, and V1(x) is positive definite in D - {0}.

(ii) V̇1(x) is negative definite along the solutions of (5.6). More precisely, there exists a positive definite function V2(x) : D -> R+ such that

    V̇1(x) = ∂V1/∂x f(x, φ(x)) <= -V2(x),  for all x in D.

Moreover, if D = R^n and V1 is radially unbounded, then the origin is globally asymptotically stable by Theorems 3.2 and 3.3.

Example 5.3 Consider again the system of Example 5.2:

    ẋ = ax^2 - x^3 + u.

Defining V1(x) = (1/2)x^2 and computing V̇1, we obtain

    V̇1 = x f(x, u) = ax^3 - x^4 + xu.

In Example 5.2 we chose u = u1 = -ax^2 + x^3 - x. With this input function u, we have that

    V̇1 = ax^3 - x^4 + x(-ax^2 + x^3 - x) = -x^2.

It then follows that this control law satisfies requirement (ii) above with V2(x) = x^2. Not happy with this solution, we modify the function V2(x) as follows: setting V2(x) = x^4 + x^2, we must have

    V̇1 = ax^3 - x^4 + xu <= -x^4 - x^2

that is,

    ax^3 + xu <= -x^2,  or  xu <= -x^2 - ax^3 = -x(x + ax^2)

which can be accomplished by choosing

    u = -x - ax^2.

With this input function, we obtain the following feedback system:

    ẋ = ax^2 - x^3 + u = -x - x^3

which is asymptotically stable. The result is global since V1 is radially unbounded and D = R.
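The point of keeping the -x^3 damping term is visible in simulation. The sketch below (assumed values, not from the text) uses the milder law u = -x - a x^2, so the closed loop becomes ẋ = -x - x^3: the nonlinearity now helps, instead of being canceled at the cost of an x^3 term in the actuator signal.

```python
a = 2.0                    # illustrative parameter

def u(x):
    # the alternate law from Example 5.3: no x^3 term in the input
    return -x - a * x**2

T, N = 1e-3, 5_000         # Euler integration up to t = 5
x = 1.0
for _ in range(N):
    x += T * (a * x**2 - x**3 + u(x))   # closed loop: xdot = -x - x^3
```

For large |x| the retained -x^3 term makes the decay even faster than that of ẋ = -x, while the control input stays polynomial of degree two.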

5.2 Integrator Backstepping

Guided by the examples of the previous section, we now explore a recursive design technique known as backstepping. To start with, we consider a system of the form

    ẋ = f(x) + g(x)ξ   (5.7)
    ξ̇ = u   (5.8)

where x in R^n, ξ in R, and [x, ξ]^T in R^{n+1} is the state of the system (5.7)-(5.8). The function u in R is the control input, and the functions f, g : D -> R^n are assumed to be smooth. The importance of this structure is that the system (5.7)-(5.8) can be considered as a cascade connection of the subsystems (5.7) and (5.8); that is, the subsystem (5.7), for which a known stabilizing law already exists, augmented with a pure integrator. We will make the following assumptions (see Figure 5.1(a)):

(i) The function f : R^n -> R^n satisfies f(0) = 0. Thus, the origin is an equilibrium point of the subsystem ẋ = f(x).

(ii) Consider the subsystem (5.7) (Figure 5.1(b)). Viewing the state variable ξ as an independent "input" for this subsystem, we assume that there exists a state feedback control law of the form

    ξ = φ(x),  φ(0) = 0

and a Lyapunov function V1 : D -> R+ such that

    V̇1(x) = ∂V1/∂x [f(x) + g(x)φ(x)] <= -Va(x) <= 0,  for all x in D

where Va : D -> R+ is a positive semidefinite function in D.

According to these assumptions, we now endeavor to find a state feedback law to asymptotically stabilize the system (5.7)-(5.8). More general classes of systems are considered in the next section. To this end we proceed as follows. We start by adding and subtracting g(x)φ(x) to the subsystem (5.7). We obtain the equivalent system

    ẋ = f(x) + g(x)φ(x) + g(x)[ξ - φ(x)]   (5.9)
    ξ̇ = u.   (5.10)

Figure 5.1: (a) The system (5.7)-(5.8); (b) modified system after introducing -φ(x); (c) "backstepping" of -φ(x); (d) the final system after the change of variables.

Define

    z = ξ - φ(x)   (5.11)
    ż = ξ̇ - φ̇(x) = u - φ̇   (5.12)

where

    φ̇ = ∂φ/∂x ẋ = ∂φ/∂x [f(x) + g(x)ξ].   (5.13)

This change of variables can be seen as "backstepping" -φ(x) through the integrator, as shown in Figure 5.1(c). Defining

    v = ż   (5.14)

the resulting system is

    ẋ = f(x) + g(x)φ(x) + g(x)z   (5.15)
    ż = v.   (5.16)

These two steps are important for the following reasons:

(i) By construction, the system (5.15)-(5.16) is equivalent to the system (5.7)-(5.8).

(ii) The system (5.15)-(5.16) is, once again, the cascade connection of two subsystems, as shown in Figure 5.1(d). However, the subsystem (5.15) incorporates the stabilizing state feedback law ξ = φ(x) and is thus asymptotically stable when the input z is zero. This feature will now be exploited in the design of a stabilizing control law for the overall system (5.15)-(5.16).

To stabilize the system (5.15)-(5.16), consider a Lyapunov function candidate of the form

    V = V(x, z) = V1(x) + (1/2)z^2.   (5.17)

We have that

    V̇ = ∂V1/∂x [f(x) + g(x)φ(x) + g(x)z] + zż
      = ∂V1/∂x [f(x) + g(x)φ(x)] + ∂V1/∂x g(x)z + zv.   (5.18)

We can choose

    v = -∂V1/∂x g(x) - kz,  k > 0.   (5.19)

Thus

    V̇ = ∂V1/∂x [f(x) + g(x)φ(x)] - kz^2 <= -Va(x) - kz^2.

It then follows by (5.19) that the origin x = 0, z = 0 is asymptotically stable. Moreover, since z = ξ - φ(x) and φ(0) = 0 by assumption, the result also implies that the origin of the original system, x = 0, ξ = 0, is asymptotically stable. If all the conditions hold globally and V1 is radially unbounded, then the origin is globally asymptotically stable. Finally, notice that, according to (5.11), (5.12), and (5.13), the stabilizing state feedback law is given by

    u = v + φ̇ = ∂φ/∂x [f(x) + g(x)ξ] - ∂V1/∂x g(x) - k[ξ - φ(x)].   (5.20)

Example 5.4 Consider the following system, which is a modified version of the one in Example 5.3; find a state feedback control law u = φ(x) to stabilize the origin x = 0:

    ẋ1 = ax1^2 - x1^3 + x2   (5.21)
    ẋ2 = u.   (5.22)

Clearly this system is of the form (5.7)-(5.8) with

    x = x1,  ξ = x2,  f(x) = f(x1) = ax1^2 - x1^3,  g(x) = 1.

Step 1: Viewing the "state" x2 as an independent input for the subsystem (5.21), we define

    V1(x1) = (1/2)x1^2
    V̇1(x1) = ax1^3 - x1^4 + x1x2.

Proceeding as in Example 5.3, we require

    V̇1 = ax1^3 - x1^4 + x1x2 <= -Va(x1) def= -(x1^2 + x1^4)
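The composite design can be checked in simulation. The sketch below assumes the system ẋ1 = a x1^2 - x1^3 + x2, ẋ2 = u with the backstepping law u = -(1 + 2a x1)(a x1^2 - x1^3 + x2) - x1 - k(x2 + x1 + a x1^2) (as derived for this example), with illustrative values a = 1, k = 1.

```python
a, k = 1.0, 1.0            # illustrative parameter and gain

def u_backstep(x1, x2):
    # backstepping law for Example 5.4:
    # u = dphi/dx1*(f + g*x2) - dV1/dx1*g - k*(x2 - phi(x1)),
    # with phi(x1) = -x1 - a*x1^2 and V1 = x1^2/2
    return (-(1 + 2 * a * x1) * (a * x1**2 - x1**3 + x2)
            - x1 - k * (x2 + x1 + a * x1**2))

T, N = 1e-3, 20_000        # Euler step; simulate up to t = 20
x1, x2 = 0.5, -0.5
for _ in range(N):
    x1, x2 = (x1 + T * (a * x1**2 - x1**3 + x2),
              x2 + T * u_backstep(x1, x2))
```

Along the trajectory, V = x1^2/2 + (x2 + x1 + a x1^2)^2/2 decreases monotonically (up to discretization error), reflecting V̇ <= -Va(x1) - k z^2.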

which can be accomplished by choosing

    x2 = φ(x1) = -x1 - ax1^2

leading to

    ẋ1 = -x1 - x1^3.

With this control law, the origin is globally asymptotically stable (notice that V1 is radially unbounded).

Step 2: To stabilize the original system (5.21)-(5.22), we make use of the control law (5.20). We have

    u = ∂φ/∂x1 [f(x1) + g(x1)x2] - ∂V1/∂x1 g(x1) - k[x2 - φ(x1)]
      = -(1 + 2ax1)[ax1^2 - x1^3 + x2] - x1 - k[x2 + x1 + ax1^2].

The composite Lyapunov function is

    V = V1 + (1/2)z^2 = (1/2)x1^2 + (1/2)[x2 - φ(x1)]^2 = (1/2)x1^2 + (1/2)[x2 + x1 + ax1^2]^2.

5.3 Backstepping: More General Cases

In the previous section we discussed integrator backstepping for systems with a state of the form [x, ξ]^T, x in R^n, ξ in R. We now look at more general classes of systems.

5.3.1 Chain of Integrators

A simple but useful extension of this case is that of a "chain" of integrators, specifically a system of the form

    ẋ = f(x) + g(x)ξ1
    ξ̇1 = ξ2

    ξ̇2 = ξ3
    ...
    ξ̇_{k-1} = ξk
    ξ̇k = u.

Backstepping design for this class of systems can be approached using successive iterations of the procedure used in the previous section. We first consider the first two "subsystems"

    ẋ = f(x) + g(x)ξ1   (5.24)
    ξ̇1 = ξ2   (5.25)

and assume that ξ1 = φ(x) is a stabilizing control law for the system

    ẋ = f(x) + g(x)φ(x)   (5.26)

with ξ1 considered as an independent input. Moreover, we also assume that V1 is the corresponding Lyapunov function for this subsystem. The second-order system (5.24)-(5.25) can be seen as having the form (5.7)-(5.8), and we can asymptotically stabilize it using the control law (5.20) and associated Lyapunov function V2:

    ξ2 = φ1(x, ξ1) = ∂φ/∂x [f(x) + g(x)ξ1] - ∂V1/∂x g(x) - k1[ξ1 - φ(x)],  k1 > 0
    V2 = V1 + (1/2)[ξ1 - φ(x)]^2.

We now iterate this process and view the third-order system given by the first three equations,

    ẋ = f(x) + g(x)ξ1   (5.27)
    ξ̇1 = ξ2   (5.28)
    ξ̇2 = ξ3,

as a more general version of (5.7)-(5.8) with

    x̄ = [x, ξ1]^T,  f̄ = [f(x) + g(x)ξ1, 0]^T,  ḡ = [0, 1]^T.

Applying the backstepping algorithm once more, we obtain the stabilizing control law:

    ξ3 = φ2(x, ξ1, ξ2)
       = ∂φ1/∂x [f(x) + g(x)ξ1] + ∂φ1/∂ξ1 ξ2 - ∂V2/∂ξ1 - k[ξ2 - φ1(x, ξ1)],  k > 0

with composite Lyapunov function

    V = V1 + (1/2)[ξ1 - φ(x)]^2 + (1/2)[ξ2 - φ1(x, ξ1)]^2.

We finally point out that while, for simplicity, we have focused attention on third-order systems, the procedure for nth-order systems is entirely analogous.

Example 5.5 Consider the following system, which is a modified version of the one in Example 5.4:

    ẋ1 = ax1^2 + x2
    ẋ2 = x3
    ẋ3 = u.

We proceed to stabilize this system using the backstepping approach. To start, we consider the first equation, treating x2 as an independent "input," and proceed to find a state feedback law x2 = φ(x1) that stabilizes this subsystem. In other words, we consider the system

    ẋ1 = ax1^2 + φ(x1)

and find a stabilizing law φ(x1). Using the Lyapunov function V1 = (1/2)x1^2, it is immediate that

    φ(x1) = -x1 - ax1^2

is one such law. We can now proceed to the first step of backstepping and consider the first two subsystems. Using the result in the previous section, we propose the stabilizing law

    φ1(x1, x2) (= x3) = ∂φ/∂x1 [f(x1) + g(x1)x2] - ∂V1/∂x1 g(x1) - k[x2 - φ(x1)],  k > 0

with associated Lyapunov function

    V2 = V1 + (1/2)z^2 = V1 + (1/2)[x2 - φ(x1)]^2 = V1 + (1/2)[x2 + x1 + ax1^2]^2.

In our case

∂φ(x₁)/∂x₁ = −(1 + 2a x₁)

so that

φ₁(x₁, x₂) = −(1 + 2a x₁)[a x₁² + x₂] − x₁ − [x₂ + x₁ + a x₁²]

where we have chosen k = 1. The composite Lyapunov function is

V₂ = V₁ + ½[x₂ − φ(x₁)]² = V₁ + ½[x₂ + x₁ + a x₁²]².

We now move on to the final step, in which we consider the third-order system as a special case of (5.7)-(5.8) with

x̄ = [x₁; x₂],   f̄ = [f(x₁) + g(x₁)x₂; 0],   ḡ = [0, 1]ᵀ.

From the results in the previous section we have that

u = (∂φ₁(x₁, x₂)/∂x̄)[f̄(x̄) + ḡ x₃] − ∂V₂/∂x₂ − k[x₃ − φ₁(x₁, x₂)],   k > 0

is a stabilizing control law with associated Lyapunov function

V₃ = V₂ + ½[x₃ − φ₁(x₁, x₂)]².
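As a numerical sanity check (this sketch is not part of the original text; the values a = 1, k = 1 and the initial condition are arbitrary test choices), the following Python code implements the full three-step design of Example 5.5 and verifies that the composite Lyapunov function V₃ = ½x₁² + ½z₁² + ½z₂², with z₁ = x₂ − φ(x₁) and z₂ = x₃ − φ₁(x₁, x₂), decays along the closed-loop trajectory.

```python
a, k = 1.0, 1.0          # plant coefficient and backstepping gain (arbitrary test values)

def control(x1, x2, x3):
    """Composite backstepping law for x1' = a*x1^2 + x2, x2' = x3, x3' = u."""
    z1 = x2 + x1 + a * x1 ** 2                      # x2 - phi(x1), phi = -x1 - a*x1^2
    phi1 = -(1 + 2 * a * x1) * (a * x1 ** 2 + x2) - x1 - k * z1
    z2 = x3 - phi1
    # partial derivatives of phi1, used to differentiate it along trajectories
    dphi1_dx1 = (-2 * a * (a * x1 ** 2 + x2) - (1 + 2 * a * x1) * 2 * a * x1
                 - 1 - k * (1 + 2 * a * x1))
    dphi1_dx2 = -(1 + 2 * a * x1) - k
    # u = dphi1/dxbar * xbar' - dV2/dx2 - k*z2, with dV2/dx2 = z1
    return dphi1_dx1 * (a * x1 ** 2 + x2) + dphi1_dx2 * x3 - z1 - k * z2

def V3(x1, x2, x3):
    z1 = x2 + x1 + a * x1 ** 2
    phi1 = -(1 + 2 * a * x1) * (a * x1 ** 2 + x2) - x1 - k * z1
    return 0.5 * (x1 ** 2 + z1 ** 2 + (x3 - phi1) ** 2)

def simulate(x=(0.5, -0.2, 0.1), dt=1e-3, T=10.0):
    x1, x2, x3 = x
    vals = [V3(x1, x2, x3)]
    for _ in range(int(T / dt)):
        u = control(x1, x2, x3)
        x1, x2, x3 = x1 + dt * (a * x1 ** 2 + x2), x2 + dt * x3, x3 + dt * u
        vals.append(V3(x1, x2, x3))
    return (x1, x2, x3), vals

state, vals = simulate()
print(state, vals[-1])
```

Along exact solutions the design gives V̇₃ = −x₁² − k z₁² − k z₂², which equals −2V₃ for k = 1, so V₃ should decay roughly like e^(−2t); the forward-Euler simulation reproduces this to discretization accuracy.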

5.3.2 Strict Feedback Systems

We now consider systems of the form

ẋ = f(x) + g(x)ξ₁
ξ̇₁ = f₁(x, ξ₁) + g₁(x, ξ₁)ξ₂
ξ̇₂ = f₂(x, ξ₁, ξ₂) + g₂(x, ξ₁, ξ₂)ξ₃
⋮
ξ̇_{k−1} = f_{k−1}(x, ξ₁, …, ξ_{k−1}) + g_{k−1}(x, ξ₁, …, ξ_{k−1})ξ_k
ξ̇_k = f_k(x, ξ₁, …, ξ_k) + g_k(x, ξ₁, …, ξ_k)u

where x ∈ ℝⁿ, ξᵢ ∈ ℝ, and f, g, fᵢ, and gᵢ are smooth. Systems of this form are called strict feedback systems because the nonlinearities fᵢ and gᵢ depend only on the variables x, ξ₁, …, ξᵢ, that is, on the variables that are fed back. Strict feedback systems are also called triangular systems.

We begin our discussion considering the special case where the system is of order one (equivalently, k = 1 in the system defined above):

ẋ = f(x) + g(x)ξ                 (5.29)
ξ̇ = f_a(x, ξ) + g_a(x, ξ)u      (5.30)

This system reduces to the integrator backstepping of Section 5.2 in the special case where f_a(x, ξ) = 0 and g_a(x, ξ) = 1. To avoid trivialities we assume that this is not the case. If g_a(x, ξ) ≠ 0 over the domain of interest, then we can define

u = (1/g_a(x, ξ))[u₁ − f_a(x, ξ)].   (5.31)

Substituting (5.31) into (5.30) we obtain the modified system

ẋ = f(x) + g(x)ξ    (5.32)
ξ̇ = u₁              (5.33)

which is of the form (5.7)-(5.8). Assuming that the x subsystem (5.29) satisfies assumptions (i) and (ii) of the backstepping procedure of Section 5.2, we now endeavor to stabilize (5.29)-(5.30). It then follows that, using (5.17), (5.21), and (5.31), the stabilizing control law and associated Lyapunov function are

u = φ(x, ξ) = (1/g_a(x, ξ)) { (∂φ/∂x)[f(x) + g(x)ξ] − (∂V₁/∂x) g(x) − k₁[ξ − φ(x)] − f_a(x, ξ) },   k₁ > 0   (5.34)

V₂ = V₂(x, ξ) = V₁(x) + ½[ξ − φ(x)]².   (5.35)

We now generalize these ideas by moving one step further and considering the system

ẋ = f(x) + g(x)ξ₁
ξ̇₁ = f₁(x, ξ₁) + g₁(x, ξ₁)ξ₂
ξ̇₂ = f₂(x, ξ₁, ξ₂) + g₂(x, ξ₁, ξ₂)u

which can be seen as a special case of (5.29)-(5.30) with

x̄ = [x; ξ₁],   ξ = ξ₂,   f̄ = [f(x) + g(x)ξ₁; f₁],   ḡ = [0; g₁],   f̄_a = f₂,   ḡ_a = g₂.
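The input transformation (5.31) is simple enough to state as code. The following minimal sketch (the helper name is mine, not the book's) cancels f_a and divides by g_a, using f_a = x + ξ and g_a = 1 + ξ² as test data:

```python
def cancel_affine_input(fa, ga):
    """Return a wrapper turning the xi-equation xi' = fa + ga*u into xi' = u1.

    fa, ga: callables of (x, xi); ga must be nonzero on the domain of interest.
    """
    def u(x, xi, u1):
        g = ga(x, xi)
        assert g != 0, "g_a must not vanish on the domain of interest"
        return (u1 - fa(x, xi)) / g
    return u

# test data: xi' = (x + xi) + (1 + xi**2) * u
u = cancel_affine_input(lambda x, xi: x + xi, lambda x, xi: 1 + xi ** 2)

# with this u, xi' = fa + ga*u reduces to xi' = u1 exactly:
x, xi, u1 = 0.3, -0.7, 2.0
xi_dot = (x + xi) + (1 + xi ** 2) * u(x, xi, u1)
print(xi_dot)   # equals u1
```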

With these definitions, and using the control law and associated Lyapunov function (5.34)-(5.35), we have that a stabilizing control law and associated Lyapunov function for this system are as follows:

φ₂(x, ξ₁, ξ₂) = (1/g₂) { (∂φ₁/∂x̄)[f̄(x̄) + ḡ(x̄)ξ₂] − (∂V₂/∂ξ₁) g₁ − k₂[ξ₂ − φ₁] − f₂ },   k₂ > 0   (5.36)

V₃(x, ξ₁, ξ₂) = V₂(x, ξ₁) + ½[ξ₂ − φ₁(x, ξ₁)]².   (5.37)

The general case can be solved iterating this process.

Example 5.6 Consider the following system:

ẋ₁ = a x₁² − x₁³ + x₂
ẋ₂ = x₁ + x₂ + (1 + x₂²)u

We begin by stabilizing the x₁ subsystem. Using the Lyapunov function candidate V₁ = ½x₁², we have that

V̇₁ = x₁[a x₁² − x₁³ + x₂] = a x₁³ − x₁⁴ + x₁x₂.

Thus the control law x₂ = φ(x₁) = −(x₁ + a x₁²) results in

V̇₁ = −(x₁² + x₁⁴)

which shows that the x₁ subsystem is asymptotically stable. It then follows by (5.34)-(5.35) that a stabilizing control law and associated Lyapunov function for the second-order system are given by

φ₁(x₁, x₂) = (1/(1 + x₂²)) { −(1 + 2a x₁)[a x₁² − x₁³ + x₂] − x₁ − k₁[x₂ + x₁ + a x₁²] − (x₁ + x₂) },   k₁ > 0

V₂ = ½x₁² + ½[x₁ + x₂ + a x₁²]².
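A numerical check of Example 5.6 (not from the text; a = 1, k₁ = 1 and the initial state are arbitrary choices): with the control law φ₁ above, the closed loop satisfies V̇₂ = −x₁² − x₁⁴ − k₁z² with z = x₂ + x₁ + a x₁², so the state should converge to the origin.

```python
a, k1 = 1.0, 1.0          # arbitrary test values for the plant coefficient and gain

def u_law(x1, x2):
    z = x2 + x1 + a * x1 ** 2                                 # x2 - phi(x1)
    num = (-(1 + 2 * a * x1) * (a * x1 ** 2 - x1 ** 3 + x2)   # dphi/dx1 * x1dot
           - x1                                               # -(dV1/dx1)*g
           - k1 * z                                           # -k1*[x2 - phi]
           - (x1 + x2))                                       # -f_a
    return num / (1.0 + x2 ** 2)                              # divide by g_a

def V2(x1, x2):
    z = x2 + x1 + a * x1 ** 2
    return 0.5 * (x1 ** 2 + z ** 2)

x1, x2, dt = 0.8, -0.5, 1e-3
v0 = V2(x1, x2)
for _ in range(int(10.0 / dt)):
    u = u_law(x1, x2)
    x1, x2 = (x1 + dt * (a * x1 ** 2 - x1 ** 3 + x2),
              x2 + dt * (x1 + x2 + (1.0 + x2 ** 2) * u))
print(x1, x2, V2(x1, x2))
```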

Figure 5.2: Current-driven magnetic suspension system.

5.4 Example

Consider the magnetic suspension system of Section 1.9, but assume that, to simplify matters, the electromagnet is driven by a current source I, as shown in Figure 5.2. Notice that I represents the direct current flowing through the electromagnet, and therefore negative values of the input cannot occur. The equation of motion of the ball remains the same as (1.31):

m ÿ = −k ẏ + m g − λμI² / (2(1 + μy)²).

Defining states x₁ = y and x₂ = ẏ, we obtain the following state space realization:

ẋ₁ = x₂                                             (5.38)
ẋ₂ = g − (k/m)x₂ − λμI² / (2m(1 + μx₁)²)            (5.39)

A quick look at this model reveals that it is "almost" in strict feedback form. It is not in the proper form of Section 5.3.2 because of the square term in I². In this case, however, we can ignore this matter and proceed with the design without change. We are interested in a control law that maintains the ball at an arbitrary position y = y₀. We can easily obtain the current necessary to achieve this objective. Setting x₁ = y₀

along with ẋ₁ = ẋ₂ = 0 in equations (5.38)-(5.39), we obtain

I₀ = (1 + μy₀) √(2mg/(λμ)).

It is straightforward to show that this equilibrium point is unstable, and so we look for a state feedback control law to stabilize the closed loop around the equilibrium point. We start by applying a coordinate transformation to translate the equilibrium point x̄ = (y₀, 0)ᵀ to the origin. To this end we define new coordinates:

x̃₁ = x₁ − y₀,   x̃₂ = x₂,   u = I² − 2mg(1 + μy₀)²/(λμ).

In the new coordinates the model takes the form

x̃̇₁ = x̃₂                                                                          (5.40)
x̃̇₂ = g − (k/m)x̃₂ − g(1 + μy₀)²/[1 + μ(x̃₁ + y₀)]² − λμu/(2m[1 + μ(x̃₁ + y₀)]²)   (5.41)

which has an equilibrium point at the origin with u = 0. The new model is in the form (5.29)-(5.30) with x = x̃₁, ξ = x̃₂, f(x) = 0, g(x) = 1, and

f_a = g − (k/m)x̃₂ − g(1 + μy₀)²/[1 + μ(x̃₁ + y₀)]²,   g_a = −λμ/(2m[1 + μ(x̃₁ + y₀)]²).

Step 1: We begin our design by stabilizing the x̃₁ subsystem x̃̇₁ = x̃₂, treating x̃₂ as an independent input. Using the Lyapunov function candidate V₁ = ½x̃₁², we have V̇₁ = x̃₁x̃₂, and setting x̃₂ = −x̃₁ we obtain φ(x̃₁) = −x̃₁, which stabilizes the first equation.

Step 2: We now proceed to find a stabilizing control law for the two-state system using backstepping. To this end we use the control law (5.34) with φ(x̃₁) = −x̃₁ and the functions f_a and g_a identified above.
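The two design steps can be exercised numerically. The sketch below is not from the text; it picks arbitrary test values m = 1, k = 1, g = 9.81, λ = μ = 1, y₀ = 0.5, k₁ = 1, builds the control law (5.34) with φ(x̃₁) = −x̃₁, and integrates the transformed model (5.40)-(5.41). By construction the closed loop reduces to x̃̇₂ = −(1 + k₁)(x̃₁ + x̃₂).

```python
# arbitrary test parameters (not from the text): unit mass and friction, SI gravity
m, kf, grav, lam, mu, y0, k1 = 1.0, 1.0, 9.81, 1.0, 1.0, 0.5, 1.0

def D2(x1):                         # [1 + mu*(x1 + y0)]^2, the recurring denominator
    return (1.0 + mu * (x1 + y0)) ** 2

def f_a(x1, x2):                    # drift term of the xi-equation in (5.41)
    return grav - (kf / m) * x2 - grav * (1.0 + mu * y0) ** 2 / D2(x1)

def control(x1, x2):                # (5.34) with phi = -x1, V1 = x1^2/2
    g_a = -lam * mu / (2.0 * m * D2(x1))
    u1 = -x2 - x1 - k1 * (x2 + x1)  # (dphi/dx1)*x1' - x1 - k1*(x2 - phi)
    return (u1 - f_a(x1, x2)) / g_a

x1, x2, dt = 0.1, 0.0, 1e-3
for _ in range(int(15.0 / dt)):
    u = control(x1, x2)
    x1, x2 = (x1 + dt * x2,
              x2 + dt * (f_a(x1, x2) - lam * mu * u / (2.0 * m * D2(x1))))
print(x1, x2)
```

Note that the physical input must satisfy I² = u + 2mg(1 + μy₀)²/(λμ) ≥ 0, which holds along this particular trajectory but is not enforced by the sketch.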

Substituting values, we obtain

u = φ₁(x̃₁, x̃₂) = −(2m[1 + μ(x̃₁ + y₀)]²/(λμ)) { −(1 + k₁)(x̃₁ + x̃₂) + (k/m)x̃₂ − g + g(1 + μy₀)²/[1 + μ(x̃₁ + y₀)]² }.   (5.42)

The corresponding Lyapunov function is

V₂ = V₂(x̃₁, x̃₂) = V₁ + ½[x̃₂ − φ(x̃₁)]² = ½x̃₁² + ½[x̃₁ + x̃₂]².

Straightforward manipulations show that with this control law the closed-loop system reduces to the following:

x̃̇₁ = −x̃₁ + (x̃₁ + x̃₂)
x̃̇₂ = −(1 + k₁)(x̃₁ + x̃₂).

5.5 Exercises

(5.1) Consider the following system:

ẋ₁ = x₁ + cos x₁ − 1 + x₂
ẋ₂ = u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.2) Consider the following system:

ẋ₁ = x₂
ẋ₂ = x₁ + x₁³ + x₃
ẋ₃ = u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.3) Consider the following system, consisting of a chain of integrators:

ẋ₁ = x₁ + e^{x₁} − 1 + x₂
ẋ₂ = x₃
ẋ₃ = u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.4) Consider the following system:

ẋ₁ = x₁ + x₁² + x₁x₂
ẋ₂ = x₁² + (1 + x₂²)u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

(5.5) Consider the following system:

ẋ₁ = x₁ + x₂
ẋ₂ = x₁x₂ − x₁² + u

Using backstepping, design a state feedback control law to stabilize the equilibrium point at the origin.

Notes and References

This chapter is based heavily on Reference [47], with help from Chapter 13 of Khalil [41]. The reader interested in backstepping should consult Reference [47], which contains a lot of additional material on backstepping, including interesting applications as well as the extension of the backstepping approach to adaptive control of nonlinear plants.

Chapter 6

Input-Output Stability

So far we have explored the notion of stability in the sense of Lyapunov, which, roughly speaking, corresponds to stability of equilibrium points for the free or unforced system, as discussed in Chapters 3 and 4. This notion is thus characterized by the lack of external excitations and is certainly not the only way of defining stability. In this chapter we explore the notion of input-output stability as an alternative to stability in the sense of Lyapunov, one that departs from a conceptually very different approach. Namely, it considers systems as mappings from inputs to outputs and defines stability in terms of whether the system output is bounded whenever the input is bounded. The input-output theory of systems was initiated in the 1960s by G. Zames and I. Sandberg. In this theory a system is viewed as a black box and can be represented graphically as shown in Figure 6.1.

Figure 6.1: The system H.

To define the notion of mathematical model of a physical system, we first need to choose a suitable space of functions, which we will denote by X. The space X must be

sufficiently rich to contain all input functions of interest as well as the corresponding outputs. Mathematically this is a challenging problem, since we would like to be able to consider systems that are not well behaved, that is, systems where the output to an input in the space X may not belong to X. The classical solution to this dilemma consists of making use of the so-called extended spaces, which we introduce below.

6.1 Function Spaces

In Chapter 2 we introduced the notion of vector space, and so far our interest has been limited to the n-dimensional space ℝⁿ. In this chapter we need to consider "function spaces," that is, spaces where the "vectors," or "elements," of the space are functions of time. By far the most important spaces of this kind in control applications are the so-called Lp spaces, which we now introduce. In the following definitions we consider a function u : ℝ⁺ → ℝ^q, i.e., u is of the form

u(t) = [u₁(t), u₂(t), …, u_q(t)]ᵀ.

Definition 6.1 (The Space L₂) The space L₂ consists of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

‖u‖_{L₂} = ( ∫₀^∞ [ |u₁|² + |u₂|² + ⋯ + |u_q|² ] dt )^{1/2} < ∞.   (6.1)

The norm ‖u‖_{L₂} defined in this equation is the so-called L₂ norm of the function u.

Definition 6.2 (The Space L∞) The space L∞ consists of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

‖u‖_{L∞} = sup_{t ∈ ℝ⁺} ‖u(t)‖_∞ < ∞.   (6.2)

The reader should not confuse the two different norms used in equation (6.2): the norm ‖u‖_{L∞} is the L∞ norm of the function u, whereas ‖u(t)‖_∞ represents the infinity norm of the vector u(t) in ℝ^q. In other words,

‖u‖_{L∞} = sup_{t ∈ ℝ⁺} ( max_i |u_i| ),   1 ≤ i ≤ q.
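For a concrete signal, the norms (6.1) and (6.2) can be approximated by discretizing the integral. The sketch below (the scalar example u(t) = e^{−t} is my own illustration) should recover the exact values ‖u‖_{L₂} = 1/√2 and ‖u‖_{L∞} = 1:

```python
import math

dt, T = 1e-4, 40.0
n = int(T / dt)
u = [math.exp(-i * dt) for i in range(n)]          # u(t) = e^{-t}, sampled

l2_norm = math.sqrt(sum(v * v for v in u) * dt)    # Riemann sum for (6.1)
linf_norm = max(abs(v) for v in u)                 # (6.2) for a scalar signal
print(l2_norm, linf_norm)
```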

Both L₂ and L∞ are special cases of the so-called Lp spaces. Given p with 1 ≤ p < ∞, the space Lp consists of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

‖u‖_{Lp} = ( ∫₀^∞ [ |u₁|^p + |u₂|^p + ⋯ + |u_q|^p ] dt )^{1/p} < ∞.   (6.3)

Another useful space is the so-called L₁. From (6.3), L₁ is the space of all piecewise continuous functions u : ℝ⁺ → ℝ^q satisfying

‖u‖_{L₁} = ∫₀^∞ [ |u₁| + |u₂| + ⋯ + |u_q| ] dt < ∞.   (6.4)

Property (Hölder's inequality in Lp spaces): If p and q are such that 1/p + 1/q = 1 with 1 ≤ p ≤ ∞, and if f ∈ Lp and g ∈ Lq, then fg ∈ L₁, and

‖(fg)_T‖_{L₁} = ∫₀^T |f(t)g(t)| dt ≤ ( ∫₀^T |f(t)|^p dt )^{1/p} ( ∫₀^T |g(t)|^q dt )^{1/q}.   (6.5)

For the most part, we will focus our attention on the space L₂, with occasional reference to the space L∞. However, to add generality to our presentation, most of the stability theorems that we will encounter in the sequel, as well as all the stability definitions, are valid in a much more general setting. For this reason we will state all of our definitions and most of the main theorems referring to a generic space of functions, denoted by X.

6.1.1 Extended Spaces

We are now in a position to introduce the notion of extended spaces.

Definition 6.3 Let u ∈ X. We define the truncation operator P_T : X → X by

(P_T u)(t) = u_T(t) = { u(t), 0 ≤ t ≤ T;  0, t > T },   T ∈ ℝ⁺.   (6.6)

Example 6.1 Consider the function u : [0, ∞) → [0, ∞) defined by u(t) = t². The truncation of u(t) is the following function:

u_T(t) = { t², 0 ≤ t ≤ T;  0, t > T }.

Notice that according to Definition 6.3, the truncation operator is a linear operator. Indeed, P_T satisfies

(i) [P_T(u + v)](t) = u_T(t) + v_T(t) ∀u, v ∈ Xe.
(ii) [P_T(αu)](t) = αu_T(t) ∀u ∈ Xe, α ∈ ℝ.

We will assume that the space X satisfies the following properties:

(i) X is a normed linear space of piecewise continuous functions of the form u : ℝ⁺ → ℝ^q. The norm of functions in the space X will be denoted ‖·‖_X.
(ii) X is closed under the family of projections {P_T}; that is, if u ∈ X and T ∈ ℝ⁺, then u_T ∈ X, and moreover ‖u_T‖_X ≤ ‖u‖_X.
(iii) If u ∈ X and T ∈ ℝ⁺, then ‖u_T‖_X is a nondecreasing function of T ∈ ℝ⁺.
(iv) If u ∈ Xe is such that u = lim_{T→∞} u_T, then u ∈ X if and only if lim_{T→∞} ‖u_T‖_X < ∞.

It can be easily seen that all the Lp spaces satisfy these properties.

Definition 6.4 The extension of the space X, denoted Xe, is defined as follows:

Xe = { u : ℝ⁺ → ℝ^q such that u_T ∈ X ∀T ∈ ℝ⁺ }.   (6.7)

In other words, Xe is the space consisting of all functions whose truncation belongs to X. In the sequel, the space X is referred to as the "parent" space of Xe. Notice that although X is a normed space, Xe is a linear (not normed) space. It is not normed because, in general, the norm of a function u ∈ Xe is not defined. However, using property (iv) above, it is possible to check whether u ∈ X by studying the limit lim_{T→∞} ‖u_T‖_X, regardless of whether u itself belongs to X.

Example 6.2 Let the space of functions X be defined by

X = { x : ℝ⁺ → ℝ, x(t) integrable and ∫₀^∞ |x(t)| dt < ∞ }.

In other words, X is the space of real-valued functions in L₁. Consider the function x(t) = t. Then

x_T(t) = { t, 0 ≤ t ≤ T;  0, t > T }

‖x_T‖ = ∫₀^∞ |x_T(t)| dt = ∫₀^T t dt = T²/2.
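Example 6.2 is easy to reproduce numerically: every truncation of x(t) = t has finite L₁ norm, ‖x_T‖ = T²/2, but the norm grows without bound as T increases. A small sketch (the Riemann-sum helper is my own, not the book's):

```python
def l1_norm_of_truncation(f, T, dt=1e-4):
    """Riemann-sum estimate of ||f_T|| = integral of |f(t)| over [0, T]."""
    n = int(T / dt)
    return sum(abs(f(i * dt)) for i in range(n)) * dt

x = lambda t: t              # x(t) = t belongs to X_e but not to X = L1
norms = {T: l1_norm_of_truncation(x, T) for T in (1.0, 2.0, 4.0, 8.0)}
print(norms)                 # each value is approximately T**2 / 2
```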

Thus x_T ∈ X ∀T ∈ ℝ⁺, and therefore x ∈ Xe. However, x ∉ X, since lim_{T→∞} ‖x_T‖ = ∞.

Remarks: In our study of feedback systems we will encounter unstable systems, that is, systems whose output grows without bound as time increases. Those systems cannot be described with any of the Lp spaces introduced before. Thus, the extended spaces are the right setting for our problem. As mentioned earlier, our primary interest is in the spaces L₂ and L∞. The extension of the space Lp, 1 ≤ p ≤ ∞, will be denoted Lpe; it consists of all the functions u(t) whose truncation belongs to Lp.

6.2 Input-Output Stability

We start with a precise definition of the notion of system.

Definition 6.5 A system, or more precisely, the mathematical representation of a physical system, is defined to be a mapping H : Xe → Xe that satisfies the so-called causality condition:

[Hu(·)]_T = [Hu_T(·)]_T   ∀u ∈ Xe and ∀T ∈ ℝ⁺.   (6.8)

Condition (6.8) is important in that it formalizes the notion, satisfied by all physical systems, that the past and present outputs do not depend on future inputs. To see this, imagine that we perform the following experiments (Figures 6.2 and 6.3):

(1) First we apply an arbitrary input u(t) and find the output y(t) = Hu(t), and from here the truncated output y_T(t) = [Hu(t)]_T. See Figure 6.3(a)-(c). Clearly y_T = [Hu(t)]_T represents the left-hand side of equation (6.8).

(2) In the second experiment we start by computing the truncation ũ(t) = u_T(t) of the input u(t) used above, and repeat the procedure used in the first experiment. Namely, we compute the output ỹ(t) = Hu_T(t) to the input ũ(t) = u_T(t), and finally we take the truncation ỹ_T = [Hu_T(t)]_T of this output. See Figure 6.3(d)-(f). Notice that this corresponds to the right-hand side of equation (6.8).

The difference between the two experiments is the truncated input used in part (2). The causality condition (6.8) states that the outputs [Hu(t)]_T and [Hu_T(t)]_T in the two experiments are identical; that is, the system output in the interval 0 ≤ t ≤ T does not depend on values of the input outside this interval (i.e., on u(t) for t > T). All physical systems share this property, but care must be exercised with mathematical models, since not all of them behave like this.

In this sense. The essence of the input-output theory is that only the relationship between inputs and outputs is relevant. PT(Hx) = PT[H(PTx)] dx E Xe and VT E W1 . while the input-output theory considers relaxed systems with non-zero inputs. In other words. Notice that the Lyapunov theory deal with equilibrium points of systems with zero inputs and nonzero initial conditions. does not depend in any way on the notion of state. In fact. the internal description given by the state is unnecessary in this framework.6 A system H : Xe -> Xe is said to be input-output X-stable if whenever the input belongs to the parent space X. are complementary to the Lyapunov theory. Definition 6. we review these concepts and consider input-output systems with an internal description given by a state space realization. . We end this discussion by pointing out that the causality condition (6. In later chapters. and the notion of input-output stability in particular.160 CHAPTER 6.e. It is important to notice that the notion of input-output stability.. Strictly speaking. there is no room in the input-output theory for the existence of nonzero (variable) initial conditions. the output is once again in X. the input-output theory of systems in general. H is X-stable if Hx is in X whenever u E X. Experiment 2: input fl(t) = UT(t) applied to system H. and in fact.2: Experiment 1: input u(t) applied to system H. We may now state the definition of input-output stability. the notion of input-output system itself.8) is frequently expressed using the projection operator PT as follows: PTH = PTHPT (i. INPUT-OUTPUT STABILITY u(t) H y(t) = Hu(t) u(t) =UT(t) H y(t) = HUT(t) Figure 6.

(e) response of the system when the input is the truncated input uT(t). (b) the response y(t) = Hu(t).3: Causal systems: (a) input u(t). (d) truncation of the function u(t). .8). (f) truncation of the system response in part (e). INPUT-OUTPUT STABILITY 161 (a) t y(t) = Hu(t) (b) t Hu(t)]T (c) t (d) t [HuT(t)] (e) t [HUT(t)1T (f ) t T Figure 6. Notice that this figure corresponds to the left-hand side of equation (6. (c) truncation of the response y(t). Notice that this figure corresponds to the right-hand side of equation (6.8).2.6.

It is immediately obvious that if a system has finite gain. but do not have a finite gain. and notice that N(0) = 0. given that the response is an instantaneous function of the input. A different.3 E pl+ such that II(Hu)TIIX <_y(H) IIuTIIX +a. If the system H satisfies the condition Hu = 0 whenever u = 0 then the gain y(H) can be calculated as follows y(H) = sup II (IHu IIXI X (6. i = 1.= 1. however. Example 6. the memoryless systems Hi : Gone . It is clear that input-output stability is a notion that depends both on the system and the space of functions. and a constant. and all T in lR+ for which UT # 0. . (6. For instance.Gooe. y(H) = sup II(Hu)TIIG.9) Systems with finite gain are said to be finite-gain-stable.7 A system H : Xe -4 Xe is said to have a finite gain if there exists a constant y(H) < oo called the gain of H.5 are input-output-stable. The gain y(H) is easily determined from the slope of the graph of N.10) where the supremum is taken over all u E X. The converse is. not true. We conclude this section by making the following observation.4.162 CHAPTER 6. 2 shown in Figure 6. INPUT-OUTPUT STABILITY For simplicity we will usually say that H is input-output stable instead of inputoutput X-stable whenever confusion is unlikely.3 have no "dynamics" and are called static or Systems such as memoryless systems. Definition 6. and perhaps more important interpretation of this constant will be discussed in connection with input-output properties of state space realizations. then it is input-output stable. and consider the nonlinear operator N(-) defined by the graph in the plane shown in Figure 6.9) is called the bias term and is included in this definition to allow the case where Hu # 0 when u = 0. One of the most useful concepts associated with systems is the notion of gain. The constant /j in (6. As an example. any static nonlinearity without a bounded slope does not have a finite gain.3 Let X = Goo. IIuTIIL- in example 6.

INPUT-OUTPUT STABILITY 163 N(u) 1.25 U Figure 6.2.5: The systems H1u = u2.25 0.25 1.6. y = elUl u Figure 6. . and H2u = ek"I.4: Static nonlinearity NO.

then it is also proper. A rational function M E R(s) will be said to be proper if it satisfies lim M < oo.. The norm off c A is defined by 11f 11A =1 fo I + jW 0 I fa(t) I dt. according to this. if H(s) is strictly proper. Note that. if fo = 0). then IIf IIA = IIf 111. with the norm of f defined in (6. we limit our discussion to single-inputsingle-output systems.e. (6. Notice that. if f E L1 (i. The extension of the algebra A.164 CHAPTER 6. For simplicity. R(s) consists of all rational functions in s with real polynomials.3 Linear Time-Invariant Systems So far. Definition 6. and fa(. fa E L.e. R(s): field of fractions associated with R[s]. whenever dealing with LTI systems we focused our attention on state space realizations. We now introduce the following notation: R[s]: set of polynomials in the variable s. In this section we include a brief discussion of LTI system in the context of the input-output theory of systems. denoted Ae. is defined to be the set of all functions whose truncation belongs to A.11) We will also denote by A the set consisting of all functions that are Laplace transforms of elements of A. however.) is such that rIfa(r)I J dr<oo namely. e-+0a It is said to be strictly proper if 8-+00 lim M < 0. .8 We denote by A the set of distributions (or generalized functions) of the form At) = 1 where fo E R. t<0 denotes the unit impulse. i. foo(t) + fa(t). not true. The converse is. INPUT-OUTPUT STABILITY 6.11).

) = h06(t) + ha(t) E A and moreover. Definition 6.9 The convolution of f and g in A. u = 0 for t f< 0.1 Consider a function F(s) E R(s). then y(t) = hou(0) + J0 t h(r)u(t . It is interesting.10 A linear time-invariant system H is defined to be a convolution operator of the form (Hu)(t) = h(t) * u(t) = J h(T)u(t . if H is LP stable. Theorem 6.T) dT = hou(0) + / t h(t . LINEAR TIME-INVARIANT SYSTEMS 165 Theorem 6. denoted by f * g.12) It is not difficult to show that if f. . g c A. Given the causality assumption h(T) = 0 for r < 0.T)u(T) dr (6. Then F(s) E A if and only if (i) F(s) is proper. Theorem 6.10 includes the possible case of infinite-dimensional systems. Proof: See the Appendix.) E A. however. is defined by (f * g)(t) = f 00 f (T)g(t .T)u(T) dT (6. then f * g E A and g * f E A.T) dT = FOO h(t .13) where h(. then IHuIjc.T)g(T) dT.) is called the "kernel" of the operator H. to consider a more general class of LTI systems. Definition 6. < Jjh11Al1u1lcp . We conclude this section with a theorem that gives necessary and sufficient conditions for the Lp stability of a (possibly infinite-dimensional) linear time-invariant system. we have that. The function h(.T) dT = 0 00 f (t . In the special case of finite-dimensional LTI systems. and let represent its impulse response.1 implies that a (finite-dimensional) LTI system is stable if and only if the roots of the polynomial denominator lie in the left half of the complex plane.T) dT = hou(0) + fo t h(t .T)u(T) dT o 00 If in addition. denoting y(t) ption I Hu(t) (6.14) y(t) = hou(0) + J r h(T)u(t . and (ii) all poles of F(s) lie in the left half of the complex plane. We can now define what will be understood by a linear time-invariant system. J0 (6.6. such as systems with a time delay.3. Then H is GP stable if and only if h(.2 Consider a linear time-invariant system H.15) Definition 6. Proof: See the Appendix.

16) This is a very important case. INPUT-OUTPUT STABILITY 6.. < IIuII..4.T)dr Ihollu(t)l + ft sip Iu(t)I {IhoI + = IIuIIc. 6.. we focus our attention on the study of gain. We have hoo(t)+ha(t) E (h * u)(t) = hou(t) + J0 t ha(T)u(t .4 Cp Gains for LTI Systems Having settled the question of input-output stability in . and constitutes perhaps the most natural choice for the space of functions X.IIhIIA or IIhIIA > IIHuIIcIIuIIc- This shows that IIhIIA ? 7(H). To show that IIhIIA = ry(H).166 CHAPTER 6. consists of all the functions of t whose absolute value is bounded. = sup U IIII IIII = Ilullc p_1 IIHuIIc_ (6.. = Il h(t) II A (6. let u(t . The space Cc.Cp spaces. IIhIIA Thus f c Iha(T)Idr} Ilyll. Once again. It is clear that the notion of gain depends on the space of input functions in an essential manner. for simplicity we restrict our attention to single-inputsingle-output (SISO) systems.T) = sgn[h(T)] VT .17) Consider an input u(t) applied to the system H with impulse response A. For each fixed t.1 L Gain By definition 7(H). We will show that in this case ti(H). We do this by constructing a suitable input. we must show that equality can actually occur.

To see this.20) where H(tw) = .20) is the so-called H-infinity norm of the system H. to state this in different words. sinusoids and step functions are not in this class). We consider a linear time- invariant system H. i.g.)I f IIHIIoo (6. -Y2(H) = sup x III (6.(T)U(t .18) xIIIC2 } 1/2 IIxIIc2 = { f Ix(t)I2 dt 00 J (6. the space G2 is the most widely used in control theory because of its connection with the frequency domain that we study next. L GAINS FOR LTI SYSTEMS where 167 sgn[h(t)] 1 if h(T) > 0 l 0 if h(T) < 0 It follows that lullc_ = 1. and y(t) = (h * u)(t) = bou(t) + t J0 t ha.F[h(t)].4. 6.2 G2 Gain This space consists of all the functions of t that are square integrable or. Although from the input-output point of view this class of functions is not as important as the previous case (e. consider the output y of the system to an input u Ilylicz = IIHuIIcz = f [h(t) * u(t)]dt = 21r ii: 00 IU(JW)12dw} where the last identity follows from Parseval's equality.r)dr Ihol + f Iha(T)ldr = 0 IhIIA and the result follows. the Fourier transform of h(t).e. From here we conclude that 2r1 IIyIIcz < {supIH(Jw)I}2 I r i- . (Hu)(t) = h(t) *u(t).4. the gain of the system H is given by 'Y2(H) = sup I H(. functions that have finite energy.19) We will show that in this case. The norm (6. du E G2.6.. and let We have that E A be the kernel of H..

The following model is general enough to encompass most cases of interest. is the wealth of theorems and results concerning stability of feedback systems.n))I2 4 Aw f w-&0 _ A21 H(7w)I2dw + W+AWO JW pW0 AZI H(M)I2dw } Thus.. orIw+woI<Ow 0 otherwise -W+OWO In this case IIYIIc.168 CHAPTER 6. ft(iw) -+ R(p) and IIyIIc. as shown in Figure 6. we have that IIyII2 --+ IIHI12 as Aw -+ oo. Let u(t) be such that its Fourier transform. W IlylI 2 <_ yz(H) < (6. -> 1 A2I H(. which completes the proof. as Ow -4 0. we proceed to construct a suitable input. . however. As with the G. It is useful to visualize the L2 gain (or H-infinity norm) using Bode plots.7w)I W fIIHII.5 Closed-Loop Input-Output Stability Until now we have concentrated on open-loop systems.21) IIUI12 . 6. = 1 I 2ir Therefore..P[u(t)] = U(yw).6. to show that it is the least upper bound. As we will see. has the following properties: U( )I_{ A if Iw . . the beauty of the input-output approach is that it permits us to draw conclusions about stability of feedback interconnections based on the properties of the several subsystems encountered around the feedback loop. To study feedback systems.22) Equation (6. One of the main features of the input-output theory of systems.21) or (6.22) proves that the H-infinity norm of H is an upper bound for the gain 7(H). defining A = {7r/2Aw}1/2. case. INPUT-OUTPUT STABILITY but sup lH(.w. (6.woI< . and then prove the so-called small gain theorem. we first define what is understood by closed loop input-output stability.

el and e2 are outputs. (ii) The following equations are satisfied for all u1.) regardless of the number of inputs and outputs of the system.24) can be solved for all inputs U1. it is implicitly assumed that the number of inputs of H1 equals the number of outputs of 112. the space of functions with bounded absolute value. usually referred to as error signals and yl = H1e1. and y2 E Xe for all pairs of inputs ul... Definition 6. w Figure 6. .6: Bode plot of H(jw)I.23) and (6. disturbances. CLOSED-LOOP INPUT-OUTPUT STABILITY 169 H(34 IIHII.11 We will denote by feedback system to the interconnection of the subsystems H1 and H2 : Xe -* Xe that satisfies the following assumptions: (i) e1.C. Here ul and u2 are input functions and may represent different signals of interest such as commands. Y2 = H2e2 are respectively the outputs of the subsystems H1 and H2.23) (6. Assumptions (i) and (ii) ensure that equations (6.24) e2 = u2 + H1e1. indicating the IIHII. U2 E Xe If this assumptions are not satisfied. For example. if X = .. For compatibility.5. It is immediate that equations (6. we write H : Xe --> Xe (or H : £ooe -* L. the subsystems H1 and H2 can have several inputs and several outputs. We also notice that we do not make explicit the system dimension in our notation. the operators H1 and H2 do not adequately describe the physical systems they model and should be modified.7. ..H2e2 (6. In general.24) can be represented graphically as shown in Figure 6. u2 E Xe . norm of H.6. and the number of outputs of H1 equals the number of inputs of H2. and sensor noise. u2 E Xe : el = ul . e2i yl.23) and (6.

In other words. y(t) yi (t) Y2 ] (6. and with u. .26) (6. Definition 6.27) In words.e) E Xe x Xe and e satisfies (6. P is bounded if Pu E X for every u c X . j = 1.23) and (6. e.13 A relation P on Xe is said to be bounded if the image under P of every bounded subset of XX E dom(P) is a bounded subset of Xe .14 The feedback system of equations (6. defined as follows: u(t) uul (t) = [ 2 (t) I I e(t) - eel (t) [ 2 (t) I. and y given by (6.24)} F = {(u. In the following definition. For this system we introduce the following input-output relations.170 CHAPTER 6. e. respectively.25) Definition 6. (6. U2 E Xe . and y. u E dom(P). 2.24)}.24) is said to be bounded or input-output-stable if the closed-loop relations E and F are bounded for all possible u1. That is the main reason why we have chosen to work with relations rather than functions.7: The Feedback System S. Definition 6.23) and (6. Given Ui. we introduce vectors u. E and F are the relations that relate the inputs u2 with ei and y2.24) are taken for granted. u2 in the domain of E and F.23) and (6.12 Consider the feedback interconnection of subsystems H1 and H2. INPUT-OUTPUT STABILITY Figure 6. y) E Xe x Xe and y satisfies (6.25) we define E = {(u. Notice that questions related to the existence and uniqueness of the solution of equations (6. i.23) and (6.

6.6. THE SMALL GAIN THEOREM 171

In other words, a feedback system is input-output stable if whenever the inputs u1 and u2 are in the parent space X, the errors e1, e2 and the outputs y1, y2 are also in X. In the sequel, input-output-stable systems will be referred to simply as "stable" systems. Notice that the definition of boundedness depends strongly on the selection of the space X. To emphasize this dependence, we will sometimes say that a system is X-stable if it is bounded in the space X.

6.6 The Small Gain Theorem

In this section we study the so-called small gain theorem, one of the most important results in the theory of input-output systems. It says that if the product of the gains of two systems, H1 and H2, is less than 1, then their feedback interconnection is stable. The main goal of the theorem is to provide open-loop conditions for closed-loop stability.

Theorem 6.3 Consider the feedback interconnection of the systems H1 and H2 : Xe → Xe. Then, if γ(H1)γ(H2) < 1, the feedback system is input-output stable.

Theorem 6.3 is the most popular and, in a sense to be made precise below, the most important version of the small gain theorem. See also remark (b) following Theorem 6.3.

Proof: To simplify our proof, we assume that the bias term β in Definition 6.7 is identically zero (see Exercise 6.5). Consider a pair of elements (u1, e1), (u2, e2) that belong to the relation E. According to Definition 6.14, we must show that u1, u2 ∈ X imply that e1, e2, y1, and y2 are also in X. Now, e1 and e2 must satisfy equations (6.23) and (6.24) and, after truncating these equations, we have

e1T = u1T − (H2e2)T  (6.28)
e2T = u2T + (H1e1)T.  (6.29)

Thus,

‖e1T‖ ≤ ‖u1T‖ + ‖(H2e2)T‖ ≤ ‖u1T‖ + γ(H2)‖e2T‖  (6.30)
‖e2T‖ ≤ ‖u2T‖ + ‖(H1e1)T‖ ≤ ‖u2T‖ + γ(H1)‖e1T‖.  (6.31)

Substituting (6.31) in (6.30), we obtain

‖e1T‖ ≤ ‖u1T‖ + γ(H2){‖u2T‖ + γ(H1)‖e1T‖}
      ≤ ‖u1T‖ + γ(H2)‖u2T‖ + γ(H1)γ(H2)‖e1T‖

and therefore

[1 − γ(H1)γ(H2)]‖e1T‖ ≤ ‖u1T‖ + γ(H2)‖u2T‖.  (6.32)

172 CHAPTER 6. INPUT-OUTPUT STABILITY

and since, by assumption, γ(H1)γ(H2) < 1, we have that 1 − γ(H1)γ(H2) ≠ 0, and then

‖e1T‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u1T‖ + γ(H2)‖u2T‖}.  (6.33)

Similarly,

‖e2T‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u2T‖ + γ(H1)‖u1T‖}.  (6.34)

Thus, the norms of e1T and e2T are bounded by the right-hand sides of (6.33) and (6.34). If, in addition, u1 and u2 are in X (i.e., ‖ui‖ < ∞ for i = 1, 2), then (6.33) and (6.34) must also be satisfied if we let T → ∞. We have

‖e1‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u1‖ + γ(H2)‖u2‖}  (6.35)
‖e2‖ ≤ [1 − γ(H1)γ(H2)]⁻¹{‖u2‖ + γ(H1)‖u1‖}.  (6.36)

It follows that e1 and e2 are also in X (see the assumptions about the space X) and the closed-loop relation E is bounded. That F is also bounded follows from (6.35) and (6.36) and the inequality

‖(Hiei)T‖ ≤ γ(Hi)‖eiT‖,  i = 1, 2.  (6.37)

□

Remarks:

(a) Theorem 6.3 says exactly that if the product of the gains of the open-loop systems H1 and H2 is less than 1, then each bounded input in the domain of the relations E and F produces a bounded output. It does not, however, imply that the solution of equations (6.23) and (6.24) exists or is unique; indeed, the existence of a solution of equations (6.23) and (6.24) was not proved for every pair of inputs u1 and u2, so it does not follow from Theorem 6.3 that for every pair of functions u1, u2 ∈ X the outputs e1, e2, y1, y2 are bounded. Notice that we were able to ignore the question of existence of a solution by making use of relations. In other words, the question of existence of a solution can be studied separately from the question of stability: if a solution of equations (6.23) and (6.24) exists and γ(H1)γ(H2) < 1, then Theorem 6.3 guarantees that it is bounded. An alternative approach was used by Desoer and Vidyasagar [21], who assume that e1 and e2 belong to Xe and define u1 and u2 to satisfy equations (6.23) and (6.24).

(b) Theorem 6.3 provides sufficient but not necessary conditions for input-output stability. In practice, it is possible, and indeed usual, to find a system that does not satisfy the small gain condition γ(H1)γ(H2) < 1 and is nevertheless input-output stable.
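The bound (6.33) is easy to observe numerically. The following sketch (ours, not the book's) uses two hypothetical memoryless operators on ℓ∞ whose gain product is 0.75 < 1, solves the loop equations by fixed-point iteration (a contraction precisely because γ(H1)γ(H2) < 1), and checks the closed-loop error against the right-hand side of (6.33):

```python
import numpy as np

# Hypothetical memoryless operators (our illustration, not from the text):
# H1(e)(t) = 0.5*tanh(e(t))  -> gain gamma1 <= 0.5 (since |tanh(v)| <= |v|)
# H2(e)(t) = 1.5*e(t)        -> gain gamma2 = 1.5; product 0.75 < 1.
g1, g2 = 0.5, 1.5
H1 = lambda e: g1 * np.tanh(e)
H2 = lambda e: g2 * e

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
u1 = np.sin(t)
u2 = 0.3 * rng.standard_normal(t.size)

# Solve e1 = u1 - H2(e2), e2 = u2 + H1(e1) by fixed-point iteration.
e1 = np.zeros_like(t)
e2 = np.zeros_like(t)
for _ in range(200):
    e1 = u1 - H2(e2)
    e2 = u2 + H1(e1)

ninf = lambda v: np.max(np.abs(v))          # sup-norm on the time grid
bound_e1 = (ninf(u1) + g2 * ninf(u2)) / (1 - g1 * g2)
print(ninf(e1), bound_e1)                   # the bound (6.33) holds
```

Because the operators here are memoryless, the loop equations can be solved pointwise in time; the same small gain bound, of course, holds for dynamic H1, H2 as well.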

6.6. THE SMALL GAIN THEOREM 173

Figure 6.8: The nonlinearity N(·).

Examples 6.4 and 6.5 contain very simple applications of the small gain theorem.

Example 6.4 Let H1 be a linear time-invariant system with transfer function

G(s) = 1 / (s² + 2s + 4)

and let H2 be the nonlinearity N(·) defined by a graph in the plane, as shown in Figure 6.8. We assume that X = L2. We apply the small gain theorem to find the maximum slope of the nonlinearity N(·) that guarantees input-output stability of the feedback loop. First we find the gains of H1 and H2. For a linear time-invariant system, we have

γ(H1) = sup_ω |G(jω)|.

In this case, we have

|G(jω)| = 1 / |(4 − ω²) + 2jω| = 1 / √[(4 − ω²)² + 4ω²] → 0 as ω → ∞

and since |G(jω)| is a continuous function of ω, the supremum must be located at some finite frequency; that is, the maximum value of |G(jω)| exists and satisfies

γ(H1) = max_ω |G(jω)| = |G(jω*)|

174 CHAPTER 6. INPUT-OUTPUT STABILITY

where ω* is such that

d|G(jω)|/dω |_{ω=ω*} = 0  and  d²|G(jω)|/dω² |_{ω=ω*} < 0.

Differentiating |G(jω)| twice with respect to ω, we obtain ω* = √2, and |G(jω*)| = 1/√12. It follows that γ(H1) = 1/√12. The calculation of γ(H2) is straightforward: we have γ(H2) = |K|. Applying the small gain condition γ(H1)γ(H2) < 1, we obtain |K| < √12. We conclude that, if the absolute value of the slope of the nonlinearity N(·) is less than √12, the system is closed-loop stable.

Example 6.5 Let H1 be as in Example 6.4 and let H2 be a constant gain (i.e., H2 is linear time-invariant and H2 = k). Since the gain of H2 is γ(H2) = |k|, application of the small gain theorem produces the same result obtained in Example 6.4; namely, if |k| < √12, the system is closed-loop stable. However, in this simple example we can find the closed-loop transfer function, denoted H(s), and check stability by obtaining the poles of H(s). We have

H(s) = G(s) / (1 + kG(s)) = 1 / (s² + 2s + (4 + k)).

The system is closed-loop stable if and only if the roots of the polynomial denominator of H(s) lie in the open left half of the complex plane. For a second-order polynomial this is satisfied if and only if all its coefficients have the same sign. It follows that the system is closed-loop stable if and only if (4 + k) > 0, that is, k > −4. Comparing the results obtained using these two methods, we have

Small gain theorem: −√12 < k < √12.
Pole analysis: −4 < k < ∞.

We conclude that, in this case, the small gain theorem provides a poor estimate of the stability region.

6.7 Loop Transformations

As we have seen, the small gain theorem provides sufficient conditions for the stability of a feedback loop, and very often results in conservative estimates of the system stability.
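The gain computation in Examples 6.4 and 6.5 is easy to reproduce numerically; the following sketch (ours) simply evaluates |G(jω)| on a dense frequency grid:

```python
import numpy as np

# L2 gain of G(s) = 1/(s^2 + 2s + 4): gamma(H1) = sup_w |G(jw)|.
w = np.linspace(0.0, 100.0, 2_000_001)
Gjw = 1.0 / ((4.0 - w**2) + 2j * w)
mag = np.abs(Gjw)

gamma = mag.max()
w_star = w[mag.argmax()]

print(gamma, 1 / np.sqrt(12))   # both ~ 0.28868
print(w_star, np.sqrt(2))       # both ~ 1.41421

# Small gain theorem: stable for |k| < 1/gamma = sqrt(12) ~ 3.46.
# Pole analysis: s^2 + 2s + (4 + k) is Hurwitz iff k > -4.
k_small_gain = 1.0 / gamma
```

The comparison at the end makes the conservatism of Example 6.5 concrete: the small gain estimate |k| < √12 ≈ 3.46 is far smaller than the exact stability range k > −4.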

6.7. LOOP TRANSFORMATIONS 175

Figure 6.9: The feedback system S.

The same occurs with any other sufficient but not necessary stability condition, such as the passivity theorem (to be discussed in Chapter 8). One way to obtain improved (i.e., less conservative) stability conditions is to apply the theorems to a modified feedback loop that satisfies the following two properties: (1) it guarantees stability of the original feedback loop, and (2) it lessens the overall requirements on H1 and H2. In other words, it is possible that a modified system satisfies the stability conditions imposed by the theorem in use whereas the original system does not. There are two basic transformations of feedback loops that will be used throughout the book, referred to as transformations of Type I and Type II.

Definition 6.15 (Type I Loop Transformation) Consider the feedback system S of Figure 6.9. Let H1, H2, K, and (I + KH1)⁻¹ be causal maps from Xe into Xe, and assume that K is linear. A loop transformation of Type I is defined to be the modified system, denoted SK, formed by the feedback interconnection of the subsystems H1' = H1(I + KH1)⁻¹ and H2' = H2 − K, with inputs u1' = u1 − Ku2 and u2' = u2, as shown in Figure 6.10. The closed-loop relations of SK will be denoted EK and FK.

The following theorem shows that the system S is stable if and only if the system SK is stable. In other words, for stability analysis, the system S can always be replaced by the system SK.

Theorem 6.4 Consider the system S of Figure 6.9, let SK be the modified system obtained after a Type I loop transformation, and assume that K and (I + KH1)⁻¹ map X into X. Then

(i) The system S is stable if and only if the system SK is stable.

176 CHAPTER 6. INPUT-OUTPUT STABILITY

Figure 6.10: The feedback system SK, with H1' = H1(I + KH1)⁻¹ in the forward path, H2' = H2 − K in the feedback path, and input u1 − Ku2.

6.7. LOOP TRANSFORMATIONS 177

Figure 6.11: The feedback system SM.

Proof: The proof is straightforward, although laborious, and is omitted. Notice, however, that the transformation consists essentially of adding and subtracting the term Ky1 at the same point in the loop (in the summing junction in front of H1), thus leading to the result. □

Definition 6.16 (Type II Loop Transformation) Consider the feedback system S of Figure 6.9. Let H1, H2 be causal maps of Xe into Xe, and let M be a causal linear operator satisfying

(i) M : X → X;
(ii) ∃M⁻¹ : X → X, M⁻¹ causal, such that MM⁻¹ = I;
(iii) both M and M⁻¹ have finite gain.

A Type II loop transformation is defined to be the modified system SM, formed by the feedback interconnection of the subsystems H1' = H1M and H2' = M⁻¹H2, with inputs u1' = M⁻¹u1 and u2' = u2, as shown in Figure 6.11. The closed-loop relations of this modified system will be denoted EM and FM.

Theorem 6.5 Consider the system S of Figure 6.9 and let SM be the modified system obtained after a Type II loop transformation. Then the system S is stable if and only if the system SM is stable.

Proof: The proof is straightforward and is omitted (see Exercise 6.9). □

178 CHAPTER 6. INPUT-OUTPUT STABILITY

Figure 6.12: The nonlinearity φ(t*, x) in the sector [α, β].

6.8 The Circle Criterion

Historically, one of the first applications of the small gain theorem was in the derivation of the celebrated circle criterion for the L2 stability of a class of nonlinear systems. We first define the nonlinearities to be considered.

Definition 6.17 A function φ : ℝ+ × ℝ → ℝ is said to belong to the sector [α, β], where α ≤ β, if

αx² ≤ xφ(t, x) ≤ βx²  ∀t ≥ 0, ∀x ∈ ℝ.  (6.38)

According to this definition, if φ satisfies a sector condition, then, in general, it is time-varying and, for each fixed t = t*, φ(t*, x) is confined to a graph in the plane, as shown in Figure 6.12.

We assume that the reader is familiar with the Nyquist stability criterion, which provides necessary and sufficient conditions for closed-loop stability of lumped linear time-invariant systems. Given a proper transfer function

G(s) = p(s) / q(s)

where p(s) and q(s) are polynomials in s with no common zeros, expanding G(s) in partial fractions it is possible to express this transfer function in the following form:

G(s) = g(s) + n(s)/d(s)

where

6.8. THE CIRCLE CRITERION 179

(i) g(s) has all of its poles in the open left-half plane (i.e., g(s) is the transfer function of an exponentially stable system);

(ii) n(s) and d(s) are polynomials, and n(s)/d(s) is a proper transfer function;

(iii) all zeros of d(s) are in the closed right-half plane, so that n(s)/d(s) contains the unstable part of G.

The number of open right-half-plane zeros of d(s) will be denoted by ν. In the following theorem, whenever we refer to the gain γ(H) of a system H, it will be understood in the L2 sense. With this notation, the Nyquist criterion can be stated as in the following lemma.

Lemma 6.1 (Nyquist) Consider the feedback interconnection of the systems H1 and H2. Let H1 be linear time-invariant with a proper transfer function G(s) satisfying assumptions (i)–(iii) above, and let H2 be a constant gain K. Under these assumptions, the feedback system is closed-loop stable in Lp, 1 ≤ p ≤ ∞, if and only if the Nyquist plot of G(s) [i.e., the polar plot of G(jω), with the standard indentations at each jω-axis pole of G(jω) if required] is bounded away from the critical point (−1/K + j0) for all ω ∈ ℝ and encircles it exactly ν times in the counterclockwise direction as ω increases from −∞ to ∞.

The circle criterion of Theorem 6.6 analyzes the L2 stability of a feedback system formed by the interconnection of a linear time-invariant system in the forward path and a nonlinearity in the sector [α, β] in the feedback path. This is sometimes referred to as the absolute stability problem, because it encompasses not a particular system but an entire class of systems.

Theorem 6.6 Consider the feedback interconnection of the subsystems H1 and H2 : L2e → L2e. Assume H2 is a nonlinearity φ in the sector [α, β], and let H1 be a linear time-invariant system with a proper transfer function G(s) that satisfies assumptions (i)–(iii) above. Under these conditions, the feedback system is L2-stable if one of the following conditions is satisfied:

180 CHAPTER 6. INPUT-OUTPUT STABILITY

(a) If 0 < α ≤ β: The Nyquist plot of G(s) is bounded away from the critical circle C*, centered on the real line and passing through the points (−α⁻¹ + j0) and (−β⁻¹ + j0), and encircles it ν times in the counterclockwise direction, where ν is the number of poles of G(s) in the open right half plane.

(b) If 0 = α < β: G(s) has no poles in the open right half plane and the Nyquist plot of G(s) remains to the right of the vertical line with abscissa −β⁻¹ for all ω ∈ ℝ.

(c) If α < 0 < β: G(s) has no poles in the closed right half of the complex plane and the Nyquist plot of G(s) is entirely contained within the interior of the circle C*.

Proof: See the Appendix. □

6.9 Exercises

(6.1) Very often physical systems are combined to form a new system (e.g., by adding their outputs or by cascading two systems). Because physical systems are represented using causal operators, it is important to determine whether the addition of two causal operators [with addition defined by (A + B)x = Ax + Bx] and the composition product [defined by (AB)x = A(Bx)] are again causal. With this introduction, you are asked the following questions:

(a) Let A : Xe → Xe and B : Xe → Xe be causal operators. Show that the sum operator C : Xe → Xe defined by C(x) = (A + B)(x) is also causal.

(b) Show that the cascade operator D : Xe → Xe defined by D(x) = (AB)(x) is also causal.

(6.2) Consider the following alternative definition of causality:

Definition 6.18 An operator H : Xe → Xe is said to be causal if

PT u1 = PT u2 ⟹ PT Hu1 = PT Hu2,  ∀u1, u2 ∈ Xe and ∀T ∈ ℝ+.  (6.39)

According to this definition, if the truncations of u1 and u2 are identical (i.e., if u1 = u2 ∀t ≤ T), then the truncated outputs are also identical. You are asked to prove the following theorem, which states that the two definitions are equivalent.

Theorem 6.7 Consider an operator H : Xe → Xe. Then H is causal according to Definition 6.5 if and only if it is causal according to Definition 6.18.
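Returning to the circle criterion: the graphical conditions (a)–(c) above lend themselves to a direct numerical check. As an illustration (ours), the following sketch treats case (b), α = 0, for the plant G(s) = 1/((s+1)(s+3)) that also appears in Exercise 6.10(i): the Nyquist plot stays to the right of −1/β exactly when β is smaller than −1 over the leftmost real part of G(jω).

```python
import numpy as np

# Circle-criterion condition (b) (sector [0, beta]) for
# G(s) = 1/((s+1)(s+3)); G has no right-half-plane poles.
w = np.linspace(0.0, 200.0, 4_000_001)
G = 1.0 / ((1 + 1j * w) * (3 + 1j * w))

re_min = G.real.min()            # leftmost point of the Nyquist plot
beta_max = -1.0 / re_min         # plot stays right of -1/beta iff beta < beta_max

print(re_min)                    # ~ (sqrt(3) - 2)/8 = -0.0335
print(beta_max)                  # ~ 16 + 8*sqrt(3) = 29.856
```

For this plant the minimum of Re G(jω) can also be found in closed form, min Re G(jω) = (√3 − 2)/8, so the numerical sector estimate [0, β) with β < 16 + 8√3 agrees with the analytical one.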

6.9. EXERCISES 181

(6.3) Prove the following theorem, which states that, for bounded causal operators, all of the truncations in the definition of gain may be dropped if the input space is restricted to the space X.

Theorem 6.8 Consider a causal operator H : Xe → Xe satisfying H0 = 0, and assume that

γ*(H) = sup ‖Hx‖ / ‖x‖ < ∞,  ∀x ∈ X, x ≠ 0.

Then H has a finite gain, and γ*(H) = γ(H).

(6.4) Prove the following theorem.

Theorem 6.9 Let H1, H2 : X → X be causal bounded operators satisfying H₁0 = 0. Then

γ(H1H2) ≤ γ(H1)γ(H2).  (6.40)

(6.5) Prove the small gain theorem (Theorem 6.3) in the more general case when β ≠ 0.

(6.6) Prove Theorem 6.4.

(6.7) Prove Theorem 6.5.

(6.8) In Section 6.2 we introduced system gains and the notion of finite-gain-stable system. We now introduce a stronger form of input-output stability:

Definition 6.19 A system H : Xe → Xe is said to be Lipschitz continuous, or simply continuous, if there exists a constant Γ(H) < ∞, called its incremental gain, satisfying

Γ(H) = sup ‖(Hu1)T − (Hu2)T‖ / ‖(u1 − u2)T‖  (6.41)

where the supremum is taken over all u1, u2 in Xe and all T in ℝ+ for which u1T ≠ u2T. Show that if H : Xe → Xe is Lipschitz continuous and satisfies H0 = 0, then it is finite-gain-stable.

(6.9) Find the incremental gain of the system in Example 6.4, evaluated as T → ∞.

(6.10) For each of the following transfer functions, find the sector [α, β] for which the closed-loop system is absolutely stable:

(i) H(s) = 1 / ((s + 1)(s + 3));
(ii) H(s) = 1 / ((s + 2)(s − 3)).

182 CHAPTER 6. INPUT-OUTPUT STABILITY

Notes and References

The classical input/output theory was initiated by Sandberg [88], [92] (see also the more complete list of Sandberg papers in [21]) and Zames [97], [98]. Our presentation follows [21] as well as [98]. Excellent general references for the material of this chapter are [21], [65], and [66].

Chapter 7

Input-to-State Stability

So far we have seen two different notions of stability: (1) stability in the sense of Lyapunov and (2) input-output stability. These two concepts are at opposite ends of the spectrum. On one hand, Lyapunov stability applies to the equilibrium points of unforced state space realizations. On the other hand, input-output stability deals with systems as mappings between inputs and outputs, and ignores the internal system description, which may or may not be given by a state space realization. In this chapter we begin to close the gap between these two notions and introduce the concept of input-to-state stability. We assume that systems are described by a state space realization that includes a variable input function, and discuss stability of these systems in a way to be defined.

7.1 Motivation

Throughout this chapter we consider the nonlinear system

ẋ = f(x, u)  (7.1)

where f : D × Du → ℝⁿ is locally Lipschitz in x and u. The sets D and Du are defined by

D = {x ∈ ℝⁿ : ‖x‖ < r},  Du = {u ∈ ℝᵐ : sup_{t≥0} ‖u(t)‖ = ‖u‖_L∞ ≤ ru}.

These assumptions guarantee the local existence and uniqueness of the solutions of the differential equation (7.1). We also assume that the unforced system ẋ = f(x, 0) has a uniformly asymptotically stable equilibrium point at the origin x = 0.

183

Example 7.184 CHAPTER 7. 0<T<t Thus. the answer to both questions above seems to be affirmative. consider the following simple example. 0<T <t = suptlIx(t)II <e ? Drawing inspiration from the linear time invariant (LTI) case. and that if limt__.. however a lot more subtle. To see this. specifically.. 0 x(t) = 0.. Indeed. these implications fail. IIuT(t)Ile_ < J. the problem to be studied in this chapter is as follows. it is easy to find counterexamples showing that.. u(t) = 0 =: limt__.. The nonlinear case is. Indeed.1 Consider the following first-order nonlinear system: x=-x+(x+x3)u.. in general. it follows trivially that bounded inputs give rise to bounded states. dr (7. u(t) = 0. then limt. .. where all notions of stability coincide.2) A<0 It then follows that IIx(t)II <- keAtlIxoll + 1 0 t kea(t_T) IIBII IIu(T)II dr < keAtllxoll + kIIB II sup IIu(t)II A t = keAtllxoII + kIIBII IIUT(t)IIf_ . u = 0) the equilibrium point x = 0 is asymptotically stable.e. Given that in the absence of external inputs (i. Given an LTI system of the form = Ax + Bu the trajectories with initial condition x0 and nontrivial input u(t) are given by x(t) = eAtxp + t eA(t-T)Bu(T) J0 If the origin is asymptotically stable.) x(t) = 0 ? (b) Or perhaps that bounded inputs result in bounded states ?. then all of the eigenvalues of A have negative real parts and we have that IleAtll is bounded for all t and satisfies a bound of the form IleAtll < keAt. INPUT-TO-STATE STABILITY Under these conditions. for LTI systems the solution of the state equation is well known. . does it follow that in the presence of nonzero external input either (a) limt_..

7.2. DEFINITIONS 185

Setting u = 0, we obtain the autonomous LTI system ẋ = −x, which clearly has an asymptotically stable equilibrium point. However, when the bounded input u(t) = 1 is applied, the forced system becomes ẋ = x³, which results in an unbounded trajectory for any initial condition, as can be easily verified using the graphical technique introduced in Chapter 1. □

7.2 Definitions

In an attempt to rescue the notion of "bounded input-bounded state," we now introduce the concept of input-to-state stability (ISS).

Definition 7.1 The system (7.1) is said to be locally input-to-state stable (ISS) if there exist a class KL function β, a class K function γ, and constants k1, k2 ∈ ℝ+ such that

‖x(t)‖ ≤ β(‖x0‖, t) + γ(‖uT(·)‖_L∞),  ∀t ≥ 0, 0 ≤ T ≤ t  (7.3)

for all x0 ∈ D and u ∈ Du satisfying ‖x0‖ < k1 and sup_{t≥0} ‖uT(t)‖ = ‖uT‖_L∞ < k2, 0 ≤ T ≤ t. It is said to be input-to-state stable, or globally ISS, if D = ℝⁿ, Du = ℝᵐ, and (7.3) is satisfied for any initial state and any bounded input u.

Definition 7.1 has several implications:

Unforced systems: Assume that ẋ = f(x, u) is ISS and consider the unforced system ẋ = f(x, 0). Given that γ(0) = 0 (by virtue of the assumption that γ is a class K function), we see that the response of (7.1) with initial state x0 satisfies

‖x(t)‖ ≤ β(‖x0‖, t),  ∀t ≥ 0

which implies that the origin is uniformly asymptotically stable.

Interpretation: For bounded inputs u(t) satisfying ‖u‖_L∞ < δ, trajectories remain bounded by the ball of radius β(‖x0‖, t) + γ(δ); i.e.,

‖x(t)‖ ≤ β(‖x0‖, t) + γ(δ),  ∀t ≥ 0.

As t increases, β(‖x0‖, t) → 0, and the trajectories approach the ball of radius γ(δ); i.e.,

lim_{t→∞} ‖x(t)‖ ≤ γ(δ).

For this reason, the term γ(δ) is called the ultimate bound of the system (7.1).
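A concrete instance of the estimate (7.3) is easy to exhibit. For the scalar system ẋ = −x + u (our example, not the text's), the variation-of-constants formula gives |x(t)| ≤ e^{−t}|x0| + (1 − e^{−t}) sup|u|, so the system is globally ISS with β(r, t) = re^{−t} and γ(r) = r. The sketch below verifies the bound along a simulated trajectory driven by a bounded, irregular input:

```python
import numpy as np

# ISS estimate (7.3) for x' = -x + u with beta(r,t) = r*exp(-t), gamma(r) = r.
rng = np.random.default_rng(1)
h, n = 1e-3, 20000
t = h * np.arange(n + 1)
u = np.sin(3 * t) + 0.5 * rng.uniform(-1, 1, t.size)   # bounded input

x = np.empty_like(t)
x[0] = 2.0
for k in range(n):
    # exact discretization of x' = -x + u for piecewise-constant u
    x[k + 1] = np.exp(-h) * x[k] + (1 - np.exp(-h)) * u[k]

beta = abs(x[0]) * np.exp(-t)     # beta(|x0|, t)
gamma = np.max(np.abs(u))         # gamma(||u||_inf)
print(np.max(np.abs(x)), gamma)
```

After the transient β(|x0|, t) dies out, the trajectory is confined to the ball of radius γ(‖u‖_L∞) — the ultimate bound described above.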

186 CHAPTER 7. INPUT-TO-STATE STABILITY

Alternative definition: A variation of Definition 7.1 is to replace equation (7.3) with the following equation:

‖x(t)‖ ≤ max{β(‖x0‖, t), γ(‖uT(·)‖_L∞)},  ∀t ≥ 0, 0 ≤ T ≤ t.  (7.4)

The equivalence between (7.4) and (7.3) follows from the fact that, given β ≥ 0 and γ ≥ 0,

max{β, γ} ≤ β + γ ≤ max{2β, 2γ}.

On occasions, (7.4) might be preferable to (7.3), especially in the proof of some results.

It seems clear that the concept of input-to-state stability is quite different from that of stability in the sense of Lyapunov. Nevertheless, we will show in the next section that ISS can be investigated using Lyapunov-like methods. To this end, we now introduce the concept of input-to-state Lyapunov function (ISS Lyapunov function).

Definition 7.2 A continuously differentiable function V : D → ℝ is said to be an ISS Lyapunov function on D for the system (7.1) if there exist class K functions α1, α2, α3, and χ such that the following two conditions are satisfied:

α1(‖x‖) ≤ V(x(t)) ≤ α2(‖x‖),  ∀x ∈ D, t ≥ 0  (7.5)

(∂V/∂x) f(x, u) ≤ −α3(‖x‖),  ∀x ∈ D, u ∈ Du : ‖x‖ ≥ χ(‖u‖).  (7.6)

V is said to be an ISS Lyapunov function if D = ℝⁿ, Du = ℝᵐ, and α1, α2, α3 ∈ K∞.

Remarks: According to Definition 7.2, given a positive definite function V, V is an ISS Lyapunov function for the system (7.1) if it has the following properties:

(a) It is positive definite in D. Notice that, according to the property of Lemma 3.1, there exist class K functions α1 and α2 satisfying equation (7.5).

(b) It is negative definite along the trajectories of (7.1) whenever the trajectories are outside of the ball defined by ‖x*‖ = χ(‖u‖).

7.3 Input-to-State Stability (ISS) Theorems

Theorem 7.1 (Local ISS Theorem) Consider the system (7.1) and let V : D → ℝ be an ISS Lyapunov function for this system. Then (7.1) is input-to-state stable according to

7.3. INPUT-TO-STATE STABILITY (ISS) THEOREMS 187

Definition 7.1, with

γ = α1⁻¹ ∘ α2 ∘ χ  (7.7)
k1 = α2⁻¹(α1(r))  (7.8)
k2 = χ⁻¹(min{k1, χ(ru)}).  (7.9)

Theorem 7.2 (Global ISS Theorem) If the preceding conditions are satisfied with D = ℝⁿ and Du = ℝᵐ, and if α1, α2, α3 ∈ K∞, then the system (7.1) is globally input-to-state stable.

Proof of Theorem 7.1: Notice first that if u = 0, then the defining conditions (7.5) and (7.6) of the ISS Lyapunov function guarantee that the origin is asymptotically stable. Now consider a nonzero input u, and let

ru = sup_t ‖uT‖ = ‖u‖_L∞,  t ≥ 0, 0 ≤ T ≤ t.

Also define

c = α2(χ(ru)),  Ωc = {x ∈ D : V(x) ≤ c}

and notice that Ωc ⊂ D. The set Ωc, so defined, is bounded and closed (i.e., Ωc is a compact set), and thus it includes its boundary, denoted ∂(Ωc). It then follows by the right-hand side of (7.5) that the open set of points

{x ∈ ℝⁿ : ‖x‖ < χ(ru)} ⊂ Ωc;

moreover, ‖x‖ ≥ χ(‖u(t)‖) at each point x in the boundary of Ωc. Notice also that condition (7.6) implies that V̇(x(t)) < 0, ∀t ≥ 0, whenever x(t) ∉ Ωc. We now consider two cases of interest: (1) x0 is inside Ωc, and (2) x0 is outside Ωc.

Case (1) (x0 ∈ Ωc): The previous argument shows that the closed set Ωc is surrounded by ∂(Ωc), along which V̇(x(t)) is negative definite. Therefore, whenever x0 ∈ Ωc, x(t) is locked inside Ωc for all t ≥ 0. Trajectories x(t) are such that

α1(‖x‖) ≤ V(x(t)) ⟹ ‖x(t)‖ ≤ α1⁻¹(V(x(t))) ≤ α1⁻¹(c) = α1⁻¹(α2(χ(ru))).

Defining

γ ≜ α1⁻¹ ∘ α2 ∘ χ(‖uT‖_L∞),  0 ≤ T ≤ t

188 CHAPTER 7. INPUT-TO-STATE STABILITY

we conclude that whenever x0 ∈ Ωc,

‖x(t)‖ ≤ γ(‖uT‖_L∞),  ∀t ≥ 0, 0 ≤ T ≤ t.  (7.10)

Case (2) (x0 ∉ Ωc): Assume that x0 ∉ Ωc, or equivalently, V(x0) > c. As long as the trajectory remains outside Ωc, we must have V(x) > c, and so, by (7.6), ‖x‖ ≥ χ(‖u‖) and thus V̇(x(t)) < 0. It then follows that for some t1 > 0 we must have

V(x(t)) > c for 0 ≤ t < t1,  and  V(x(t1)) = c.

But this implies that when t = t1, x(t) ∈ ∂(Ωc), and we are back in case (1). This argument also shows that x(t) is bounded and that there exists a class KL function β such that

‖x(t)‖ ≤ β(‖x0‖, t),  for 0 ≤ t ≤ t1.  (7.11)

Combining (7.10) and (7.11), we obtain

‖x(t)‖ ≤ β(‖x0‖, t),  for 0 ≤ t ≤ t1
‖x(t)‖ ≤ γ(‖u‖_L∞),  for t ≥ t1

and thus we conclude that

‖x(t)‖ ≤ max{β(‖x0‖, t), γ(‖u‖_L∞)},  ∀t ≥ 0

or, equivalently,

‖x(t)‖ ≤ β(‖x0‖, t) + γ(‖u‖_L∞),  ∀t ≥ 0.

To complete the proof, it remains to show that k1 and k2 are given by (7.8) and (7.9). For k1, notice that trajectories must remain inside D = {x : ‖x‖ < r}; since

α1(‖x‖) ≤ V(x) ≤ α2(‖x‖)

it suffices that α2(‖x0‖) ≤ α1(r), and thus

k1 ≜ ‖x0‖ ≤ α2⁻¹(α1(r))

u).9)x4 . We need to find (7.5) with al(IIxII) = 020411) = 2x2. 7.3. provided that aOIxI3 > Iul . where the parameter 0 is such that 0 < 0 < 1 V = -ax4 + aOx4 . we propose the ISS Lyapunov function candidate V (x) = This function is positive definite and satisfies (7.u) > 0.X(ru)}).12).y(Ilutlloo)} IjxII Vt> 0. whenever IIxHj > X(IjuII).3.1 Examples We now present several examples in which we investigate the ISS property of a system. To see (7. Example 7. a>0.2 is not an easy task. something that should not come as a surprise since even proving asymptotic stability has proved to be quite a headache.t).2 Consider the following system: ±=-ax3+u 2x2.12) and X(.x(aOx3 . k2 = X-1(min{k1. our examples are deliberately simple and consist of various alterations of the same system. 0 <T <t X(k2) < Thus < min{kl. This will be the case.9) notice that IMx(t)JI 189 < max{Q(IIxol(.7.1 and 7. To this end we proceed by adding and subtracting aOx4 to the right-hand side of (7. X(ru))}.9)x4 = a3(IIx1I) provided that x(aOx3 .aOx4 + xu = -a(1 . It should be mentioned that application of Theorems 7. Thus. To check for input-to-state stability.) E K such that V(x) < -a3(IIxII). INPUT-TO-STATE STABILITY (ISS) THEOREMS which is (7.u) < -a(1. We have V = -x(ax3 .8).

190 CHAPTER 7. INPUT-TO-STATE STABILITY

or, equivalently,

|x| ≥ (|u| / (aθ))^{1/3}.

It follows that the system is globally input-to-state stable with

γ(u) = (|u| / (aθ))^{1/3}. □

Example 7.3 Now consider the following system, which is a slightly modified version of the one in Example 7.2:

ẋ = −ax³ + x²u,  a > 0.

Using the same ISS Lyapunov function candidate used in Example 7.2, we have that

V̇ = −ax⁴ + x³u = −ax⁴ + aθx⁴ − aθx⁴ + x³u
  = −a(1 − θ)x⁴ − x³(aθx − u)
  ≤ −a(1 − θ)x⁴,  0 < θ < 1

provided that x³(aθx − u) ≥ 0, which is the case whenever

|x| ≥ |u| / (aθ).

Thus, the system is globally input-to-state stable with

γ(u) = |u| / (aθ). □

Example 7.4 Now consider the following system, which is yet another modified version of the one in Examples 7.2 and 7.3:

ẋ = −ax³ + x(1 + x²)u,  a > 0.

Using the same ISS Lyapunov function candidate of Example 7.2, we have that

V̇ = −ax⁴ + x(1 + x²)u = −ax⁴ + aθx⁴ − aθx⁴ + x(1 + x²)u
  = −a(1 − θ)x⁴ − x[aθx³ − (1 + x²)u]
  ≤ −a(1 − θ)x⁴,  0 < θ < 1

provided
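A quick numerical experiment (ours) with a = 1 shows why Example 7.4 is only locally input-to-state stable: with the constant input u = c the system reads ẋ = x(c + (c − 1)x²), which stays bounded for a small input but exhibits finite escape for a large one.

```python
# Example 7.4 with a = 1: x' = -x^3 + x*(1 + x^2)*u.
def step(x, c, h):
    """One forward-Euler step with constant input u = c."""
    return x + h * (-x**3 + x * (1 + x**2) * c)

h = 1e-4
x_small, x_large = 1.0, 1.0
for _ in range(400000):          # t in [0, 40]: small input u = 0.1
    x_small = step(x_small, 0.1, h)
for _ in range(4000):            # t in [0, 0.4]: large input u = 1.5
    x_large = step(x_large, 1.5, h)

print(x_small)   # -> equilibrium sqrt(0.1/0.9) = 0.333..., bounded
print(x_large)   # already growing rapidly toward a finite escape
```

For c = 0.1 the trajectory converges to the equilibrium √(c/(1 − c)) ≈ 0.333, consistent with a local ISS bound; for c = 1.5 the cubic term dominates with the wrong sign and the state escapes in finite time, so no global γ can exist.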

7.4. INPUT-TO-STATE STABILITY REVISITED 191

x[aθx³ − (1 + x²)u] ≥ 0.  (7.13)

Now assume that the sets D and Du are the following:

D = {x ∈ ℝⁿ : ‖x‖ ≤ r},  Du = ℝ.

Under these conditions, (7.13) is satisfied if

|x| ≥ ((1 + r²)|u| / (aθ))^{1/3}.

Therefore, the system is locally input-to-state stable with

k1 = r,  γ(u) = ((1 + r²)u / (aθ))^{1/3},  k2 = χ⁻¹[min{k1, χ(ru)}] = χ⁻¹(r) = aθr³ / (1 + r²). □

7.4 Input-to-State Stability Revisited

In this section we provide several remarks and further results related to input-to-state stability. Theorems 7.1 and 7.2 state that the existence of an ISS Lyapunov function is a sufficient condition for input-to-state stability. As with all the Lyapunov theorems in Chapters 3 and 4, there is a converse result that guarantees the existence of an ISS Lyapunov function whenever a system is input-to-state stable. For completeness, we now state this theorem without proof.

Theorem 7.3 The system (7.1) is input-to-state stable if and only if there exists an ISS Lyapunov function V : D → ℝ satisfying the conditions of Theorem 7.1 or 7.2, respectively.

Proof: See Reference [73].

Theorems 7.4 and 7.5 are important in that they provide conditions for local and global input-to-state stability, respectively, using only "classical" Lyapunov stability theory.

Theorem 7.4 Consider the system (7.1). Assume that the origin is an asymptotically stable equilibrium point for the autonomous system ẋ = f(x, 0), and that the function f(x, u) is continuously differentiable. Under these conditions, the system (7.1) is locally input-to-state stable.

192 CHAPTER 7. INPUT-TO-STATE STABILITY

Theorem 7.5 Consider the system (7.1). Assume that the origin is an exponentially stable equilibrium point for the autonomous system ẋ = f(x, 0), and that the function f(x, u) is continuously differentiable and globally Lipschitz in (x, u). Under these conditions, the system (7.1) is input-to-state stable.

Proof of Theorems 7.4 and 7.5: See the Appendix.

Theorem 7.6 gives an alternative characterization of ISS Lyapunov functions that will be useful in later sections.

Theorem 7.6 A continuous function V : D → ℝ is an ISS Lyapunov function on D for the system (7.1) if and only if there exist class K functions α1, α2, α3, and σ such that the following two conditions are satisfied:

α1(‖x‖) ≤ V(x(t)) ≤ α2(‖x‖),  ∀x ∈ D, t ≥ 0  (7.14)

∇V(x) · f(x, u) ≤ −α3(‖x‖) + σ(‖u‖),  ∀x ∈ D, ∀u ∈ Du.  (7.15)

V is an ISS Lyapunov function if D = ℝⁿ, Du = ℝᵐ, and α1, α2, α3, σ ∈ K∞.

Proof: Assume first that (7.15) is satisfied. Then, for any θ with 0 < θ < 1, we have that

∇V · f(x, u) ≤ −(1 − θ)α3(‖x‖) − θα3(‖x‖) + σ(‖u‖)
            ≤ −(1 − θ)α3(‖x‖),  ∀‖x‖ ≥ α3⁻¹(σ(‖u‖)/θ)

which shows that (7.6) holds with χ(r) = α3⁻¹(σ(r)/θ). For the converse, assume that (7.6) is satisfied. To see that (7.15) holds, we consider two different scenarios:

(a) ‖x‖ ≥ χ(‖u‖): This case is trivial. Indeed, under these conditions (7.6) implies that (7.15) holds for any σ(·).
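The dissipation-type condition (7.15) can be checked numerically for a concrete choice. Taking the system ẋ = −x³ + u of Example 7.2 (a = 1) with V = x²/2, Young's inequality |x||u| ≤ |x|⁴/4 + (3/4)|u|^{4/3} gives V̇ = −x⁴ + xu ≤ −(3/4)x⁴ + (3/4)|u|^{4/3}, i.e., α3(r) = (3/4)r⁴ and σ(r) = (3/4)r^{4/3}. The sketch below (ours) verifies this pointwise over a grid:

```python
import numpy as np

# Check Vdot = -x^4 + x*u <= -(3/4)x^4 + (3/4)|u|^{4/3} on a grid.
x = np.linspace(-10.0, 10.0, 501)
u = np.linspace(-10.0, 10.0, 501)
X, U = np.meshgrid(x, u)

vdot = -X**4 + X * U                                  # actual derivative of V
rhs = -0.75 * np.abs(X)**4 + 0.75 * np.abs(U)**(4/3)  # -alpha3(|x|) + sigma(|u|)
gap = rhs - vdot                                      # should be >= 0 everywhere
print(gap.min())
```

Since the inequality is exactly Young's inequality with exponents p = 4, q = 4/3, the gap is nonnegative everywhere, confirming that this (α3, σ) pair satisfies (7.15) globally.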

7.4. INPUT-TO-STATE STABILITY REVISITED 193

(b) ‖x‖ < χ(‖u‖): Define

φ(r) = max { ∇V(x) · f(x, u) + α3(χ(‖u‖)) : ‖u‖ = r, ‖x‖ ≤ χ(r) }.

The function φ, so defined, satisfies the following: (i) φ(0) = 0; (ii) it is nonnegative; and (iii) it is continuous. It may not be in the class K, however, since it may not be strictly increasing. Defining

φ̄(r) = max{0, φ(r)}

we can always find σ ∈ K such that σ(r) ≥ φ̄(r). Then we have that

∇V · f(x, u) ≤ −α3(‖x‖) + σ(‖u‖)

which is (7.15). This completes the proof. □

Remarks: The only difference between Theorem 7.6 and Definition 7.2 is that condition (7.6) in Definition 7.2 has been replaced by condition (7.15). For reasons that will become clear in Chapter 9, inequality (7.15) is called the dissipation inequality. Condition (7.15) permits a more transparent view of the concept and implications of input-to-state stability in terms of ISS Lyapunov functions. To see this, notice that, given ru > 0, there exist points x ∈ ℝⁿ such that α3(‖x‖) = σ(ru). This implies that ∃d ∈ ℝ such that α3(d) = σ(ru), or

d = α3⁻¹(σ(ru)).

Denoting Bd = {x ∈ ℝⁿ : ‖x‖ ≤ d}, we have that for any ‖x‖ > d and any u : ‖u‖_L∞ ≤ ru,

∇V · f(x, u) ≤ −α3(‖x‖) + σ(‖u‖) ≤ −α3(d) + σ(‖u‖_L∞) ≤ 0.

This means that the trajectory x(t) resulting from an input u(t) with ‖u‖_L∞ ≤ ru will eventually (i.e., at some t = t') enter the region

Ωd = {x : V(x) ≤ max_{‖x‖≤d} V(x)}.

194 CHAPTER 7. INPUT-TO-STATE STABILITY

Once inside this region, the trajectory is trapped inside Ωd, because of the condition on V̇. Notice that the region Ωd depends on the composition α3⁻¹ ∘ σ. It would then appear that it is the composition of these two functions that determines the correspondence between a bound on the input function u and a bound on the state x. The functions α3(·) and σ(·) constitute a pair [α3(·), σ(·)], referred to as a supply pair, that is perhaps nonunique. Our next theorem shows that this is in fact the case.

We will need the following notation. Given functions x(·), y(·) : ℝ → ℝ, we say that x(s) = O(y(s)) as s → 0+ if

lim_{s→0+} |x(s)| / |y(s)| < ∞

and, similarly, x(s) = O(y(s)) as s → ∞ if

lim_{s→∞} |x(s)| / |y(s)| < ∞.

Theorem 7.7 Let (α, σ) be a supply pair for the system (7.1). Assume that σ̃ is a K∞ function satisfying σ̃(r) = O(σ(r)) as r → 0+. Then there exists α̃ ∈ K∞ such that (α̃, σ̃) is a supply pair.

Theorem 7.8 Let (α, σ) be a supply pair for the system (7.1). Assume that α̃ is a K∞ function satisfying α̃(r) = O(α(r)) as r → ∞. Then there exists σ̃ ∈ K∞ such that (α̃, σ̃) is a supply pair.

Proof of Theorems 7.7 and 7.8: See the Appendix.

Remarks: Theorems 7.7 and 7.8 have theoretical importance and will be used in the next section in connection with the stability of cascade connections of ISS systems. In the following theorem we consider the system (7.1) and assume that it is globally input-to-state stable with an ISS pair [α, σ]; that is, there exists a positive definite function V such that

α̲(‖x‖) ≤ V(x) ≤ ᾱ(‖x‖),  ∀x ∈ ℝⁿ  (7.16)

∇V(x) · f(x, u) ≤ −α(‖x‖) + σ(‖u‖),  ∀x ∈ ℝⁿ, u ∈ ℝᵐ  (7.17)

for some α̲, ᾱ ∈ K∞. Theorem 7.8 shows how to construct the new ISS pairs using only the bounds α and σ, but not V itself.

7.5 Cascade-Connected Systems

[Figure 7.1: Cascade connection of ISS systems: u → Σ₂ → z → Σ₁ → x.]

Throughout this section we consider the composite system shown in Figure 7.1, where Σ₁ and Σ₂ are given by

    Σ₁ : ẋ = f(x, z)                                                        (7.18)
    Σ₂ : ż = g(z, u)                                                        (7.19)

where Σ₂ is the system with input u and state z. The state of Σ₂ serves as input to the system Σ₁. In the following lemma we assume that both systems Σ₁ and Σ₂ are input-to-state stable with ISS pairs [α₁, σ₁] and [α₂, σ₂], respectively. This means that there exist positive definite functions V₁ and V₂ such that

    ∇V₁ · f(x, z) ≤ -α₁(‖x‖) + σ₁(‖z‖)                                      (7.20)
    ∇V₂ · g(z, u) ≤ -α₂(‖z‖) + σ₂(‖u‖)                                      (7.21)

The lemma follows our discussion at the end of the previous section and guarantees the existence of alternative ISS pairs [α̃₁, σ̃₁] and [α̃₂, σ̃₂] for the two systems. As it turns out, the new ISS pairs will be useful in the proof of further results.

Lemma 7.1 Given the systems Σ₁ and Σ₂, we have that

(i) Defining

    α̃₂(s) = { α₂(s)     for s "small"
             { 2σ₁(s)    for s "large"

then there exists σ̃₂ such that (α̃₂, σ̃₂) is an ISS pair for the system Σ₂.

(ii) Defining

    σ̃₁(s) = ½ α̃₂(s)

there exists α̃₁ such that [α̃₁, ½ α̃₂] is an ISS pair for the system Σ₁.

Proof: A direct application of Theorems 7.7 and 7.8.

We now state and prove the main result of this section.

Theorem 7.9 Consider the cascade interconnection of the systems Σ₁ and Σ₂. If both systems are input-to-state stable, then the composite system

    Σ : u → [x z]ᵀ

is input-to-state stable.

Proof: By (7.20)-(7.21) and Lemma 7.1, the functions V₁ and V₂ satisfy

    ∇V₁ · f(x, z) ≤ -α̃₁(‖x‖) + ½ α̃₂(‖z‖)
    ∇V₂ · g(z, u) ≤ -α̃₂(‖z‖) + σ̃₂(‖u‖)

Define the ISS Lyapunov function candidate V = V₁ + V₂ for the composite system. We have

    V̇(x, z) = ∇V₁ · f(x, z) + ∇V₂ · g(z, u)
             ≤ -α̃₁(‖x‖) - ½ α̃₂(‖z‖) + σ̃₂(‖u‖).

It then follows that V is an ISS Lyapunov function for the composite system, and the theorem is proved.

As the reader might have guessed, a local version of this result can also be proved. This theorem is somewhat obvious and can be proved in several ways (see Exercise 7.3). For completeness, we now state this result without proof.

Theorem 7.10 Consider the cascade interconnection of the systems Σ₁ and Σ₂. If both systems are locally input-to-state stable, then the composite system Σ is locally input-to-state stable.
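The boundedness guaranteed by a cascade of ISS systems is easy to observe in simulation. The following sketch uses two illustrative linear ISS systems of our own choosing (not taken from the text) and a bounded input, and checks that both states are ultimately confined to a ball whose size is driven by sup|u|:

```python
import math

# Euler simulation of a cascade of two ISS systems (illustrative example):
#   Sigma2: zdot = -z + u   (ISS from u to z)
#   Sigma1: xdot = -x + z   (ISS from z to x)
# With u(t) = sin(t), so sup|u| = 1, the composite state should remain
# bounded and ultimately enter a small neighborhood of the origin.

dt, T = 1e-3, 30.0
x, z = 5.0, -4.0                      # arbitrary initial conditions
traj = []
t = 0.0
while t < T:
    u = math.sin(t)
    z += dt * (-z + u)
    x += dt * (-x + z)
    traj.append((t, x, z))
    t += dt

# After the transient has decayed, both states are trapped near the origin.
tail = [(abs(xv), abs(zv)) for (tv, xv, zv) in traj if tv > 20.0]
max_x = max(xv for xv, zv in tail)
max_z = max(zv for xv, zv in tail)
assert max_z <= 1.1                   # |z| ultimately bounded by about sup|u|
assert max_x <= 1.1
print("ultimate bounds:", max_x, max_z)
```

For these particular linear systems the steady-state amplitudes are in fact about 0.71 for z and 0.5 for x, comfortably inside the asserted bound.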

[Figure 7.2: Cascade connection of ISS systems with input u = 0.]

Here we consider the following special case of the interconnection of Figure 7.1, shown in Figure 7.2:

    Σ₁ : ẋ = f(x, z)                                                        (7.22)
    Σ₂ : ż = g(z) = g(z, 0)                                                 (7.23)

and study the Lyapunov stability of the origin of the interconnected system with state [x z]ᵀ. The following two corollaries, which are direct consequences of Theorems 7.9 and 7.10, are also important and perhaps less obvious.

Corollary 7.1 If the system Σ₁ with input z is locally input-to-state stable and the origin z = 0 of the system Σ₂ is asymptotically stable, then the origin of the interconnected system (7.22)-(7.23) is locally asymptotically stable.

Corollary 7.2 Under the conditions of Corollary 7.1, if Σ₁ is input-to-state stable and the origin z = 0 of the system Σ₂ is globally asymptotically stable, then the origin of the interconnected system (7.22)-(7.23) is globally asymptotically stable.

Proof of Corollaries 7.1 and 7.2: The proofs of both corollaries follow from the fact that, under the assumptions, the system Σ₂ is trivially (locally or globally) input-to-state stable, and then so is the interconnection by application of Theorem 7.9 or 7.10, respectively.
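The corollaries can be illustrated with a small simulation; the two systems below are our own illustrative choices, not examples from the text:

```python
# Euler simulation in the spirit of Corollary 7.2 (illustrative systems):
#   Sigma2: zdot = -z^3        (origin globally asymptotically stable, u = 0)
#   Sigma1: xdot = -x + z      (ISS with respect to its input z)
# The origin of the cascade should attract the trajectory.

dt = 1e-3
x, z = 3.0, 2.0
for _ in range(int(200.0 / dt)):
    x, z = x + dt * (-x + z), z + dt * (-z**3)

# z decays like t^(-1/2) for this system, so convergence is slow but sure.
assert abs(z) < 0.1 and abs(x) < 0.1
print("final state:", x, z)
```

Note that ż = -z³ is only asymptotically (not exponentially) stable, which is why a fairly long horizon is needed before the state is visibly small.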

7.6 Exercises

(7.1) Consider the following 2-input system ([70]):

    ẋ = -x³ + x²u₁ - xu₂ + u₁u₂

Is it input-to-state stable?

(7.2) Sketch the proof of Theorem 7.9.

(7.3) Provide an alternative proof of Theorem 7.10.

(7.4) Sketch a proof of Theorem 7.10, using only the definition of input-to-state stability, Definition 7.1.

(7.5) Consider the following system:

    ẋ₁ = -x₁ + x₁x₂
    ẋ₂ = -x₂ + x₁x₂ + u

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.6) Consider the following system:

    ẋ₁ = -x₁ + x₂²
    ẋ₂ = x₁ - x₂ + u

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.7) Consider the following cascade connection of systems:

    ẋ = -x³ + x²u
    ż = -z³ + z(1 + z²)x

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.8) Consider the following cascade connection of systems:

    ẋ = -x³ + x²u₁ - xu₂ + u₁u₂
    ż = -z³ + z²x

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

(7.9) Consider the following cascade connection of systems:

    ẋ₁ = -x₁ + x₂² + u₁
    ẋ₂ = -x₂ + u₂
    ż  = -z³ + z²x₁ - zx₂ + x₁x₂

(i) Is it locally input-to-state stable? (ii) Is it input-to-state stable?

Notes and References

The literature on input-to-state stability is now very extensive. The concept of input-to-state stability, as presented here, was introduced by Sontag [69]. See also References [70], [71], and [72] for a thorough introduction to the subject containing the fundamental results. Theorems 7.3 and 7.6 were taken from Reference [73]. ISS pairs were introduced in Reference [75]; Theorems 7.7 and 7.8, as well as Section 7.5 on the cascade connection of ISS systems, are based on this reference and on Reference [74]. See also Chapter 10 of Reference [37] for a good survey of results in this area.


Chapter 8

Passivity

The objective of this chapter is to introduce the concept of passivity and to present some of the stability results that can be obtained using this framework. As with the small gain theorem, we look for open-loop conditions for closed-loop stability of feedback interconnections. Throughout this chapter we focus on the classical input-output definition. State space realizations are considered in Chapter 9 in the context of the theory of dissipative systems.

8.1 Power and Energy: Passive Systems

Before we introduce the notion of passivity for abstract systems, it is convenient to motivate this concept with some examples from circuit theory. We begin by recalling from basic physics that power is the time rate at which energy is absorbed or spent:

    power = energy / time.

Then

    p(t) = dw(t)/dt                                                         (8.1)

where w(·) denotes energy and p(·) denotes power. Thus

    w(t) = ∫_{t₀}^{t} p(t) dt.                                              (8.2)

Now consider a basic circuit element, represented in Figure 8.1 using a black box.

[Figure 8.1: Passive network.]

In Figure 8.1 the voltage across the terminals of the box is denoted by v, and the current in the circuit element is denoted by i. The assignment of the reference polarity for voltage and reference direction for current is completely arbitrary. We have

    p(t) = v(t)i(t)                                                         (8.3)

thus, the energy absorbed by the circuit at time t is

    w(t) = ∫_{-∞}^{t} v(t)i(t) dt = ∫_{-∞}^{0} v(t)i(t) dt + ∫_{0}^{t} v(t)i(t) dt.    (8.4)

The first term on the right-hand side of equation (8.4) represents the effect of initial conditions different from zero in the circuit elements. With the indicated sign convention, we have

(i) If w(t) > 0, the box absorbs energy (this is the case, for example, for a resistor).

(ii) If w(t) < 0, the box delivers energy (this is the case, for example, for a battery, with negative voltage with respect to the polarity indicated in Figure 8.1).

In circuit theory, elements that do not generate their own energy are called passive; that is, a circuit element is passive if

    ∫_{-∞}^{t} v(t)i(t) dt ≥ 0.                                             (8.5)

Resistors, capacitors, and inductors indeed satisfy this condition and are therefore called passive elements. Passive networks, in general, are well behaved, in an admittedly ambiguous
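Condition (8.5) is easy to verify numerically for a capacitor. The sketch below (our own illustration; the capacitance value and the test waveform are arbitrary) uses i = C dv/dt and checks that the absorbed energy equals ½Cv² and never goes negative for a circuit starting from rest:

```python
import math

# Numerical check of the passivity condition (8.5) for a capacitor, i = C dv/dt.
# Starting from rest, the absorbed energy w(t) = integral of v*i dt equals
# (1/2) C v(t)^2, which is nonnegative for every t.

C, dt = 2e-6, 1e-5
v_prev, w = 0.0, 0.0
for k in range(1, 100000):
    t = k * dt
    v = math.sin(40.0 * t) * math.exp(-0.5 * t)   # arbitrary test voltage, v(0) = 0
    i = C * (v - v_prev) / dt                      # i = C dv/dt (finite difference)
    w += 0.5 * (v + v_prev) * i * dt               # trapezoidal accumulation of v*i
    v_prev = v
    assert w >= -1e-15                             # energy absorbed is never negative
    assert abs(w - 0.5 * C * v * v) < 1e-9         # w tracks the stored energy

print("final stored energy:", w)
```

With the trapezoidal rule the discrete sum telescopes exactly to ½Cv², so the check holds to floating-point precision; a resistor or inductor admits a similar verification.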

[Figure 8.2: Passive network.]

sense. Stability, in its many forms, is a concept that has been used to describe a desirable property of a physical system, and it is intended to capture precisely the notion of a system that is well behaved, in a certain precise sense. It is not straightforward to capture the notion of good behavior within the context of a theory of networks. If the notion of passivity in networks is to be of any productive use, then we should be able to infer some general statements about the behavior of a passive network.

To study this proposition, we consider the circuit shown in Figure 8.2, where we assume that the black box contains a passive (linear or not) circuit element. Assuming that the network is initially relaxed, and using Kirchhoff's voltage law, we have

    e(t) = i(t)R + v(t).

Assume now that the electromotive force (emf) source is such that

    ∫₀^∞ e²(t) dt < ∞.

We have

    ∫₀^T e²(t) dt = ∫₀^T (i(t)R + v(t))² dt
                  = R² ∫₀^T i²(t) dt + 2R ∫₀^T i(t)v(t) dt + ∫₀^T v²(t) dt

and since the black box is passive, ∫₀^T i(t)v(t) dt ≥ 0. It follows that

    ∫₀^T e²(t) dt ≥ R² ∫₀^T i²(t) dt + ∫₀^T v²(t) dt.

Moreover, since the applied voltage is such that ∫₀^∞ e²(t) dt < ∞, we can take limits as T → ∞ on both sides of this inequality, and we have

    R² ∫₀^∞ i²(t) dt + ∫₀^∞ v²(t) dt ≤ ∫₀^∞ e²(t) dt < ∞

which implies that both i and v have finite energy. This, in turn, implies that the energy in these two quantities can be controlled from the input source e(·), and in this sense we can say that the network is well behaved. In the next section, we formalize these ideas in the context of the theory of input-output systems and generalize these concepts to more general classes of systems. In particular, we want to draw conclusions about the feedback interconnection of systems based on the properties of the individual components.

8.2 Definitions

Before we can define the concept of passivity and study some of its properties, we need to introduce our notation and lay down the mathematical machinery. The essential tool needed in the passivity definition is that of an inner product space.

Definition 8.1 A real vector space X is said to be a real inner product space if for every two vectors x, y ∈ X there exists a real number (x, y) that satisfies the following properties:

(i) (x, y) = (y, x)

(ii) (x + y, z) = (x, z) + (y, z)   ∀x, y, z ∈ X

(iii) (αx, y) = α(x, y)   ∀α ∈ ℝ

(iv) (x, x) ≥ 0   ∀x ∈ X

(v) (x, x) = 0 if and only if x = 0.

The function X × X → ℝ is called the inner product of the space X. Using these properties we can define a norm for each element of the space X as follows:

    ‖x‖_X = √(x, x).                                                        (8.6)

If the space X is complete, then the inner product space is said to be a Hilbert space. An important property of inner product spaces is the so-called Schwarz inequality:

    |(x, y)| ≤ ‖x‖_X ‖y‖_X   ∀x, y ∈ X.
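The Schwarz inequality is easy to spot-check numerically. The sketch below (an illustration of ours; the two signals are arbitrary) approximates the L₂ inner product on a finite horizon by sampling:

```python
import math

# Spot check of the Schwarz inequality |(x, y)| <= ||x|| ||y|| for the
# natural L2 inner product, approximated by a Riemann sum on [0, 5].

dt, n = 1e-3, 5000

def inner(f, g):
    return sum(f(k * dt) * g(k * dt) for k in range(n)) * dt

f = lambda t: math.sin(3 * t) + 0.5 * t          # arbitrary test signals
g = lambda t: math.exp(-t) * math.cos(7 * t)

lhs = abs(inner(f, g))
rhs = math.sqrt(inner(f, f)) * math.sqrt(inner(g, g))
assert lhs <= rhs + 1e-12
print(lhs, "<=", rhs)
```

Since the sampled sum is itself an inner product on ℝⁿ, the inequality holds for the discrete approximation exactly, not just in the limit dt → 0.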

Throughout the rest of this chapter we will assume that X is a real inner product space.

Example 8.1 Let X be ℝⁿ. Then the usual "dot product" in ℝⁿ, defined by

    x · y = xᵀy = x₁y₁ + x₂y₂ + ··· + xₙyₙ                                   (8.7)

defines an inner product in ℝⁿ. It is straightforward to verify that the defining properties (i)-(v) are satisfied.

Example 8.2 Let X = L₂, the space of finite-energy functions:

    X = {x : ℝ⁺ → ℝⁿ such that (x, x) = ∫₀^∞ x²(t) dt < ∞}

and moreover

    ‖x‖²_{L₂} = (x, x) = ∫₀^∞ ‖x(t)‖² dt.

For the most part, our attention will be centered on continuous-time systems. Virtually all of the literature dealing with this type of system makes use of the following inner product:

    (x, y) = ∫₀^∞ x(t) · y(t) dt                                            (8.8)

where x · y indicates the usual dot product in ℝⁿ. This inner product is usually referred to as the natural inner product in L₂. We have chosen, however, to state our definitions in more general terms, given that our discussion will not be restricted to continuous-time systems; there is no a priori reason to assume that this is the only interesting inner product that one can find.

In the sequel, we will need the extension Xe of the space X (defined, as usual, as the space of all functions whose truncation belongs to X), and we assume that the inner product satisfies the following:

    (x_T, y) = (x, y_T) = (x_T, y_T) ≝ (x, y)_T   ∀T ∈ ℝ⁺.                  (8.10)

Notice that, with this inner product, Xe = L₂e is the space of all functions whose truncation x_T belongs to L₂, regardless of whether x(t) itself belongs to L₂. For instance, the function x(t) = eᵗ belongs to L₂e even though it is not in L₂.

Definition 8.2 (Passivity) A system H : Xe → Xe is said to be passive if there exists β ∈ ℝ such that

    (u, Hu)_T ≥ β   ∀u ∈ Xe, ∀T ∈ ℝ⁺.
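A classical example satisfying Definition 8.2 is the integrator, for which (u, Hu)_T = ½y(T)² ≥ 0 for every truncation time. The sketch below (our own illustration; the input is arbitrary) verifies this numerically with β = 0:

```python
import math

# Numerical check of Definition 8.2 for the integrator H : u -> y,
# y(t) = integral of u on [0, t], with the natural L2 inner product and
# beta = 0 (system initially relaxed). Since u = dy/dt,
# (u, Hu)_T = (1/2) y(T)^2 >= 0 for every T, so H is passive.

dt = 1e-4
u = lambda t: math.cos(5.0 * t) - 0.3          # arbitrary test input
y, ip = 0.0, 0.0                               # state and running (u, Hu)_T
for k in range(200000):                        # T sweeps over [0, 20]
    uk = u(k * dt)
    y += dt * uk                               # y(t) = integral of u
    ip += dt * uk * y                          # running truncated inner product
    assert ip >= -1e-12                        # passivity: (u, Hu)_T >= 0 for all T

print("(u, Hu)_T at T = 20:", ip)
```

In the discrete sum the running value equals ½y² plus a nonnegative correction, so the assertion holds exactly, mirroring the continuous-time argument.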

Definition 8.3 (Strict Passivity) A system H : Xe → Xe is said to be strictly passive if there exist δ > 0 and β ∈ ℝ such that

    (u, Hu)_T ≥ δ‖u_T‖²_X + β   ∀u ∈ Xe, ∀T ∈ ℝ⁺.                           (8.11)

The constant β in Definitions 8.2 and 8.3 is a bias term included to account for the possible effect of energy initially stored in the system at t = 0. According to Definition 8.2, only a finite amount of energy, initially stored at time t = 0, can be extracted from a passive system. To emphasize these ideas, we go back to our network example.

Example 8.3 Consider again the network of Figure 8.1. From equation (8.4) we know that the total energy absorbed by the network at time t is

    ∫_{-∞}^{t} v(t)i(t) dt = ∫_{-∞}^{0} v(t)i(t) dt + ∫_{0}^{t} v(t)i(t) dt
                           = (v(t), i(t))_T + ∫_{-∞}^{0} v(t)i(t) dt.

To analyze this network as an abstract system with input u and output y = Hu, we define

    u = v(t),   y = Hu = i(t).

Therefore, according to Definition 8.2, the network is passive if and only if

    (x, Hx)_T = (v(t), i(t))_T ≥ β.

Choosing the inner product to be the inner product in L₂, the last inequality is equivalent to the following:

    ∫₀^T x(t)y(t) dt = ∫₀^T v(t)i(t) dt ≥ β   ∀v(t) ∈ Xe, ∀T ∈ ℝ⁺.

Closely related to the notions of passivity and strict passivity are the concepts of positivity and strict positivity, introduced next.

12) The following theorem shows that if a system is (i) causal and (ii) stable. HU)T > 6IIuTII2. Hu) > bI uI X + /. and by (8.11). We have that (i) H positive b H passive. if the system H is not input-output stable. For the converse.12) we have that (UT.12) is unbounded.12) with b = 0. the notions of positivity and strict positivity apply to input-output stable systems exclusively. (8.12) and consider an arbitrary input u E Xe follows that UT E X. (ii) H strictly positive H strictly passive.9) since H is causal by (8.12). then the notions of positivity and passivity are entirely equivalent.9). Proof: First assume that H satisfies (8. (HUT)T) = (UT. By (8. we have that (u. HUT) ? 5U7' It +0 by (8. we conclude that (8. Hu)T It follows that (u.8. then the left-hand side of (8. As a consequence.11). (Hu)T) (u.Hu)T ? SIIuTIIX + a but (u.4 : A system H : X -a X is said to be strictly positive if there exists d > 0 (u. Theorem 8. HUT) (UT. The only difference between the notions of passivity and positivity (strict passivity and strict positivity) is the lack of truncations in (8.2. DEFINITIONS such that 207 Definition 8. (HUT)T) (UT. HUT) . Hu)T = (UT. and let H be causal. assume that H satisfies (8. Notice that.12) implies (8. .1 Consider a system H : X --> X . (Hu)T) = (UT.1 H is said to be positive if it satisfies (8.11) and consider an arbitrary input u E X. Vu E X. but (UT. and since u E Xe is arbitrary.

.13) is passive. . e(t) E Xe and is uniquely determined for each u(t) E Xe). n...11) implies (8. PASSIVITY Thus (UT. i = 1. then the system H : Xe + Xe .e.Xe .. since u E X and H : X .12) and the second assertion of the theorem is proved..208 CHAPTER 8.15) is passive. i = 1.2 Consider a finite number of systems Hi : Xe . (iii) If the systems Hi..(Hl+. . then. We have (i) If all of the systems Hi. (ii) If all the systems Hi. Part (i) is immediately obvious assuming that b = 0. the mapping from u into y defined by equations (8.Hlx+.X.14) (8.3) .14)-(8. The following Theorem considers two important cases Theorem 8.13) is strictly passive. defined by (see Figure 8. HUT) > IIUTII2 + /3. Hu) > 8IIuIIX + Q Thus we conclude that (8. 2 are passive and the feedback interconnection defined by the equations (Figure 8.15) is well defined (i.+Hnx)T (x+ Hlx)T + . (8. i = 1. n are passive.3 Interconnections of Passivity Systems In many occasions it is important to study the properties of combinations of passive systems. then the system H defined by equation (8. and at least one of them is strictly passive. i = 1.. Moreover. which is valid for all T E R+. n are passive. Proof of Theorem 8. This completes the proof. + (x+ Hnx)T deJ N. 8.4) e = u-H2y y = Hie (8. . we can take limits as T -* oo to obtain (u.2 Proof of (i): We have (x.+Hn)x)T = (x.

[Figure 8.3: Parallel combination of the systems H₁, ..., Hₙ.]

[Figure 8.4: The Feedback System S₁.]

Thus, H ≝ (H₁ + ··· + Hₙ) is passive.

Proof of (ii): Assume that k out of the n systems Hᵢ are strictly passive, 1 ≤ k ≤ n. By relabeling the systems, if necessary, we can assume that these are the systems H₁, H₂, ..., Hₖ. It follows that

    (x, Hx)_T = (x, H₁x)_T + ··· + (x, Hₖx)_T + ··· + (x, Hₙx)_T
              ≥ δ₁(x, x)_T + ··· + δₖ(x, x)_T + β₁ + ··· + βₙ

and the result follows.

Proof of (iii): Consider the following inner product:

    (u, y)_T = (e + H₂y, y)_T = (e, y)_T + (H₂y, y)_T = (e, H₁e)_T + (H₂y, y)_T ≥ β₁ + β₂.

This completes the proof.

Remarks: In general, the number of systems in parts (i) and (ii) of Theorem 8.2 cannot be assumed to be infinite. The validity of these results in the case of an infinite sequence of systems depends on the properties of the inner product. It can be shown, however, that if the inner product is the standard inner product in L₂, then this extension is indeed valid. The proof is omitted since it requires some relatively advanced results on Lebesgue integration.

8.3.1 Passivity and Small Gain

The purpose of this section is to show that, in an inner product space, the concept of passivity is closely related to the norm of a certain operator, to be defined. In the following theorem Xe is an inner product space, and the gain of a system H : Xe → Xe is the gain induced by the norm ‖x‖² = (x, x).

Theorem 8.3 Let H : Xe → Xe, and assume that (I + H) is invertible in Xe, that is, assume that (I + H)⁻¹ : Xe → Xe. Define the function S : Xe → Xe:

    S = (H - I)(I + H)⁻¹.                                                   (8.16)

We have:

[Figure 8.5: The Feedback System S₁.]

(a) H is passive if and only if the gain of S is at most 1, that is, S is such that

    ‖(Sx)_T‖_X ≤ ‖x_T‖_X   ∀x ∈ Xe, ∀T ∈ ℝ⁺.                                (8.17)

(b) H is strictly passive and has finite gain if and only if the gain of S is less than 1.

Proof: See the Appendix.

8.4 Stability of Feedback Interconnections

In this section we exploit the concept of passivity in the stability analysis of feedback interconnections. Our first result consists of the simplest form of the passivity theorem. The simplicity of the theorem stems from considering a feedback system with one single input u₁, as shown in Figure 8.5 (u₂ = 0 in the feedback system used in Chapter 6). To simplify our proofs, we assume without loss of generality that the systems are initially relaxed, and so the constant β in Definitions 8.2 and 8.3 is identically zero.

Theorem 8.4 Let H₁, H₂ : Xe → Xe and consider the feedback interconnection defined by the following equations:

    e₁ = u₁ - H₂y₁                                                          (8.18)
    y₁ = H₁e₁                                                               (8.19)

Under these conditions, if H₁ is passive and H₂ is strictly passive, then y₁ ∈ X for every u₁ ∈ X.

Proof: We have

    (u₁, y₁)_T = (u₁, H₁e₁)_T = (e₁ + H₂y₁, H₁e₁)_T
               = (e₁, H₁e₁)_T + (H₂y₁, H₁e₁)_T

but H₁e₁ = y₁, so that

    (u₁, y₁)_T = (e₁, H₁e₁)_T + (H₂y₁, y₁)_T ≥ δ‖y₁T‖²_X

since H₁ and H₂ are passive and strictly passive, respectively. By the Schwarz inequality,

    |(u₁, y₁)_T| ≤ ‖u₁T‖_X ‖y₁T‖_X.

Hence

    ‖u₁T‖_X ‖y₁T‖_X ≥ δ‖y₁T‖²_X   ⟹   ‖y₁T‖_X ≤ δ⁻¹‖u₁T‖_X.                 (8.20)

Therefore, if u₁ ∈ X, we can take limits as T tends to infinity on both sides of inequality (8.20) to obtain

    ‖y₁‖_X ≤ δ⁻¹‖u₁‖_X

which shows that if u₁ is in X, then y₁ is also in X. According to our definition of input-output stability, this implies that the closed-loop system, seen as a mapping from u₁ to y₁, is input-output stable.

Remarks: Theorem 8.4 says exactly the following: if H₁ is passive and H₂ is strictly passive, then the output y₁ is bounded whenever the input u₁ is bounded. Assuming that the feedback system of equations (8.18)-(8.19) admits a solution, the theorem, however, does not guarantee that the error e₁ and the output y₂ are bounded. For these two signals to be bounded, we need a stronger assumption; namely, the strictly passive system must also have finite gain. We consider this case in our next theorem.

Theorem 8.5 Let H₁, H₂ : Xe → Xe and consider again the feedback system of equations (8.18)-(8.19). Under these conditions, if both systems are passive and one of them is (i) strictly passive and (ii) has finite gain, then e₁, e₂, y₁, and y₂ are in X whenever u₁ ∈ X.
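The bound (8.20) can be observed in simulation. In the sketch below (our own illustrative choices, not from the text) H₁ is an integrator, which is passive, and H₂ is the static gain δ, which is strictly passive; the closed loop reduces to ẏ₁ = u₁ - δy₁, and the truncated L₂ norms are checked against the bound at every instant:

```python
import math

# Simulation of the loop of Theorem 8.4 with H1 = integrator (passive) and
# H2 = static gain delta (strictly passive). Closed loop: y1' = u1 - delta*y1.
# The theorem predicts ||y1_T|| <= (1/delta) ||u1_T|| for every truncation T.

delta, dt = 2.0, 1e-4
y1 = 0.0
nu2, ny2 = 0.0, 0.0                      # running squared L2 norms of u1, y1
for k in range(300000):                  # T sweeps over [0, 30]
    t = k * dt
    u1 = math.sin(t) * math.exp(-0.05 * t)
    e1 = u1 - delta * y1
    y1 += dt * e1
    nu2 += dt * u1 * u1
    ny2 += dt * y1 * y1
    # truncated-norm bound of (8.20), checked at every instant
    assert math.sqrt(ny2) <= math.sqrt(nu2) / delta + 1e-6

print("||y1|| =", math.sqrt(ny2), " (1/delta)||u1|| =", math.sqrt(nu2) / delta)
```

For this particular loop the bound is conservative (the input is concentrated near ω = 1, where the actual gain is below 1/δ), which is consistent with (8.20) being a worst-case estimate.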

[Figure 8.6: The Feedback System S.]

Proof: We prove the theorem assuming that H₁ is passive and H₂ is strictly passive with finite gain. The opposite case (i.e., H₂ is passive and H₁ is strictly passive with finite gain) is entirely similar. Proceeding as in Theorem 8.4 (equation (8.20)), we obtain

    ‖y₁T‖_X ≤ δ⁻¹‖u₁T‖_X                                                    (8.21)

so that y₁ ∈ X whenever u₁ ∈ X. Also y₂ = H₂y₁, and since H₂ has finite gain, we obtain

    ‖y₂T‖_X = ‖(H₂y₁)_T‖_X ≤ γ(H₂)‖y₁T‖_X ≤ γ(H₂) δ⁻¹‖u₁T‖_X.

Thus, y₂ ∈ X whenever u₁ ∈ X. Finally, from equation (8.18) we have

    ‖e₁T‖_X ≤ ‖u₁T‖_X + ‖(H₂y₁)T‖_X ≤ ‖u₁T‖_X + γ(H₂)‖y₁T‖_X

which, taking account of (8.21), implies that e₁ ∈ X whenever u₁ ∈ X.

Remarks: Theorem 8.5 is general enough for most purposes. For completeness, we include the following theorem, which shows that the result of Theorem 8.5 is still valid if the feedback system is excited by two external inputs (Figure 8.6).

Theorem 8.6 Let H₁, H₂ : Xe → Xe and consider the feedback system of equations

    e₁ = u₁ - H₂e₂                                                          (8.22)
    e₂ = u₂ + H₁e₁                                                          (8.23)

Under these conditions, if both systems are passive and one of them is (i) strictly passive and (ii) has finite gain, then e₁, e₂, y₁, and y₂ are in X whenever u₁, u₂ ∈ X.

Proof: Omitted.

8.5 Passivity of Linear Time-Invariant Systems

In this section, we examine in some detail the implications of passivity and strict passivity in the context of linear time-invariant systems. Throughout this section, we will restrict attention to the space L₂, and we can drop all truncations in the passivity definition. We also recall two properties of the Fourier transform:

(a) If f is real-valued, then the real and imaginary parts of F(jω), denoted ℜe[F(jω)] and ℑm[F(jω)], are even and odd functions of ω, respectively. In other words,

    ℜe[F(jω)] = ℜe[F(-jω)]
    ℑm[F(jω)] = -ℑm[F(-jω)].

(b) (Parseval's relation)

    ∫_{-∞}^{∞} f(t)g(t) dt = (1/2π) ∫_{-∞}^{∞} F(jω) G(jω)* dω

where G(jω)* represents the complex conjugate of G(jω).

Theorem 8.7 Consider a linear time-invariant system H : L₂e → L₂e defined by Hx = h * x, where h ∈ A, x ∈ L₂e, and H(jω) is the Fourier transform of h(t). We have

(i) H is passive if and only if ℜe[H(jω)] ≥ 0 ∀ω ∈ ℝ.

(ii) H is strictly passive if and only if ∃δ > 0 such that ℜe[H(jω)] ≥ δ ∀ω ∈ ℝ.

Proof: The elements of A have a Laplace transform that is free of poles in the closed right half plane, which implies that H is causal and stable. Thus, according to Theorem 8.1, H is passive (strictly passive) if and only if it is positive (strictly positive). Also, points of the form s = jω belong to the region of convergence of H(s). With this in mind, we have

    (x, Hx) = (x, h * x) = ∫₀^∞ x(t)[h(t) * x(t)] dt
            = (1/2π) ∫_{-∞}^{∞} X(jω) [H(jω)X(jω)]* dω
            = (1/2π) ∫_{-∞}^{∞} ℜe[H(jω)] |X(jω)|² dω - (j/2π) ∫_{-∞}^{∞} ℑm[H(jω)] |X(jω)|² dω

and since ℑm[H(jω)] is an odd function of ω, the second integral is zero. It follows that

    (x, Hx) = (1/2π) ∫_{-∞}^{∞} ℜe[H(jω)] |X(jω)|² dω

and noticing that

    (x, x) = (1/2π) ∫_{-∞}^{∞} |X(jω)|² dω

we have that

    (x, Hx) ≥ inf_ω ℜe[H(jω)] (x, x)

from where the sufficiency of conditions (i) and (ii) follows immediately. To prove necessity, assume that ℜe[H(jω)] < 0 at some frequency ω = ω*. By the continuity of the Fourier transform as a function of ω, it must be true that ℜe[H(jω)] < 0 for all ω in |ω - ω*| < ε, for some ε > 0. We can now construct X(jω) as follows:

    |X(jω)| ≥ M   for |ω - ω*| < ε
    |X(jω)| ≤ m   elsewhere.

It follows that X(jω) has energy concentrated in the frequency interval where ℜe[H(jω)] is negative, and thus (x, Hx) < 0 for appropriate choice of M and m. This completes the proof.

Remarks: Theorem 8.7 was stated and proved for single-input-single-output systems. For completeness, we state the extension of this result to multi-input-multi-output systems. The proof follows the same lines and is omitted.

Theorem 8.8 Consider a multi-input-multi-output linear time-invariant system H : L₂e → L₂e defined by Hx = h * x, where h ∈ A. We have:
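The frequency-domain test of Theorem 8.7 is straightforward to apply numerically. The sketch below uses H(s) = 1/(s + 1) as an illustrative example of ours: its real part is positive at every frequency (passive) but tends to zero as |ω| → ∞, so no δ > 0 works and the system is not strictly passive.

```python
# Frequency-domain check of Theorem 8.7 for H(s) = 1/(s + 1):
# Re[H(jw)] = 1/(1 + w^2) > 0 for every w  ->  passive,
# but Re[H(jw)] -> 0 as |w| -> infinity    ->  not strictly passive.

def H(s):
    return 1.0 / (s + 1.0)

vals = [H(1j * (k / 10.0)).real for k in range(-1000, 1001)]
assert min(vals) > 0.0                     # Re[H(jw)] > 0 on the grid: passive
assert H(1j * 1e6).real < 1e-9             # real part vanishes at high frequency
print("min Re over grid:", min(vals), " Re at w = 1e6:", H(1j * 1e6).real)
```

This is exactly the situation that motivates the notion of strict positive realness in Section 8.6: strictly proper systems cannot be strictly passive.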

(i) H is passive if and only if λ_min[H(jω) + H(jω)*] ≥ 0 ∀ω ∈ ℝ.

(ii) H is strictly passive if and only if ∃δ > 0 such that λ_min[H(jω) + H(jω)*] ≥ δ ∀ω ∈ ℝ.

It is important to notice that Theorem 8.7 was proved for systems whose impulse response is in the algebra A. In particular, for finite-dimensional systems, this algebra consists of systems with all of their poles in the open left half of the complex plane. Thus, Theorem 8.7 says nothing about whether a system with transfer function

    H(s) = as / (s² + ω₀²),   a > 0, ω₀ > 0                                  (8.24)

is passive. Systems with a transfer function of the form (8.24) are oscillatory: if excited, the output oscillates without damping with a frequency ω = ω₀. Our next theorem shows that the class of systems with a transfer function of the form (8.24) is indeed passive.

Theorem 8.9 Consider the system H : L₂e → L₂e defined by its transfer function

    H(s) = as / (s² + ω₀²),   a > 0, ω₀ > 0.

Under these conditions, H is passive, regardless of the particular values of a and ω₀.

Proof: By Theorem 8.3, H is passive if and only if

    ‖(1 - H)(1 + H)⁻¹‖_∞ ≤ 1

but for the given H(s)

    S(s) = (s² - as + ω₀²) / (s² + as + ω₀²)

    S(jω) = [(ω₀² - ω²) - jaω] / [(ω₀² - ω²) + jaω]

which has the form (α - jβ)/(α + jβ) for all ω. Thus |S(jω)| = 1, which implies that ‖S‖_∞ = 1, and the theorem is proved.

Remarks: This transfer function, in turn, is the building block of a very important class of systems, to be described later. A transfer function of this
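The key step in the proof, |S(jω)| = 1, can be confirmed numerically; the values of a and ω₀ below are arbitrary positive choices of ours:

```python
# Check of the key step in Theorem 8.9: for H(s) = a*s/(s^2 + w0^2),
#   S(jw) = ((w0^2 - w^2) - j*a*w) / ((w0^2 - w^2) + j*a*w)
# is the ratio of a complex number and its conjugate, so |S(jw)| = 1 for all w.

a, w0 = 3.0, 2.0
max_dev = 0.0
for k in range(1, 2000):
    w = k / 50.0
    num = complex(w0 * w0 - w * w, -a * w)
    den = complex(w0 * w0 - w * w, a * w)
    max_dev = max(max_dev, abs(abs(num / den) - 1.0))

assert max_dev < 1e-12
print("largest deviation of |S(jw)| from 1:", max_dev)
```

Because numerator and denominator are exact complex conjugates, the unit-modulus property is independent of a and ω₀, which is precisely why the theorem holds for all positive parameter values.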

6. no strictly proper system can satisfy this condition. It will be shown later that the feedback combination of a passive system with an SPR one.6 Strictly Positive Real Rational Functions According to the results in the previous section. P" denotes the set of all polynomials on nth degree in the undetermined variable s.2 that. Then H(s) is said to be positive real (PR). Fortunately. It follows from Theorem 8. In the sequel. STRICTLY POSITIVE REAL RATIONAL FUNCTIONS 217 form is the building block of an interesting class of systems known as flexible structures. thus relaxing the conditions of the passivity theorem. help in on the way: in this section we introduce the concept of strict positive realness (SPRness).25) is passive.e) is PR.5 Consider a rational function ft(s) = p(s)/q(s). roughly speaking. for example. working in the space £2i a system with transfer function as in (8. lies somewhere between passivity and strict passivity. which. rarely satisfied by physical systems. in turn.8. This is indeed a discouraging result. 8. if ?Re[H(s)] > 0 for rte[s] > 0. . Consider. severely limits the applicability of the passivity theorem.) E P". We now state these conditions as an alternative definition for SPR rational functions. This is a very severe restriction. If stability is to be enforced by the passivity theorem. is also stable. Definition 8. This limitation. Indeed. a (causal and stable) LTI system H is strictly passive if and only if H(3w) > b > 0 Vw E R. where p(. A linear time-invariant model of a flexible structure has the following form: H(s) _ 00 cxts (8. They are challenging to control given the infinite-dimensional nature of their model. For this reason. frequencydomain conditions for SPRness are usually preferred. Remarks: Definition 8. H(s) is said to be strictly positive real (SPR) if there exists e > 0 such that H(s .25) Examples of flexible structures include flexible manipulators and space structures. 
then only controllers with relative degree zero can qualify as possible candidates. the case in which a passive plant (linear or not) is to be controlled via a linear time-invariant controller.5 is rather difficult to use since it requires checking the real part of H(s) for all possible values of s in the closed right half plane. and E Pm.

Definition 8.6 Consider a rational function Ĥ(s) = p(s)/q(s), where p(·) ∈ Pᵐ and q(·) ∈ Pⁿ. Then Ĥ(s) is said to be in the class Q if

(i) q(·) is a Hurwitz polynomial (i.e., the roots of q(s) are in the open left half plane), and

(ii) ℜe[Ĥ(jω)] ≥ 0, ∀ω ∈ [0, ∞).

Definition 8.7 Ĥ(s) is said to be weak SPR if it is in the class Q and the degrees of the numerator and denominator polynomials differ by at most 1. Ĥ(s) is said to be SPR if it is weak SPR and, in addition, one of the following conditions is satisfied:

(i) n = m;

(ii) n = m + 1, that is, Ĥ(s) is strictly proper, and

    lim_{ω→∞} ω² ℜe[Ĥ(jω)] > 0.

It is important to notice the difference among the several concepts introduced above. The necessity of including condition (ii) in Definition 8.7 was pointed out by Taylor in Reference [79]; the importance of this condition will become more clear soon. Clearly, if H(s) is SPR (or even weak SPR), then H(s) ∈ Q. The converse is, however, not true. For example, the function H₁(s) = (s + 1)⁻¹ + s³ ∈ Q, but it is not SPR according to Definition 8.5; in fact, it is not even positive real.

We now state an important result, known as the Kalman-Yakubovich lemma.

Lemma 8.1 Consider a system of the form

    ẋ = Ax + Bu,   x ∈ ℝⁿ, u ∈ ℝᵐ
    y = Cx + Du

and assume that (i) the eigenvalues of A lie in the left half of the complex plane; (ii) (A, B) is controllable; and (iii) (C, A) is observable. Then H = C(sI - A)⁻¹B + D is SPR if and only if there exist a symmetric positive definite matrix P ∈ ℝⁿˣⁿ, matrices Q and W, and ε > 0 sufficiently small such that

    PA + AᵀP = -QQᵀ - εP                                                    (8.26)
    PB = Cᵀ - QW                                                             (8.27)
    WᵀW = D + Dᵀ                                                             (8.28)

Remarks: In the special case of single-input-single-output systems of the form

    ẋ = Ax + Bu
    y = Cx

the conditions of Lemma 8.1 can be simplified as follows: H = C(sI - A)⁻¹B is SPR if and only if there exist symmetric positive definite matrices P and L, a real matrix Q, and ρ sufficiently small such that

    PA + AᵀP = -QQᵀ - ρL                                                    (8.29)
    PB = Cᵀ                                                                  (8.30)

Proof: The proof is available in many references and is omitted.

According to the following result, when controlling a passive plant, a linear time-invariant SPR controller can be used instead of a strictly passive one; thus, the conditions of the passivity theorem can be relaxed somewhat. The significance of the result stems from the fact that SPR functions can be strictly proper, while strictly passive functions cannot.

Theorem 8.10 Consider the feedback interconnection of Figure 8.6, and assume that

(i) H₁ is linear time-invariant, strictly proper, and SPR;

(ii) H₂ is passive (and possibly nonlinear).

Under these assumptions, the feedback interconnection is input-output stable.

Proof: The proof consists of employing a type I loop transformation with K = -ε and showing that if ε > 0 is small enough, then the two resulting subsystems satisfy the following conditions:

    H₁'(s) = H₁(s)/[1 - εH₁(s)]   is passive
    H₂' = H₂ + εI                 is strictly passive.

Thus, stability follows by Theorem 8.4. The details are in the Appendix.

Remarks: Theorem 8.10 is very useful. In Chapter 9, we will state and prove a result that can be considered as the nonlinear counterpart of this theorem. The following example shows that the loop transformation approach used in the proof of Theorem 8.10 (see the Appendix) will fail if the linear system is weak SPR, thus emphasizing the importance of condition (ii) in Definition 8.7.
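For a concrete system, the simplified SISO conditions (8.29)-(8.30) can be verified by direct arithmetic. The sketch below uses the scalar system H(s) = 1/(s + 1), i.e. A = -1, B = 1, C = 1 (an illustrative choice of ours; the values of P, Q, L, ρ were found by hand):

```python
# Spot check of the SISO Kalman-Yakubovich conditions (8.29)-(8.30)
# for H(s) = 1/(s + 1): A = -1, B = 1, C = 1.
# With P = 1, Q = 1, L = 1, rho = 1: PA + A'P = -2 = -Q*Q - rho*L, and PB = C'.

A, B, C = -1.0, 1.0, 1.0
P, Q, L, rho = 1.0, 1.0, 1.0, 1.0      # P > 0, L > 0; a valid rho exists

# (8.29): P*A + A*P = -Q*Q - rho*L
assert abs((P * A + A * P) - (-Q * Q - rho * L)) < 1e-12
# (8.30): P*B = C (transposes are trivial in the scalar case)
assert abs(P * B - C) < 1e-12
# H is indeed SPR: Re[H(jw)] = 1/(1 + w^2) > 0 and w^2 * Re[H(jw)] -> 1 > 0.
for k in range(0, 1000):
    w = k / 10.0
    assert 1.0 / (1.0 + w * w) > 0.0

print("Kalman-Yakubovich conditions verified for H(s) = 1/(s + 1)")
```

Note that 1/(s + 1) is strictly proper, so by the discussion above it is SPR but not strictly passive, which is exactly the gap that Theorem 8.10 exploits.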

220 CHAPTER 8. PASSIVITY

Example 8.4 Consider the linear time-invariant system H(s) = (s + c)/[(s + a)(s + b)]. We first investigate the SPR condition on the system H(s). We have

H(jω) = (c + jω) / [(ab − ω²) + jω(a + b)]

Re[H(jω)] = [c(ab − ω²) + ω²(a + b)] / [(ab − ω²)² + ω²(a + b)²]

lim_{ω→∞} ω² Re[H(jω)] = a + b − c

from here we conclude that

(i) H(s) is SPR if and only if a + b > c.

(ii) H(s) is weak SPR if a + b = c.

(iii) H(s) is not SPR if a + b < c.

We now consider the system H'(s) after the loop transformation, and let H'(s) = H(s)/[1 − εH(s)]. We need to see whether H'(s) is passive (i.e., Re[H'(jω)] ≥ 0). To analyze this condition, we proceed as follows:

(1/2)[H'(jω) + H'(−jω)] = [(abc − εc²) + ω²(a + b − c − ε)] / [(ab − εc − ω²)² + ω²(a + b − ε)²]

which is nonnegative for all ω ∈ [0, ∞) if and only if

abc − εc² > 0        (8.31)
a + b − c − ε ≥ 0    (8.32)

If a + b > c, we can always find an ε > 0 that satisfies (8.31)-(8.32). However, if H(s) is weak SPR, then a + b = c, and no such ε > 0 exists.

8.7 Exercises

(8.1) Prove Theorem 8.4 in the more general case when β ≠ 0 in Definitions 8.2 and 8.3.

(8.2) Prove Theorem 8.5 in the more general case when β ≠ 0 in Definitions 8.2 and 8.3.
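The conditions (8.31)-(8.32) of Example 8.4 can be confirmed numerically by evaluating Re[H'(jω)] on a frequency grid. A short sketch (the specific values of a, b, c, and ε are arbitrary illustrative choices):

```python
import numpy as np

def reH_loop(w, a, b, c, eps):
    """Re[H'(jw)] for H'(s) = H(s)/(1 - eps H(s)), H(s) = (s+c)/((s+a)(s+b))."""
    num = (a * b * c - eps * c**2) + w**2 * (a + b - c - eps)
    den = (a * b - eps * c - w**2)**2 + w**2 * (a + b - eps)**2
    return num / den

w = np.logspace(-2, 3, 1000)

# a + b > c: some eps > 0 satisfies (8.31)-(8.32), so H' is passive
a, b, c = 1.0, 3.0, 2.0            # a + b - c = 2 > 0
eps = 0.5                          # eps < ab/c and eps <= a + b - c
assert np.all(reH_loop(w, a, b, c, eps) >= 0)

# weak SPR boundary a + b = c: for every eps > 0 the high-frequency
# coefficient a + b - c - eps is negative, so Re[H'(jw)] < 0 eventually
a, b, c = 1.0, 3.0, 4.0
for eps in (1e-3, 1e-2, 1e-1):
    assert np.min(reH_loop(w, a, b, c, eps)) < 0
```

This mirrors the conclusion of the example: the loop transformation succeeds exactly when H(s) is SPR and fails on the weak SPR boundary.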

8.7. EXERCISES 221

Notes and References

This chapter introduced the concept of passivity in its purest (input-output) form. Our presentation closely follows Reference [21], which is an excellent source on the input-output theory of systems in general. Early results on stability of passive systems can be found in Zames [98]. Passivity results play a very important role in several areas of system theory, including stability of feedback systems, adaptive control, and even the synthesis of passive networks. See for example Narendra and Annaswamy [56], Vidyasagar [88], or Anderson [1]. Strictly positive real transfer functions and the Kalman-Yakubovich lemma are central to many of these developments. The proof of the Kalman-Yakubovich lemma can be found in several works; Reference [1], in particular, contains a very thorough coverage of the Kalman-Yakubovich lemma and related topics. See References [35] and [78] for a detailed treatment of frequency domain properties of SPR functions. Example 8.4 is based on unpublished work by the author in collaboration with Dr. C. Damaren (University of Toronto).


Chapter 9

Dissipativity

In Chapter 8 we introduced the concept of a passive system (Figure 9.1). This concept was motivated by circuit theory. Specifically, given an inner product space X, a system H : Xe → Xe is said to be passive if

⟨u, Hu⟩_T ≥ 0.

[Figure 9.1: A passive system.]

A classical example is an electrical network, where u and y are, respectively, the voltage v(t) and current i(t) across the network, or vice versa. In that case (assuming X = L2)

⟨v, i⟩_T = ∫₀ᵀ v(t) i(t) dt

which represents the energy supplied to the network at time T or, equivalently, the energy absorbed by the network during the same time interval.

For more general classes of dynamical systems, however, "passivity" is a somewhat restrictive property. Many systems fail to be passive simply because ⟨u, y⟩ may not constitute a suitable candidate for an energy function. In this chapter we pursue these ideas a bit further and postulate the existence of an input energy function, and introduce the concept of dissipative dynamical system in terms of a nonnegativity condition on this function. We will also depart from the classical input-output theory of systems and consider state space realizations.
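The network interpretation can be made concrete with a quick simulation. A minimal sketch (a series RC network with arbitrary element values, integrated by forward Euler): for any driving voltage v(t), the supplied energy ∫₀ᵀ v(t) i(t) dt is nonnegative, as passivity requires.

```python
import numpy as np

R, C = 1.0, 0.5                    # arbitrary element values
dt, N = 1e-3, 20_000
rng = np.random.default_rng(0)
v = np.repeat(rng.normal(size=N // 100), 100)   # random piecewise-constant voltage

q = 0.0                            # capacitor charge, q(0) = 0
supplied = 0.0
for k in range(N):
    i = (v[k] - q / C) / R         # series RC:  v = R i + q/C
    supplied += dt * v[k] * i      # accumulates  int_0^T v(t) i(t) dt
    q += dt * i                    # dq/dt = i  (forward Euler)

# resistor dissipation plus stored capacitor energy: never negative
print(supplied >= 0.0)
```

Decomposing the sum shows supplied = q(T)²/(2C) + Σ R i² dt (up to an O(dt) correction), which is why the nonnegativity survives the discretization.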

224 CHAPTER 9. DISSIPATIVITY

9.1 Dissipative Systems

Throughout most of this chapter we will assume that the dynamical systems to be studied are given by a state space realization of the form

ψ :  ẋ = f(x, u),   x ∈ X
     y = h(x, u),   u ∈ U, y ∈ Y    (9.1)

The set X ⊂ ℝⁿ represents the state space, U the input space, which consists of functions u ∈ U : Ω ⊂ ℝ → ℝᵐ, and Y the output space, which consists of functions y ∈ Y : Ω ⊂ ℝ → ℝᵖ. In other words, the functions in U map a subset of the real numbers into ℝᵐ.

Associated with this system we have defined a function w(t) = w(u(t), y(t)) : U × Y → ℝ, called the supply rate, which is a locally integrable function of the input and output u and y of the system; that is,

∫_{t0}^{t1} |w(t)| dt < ∞.

Definition 9.1 A dynamical system ψ is said to be dissipative with respect to the supply rate w(t) if there exists a function φ : X → ℝ⁺, called the storage function, such that for all x0 ∈ X and for all inputs u ∈ U we have

φ(x1) ≤ φ(x0) + ∫_{t0}^{t1} w(t) dt.    (9.2)

Inequality (9.2) is called the dissipation inequality, and the several terms in (9.2) represent the following:

9.2. DIFFERENTIABLE STORAGE FUNCTIONS 225

- φ(x(t*)) represents the "energy" stored by the system ψ at time t*.

- ∫_{t0}^{t1} w(t) dt represents the energy externally supplied to the system ψ during the interval [t0, t1].

Thus, according to (9.2), the stored energy φ(x1) at time t1 > t0 is, at most, equal to the sum of the energy φ(x0) initially stored at time t0, plus the total energy externally supplied during the interval [t0, t1]. In this way there is no internal "creation" of energy.

It is important to notice that if a motion is such that it takes the system ψ from a particular state to the same terminal state along a certain trajectory in the state space, then we have (since x1 = x0)

φ(x0) ≤ φ(x0) + ∮ w(t) dt    ⟹    ∮ w(t) dt ≥ 0    (9.3)

where ∮ indicates a closed trajectory with identical initial and final states. Inequality (9.3) states that in order to complete a closed trajectory, a dissipative system requires external energy.

9.2 Differentiable Storage Functions

In general, the storage function φ of a dissipative system, defined in Definition 9.1, need not be differentiable. Throughout the rest of this chapter, however, we will see that many important results can be obtained by strengthening the conditions imposed on φ. First we notice that if φ is continuously differentiable, then dividing (9.2) by (t1 − t0), and denoting by φ(x1) the value of φ(x) when t = t1, we can write

[φ(x1) − φ(x0)] / (t1 − t0) ≤ [1/(t1 − t0)] ∫_{t0}^{t1} w(t) dt    (9.4)

but

lim_{t1→t0} [φ(x1) − φ(x0)] / (t1 − t0) = dφ(x)/dt = (∂φ(x)/∂x) f(x, u)

and thus (9.4) is satisfied if and only if

(∂φ/∂x) f(x, u) ≤ w(t) = w(u, y) = w(u, h(x, u))   ∀x, u.    (9.5)

226 CHAPTER 9. DISSIPATIVITY

Inequality (9.5) is called the differential dissipation inequality, and constitutes perhaps the most widely used form of the dissipation inequality. Assuming that φ is differentiable, we can restate Definition 9.1 as follows.

Definition 9.2 (Dissipativity re-stated) A dynamical system ψ is said to be dissipative with respect to the supply rate w(u, y) if there exists a continuously differentiable function φ : X → ℝ⁺, called the storage function, that satisfies the following properties:

(i) There exist class K∞ functions α1 and α2 such that

α1(‖x‖) ≤ φ(x) ≤ α2(‖x‖)   ∀x ∈ ℝⁿ.

(ii) (∂φ/∂x) f(x, u) ≤ w(u, y)   ∀x ∈ ℝⁿ, u ∈ ℝᵐ, and y = h(x, u).

In other words, defining property (i) simply states that φ(·) is positive definite, while property (ii) is the differential dissipation inequality.

9.2.1 Back to Input-to-State Stability

In Chapter 7 we studied the important notion of input-to-state stability. We can now review this concept as a special case of a dissipative system. In the following lemma we assume that the storage function corresponding to the supply rate w is differentiable.

Lemma 9.1 A system ψ is input-to-state stable if and only if it is dissipative with respect to the supply rate

w(t) = −α3(‖x‖) + σ(‖u‖)

where α3 and σ are class K∞ functions.

Proof: The proof is an immediate consequence of Definition 9.2 and Lemma 7.6.

9.3 QSR Dissipativity

So far we have paid little attention to the supply rate w(t). There are, however, several interesting candidates for this function. In this section we study a particularly important function and some of its implications. We will see that concepts such as passivity and small gain are special cases of this supply rate.
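Lemma 9.1 can be made concrete on a scalar example (our own illustrative choice, not one from the text). For ẋ = −x³ + u with storage φ = ½x², Young's inequality xu ≤ ¼x⁴ + (3/4)|u|^{4/3} gives φ̇ = −x⁴ + xu ≤ −(3/4)x⁴ + (3/4)|u|^{4/3}, which is exactly a supply rate of the form −α3(‖x‖) + σ(‖u‖):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-10, 10, 100_000)
u = rng.uniform(-10, 10, 100_000)

Vdot = -x**4 + x * u                            # d/dt (x^2/2) along xdot = -x^3 + u
supply = -0.75 * x**4 + 0.75 * np.abs(u)**(4 / 3)

print(np.all(Vdot <= supply + 1e-9))            # dissipation inequality holds
```

The sampled check cannot replace the algebraic argument, but it is a convenient sanity test when guessing a storage function and supply rate pair.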

9.3. QSR DISSIPATIVITY 227

Definition 9.3 Given constant matrices Q ∈ ℝᵖˣᵖ, S ∈ ℝᵖˣᵐ, and R ∈ ℝᵐˣᵐ, with Q and R symmetric, we define the supply rate w(t) = w(u, y) as follows:

w(t) = yᵀQy + 2yᵀSu + uᵀRu = [yᵀ uᵀ] [Q S; Sᵀ R] [y; u].    (9.6)

It is immediately obvious that

∫₀ᵀ w(t) dt = ⟨y, Qy⟩_T + 2⟨y, Su⟩_T + ⟨u, Ru⟩_T    (9.7)

moreover, using time invariance, w(t) defined in (9.6) is such that

∫_{t0}^{t0+T} w(t) dt = ∫₀ᵀ w(t) dt.    (9.8)

We can now state the following definition.

Definition 9.4 The system ψ is said to be QSR-dissipative if there exists a storage function φ : X → ℝ⁺ such that for all x(0) = x0 ∈ X and for all u ∈ U we have that

∫₀ᵀ w(t) dt ≥ φ(x1) − φ(x0)    (9.9)

equivalently,

⟨y, Qy⟩_T + 2⟨y, Su⟩_T + ⟨u, Ru⟩_T ≥ φ(x1) − φ(x0).    (9.10)

Definition 9.4 is clearly a special case of Definition 9.1, with one interesting twist. Notice that with this supply rate, the state space realization of the system ψ is no longer essential or necessary. Indeed, QSR dissipativity can be interpreted as an input-output property. Instead of pursuing this idea, we will continue to assume that the input-output relationship is obtained from the state space realization as defined in (9.1). As we will see, doing so will allow us to study connections between certain input-output properties and stability in the sense of Lyapunov. We now single out several special cases of interest, which appear for specific choices of the parameters Q, S, and R.

1. Passive systems: The system ψ is passive if and only if it is dissipative with respect to Q = 0, R = 0, and S = ½I; equivalently, ψ is passive if and only if it is (0, ½I, 0)-dissipative. The equivalence is immediate since in this case (9.10) implies that

⟨u, y⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)   (since φ(x) ≥ 0 ∀x, by assumption).    (9.11)

228 CHAPTER 9. DISSIPATIVITY

Thus, defining β = −φ(x0), (9.11) is identical to Definition 8.2. This formulation is also important in that it gives β a precise interpretation: −β is the energy stored in ψ at time t = 0, given by the initial conditions x0.

2. Strictly passive systems: The system ψ is strictly passive if and only if it is dissipative with respect to Q = 0, R = −δI, and S = ½I. To see this, we substitute these values in (9.10) and obtain

⟨y, u⟩_T − δ⟨u, u⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)

or

∫₀ᵀ uᵀy dt = ⟨u, y⟩_T ≥ δ⟨u, u⟩_T − φ(x0) = δ‖u‖²_T + β.

3. Finite-gain-stable systems: The system ψ is finite-gain-stable if and only if it is dissipative with respect to Q = −½I, R = ½γ²I, and S = 0. To see this, we substitute these values in (9.10) and obtain

½γ²⟨u, u⟩_T − ½⟨y, y⟩_T ≥ −φ(x0)

or

‖y_T‖²_{L2} ≤ γ²‖u_T‖²_{L2} + 2φ(x0)

and since for a, b ≥ 0 we have √(a² + b²) ≤ a + b, then defining β = √(2φ(x0)) we conclude that

‖y_T‖_{L2} ≤ γ‖u_T‖_{L2} + β.

We have already encountered passive, strictly passive, and finite-gain-stable systems in previous chapters. Other cases of interest, which appear frequently in the literature, are described in paragraphs 4 and 5.

4. Strictly output-passive systems: The system ψ is said to be strictly output-passive if it is dissipative with respect to Q = −εI, R = 0, and S = ½I. In this case, we substitute these values in (9.10) and obtain

−ε⟨y, y⟩_T + ⟨y, u⟩_T ≥ φ(x1) − φ(x0) ≥ −φ(x0)

or

⟨u, y⟩_T ≥ ε⟨y, y⟩_T − φ(x0) = ε‖y‖²_T + β.

P. In this case. R = -bI.2 If -0 is strictly output passive. p. U)T > -O(x1) .10). we proceed to find the total energy stored in the system at any given time.x2 + 1 = X2 Y To study the dissipative properties of this system.4 9. k is the spring constant. Proof: The proof is left as an exercise (Exercise 9. shown in Figure 9. and f is an external force. 9. and S = I. we obtain the following equation of the motion: ml+0±+kx= f where m represents the mass.4.2).b(u.9. x2. Assuming for simplicity that the friction between the mass and the surphase is negligible.4. and ±1 = x2.1 Examples Mass-Spring System with Friction Consider the mass-spring system moving on a horizontal surface. U)T + (y.2. EXAMPLES 229 5. U)T + E(y. which is a direct consequence of these definitions. We have E = 1kx2 + mx2 21 .O(xo) > -O(x0) or I 0 rT J U T y dt = (U. then it has a finite L2 gain.Very strictly-passive Systems: The system V) is said to be very strictly passive if it is dissipative with respect to Q = -eI. substituting these values in (9. Y)T + 130 The following lemma states a useful results. Y)T ? CU.Y)T . Lemma 9. we obtain the following state space realization: 1 = X2 2 = -mx1 . and assuming that the desired output variable is the velocity vector. the viscous friction force associated with the spring. Defining state variables x1 = x. we obtain -E(y.

230 CHAPTER 9. DISSIPATIVITY

[Figure 9.2: Mass-spring system.]

where ½mx2² represents the kinetic energy of the mass and ½kx1² is the energy stored by the spring. Since the energy is a positive quantity, we propose E as a "possible" storage function; thus, we define

φ = E = ½kx1² + ½mx2².

Since φ is continuously differentiable with respect to x1 and x2, we can compute the time derivative of φ along the trajectories of ψ:

φ̇ = (∂φ/∂x) ẋ = [kx1, mx2] [x2; −(k/m)x1 − (β/m)x2 + (1/m)f]
  = −βx2² + x2 f
  = −βy² + yf.

Thus, ∫₀ᵗ φ̇ dτ = E(t) − E(0), and it follows that the mass-spring system with output ẋ = x2 is dissipative with respect to the supply rate

w(t) = yf − βy².

It is immediately evident that this supply rate corresponds to Q = −β, S = ½, and R = 0, from where we conclude that the mass-spring system ψ is strictly output-passive.
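The computation above can be confirmed in simulation: along any trajectory, the change in stored energy equals the integral of the supply rate w = yf − βy² (equality rather than inequality, since φ̇ = w exactly here). A sketch with arbitrary numerical values and forward Euler integration:

```python
import numpy as np

m, k, beta = 1.0, 2.0, 0.5              # arbitrary parameter values
dt, N = 1e-4, 50_000                    # 5 seconds of simulation
x1, x2 = 1.0, 0.0                       # initial position and velocity

E0 = 0.5 * k * x1**2 + 0.5 * m * x2**2
supplied = 0.0
for n in range(N):
    f = np.sin(np.pi * n * dt)          # some external force
    y = x2                              # output: velocity
    supplied += dt * (y * f - beta * y**2)      # supply rate w = y f - beta y^2
    a = (-k * x1 - beta * x2 + f) / m
    x1, x2 = x1 + dt * x2, x2 + dt * a  # forward Euler step

E = 0.5 * k * x1**2 + 0.5 * m * x2**2
print(E - E0, supplied)                 # nearly equal, up to O(dt) error
```

The two printed numbers agree to within the discretization error, a direct numerical reading of the dissipation inequality (9.2).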

9.5. AVAILABLE STORAGE 231

9.4.2 Mass-Spring System without Friction

Consider again the mass-spring system of the previous example, but assume that β = 0. In this case, the state space realization reduces to

ẋ1 = x2
ẋ2 = −(k/m)x1 + (1/m)f
y = x2

Proceeding as in the previous example, we define

φ = E = ½kx1² + ½mx2².

Differentiating φ along the trajectories of ψ, we obtain

φ̇ = x2 f = yf

since, once again, ∫₀ᵗ φ̇ dτ = E(t) − E(0). We conclude that the mass-spring system with output ẋ = x2 is dissipative with respect to the supply rate

w(t) = yf

which corresponds to Q = 0, S = ½, and R = 0. This implies that the mass-spring system is passive.

9.5 Available Storage

Having defined dissipative systems, along with supply rates and storage functions, we now turn our attention to a perhaps more abstract question. Given a dissipative dynamical system, we ask: What is the maximum amount of energy that can be extracted from it, at any given time? Willems [91] termed this quantity the "available storage." This quantity plays an important conceptual role in the theory of dissipative systems and appears in the proofs of certain theorems. For completeness, we now introduce this concept. This section is not essential and can be skipped in a first reading of this chapter.

Definition 9.5 The available storage, φa, of a dynamical system ψ with supply rate w is defined by

φa(x) = sup_{x(0)=x, u(·), T≥0} − ∫₀ᵀ w(u(t), y(t)) dt.    (9.12)

232 CHAPTER 9. DISSIPATIVITY

As defined, φa(x) denotes the maximum energy that can be extracted from ψ, starting from the initial state x at t = 0. The following theorem is important in that it provides a (theoretical) way of checking whether or not a system is dissipative.

Theorem 9.1 A dynamical system ψ is dissipative if and only if for all x ∈ X the available storage φa(x) is finite. Moreover, for a dissipative system we have that 0 ≤ φa ≤ φ, and thus φa is itself a possible storage function.

Proof: Sufficiency: Assume first that φa is finite. Since the right-hand side of (9.12) contains the zero element (obtained setting T = 0), we have that φa ≥ 0. To show that ψ is dissipative, we compare φa(x0) and φa(x1), respectively the "energies" that can be extracted from ψ starting at the states x0 and x1. When extracting energy from the system ψ at the state x0, we can follow an "optimal" trajectory that maximizes the energy extracted, or we can force the system to go from x0 to x1 following an arbitrary trajectory and then extract whatever energy is left in ψ with initial state x1. This second process is clearly nonoptimal. Thus, we now consider an arbitrary input u* : [0, T] → ℝᵐ that takes the dynamical system ψ from the initial state x0 at t = 0 to a final state x1 at t = T, and show that φa satisfies

φa(x0) ≥ − ∫₀ᵀ w(u*, y) dt + φa(x1)

that is,

φa(x1) ≤ φa(x0) + ∫₀ᵀ w(t) dt.

Thus φa is itself a storage function, thus leading to the result.

Necessity: Assume now that ψ is dissipative. This means that ∃φ ≥ 0 such that for all u(·) we have that

φ(x0) + ∫₀ᵀ w(u, y) dt ≥ φ(x(T)) ≥ 0.

We have

φ(x0) ≥ sup_{u(·), T≥0} − ∫₀ᵀ w(u(t), y(t)) dt = φa(x0)

and thus φa(x0) is finite. □

9.6. ALGEBRAIC CONDITION FOR DISSIPATIVITY 233

9.6 Algebraic Condition for Dissipativity

We now turn our attention to the issue of checking the dissipativity condition for a given system. The notion of available storage gave us a theoretical answer to this riddle in Theorem 9.1. That result, however, is not practical in applications. Our next theorem provides a result that is in the same spirit as the Kalman-Yakubovich lemma studied in Chapter 8. Specifically, it shows that under certain assumptions, dissipativeness can be characterized in terms of the coefficients of the state space realization of the system ψ. Moreover, it will be shown that in the special case of linear passive systems, this characterization of dissipativity leads, in fact, to the Kalman-Yakubovich lemma. Throughout the rest of this section we will make the following assumptions:

1. We assume that ψ is affine in the input u; that is, ψ is of the form

ψ :  ẋ = f(x) + g(x)u
     y = h(x) + j(x)u    (9.13)

where x ∈ ℝⁿ, u ∈ ℝᵐ, and y ∈ ℝᵖ.

2. The state space of the system (9.13) is reachable from the origin. This means that given any x1 ∈ ℝⁿ and t = t1 ∈ ℝ⁺, there exist a t0 < t1 and an input u ∈ U such that the state can be driven from the origin at t = t0 to x = x1 at t = t1.

3. Whenever the system ψ is dissipative with respect to a supply rate of the form (9.6), the available storage φa(x) is a differentiable function of x.

These assumptions, particularly 1 and 3, bring, of course, some restrictions on the class of systems considered. The benefit is a much more explicit characterization of the dissipative condition.

Theorem 9.2 The nonlinear system ψ given by (9.13) is QSR-dissipative (i.e., dissipative with supply rate given by (9.6)) if and only if there exist a differentiable function φ : ℝⁿ → ℝ and functions L : ℝⁿ → ℝ^q and W : ℝⁿ → ℝ^{q×m} satisfying

φ(x) > 0 ∀x ≠ 0,   φ(0) = 0    (9.14)

(∂φ/∂x) f(x) = hᵀ(x)Qh(x) − Lᵀ(x)L(x)    (9.15)

½ gᵀ(x)(∂φ/∂x)ᵀ = Ŝᵀ(x)h(x) − Wᵀ(x)L(x)    (9.16)

R̂(x) = Wᵀ(x)W(x)    (9.17)

for all x, where

R̂(x) = R + jᵀ(x)S + Sᵀj(x) + jᵀ(x)Qj(x)    (9.18)

Ŝ(x) = Qj(x) + S.    (9.19)

Proof: To simplify our proof we assume that j(x) = 0 in the state space realization (9.13), something that is true in most practical cases. In the case of linear systems this assumption is equivalent to assuming that D = 0 in the state space realization. The necessity part of the proof can be found in the Appendix. We prove sufficiency. With the assumption j(x) = 0 we have that R̂ = R and Ŝ = S. Assuming that φ, L, and W satisfy the assumptions of the theorem, we have that

w(u, y) = yᵀQy + 2yᵀSu + uᵀRu
        = hᵀQh + 2hᵀSu + uᵀRu                                [substituting y = h(x), from (9.13)]
        = (∂φ/∂x)f(x) + LᵀL + 2hᵀSu + uᵀRu                   [substituting (9.15)]
        = (∂φ/∂x)f(x) + LᵀL + uᵀWᵀWu + 2uᵀ[½gᵀ(∂φ/∂x)ᵀ + WᵀL]   [substituting (9.16) and (9.17)]
        = (∂φ/∂x)[f(x) + g(x)u] + (L + Wu)ᵀ(L + Wu)
        = φ̇ + (L + Wu)ᵀ(L + Wu).    (9.20)

9.6. ALGEBRAIC CONDITION FOR DISSIPATIVITY 235

Integrating (9.20), we obtain

∫₀ᵗ w(τ) dτ = φ(x(t)) − φ(x0) + ∫₀ᵗ (L + Wu)ᵀ(L + Wu) dτ ≥ φ(x(t)) − φ(x0)

and setting x0 = 0 implies that

∫₀ᵗ w(τ) dτ ≥ φ(x(t)) ≥ 0. □

Corollary 9.1 If the system ψ is dissipative with respect to the supply rate (9.6), then there exists a real function φ satisfying φ(x) > 0 ∀x ≠ 0, φ(0) = 0, such that

dφ/dt = −(L + Wu)ᵀ(L + Wu) + w(u, y).    (9.21)

Proof: A direct consequence of Theorem 9.2. Notice that (9.21) is identical to (9.20) in the sufficiency part of the proof.

9.6.1 Special Cases

We now consider several cases of special interest.

Passive systems: Consider the passivity supply rate w(u, y) = uᵀy (i.e., Q = R = 0 and S = ½I in (9.6)), and assume, as in the proof of Theorem 9.2, that j(x) = 0. Theorem 9.2 states that the nonlinear system ψ is passive if and only if

(∂φ/∂x) f(x) = −Lᵀ(x)L(x)
gᵀ(x)(∂φ/∂x)ᵀ = h(x)

or, equivalently,

(∂φ/∂x) f(x) ≤ 0    (9.22)
gᵀ(x)(∂φ/∂x)ᵀ = h(x).    (9.23)

Now assume that, in addition, the system ψ is linear; then f(x) = Ax, g(x) = B, and h(x) = Cx. Guided by our knowledge of Lyapunov stability of linear systems, we define the storage function φ(x) = xᵀPx, P = Pᵀ > 0. In this case, we have that

(∂φ/∂x) f(x) = xᵀ[AᵀP + PA] x

236 CHAPTER 9. DISSIPATIVITY

which implies that (9.22)-(9.23) are satisfied if and only if

AᵀP + PA ≤ 0    (9.24)
2BᵀP = C    (9.25)

(absorbing the factor 2 into P recovers the more familiar form BᵀP = C, or PB = Cᵀ, as in the Kalman-Yakubovich lemma). Thus, for passive systems, Theorem 9.2 can be considered as a nonlinear version of the Kalman-Yakubovich lemma discussed in Chapter 8.

Strictly output-passive systems: Now consider the strict output passivity supply rate w(u, y) = uᵀy − εyᵀy (i.e., Q = −εI, R = 0, and S = ½I in (9.6)), and assume once again that j(x) = 0. In this case Theorem 9.2 states that the nonlinear system ψ is strictly output-passive if and only if

(∂φ/∂x) f(x) = −εhᵀ(x)h(x) − Lᵀ(x)L(x)
gᵀ(x)(∂φ/∂x)ᵀ = h(x)

or, equivalently,

(∂φ/∂x) f(x) ≤ −εhᵀ(x)h(x),   gᵀ(x)(∂φ/∂x)ᵀ = h(x).

Strictly passive systems: Finally, consider the strict passivity supply rate w(u, y) = uᵀy − δuᵀu (i.e., Q = 0, R = −δI, and S = ½I in (9.6)), and assume once again that j(x) = 0. In this case Theorem 9.2 states that the nonlinear system ψ is strictly passive if and only if

(∂φ/∂x) f(x) = −Lᵀ(x)L(x)
½gᵀ(x)(∂φ/∂x)ᵀ = ½h(x) − WᵀL

with

R̂ = R = WᵀW = −δI

which can never be satisfied since WᵀW ≥ 0 and δ > 0. It then follows that no system of the form (9.13), with j = 0, can be strictly passive.
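The linear passivity conditions can be checked explicitly for the mass-spring system of Section 9.4. Taking the storage function φ(x) = xᵀPx with P = diag(k/2, m/2), so that φ equals the stored energy E, the conditions gᵀ(∂φ/∂x)ᵀ = h and (∂φ/∂x)f ≤ 0 read 2BᵀP = C and AᵀP + PA ≤ 0. A numerical sketch (parameter values arbitrary):

```python
import numpy as np

m, k, beta = 1.0, 2.0, 0.5                  # arbitrary parameter values
A = np.array([[0.0, 1.0], [-k / m, -beta / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[0.0, 1.0]])                  # output y = x2 (velocity)

P = np.diag([k / 2.0, m / 2.0])             # phi(x) = x^T P x = stored energy E

M = A.T @ P + P @ A
print(np.linalg.eigvalsh(M))                # [-beta, 0]: negative semidefinite
print(2.0 * B.T @ P, C)                     # both [[0, 1]]: output condition holds
```

The zero eigenvalue of AᵀP + PA reflects that the spring stores, rather than dissipates, energy; only the damper term −β contributes strict decrease.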

9.7. STABILITY OF DISSIPATIVE SYSTEMS 237

9.7 Stability of Dissipative Systems

Throughout this section we analyze the possible implications of stability (in the sense of Lyapunov) for dissipative dynamical systems. Throughout this section, we assume that the storage function φ : X → ℝ⁺ is differentiable and satisfies the differential dissipation inequality (9.5). In the following theorem we consider a dissipative system ψ with storage function φ and assume that xe is an equilibrium point for the unforced system ψ, that is,

f(xe) ≝ f(xe, 0) = 0.

Theorem 9.3 Let ψ be a dynamical system that is dissipative with respect to the (continuously differentiable) storage function φ : X → ℝ⁺, and assume that the following conditions are satisfied:

(i) xe is a strictly local minimum for φ: φ(xe) < φ(x) ∀x in a neighborhood of xe.

(ii) The supply rate w = w(u, y) is such that w(0, y) ≤ 0 ∀y.

Under these conditions, xe is a stable equilibrium point for the unforced system ẋ = f(x, 0).

Proof: Define the function V(x) = φ(x) − φ(xe). This function is continuously differentiable, satisfies (9.5), and by condition (i) is positive definite ∀x in a neighborhood of xe. Also, the time derivative of V along the trajectories of ψ is given by

V̇(x) = (∂φ(x)/∂x) f(x, u)

thus, by (9.5) and condition (ii), we have that V̇(x) ≤ 0, and stability follows by the Lyapunov stability theorem. □

Theorem 9.3 is important not only in that it implies the stability of dissipative systems (with an equilibrium point satisfying the conditions of the theorem) but also in that it suggests the use of the storage function φ as a means of constructing Lyapunov functions. It is also important in that it gives a clear connection between the concept of dissipativity and stability in the sense of Lyapunov. Notice also that the general class of dissipative systems discussed in Theorem 9.3 includes QSR dissipative systems as a special case.
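Theorem 9.3 applies directly to the mass-spring system: the supply rate w = yf − βy² satisfies w(0, y) = −βy² ≤ 0, and the stored energy serves as the Lyapunov function. A numerical sketch of the two inequalities used in the proof, evaluated at random states (parameter values arbitrary):

```python
import numpy as np

m, k, beta = 1.0, 2.0, 0.5
rng = np.random.default_rng(2)
X = rng.uniform(-5, 5, (100_000, 2))            # random states x = (x1, x2)

# unforced dynamics f(x, 0) and gradient of the storage phi = E
f = np.stack([X[:, 1], (-k * X[:, 0] - beta * X[:, 1]) / m], axis=1)
grad = np.stack([k * X[:, 0], m * X[:, 1]], axis=1)

Vdot = np.sum(grad * f, axis=1)                 # equals -beta * x2**2 exactly
w0 = -beta * X[:, 1]**2                         # supply rate at u = 0: w(0, y)

print(np.all(Vdot <= w0 + 1e-9), np.all(w0 <= 0.0))
```

Here V̇ and w(0, y) coincide, which is the borderline case of the differential dissipation inequality; stability (not asymptotic stability) is what Theorem 9.3 delivers, and the refinement via LaSalle's theorem follows in Corollary 9.2.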

Definition 9.2 and Corollary 9. dissipativity provides a very important link between the input-output theory of system and stability in the sense of Lyapunov. stability follows from the Lyapunov stability theorem.6 ([30]) A state space realization of the form V) is said to be zero-state detectable if for any trajectory such that u . Then.3. y .1 we have that if 1i is QSR dissipative. however. and satisfies V (x) = 0 if and only if x(t) = xe. Proof: From Theorem 9. In this case we have that dO = 0 = hT (x)Qh(x) h(x) = 0 x=0 .0. was introduced as an input-output property. Corollary 9. y) = w(0. 0) = 0. Proof: Under the present conditions. (ii) Asymptotically stable if Q < 0. Much more explicit stability theorems can be derived for QSR dissipative systems.y) and setting u = 0. Assume now that Q < 0. dO _ -LT L + hT (x)Qh(x) along the trajectories of i = f (x).4 Let the system i = f(x) +g(x)u 1 y = h(x) be dissipative with respect to the supply rate w(t) = yTQy + 2yT Su + uT Ru and zero state detectable. We now present one such a theorem (theorem ). Theorem 9. Thus. then xe is asymptotically stable. 0). due to Hill and Moylan [30].238 CHAPTER 9.0. Thus.0 we have that x(t) . the free system i = f (x) is (i) Lyapunov-stable if Q < 0. h(x.2 Under the conditions of Theorem 9. then there exists 0 > 0 such that dt _ -(L+Wu)T(L+Wu)+w(u. V(x) is strictly negative and all trajectories of ?p = f (xe. DISSIPATIVITY property of QSR dissipativity. if in addition no solution of i = f (x) other than x(t) = xe satisfies w(0. and asymptotic stability follows from LaSalle's theorem.

9.8. FEEDBACK INTERCONNECTIONS 239

Since u ≡ 0 and y ≡ 0 implies that x(t) → 0 by the zero-state detectability condition, no solution other than x(t) = 0 can maintain dφ/dt = 0, and asymptotic stability follows from LaSalle's theorem. □

The following corollary is then an immediate consequence of Theorem 9.4 and the special cases discussed in Section 9.6.

Corollary 9.3 Given a zero-state detectable state space realization

ẋ = f(x) + g(x)u
y = h(x) + j(x)u

we have:

(i) If ψ is passive, then the unforced system ẋ = f(x) is Lyapunov-stable.

(ii) If ψ is strictly passive, then ẋ = f(x) is asymptotically stable.

(iii) If ψ is finite-gain-stable, then ẋ = f(x) is asymptotically stable.

(iv) If ψ is strictly output-passive, then ẋ = f(x) is asymptotically stable.

(v) If ψ is very strictly passive, then ẋ = f(x) is asymptotically stable.

9.8 Feedback Interconnections

In this section we consider the feedback interconnection of Figure 9.3 and study the implications of different forms of QSR dissipativity on closed-loop stability in the sense of Lyapunov. We will see that several important results can be derived rather easily, thanks to the machinery developed in previous sections. Throughout this section we assume that the state space realizations ψ1 and ψ2 each have the form

ψi :  ẋi = fi(xi) + g(xi)ui,   xi ∈ ℝ^{ni}, ui ∈ ℝᵐ
      yi = hi(xi) + j(xi)ui,   yi ∈ ℝᵐ

for i = 1, 2. Notice that, for compatibility reasons, we have assumed that both systems are "square"; that is, they have the same number of inputs and outputs. We will also make the following assumptions about each state space realization:

- ψ1 and ψ2 are zero-state-detectable.

- ψ1 and ψ2 are completely reachable; that is, for any given x1 and t1 there exist a t0 < t1 and an input function u ∈ U such that the state can be driven from x(t0) = 0 to x(t1) = x1.

240 CHAPTER 9. DISSIPATIVITY

[Figure 9.3: Feedback interconnection.]

Having laid out the ground rules, we can now state the main theorem of this section.

Theorem 9.5 Consider the feedback interconnection (Figure 9.3) of the systems ψ1 and ψ2, and assume that both systems are dissipative with respect to the supply rates

wi(ui, yi) = yiᵀQiyi + 2yiᵀSiui + uiᵀRiui,   i = 1, 2.    (9.26)

Then the feedback interconnection of ψ1 and ψ2 is Lyapunov-stable (asymptotically stable) if the matrix Q̂ defined by

Q̂ = [ Q1 + aR2     −S1 + aS2ᵀ ]
    [ −S1ᵀ + aS2   R1 + aQ2   ]    (9.27)

is negative semidefinite (negative definite) for some a > 0.

Proof of Theorem 9.5: Consider the Lyapunov function candidate

φ(x1, x2) = φ1(x1) + aφ2(x2),   a > 0

where φ1(x1) and φ2(x2) are the storage functions of the systems ψ1 and ψ2, respectively. Thus, φ(x1, x2) is positive definite by construction. The derivative of φ along the trajectories of the composite state [x1, x2]ᵀ is given by

φ̇ = w1(u1, y1) + a w2(u2, y2)
  = (y1ᵀQ1y1 + 2y1ᵀS1u1 + u1ᵀR1u1) + a(y2ᵀQ2y2 + 2y2ᵀS2u2 + u2ᵀR2u2).    (9.28)

9.8. FEEDBACK INTERCONNECTIONS 241

Substituting u1 = −y2 and u2 = y1 into (9.28), we obtain

φ̇ = [y1ᵀ, y2ᵀ] [ Q1 + aR2     −S1 + aS2ᵀ ] [ y1 ]
               [ −S1ᵀ + aS2   R1 + aQ2   ] [ y2 ]

and the result follows. □

This theorem is very powerful and includes several cases of special interest, which we now spell out in Corollaries 9.4 and 9.5.

Corollary 9.4 Under the conditions of Theorem 9.5, we have that

(a) If both ψ1 and ψ2 are passive, then the feedback system is Lyapunov-stable.

(b) Asymptotic stability follows if, in addition, one of the following conditions is satisfied:

(1) One of ψ1 and ψ2 is very strictly passive.

(2) Both ψ1 and ψ2 are strictly passive.

(3) Both ψ1 and ψ2 are strictly output-passive.

Proof: Setting a = 1 in (9.27), we obtain the following:

(a) If both ψ1 and ψ2 are passive, then Qi = 0, Ri = 0, and Si = ½I for i = 1, 2. With these values, Q̂ = 0, and the result follows by Theorem 9.5.

(b1) Assuming that ψ1 is passive and that ψ2 is very strictly passive, we have

ψ1 passive: Q1 = 0, R1 = 0, and S1 = ½I;
ψ2 very strictly passive: Q2 = −ε2I, R2 = −δ2I, and S2 = ½I.

Thus

Q̂ = [ −δ2I   0   ]
    [ 0      −ε2I ]

which is negative definite, and the result follows by Theorem 9.5. The case where ψ2 is passive and ψ1 is very strictly passive is entirely analogous.

242 CHAPTER 9. DISSIPATIVITY

(b2) If both ψ1 and ψ2 are strictly passive, we have Qi = 0, Ri = −δiI, and Si = ½I. Thus

Q̂ = [ −δ2I   0   ]
    [ 0      −δ1I ]

which is once again negative definite, and the result follows.

(b3) If both ψ1 and ψ2 are strictly output-passive, we have Qi = −εiI, Ri = 0, and Si = ½I. Thus

Q̂ = [ −ε1I   0   ]
    [ 0      −ε2I ]

which is negative definite, and the result follows by Theorem 9.5. □

Corollary 9.4 is significant in that it identifies closed-loop stability (asymptotic stability) in the sense of Lyapunov for combinations of passivity conditions. The following corollary is a Lyapunov version of the small gain theorem.

Corollary 9.5 Under the conditions of Theorem 9.5, if both ψ1 and ψ2 are finite-gain-stable with gains γ1 and γ2, the feedback system is Lyapunov-stable (asymptotically stable) if γ1γ2 ≤ 1 (γ1γ2 < 1).

Proof: Under the assumptions of the corollary, we have that

Qi = −½I,   Ri = ½γi²I,   and Si = 0,   i = 1, 2    (9.29)

and substituting (9.29) into (9.27) leads to

Q̂ = ½ [ (aγ2² − 1)I   0           ]
      [ 0             (γ1² − a)I ]    (9.30)

Thus, Q̂ is negative semidefinite provided that aγ2² ≤ 1 and γ1² ≤ a for some a > 0, and such an a exists if and only if γ1γ2 ≤ 1. The case of asymptotic stability is, of course, identical, with strict inequalities throughout.
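The matrix condition of Corollary 9.5 is easy to evaluate numerically: for γ1γ2 < 1, any a with γ1² ≤ a ≤ 1/γ2² makes Q̂ negative definite. A sketch with scalar outputs and arbitrary gain values:

```python
import numpy as np

g1, g2 = 0.8, 0.9                  # gamma1 * gamma2 = 0.72 < 1
I = np.eye(1)                      # scalar outputs for simplicity

# QSR parameters of finite-gain systems: Q = -I/2, R = gamma^2 I / 2, S = 0
Q1, R1, S1 = -0.5 * I, 0.5 * g1**2 * I, 0.0 * I
Q2, R2, S2 = -0.5 * I, 0.5 * g2**2 * I, 0.0 * I

a = 0.5 * (g1**2 + 1.0 / g2**2)    # midpoint of [gamma1^2, 1/gamma2^2]
Qhat = np.block([[Q1 + a * R2, -S1 + a * S2.T],
                 [-S1.T + a * S2, R1 + a * Q2]])

print(np.linalg.eigvalsh(Qhat))    # both eigenvalues negative
```

Trying gains with γ1γ2 > 1 instead, no choice of a > 0 makes both diagonal blocks negative, which is the small gain theorem read off the matrix (9.27).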

9.9. NONLINEAR L2 GAIN 243

9.9 Nonlinear L2 Gain

Consider a system ψ of the form

ẋ = f(x) + g(x)u
y = h(x)    (9.31)

As discussed in Section 9.3, this system is finite-gain-stable with gain γ if it is dissipative with supply rate

w = ½(γ²‖u‖² − ‖y‖²).    (9.32)

Assuming that the storage function corresponding to this supply rate is differentiable, the differential dissipation inequality (9.5) implies that

φ̇(t) = (∂φ(x)/∂x) ẋ ≤ w(t) = ½γ²‖u‖² − ½‖y‖².    (9.33)

Thus, substituting (9.31) into (9.33), we have

(∂φ/∂x) f(x) + (∂φ/∂x) g(x)u ≤ ½γ²‖u‖² − ½‖y‖².

Adding and subtracting ½γ²uᵀu and (1/2γ²)(∂φ/∂x)ggᵀ(∂φ/∂x)ᵀ to the left-hand side, we obtain

(∂φ/∂x)f(x) + (∂φ/∂x)g(x)u = (∂φ/∂x)f(x) + (1/2γ²)(∂φ/∂x)ggᵀ(∂φ/∂x)ᵀ + ½γ²uᵀu − (γ²/2)‖u − (1/γ²)gᵀ(∂φ/∂x)ᵀ‖²    (9.34)

and substituting (9.34) into (9.33) results in

(∂φ/∂x)f(x) + (1/2γ²)(∂φ/∂x)ggᵀ(∂φ/∂x)ᵀ + ½‖y‖² ≤ (γ²/2)‖u − (1/γ²)gᵀ(∂φ/∂x)ᵀ‖².    (9.35)

244 CHAPTER 9. DISSIPATIVITY

This result is important. Since the right-hand side of (9.35) vanishes for u = (1/γ²)gᵀ(∂φ/∂x)ᵀ, if the system ψ given by (9.31) is finite-gain-stable with gain γ, then it must satisfy the so-called Hamilton-Jacobi inequality:

H ≝ (∂φ/∂x)f(x) + (1/2γ²)(∂φ/∂x)ggᵀ(∂φ/∂x)ᵀ + ½‖y‖² ≤ 0.    (9.36)

Finding a function φ that satisfies inequality (9.36) is, at best, very difficult, a process that resembles that of finding a suitable Lyapunov function. Often, we will be content with "estimating" an upper bound for γ. This can be done by "guessing" a function φ and then finding an approximate value for γ. According to (9.35), the true L2 gain of ψ, denoted γ*, is bounded above by γ:

0 < γ* ≤ γ.

Example 9.1 Consider the following system:

ẋ1 = −x1³ − x1x2² + βx1u
ẋ2 = −x1²x2 − x2³ + βx2u,   β > 0
y = x1² + x2²

that is,

f(x) = [ −x1³ − x1x2² ; −x1²x2 − x2³ ],   g(x) = [ βx1 ; βx2 ],   h(x) = x1² + x2².

To estimate the L2 gain, we consider the storage function "candidate"

φ(x) = ½(x1² + x2²).

With this function, the three terms in the Hamilton-Jacobi inequality (9.36) can be obtained as follows:

(i) (∂φ/∂x) f(x) = [x1, x2] [ −x1³ − x1x2² ; −x1²x2 − x2³ ] = −x1⁴ − 2x1²x2² − x2⁴ = −(x1² + x2²)² = −‖x‖⁴.

(ii) (∂φ/∂x) g = [x1, x2] [ βx1 ; βx2 ] = β(x1² + x2²) = β‖x‖², and therefore

(1/2γ²)(∂φ/∂x)ggᵀ(∂φ/∂x)ᵀ = (β²/2γ²)‖x‖⁴.

(iii)

    ½‖y‖² = ½(x1² + x2²)² = ½‖x‖⁴.

Combining these results, we have that

    H = −‖x‖⁴ + (β²/2γ²)‖x‖⁴ + ½‖x‖⁴ ≤ 0,

which is satisfied whenever β²/(2γ²) ≤ ½, that is, whenever γ ≥ β. Assuming that this is the case, the system has L2 gain less than or equal to β.

9.9.1 Linear Time-Invariant Systems

It is enlightening to consider the case of linear time-invariant systems as a special case. We have

    ẋ = Ax + Bu
    y = Cx

that is, f(x) = Ax, g(x) = B, and h(x) = Cx. Taking φ(x) = ½xᵀPx (P > 0, P = Pᵀ) and substituting in (9.36), we obtain

    H = xᵀ [PA + (1/2γ²) PBBᵀP + ½CᵀC] x ≤ 0.        (9.37)

Taking the transpose of (9.37), we obtain

    Hᵀ = xᵀ [AᵀP + (1/2γ²) PBBᵀP + ½CᵀC] x ≤ 0.        (9.38)

Thus, adding (9.37) and (9.38), we obtain

    H + Hᵀ = xᵀ [PA + AᵀP + (1/γ²) PBBᵀP + CᵀC] x ≤ 0.        (9.39)
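For the linear time-invariant case, the bound can be tested numerically. A classical equivalent test from the H-infinity literature (not stated in the text, but standard) is that a stable system (A, B, C) has L2 gain strictly less than γ if and only if the associated Hamiltonian matrix has no eigenvalues on the imaginary axis. A minimal sketch, assuming A is Hurwitz:

```python
import numpy as np

def l2_gain_upper_bound_holds(A, B, C, gamma):
    """True if the stable LTI system (A, B, C) has L2 gain < gamma.

    Standard Hamiltonian test: gamma is a strict upper bound iff the
    Hamiltonian matrix below has no purely imaginary eigenvalues.
    """
    H = np.block([[A, (B @ B.T) / gamma**2],
                  [-C.T @ C, -A.T]])
    eigs = np.linalg.eigvals(H)
    return bool(np.min(np.abs(eigs.real)) > 1e-9)

# First-order example dx/dt = -x + u, y = x: transfer function 1/(s+1),
# whose H-infinity norm (= L2 gain) equals 1.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(l2_gain_upper_bound_holds(A, B, C, 1.1))   # True
print(l2_gain_upper_bound_holds(A, B, C, 0.9))   # False
```

Bisecting on γ with this test recovers the H-infinity norm to any desired accuracy.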

Thus, the system ψ has a finite L2 gain less than or equal to γ provided that for some γ > 0

    PA + AᵀP + (1/2γ²) PBBᵀP + ½CᵀC ≤ 0.        (9.40)

Inequality (9.40) is well known and has played, and will continue to play, a very significant role in linear control theory. In the linear case, further analysis leads to a stronger result. Indeed, the equality

    PA + AᵀP + (1/2γ²) PBBᵀP + ½CᵀC = 0        (9.41)

is known as the Riccati equation. It can be shown that the linear time-invariant system ψ has a finite gain less than or equal to γ if and only if the Riccati equation (9.41) has a positive definite solution. See Reference [23] or [100].

9.9.2 Strictly Output Passive Systems

It was pointed out in Section 9.3 that strictly output passive systems have a finite L2 gain. We now discuss how to compute the L2 gain γ. Consider a system of the form

    ẋ = f(x) + g(x)u
    y = h(x)

and assume that there exists a differentiable function φ1 ≥ 0 satisfying

    (∂φ1/∂x) f(x) ≤ −ε hᵀ(x)h(x)        (9.42)
    (∂φ1/∂x) g(x) = hᵀ(x)        (9.43)

which implies that ψ is strictly output-passive. To estimate the L2 gain of ψ, we consider the Hamilton-Jacobi inequality (9.36):

    H ≝ (∂φ/∂x) f(x) + (1/2γ²)(∂φ/∂x) g gᵀ (∂φ/∂x)ᵀ + ½‖y‖² ≤ 0.        (9.44)

Letting φ = kφ1 for some k > 0 and substituting (9.42)-(9.43) into (9.44), we have that ψ has L2 gain less than or equal to γ provided that

    −kε hᵀ(x)h(x) + (1/2γ²) k² hᵀ(x)h(x) + ½ hᵀ(x)h(x) ≤ 0

or, extracting the common factor hᵀ(x)h(x),

    −kε + (1/2γ²)k² + ½ ≤ 0,

and choosing k = 1/ε we conclude that

    γ ≥ 1/ε.

Thus, the L2 gain of a strictly output passive system satisfying (9.42)-(9.43) is bounded above by 1/ε.

9.10 Some Remarks about Control Design

Control systems design is a difficult subject in which designers are exposed to a multiplicity of specifications and design constraints. One of the main reasons for the use of feedback, however, is its ability to reduce the effect of undesirable exogenous signals over the system's output. Therefore, over the years a lot of research has been focused on developing design techniques that employ this principle as the main design criterion.

To discuss control design in some generality, it is customary to replace the actual plant in the feedback configuration with a more general system known as the generalized plant. The standard setup is shown in Figure 9.4, where

    G = generalized plant,
    C = controller to be designed,
    u = control signal,
    y = measured output,
    d = "exogenous" signals (such as disturbances and sensor noise),
    z = output to be regulated (such as tracking errors and actuator signals).

Many problems can be cast in this form. More explicitly, control design can be viewed as solving the following problem: find a control law that

1. Stabilizes the closed-loop system, in some prescribed sense.

2. Reduces the effect of the exogenous input signals over the desired output, also in some specific sense.

[Figure 9.4: Standard setup; generalized plant G with inputs d, u and outputs z, y, in feedback with the controller C.]

To fix ideas and see that this standard setup includes the more "classical" feedback configuration, consider the following example:

Example 9.2 Consider the feedback system shown in Figure 9.5. Suppose that our interest is to design a controller C to reduce the effect of the exogenous disturbance d over the signals y2 and e2 (representing the input and output of the plant). This problem can indeed be studied using the standard setup of Figure 9.4. To see this, we must identify the inputs and outputs in the generalized plant G. In our case, the several variables in the standard setup of Figure 9.4 correspond to the following variables in Figure 9.5:

    y = e1 (the controller input)
    u = y1 (the controller output)
    d: disturbance (same signal in both Figure 9.4 and the generalized setup of Figure 9.5)
    z = [e2; y2], i.e., z is a vector whose components are the input and output of the plant.

[Figure 9.5: Feedback interconnection used in Example 9.2.]

A bit of work shows that the feedback system of Figure 9.5 can be redrawn as shown in Figure 9.6, which has the form of the standard setup of Figure 9.4. The control design problem can now be restated as follows. Find C such that it (i) stabilizes the feedback system of Figure 9.6, and (ii) reduces the effect of the exogenous signal d on the desired output z, in some specific sense. □

A lot of research has been conducted since the early 1980s on the synthesis of controllers that provide an "optimal" solution to problems such as the one just described. The properties of the resulting controllers depend in an essential manner on the spaces of functions used as inputs and outputs. Two very important cases are the following:

L2 signals: In this case the problem is to find a stabilizing controller that minimizes the L2 gain of the (closed-loop) system mapping d → z. For linear time-invariant systems, we saw that the L2 gain is the H∞ norm of the transfer function mapping d → z, and the theory behind the synthesis of these controllers is known as H∞ optimization. The H∞ optimization theory was initiated in 1981 by G. Zames [99], and the synthesis of H∞ controllers was solved during the 1980s, with several important contributions by several authors. See References [25] and [100] for a comprehensive treatment of the H∞ theory for linear time-invariant systems.

[Figure 9.6: Standard setup.]

L∞ signals: Similarly, in this case the problem is to find a stabilizing controller that minimizes the L∞ gain of the (closed-loop) system mapping d → z. For linear time-invariant systems, we saw that the L∞ gain is the L1 norm of the impulse response of the system mapping d → z, and the theory behind the synthesis of these controllers is known as L1 optimization. The L1 optimization theory was proposed in 1986 by M. Vidyasagar [89]. The first solution of the L1-synthesis problem was obtained in a series of papers by M. A. Dahleh and B. Pearson [16]-[19]. See also the survey [20] for a comprehensive treatment of the L1 theory for linear time-invariant systems.

All of these references deal exclusively with linear time-invariant systems. In the remainder of this chapter we present an introductory treatment of the L2 control problem for nonlinear systems.

9.11 Nonlinear L2-Gain Control

We look for a controller C that stabilizes the closed-loop system and minimizes the L2 gain of the mapping from d to z. Solving the nonlinear L2-gain design problem as just described is very difficult. Instead, we shall content ourselves with solving the following suboptimal control problem. Given a "desirable" exogenous signal attenuation level, denoted by γ1, find a control law C1 such that the mapping from d to z has an L2 gain less than or equal to γ1. If such a controller C1 exists, then we can choose γ2 < γ1 and find a new controller C2 such that the mapping from d to z has an L2 gain less than or equal to γ2. Iterating this procedure can lead to a controller C that approaches the "optimal" (i.e., minimum) γ. This problem can be seen as an extension of the H∞ optimization theory to the case of nonlinear plants, and therefore it is often referred to as the nonlinear H∞-optimization problem.

We will consider the standard configuration of Figure 9.4 and a nonlinear system ψ of the form

    ẋ = f(x, u, d)
    z = h(x, u, d)        (9.45)
    y = g(x, u, d)

where u and d represent the control and exogenous inputs, respectively, and y and z represent the measured and regulated outputs, respectively. We will consider the state feedback suboptimal L2-gain optimization problem; because the full state is assumed to be available for feedback, this problem is sometimes referred to as the full information case. We will consider a state space realization of the form

46) has a finite C2 gain < -y if and only if the Hamilton-Jacobi inequality 9{ given by b(x)bT (x) + 2 x_ i J has a solution 0 > 0. Assume that we obtain > 0 is a solution of L. with k > 2. Theorem 9.6 The closed-loop system of equation (9.47).47) T u= -bT (x) (a. DISSIPATIVITY x = a(x) + b(x)u + g(x)d.36) with f (x) = a(x) . d E R'' (9.46).46) z= [h(x)] where a.49) (9. u E II8"`. 0 . b. [bT(X) (g)T ] which implies (9. (9.) (x).50) into the Hamilton-Jacobi inequality (9.b(x)bT(x) ()T +g(x)d ()T (9. a(x) . See Reference [85] for the necessity part of the proof. and so the closed-loop system has a finite G2 gain ry.50) z= L -bT(x) Substituting (9. The control law is given by H 8xa(x) 8 [.252 CHAPTER 9. Substituting u into the system equations (9.48) Proof: We only prove sufficiency.b(x)bT (x) h(x) (x) [_bT(X) ( )T ] T results in H= ax +2 (a(x)_b(x)bT(x)_) [h(X)T T ) + 2ry2 axg(x)gT (x) (8 hx 8xb(x).49) and (9. g and h are assumed to be Ck.g(x)gT(x) - ll (Ocb)T + hT (x)h(x) < 0 (9.

(9.12. find the total energy E = K1 + K2 + P = K + P stored in the cart-pendulum system.1) (f) Computing the derivative of E along the trajectories of the system. Assuming for simplicity that the moment of inertia of the pendulum is negligible. (iv) has a finite G2 gain. (b) If the answer to part (a)(iv) is affirmative. (iii) strictly output-passive. (c) Find the potential energy P in the cart-pendulum system.1) Prove Lemma 9.2) Prove Lemma 9.9. and assuming that the output equation is y=±=X2 (a) Find the kinetic energy K1 of the cart.12 Exercises (9. a>0 X1-X X2 = xl (a) Determine whether i is (i) passive. (b) Find the kinetic energy K2 of the pendulum. (ii) strictly passive. (e) Defining variables _ q= x 9 M= L ml cos M+m mlcoso ml2B ' u_ f 0 show that the energy stored in the pendulum can be expressed as E = 24TM(q)4 +mgl(cos0 . EXERCISES 253 9.4) Consider the system -a21-22+u.3) Consider the pendulum-on-a-cart system discussed in Chapter 1. find an upper bound on the L2 gain. .2. (9.1. show that the pendulum-on-a-cart system with input u = f and output ± is a passive system. 1 (9. (d) Using the previous results.

8 follows Reference [31] very closely. The beauty of the Willems formulation is its generality. DISSIPATIVITY (9. Notes and References The theory of dissipativity of systems was initiated by Willems is his seminal paper [91]. (iii) strictly output-passive.1 is from Willems [91]. Unlike the classical notion of passivity.3 follows Theorem 6 in [91]. . See also Section 3. Theorem 9. The definition of QSR dissipativity is due to Hill and Moylan [30]. on which Section 9. Section 9.2 in van der Schaft [85].3 is based on Hill and Moylan's original work [30]. (a) Determine whether i is (i) passive. Theorem 9. Stability of feedback interconnections was studies in detail by Hill and Moylan.X2 )3x1 Q>0. which was introduced as an input-output concept. find an upper bound on the G2 gain. [29]. as well as [91]. [93]. Sections 9.9 and 9. Theorem 1. The connections between dissipativity and stability in the sense of Lyapunov were extensively explored by Willems. dissipativity can be interpreted as an input-output property. Nonlinear £2 gain control (or nonlinear H.11 follow van der Schaft [86]. [94]. state space realizations are central to the notion of dissipativity in Reference [91].6 follows Reference [30]..5) Consider the system -x1+x2+u.254 CHAPTER 9. Employing the QSR supply rate. Section 9. See also References [6] and [38] as well as the excellent monograph [85] for a comprehensive treatment of this beautiful subject. a>0 -x1x2 .1 is based. Reference [91] considers a very general class of nonlinear systems and defines dissipativity as an extension of the concept of passivity as well as supply rates and storage functions as generalizations of input power and stored energy.. control) is currently a very active area of research. and [87]. respectively. (ii) strictly passive. (iv) has a finite £2 gain. (b) If the answer to part (a)(iv) is affirmative. Section 9.

Chapter 10

Feedback Linearization

In this chapter we look at a class of control design problems broadly described as feedback linearization. The main problem to be studied is: Given a nonlinear dynamical system, find a transformation that renders a new dynamical system that is linear time-invariant. Here, by transformation we mean a control law plus possibly a change of variables. Once a linear system is obtained, a secondary control law can be designed to ensure that the overall closed-loop system performs according to the specifications. This time, however, the design is carried out using the new linear model and any of the well-established linear control design techniques.

Feedback linearization was a topic of much research during the 1970s and 1980s. Although many successful applications have been reported over the years, feedback linearization has a number of limitations that hinder its use, as we shall see. Even with these shortcomings, feedback linearization is a concept of paramount importance in nonlinear control theory. The intention of this chapter is to provide a brief introduction to the subject. For simplicity, we will limit our attention to single-input-single-output systems and consider only local results. See the references listed at the end of this chapter for a more complete coverage.

10.1 Mathematical Tools

Before proceeding, we need to review a few basic concepts from differential geometry. Throughout this chapter, whenever we write D ⊂ Rⁿ, we assume that D is an open and connected subset of Rⁿ. We have already encountered the notion of vector field in Chapter 1. A vector field

is a function f : D ⊂ Rⁿ → Rⁿ, that is, a mapping that assigns an n-dimensional vector to every point in the n-dimensional space. As we well know, a vector field is an n-dimensional "column." It is customary to apply the label covector field to the transpose of a vector field.

10.1.1 Lie Derivative

When dealing with stability in the sense of Lyapunov we made frequent use of the notion of "time derivative of a scalar function V along the trajectories of a system ẋ = f(x)." Namely, we have

    V̇ = (∂V/∂x) f(x) = ∇V f(x) = L_f V(x).        (10.1)

Thus, going back to Lyapunov functions, V̇ is merely the Lie derivative of V with respect to f(x). A slightly more abstract definition leads to the concept of Lie derivative.

Definition 10.1 Consider a scalar function h : D ⊂ Rⁿ → R and a vector field f : D ⊂ Rⁿ → Rⁿ. The Lie derivative of h with respect to f, denoted L_f h, is given by

    L_f h(x) = (∂h/∂x) f(x).

The Lie derivative notation is usually preferred whenever higher-order derivatives need to be calculated. Notice that, given two vector fields f, g : D ⊂ Rⁿ → Rⁿ, we have

    L_f h(x) = (∂h/∂x) f(x),   L_g L_f h(x) = L_g[L_f h(x)] = (∂(L_f h)/∂x) g(x),

and, in the special case f = g,

    L_f L_f h(x) = L_f² h(x) = (∂(L_f h)/∂x) f(x).

Throughout this chapter we will assume that all functions encountered are sufficiently smooth; that is, they have continuous partial derivatives of any required order.

Example 10.1 Let

    h(x) = ½(x1² + x2²),   f(x) = [−x2; −x1 − µ(1 − x1²)x2],   g(x) = [x1; x2].

Then we have

L_f h(x):

    L_f h(x) = (∂h/∂x) f(x) = [x1, x2] [−x2; −x1 − µ(1 − x1²)x2]
             = −x1x2 − x1x2 − µ(1 − x1²)x2² = −2x1x2 − µ(1 − x1²)x2².

L_g h(x):

    L_g h(x) = (∂h/∂x) g(x) = [x1, x2] [x1; x2] = x1² + x2².

L_f L_g h(x):

    L_f L_g h(x) = (∂(L_g h)/∂x) f(x) = [2x1, 2x2] [−x2; −x1 − µ(1 − x1²)x2]
                 = −4x1x2 − 2µ(1 − x1²)x2² = 2 L_f h(x).

10.1.2 Lie Bracket

Definition 10.2 Consider the vector fields f, g : D ⊂ Rⁿ → Rⁿ. The Lie bracket of f and g, denoted by [f, g], is the vector field defined by

    [f, g](x) = (∂g/∂x) f(x) − (∂f/∂x) g(x).        (10.2)
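Computations of this kind are easy to automate. The sketch below defines Lie-derivative and Lie-bracket helpers in sympy and applies them to fields of the type used in the examples of this section (taken here as f = [−x2, −x1 − µ(1 − x1²)x2]ᵀ and g = [x1, x2]ᵀ, an assumption consistent with the computations above).

```python
import sympy as sp

x1, x2, mu = sp.symbols('x1 x2 mu', real=True)
X = sp.Matrix([x1, x2])

f = sp.Matrix([-x2, -x1 - mu*(1 - x1**2)*x2])
g = sp.Matrix([x1, x2])
h = sp.Rational(1, 2)*(x1**2 + x2**2)

def lie_derivative(h, f, X):
    """L_f h = (dh/dx) f(x)."""
    return (sp.Matrix([h]).jacobian(X) * f)[0]

def lie_bracket(f, g, X):
    """[f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(X)*f - f.jacobian(X)*g

Lfh = sp.expand(lie_derivative(h, f, X))
Lgh = sp.expand(lie_derivative(h, g, X))
br = lie_bracket(f, g, X)
```

The same two helpers cover every Lie-derivative and Lie-bracket computation that appears in the rest of the chapter.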

Lemma 10.2 Letting -x2 f(x) = L .2 : D C R -+ 1R . where L[f.g](x) = 8xf(x) 0 [1 8xg(x) [-x2 0 °1 1 -x1 .L9Lfh.x2 j)x2 g(x) _ x1 X2 1 we have [f.258 CHAPTER 10. -XI . . algl + a2g2] = a1 [f. adfg ad3 [f.xi) xl X2 1 L O 2pxix2 The following notation. adf2 g] 9]]] The following lemma outlines several useful properties of Lie brackets.µ(1 . g] ad fg(x) and adfg = g adfg = Thus. g]] = [f.gj h represents the Lie derivative of h with respect to the vector [f. FEEDBACK LINEARIZATION Example 10.all . we have (i) Bilinearity: (a) (b) [a1 f1 + a2f2. g] [f.gl h = L fLgh(x) . [f.f] (iii) Jacobi identity: Given vector fields f and g and a real-valued function h. g] + a2 [f2. adfg] _ [f. g2] (ii) Skew commutativity: [f. is useful when computing repeated bracketing: (x)1 f [f. ad:f 1g] = [f.xl)x2 2µxlx2 -1 -µ(1 . we obtain L[f. g].g] = -[g. g] = al [fl. frequently used in the literature. gl] + a2 [f.1 : Given f1.

MATHEMATICAL TOOLS 259 10. 11f (x) 11 = 00- The following lemma is very useful when checking whether a function f (x) is a local diffeomorphism. for a linear time-invariant system of the form ±=Ax+Bu we can define a coordinate change z = Tx. or a local diffeomorphism. defined by 1-1(f(x)) = x dxED exists and is continuously differentiable. The function f is said to be a global diffeomorphism if in addition (i) D=R .2 Let f (x) : D E lW -+ 1R" be continuously differentiable on D. Lemma 10. 10. in the new variable z the state space realization takes the form z=TAT-1z+TBu = Az+Bu.1.1. if (i) it is continuously differentiable on D.3 : (Diffeomorphism) A function f : D C Rn -4 Rn is said to be a diffeomorphism on D.3 Diffeomorphism Definition 10. Proof: An immediate consequence of the inverse function theorem. then f (x) is a diffeomorphism in a subset w C D. . For example. and (ii) limx'. If the Jacobian matrix D f = V f is nonsingular at a point xo E D. where T is a nonsingular constant matrix.1. Thus. and (ii) its inverse f -1.4 Coordinate Transformations For a variety of reasons it is often important to perform a coordinate change to transform a state space realization into another.10.

2x1 U 2x2 x2 . knowing z x = T-1(z).260 CHAPTER 10. that is. Example 10. we can always recover the original state space realization.2x1x2 + xi 2x2 4x1x2 and 0 1 z = X2 x1 + 0 0 u +X2 . assuming that T(x) is a diffeomorphism and defining z = T(x). FEEDBACK LINEARIZATION This transformation was made possible by the nonsingularity of T. Also because of the existence of T-1. Indeed. 1 0 1 0 0 1 0 x1 1 0 1 0 0 1 1 z = 2x1 0 + 2x1 0 .3 Consider the following state space realization 0 x1 U X2 . we have that z= ax= [f (x) + g(x)u] Given that T is a diffeomorphism.2x1x2 + X1 and consider the coordinate transformation: x1 x = T(x) = xl +X2 x2 2+X3 1 OIT 0 1 0 0 1 (x) = 2x1 0 2x2 Therefore we have. Given a nonlinear state space realization of an affine system of the form x = f (x) + g(x)u a diffeomorphism T(x) can be used to perform a coordinate transformation. we have that 3T-1 from which we can recover the original state space realization.

10.1. MATHEMATICAL TOOLS

and substituting x1 = z1, x2 = z2 − z1, we obtain

    ż = [0; z1; z2] + [1; 0; 0] u.
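The bookkeeping in a coordinate-transformation computation of this kind can be verified mechanically. The sketch below replays the calculation for an illustrative third-order system and transformation (both assumed here, in the spirit of the example above):

```python
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u', real=True)
X = sp.Matrix([x1, x2, x3])

f = sp.Matrix([0, x1, x1 + x2])        # drift vector field (assumed)
g = sp.Matrix([1, -1, -2*x1])          # input vector field (assumed)
T = sp.Matrix([x1, x1 + x2, x1**2 + x3])   # coordinate change z = T(x)

# zdot = (dT/dx)(f + g u)
zdot = T.jacobian(X) * (f + g*u)
z1, z2 = x1, x1 + x2                   # first two z-coordinates
```

The assertions confirm that ż1 = u, ż2 = z1, ż3 = z2: the nonlinear terms cancel identically under the change of coordinates.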

10.1.5 Distributions

Throughout the book we have made constant use of the concept of vector space. The backbone of linear algebra is the notion of linear independence in a vector space. We recall, from Chapter 2, that a finite set of vectors S = {x1, x2, ..., xp} in Rⁿ is said to be linearly dependent if there exists a corresponding set of real numbers {λi}, not all zero, such that

    Σᵢ λᵢxᵢ = λ1x1 + λ2x2 + ... + λpxp = 0.

On the other hand, if Σᵢ λᵢxᵢ = 0 implies that λᵢ = 0 for each i, then the set {xᵢ} is said to be linearly independent.

Given a set of vectors S = {x1, x2, ..., xp} in Rⁿ, a linear combination of those vectors defines a new vector x ∈ Rⁿ; that is, given real numbers λ1, λ2, ..., λp,

    x = λ1x1 + λ2x2 + ... + λpxp ∈ Rⁿ.

The set of all linear combinations of vectors in S generates a subspace M of Rⁿ known as the span of S and denoted by span{S} = span{x1, x2, ..., xp}, i.e.,

    span{x1, x2, ..., xp} = {x ∈ Rⁿ : x = λ1x1 + λ2x2 + ... + λpxp, λᵢ ∈ R}.

The concept of distribution is closely related to this construction. Now consider a differentiable function f : D ⊂ Rⁿ → Rⁿ. As we well know, this function can be interpreted as a vector field that assigns the n-dimensional vector f(x) to each point x ∈ D. Now consider p vector fields f1, f2, ..., fp on D ⊂ Rⁿ. At any fixed point x ∈ D the functions fᵢ generate vectors f1(x), f2(x), ..., fp(x) ∈ Rⁿ, and thus

    Δ(x) = span{f1(x), ..., fp(x)}

is a subspace of Rⁿ. We can now state the following definition.

Definition 10.4 (Distribution) Given an open set D ⊂ Rⁿ and smooth functions f1, f2, ..., fp : D → Rⁿ, we will refer to as a smooth distribution Δ the assignment, to each x ∈ D, of the subspace Δ(x) = span{f1(x), f2(x), ..., fp(x)} spanned by the values of the fᵢ at x.

We will denote by Δ(x) the "value" of Δ at the point x. The dimension of the distribution Δ(x) at a point x ∈ D is the dimension of the subspace Δ(x). It then follows that

    dim(Δ(x)) = rank([f1(x), f2(x), ..., fp(x)]);

i.e., the dimension of Δ(x) is the rank of the matrix [f1(x), f2(x), ..., fp(x)].

Definition 10.5 A distribution Δ defined on D ⊂ Rⁿ is said to be nonsingular if there exists an integer d such that

    dim(Δ(x)) = d   ∀x ∈ D.

If this condition is not satisfied, then Δ is said to be of variable dimension.

Definition 10.6 A point x0 of D is said to be a regular point of the distribution Δ if there exists a neighborhood D0 of x0 with the property that Δ is nonsingular on D0. Each point of D that is not a regular point is said to be a singularity point.

Example 10.4 Let D = {x ∈ R² : x1 + x2 ≠ 0} and consider the distribution Δ = span{f1, f2}, where

    f1 = [1; 0],   f2 = [1; x1 + x2].

We have

    dim(Δ(x)) = rank([1 1; 0 x1+x2]).

Then Δ has dimension 2 everywhere in R², except along the line x1 + x2 = 0. It follows that Δ is nonsingular on D and that every point of D is a regular point.

Example 10.5 Consider the same distribution used in the previous example, but this time with D = R². From our analysis in the previous example, we have that Δ is not regular since dim(Δ(x)) is not constant over D. Every point on the line x1 + x2 = 0 is a singular point.
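The dimension computation in these two examples amounts to a symbolic rank. A minimal sketch (with the reading f1 = [1, 0]ᵀ, f2 = [1, x1 + x2]ᵀ assumed from the example above):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f1 = sp.Matrix([1, 0])
f2 = sp.Matrix([1, x1 + x2])

M = f1.row_join(f2)                        # [f1 f2]
rank_generic = M.rank()                    # dimension away from the singular set
rank_on_line = M.subs(x2, -x1).rank()      # dimension on the line x1 + x2 = 0
```

The generic rank is 2, dropping to 1 exactly on the line x1 + x2 = 0, which is why the distribution is nonsingular on D but of variable dimension on all of R².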


Definition 10.7 (Involutive Distribution) A distribution Δ is said to be involutive if g1 ∈ Δ and g2 ∈ Δ ⟹ [g1, g2] ∈ Δ.

It then follows that Δ = span{f1, f2, ..., fp} is involutive if and only if

    rank([f1(x), ..., fp(x)]) = rank([f1(x), ..., fp(x), [fᵢ, fⱼ](x)]),   ∀x and all i, j.

Example 10.6 Let D = R³ and Δ = span{f1, f2}, where

    f1 = [x2; 1; 0],   f2 = [1; 0; x2].

Then it can be verified that dim(Δ(x)) = 2 ∀x ∈ D. We also have that

    [f1, f2] = (∂f2/∂x) f1 − (∂f1/∂x) f2 = [0; 0; 1].

Therefore Δ is involutive if and only if

    rank([f1(x), f2(x)]) = rank([f1(x), f2(x), [f1, f2](x)])   ∀x.

This, however, is not the case, since

    rank([x2 1; 1 0; 0 x2]) = 2   and   rank([x2 1 0; 1 0 0; 0 x2 1]) = 3,

and hence Δ is not involutive.

Definition 10.8 (Complete Integrability) A linearly independent set of vector fields f1, ..., fp on D ⊂ Rⁿ is said to be completely integrable if for each x0 ∈ D there exist a neighborhood N of x0 and n − p real-valued smooth functions h1(x), h2(x), ..., h_{n−p}(x) satisfying the partial differential equations

    (∂hⱼ/∂x) fᵢ(x) = 0,   1 ≤ i ≤ p, 1 ≤ j ≤ n − p,

and the gradients ∇hᵢ are linearly independent.
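The involutivity test of Example 10.6 (compare the rank of [f1 f2] with and without the bracket column) can be scripted directly. The fields below follow the reading of the example used here (f1 = [x2, 1, 0]ᵀ, f2 = [1, 0, x2]ᵀ, an assumption):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = sp.Matrix([x1, x2, x3])

def lie_bracket(f, g, X):
    # [f, g] = (dg/dx) f - (df/dx) g
    return g.jacobian(X)*f - f.jacobian(X)*g

f1 = sp.Matrix([x2, 1, 0])
f2 = sp.Matrix([1, 0, x2])
br = lie_bracket(f1, f2, X)

M2 = f1.row_join(f2)               # [f1 f2]
M3 = M2.row_join(br)               # [f1 f2 [f1,f2]]
involutive = (M3.rank() == M2.rank())
```

Adding the bracket column raises the rank from 2 to 3, so the bracket leaves the span and the distribution is not involutive.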


The following result, known as the Frobenius theorem, will be very important in later sections.

Theorem 10.1 (Frobenius Theorem) Let f1, f2, ..., fp be a set of linearly independent vector fields. The set is completely integrable if and only if it is involutive.

Proof: The proof is omitted. See Reference [36].

Example 10.7 [36] Consider the set of partial differential equations

    2x3 (∂h/∂x1) − (∂h/∂x2) = 0
    −x1 (∂h/∂x1) − 2x2 (∂h/∂x2) + x3 (∂h/∂x3) = 0

which can be written as

    ∇h [f1 f2] = 0

with

    f1 = [2x3; −1; 0],   f2 = [−x1; −2x2; x3].

To determine whether the set of partial differential equations is solvable or, equivalently, whether {f1, f2} is completely integrable, we consider the distribution Δ defined as follows:

    Δ = span{ [2x3; −1; 0], [−x1; −2x2; x3] }.

It can be checked that Δ has dimension 2 everywhere on the set D defined by D = {x ∈ R³ : x1² + x3² ≠ 0}. Computing the Lie bracket [f1, f2], we obtain

    [f1, f2] = [−4x3; 2; 0]

and thus

    [f1  f2  [f1, f2]] = [2x3  −x1  −4x3; −1  −2x2  2; 0  x3  0],

which has rank 2 for all x ∈ R³. It follows that the distribution is involutive, and thus it is completely integrable on D, by the Frobenius theorem.
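The involutivity claim in Example 10.7 can be confirmed symbolically; here one even finds [f1, f2] = −2 f1, so the bracket lies in the distribution trivially:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = sp.Matrix([x1, x2, x3])

f1 = sp.Matrix([2*x3, -1, 0])
f2 = sp.Matrix([-x1, -2*x2, x3])

# [f1, f2] = (df2/dx) f1 - (df1/dx) f2
br = f2.jacobian(X)*f1 - f1.jacobian(X)*f2
M = f1.row_join(f2).row_join(br)     # adding the bracket column
```

Since the bracket is a scalar multiple of f1, appending it cannot raise the rank, which is exactly the Frobenius involutivity condition.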

10.2 Input-State Linearization

Throughout this section we consider dynamical systems of the form

    ẋ = f(x) + g(x)u

and investigate the possibility of using a state feedback control law plus a coordinate transformation to transform this system into one that is linear time-invariant. We will see that not every system can be transformed by this technique. To grasp the idea, we start our presentation with a very special class of systems for which an input-state linearization law is straightforward to find. For simplicity, we will restrict our presentation to single-input systems.

10.2.1 Systems of the Form ẋ = Ax + Bω(x)[u − φ(x)]

First consider a nonlinear system of the form

    ẋ = Ax + Bω(x)[u − φ(x)]        (10.3)

where A ∈ Rⁿˣⁿ, B ∈ Rⁿˣ¹, φ : D ⊂ Rⁿ → R, ω : D ⊂ Rⁿ → R. We also assume that ω(x) ≠ 0 ∀x ∈ D, and that the pair (A, B) is controllable. Under these conditions, it is straightforward to see that the control law

    u = φ(x) + ω⁻¹(x)v        (10.4)

renders the system

    ẋ = Ax + Bv,

which is linear time-invariant and controllable.

The beauty of this approach is that it splits the feedback effort into two components that serve very different purposes.


1- The feedback law (10.4) was obtained with the sole purpose of linearizing the original state equation. The resulting linear time-invariant system may or may not have "desirable" properties. Indeed, the resulting system may or may not be stable, and may or may not behave as required by the particular design.

2- Once a linear system is obtained, a secondary control law can be applied to stabilize the resulting system, or to impose any desirable performance. This secondary law, however, is designed using the resulting linear time-invariant system, thus taking advantage of the very powerful and much simpler techniques available for control design of linear systems.

Example 10.8 First consider the nonlinear mass-spring system of Example 1.2:

    ẋ1 = x2
    ẋ2 = −(k/m)x1 − (k a²/m)x1³ + (1/m)u

which can be written in the form

    ẋ = [0 1; −k/m 0] x + [0; 1/m] (u − k a² x1³).

Clearly, this system is of the form (10.3) with ω = 1 and φ(x) = k a² x1³. It then follows that the linearizing control law is u = k a² x1³ + v.
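The cancellation behind a linearizing law of this kind can be checked symbolically. The sketch below assumes the plant acceleration has the form used here, ẋ2 = −(k/m)x1 − (ka²/m)x1³ + u/m, and verifies that the cubic spring term drops out:

```python
import sympy as sp

x1, v, k, m, a, u = sp.symbols('x1 v k m a u', real=True)

# Plant acceleration (assumed form) and linearizing law u = k a^2 x1^3 + v
x2dot = -(k/m)*x1 - (k*a**2/m)*x1**3 + u/m
u_lin = k*a**2*x1**3 + v

closed = sp.expand(x2dot.subs(u, u_lin))
# closed-loop acceleration: -(k/m) x1 + v/m  (linear in x1 and v)
```

After substitution the closed-loop dynamics are linear in (x1, v), which is exactly the point of the feedback component (10.4).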

Example 10.9 Now consider the system

    ẋ1 = x2
    ẋ2 = −ax1 + bx2 + cos x1 (u − x2)

Once again, this system is of the form (10.3) with ω = cos x1 and φ(x) = x2. Substituting into (10.4), we obtain the linearizing control law:

    u = x2 + (1/cos x1) v,

which is well defined for −π/2 < x1 < π/2. □


10.2.2 Systems of the Form ẋ = f(x) + g(x)u

Now consider the more general case of affine systems of the form

    ẋ = f(x) + g(x)u.        (10.5)

Because the system (10.5) does not have the simple form (10.3), there is no obvious way to construct the input-state linearization law. Moreover, it is not clear in this case whether such a linearizing control law actually exists. We will pursue the input-state linearization of these systems as an extension of the previous case. Before proceeding to do so, we formally introduce the notion of input-state linearization.

Definition 10.9 A nonlinear system of the form (10.5) is said to be input-state linearizable if there exist a diffeomorphism T : D ⊂ Rⁿ → Rⁿ defining the coordinate transformation

    z = T(x)        (10.6)

and a control law of the form

    u = φ(x) + ω⁻¹(x)v        (10.7)

that transform (10.5) into a state space realization of the form

    ż = Az + Bv.

We now look at this idea in more detail. Assume that, after the coordinate transformation (10.6), the system (10.5) takes the form

    ż = Az + Bω̄(z)[u − φ̄(z)]        (10.8)
      = Az + Bω(x)[u − φ(x)],

where ω̄(z) = ω(T⁻¹(z)) and φ̄(z) = φ(T⁻¹(z)). We have

    ż = (∂T/∂x) ẋ = (∂T/∂x)[f(x) + g(x)u].        (10.9)

Substituting (10.6) and (10.9) into (10.8), we have that

    (∂T/∂x)[f(x) + g(x)u] = AT(x) + Bω(x)[u − φ(x)]        (10.10)

must hold ∀x and u of interest. Equation (10.10) is satisfied if and only if

    (∂T/∂x) f(x) = AT(x) − Bω(x)φ(x)        (10.11)

    (∂T/∂x) g(x) = Bω(x).        (10.12)


From here we conclude that any coordinate transformation T(x) that satisfies the system of differential equations (10.11)-(10.12) for some φ, ω, A, and B transforms, via the coordinate change z = T(x), the system

    ẋ = f(x) + g(x)u        (10.13)

into one of the form

    ż = Az + Bω̄(z)[u − φ̄(z)].        (10.14)

Moreover, any coordinate transformation z = T(x) that transforms (10.13) into (10.14) must satisfy the system of equations (10.11)-(10.12).

Remarks: The procedure just described allows for a considerable amount of freedom when selecting the coordinate transformation. Consider the case of single-input systems, and recall that our objective is to obtain a system of the form

    ż = Az + Bω(x)[u − φ(x)].

The A and B matrices in this state space realization are, however, non-unique, and therefore so is the diffeomorphism T. Assuming that the matrices (A, B) form a controllable pair, we can assume, without loss of generality, that (A, B) are in the following so-called controllable form:

    Ac = [0 1 0 ... 0; 0 0 1 ... 0; ...; 0 0 0 ... 1; 0 0 0 ... 0]  (n×n),
    Bc = [0; 0; ...; 0; 1]  (n×1).

Letting

    T(x) = [T1(x); T2(x); ...; Tn(x)]  (n×1),

with A = Ac, B = Bc, and z = T(x), the right-hand side of equations (10.11)-(10.12) becomes

    Ac T(x) − Bc ω(x)φ(x) = [T2(x); T3(x); ...; Tn(x); −ω(x)φ(x)]        (10.15)

and

    Bc ω(x) = [0; 0; ...; 0; ω(x)].        (10.16)

Substituting (10.15) and (10.16) into (10.11) and (10.12), respectively, we have that

    (∂T1/∂x) f(x) = T2(x)
    (∂T2/∂x) f(x) = T3(x)
    ⋮                                (10.17)
    (∂T_{n−1}/∂x) f(x) = Tn(x)
    (∂Tn/∂x) f(x) = −φ(x)ω(x)

and

    (∂T1/∂x) g(x) = 0
    (∂T2/∂x) g(x) = 0
    ⋮                                (10.18)
    (∂T_{n−1}/∂x) g(x) = 0
    (∂Tn/∂x) g(x) = ω(x) ≠ 0.

Thus the components T1, T2, ..., Tn of the coordinate transformation T must be such that

(i) (∂Tᵢ/∂x) g(x) = 0, i = 1, 2, ..., n − 1, and (∂Tn/∂x) g(x) ≠ 0.

(ii) (∂Tᵢ/∂x) f(x) = T_{i+1}(x), i = 1, 2, ..., n − 1.

(iii) The functions φ and ω are given by

    ω(x) = (∂Tn/∂x) g(x),   φ(x) = − [(∂Tn/∂x) f(x)] / [(∂Tn/∂x) g(x)].

10.3 Examples

Example 10.10 Consider the system

    ẋ = [e^{x2} − 1; a x1] + [0; 1] u = f(x) + g(x)u.

To find a feedback linearizing law, we seek a transformation T = [T1, T2]ᵀ such that

    (∂T1/∂x) g = 0        (10.19)
    (∂T2/∂x) g ≠ 0        (10.20)

with

    (∂T1/∂x) f(x) = T2.        (10.21)

In our case, (10.19) implies that

    (∂T1/∂x) g = [∂T1/∂x1, ∂T1/∂x2] [0; 1] = ∂T1/∂x2 = 0,

so that T1 = T1(x1) (independent of x2). Taking account of (10.21), we have that

    (∂T1/∂x) f(x) = T2
    [∂T1/∂x1, ∂T1/∂x2] f(x) = (∂T1/∂x1)(e^{x2} − 1) = T2   ⟹   T2 = (∂T1/∂x1)(e^{x2} − 1).

To check that (10.20) is satisfied we notice that

    (∂T2/∂x) g = [∂T2/∂x1, ∂T2/∂x2] [0; 1] = ∂T2/∂x2 = (∂T1/∂x1) e^{x2} ≠ 0

provided that ∂T1/∂x1 ≠ 0. Thus we can choose

    T1(x) = x1,

which results in

    T(x) = [x1; e^{x2} − 1].

Notice that this coordinate transformation maps the equilibrium point at the origin in the x plane into the origin in the z plane. The functions φ and ω can be obtained as follows:

    ω = (∂T2/∂x) g(x) = e^{x2},   φ = − [(∂T2/∂x) f(x)] / [(∂T2/∂x) g(x)] = −a x1.

It is easy to verify that, in the z coordinates

    z1 = x1,   z2 = e^{x2} − 1,

we obtain

    ż1 = z2
    ż2 = a z1 z2 + a z1 + (z2 + 1)u,

which is of the form

    ż = Az + Bω̄(z)[u − φ̄(z)]

with

    A = Ac = [0 1; 0 0],   B = Bc = [0; 1],   ω̄(z) = z2 + 1,   φ̄(z) = −a z1.  □
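The whole computation in Example 10.10 can be replayed symbolically; the assertions below check that z = T(x) does produce the claimed z-dynamics:

```python
import sympy as sp

x1, x2, a, u = sp.symbols('x1 x2 a u', real=True)

f = sp.Matrix([sp.exp(x2) - 1, a*x1])
g = sp.Matrix([0, 1])
T = sp.Matrix([x1, sp.exp(x2) - 1])        # z = T(x)

# zdot = (dT/dx)(f + g u)
zdot = T.jacobian(sp.Matrix([x1, x2])) * (f + g*u)
z1, z2 = x1, sp.exp(x2) - 1
```

The two identities ż1 = z2 and ż2 = a z1 z2 + a z1 + (z2 + 1)u hold exactly, confirming the transformed realization.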

Example 10.11 (Magnetic Suspension System)
Consider the magnetic suspension system of Chapter 1. The dynamic equations are

    ẋ1 = x2
    ẋ2 = g − (k/m)x2 − (aµx3²)/(2m(1 + µx1)²)
    ẋ3 = (1/λ(x1)) [ −Rx3 + (aµ x2 x3)/(1 + µx1)² + u ],

where λ(x1) denotes the position-dependent winding inductance,

24 ) with a1f(x) = T2 a 1(X) = T 's 2 (10. To proceed we arbitrarily choose T1 = x1.26) a 2 AX) = -q(x)w(x). We have that af(x)=T3 2 and substituting values. To find a feedback linearizing law.27) [] 0 0 r1+ 1+µxi i) c711 8x3 A ) =0 so T1 is not a function of x3.26).25) (10. we seek a transformation T = [T1 T2 T3]T such that 5g(x) = l 0 0 (10.27).22) implies that (10.25) implies that ax 1(X) = T2 = [1 0 011(X) = T2 and thus T2 = x2. we have that T3 k =g-mx2 2m1+ ( aµx3 {tx1)2.22) 2 g ( x) = ( 10 23 ) .22)-(10. We now turn to equation (10. Equation (10.272 CHAPTER 10. We will need to verify that this choice satisfies the remaining linearizing conditions (10. Equation (10. FEEDBACK LINEARIZATION which is of the form i = j (x) + g(x)u. . 2 g( x ) 0 (10 .
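The chain T1 = x1, T2 = L_f T1, T3 = L_f T2 can be verified symbolically. The sketch below uses the magnetic suspension model as written above (with the inductance parameter rendered as `lam` for λ); the symbol names are otherwise those of the model.

```python
# Symbolic check of T1 = x1, T2 = x2, T3 = L_f T2 for the magnetic suspension model.
import sympy as sp

x1, x2, x3, grav, k, m, mu, lam, R = sp.symbols('x1 x2 x3 g k m mu lam R', positive=True)
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([
    x2,
    grav - k/m*x2 - lam*mu*x3**2/(2*m*(1 + mu*x1)**2),
    (1 + mu*x1)/lam * (-R*x3 + lam*mu*x2*x3/(1 + mu*x1)**2),
])
gvec = sp.Matrix([0, 0, (1 + mu*x1)/lam])

T1 = x1
T2 = sp.Matrix([T1]).jacobian(x).dot(f)      # = x2
T3 = sp.Matrix([T2]).jacobian(x).dot(f)      # = g - (k/m)x2 - lam*mu*x3^2/(2m(1+mu*x1)^2)
w = sp.Matrix([T3]).jacobian(x).dot(gvec)    # = -mu*x3/(m*(1+mu*x1)), nonzero iff x3 != 0
print(T2, sp.simplify(T3), sp.simplify(w))
```

The last line confirms that (∂T3/∂x) g(x) vanishes exactly on the surface x3 = 0, which is why the linearizing conditions hold only on D = {x : x3 ≠ 0}.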

To verify that (10.23) and (10.24) are satisfied, we proceed as follows:

(∂T2/∂x) g(x) = [0 1 0] g(x) = 0

so that (10.23) is satisfied. Similarly,

(∂T3/∂x) g(x) = - μx3 / (m(1 + μx1)) ≠ 0

provided that x3 ≠ 0. Therefore all conditions are satisfied in D = {x ∈ R³ : x3 ≠ 0}. The coordinate transformation is

T(x) = [ x1 ; x2 ; g - (k/m)x2 - λμx3²/(2m(1 + μx1)²) ].

The functions φ and w are given by

w(x) = (∂T3/∂x) g(x),   φ(x) = - (∂T3/∂x) f(x) / ((∂T3/∂x) g(x)).  □

10.4 Conditions for Input-State Linearization

In this section we consider a system of the form

x' = f(x) + g(x)u   (10.28)

where f, g : D → Rⁿ, and discuss under what conditions on f and g this system is input-state linearizable.

Theorem 10.2 The system (10.28) is input-state linearizable on D0 ⊂ D if and only if the following conditions are satisfied:

(i) The vector fields {g(x), ad_f g(x), ..., ad_f^{n-1} g(x)} are linearly independent in D0. Equivalently, the matrix

C = [g(x), ad_f g(x), ..., ad_f^{n-1} g(x)]_{n×n}

has rank n for all x ∈ D0.

(ii) The distribution Δ = span{g, ad_f g, ..., ad_f^{n-2} g} is involutive in D0.

Proof: See the Appendix.

Example 10.12 Consider again the system of Example 10.10:

x' = [ e^{x2} - 1 ; a x1 ] + [ 0 ; 1 ] u = f(x) + g(x)u.

Straightforward computations show that

ad_f g = [f, g] = (∂g/∂x) f(x) - (∂f/∂x) g(x) = - [ e^{x2} ; 0 ].

Thus

{g, ad_f g} = { [ 0 ; 1 ], [ -e^{x2} ; 0 ] }

and rank(C) = 2, ∀x ∈ R². Also the distribution Δ is given by

Δ = span{g} = span{ [ 0 ; 1 ] }

which is clearly involutive in R². Thus conditions (i) and (ii) of Theorem 10.2 are satisfied ∀x ∈ R².  □

Example 10.13 Consider the linear time-invariant system

x' = Ax + Bu

i.e., f(x) = Ax and g(x) = B. Straightforward computations show that

ad_f g = [f, g] = (∂g/∂x) f(x) - (∂f/∂x) g(x) = -AB
ad_f² g = [f, [f, g]] = A²B
  ...
ad_f^k g = (-1)^k A^k B.
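The bracket computation of Example 10.12 is easy to automate. The sketch below defines the Lie bracket [f, g] = (∂g/∂x) f - (∂f/∂x) g in sympy and checks condition (i) of Theorem 10.2 for that system:

```python
# Condition (i) of Theorem 10.2 for the system of Example 10.12.
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a', nonzero=True)
x = sp.Matrix([x1, x2])
f = sp.Matrix([sp.exp(x2) - 1, a * x1])
g = sp.Matrix([0, 1])

def lie_bracket(f, g, x):
    # [f, g] = (dg/dx) f - (df/dx) g
    return g.jacobian(x) * f - f.jacobian(x) * g

adfg = lie_bracket(f, g, x)          # = [-exp(x2), 0]^T
C = sp.Matrix.hstack(g, adfg)
print(adfg.T, C.rank())
```

Since e^{x2} never vanishes, rank(C) = 2 everywhere, in agreement with the hand computation.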

Notice also that condition (ii) is trivially satisfied for any linear time-invariant system, since the vector fields involved are constant and so Δ is always involutive. For the linear time-invariant system we have

{g, ad_f g, ..., ad_f^{n-1} g} = {B, -AB, A²B, ..., (-1)^{n-1} A^{n-1} B}

and therefore the vectors {g(x), ad_f g(x), ..., ad_f^{n-1} g(x)} are linearly independent if and only if the matrix

C = [B, AB, A²B, ..., A^{n-1}B]_{n×n}

has rank n. Therefore, for linear systems condition (i) of Theorem 10.2 is equivalent to the controllability of the pair (A, B).

10.5 Input-Output Linearization

So far we have considered the input-state linearization problem, where the interest is in linearizing the mapping from input to state. Often, however, our interest is in a certain output variable rather than the state, such as in a tracking control problem. In this section we consider the problem of finding a control law that renders a linear differential equation relating the input u to the output y. Consider the system

x' = f(x) + g(x)u
y = h(x)   (10.29)

where f, g : D ⊂ Rⁿ → Rⁿ and h : D ⊂ Rⁿ → R. Linearizing the state equation, as in the input-state linearization problem, does not necessarily imply that the resulting map from input u to output y is linear. The reason, of course, is that when deriving the coordinate transformation used to linearize the state equation we did not take into account the nonlinearity in the output equation. To get a better grasp of this principle, we consider the following simple example.

Example 10.14 Consider the system

x1' = x2
x2' = -x1 - a x1 x2 + (x2 + 1)u.

We are interested in the input-output relationship, so we start by considering the output equation y = x1. Differentiating, we obtain

y' = x1' = x2

which does not contain u. Differentiating once again, we obtain

y'' = x2' = -x1 - a x1 x2 + (x2 + 1)u.

Thus, letting

u = (1/(x2 + 1)) [v + a x1 x2]   (x2 ≠ -1)

we obtain

y'' = -y + v,  or  y'' + y = v

which is a linear differential equation relating y and the new input v. Once this linear system is obtained, linear control techniques can be employed to complete the design.

This idea can be easily generalized. Given a system of the form (10.29), where f, g : D ⊂ Rⁿ → Rⁿ and h : D ⊂ Rⁿ → R are sufficiently smooth, the approach used to obtain a linear input-output relationship can be summarized as follows. Differentiate the output equation to obtain

y' = (∂h/∂x) x' = (∂h/∂x) f(x) + (∂h/∂x) g(x) u = L_f h(x) + L_g h(x) u.

There are two cases of interest:

CASE (1): L_g h(x) ≠ 0, ∀x ∈ D. In this case we can define the control law

u = (1/L_g h(x)) [-L_f h + v]

which renders the linear differential equation y' = v.

CASE (2): L_g h(x) = 0, ∀x ∈ D. In this case we continue to differentiate y until u appears explicitly:

1.30) is important and is called the relative degree of the system. y(r) = Lfh(x) + L9Lf-ll h(x)u If L9Lfh(x) = 0. Letting u= L9Lf llh(x) 1 [-Lrh+v] we obtain the linear differential equation y(r) = v. (10. Thus. we can define C9T1 de f y = ax f(x) = T2 y(2) = a 22[f(x) + g(x)ul. Definition 10. if the relative degree is equal to the number of states.30) The number of differentiations of y required to obtain (10.10 A system of the form x = f (x) + g(x)u f.10. Vx E Do Vx E Do L9Lf lh(x) # 0 Remarks: (a) Notice that if r = n. 22g(x) = 0 . then denoting h(x) = Tl (x). that is. 1 The assumption r = n implies that Og(x) = 0. g : D C Rn y=h(x) h:DCR"R R" is said to have a relative degree r in a region Do C D if L9L fh(x) = 0 Vi. 0 < i < r . INPUT-OUTPUT LINEARIZATION 277 ydef y(2) dt[axf(x)l = Lfh(x)+L9Lfh(x)u. for some integer r < n with L9Lf -llh(x) $ 0.5. we have that y = 5[f (x) + g(x)u]. we continue to differentiate until. We now define this concept more precisely.

y⁽²⁾ = (∂T2/∂x) f(x) = T3

and, iterating this process,

y⁽ⁿ⁾ = (∂Tn/∂x) f(x) + (∂Tn/∂x) g(x) u.

Therefore

(∂T1/∂x) g(x) = 0,  (∂T2/∂x) g(x) = 0,  ...,  (∂T_{n-1}/∂x) g(x) = 0,  (∂Tn/∂x) g(x) ≠ 0

which are the input-state linearization conditions (10.17)-(10.18). Thus, if the relative degree r equals the number of states n, then input-output linearization leads to input-state linearization.

(b) When applied to single-input-single-output systems, the definition of relative degree coincides with the usual definition of relative degree for linear time-invariant systems, as we shall see.

Example 10.15 Consider again the system of Example 10.14:

x1' = x2
x2' = -x1 - a x1 x2 + (x2 + 1)u
y = x1.

We saw that

y'' = -x1 - a x1 x2 + (x2 + 1)u

hence the system has relative degree 2 in D0 = {x ∈ R² : x2 ≠ -1}.  □

Example 10.16 Consider the linear time-invariant system defined by

x' = Ax + Bu
y = Cx

where

A = [ 0 1 0 ... 0 ; 0 0 1 ... 0 ; ... ; 0 0 0 ... 1 ; -q0 -q1 -q2 ... -q_{n-1} ]_{n×n},
B = [ 0 ; 0 ; ... ; 0 ; 1 ]_{n×1},
C = [ p0 p1 ... pm 0 ... 0 ]_{1×n}.

The transfer function associated with this state space realization can be easily seen to be

H(s) = C(sI - A)⁻¹B = (pm s^m + p_{m-1}s^{m-1} + ... + p0) / (sⁿ + q_{n-1}s^{n-1} + ... + q0),  m < n.

The relative degree of this system is the excess of poles over zeros, that is, r = n - m > 0. We now calculate the relative degree using Definition 10.10. We have

y' = Cx' = CAx + CBu,   CB = [ p0 p1 ... pm 0 ... 0 ][ 0 ; ... ; 0 ; 1 ] = { pm if m = n - 1 ; 0 otherwise }.

Thus, if CB = pm, then we conclude that r = n - m = 1. Assuming that this is not the case, we have that y' = CAx, and we continue to differentiate y:

y⁽²⁾ = CAx' = CA²x + CABu,   CAB = { pm if m = n - 2 ; 0 otherwise }.

If CAB = pm, we conclude that r = n - m = 2. Assuming that this is not the case, we continue to differentiate. With every differentiation, the "1" entry in the column matrix AⁱB moves up one row, and, given the form of the C matrix, we have that

CA^{i-1}B = { 0 for i < n - m ; pm for i = n - m }.

It then follows that

y⁽ⁱ⁾ = CAⁱx + CA^{i-1}Bu,   with CA^{i-1}B ≠ 0 for i = n - m

and we conclude that r = n - m; that is, the relative degree of the system is the excess number of poles over zeros.  □

Example 10.17 Consider the linear time-invariant system defined by

x' = Ax + Bu
y = Cx

where

A = [ 0 1 0 0 0 ; 0 0 1 0 0 ; 0 0 0 1 0 ; 0 0 0 0 1 ; -2 -3 -5 -1 -4 ]_{5×5},
B = [ 0 ; 0 ; 0 ; 0 ; 1 ]_{5×1},
C = [ 7 2 6 0 0 ]_{1×5}.

It is easy to verify that the transfer function associated with this state space realization is

H(s) = C(sI - A)⁻¹B = (6s² + 2s + 7) / (s⁵ + 4s⁴ + s³ + 5s² + 3s + 2)

which shows that the relative degree equals the excess number of poles over zeros. To compute the relative degree, we compute successive derivatives of the output equation:

y' = CAx + CBu,   CB = 0
y⁽²⁾ = CA²x + CABu,   CAB = 0
y⁽³⁾ = CA³x + CA²Bu,   CA²B = 6.

Thus, we have that r = 3.  □

10.6 The Zero Dynamics

On the basis of our exposition so far, it would appear that input-output linearization is rather simple to obtain, at least for single-input-single-output (SISO) systems, and that all SISO systems can be linearized in this form. In this section we discuss in more detail the internal dynamics of systems controlled via input-output linearization. To understand the main idea, consider first the SISO linear time-invariant system

x' = Ax + Bu
y = Cx.   (10.31)

To simplify matters, we can assume without loss of generality that the system is of third order and has relative degree r = 2. In companion form, equation (10.31) can be expressed as follows:

[ x1' ; x2' ; x3' ] = [ 0 1 0 ; 0 0 1 ; -q0 -q1 -q2 ][ x1 ; x2 ; x3 ] + [ 0 ; 0 ; 1 ] u
y = [ p0 p1 0 ][ x1 ; x2 ; x3 ].

The transfer function associated with this system is

H(s) = (p0 + p1 s) / (q0 + q1 s + q2 s² + s³).

Suppose that our problem is to design u so that y tracks a desired output yd. Ignoring the fact that the system is linear time-invariant, we proceed with our design using the input-output linearization technique. We have

y = p0 x1 + p1 x2
y' = p0 x1' + p1 x2' = p0 x2 + p1 x3
y'' = p0 x2' + p1 x3' = p0 x3 + p1(-q0 x1 - q1 x2 - q2 x3) + p1 u.

Thus, the control law

u = [q0 x1 + q1 x2 + q2 x3] + (1/p1)[-p0 x3 + v]

produces the simple double integrator

y'' = v.

Since we are interested in a tracking control problem, we can define the tracking error e = y - yd and choose

v = -k1 e - k2 e' + yd''.

With this input v we have that

u = [q0 x1 + q1 x2 + q2 x3] + (1/p1)[-p0 x3 - k1 e - k2 e' + yd'']

which renders the exponentially stable tracking error closed-loop system

e'' + k2 e' + k1 e = 0.   (10.32)

A glance at this result shows that the order of the closed-loop tracking error (10.32) is the same as the relative degree of the system, r = 2, whereas the original state space realization has order n = 3. Thus, part of the dynamics of the original system is rendered unobservable by the input-output linearization. The unobservable part of the dynamics is called the internal dynamics, and it plays a very important role in the context of the input-output linearization technique. To see why, we reason as follows. During the design stage, the state equation (10.31) was manipulated using the input u, thus producing the (external) two-dimensional closed-loop differential equation (10.32). A look at this control law shows that u consists of a state feedback law, and thus the design stage can be seen as a reallocation of the eigenvalues of the A matrix via state feedback. Using elementary concepts of linear systems, we know that this is possible if during the design process one of the eigenvalues of the closed-loop state space realization coincides with one of the transmission zeros of the system, thus leading to the loss of observability. At the end of this process, observability of the three-dimensional state space realization (10.31) was lost.

Indeed, to complete the three-dimensional state, we can consider the output equation

y = p0 x1 + p1 x2 = p0 x1 + p1 x1'

that is,

x1' = -(p0/p1) x1 + (1/p1) y = A_id x1 + B_id y.   (10.33)

Equation (10.33) can be used to "complete" the three-dimensional state: using x1 along with e and e' as the "new" state variables, we obtain full information about the original state variables x1-x3 through a one-to-one transformation. Notice that the output y in equation (10.33) is given by y = e + yd, and so y is bounded since e was stabilized by the input-output linearization law and yd is the desired trajectory. Therefore, provided that the first-order system (10.33) has an exponentially stable equilibrium point at the origin, x1 will be bounded. Using elementary concepts of linear systems, the internal dynamics is thus exponentially stable if the eigenvalue of the matrix A_id in the state space realization (10.33) is in the left half plane. Stability of the internal dynamics is, of course, as important as the stability of the external tracking error (10.32), and therefore the effectiveness of the input-output linearization technique depends upon the stability of the internal dynamics. This is indeed the case in our example. Notice also
This is indeed the case in our example. FEEDBACK LINEARIZATION which renders the exponentially stable tracking error closed-loop system e + k2e + kle = 0. and so y is bounded since e was stabilized by the input-output linearization law. xl will be bounded. we can consider the output equation y = Pox1 +P1x2 = Poxl +pi .32) is the same as the relative order of the system. To see why. As we well know. (10. A look at this control law shows that u consists of a state feedback law.33) is given by y = e + yd.33) are in the left half plane. using xl along with e and a as the "new" state variables. At the end of this process observability of the three-dimensional state space realization (10.

THE ZERO DYNAMICS 283 that the associated transfer function of this internal dynamics is Hid = Po + pis Comparing Hid and the transfer function H of the original third-order system. for linear time-invariant systems the stability of the internal dynamics is determined by the location of the system zeros. Now define the following nonlinear transformation µi (x) {fin-r(x) z = T(x) = 01 def 71EII2" r.10. ERr (10.34) y = h(x) and assume that (10. Our discussion above was based on a three dimensional example. provided that the zeros of the transfer function Hid are in the left-half plane. This also implies that the internal dynamics of the original system is exponentially stable.6. specifically. = h(x) 02 = a 11 f (x) V)i+i = V)i f (x) i = 1 . however. which pertains to linear time-invariant system only.35) Lr where V). our conclusions can be shown to be general. Consider an nth-order system of the form x = f(x) + g(x)u (10. we conclude that the internal dynamics contains a pole whose location in the s plane coincides with that of the zero of H. we study a nonlinear version of this problem. The extension to the nonlinear case is nontrivial since poles and zeros go hand in hand with the concept of transfer function.34) has relative degree r. thus leading to the loss of observability. Rather than pursuing the proof of this result. We proceed as follows. System that have all of their transfer function zeros are called minimum phase.

77 is unobservable from the output. When the input u` is applied. which can be linearized by the input u' _ O(x) + w-IV (10.34) into the following so-called normal form: fl = fo(rl.O(x)] (10. with the following properties: W = a rr g(x).37) (10.39) take the form fl = fo(rl.0 = Ac + Bcv y = Ccy and thus. (10.40) 77 represents the internal dynamics.37)(10. From here we conclude that the stability of the internal dynamics is determined by the autonomous equation 7 = fo(rl.284 CHAPTER 10. which we now formally introduce. 0) - This dynamical equation is referred to as the zero dynamics.0 = all 1x=T-1(z) Ox and (00r/ax) g(x) The normal form is conceptually important in that it splits the original state x into two parts. dxEDo. . namely. equations (10. ) = A. 77 and .39) y = Cc where f0(77.36) It will be shown later that this change of coordinates transfers the original system (10. for 1<i<n-r. FEEDBACK LINEARIZATION and pi .38) (10. + Bcw[u .{Ln-r are chosen so that T(x) is a diffeomorphism on a domain Do C D and Xg(x)=0. (rlax) f (x) f represents the external dynamics.

37)(10. then the nonlinear system can be fully linearized and.10. ignores robustness issues that always arise as a result of imperfect modeling.18 Consider the system xl = -kxl . Input-output linearization is achieved via partial cancelation of nonlinear terms.0) is called the zero dynamics. whether input-output linearization can be applied successfully depends on the stability of the zero dynamics.0 in equations (10.37)-(10. This means that the zero dynamics can be determined without transforming the system into normal form.6. represented in normal form).-0 u(t) _ O(x) f7 = fo(mI. Summary: The stability properties of the zero dynamics plays a very important role whenever input-output linearization is applied. input-output linearization can be successfully applied.11 Given a dynamical system of the form (10. Before discussing some examples.. r < n: If the relative degree of the nonlinear system is lower than the order of the system. The remaining n . systems whose zero dynamics is stable are said to be minimum phase. we notice that setting y .2x2u 4 22 = -x2 + xiu y = X2 .39) we have that y-0 ty t.39) (i. of course. Example 10. This analysis. then input-output linearization does not produce a control law of any practical use. 0) Thus the zero dynamics can be defined as the internal dynamics of the system when the output is kept identically zero by a suitable input function. then only the external dynamics of order r is linearized. THE ZERO DYNAMICS 285 Definition 10. The stability properties of the internal dynamics is determined by the zero dynamics. If the zero dynamics is not asymptotically stable. Two cases should be distinguished: r = n: If the relative degree of the nonlinear system is the same as the order of the system.r states are unobservable from the output. the autonomous equation 1 = fo(y. Thus.e. Because of the analogy with the linear system case.

the zero dynamics is asymptotically stable if a = 0.u=0. and unstable if a > 0. we obtain y= Therefore LEI =x2+xi 2x111 + i2 = 2x1 (x2 + xl) + x2 + u r=2 To find the zero dynamics. which is exponentially stable (globally) if k > 0. we proceed as follows: y=0 Then. Example 10.i1=0=x2+2i =>' X2=0 u=-x2. Therefore. FEEDBACK LINEARIZATION First we find the relative order. To determine the zero dynamics. Differentiating y. r = 1.286 CHAPTER 10. we proceed as follows: #-. Moreover. and unstable if k < 0. we obtain y=i2=-x2+x1u.19 Consider the system I xl = x2 + xi i2=x2+u x3 = x1 +X 2 + ax3y = X1 Differentiating y. . the zero dynamics is given by x2=0 t-. i2=x2+u=0 Therefore the zero dynamics is given by x3 = ax3.

41) into the normal form (10.2. g : D C R" -> R" y=h(x). . Proof: See the Appendix.. Then.r. the input-output linearization procedure depends on the existence of a transformation T(. for every xo E Do. f.7. 10..8 Exercises 10.) that converts the original system of the form fx = f (x) + g(x)u. The following theorem states that such a transformation exists for any SISO system of relative degree r < n.10.1) Prove Lemma 10. Theorem 10.6..37)-(10.41) and assume that it has relative degree r < n Vx E Do C D. _r such that (i) Lgpi(x) = 0. dx E S2 f T(x) = ii( ) 1 An-r(x) 01 Lr J 01 = h(x) 02 = Lfh(x) Or = Lf lh(x) is a diffeomorphism on Q.39).3 Consider the system (10.7 Conditions for Input-Output Linearization According to our discussion in Section 10. CONDITIONS FOR INPUT-OUTPUT LINEARIZATION 287 10. h:DCRn ->R (10. there exist a neighborhood S2 of xo and smooth function It. (ii) for 1 < i < n . /t. .

6) Consider the following system 21=x1+x3 { x2 = xix2 X3 = x1 sin x2 + U (a) Determine whether the system is input-state linearizable. FEEDBACK LINEARIZATION (10. (10. Verify that this system is input-state linearizable. find the linearizing law. (b) If the answer in part (a) is affirmative.7) ([36]) Consider the following system: xl = X3 + 3. find the linearizing law. (b) Design a state feedback control law to stabilize the ball at a desired position.2x3 { x2 = x1 + (1 + x2)46 23=x2(1+x1)-x3+46 (a) Determine whether the system is input-state linearizable. (10.2) Consider the magnetic suspension system of Example 10.11.11.288 CHAPTER 10. (10. To complete the design. (b) If the answer in part (a) is affirmative. (10. (b) If the answer in part (a) is affirmative.4) Determine whether the system J ±1 = X 2 X 3 ll X2 = U is input-state linearizable.5) Consider the following system: 21 = x1 + x2 { ±2=x3+46 X3=x1+x2+x3 (a) Determine whether the system is input-state linearizable. find the linearizing law. .3) Consider again the magnetic suspension system of Example 10. (10. proceed as follows: (a) Compute the function w and 0 and express the system in the form i = Az + By.

Su. [34].t1 =21+22 ±2=2g+U X3 = X2 . see the outstanding books of Isidori [36]. [39]. design a control law to track a desired signal y = yref- Notes and References The exact input-state linearization problem was solved by Brockett [12] for single-input systems. used to introduce the notion of zero dynamics follows Slotine and Li [68].1 follows closely References [36] and [11]. (b) Determine whether it is minimum phase. Our presentation follows Isidory [36] with help from References [68]. and Marino and Tomei. The multi-input case was developed by Jakubczyk and Respondek. Section 10. EXERCISES (10.8) Consider the following system: 289 . [77] and Hunt et al.8. Nijmeijer and van der Schaft [57]. . The notion of zero dynamics was introduced by Byrnes and Isidori [14].(X23 y = 21 (a) Find the relative order. (c) Using feedback linearization. For a complete coverage of the material in this chapter.10. The linear time-invariant example of Section 6. [88] and [41]. [52].

.

This reconstruction is possible provided certain "observability conditions" are satisfied. Unfortunately. 11.1 Observers for Linear Time-Invariant Systems We start with a brief summary of essential results for linear time-invariant systems. Similarly. For example. we assume that the system is single-input-single-output. the nonlinear case is much more challenging. for simplicity. Throughout this section we consider a state space realization of the form x=Ax+Bv y = Cx AEII2nXn BEIfPnxl C E R1xn where. knowledge of the state is sometimes important in problems of fault detection as well as system monitoring. an observer can be used to obtain an estimate 2 of the true state x. 291 . In this case.Chapter 11 Nonlinear Observers So far. state feedback was used in Chapter 5. and some form of state reconstruction from the available measured output is required. and no universal solution exists. Aside from state feedback. the results of Chapter 10 on feedback linearization assume that the state x is available for manipulation. whenever discussing state space realizations it was implicitly assumed that the state vector x was available. The purpose of this chapter is to provide an introduction to the subject and to collect some very basic results. assuming that the vector x was measured. While for linear time-invariant systems observer design is a well-understood problem and enjoys a well-established solution. along with the backstepping procedure. the state x is seldom available since rarely can one have a sensor on every state variable. and that D = 0 in the output equation.

if this is the case.1 The state space realization (11.1) is said to be observable if for any initial state xo and fixed time tl > 0.y2 = Lx1 . (11.b1 "observable" if and only if Lxl = Lx2 x1 = x2. t1] suffices to uniquely determine the initial state xo.292 CHAPTER 11. the state x(ti) can be reconstructed using the well-known solution of the state equation: x( tl) Also = elxo + Lt1 eA(t-T)Bu(r) dr.Lx2 = CeAtl (xl . 0 yl . NONLINEAR OBSERVERS 11. By definition.5) y(t2) = (Lx2)(t) = CeAtlx2 +CJ and rt.x2). the mapping L is one-to-one if y1 = Y2 For this to be the case. then the inversion map xo = L-ly uniquely determines x0. we can write y(t) = (Lxo)(t). . equation (11.4) (11. Once xo is determined. for fixed u = u`. We have. that is.3) defines a linear transformation L : 1R" -+ ]R that maps xo to y.2) y(ti) = Cx(tl) = CeAtlxo + C r 0 t.1 Observability Definition 11. Indeed.3) Notice that. y(ti) = (Lxl)(t) = CeAt'xl +C J 0 tl eA(t-T)Bu*(r) d7 eA(t-T)Bu*(r) drr (11. we must have: CeAtx = 0 x=0 .1. Now consider two initial conditions xl and x2.1) is observable if and only if the mapping Lx0 is one-to-one. the knowledge of the input u and output y over [0. Thus. Accordingly. (11. eA(t-T)Bu(rr) dr. we argue that the state space realization (11.

only the first n .0 in (11. From the discussion above. L is one-to-one [and so (11.11. the observability properties of the state space realization (11. We now show that this is the case if and only if C CA rank(O)'Irank CAn-1 = n (11. 293 Thus. according to this discussion.6) [or (11.1) is observable] if and only if the null space of CeAt is empty (see Definition 2.7) To see this.0. (11. we have that the state space realization (11. without loss of generality.1)] is observable if and only if y(t) = CeAtxo .0 = x .1. The matrix 0 is called the observability matrix. = CAn-1xo or. setting u = 0.1 powers of AZ are linearly independent. OBSERVERS FOR LINEAR TIME-INVARIANT SYSTEMS or. Therefore we can assume..1).. Assuming for simplicity that this is the case. note that y(t) = CeAtxo 2 C(I+tA+ By the Sylvester theorem.6) Observability conditions can now be easily derived as follows. that u . C y=0 and the condition Oxo = 0 CA CAn-1 xp = 0. x= Ax (11. we can redefine observability of linear time-invariant state space realizations as follows.1) are independent of the input u and/or the matrix B.1) reduces to l y=Cx. b Ox0 = 0 xo = 0 is satisfied if and only if rank(O) = n. Notice that. equivalently N(CeAt) = 0. Given these conditions. .9). Thus y = 0 b Cxo = CAxo = CA2xo = .

1)... +po Assuming that (A.1.1. there exists a nonsingular matrix T E IItfXf such that defining new coordinates t = Tx.2 Observer Form The following result is well known.1) for (11.i (11.8) (11. Defining the observer error de f x=x .294 CHAPTER 11..2 The state space realization (11.6)] is said to be observable if C CA CAn-1 rank =n 11.3 11t Observers for Linear Time-Invariant Systems Now consider the following observer structure: x=Ax+Bu+L(y-Cx) where L E Rn-1 is the so-called observer gain.1) takes the so-called observer form: r 0 1 0 0 -qo -q1 -Q2 PO Pi 0 1 0 0 0 + J U 0 0 1 -Qn-1J L 2n y=[0 . the state space realization (11.. Consider the linear time-invariant state space realization (11. C) form an observable pair. and let the transfer function associated with this state space realization be H(s) = C(sI - A)-1B = pn-lsn-1 +pn-2sn-2 yn + qn-lsn-1 + .9) we have that x = x-i=(Ax+Bu)-(Ai+Bu+Ly-LCi) = (A . NONLINEAR OBSERVERS Definition 11. 11.LC)(x . + qo + ..i) ..

the case.11) (11.LC are in the left half of the complex plane.10) Thus. then the eigenvalues of (A . provided the so-called observer error dynamics (11. provided that the eigenvalues of the matrix A .LC).11). Thus. we conclude that (i) The eigenvalues of the observer are not affected by the state feedback and vice versa. Also (A . a state feedback law is used in (11.11.i.LC)i Thus A 0BK ABLC] [x] x=Ax where we have defined The eigenvalues of the matrix A are the union of those of (A + BK) and (A . However.LC) can be placed anywhere in the complex plane by suitable selection of the observer gain L.1. of course.12) i. .BK± (A + BK)x .e.10) is asymptotically (exponentially) stable.12) is the observer equation. 11. It is a well-known result that. This is. (ii) The design of the state feedback and the observer can be carried out independently.LC). OBSERVERS FOR LINEAR TIME-INVARIANT SYSTEMS or 295 x = (A . x -+ 0 as t --> oo.1) is controlled via the following control law: u = Kx i = Ax+Bu+L(y-Cx) (11.7) is satisfied.4 Separation Principle Assume that the system (11. Equation i = Ax+BKx Ax+BKx . an estimate x of the true state x was used in (11. (11. This is called the separation principle. We have (11. if the observability condition (11.1.BK±. short of having the true state x available for feedback.11).

are sufficiently smooth and that h(O) = 0. y(xu(t.. we consider an unforced nonlinear system of the form 'Yn1 I x=f(x) y = h(x) h:ii.2 Nonlinear Observability x=f(x)+g(x)u y = h(x) Now consider the system f :R"->lR'. We also assume that f ().1R (11. This means that 0. xo)) Definition 11. xo)) Definition 11.. f :1 -* [F (11.14) and look for observability conditions in a neighborhood of the origin x = 0.4 that distinguishability must hold for all functions. In the following theorem. checking observability is much more involved than in the linear case. t] xo = x2 There is no requirement in Definition 11.4 The state space realization ni is said to be (locally) observable at xo E R" if there exists a neighborhood U0 of xo such that every state x # xo E fZ is distinguishable from xo.13) For simplicity we restrict attention to single-output systems. There are several subtleties in the observability of nonlinear systems.xo): represents the solution of (11. in general. Clearly y(xu(t. xo) is said to be distinguishable if there exists an input function u such that y(xu(t. It is said to be locally observable if it is locally observable at each xo E R". .13) at time t originated by the input u and the initial state xo. is locally observable in a neighborhood Uo C Rn if there exists an input u E li such that y(xu(t. xo).g:]R'->1R h : PJ . Throughout this section we will need the following notation: xu(t.296 CHAPTER 11. xo)) = h(xu(t. xo)) = y(xu(t. NONLINEAR OBSERVERS 11. xo): represents the output y when the state x is xu(t. xo)) y(xu(t. and. xo)) Vt E [0.3 A pair of states (xo.

.15) is equivalent to the observability condition (11.13) is observable. equivalently.14) is locally observable in a neighborhood Uo C D containing the origin. We saw earlier that for linear time-invariant systems. .2. This property is a consequence of the fact that the mapping xo --1 y is linear. Of course. if rank(O) = n. if Vh rank = n dx E Uo VLf 'h Proof: The proof is omitted. in general.2 Consider the following state space realization: -k1 = x2(1 .u) 0. definition 11. CA2. Example 11. for nonlinear systems local observability does not.1 Let = Ax y = Cx.7).1 x2 = x1 y = xl . . The following example shows that. CAn-1 } is linearly inde- Roughly speaking.13) is locally observable around the origin. observability is independent of the input function and the B matrix in the state space realization. The following example clarifies this point. imply global observability.4 and Theorem 11.) VLf-1h = CAn-1 = V(CAx) = CA and therefore '51 is observable if and only if S = {C. Then h(x) = Cx and f (x) = Ax. as discussed in Section 11.2.1 The state space realization (11. pendent or.1 state that if the linearization of the state equation (11. See Reference [52] or [36]. CA. NONLINEAR OBSERVABILITY 297 Theorem 11.11. condition (11. for linear time-invariant systems. then (11. Nonlinear systems often exhibit singular inputs that can render the state space realization unobservable. and we have Vh(x) = C VLfh = V(. Example 11.

it is tempting to approach nonlinear state reconstruction using the following three-step procedure: (i) Find an invertible coordinate transformation that linearizes the state space realization.3 Observers with Linear Error Dynamics Motivated by the work on feedback linearization. (iii) Recover the original state using the inverse coordinate transformation defined in (i). In the next two sections we discuss two rather different approaches to nonlinear observer design. A complete coverage of the subject is outside the scope of this textbook. NONLINEAR OBSERVERS which is of the form. Now consider the same system but assume that u = 1. we obtain the following dynamical equations: { A glimpse at the new linear time-invariant state space realization shows that observability has been lost.VLfh}) = rank({[1 0]. 11. [0 1]}) = 2 and thus J i =f(x) y = h(x) is observable according to Definition 11. we have rank({Oh.4. (ii) Design an observer for the resulting linear system. 11. . each applicable to a particular class of systems. x = f (x) + g(x)u y = h(x) with X1 0 If u = 0.2.1 Nonlinear Observers There are several ways to approach the nonlinear state reconstruction problem. Substituting this input function.298 CHAPTER 11. depending on the characteristics of the plant.

More explicitly, suppose that, given a system of the form

    ẋ = f(x) + g(x, u),    x ∈ ℝ^n, u ∈ ℝ    (11.16)
    y = h(x),    y ∈ ℝ

there exists a diffeomorphism

    z = T(x),    T(0) = 0    (11.17)

mapping (11.16) into (11.18) and such that, after the coordinate transformation, the new state space realization has the form

    ż = A0 z + γ(y, u),    z ∈ ℝ^n    (11.18)
    y = C0 z

where

    A0 = [0 0 ... 0 0; 1 0 ... 0 0; 0 1 ... 0 0; ... ; 0 0 ... 1 0],
    C0 = [0 0 ... 0 1],
    γ(y, u) = [γ1(y, u); ... ; γn(y, u)].    (11.19)

Then an observer can be constructed according to the following theorem.

Theorem 11.2 ([52]) If there exists a coordinate transformation mapping (11.16) into (11.18), then, defining

    dẑ/dt = A0 ẑ + γ(y, u) − K(y − ẑn)    (11.20)
    x̂ = T^(−1)(ẑ)    (11.21)

such that the eigenvalues of (A0 + K C0) are in the left half of the complex plane, we have that x̂ → x as t → ∞.

Proof: Let z̃ = z − ẑ and x̃ = x − x̂. Using (11.18) and (11.20), we obtain

    dz̃/dt = ż − dẑ/dt = [A0 z + γ(y, u)] − [A0 ẑ + γ(y, u) − K(y − ẑn)] = (A0 + K C0) z̃.

If the eigenvalues of (A0 + K C0) have negative real part, then we have that z̃ → 0 as t → ∞, and then

    x̃ = x − x̂ = T^(−1)(z) − T^(−1)(z − z̃) → 0    as t → ∞.    □
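Choosing K in (11.20) so that A0 + KC0 is Hurwitz is an eigenvalue-placement problem for the dual pair (A0ᵀ, C0ᵀ). A hedged sketch, assuming SciPy's `place_poles` is available; n = 3 and the target eigenvalues below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.signal import place_poles

n = 3
A0 = np.diag(np.ones(n - 1), k=-1)      # ones on the first subdiagonal, as in (11.19)
C0 = np.zeros((1, n)); C0[0, -1] = 1.0  # y = z_n

# Place the eigenvalues of A0 + K C0 via the dual (transposed) problem:
# place_poles computes F with eig(A0^T - C0^T F) = desired, and
# (A0^T - C0^T F)^T = A0 - F^T C0, so K = -F^T gives A0 + K C0.
desired = np.array([-1.0, -2.0, -3.0])  # illustrative stable eigenvalues
F = place_poles(A0.T, C0.T, desired).gain_matrix
K = -F.T

eigs = np.linalg.eigvals(A0 + K @ C0)
print(np.sort(eigs.real))               # approximately [-3, -2, -1]
```

The pair (A0ᵀ, C0ᵀ) is controllable (it is a shift register driven from the last state), so the placement always succeeds for this canonical structure.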

Example 11.3 Consider the following dynamical system

    ẋ1 = x2 + 2x1²
    ẋ2 = x1 x2 + x1³ u    (11.22)
    y = x1

and define the coordinate transformation

    z1 = x2 − (1/2)x1²
    z2 = x1.

In the new coordinates, the system (11.22) takes the form

    ż1 = −2y³ + y³ u
    ż2 = z1 + (5/2)y²
    y = z2

which is of the form (11.18) with

    A0 = [0 0; 1 0],    C0 = [0 1],    γ(y, u) = [−2y³ + y³u; (5/2)y²].

The observer is

    dẑ/dt = A0 ẑ + γ(y, u) − K(y − ẑ2)

and, choosing K = [−K1; −K2], the error dynamics is

    dz̃/dt = (A0 + K C0) z̃ = [0 −K1; 1 −K2] z̃.

Thus, z̃ → 0 for any K1, K2 > 0. □

It should come as no surprise that, as in the case of feedback linearization, this approach to observer design is based on cancelation of nonlinearities and therefore assumes "perfect modeling." In practice, perfect modeling is never achieved, because system parameters cannot be identified with arbitrary precision. Thus, in general, the "expected" cancellations will not take place and the error dynamics will not be linear. The result is that this observer scheme is not robust with respect to parameter uncertainties and that convergence of the observer is not guaranteed in the presence of model uncertainties.
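For the error dynamics of Example 11.3 (as reconstructed, dz̃/dt = [0 −K1; 1 −K2]z̃, whose characteristic polynomial is s² + K2 s + K1), stability for any K1, K2 > 0 and the decay of the estimation error can be verified numerically. A small sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

# Error dynamics matrix of Example 11.3 (as reconstructed):
# stable for any K1, K2 > 0, since det(sI - M) = s^2 + K2 s + K1.
K1, K2 = 2.0, 3.0
M = np.array([[0.0, -K1],
              [1.0, -K2]])

eigs = np.linalg.eigvals(M)
print(eigs.real)                        # all negative

# The linear error therefore decays: ||ztilde(t)|| -> 0.
z0 = np.array([1.0, -1.0])
norms = [np.linalg.norm(expm(M * t) @ z0) for t in (0.0, 1.0, 5.0)]
print(norms)
```

With K1 = 2, K2 = 3 the eigenvalues are −1 and −2, so the error norm at t = 5 is already negligible compared with its initial value.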

11.4 Lipschitz Systems

The nonlinear observer discussed in the previous section is inspired by the work on feedback linearization, and belongs to the category of what can be called the differential geometric approach. In this section we show that observer design can also be studied using a Lyapunov approach. For simplicity, we restrict attention to the case of Lipschitz systems, defined below. Consider a system of the form

    ẋ = Ax + f(x, u)    (11.23)
    y = Cx

where A ∈ ℝ^(n×n), C ∈ ℝ^(1×n), and f : ℝⁿ × ℝ → ℝⁿ is Lipschitz in x on an open set D ⊂ ℝⁿ; that is, f satisfies the following condition:

    ||f(x1, u*) − f(x2, u*)|| ≤ γ ||x1 − x2||    ∀x ∈ D.    (11.24)

Now consider the following observer structure:

    dx̂/dt = Ax̂ + f(x̂, u) + L(y − Cx̂)    (11.25)

where L ∈ ℝ^(n×1). The following theorem shows that, under these assumptions, the estimation error converges to zero as t → ∞.

Theorem 11.3 Given the system (11.23) and the corresponding observer (11.25), if the Lyapunov equation

    P(A − LC) + (A − LC)ᵀP = −Q    (11.26)

where P = Pᵀ > 0 and Q = Qᵀ > 0, is satisfied with

    γ < λmin(Q) / (2 λmax(P))    (11.27)

then the observer error x̃ = x − x̂ is asymptotically stable.

Proof: We have

    dx̃/dt = [Ax + f(x, u)] − [Ax̂ + f(x̂, u) + L(y − Cx̂)] = (A − LC)x̃ + f(x, u) − f(x̂, u).

To see that x̃ = 0 is an asymptotically stable equilibrium point, consider the Lyapunov function candidate

    V = x̃ᵀPx̃

so that

    V̇ = (dx̃/dt)ᵀPx̃ + x̃ᵀP(dx̃/dt) = −x̃ᵀQx̃ + 2x̃ᵀP[f(x̃ + x̂, u) − f(x̂, u)].

But

    |2x̃ᵀP[f(x̃ + x̂, u) − f(x̂, u)]| ≤ 2||x̃ᵀP|| ||f(x̃ + x̂, u) − f(x̂, u)|| ≤ 2γ λmax(P) ||x̃||²

and

    x̃ᵀQx̃ ≥ λmin(Q) ||x̃||².

Therefore, V̇ is negative definite provided that λmin(Q)||x̃||² > 2γ λmax(P)||x̃||² or, equivalently,

    γ < λmin(Q) / (2 λmax(P)).    □

Example 11.4 Consider the following system:

    [ẋ1; ẋ2] = [0 1; 1 −2][x1; x2] + [0; x2²]
    y = [1 0][x1; x2].

Setting

    L = [0; 2]

we have that

    A − LC = [0 1; −1 −2].

Solving the Lyapunov equation

    P(A − LC) + (A − LC)ᵀP = −Q

with Q = I, we obtain

    P = [1.5 0.5; 0.5 0.5]

which is positive definite. The eigenvalues of P are λmin(P) = 0.2929 and λmax(P) = 1.7071. We now consider the function f. Denoting x1 = [ξ1 ξ2]ᵀ and x2 = [μ1 μ2]ᵀ, we have that

    ||f(x1) − f(x2)||2 = |ξ2² − μ2²| = |(ξ2 + μ2)(ξ2 − μ2)| ≤ 2k|ξ2 − μ2| ≤ 2k ||x1 − x2||2

for all x satisfying |ξ2| ≤ k. Thus, f is Lipschitz with γ = 2k in the set

    S = {x = [ξ1 ξ2]ᵀ : |ξ2| ≤ k}

and condition (11.27) becomes

    γ = 2k < λmin(Q)/(2 λmax(P)),    or    k < 1/6.8284 ≈ 0.1464.

The parameter k determines the region of the state space where the observer is guaranteed to work. Of course, this region is a function of the matrix P, and so a function of the observer gain L. How to maximize this region is not trivial (see "Notes and References" at the end of this chapter). □

11.5 Nonlinear Separation Principle

In Section 11.1 we discussed the well-known separation principle for linear time-invariant (LTI) systems. This principle guarantees that output feedback can be approached in two steps:

(i) Design a state feedback law assuming that the state x is available.
(ii) Design an observer, and replace x with the estimate x̂ in the control law.
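The Lyapunov-equation computation of Example 11.4 (with the values as reconstructed above) can be reproduced numerically. A sketch assuming SciPy; note that `solve_continuous_lyapunov(a, q)` solves aX + Xaᵀ = q, so the transposed matrix is passed to obtain the form (11.26):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example 11.4 data (as reconstructed): A - LC and Q = I.
ALC = np.array([[0.0, 1.0],
                [-1.0, -2.0]])
Q = np.eye(2)

# Pass (A - LC)^T so that P(A - LC) + (A - LC)^T P = -Q.
P = solve_continuous_lyapunov(ALC.T, -Q)
print(P)                         # [[1.5, 0.5], [0.5, 0.5]]

lam = np.linalg.eigvalsh(P)      # ascending: ~0.2929, ~1.7071
gamma_max = 1.0 / (2 * lam[1])   # lambda_min(Q) / (2 lambda_max(P)), Q = I
k_max = gamma_max / 2            # gamma = 2k  =>  k < 1/6.8284 ~ 0.1464
print(gamma_max, k_max)
```

The computed bound k < 0.1464 matches the value 1/6.8284 obtained in the text.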

In general, nonlinear systems do not enjoy the same properties. Indeed, if the true state is replaced by the estimate given by an observer, then exponential stability of the observer does not, in general, guarantee closed-loop stability. To see this, consider the following example.

Example 11.5 ([47]) Consider the following system:

    ẋ = −x + x⁴ + x²ξ    (11.28)
    ξ̇ = −kξ + u,    k > 0.    (11.29)

We proceed to design a control law using backstepping. Using ξ as the input in (11.28), we choose the control law φ1(x) = −x². With this control law, we obtain

    ẋ = −x + x⁴ + x²φ1(x) = −x.

Now define the error state variable

    z = ξ − φ1(x) = ξ + x².    (11.30)

With the new variable, the system (11.28)–(11.29) becomes

    ẋ = −x + x²z    (11.31)
    ż = ξ̇ + 2xẋ = −kξ + u + 2x(−x + x²z).    (11.32)

Letting

    V = (1/2)(x² + z²)

we have

    V̇ = −x² + z[x³ − kξ + u + 2x(−x + x²z)]

and taking

    u = −cz − x³ + kξ − 2x(−x + x²z),    c > 0    (11.33)

we obtain

    V̇ = −x² − cz²    (11.34)

which implies that x = 0, z = 0 is a globally asymptotically stable equilibrium point of the system

    ẋ = −x + x²z
    ż = −cz − x³.

This control law, of course, assumes that both x and ξ are measured. Now assume that only x is measured, and suppose that an observer is used to estimate ξ. Let the observer be given by

    dξ̂/dt = −kξ̂ + u.

This is a reduced-order observer, that is, one that estimates only the nonavailable state ξ. The estimation error is ξ̃ = ξ − ξ̂. We have

    dξ̃/dt = ξ̇ − dξ̂/dt = −kξ + u + kξ̂ − u = −kξ̃.

It follows that ξ̃(t) = ξ̃0 e^(−kt), which implies that ξ̂ exponentially converges to ξ. Using the estimated control law

    u = −cz − x³ + kξ̂ − 2x(−x + x²z),    with z = ξ̂ + x²    (11.35)

we obtain

    ẋ = −x + x²z + x²ξ̃
    ż = −cz − x³ + 2x³ξ̃.

Even though ξ̂ exponentially converges toward ξ, the presence of the terms x²ξ̃ and 2x³ξ̃ leads to finite escape time for certain initial conditions. To see this, assume for simplicity that z = 0, so that

    ẋ = −x + x²ξ̃ = −x + x²ξ̃0 e^(−kt).

Solving for x, we obtain

    x(t) = x0(1 + k) / [(1 + k − ξ̃0 x0)e^t + ξ̃0 x0 e^(−kt)]

which implies that the state x grows to infinity in finite time for any initial condition satisfying ξ̃0 x0 > 1 + k.
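The finite-escape-time claim can be checked directly on the closed-form solution above: x blows up exactly when the denominator crosses zero, which happens in finite time whenever ξ̃0 x0 > 1 + k. A sketch assuming NumPy, with k = 1 and sample initial conditions:

```python
import numpy as np

# Denominator of the closed-form solution from Example 11.5 (as reconstructed):
#   x(t) = x0*(1+k) / ((1 + k - e0*x0)*exp(t) + e0*x0*exp(-k*t)),  e0 = xi_tilde(0).
def denom(t, x0, e0, k):
    return (1 + k - e0 * x0) * np.exp(t) + e0 * x0 * np.exp(-k * t)

k = 1.0
t = np.linspace(0.0, 5.0, 2001)

d_safe = denom(t, x0=1.0, e0=1.0, k=k)   # e0*x0 = 1 < 1 + k: denominator stays positive
d_blow = denom(t, x0=3.0, e0=1.0, k=k)   # e0*x0 = 3 > 1 + k: denominator changes sign

print(np.all(d_safe > 0))                # no escape
print(np.any(d_blow < 0))                # finite escape time
t_escape = t[np.argmax(d_blow < 0)]      # first grid point past the escape time
print(t_escape)
```

For the blow-up case the zero crossing is at t = ln(3)/2 ≈ 0.549, so the grid detects the escape just after that instant.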

Notes and References

Despite recent progress, nonlinear observer design is a topic that has no universal solution and one that remains a very active area of research. Observers with linear error dynamics were first studied by Krener and Isidori [45] and Bestle and Zeitz [10] in the single-input-single-output case. Extensions to multivariable systems were obtained by Krener and Respondek [46] and Keller [42]. Section 11.3 is based on Reference [45] as well as References [52] and [36]. As mentioned, one of the main problems with this approach to observer design is that the conditions required for the existence of these observers are very stringent and are satisfied by a rather narrow class of systems.

Observers for Lipschitz systems were first discussed by Thau [80]. Section 11.4 is based on this reference. The main problem with this approach is that choosing the observer gain L so as to satisfy condition (11.27) is not straightforward. See References [90] and [96] for further insight into how to design the observer gain L.

Output feedback is a topic that has received a great deal of attention in recent years. Besides robustness issues, the separation principle does not hold true for general nonlinear systems. Conditions under which the principle is valid were derived by several authors; see References [61], [4], and [5], as well as [8], [47], [60], [82], and [83]. There are several other approaches to nonlinear observer design, not covered in this chapter. The list of notable omissions includes adaptive observers [52]-[54], high-gain observers, and observer backstepping.


the set over which the minimum is computed keeps "shrinking" (Figure A.308 APPENDIX A. PROOFS Figure A. al is strictly increasing. it is not strictly increasing. (iv) It satisfies X(0) = 0. Let al(y) be a class K function such that al (y) < kX(y) with 0 < k < 1. in K because it is not strictly increasing. Thus ai(IIxil) < X(IIxlI) <_ V(x) for IIxII < r. in general.1) is positive. however. It is also nondecreasing since as s increases. We also have that X(IIxIj) <_ V(x) for 0 < IIxII < r is not. X is "almost" in the class 1C. The function al can be constructed as follows: al(s) = min [x)] al s s<y<r this function is strictly increasing. Thus. from (A. in the class IC since. (ii) It is continuous.2). To see this notice that = min [X(y)1 y J s<y<r (A. . in general.1). (iii) It is positive definite [since V(x) > 0]. It is not. Therefore.1: Asymptotically stable equilibrium point.

. we assume that the equilibrium point is the origin. Lemma 3. a-'(fl) }.2: Asymptotically stable equilibrium point. We must show that this condition implies that for each e1i361 = al(es) > 0: Ix(0)II < 81 = IIx(t)II < el Vt > to. for 11x(0)I1 < 61i we have Ilx(t)II C a(Ilx(O)II) < a(a-1(el)) = el. To simplify the proof of the following lemma.A.2) Proof: Suppose first that (A. choose 81 = min{8. Thus.1) is stable if and only if there exists a class 1C function a constant a such that IIx(0)II < 8 = IIx(t)II < a(IIx(0)II) Vt > 0.2: The equilibrium xe of the system (3.2) is satisfied. CHAPTER 3 309 Figure A.1.3) Given e1. xe = 0. This proves that there exists E X such that al(jlxlj) < V(x) The existence of E 1C such that for IxMI < r V(x) < a2(lIXIl) for IxII < r can be proved similarly. (A. (A.

Moreover. . IIx(t)II -* 0 as t -4 oo and x = 0 is also convergent. Theorem 3.3 The equilibrium xe = 0 of the system (3. The reader can consult Reference [41] or [88] for a complete proof.) E 1C . 0) Vt > 0.3(IIxoII. Thus.4 is satisfied.310 APPENDIX A. 8x2 19 L9 9X Rn e . and a(. (A. Lemma 3. for each r E Since the inverse of a function in the class K is also in the same class.3 is in class /CC which implies that . It then follows that x = 0 is asymptotically stable. and we have 11x(t)II < e = a(IIx(0)II) provided that IIx(0)II < P. assume that (A. The rest of the proof (i.t) -* 0 as t --> oo. and (iii) it is nondecreasing. We can find a class 1C function that satisfies fi(r) _< 4i(r).e. Thus IIx(0)II < b'. Given el.1) is asymptotically stable if and only if there exists a class 1CG function and a constant a such that IIxoII < 8 = Ix(t)II <_ Q(IIxoII.. IIx(0)II < a' = 11x(t)11 < el Vt > to. but it is not necessarily continuous.3) is satisfied. The mapping 4): el -* 5*(el) satisfies the following: (i) 4i(O) = 0. Then IIx(t)II < 3(Ilxo11. choose any x(0) such that a(IIx(0)II) = e. let s C 1R+ be the set defined as follows: A = 161 E 1R+ : IIx(0)II < bl = 11x(t)II < ei}.5 A function g(x) is the gradient of a scalar function V(x) if and only if the matrix jxlxlxl 19 L9 0 09 8xz 8x. (ii) P(e1) > 0 de > 0. PROOFS For the converse. Let 8' = sup(s).t) Proof: Suppose that A.4) whenever IIxoII < b and the origin is stable by Lemma 3. This set is bounded above and therefore has a supremum. Given e. Thus. the converse argument) requires a tedious constructions of a 1CL function Q and is omitted.2. we now define: 1R+.

5) We now compute the partial derivatives of this function: av ax-1 gi (xi) o..0) ds2 + . 0) ds2 + axl(xl. CHAPTER 3 is symmetric. Thus av Thus a2v 717 '2v a a2v acv x2 x2 x a2v xl T= I 2 o vv TX....x2.. .x2....0) f x2 (x1. f 0 0 X2 g2(xi.1. .9gn + 0 91(x1..... s2.0... 0) ds2 + + 10 xn 9n(xl.0). 311 Proof: To prove necessity. X2 xn x..sn) dsn(A. . by assumption. + 0) ds1 .7): xl V (x) = fox g(x) dx = f g1(si. 0. we assume that g(x) = VV(x).. 0) + x. 0. xl To rove sufficiency. .. 0... s2.A. J a2v a2v x L z2 x a2v a2v a2v and the result follows since TX-1 xI = x.sn) dsn X2 a91 + J dsn 5x2(x1.... +/ X 5ag. It then follows that . where we have used the fact that'. ..s2. '99i 0g axe au and consider the integration as defined in (3. assume that p Y. .

Since the infinite sequence x(tn. x0.x(tn. then its (positive) limit set N is (i) bounded.. .12.qjI < 2 Since qk is in N there is a sequence {tn} such that to -4 oo as n -3 oo.312 APPENDIX A. To show that this is the case. Lemma 3. x0. and Ix(tn. xo. Then x(t.sn 0 91(x) Proceeding in the same fashion.. x2. (ii) closed.qII < e 2 which implies that q E N and thus N contains its limit points and it is closed. Then there exists a sequence {tn} such that I Ip . and by properties of bounded sequences. 0) I0 xz + +91 x1. Suppose that this is not the case.. Now take an arbitrary infinite sequence {tn} that tends to infinity with n. 8q E N such that Ilgk . It remains to show that x(t. . consider the sequence of points {qi} in N.12 p E N. and (iii) nonempty. which approaches a limit point gofNasi -+oo. 0) + 91(x1.gkli < Thus. x0i to) of the system (3. to) . N. it must contain a convergent subsequence that converges to some point p E R'. s2. To prove that it is closed. Proof: Clearly N is bounded.1) is bounded for t > to. PROOFS 9V ax-1 91(x1. Moreover. since the solution is bounded. 0. . the solution approaches N as t -+ oo. t0) approaches N as t -> oo. x0i to) is bounded. it contains some subsequence that converges to a point q contradicts Definition 3. xo. Given e > 0. But this is not possible since it . it can be shown that OV a xe = g= (x) Vi. t0) . . and thus N is nonempty. IIx(tn. to) II > e as n -+ oo for any p in the closed set N. to) is a bounded sequence of points. By Definition 3. xo. we must show that it contains all its limit points."'.4: If the solution x(t.

positive definite continuous and bounded matrix Q(t). there exist a symmetric. p.6). xo. x(tn.2. xo. t)Q(T). xo. t)x(t)]dr x(T)T Q(T)x(T)dT. t)Q(T)4)(T. t)x dT oo r4)T(T. to] = x[t + tn. To prove necessity. to) of the autonomous system x = f(x) (A. x(tn. to) -> p as n -> oo. The equilibrium state x = 0 is exponentially stable if and only if for any given symmetric.5: The positive limit set N of a solution x(t. P. to).6: Consider the system i = A(t)x. then x(t. to] = x(t.D(T. let (A.8) P(t) = j D'(T.A. CHAPTER 4 313 Lemma 3. we have n-+oo lim x[t. xo. xo. x[t. to). to). O . t Thus.oo (A.6) is continuous with respect to initial conditions. xo. to) n . to) approaches N as t .6) is invariant with respect to (A.oo. Let p c N. (A. It follows that lim x[t + tn. Then there exists a sequence {tn} such that to -+ oc and x(tn. t) dr.7) shows that x(t. Also. Proof: We need to show that if xo E N. 00 xT 4T (T. to) E N for all t > to. A. Since the solution of (A.p.2 Chapter 4 Theorem 4. to).7) and since x(t. to] = x(t. positive definite continuously differentiable and bounded matrix P(t) such that -Q(t) = P(t)A(t) + AT(t)p(t) + P(t) Proof: Sufficiency is straightforward. to) is in NVt. it TO J t)x(t)]T `w (T) [4(T. x0. xo.

this implies that xTPx < A2IIx1I2 On the other hand. 404) Ix(t)II < Thus kle-k2(t-to) xTPx = f t kle-k2(T-t) Q(T)kle-k2(T-t) dT ft 00 xTPx = or k2Q(T)e-2k2(T-t) dr. That P is symmetric follows from the definition. t)x dT t x(T)TQ(T)x(T) d7 f' aIIAN)II IIx(T)II2 dT f' > Thus. or N II x(T)T A(T)x(T) II dT II f Nx(T)T drx(T) dTII NxTx. t)Q(T)d)(T. . for example. Thus Jt "0 1 k2Me-2k2(T-t) dr. since Q is positive definite. there exist k1. A bounded implies that I I A(t) I I < N. 3a > 0 such that xT Qx > axT x.8). a t) = -45(T. Thus 00 xT4T (T. PROOFS Since the equilibrium is globally asymptotically stable. Also. k2 such that (see. xTPx < 2 kM 2 Also. We know that (see Chen [15]. Chen [15] pp.314 APPENDIX A. there remains to show that P satisfies (A. 1 The boundedness of Q(t) implies that there exist M: 11Q(t)II < M. a xTPx > NxTx AlIIxII2 << xTPx A1IIxII2 < xTPx < A2IIxII2 which shows that P is positive definite and bounded. Vt E R. t)A(t). Chapter 4. Thus.

To prove the converse. then JjHxjjr < jh11A11x11p. which completes the proof. and (ii) all poles of F(s) lie in the left half of the complex plane. then f (-) contains derivatives of impulses and so does not belong to A.3. Then F(s) E A if and only if (i) P(s) is proper.A.t)Q(T)41(T. Expanding G(s) in partial fractions and antitransforming (A.1 Consider a function P(s) E R(s). Theorem 6. Dividing n(s) by d(s). notice that if (i) does not hold. if A = (v +.Q(t). we have n f (t) = G-1{F(s)} = kb(t) + E g2(t) 2=1 where n is the degree of d(s) and each g2 has one of the following forms [u(t) represents the unit step function : (i) ktii-1e)tu(t).(t) E A and moreover. and let n(s)andd(s) E R[s] respectively. It follows that f (t) E A.9) where k E 1R and d(s) is strictly proper. t)Q(T)4D (T. t) dTl J . Proof: Assume first that P(s) satisfies (i) and (ii). (ii) a < 0.Q(t) P = -P(t)A(t) .3. and let represent its impulse response.t) dr+JIf 315 4T(T. then.t) 00 dT . we can express F(s) as: F(s) = k + d s) = k + G(s) (A. be the numerator and denominator polynomials of F(s). CHAPTER 6 Thus. if (ii) does not hold. A.2 Consider a linear time-invariant system H. .9). for some i.3 Chapter 6 Theorem 6. g2 ¢ G1 and then f (t) V A and F(s) il A.t)Q(T)a(D(T. P= f Jit -l A(t)-AT(t) which implies that rJ V (DT(T.AT(t)p(t) ._Also.Q(t) 4pT (T. if H is Gp stable. and ke(°+. Then H is L stable if and only if hoo(t) + ha. clearly.+)tu(t). and so F(s) E A. if A < 0 is a real pole of multiplicity m.yw) is a complex pole.

)p/q (f t Iha(s -T)IIu(T)I' dT) ds (II(ha)TIIGI )p/q j and reversing the order of integration. (A.T)u(T) dT t 0 = 91+92 We analyze both terms separately. by Holder's inequality (6. assume that the output of H to an input u E Cp . we obtain t (ft ha(s -T)IIu(r)IP dT) ds (II(92)TIIG.)' -< (II(ha)TIIG1)p/q fIUMIP (f 0 T Iha(s 1 . = IhoIIu(t)IIc..11) Thus. it follows that t I92(t)I <- f Iha(t .T)I Iu(T)I dT = f t Iha(t -T)I1/p Iha(t . PROOFS E A and consider Proof: The necessity is obvious.)p = I92(3)Ip ds s JOT (II(ha)TIIG. Clearly IIg1TIIc. For the second term we have that (A. To prove sufficiency.T)I1/gIu(T)I dT.T)I ds) dr .10) ftlha(t-T)I 192(t)I=I I tha(t-T)u(T)dTI < 0 Iu(T)I dr and choosing p and q E 1R+ such that 1/p + 1/q = 1. We have h(t) * u(t) = hou(t) + J halt .7-)I11q C (ft <- 1 Iha(t-T)IdTJ(ft -T)u(T)dT) (II(ha)t!IGI )1/q (ft lha(t-T)IU(T)IdT) 1/p fT (II(g2)TIIG.T)I1/PIu(T)I and I halt .316 APPENDIX A.5) with f we have that I92(t)I I ha(t . = Ilhou(t)IIc.

if one of the following conditions is satisfied then the system is G2 stable: (a) If 0 < a < /3: The Nyquist plot of G(s) is bounded away from the critical circle C'.A. u E LP and E A. Proof: Consider the system S of equations (6.)p. 11 IIHullc. (A.)p 317 = (II(ha)TIIC. /3]. and since. 6. Under these assumptions.12) and (A.6 Consider the feedback interconnection of the subsystems H1 and H2 : £2e ---> Gee. from (A. CHAPTER 6 < (II(ha)TIIGI )p/q II(ha)TIIC1 (IIuTIIc. IIuTIIc.6.. and let H1 be a linear time-invariant system with transfer function G(s) of the form: G(s) = 9(s) + d(s) Where G(s) satisfies assumptions (i)-(iii) stated in Theorem 6. = (Ihol + IIhaIIc1) Theorem 6.24) (Fig. (A.10) we conclude that II(h * u)TIIc. we can take limits as T -a oo. = IIh * ullc.12) Thus. centered on the real line and passing through the points (-a-1 +30) and (-/3-1 +30). (b) If 0 = a < /3: G(s) has no poles in the open right half plane and the Nyquist plot of G(s) remains to the right of the vertical line with abscissa -0-1 for all w E IR.. It follows that II(92)TIIc. by assumption. + II(92)TIIc. < <- II(91)TII C. (Ihol + II(ha)TIIc.3.23) and (6. and encircles it v times in the counterclockwise direction. 13) . Assume H2 is a nonlinearity 0 in the sector [a.) IkTIic. (c) If a < 0 < 0: G1s) has no poles in the closed right half of the complex plane and the Nyquist plot of G(s) is contained entirely within the interior of the circle C'. <- II(ha)TIIG. where v is the number of poles of d(s) in the open right half plane.6) and define the system SK by applying a loop transformation of the Type I with K = q defined as follows: o f (l3 + a) 2 . Thus IIUIIC. )P (IIuTIIG.

We have Hi H2' = H [1+ q Hi ]-s s = G(s) [1 + qd (s)] (A . then the system is stables.4. it is easy to see that H2 = 0' E [-r. if the following two conditions are satisfied. where r f (Q Thus. .318 APPENDIX A.3: Characteristics of H2 and H2. 14) = H2-q. (ii) 7(Hi)7(Hz) < 1. The significance is that the type I loop transformation has a considerable effect on the nonlinearity 0. PROOFS x Figure A. (A. the stability of the original system S can be analyzed using the modified system SK.15) By Theorem 6.9. the gain of H2 is 2 a) (see Figure A. 'Here condition (ii) actually implies condition (i). (-q-s Hi is G2 stable if and only if the Nyquist plot of d(s) encircles the point + 30) v times in counterclockwise direction. Condition (i): H2 is bounded by assumption.) -y(H2) = r (A. 7(H2) = r.1. r].3. Separation into two conditions will help clarify the rest of the proof. if H2 = ¢ E [a. By lemma 6. I3]. (i) Hl and H2 are G2 stable. and moreover. Namely.16) According to the small gain theorem.

17) if and only if G(7w) r [sup l[1+gG(.22) where a _ a+Q 2af R _ (/3-a) 2a. which has the following solutions: x1 = (-/3-1 + 30) and x2 = (-a-1 + 30).r2) + 2qx + 1 < > (1 + qx)2 + g2y2 > 0 0. it follows that the Nyquist plot of d(s) must encircle the entire critical circle C' v times in the counterclockwise direction. Thus C = C'.e.18) rlG(yw)l < l1+gG(. CHAPTER 6 319 Condition (ii): H1 and Hi are linear time-invariant systems.3.22) divides the complex plane into two regions separated by the circle C of center (-a + 30) and radius R. (A.7w)l dwER.23) Inequality (A. a)3) x + >0 (A.22) is satisfied by points outside the critical circle. .r2) + y2(g2 . i. and part (a) is proved. thus.7w)J < 1 or (A.19) Let G(3w) = x + jy.20) by a. Then equation (A. Notice that if y = 0.3: In this case we can divide equation (A.C2 gain of H1 is 'Y(H1) = j1G1j. Since the critical point (-q-1 +. C is equal to the critical circle C' defined in Theorem 6. To satisfy the stability condition (i). then C satisfies (x + a)2 = R2. the .3 to obtain x2 + y2 + (0.21) which can be expressed in the following form (x + a)2 + y2 > R2 (A. the gain condition y(Hi)y(H2) < 1 is satisfied if the Nyquist plot of G(s) lies outside the circle C*.3 (A.6. It is easy to see that inequality (A. = sup IG(7w)Iw Thus ry(Hl)ry(H2) < 1 (A.20) a. v times in counterclockwise direction. the Nyquist plot of d(s) must encircle the point (-q-1 +30).A. j0) is located inside the circle C*.3(x2 + y2) + (a + 3)x + 1 We now analyze conditions (a)-(c) separately: (a) 0 < a <. Therefore. (A.19) can be written in the form r2(x2 + y2) x2(82 .

or Q[((3w)] > -Q-1 Thus.22) by a/3. 0). To satisfy condition (i) the Nyquist plot of C(s) must encircle the point (-q-1 + 30) v times in the counterclockwise direction. -q-1 = -2.26) (A 27) .320 APPENDIX A. However. in this case (-q-1 +30) lies outside the circle C'. Proof of theorem 7.6.1) is locally input-to-state-stable. Under these conditions (7. the converse Lyapunov theorem guarantees existence of a Lyapunov function V satisfying al(IIxII) < V(x(t)) < a2(jjxjj) OV (x) f (x 0) < -c' (11X11) 8x dx E D Vx E D (A.4: We will only sketch the proof of the theorem.0-1. Thus.4: Consider the system (7.4 Chapter 7 Theorem 7.23). and if a = 0.1). because of the change in sign in the inequality. Poles on the imaginary axis are not permitted since in this case the Nyquist diagram tends to infinity as w tends to zero.24) where we notice the change of the inequality sign with respect to case (a). condition (ii) is satisfied if the Nyquist plot of d(s) lies entirely to the right of the vertical line passing through the point (-/3-1 + 30). PROOFS (b) 0 = a <)3: In this case inequality (A. however. Given that x = 0 is asymptotically stable. But -q-1 = -2(a +. u) is continuously differentiable. It follows that d(s) cannot have poles in the open right half plane. and that the function f (x. The reader should consult Reference [74] for a detailed discussion of the ISS property and its relationship to other forms of stability.3: In this case dividing (A. Equation (A. (c) a < 0 <. we obtain x2 + y2 + ( a+ 0) x + a <0 (A. This completes the proof of Theorem 6. As in case (a) this circle coincides with the critical circle C`.22) reduces to x >.3-1.k = f (x.24) can be written in the form (x + a)2 + y2 < R2 (A. violating condition (A.24). the critical point is inside the forbidden region and therefore d(s) must have no poles in the open right half plane.24) is satisfied by points inside C. Assume that the origin is an asymptotically stable equilibrium point for the autonomous system . 
A.0)-'. It follows that condition (ii) is satisfied if and only if the Nyquist plot of d(s) is contained entirely within the interior of the circle C'.25) where a and R2 are given by equation (A. (A.

dx : IIxII >.4. VV(x) f(x. u) .) satisfies ll f (x. and that the function f (x.31) .u E R'n. i.1) is input-to-state stable.5: Consider the system (7.5: The proof follows the same lines as that of Theorem 7. L>0 (A. f (. eX f(x. Inequality (A.u) < -a(IIxII).0)1I+E.4 and is omitted. (A. u) is continuously differentiable and globally Lipschitz in (x.f (x. Proof of Theorem 7. u).A.28) We need to show that. under the assumptions of the theorem..29) the set of points defined by u : u(t) E Rm and lull < b form a compact set.e.29) implies that in this set lIf(x.u)Il < Ilf(x. Vx : II4II ? X(Ilull) and the result follows. a) is a supply pair with corresponding ISS Lyapunov function V. kl >0 (A.u E Rm. Assume that the origin is an exponentially stable equilibrium point for the autonomous system t = f (x.u) < -a(Ilxll) +a(Ilull) Vx E W . I& !1x E Rn. (A.7: Assume that (a. 0)II < L Ilull.30) VV(x) f(x.. CHAPTER 7 and 321 av ax < ki lxll. we have that.a3(I1xll) +E. Theorem 7.1). By the assumptions of the theorem. Now consider a function a E IC with the property that a(llxll) >. dx : lxll > a(IIull) To this end we reason as follows. 0).X(Ilull) With a so defined. E>0. there exist a E K such that a f(x. Proof of theorem 7.u) < -a(IIxID.u) < -a(Ilxll)+a(Ilull) We now show that given o E 1C. Under these conditions (7.

the right hand-side of (A.1q[V(x)1a(IIxII)(A.n) = q[V(x)) [a(IIuII)-a(IIxII)l. we have that V(x) 5 a(IIxII) < 9(IIuII). We now look for an inequality of the form (A.35) We now show that the right hand-side of (A. nondecreasing function.34) is bounded by q[9(IIuIUa(IIuII) . the the right-hand side of (A.37) To this end.2q[a(s))a(s) < a (r) . .a(s) Vr. Now define (A. PROOFS To this end we consider a new ISS Lyapunov function candidate W.33) p(s) = f q(t) dt 0 where pt+ -+ R is a positive definite.34) is bounded by (A. where (A. defined as follows W = p o V = p(V(x)).34) is bounded by -2q[V(x)]a(IIxII) (ii) 2a(IIxII) < a(IIuII): In this case.. /3 E 1C. The rest of the proof can be easily completed if we can show that there exist & E 1C.u) = P (V(x))V(x. notice that for any /3.322 APPENDIX A.32) (A.36) To see this we consider two separate cases: (i) a(IIuII) < za(IIxII): In this case. oo).. the following properties hold: Property 1: if /3 = O(13(r)) as r -+ oo. (A.32) we have that W = aW f(x. 0(s) = d(a-1(2a)). such that q[9(r)]o(r) . and nondecreasing function q: q(r)/3(r) < 4(r) Vr E [0.34) 9=aoa-1o(2o) that is. then there exist a positive definite.s > 0. smooth. Using (A.30) in the new ISS Lyapunov function W.. (A. smooth. From (i) and (ii)..36).

8 we need the following property Property 2: if /3 = 0(13(r)) as r -f 0+. smooth.40) into (A. /3(.40) is satisfied. (A. defining a(s) = 1 q(c (s))a(s) (A.4. there exist q: /3 < q(s)Q(s) Vs E [0. then there exist a positive definite.) and smooth.41) a(r) = q[9(r)]a(s) we have that (A. which implies (A. (A.3(s) Vs E [0. With these definitions. defining (A. / (r) = &(O-'(r))- E K. CHAPTER 7 323 With this property in mind.37).A. and define /3(r) = a(B-1(r)). By property 1. oo).].40) Proof of Theorem 7. Defining /3(r) = 2a[9-1(s)] we obtain /3(r) = d[9-1(s)]. consider a E K. oc). Finally.42) .38) (A. Thus -2q[9(s)]a(s) Finally. By property 2. and nondecreasing function q(-): q(r)/(r) < /3(r) Thus Vr E [0.8: As in the case of Theorem 7. q[9(r)]a(r) < &.39) we have that q[a(s)]a(s) > 2&(s) j q[B(s)]a(s) < &(r) and the theorem is proved substituting (A. and nondecreasing function q: a(s) < q(s). oo).37). [so that 8.35) is 1C. there exist a positive definite. defined in (A.

43) (a) H is passive if and only if the gain of S is at most 1. In this case. assume that (I + We have : Xe --> Xe .50) Thus.51) implies that IISyIIT . Hx >T IIyIIT (A. y >T ((I + H)x. (a) : By assumption.IIyIIT = IISyIIT <.Hx>T IISyIIT = IIyIIT .e. This completes the proof of part (a).I)x >T (Hx-x. (I + H) is invertible.5 Chapter 8 H)-1 Theorem 8. (I + H)x >T IIHxIIT+IIxIIT+2(x.I)(I + H)-1y = (H . Define the function S : Xe --> Xe : S = (H . Hx >T> 0.324 APPENDIX A.4(x. On the other hand. (H ..50) from (A.2(x. we obtain (A. . PROOFS A. and assume that (I + H) is invertible in Xe . Hx >T (A. that is. if the gain of S is less than or equal to 1. so we can define xr (I + H)-1y (I + H)x (A.3: Let H : Xe -> Xe .44) is satisfied. then (A.I)x.52) which implies that (A. S is such that II(Sx)TIIx s IIxTIIx Vx E XefVT E Xe (A.49).44) (b) H is strictly passive and has finite gain if and only if the gain of S is less than 1.45) (A.48) IISyIIT = ((H .47) = = y Hx = = Sy y-x (H .I)(I + H)-1 (A.IIyIIT (A.Hx-x>T = IIHxliT + IIxIIT . Hx >T= IIyIIT .49) = (y. and (A.0 and thus H is passive.IISyIIT . i. (x.46) (A.51) implies that 4(x.I)x (A.51) Now assume that H is passive. subtracting (A.

Hx >T > b'IIyIIT (A.51). if H is strictly passive.47) in the last equation.XIIT y(H)IIXIIT <- IIYIIT . The finite gain of H implies that IIHxIIT << y(H)IIxIIT Substituting (A. which implies that S has gain less than 1.A.5. We can write ISyIIT (1 . Hx >T IIyIIT 5'11yI = 4(x. we see that ISyIIT +5'IIyIIT = <<- IIyIIT IISyIIT (1.Hx >T > - 51IIyII2 T .IIyIIT I y(H)IIxIIT IIyIIT (1+y(H))IIxIIT or (1 + y(H))-l IIyIIT <_ IIyIIT.53) in (A. we have that 0 < (1 .5')112IIyIIT (A.56) and since 0 < 5' < 1. we have IIyIIT .55) where 0 < 5' < min[l.4(x. then (x. assume that S has gain less than 1. 45(1 + -y(H) )-2]. Hx >T > 461I x I T Substituting (A.Hx>T >_ SIxIT 4(x. Thus.51)1/2 < 1. For the converse.57) Substituting IISyIIT by its equivalent expression in (A.51). CHAPTER 8 (b) : 325 Assume first that H is strictly passive and has finite gain.55) in (A. (A. we obtain (A. (A.54). Hx >T > 45(1 + y(H))-211YIIT or 4(x.53) Also.54) 4(x. we have II Y . substituting (A.5')IIyIIT 0 < 5' < 1.

This completes the proof.26')(x. (XT. and SPR. (4 . Proof: We need to show that y E X whenever u E X. strictly proper. > 0. Under these assumptions. e > 0.10 Consider the feedback interconnection of Figure 7. XT > . XT » e(xT. the feedback interconnection is input-output-stable.26')IIHx1ITIIxjIT . (ii) H2 is passive (and possibly nonlinear). and assume that (i) Hl is linear time-invariant. Hx >T -6'IIxIIT < and since 6'11x117.56) by their equivalent expressions in (A.5 is input-output stable if and only if the system SE of Figure A.326 APPENDIX A. We know that the feedback system S of Figure 7.6'II x112 by Schwartz' inequality 6'II HxIIT < (4 . we obtain 4(x. H2xT > +e(XT.4. Hx >T) (4 .4 is input-output stable.50).Hx>T and since 0 < 6' < 1.50) to obtain 6'II HxIIT < (4. To see this. Hx >T> it (4 . we start by applying a loop transformation of the type I.26')(x. as shown in Figure A. Theorem 8. PROOFS and substituting IIyIIT using (A. First consider the subsystem H2' = H2 + EI. . with K = -e.26') IIxIIT so that H is strictly passive. we have 6'(IIHxJIT+IIxIIT)26'IIxIIT (x. To see that H has also finite gain.49) and (A.5. Hx >T 6'(II HxIIT + IIxIIT + 2(x. We have. (H2 + EI)xT >= (XT. substitute IISyJIT and IIyIIT in (A.26')IIHxIITIIxIIT so that II HxIIT (4 626/) IIxIIT from where it follows that H has finite gain.
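The identity (A.51) at the heart of this proof is easy to sanity-check numerically. The sketch below is our own illustration, not part of the original proof: it takes H to be a static linear operator Hx = Px with P symmetric positive definite (a strictly passive map with finite gain, an assumption we chose for simplicity), forms the scattering operator S = (H − I)(I + H)^{-1}, and verifies both the identity ||Sy||^2 = ||y||^2 − 4<x, Hx> and that the gain of S is below 1, as part (b) predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A static strictly passive operator: Hx = Px with P symmetric positive definite.
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)
I = np.eye(n)

# Scattering operator S = (H - I)(I + H)^{-1}, a Cayley-type transform of P.
S = (P - I) @ np.linalg.inv(I + P)

x = rng.standard_normal(n)
y = (I + P) @ x            # y = (I + H) x
Sy = (P - I) @ x           # Sy = Hx - x

# Identity: ||Sy||^2 = ||y||^2 - 4 <x, Hx>
lhs = Sy @ Sy
rhs = y @ y - 4 * x @ (P @ x)

# Induced gain of S (largest singular value); below 1 since P > 0.
gain_S = np.linalg.svd(S, compute_uv=False)[0]
```

For symmetric P > 0 the eigenvalues of S are (λ − 1)/(λ + 1) with λ an eigenvalue of P, which always lie in (−1, 1); this is the matrix analogue of the gain bound in part (b).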

[Figure A.4: The feedback system S. Figure A.5: The transformed feedback system S_ε, with H1' = H1(I − εH1)^{-1} and H2' = H2 + εI.]

It follows that H2' = H2 + εI is strictly passive for any ε > 0. Also, since H1 is SPR, H1' = H1(I − εH1)^{-1} is stable for any ε < ε*. We now argue that H1' is SPR for sufficiently small ε. To see this, notice that, since H1 is SPR, there exist positive definite matrices P and L, a real matrix Q, and μ sufficiently small such that (8.29)-(8.30) are satisfied. But then it follows that, given P, L, Q, and μ,

P(A + εBC) + (A + εBC)^T P = −QQ^T − μL + 2εC^T C

and, provided that 0 < ε < ε' = μ λ_min[L] / (2 λ_max[C^T C]), the matrix μL − 2εC^T C is positive definite. Thus H1' is SPR; in fact, H1' is strong SPR. We conclude that for all ε ∈ (0, min(ε*, ε')), H1' is strong SPR, hence passive, and H2' is strictly passive, and the result follows.

A.6 Chapter 9

Theorem 9.2: The nonlinear system Σ given by (9.13) is QSR-dissipative [i.e., dissipative with supply rate given by (9.6)] if and only if there exist a differentiable function φ : R^n → R and functions L : R^n → R^q and W : R^n → R^{q×m} satisfying

φ(x) ≥ 0, φ(0) = 0  (A.58)
(∂φ/∂x)(x) f(x) = h^T(x) Q h(x) − L^T(x) L(x)  (A.59)
(1/2) g^T(x) (∂φ/∂x)^T(x) = S^T h(x) − W^T L(x)  (A.60)
R = W^T W.  (A.61)

Proof: Sufficiency was proved in Chapter 9. To prove necessity, we show that the available storage φ_a defined in Definition 9.5 is a solution of equations (A.58)-(A.61) for appropriate functions L and W. Notice first that for any state x0 there exists a control input u ∈ U that takes the state from any initial state x(t_{-1}) at time t = t_{-1} to x0 at t = 0. Since the system is dissipative, we have that

∫_{t_{-1}}^{t_1} w(t) dt ≥ φ(x(t_1)) − φ(x(t_{-1})).

In particular,

∫_{t_{-1}}^{0} w(t) dt + ∫_{0}^{T} w(t) dt ≥ φ(x(T)) − φ(x(t_{-1})) ≥ −φ(x(t_{-1}))
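The SPR step of this argument can be illustrated on a concrete scalar example. The sketch below is our own illustration (the transfer function H1(s) = 1/(s + 1) and the value ε = 0.1 are arbitrary choices, not from the text): it checks the frequency-domain SPR condition Re H1(jω) > 0 on a grid, and that positivity survives the loop transformation H1' = H1(1 − εH1)^{-1}, which for this H1 is simply 1/(s + 1 − ε).

```python
import numpy as np

w = np.logspace(-3, 3, 2000)     # frequency grid (rad/s)
s = 1j * w

h1 = 1.0 / (s + 1.0)             # H1(s) = 1/(s+1): stable, strictly proper, SPR
eps = 0.1                        # any 0 < eps < 1 keeps the transformed pole in the LHP
h1_loop = h1 / (1.0 - eps * h1)  # H1' = H1 (1 - eps H1)^{-1} = 1/(s + 1 - eps)

min_re_h1 = h1.real.min()
min_re_loop = h1_loop.real.min()
```

Here Re H1(jω) = 1/(1 + ω²) and Re H1'(jω) = (1 − ε)/((1 − ε)² + ω²), both strictly positive, in line with the claim that SPR is preserved for small enough ε.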

which implies that

∫_{0}^{T} w(t) dt ≥ −φ(x(t_{-1})) − ∫_{t_{-1}}^{0} w(t) dt =: C(x0) > −∞.

The right-hand side of the last inequality depends only on x0, whereas u can be chosen arbitrarily on [0, T]. Hence

φ_a(x0) = sup ( −∫_{0}^{T} w(t) dt ) ≤ −C(x0) < ∞

and thus the available storage φ_a is bounded. By Theorem 9.1, the available storage is itself a storage function; that is,

∫_{0}^{t} w(s) ds ≥ φ_a(x(t)) − φ_a(x0)  ∀t ≥ 0

and, since φ_a is differentiable by the assumptions of the theorem, we can define

d(x, u) = w(u, y) − (∂φ_a/∂x) [f(x) + g(x)u] ≥ 0.

Substituting (9.13), we have

d(x, u) = −(∂φ_a/∂x) f(x) − (∂φ_a/∂x) g(x) u + h^T Q h + 2 h^T S u + u^T R u.

We notice that d, so defined, has the following properties: (i) d(x, u) ≥ 0 for all x, u; and (ii) it is quadratic in u. It then follows that d(x, u) can be factored as

d(x, u) = [L(x) + W u]^T [L(x) + W u] = L^T L + 2 L^T W u + u^T W^T W u

which implies, matching terms of like powers of u, that

R = W^T W
(1/2) g^T (∂φ_a/∂x)^T = S^T h(x) − W^T L(x)
(∂φ_a/∂x) f(x) = h^T(x) Q h(x) − L^T(x) L(x).

This completes the proof.
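The factorization d(x, u) = [L(x) + Wu]^T [L(x) + Wu] used above can be checked symbolically on a small example. The sketch below is our own illustration (the scalar system, storage function, and supply rate are choices we made, not taken from the text): for xdot = −x + u, y = x, with storage φ = x²/2 and passivity supply rate w = uy, the dissipation rate works out to d = x², i.e., L(x) = x and W = 0.

```python
import sympy as sp

x, u = sp.symbols('x u', real=True)

# Scalar example: xdot = -x + u, y = x, with storage phi = x^2/2 and
# passivity supply rate w(u, y) = u*y (Q = 0, S = 1/2, R = 0).
f = -x + u
y = x
phi = x**2 / 2

phidot = sp.diff(phi, x) * f       # dphi/dt along trajectories
d = sp.expand(u * y - phidot)      # dissipation rate d(x, u) = w - dphi/dt

# d should factor as [L(x) + W u]^2 with L(x) = x and W = 0.
L = x
residual = sp.simplify(d - L**2)
```

Since d is independent of u here, R = W^T W = 0 is consistent with W = 0, matching the coefficient-matching step of the proof.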

A.7 Chapter 10

Theorem 10.2: The system (10.28) is input-state linearizable on D0 ⊂ D if and only if the following conditions are satisfied:

(i) The vector fields {g(x), ad_f g(x), ..., ad_f^{n-1} g(x)} are linearly independent in D0. Equivalently, the matrix

C = [g(x), ad_f g(x), ..., ad_f^{n-1} g(x)]_{n×n}

has rank n for all x ∈ D0.

(ii) The distribution Δ = span{g, ad_f g, ..., ad_f^{n-2} g} is involutive in D0.

Proof: Assume first that the system (10.28) is input-state linearizable. Then there exists a coordinate transformation z = T(x) that transforms (10.28) into a system of the form ż = Az + Bv, with A = A_c and B = B_c. From (10.17) and (10.18) we know that T is such that

(∂T_1/∂x) f(x) = T_2(x),  (∂T_2/∂x) f(x) = T_3(x),  ...,  (∂T_{n-1}/∂x) f(x) = T_n(x)  (A.62)

and

(∂T_1/∂x) g(x) = ... = (∂T_{n-1}/∂x) g(x) = 0,  (∂T_n/∂x) g(x) ≠ 0.  (A.63)

Equations (A.62) and (A.63) can be rewritten as follows:

L_f T_i = T_{i+1},  i = 1, ..., n − 1  (A.64)
L_g T_1 = L_g T_2 = ... = L_g T_{n-1} = 0,  L_g T_n ≠ 0.  (A.65)

By the Jacobi identity we have that

∇T_1 [f, g] = ∇(L_g T_1) f − ∇(L_f T_1) g = 0 − L_g T_2 = 0

or

∇T_1 ad_f g = 0.

Similarly,

∇T_1 ad_f^k g = 0,  k = 0, 1, ..., n − 2  (A.66)
∇T_1 ad_f^{n-1} g ≠ 0.  (A.67)

We now claim that (A.66)-(A.67) imply that the vector fields g, ad_f g, ..., ad_f^{n-1} g are linearly independent. To see this, we use a contradiction argument. Assume that (A.66)-(A.67) are satisfied but that g, ad_f g, ..., ad_f^{n-1} g are not all linearly independent. Then, for some i ≤ n − 1, there exist scalar functions λ_0(x), λ_1(x), ..., λ_{i-1}(x) such that

ad_f^i g = Σ_{k=0}^{i-1} λ_k ad_f^k g

and then, applying ad_f repeatedly,

ad_f^{n-1} g = Σ_{k=n-i-1}^{n-2} λ̄_k ad_f^k g

for suitable scalar functions λ̄_k(x). Taking account of (A.66), we conclude that

∇T_1 ad_f^{n-1} g = Σ_{k=n-i-1}^{n-2} λ̄_k ∇T_1 ad_f^k g = 0

which contradicts (A.67). This proves that (i) is satisfied. To prove that the second property is satisfied, notice that (A.66) can be written as follows:

∇T_1 [g(x), ad_f g(x), ..., ad_f^{n-2} g(x)] = 0  (A.68)

that is, there exists T_1 whose partial derivatives satisfy (A.68). Hence Δ is completely integrable and must be involutive by the Frobenius theorem.

Assume now that conditions (i) and (ii) of Theorem 10.2 are satisfied. By the Frobenius theorem, there exists T_1(x) satisfying

L_g T_1(x) = L_{ad_f g} T_1 = ... = L_{ad_f^{n-2} g} T_1 = 0

and, taking into account the Jacobi identity, this implies that

L_g T_1(x) = L_g L_f T_1(x) = ... = L_g L_f^{n-2} T_1(x) = 0.

But then we have that

∇T_1(x) C = ∇T_1(x) [g, ad_f g(x), ..., ad_f^{n-1} g(x)] = [0, ..., 0, L_{ad_f^{n-1} g} T_1(x)].

The columns of [g, ad_f g(x), ..., ad_f^{n-1} g(x)] are linearly independent on D0 by condition (i) of Theorem 10.2, so rank(C) = n, and since ∇T_1(x) ≠ 0, it must be true that

L_{ad_f^{n-1} g} T_1(x) ≠ 0

which implies, by the Jacobi identity, that

L_g L_f^{n-1} T_1(x) ≠ 0.

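Conditions (i) and (ii) of Theorem 10.2 can be checked mechanically with a computer algebra system. The sketch below is our own illustration (the pendulum-type vector fields are an example we chose, not one from the text): it computes ad_f g symbolically and verifies that {g, ad_f g} has rank n = 2 everywhere, which is condition (i); condition (ii) is trivial here, since Δ = span{g} is spanned by a single vector field.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

# Pendulum-type example: x1dot = x2, x2dot = -sin(x1) + u.
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])

def lie_bracket(a, b, x):
    """[a, b] = (db/dx) a - (da/dx) b."""
    return b.jacobian(x) * a - a.jacobian(x) * b

adfg = lie_bracket(f, g, x)          # ad_f g

# Condition (i): {g, ad_f g} must have rank n = 2 (here, for all x).
C = sp.Matrix.hstack(g, adfg)
rank_C = C.rank()
```

For this example ad_f g = (−1, 0)^T is constant, so the controllability-like matrix C has full rank globally and the system is input-state linearizable on all of R².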
Proof of Theorem 10.3: The proof of Theorem 10.3 requires some preliminary results. In the following lemmas we consider a system of the form

ẋ = f(x) + g(x)u,  f, g : D ⊂ R^n → R^n
y = h(x),  h : D ⊂ R^n → R  (A.69)

and assume that it has relative degree r < n.

Lemma A.1: If the system (A.69) has relative degree r in Ω, then

L_{ad_f^j g} L_f^k h(x) = 0,  0 ≤ j + k < r − 1
L_{ad_f^j g} L_f^k h(x) = (−1)^j L_g L_f^{r-1} h(x) ≠ 0,  j + k = r − 1  (A.70)

∀x ∈ Ω, ∀j ≤ r − 1, k ≥ 0.

Proof: We use induction on j.

(i) j = 0: For j = 0, condition (A.70) becomes

L_g L_f^k h(x) = 0,  0 ≤ k < r − 1
L_g L_f^{r-1} h(x) ≠ 0,  k = r − 1

which is satisfied by the definition of relative degree.

(ii) j = i: Continuing with the induction, we assume that

L_{ad_f^i g} L_f^k h(x) = 0,  0 ≤ i + k < r − 1
L_{ad_f^i g} L_f^k h(x) = (−1)^i L_g L_f^{r-1} h(x),  i + k = r − 1  (A.71)

is satisfied, and show that this implies that it must also be satisfied for j = i + 1. From the Jacobi identity we have that

L_{ad_f β} λ = L_f L_β λ − L_β L_f λ

for any smooth function λ(x) and any smooth vector fields f(x) and β(x). Defining

λ = L_f^k h(x),  β = ad_f^i g

we have that

L_{ad_f^{i+1} g} L_f^k h(x) = L_f L_{ad_f^i g} L_f^k h(x) − L_{ad_f^i g} L_f^{k+1} h(x).  (A.72)

Now consider any integer k that satisfies i + 1 + k ≤ r − 1. The first summand on the right-hand side of (A.72) vanishes, by (A.71). The second term on the right-hand side is

−L_{ad_f^i g} L_f^{k+1} h(x) = 0,  0 ≤ i + k + 1 < r − 1
−L_{ad_f^i g} L_f^{k+1} h(x) = (−1)^{i+1} L_g L_f^{r-1} h(x),  i + k + 1 = r − 1

and thus the lemma is proved.
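Both the base case and the Jacobi-identity step of Lemma A.1 can be checked symbolically on a small example. The sketch below is our own illustration (the relative-degree-2 pendulum example is an assumption of ours, not from the text): it verifies L_g h = 0, L_g L_f h ≠ 0, and the j = 1, k = 0 entry of (A.70), namely L_{ad_f g} h = (−1)^1 L_g L_f h.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1                                  # output giving relative degree r = 2

def lie_d(lam, v, x):
    """Lie derivative of the scalar lam along the vector field v."""
    return (sp.Matrix([lam]).jacobian(x) * v)[0]

adfg = g.jacobian(x) * f - f.jacobian(x) * g       # ad_f g = [f, g]

Lgh = sp.simplify(lie_d(h, g, x))                  # j = 0, k = 0: must be zero
LgLfh = sp.simplify(lie_d(lie_d(h, f, x), g, x))   # j = 0, k = 1 = r - 1: nonzero
Ladfgh = sp.simplify(lie_d(h, adfg, x))            # j = 1, k = 0 = r - 1
```

The computed values are L_g h = 0, L_g L_f h = 1, and L_{ad_f g} h = −1, exactly the pattern (A.70) predicts for r = 2.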

Lemma A.2: If the relative degree of the system (A.69) is r in Ω, then ∇h(x), ∇L_f h(x), ..., ∇L_f^{r-1} h(x) are linearly independent in Ω.

Proof: Assume the contrary; specifically, assume that ∇h(x), ∇L_f h(x), ..., ∇L_f^{r-1} h(x) are not linearly independent in Ω. Then there exist smooth functions α_1(x), ..., α_r(x) such that

α_1 ∇h(x) + α_2 ∇L_f h(x) + ... + α_r ∇L_f^{r-1} h(x) = 0.  (A.73)

Multiplying (A.73) by ad_f^0 g = g, we obtain

α_1 L_g h(x) + α_2 L_g L_f h(x) + ... + α_r L_g L_f^{r-1} h(x) = 0.  (A.74)

By assumption, the system has relative degree r. Thus

L_g L_f^i h(x) = 0  for 0 ≤ i < r − 1
L_g L_f^{r-1} h(x) ≠ 0.

Thus (A.74) becomes

α_r L_g L_f^{r-1} h(x) = 0

and since L_g L_f^{r-1} h ≠ 0 we conclude that α_r must be identically zero in Ω.

Next, we multiply (A.73) by ad_f g and obtain

α_1 L_{ad_f g} h(x) + α_2 L_{ad_f g} L_f h(x) + ... + α_{r-1} L_{ad_f g} L_f^{r-2} h(x) = −α_{r-1} L_g L_f^{r-1} h(x) = 0

where Lemma A.1 was used. Thus α_{r-1} = 0. Continuing with this process (multiplying each time by ad_f^2 g, ..., ad_f^{r-1} g), we conclude that α_1, ..., α_r must be identically zero on Ω, and the lemma is proved.
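Lemma A.2 can likewise be checked on the same relative-degree-2 example (again our own illustration, not from the text): stacking ∇h and ∇L_f h as rows must give a matrix of full rank r = 2.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

f = sp.Matrix([x2, -sp.sin(x1)])
h = x1                                  # with g = (0, 1)^T this output has r = 2

Lfh = (sp.Matrix([h]).jacobian(x) * f)[0]   # L_f h

# Lemma A.2: the gradients of h, L_f h, ..., L_f^{r-1} h are independent.
G = sp.Matrix.vstack(sp.Matrix([h]).jacobian(x),
                     sp.Matrix([Lfh]).jacobian(x))
rank_G = G.rank()
```

Here ∇h = (1, 0) and ∇L_f h = (0, 1), so the stacked gradient matrix is the identity and the independence claim holds globally.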

Theorem 10.3: Consider the system (10.41), and assume that it has relative degree r < n ∀x ∈ D0 ⊂ D. Then, for every x0 ∈ D0, there exist a neighborhood Ω of x0 and smooth functions μ_1, ..., μ_{n-r} such that

(i) L_g μ_i(x) = 0, for 1 ≤ i ≤ n − r, ∀x ∈ Ω, and

(ii) the map

T(x) = [μ_1(x), ..., μ_{n-r}(x), φ_1(x), ..., φ_r(x)]^T

with

φ_1 = h(x),  φ_2 = L_f h(x),  ...,  φ_r = L_f^{r-1} h(x)

is a diffeomorphism on Ω.

Proof of Theorem 10.3: To prove the theorem, we proceed as follows. The single vector field g is clearly involutive, and thus the Frobenius theorem guarantees that for each x0 ∈ D there exist a neighborhood Ω of x0 and n − 1 linearly independent smooth functions μ_1(x), ..., μ_{n-1}(x) such that

L_g μ_i(x) = 0,  for 1 ≤ i ≤ n − 1, ∀x ∈ Ω.

Also, by Lemma A.2, ∇h(x), ..., ∇L_f^{r-1} h(x) are linearly independent. Thus, selecting n − r of the functions μ_i whose gradients, together with ∇h(x0), ..., ∇L_f^{r-1} h(x0), are linearly independent, and defining T(x) with h(x), ..., L_f^{r-1} h(x) in the last r rows of T, we have that

rank (∂T/∂x)(x0) = n

which implies that det (∂T/∂x)(x0) ≠ 0. Thus T is a diffeomorphism in a neighborhood of x0, and the theorem is proved.
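As a final illustration (ours, not the book's; the relative-degree-1 system below is an arbitrary choice), the coordinates of Theorem 10.3 can be constructed explicitly for n = 2, r = 1: with g = (0, 1)^T, the function μ_1 = x1 satisfies L_g μ_1 = 0, and T = (μ_1, φ_1)^T with φ_1 = h has a nonsingular Jacobian.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

# n = 2, relative degree r = 1: x1dot = x2 + x1^2, x2dot = u, y = x2.
f = sp.Matrix([x2 + x1**2, 0])
g = sp.Matrix([0, 1])
h = x2

mu1 = x1                     # candidate mu_1; L_g mu1 = d(mu1)/dx2 = 0
phi1 = h                     # phi_1 = h(x)

Lg_mu1 = (sp.Matrix([mu1]).jacobian(x) * g)[0]

T = sp.Matrix([mu1, phi1])   # T(x) = (mu_1, phi_1)^T
detJ = sp.simplify(T.jacobian(x).det())
```

Here T is simply the identity permutation of coordinates, so det(∂T/∂x) = 1 everywhere and T is a global diffeomorphism; in general the theorem only guarantees this locally.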

